An exponential increase in telework prompted by the COVID-19 pandemic has led to a parallel increase in cyberattacks, requiring companies to actively monitor cyber risks. On Pillsbury’s Industry Insights podcast series, colleague Brian Finch, a partner in the Public Policy group and co-leader of the COVID-19 taskforce, discussed two types of threats that have skyrocketed in the current crisis. The following describes three key takeaways on increased cybersecurity risk and the measures businesses should take to mitigate threats in the event of a cyberattack.
Companies use a variety of causes of action to protect their websites from competitors or others wanting to “scrape” data from their sites using automated tools. Over the years, legal doctrines such as copyright infringement, misappropriation, unjust enrichment, breach of contract, and trespass to chattels have all been asserted, though many of them have limited applicability or are otherwise imperfect options for site owners. One of the most commonly used tools to protect against scraping has been a federal statute: the Computer Fraud and Abuse Act (CFAA). The CFAA is a cybersecurity law passed in 1986 as an amendment to the Comprehensive Crime Control Act of 1984. Originally drafted to address more traditional computer “hacking,” the CFAA prohibits intentional access to a computer without authorization, or in excess of authorization. Due to both the criminal and civil liability that it imposes, the CFAA has been an effective tool to discourage scraping, with website operators arguing that by simply stating on the site that automated scraping is prohibited, any such activity is unauthorized and gives rise to CFAA liability. An ongoing case between data analytics company hiQ Labs Inc. and LinkedIn questions the extent to which companies may invoke the CFAA as it pertains to scraping of this type of data.
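For readers unfamiliar with the mechanics, one common way site operators publish a machine-readable statement that automated scraping is prohibited is a robots.txt file, and well-behaved scrapers check it before fetching pages. The sketch below is a minimal, hypothetical illustration using Python's standard library; the site path, URLs, and user-agent name are invented for the example, and honoring robots.txt is a norm rather than a legal safe harbor.

```python
from urllib import robotparser

# Hypothetical robots.txt content: a site operator signaling that
# automated agents are not authorized to access /profiles/.
ROBOTS_TXT = """\
User-agent: *
Disallow: /profiles/
"""

# Parse the policy (parse() accepts the file's lines, so no network
# request is needed for this illustration).
parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

def may_fetch(url: str, user_agent: str = "example-scraper") -> bool:
    """Return True if the published policy permits fetching this URL."""
    return parser.can_fetch(user_agent, url)

print(may_fetch("https://example.com/profiles/12345"))  # disallowed path
print(may_fetch("https://example.com/about"))           # allowed path
```

Whether ignoring such a notice makes access “unauthorized” under the CFAA is precisely the question the hiQ v. LinkedIn litigation tests.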
In November 2018, the U.S. Department of Justice rolled out the China Initiative. This new policy includes plans to “identify priority Chinese trade theft cases, ensure we have enough resources dedicated to them, and … bring them to an appropriate conclusion quickly and effectively.” The new Attorney General, who has a master’s degree in Chinese Studies, supports the Initiative and intends to continue to advance it.
On the heels of a January 2019 announcement that it was charging nine persons with participation in a scheme that allowed them to hack into its confidential database of public filings, commonly known as EDGAR, the SEC on February 28 named Gabriel Benincasa as its first-ever Chief Risk Officer (CRO). Although the two events have no direct causal link, they serve as useful reminders that the SEC is determined to re-emphasize its mission to ensure the smooth operation of the U.S. securities markets and to root out and punish instances of fraud and market manipulation, whether carried out by traditional methods or through digital tools and compromised databases. The position of CRO is a new one at the SEC. Created by SEC Chairman Jay Clayton to strengthen the agency’s risk management and cybersecurity efforts, Benincasa’s office will help to coordinate efforts to identify, monitor and address risks facing the agency.
Social media companies like Facebook and Twitter have written “white papers” and devoted considerable resources to projects intended to create services that encourage trust and a sense of familiarity on the part of users. Messages, photos and personal information are easily shared with groups of friends and co-workers, or in response to solicitations tailored to a user’s trusted brands, thus creating an environment of perceived safety and intimacy among users. However, this communal atmosphere can be, and often is, exploited by “black hat” hackers and malware that lurk behind a façade of trust. In its April 27, 2017 White Paper entitled “Information Operations and Facebook,” and its September 6, 2017 “An Update on Information Operations on Facebook,” the company noted that there are “three major features of online information operations that we assess have been attempted on Facebook.” Those features include: (1) targeted data collection, such as hacking or spearphishing; (2) content creation, including the creation of false personas and memes; and (3) false amplification, by creating false accounts or using bots to spread memes and false content which, in turn, sow mistrust in political institutions and spread confusion. Ironically, these techniques used to spread “fake news” and malware designed to amplify divisive social and political messages are enhanced and made effective by the very environment of trust cultivated by social media sites.
As we discussed recently, the Equifax data breach has inevitably brought a great deal of scrutiny and legal action against the credit reporting agency. Amidst the numerous brewing class actions and other reactions from government agencies and state AGs, it’s worth pointing out another front on which the company—and more importantly, individuals within the company—may face legal consequences.
Since September 7, 2017, Equifax, one of the three major credit reporting agencies in the United States, has been dealing with the fallout from one of the largest (known) data breaches of personal information, which put 143 million Americans—roughly 44% of the U.S. population—at risk of fraud and identity theft.
After counter-protests ended in tragedy, a small group of social media users took to Twitter to expose the identities of the white supremacists and neo-Nazis rallying in Charlottesville, Va. Since last Sunday, the @YesYoureRacist account has been calling on Twitter users to identify participants in the rally. Twitter users identified several white supremacists, including Cole White. Users revealed White’s name and place of residence, and his employer reportedly fired him from his job at a restaurant in Berkeley, Calif. Several other employers fired employees identified online as attending the rally. As this is likely just the latest incident in which such behavior is exhibited and then called out on social media, it’s a good time to look at doxing and the legal environment in which it exists.