Companies use a variety of causes of action to protect their websites from competitors or others seeking to “scrape” data from their sites using automated tools. Over the years, legal doctrines such as copyright infringement, misappropriation, unjust enrichment, breach of contract, and trespass to chattels have all been asserted, though many have limited applicability or are otherwise imperfect options for site owners. One of the most commonly used tools to protect against scraping has been a federal statute: the Computer Fraud and Abuse Act (CFAA). The CFAA is a cybersecurity law passed in 1986 as an amendment to the Comprehensive Crime Control Act of 1984. Originally drafted to address more traditional computer “hacking,” the CFAA prohibits intentionally accessing a computer without authorization or in excess of authorization. Because it imposes both criminal and civil liability, the CFAA has been an effective tool to discourage scraping: website operators argue that by simply stating on a site that automated scraping is prohibited, any such activity is unauthorized and gives rise to CFAA liability. An ongoing case between data analytics company hiQ Labs Inc. and LinkedIn questions the extent to which companies may invoke the CFAA against scraping of this type of data.
Much like humans, bots come in all shapes and sizes. On social media networks, these bots can like what you post and even increase your follower count. Companies use bots for all types of things, from booking a ride to giving makeup tutorials. Some bots can even solve your legal problems. Besides saving time and money, bots have the potential to reduce errors and expand a business’s customer base. But what happens when bots spy on users and share personal information? Or when they make racial slurs and offensive comments?
We’ve previously written about “tweet-less, picture-less,” computer-operated accounts, or bots, that make one appear more popular (that is, more influential on social media) than one actually is. Recently, legislators and law enforcement agencies have moved to crack down on bots, their evil cousins known as sock puppets, and other deceptive social engagement practices. Specifically, California passed a law, effective July 2019, banning the undisclosed use of bots to communicate or interact with a person with the intent to knowingly deceive that person in order to influence a commercial transaction or a vote in an election. Meanwhile, after the media exposed the deceptive activities of Devumi LLC, a company that grossed over $15 million in revenue by creating, packaging and selling fake social media likes, followers and posts, New York and Florida announced settlements with the company. The Devumi settlements are the first of their kind to indicate that such activity constitutes illegal deception of the public and, to the extent Devumi used stolen identities for its online activities, illegal impersonation.
Tweet nicely to the Twitter bot “LnH: The Band,” a newcomer in artificial intelligence music generation, and the bot will automatically compose melodies for you. The AI-based band is “currently working on their first album,” according to LnH Music, but who will own the rights and royalties to the album? Or consider Mubert, touted by its creators as the world’s first online music composer, which “continuously produces music in real-time … based on the laws of musical theory, mathematics and creative experience.” In other words, if a computer program generates a creative work, be it a song, book or other creation, is there a copyright to be owned? If so, who owns the copyright and who gets to collect on it?