It might be a little meta to have a blog post about a blog post, but there’s no way around it when the FTC publishes a post to its blog warning companies that use AI to “[h]old yourself accountable—or be ready for the FTC to do it for you.” When last we wrote about facial recognition AI, we discussed how the courts are being used to push for AI accountability and how Twitter has taken the initiative to understand the impacts of its machine learning algorithms through its Responsible ML program. Now we have the FTC weighing in with recommendations on how companies can use AI in a truthful, fair and equitable manner—along with a not-so-subtle reminder that the FTC has tools at its disposal to combat unfair or biased AI and is willing to step in and do so should companies fail to take responsibility.
Google v. Oracle, Fair Use and the Decreasing Value of Code Over Time
Earlier this month, in what many consider the copyright case of the decade, the Supreme Court released its much-anticipated decision in Google v. Oracle. In it, the Court ruled that Google’s copying of 11,500 lines of declaring code from Java SE for use in Google’s Android platform was fair use. Having recently reviewed the history of the fair use defense in copyright infringement cases, we now turn to the case itself.
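The decision turns on the difference between "declaring" code and "implementing" code. The following is a minimal, hypothetical illustration of that distinction, not the actual Java SE code at issue; the class and method names here are invented for demonstration.

```java
// Hypothetical sketch of the declaring/implementing distinction
// discussed in Google v. Oracle. Not actual Java SE source.
public class Declarations {
    // Declaring code: the signature that tells programmers what the
    // method is named, what it accepts, and what it returns. This is
    // the kind of code Google copied so developers' existing calls
    // would still work on Android.
    public static int max(int a, int b) {
        // Implementing code: the logic that actually does the work.
        // Google wrote its own implementations rather than copying.
        return (a >= b) ? a : b;
    }

    public static void main(String[] args) {
        System.out.println(Declarations.max(3, 7));
    }
}
```

The practical point is that a programmer who has learned the declaration `max(int a, int b)` can call it the same way on any platform that reuses the declaration, regardless of how the body is written.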
A Short History of the Fair Use Defense in the Software Industry
Last month, the Supreme Court released its much-anticipated decision in Google v. Oracle. The Court ruled that Google’s copying of 11,500 lines of declaring code from Java SE, for use in Google’s Android platform, was fair use.
While we examine the Supreme Court’s decision in another post, let’s first take a look at the history of the fair use defense in the software industry.
Playboy, NFTs and Monetizing a Traditional Media Portfolio
Just as video killed the radio star, so did the digital transformation kill (or at least convert) traditional media. While “going digital” became the bane of many traditional media companies that struggled to make the leap to an online world, NFTs may be the digital savior that some of these companies need. Imagine that you are a company with a known brand and sizeable catalog of media with potential historical and cultural significance. Yet, you’ve found it difficult to monetize these assets in a world that abhors paywalls and often takes an overly broad view of what constitutes “fair use.” If only there were a way to highlight the unique significance of these assets and tap into the latent collector in all of us. Anyone who follows us already knows that NFTs can serve this very function.
With Facial Recognition, Responsible Users Are the Key to Effective AI
As part of our ongoing coverage of the use and potential abuse of facial recognition AI, we bring you news out of Michigan, where the University of Michigan Law School, the American Civil Liberties Union (ACLU) and the ACLU of Michigan have filed a lawsuit against the Detroit Police Department (DPD), the DPD Police Chief and a DPD investigator on behalf of Robert Williams, a Michigan resident who was wrongfully arrested based on “shoddy” police work that relied on facial recognition technology to identify a shoplifter.
False Advertising, Trademark Infringement and TOS: Three Common Hashtagging Mistakes for Companies to Avoid
When the first social media hashtag was used in 2007, users had no idea how ubiquitous hashtags would become. Today, hashtags are an essential part of our lives (and a subject we’ve been writing about for years). From marketing a business to garnering support for a cause, hashtags now shape much of our public conversation. That may even be an understatement. From May 26, 2020, until June 7, 2020, alone, the #BlackLivesMatter hashtag was used over 47 million times on Twitter. 47 million. Talk about impact.
Everything in Moderation: Artificial Intelligence and Social Media Content Review
Interactive online platforms have become an integral part of our daily lives. While user-generated content, free from traditional editorial constraints, has spurred vibrant online communications, improved business processes and expanded access to information, it has also raised complex questions regarding how to moderate harmful online content. As the volume of user-generated content continues to grow, it has become increasingly difficult for internet and social media companies to keep pace with the moderation needs of the information posted on their platforms. Content moderation measures supported by artificial intelligence (AI) have emerged as important tools to address this challenge.
Smart Technology in Commercial Real Estate
“Hey Siri…” “Alexa…” “Okay Google…” These are just some of the buzzwords and phrases that have entered day-to-day vocabulary as a result of the explosion of smart technology. Internet of Things (IoT) devices are in our cars, in our workplaces and on our bodies. But nowhere is smart technology more prevalent than in our homes. The array of available services, coupled with the growing number of companies and service providers eager to innovate, should only grow this technology’s market share in the coming years.
Defying Data Gravity: Vertical Cloud Computing, Hybrid Tools and Usage Rights
The last decade saw explosive growth in enterprise migration to the cloud, a trend driven by the promise of lower overhead costs and greater scalability. Drawn by these advantages, many enterprises have made the leap, moving both non-mission-critical workloads and mission-critical functionality into the cloud.
This is where “data gravity,” a phrase coined by Dave McCrory, comes into play. Data gravity is the “effect that attracts large sets of data or highly active applications/services to other large sets of data or highly active applications/services, the same way gravity attracts planets or stars.” In the simplest terms, data gravity is the idea that as a data set grows, it begins to function like an anchor, becoming increasingly difficult to move.
The Misinformation of Capitol Hill: Section 230 and the Weaponization of Social Media
Social media has experienced unprecedented growth in popularity and usage since its inception. This is owed in large part to Section 230 of the Communications Decency Act. Unlike their print counterparts, internet publishers enjoy an increased level of freedom and immunity under Section 230 for the content they publish. It is Section 230 that gives social media companies, large and small, the ability to manage or host third-party content without fear of lawsuit. Because these companies act as intermediaries, Section 230 ensures they will not be liable for filtering decisions, allowing them to establish their own standards and delete or modify content they consider obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable—regardless of constitutional protections. Section 230 also protects them if they decide not to filter such content. However, recent events suggest that Section 230’s shield, until now all-encompassing, could soon be less so.
Internet & Social Media Law Blog