As the world grapples with the impacts of the COVID-19 pandemic, we have become increasingly reliant on artificial intelligence (AI) technology. Experts have used AI to test potential treatments, diagnose individuals, and analyze other public health impacts. Even before the pandemic, businesses were increasingly turning to AI to improve efficiency and overall profit. Between 2015 and 2019, the adoption of AI technology by businesses grew more than 270 percent.
The 2020 election will be unprecedented in many respects. More people will be voting by mail, and there will likely be more democratic participation online than ever before. Internet platforms and communication services will become more influential forums as people are restricted from in-person conventions and debates. But even before the pandemic pushed these operations online, some lawmakers were already seeking to monitor misinformation in political discourse on the internet, specifically in the context of deepfakes. Deepfakes are manipulated photo, video or audio clips generated by computers, often with the assistance of artificial intelligence algorithms. They can make someone appear to say or do something that never actually happened or create realistic images of people who do not exist.
Copyright bots, otherwise known as content recognition software, are automated programs that analyze audio and video clips uploaded to a platform, then compare those clips against a database of content provided by copyright owners to identify matches. The copyright owners can then review the identified matches to assess whether they are actually copies, whether they are authorized, and whether any action is warranted. Some programs utilizing copyright bots offer their own enforcement procedures to customers, while other programs are partnered with law firms that will act on behalf of the copyright owners to enforce their rights, including sending demand letters and filing lawsuits.
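The matching workflow described above can be sketched in a few lines of Python. This is a deliberately simplified illustration, not how any real content recognition system works: production systems use robust perceptual fingerprints that tolerate re-encoding, cropping, and pitch shifts, whereas this sketch uses exact chunk hashes purely to show the fingerprint-and-compare pattern. All names here (`fingerprint`, `find_matches`, the sample database) are hypothetical.

```python
import hashlib

# Simplified sketch of a copyright bot's matching step.
# Real systems use perceptual fingerprints; exact hashes are
# used here only to illustrate the compare-against-database flow.

CHUNK_SIZE = 1024  # bytes per fingerprinted segment


def fingerprint(data: bytes) -> set:
    """Split a clip's raw bytes into chunks and hash each one."""
    return {
        hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
        for i in range(0, len(data), CHUNK_SIZE)
    }


def find_matches(upload: bytes, database: dict, threshold: float = 0.5) -> list:
    """Return titles of registered works whose fingerprints overlap
    the upload above the given threshold, for owner review."""
    upload_fp = fingerprint(upload)
    matches = []
    for title, work_fp in database.items():
        overlap = len(upload_fp & work_fp) / max(len(work_fp), 1)
        if overlap >= threshold:
            matches.append(title)
    return matches


# Register a reference work, then screen two uploads against it.
song = bytes(range(256)) * 16  # stand-in for a licensed recording
db = {"Registered Song": fingerprint(song)}
print(find_matches(song, db))              # → ['Registered Song']
print(find_matches(b"original work", db))  # → []
```

A flagged title would then go to the copyright owner (or a partnered firm) for the human review and enforcement steps described above; the bot only surfaces candidate matches.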
Much like humans, bots come in all shapes and sizes. In social media networks, these bots can like what you post and even increase your followers. Companies use bots for all types of things—from booking a ride to giving makeup tutorials. Some bots can even solve your legal problems. Besides saving time and money, bots have the potential to reduce errors and increase a business’s customer base. But what happens when bots spy on users and share personal information? Or when they make racial slurs and offensive comments?
When it comes to photos destined for the web, I’d rather be behind the camera than in front of it. However, on a recent trip to Tokyo I was reminded that photos of me, and specifically my face, are often being captured and processed by systems that are increasingly embedded in modern life.
In another case of the law trying to keep pace with evolving technology, legislators are introducing bills to punish those who attempt to create false images that purport to be real. Targeting the rise of automated computer-generated imagery that has become increasingly accessible to the public, on February 14, 2019, California Assemblyman Marc Berman introduced a bill to create a criminal cause of action for making or distributing a “deepfake.” Deepfakes are multimedia recordings, often audiovisual, that seem real but are generated by computers, often using artificial intelligence-enhanced algorithms.
Online platforms battle the trolls (and ad blockers); Pokémon GO creator Niantic promises to watch things a bit more closely; blockchain is not as “unhackable” as many think; and more …
In the popular Netflix series Ozark, money launderer Marty Byrde expends a lot of time and energy mitigating the risks that relate to his work, including his drug cartel client, a pair of farmers, the local pastor, and his own employee and her relatives—but financial regulators never appear to be a blip on his radar. Would the series turn out differently if Marty’s bank had used artificial intelligence to examine his deposits?