A sponsored post popped up on my Instagram last week that spoke to both my obsession with statement jewelry and my periodic check-ins on developments in facial recognition technology: “Artist Designs Metal Jewelry to Block Facial Recognition Software from Tracking You”. Statement jewelry? Check. An indication of how stressed out people are by facial recognition technology? I think so. While the design is an experimental project, it’s not a stretch to imagine it actually being sold and purchased.
Much like humans, bots come in all shapes and sizes. In social media networks, these bots can like what you post and even increase your followers. Companies use bots for all types of things—from booking a ride to giving makeup tutorials. Some bots can even solve your legal problems. Besides saving time and money, bots have the potential to reduce errors and increase a business’s customer base. But what happens when bots spy on users and share personal information? Or when they make racial slurs and offensive comments?
(Note, this post has spoilers for Avengers: Endgame.)
Perhaps one of the most mesmerizing scenes in Avengers: Endgame is where all the MCU superheroes (including those on Titan) come through Dr. Strange’s portals to enter the battle against Thanos. In Avengers: Infinity War, Dr. Strange didn’t use these portals to send Iron Man and the others on Titan back to Earth before everyone got dusted, but fans certainly might have enjoyed that alternative storyline. Understandably, one enormous limiting factor on alternative storylines is cost—especially when $600-800 million was spent to create the two movies as they are. Future advances in artificial intelligence technologies may change that. Indeed, a number of large tech companies are already interested in creating interactive content to personalize storytelling (e.g., Black Mirror’s “Bandersnatch” episode), and recent developments in machine learning algorithms (including those fueling the creation of photorealistic images) may bring us closer to that reality sooner rather than later. If so, under what circumstances will companies own the copyright—and get to collect on it?
When it comes to photos destined for the web, I’d rather be behind the camera than in front of it. However, on a recent trip to Tokyo I was reminded that photos of me, and specifically my face, are often being captured and processed by systems that are increasingly being embedded in our modern life.
In another case of the law trying to keep pace with evolving technology, legislators are introducing bills to punish those who attempt to create false images that purport to be real. Targeting the rise of automated computer-generated imagery that has become increasingly accessible to the public, on February 14, 2019, California Assemblyman Marc Berman introduced a bill to create a criminal cause of action for making or distributing a “deepfake.” Deepfakes are computer-generated media, often audiovisual recordings, that appear real but are not—frequently produced using artificial intelligence-enhanced algorithms.
No one knows your face as well as your iPhone does. All the unique variations that make your face yours and yours alone are data points your iPhone uses to unlock your phone, substituting a face for a thumbprint. This same data that the iPhone collects can be used by the underlying tech—facial recognition technology—in a vast array of applications, from border control to photo tagging to law enforcement. But is this data (the measurement of the space between the eyes, the texture of the skin, etc.) open data? Or do individuals have a right to protection of an image of their face?
In his 2013 book, Our Final Invention, documentary filmmaker James Barrat explores the perils and promises of artificial intelligence (AI). The book’s ominous subtitle, Artificial Intelligence and the End of the Human Era, echoes similar dire sentiments regarding the ultimate consequences of mankind’s quest for fully functioning AI, including from celebrated theorists such as Stephen Hawking (“The development of full artificial intelligence could spell the end of the human race”) and Claude Shannon (“I visualize a time when we will be to robots what dogs are to humans”).
In the popular Netflix series Ozark, money launderer Marty Byrde expends a lot of time and energy mitigating the risks that relate to his work, including his drug cartel client, a pair of farmers, the local pastor, and his own employee and her relatives—but financial regulators never appear to be a blip on his radar. Would the series turn out differently if Marty’s bank had used artificial intelligence to examine his deposits?