In another case of the law trying to keep pace with evolving technology, legislators are introducing bills to punish those who attempt to create false images that purport to be real. Targeting the rise of automated computer-generated imagery that has become increasingly accessible to the public, on February 14, 2019, California Assemblyman Marc Berman introduced a bill to create a criminal cause of action for making or distributing a “deepfake.” Deepfakes are multimedia recordings, most often audiovisual, that seem real but are in fact computer-generated, typically with the help of artificial intelligence-enhanced algorithms.
No one knows your face as well as your iPhone does. All the unique variances of your face that make it yours and yours alone are data points that your iPhone uses to unlock your phone, with your face standing in for a thumbprint. This same data that the iPhone collects can be put to work by the underlying facial recognition technology in a vast array of applications, from border control to photo tagging to law enforcement. But is this data (the measurement of the space between the eyes, the texture of the skin, etc.) open data? Or do individuals have a right to protection of an image of their face?
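The privacy question turns on the fact that, once measured, a face is just a set of numbers. Below is a minimal, purely illustrative sketch of that idea, using made-up measurements and a simple distance threshold; it is not Apple's actual Face ID pipeline, which relies on proprietary neural models and dedicated secure hardware.

```python
import numpy as np

# Hypothetical illustration only: a "face" here is just a vector of numeric
# measurements (e.g., distances between landmarks, texture features).
def is_same_person(enrolled: np.ndarray, candidate: np.ndarray,
                   threshold: float = 0.6) -> bool:
    """Compare two face feature vectors by Euclidean distance."""
    distance = np.linalg.norm(enrolled - candidate)
    return distance < threshold

# Made-up numbers: an enrolled template and a new capture of the same face.
enrolled_face = np.array([0.42, 1.37, 0.88, 0.15])
new_capture = np.array([0.44, 1.35, 0.90, 0.14])
print(is_same_person(enrolled_face, new_capture))  # True: within threshold
```

Whether those few floating-point values deserve the same protection as the photograph they were derived from is exactly the question the post raises.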
In his 2013 book, Our Final Invention, documentary filmmaker James Barrat explores the perils and promises of artificial intelligence (AI). The book’s ominous subtitle, Artificial Intelligence and the End of the Human Era, echoes dire sentiments about the ultimate consequences of mankind’s quest for fully functioning AI voiced by celebrated theorists such as Stephen Hawking (“The development of full artificial intelligence could spell the end of the human race”) and Claude Shannon (“I visualize a time when we will be to robots what dogs are to humans.”).
In the popular Netflix series Ozark, money launderer Marty Byrde expends a lot of time and energy mitigating the risks that come with his work, among them his drug cartel client, a pair of farmers, the local pastor, and his own employee and her relatives, yet financial regulators never register as even a blip on his radar. Would the series turn out differently if Marty’s bank had used artificial intelligence to examine his deposits?
“AI will most likely lead to the end of the world, but in the meantime, there’ll be great companies.” –Sam Altman
Artificial intelligence (AI) is a controversial topic. It is easy to imagine a near future in which AI solves some of our greatest problems and a relatively more distant future in which AI becomes our greatest problem. For now, AI has yet to rebel against us and is proving to be a valuable tool in our everyday lives. AI is being deployed to help companies improve productivity, reduce costs, streamline processes, and unlock analytics and insights that weren’t previously available. Like past disruptive technologies, AI presents new issues under familiar areas of concern. Every company needs to know how its data is being used. AI technology adds a new layer of complexity to that all-too-familiar issue.
In his recent commentary, “AI: Black boxes and the boardroom,” our colleague Tim Wright examines how certain common-sense approaches in the boardroom can alleviate well-founded concerns over the inscrutability of artificial intelligence processes and the bad outcomes that bad data can trigger.
From the frontiers of content creation, we bring news from the longstanding war between man and machine. Or, in this particular case, animators versus software. Researchers from the University of Illinois Urbana-Champaign, the Allen Institute for Artificial Intelligence, and the University of Washington are developing artificial intelligence software, dubbed “Composition, Retrieval and Fusion Network” (or CRAFT for short), that allows a user to generate a new video scene composed of graphic elements extracted from a library of preexisting video scenes simply by typing out a description of the new scene (e.g., “Fred is wearing a blue hat and talking to Wilma in the living room. Wilma then sits down on a couch.”). See here for those who prefer academic papers and here for those who prefer videos.
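For a rough sense of the retrieval step alone, here is a toy keyword-matching sketch over an invented clip library; the actual CRAFT system described in the paper retrieves and fuses video segments with learned neural models, not keyword overlap.

```python
# Toy illustration of text-based clip retrieval. The clip names and tags below
# are invented for this example; CRAFT itself learns these associations.
clip_library = {
    "clip_01": {"tags": {"fred", "blue", "hat", "living", "room"}},
    "clip_02": {"tags": {"wilma", "sits", "couch"}},
    "clip_03": {"tags": {"barney", "kitchen"}},
}

def retrieve_clips(description: str, library: dict, top_k: int = 2) -> list:
    """Rank clips by how many words of the description match their tags."""
    words = set(description.lower().replace(".", "").split())
    scored = sorted(library.items(),
                    key=lambda item: len(words & item[1]["tags"]),
                    reverse=True)
    return [name for name, _ in scored[:top_k]]

print(retrieve_clips("Wilma then sits down on a couch.", clip_library))
# ['clip_02', ...]
```

The interesting legal and creative questions arrive at the fusion step, when material pulled from preexisting scenes is recombined into something presented as new.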
In this roundup, some of your favorite initialisms (AI, IP, TOS) come out to play while stories about government agencies and social media access call into question whether such access is a two-way street.
- There’s a wiki for terms of service agreements. (Arielle Pardes, Wired)
- Qualcomm introduces its first chip built for augmented and virtual reality. (Jacob Kastrenakes, The Verge)
- Microsoft’s HoloLens guides the blind through complicated buildings. (Rachel Metz, MIT Technology Review)
- 23andMe sues Ancestry over some very old intellectual property. (Megan Molteni, Wired)
- Law enforcement officials continue to push for more access to social media data. (Halley Freger, ABC News)
- YouTube stars criticize the platform for testing an algorithm at their expense. (Chris Foxx, BBC.com)
- SoftBank prepares to scale up its robotics business. (Parmy Olson, Forbes)
- PUBG files an infringement suit against Fortnite in South Korea. (Yuji Nakamura and Sam Kim, Bloomberg News)
- Apple introduces new controls to help users maintain screen-time/life balance. (Sarah Perez, TechCrunch)
- The city of Colorado Springs ventures briefly into a gray area as it blocks multiple users on its social media accounts. (Anthony Prosceno/Tony Keith, KKTV 11 News)