Apple gets around to AR, the NHL enters esports, the Internet of Things may bring new meaning to “workers unite,” so many medical records, and more …
It’s difficult finding an industry that doesn’t stand to be transformed in some way by artificial intelligence. Yet no matter how gleaming the potential, some industries are naturally more cautious than others. In her latest post, “Artificial Intelligence: A Boon for Insurance Underwriting?”, Ashley E. Cowgill touches on the insurance industry’s reluctance while pointing to some areas where AI stands to be more quickly embraced.
Whether you are a founder, would-be investor, or acquirer, correctly valuing a company's intellectual property is rarely a simple task, and it can be even more challenging when that IP involves artificial intelligence or machine learning. See what our colleague Josh Tucker has to say about the challenges and importance of protecting underlying IP on 7 Mile Advisors' Deal Talk podcast, "How Patents, AI and Machine Learning Affect Value."
A sponsored post that popped up on my Instagram last week captured both my obsession with statement jewelry and my periodic check-in on developments in facial recognition technology: "Artist Designs Metal Jewelry to Block Facial Recognition Software from Tracking You". Statement jewelry? Check. An indication of how stressed out people are by facial recognition technology? I think so. Though the design is an experimental project, it's not a far stretch to imagine it actually being sold and purchased.
Much like humans, bots come in all shapes and sizes. In social media networks, these bots can like what you post and even increase your followers. Companies use bots for all types of things—from booking a ride to giving makeup tutorials. Some bots can even solve your legal problems. Besides saving time and money, bots have the potential to reduce errors and increase a business’s customer base. But what happens when bots spy on users and share personal information? Or when they make racial slurs and offensive comments?
(Note, this post has spoilers for Avengers: Endgame.)
Perhaps one of the most mesmerizing scenes in Avengers: Endgame is the one in which all the MCU superheroes (including those on Titan) come through Dr. Strange's portals to join the battle against Thanos. In Avengers: Infinity War, Dr. Strange didn't use those portals to send Iron Man and the others on Titan back to Earth before everyone got dusted, but fans certainly might have enjoyed that alternative storyline. Understandably, one enormous limiting factor for alternative storylines is cost, especially when $600–800 million was spent to create the two movies as they are. Future advances in artificial intelligence technologies may change that. Indeed, a number of large tech companies are already interested in creating interactive content that personalizes storytelling (e.g., Black Mirror's "Bandersnatch" episode), and recent developments in machine learning algorithms, including those fueling the creation of photorealistic images, may bring that reality closer sooner rather than later. If so, under what circumstances will companies own, and get to collect on, the copyright?
When it comes to photos destined for the web, I'd rather be behind the camera than in front of it. However, on a recent trip to Tokyo I was reminded that photos of me, and specifically my face, are often captured and processed by systems increasingly embedded in modern life.
In another case of the law trying to keep pace with evolving technology, legislators are introducing bills to punish those who create false images that purport to be real. Targeting the rise of automated computer-generated imagery that has become increasingly accessible to the public, California Assemblyman Marc Berman introduced a bill on February 14, 2019, that would create a criminal cause of action for making or distributing a "deepfake." Deepfakes are multimedia recordings, often audiovisual, that seem real but are generated by computers, often using algorithms enhanced by artificial intelligence.
No one knows your face as well as your iPhone does. All the unique variances that make your face yours and yours alone are data points your iPhone uses to unlock itself, substituting a face for a thumbprint. This same data can be used by the underlying technology, facial recognition, in a vast array of applications, from border control to photo tagging to law enforcement. But is this data (the measurement of the space between the eyes, the texture of the skin, and so on) open data? Or do individuals have a right to protection of an image of their face?