Much like humans, bots come in all shapes and sizes. In social media networks, these bots can like what you post and even increase your followers. Companies use bots for all types of things—from booking a ride to giving makeup tutorials. Some bots can even solve your legal problems. Besides saving time and money, bots have the potential to reduce errors and increase a business’s customer base. But what happens when bots spy on users and share personal information? Or when they make racial slurs and offensive comments?
When it comes to photos destined for the web, I’d rather be behind the camera than in front of it. However, on a recent trip to Tokyo I was reminded that photos of me, and specifically my face, are often captured and processed by systems increasingly embedded in our modern life.
In another case of the law trying to keep pace with evolving technology, legislators are introducing bills to punish those who attempt to create false images that purport to be real. Targeting the rise of automated computer-generated imagery that has become increasingly accessible to the public, on February 14, 2019, California Assemblyman Marc Berman introduced a bill to create a criminal cause of action for making or distributing a “deepfake.” Deepfakes are multimedia recordings, often audiovisual, that seem real but are generated by computers, often using artificial intelligence-enhanced algorithms.
Online platforms battle the trolls (and ad blockers); Pokémon GO creator Niantic promises to watch things a bit more closely; blockchain is not as “unhackable” as many think; and more …
In the popular Netflix series Ozark, money launderer Marty Byrde expends a lot of time and energy mitigating the risks that relate to his work, including his drug cartel client, a pair of farmers, the local pastor, and his own employee and her relatives—but financial regulators never appear to be a blip on his radar. Would the series turn out differently if Marty’s bank had used artificial intelligence to examine his deposits?
The IoT is a criminal cryptojacker’s delight; new patents suggest Walmart may have an eye on virtual reality in its future; MIT Technology Review has a few technologies worth thinking about in 2018; and more …
If you haven’t seen Sundar Pichai’s presentation on Google Duplex, watch it. The technology is fascinating.
Google is developing software that can assist users in completing specific tasks, such as making reservations by telephone. The software uses anonymized phone conversations as the basis for its neural network and, in conjunction with automated speech recognition and text-to-speech software, can carry on independent phone conversations with other people. Incredibly, the software requires no human interaction—at least by the user requesting the service—to complete its task. The result is that you can task the software to set up a haircut appointment for you, or book a table at a restaurant where it is difficult to get reservations, with no further input needed. It can also work through different scheduling options if your preferred time is not available. And importantly, the conversations seem natural—it is very difficult to tell that one of the participants in the conversation is a computer.
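At a high level, a system like this chains three stages: speech recognition turns the other party’s audio into text, a trained dialogue model picks the next reply in pursuit of the user’s goal, and text-to-speech speaks that reply. The sketch below is purely illustrative — every function is a hypothetical stub standing in for a neural component, and none of it reflects Google’s actual implementation — but it shows the shape of one conversational turn, including falling back to an alternative time slot:

```python
# Hypothetical sketch of a Duplex-style turn loop. In a real system each
# stub below would be a trained model (ASR, dialogue policy, TTS); here
# they are toy functions so the control flow is visible.

def transcribe(audio: str) -> str:
    """Stub ASR: pretend 'audio' is already a transcript."""
    return audio.lower()

def dialogue_policy(utterance: str, goal: dict) -> str:
    """Pick the next reply that moves toward the user's booking goal."""
    if "what time" in utterance:
        return f"Could we do {goal['time']}?"
    if "not available" in utterance:
        # Negotiate an alternative slot instead of giving up.
        return f"How about {goal['fallback']}?"
    if "confirmed" in utterance:
        return "Great, thank you!"
    return "I'd like to book a table, please."

def synthesize(text: str) -> str:
    """Stub TTS: return the text that would be spoken aloud."""
    return text

def handle_turn(audio: str, goal: dict) -> str:
    """One conversational turn: hear, decide, speak."""
    return synthesize(dialogue_policy(transcribe(audio), goal))

goal = {"time": "7 pm", "fallback": "8 pm"}
print(handle_turn("What time would you like?", goal))  # Could we do 7 pm?
print(handle_turn("7 pm is not available.", goal))     # How about 8 pm?
```

The interesting engineering is hidden inside the stubs — the dialogue policy in particular must handle interruptions, clarifications, and natural-sounding hesitations — but the turn loop itself is this simple pipeline.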
In a time when “fake news” is common parlance and tensions rise in response to the smallest media slight, is it time for algorithms to take the place of humans in moderating news? This New York Times article seems to think so. What role, and to what extent, should algorithms play in regulating and implementing everyday business ventures, routine government agency processes, health care management, and the like? Who should take responsibility for a problem or negative consequence if it was all verified by an algorithm? And, importantly, what will enhanced monitoring of algorithms do to the progress and profitability of companies whose bottom line depends on the very algorithms that can cause unforeseen, sometimes very harmful, problems?