
Driving Questions, Human Error and Growth Industries for Machine Learning

One of the biggest obstacles self-driving cars have to get around is the one between our ears. Even as these vehicles are hitting the streets in pilot projects, three out of four Americans aren’t comfortable with the idea of their widespread use.

The industry advocacy group that conducted the poll, Partners for Automated Vehicle Education, attributes public skepticism to inexperience. The more autonomous vehicles that people see, PAVE says, the more people will believe what research has already shown: that humans are far more likely than bad algorithms to cause auto accidents. Whether or not drivers realize it, many of them are already relying on algorithmic software to keep them from rear-ending the car in front of them or drifting into the next lane. A vehicle that takes over the controls altogether is just the next logical step.
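To make “algorithmic software” a little less abstract, here is a deliberately simplified Python sketch, not any manufacturer’s actual system, of the kind of rule a forward-collision warning rests on: estimate the time to collision from the gap and the closing speed, and alert the driver when it falls below an assumed threshold.

```python
# Deliberately simplified illustration (not any vendor's real system):
# a forward-collision warning based on time-to-collision (TTC).

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_mps <= 0:          # not closing in on the lead vehicle
        return float("inf")
    return gap_m / closing_speed_mps

WARNING_THRESHOLD_S = 2.5               # assumed threshold, for illustration only

# 30 m gap, closing at 15 m/s -> 2.0 s of margin
ttc = time_to_collision(gap_m=30.0, closing_speed_mps=15.0)
if ttc < WARNING_THRESHOLD_S:
    print(f"Forward-collision warning: {ttc:.1f} s to impact")
```

Real systems fuse radar and camera data and use far richer models, but the underlying question, how many seconds of margin are left, is the same.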

So far, though, public opinion on the subject seems to have stalled, despite the fact that the technology and its commercial deployment continue moving forward.

Is our inexperience really to blame, as PAVE contends? Or is it the opposite—our experience of having been promised that, say, a state-of-the-art ship is unsinkable, only to see it capsize spectacularly on its maiden voyage?

In the end, it wasn’t bad engineering that sank the Titanic. It was a combination of human errors, not the least of which was the failure to think about the unthinkable. (Why put lots of lifeboats on an unsinkable ship?)

Machine learning may drive the car, but we drive machine learning, which is vulnerable to the possibilities we don’t consider.

With self-driving technology, the vulnerability isn’t that the car might not see the broken-down truck in the road, but that it might not see the black or brown person in the crosswalk. The probability of that happening is small—about five percentage points higher than if the pedestrian were light-skinned—but statistics are cold comfort when you’re on the Titanic.
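For readers who want the arithmetic behind a percentage-point gap, the sketch below is a minimal, hypothetical illustration in Python; the outcomes and group sizes are invented, and real studies control for many factors this toy example ignores.

```python
# Hypothetical illustration of a percentage-point gap in pedestrian-detection
# miss rates. All numbers below are invented for this example.

def miss_rate(outcomes):
    """Share of pedestrians the detector failed to flag (1 = detected, 0 = missed)."""
    return outcomes.count(0) / len(outcomes)

# Invented outcomes for two groups of 20 pedestrians each.
lighter_skin = [1] * 19 + [0]          # 1 of 20 missed  ->  5% miss rate
darker_skin = [1] * 18 + [0] * 2       # 2 of 20 missed  -> 10% miss rate

gap = (miss_rate(darker_skin) - miss_rate(lighter_skin)) * 100
print(f"Miss-rate gap: {gap:.1f} percentage points")   # -> 5.0
```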

Machine learning permeates the public and private sectors beyond just the automotive industry. Other sectors that rely on AI include:

  • Financial services: Banks use predictive data to drive investment decisions, to prevent fraud and to identify potentially risky loans.
  • Health care: Wearable monitors allow providers to track their patients’ vital signs remotely, in real time. Doctors are making diagnoses with the help of predictive algorithms.
  • Retail: Targeted ads based on algorithmic recommendations have been shown to significantly influence online shoppers’ buying choices.
  • Government: Municipalities are using algorithmic software for a variety of applications, from “predictive policing” to improving efficiency in public utilities.
  • Transportation: Data analytics is now integral to the transportation industry, which can adjust routes in response to real-time and predicted conditions.
  • Oil and gas: These industries are reducing costs, increasing safety and boosting efficiency through machine-learning technology.

With data having been called “the new oil … arguably the world’s most valuable resource,” leveraging that resource through algorithms seems inescapable. Thus, while an algorithm is ultimately just a tool—and while not all “bad” algorithms have dire consequences—every algorithm comes with a challenge for its creator and its user: Think about the unthinkable.

