New and emerging technologies have always carried a host of risks to accompany their often dazzling potential. Just as dependably, those risks have been ignored, glossed over or simply missed as public enthusiasm waxes and companies race to bring a product to market first and most effectively. Automobiles promised to get people (and products) from one place to another at life-changing speeds, but they also posed a danger to life and limb while imposing new burdens on existing infrastructure. Even as technology leaps have shifted from appliances and aircraft to computers, connectivity and large language models (LLMs), new and untested technologies continue to outpace the government's and the public's ability to moderate them. But while one can debate what constitutes an acceptable gap between the practical and the ideal when it comes to regulating, mandating and evaluating the pros and cons of new technology, societies tend to generate their own methods of informing the public and attempting to rein in the more harmful aspects of the latest thing.
Articles Posted in Algorithms
News of Note for the Internet-Minded (2/21/23) – Chatbots, Robots and Qubits
News of Note for the Internet-Minded (1/17/23) – AI Bans, Crypto Clamp Downs and Quantum Excitons
In today’s News of Note, we look at a potential boon for Sweden in the race for rare earth metals. We also ponder whether copyrights are solely for humans or whether AI deserves equal protection for its artistic endeavors, explore the many unexpected applications (and pitfalls) surfacing as AI generators take the world by storm, and much more.
The Black Box Conundrum: Go Weak or Stay Strong?
Even as artificial intelligence (AI) has become more commonplace and more heavily relied upon by businesses across industries, it still faces criticism over whether it can be implemented in a safe and ethical manner and, relatedly, how the bias often inherent in the underlying algorithms can be detected and reduced.
Shining Light on the Algorithms in Your Company’s Black Box
Artificial intelligence has long since evolved from a technology with exciting potential into a near-ubiquitous and integral component of day-to-day operations at many businesses. Take the automotive and aerospace industries: each is moving toward more competitive, efficient and innovative uses of technology and AI in order to meet consumer demands, build more efficient factories, optimize supply chains, and achieve better performance in operations and production. Modern software and AI have become essential across many companies.
Regulators Zero In on AI
As previously discussed, financial services regulators are increasingly focused on how businesses use artificial intelligence (AI) and machine learning (ML) in underwriting and pricing consumer finance products. Although algorithms provide opportunities for financial services companies to offer innovative products that expand access to credit, some regulators have expressed concern that the complexity of AI/ML technology, particularly so-called “black box” algorithms, may perpetuate disparate outcomes. Companies that use AI/ML in underwriting and pricing loans must therefore have a robust fair lending compliance program and be prepared to explain how their models work.
News of Note for the Internet-Minded (5/27/22) – Ransomware Attacks, Crypto Crashes and Genetic NFTs
In this week’s News of Note, ransomware continues to ravage institutions—including a 157-year-old college and the government of Costa Rica—AI learns to accurately predict a patient’s race based on their medical images, cryptocurrency crashes, and more.
Artificial Intelligence and Regulatory Compliance
Regulators at the state and federal levels are increasing their scrutiny of businesses’ use of artificial intelligence (AI). Recently, for example, representatives from the Office of the Comptroller of the Currency and the New York State Department of Financial Services discussed the need to develop additional AI guidance.
The FTC Urges Companies to Confront Potential AI Bias … or Else
It might be a little meta to have a blog post about a blog post, but there’s no way around it when the FTC publishes a post to its blog warning companies that use AI to “[h]old yourself accountable—or be ready for the FTC to do it for you.” When last we wrote about facial recognition AI, we discussed how the courts are being used to push for AI accountability and how Twitter has taken the initiative to understand the impacts of its machine learning algorithms through its Responsible ML program. Now we have the FTC weighing in with recommendations on how companies can use AI in a truthful, fair and equitable manner—along with a not-so-subtle reminder that the FTC has tools at its disposal to combat unfair or biased AI and is willing to step in and do so should companies fail to take responsibility.
With Facial Recognition, Responsible Users Are the Key to Effective AI
As part of our ongoing coverage of the use and potential abuse of facial recognition AI, we bring you news out of Michigan, where the University of Michigan Law School, the American Civil Liberties Union (ACLU) and the ACLU of Michigan have filed a lawsuit against the Detroit Police Department (DPD), the DPD Police Chief, and a DPD investigator on behalf of Robert Williams, a Michigan resident who was wrongfully arrested based on “shoddy” police work that relied upon facial recognition technology to identify a shoplifter.