New and emerging technologies have always carried a host of potential risks alongside their often dazzling promise. Just as dependably, those risks have been ignored, glossed over or simply missed as public enthusiasm waxes and companies race to bring a product to market first. Automobiles promised to move people (and products) from one place to another at life-changing speeds, but they also posed a danger to life and limb while imposing new burdens on existing infrastructure. Even as technological leaps have shifted from appliances and aircraft to computers, connectivity and large language models (LLMs), new and untested technologies continue to outpace the government's and the public's ability to moderate them. But while one can debate what constitutes an acceptable gap between the practical and the ideal when it comes to regulating, mandating and evaluating the pros and cons of new technology, societies tend to generate their own methods of informing the public and reining in the more harmful aspects of the latest thing.
Governments regulate, nonprofits advocate, the media investigates, and, sometimes, private individuals communicate in ways that inform the activities of the other three institutions. Together, these forces form an ecosystem of sorts, sometimes coordinating with and supporting one another even as they approach public safety in differing ways, and through a variety of methods they bolster public awareness of the risks associated with new technologies.
Federal: One need look no further than the emergence of the internet to observe the federal government's soft-touch approach to regulating new technologies. With no overarching law governing online privacy, the internet is regulated by a patchwork of federal and state privacy and contract laws, some of which pre-date the internet, such as the U.S. Privacy Act of 1974. The federal approach to regulating new technologies remains unchanged in its tendency to rely on piecemeal agency actions and to promote best practices, policies and guidance in lieu of prescriptive rulemaking. However, because the federal government is also deeply invested in ensuring that businesses and technological innovation continue to flourish, it has not hesitated to pass legislation, such as the CHIPS Act, or to fund programs, such as the DOE's Civil Nuclear Credit Program, that support these interests. For now, emerging technologies seem likely to follow previous regulatory patterns, with riskier technologies garnering more proposals than regulations.
Global: Other countries have been far more aggressive in their attempts to regulate the internet. In 2018, the EU's General Data Protection Regulation (GDPR) took effect, protecting the data privacy of EU citizens through rules governing the collection and processing of personal information. It is one of the most exacting data security laws in the world, and companies that have failed to comply with it have received eye-watering fines. It appears likely that the EU's no-nonsense approach to enforcing the GDPR will continue. In March 2023, one EU regulator, Italy's Garante, blocked OpenAI's ChatGPT for breaching data laws. And the EU is not alone in assuming this protective stance. On April 11, 2023, the Cyberspace Administration of China (CAC) issued proposed regulations focused on the cybersecurity and data privacy risks associated with generative AI services.
State: States often take the lead on issues that impact public safety, as with PFAS contamination and COVID-19 health guidelines, and though their rules may conflict with those of other states or the federal government, states are often responsible for creating practical and effective regulatory measures that protect and promote the specific interests of their citizens and businesses. Some states take a more hands-on approach to regulating new and emerging technologies and have added a variety of new laws to the patchwork U.S. regulatory fabric. In 2018, California passed the California Consumer Privacy Act (CCPA), which extends privacy protections to the internet and resembles the EU's GDPR in some respects, and some states have already established oversight commissions or passed bills that regulate AI. In addition, companies that wish to market new technologies in the United States not only have to comply with federal licensing requirements but must also address each state's licensing requirements, which tend to be more exacting.
Nonprofits: Mothers Against Drunk Driving (MADD) is one of the foremost examples of how a nonprofit can effectuate a regulatory sea change. Formed in the early 1980s by victims of drunk drivers who rallied to increase public awareness and change state and federal laws, it quickly drew volunteers and the attention of lawmakers. In 1982, the Howard-Barnes Alcohol Traffic Safety Law was passed to provide incentive grants to states willing to lower the legal blood alcohol concentration limit from .15 percent to .10 percent, along with other countermeasures. The public has good reason to view nonprofit advocacy as a necessary tool in the public safety toolbox, and the variety of support these organizations offer provides an added degree of protection. While the Center for Internet Security promotes cybersecurity by offering professional advice to businesses and best practices for securing IT systems and data, the Electronic Privacy Information Center's (EPIC) brand of advocacy "focuses public attention on emerging privacy and civil liberties issues and protects privacy, freedom of expression, and democratic values in the information age."
Media: One of the most persistent and visible watchdogs is the media, which has long acted as a gatekeeper, using its investigative muscle to scrutinize issues or events that may impact public health and safety. New and emerging technologies are frequently the target of investigative reports, and the impact these technologies have on privacy has been the subject of numerous headlines. When Yahoo announced in 2016 that Russian hackers had stolen the personal information and emails of as many as 500 million users in 2014, this proved to be only a portion of the story. The media continued pursuing the story and the resulting FBI investigation, and in 2017, news reports revealed the full scope of the data breach: all three billion user accounts had been compromised. In addition to investigative reporting, certain publications focus almost solely on consumer safety through product testing. In 1909, Good Housekeeping magazine established its seal of approval and began offering a limited warranty for products that passed its stringent testing standards. Consumer Reports remains a go-to source for consumers who want to compare products before they buy or use them and is well known for compiling easy-to-use lists of products that meet certain criteria, including the publication's latest list of electric vehicles that qualify for the Inflation Reduction Act tax credit.
Individual Watchdogs: In 1965, Ralph Nader published Unsafe at Any Speed, an exposé of the automobile industry of such significance that it led to the passage of the National Traffic and Motor Vehicle Safety Act the following year. Nader leveraged this publicity to establish a number of watchdog groups, including the Center for Auto Safety and Public Citizen. Though sometimes overlooked, individuals—whether hobbyists, activists or enthusiasts—often act as informal guardrails for the societies in which they live. These individuals tend to communicate and connect with one another, sometimes forming official groups to pursue their common interests. Numerous watchdog organizations have similar origin stories. The Electronic Frontier Foundation (EFF) was founded in 1990 by a small online community that began sharing concerns about civil liberties after the United States Secret Service seized a private business's electronic equipment and subsequently accessed and deleted its stored data. In the ensuing years, this small group has evolved into a formidable nonprofit dedicated to ensuring that "technology supports freedom, justice, and innovation for all people of the world."
It may be a given that technology will always outpace our ability (and willingness) to fully control it through laws and regulations, but that does not mean there is no response at all. Paying attention to such reactions, to this immune system of sorts for the public weal, can help individuals and companies alike regain a step or two in efforts to anticipate, and mitigate, the harmful byproducts of the next big thing.