Regulating a tea kettle

AI regulation is becoming a priority for governments worldwide due to concerns about safety, transparency, bias, and privacy. The EU AI Act and US efforts such as the Blueprint for an AI Bill of Rights and the Biden White House's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence aim to establish foundational frameworks to protect the public as AI rapidly advances. These efforts are comprehensive and attempt to build a foundation from which further regulations and laws can grow.

Common to these efforts is an acknowledgement that existing rules protecting the public from other industries can be leveraged for AI. A common analogy is the automotive industry. Auto safety regulations were developed in response to accidents and injuries. When vehicles were first brought to market, there were few regulatory restrictions. When people started getting launched through windshields on hard stops, it became clear that the rules were too lax, and regulations requiring seat belts were crafted.

AI is a vastly more complicated technology than the automobile. Regulations must consider issues like algorithmic racial bias and data privacy. Although these types of regulatory concerns are new for a technology (can a Toyota Tacoma express racial bias?), many laws and regulations already exist to defend the public's civil rights from industry. AI governance frameworks often build from these existing laws.

Spending time with these nascent regulatory frameworks reveals a wide landscape of concerns. Take the EU's work. On its own site, the EU describes its priority: to "make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly." Each of these pillars of concern is multi-leveled as well. For example, "safe" encompasses both the regulation of automated systems that could affect a person's physical safety, such as cars, aviation, and medical devices, and the "cognitive behavioural manipulation of people or specific vulnerable groups."

Regulations are then further considered through a prism of risk levels. "Unacceptable risk" implementations of AI will be banned outright. Use cases here include real-time and remote biometric identification systems and social scoring systems that classify people "based on behaviour, socio-economic status or personal characteristics."

High-risk applications are those "that negatively affect safety or fundamental rights" but that wouldn't be banned. These applications are further divided into two sub-categories. One covers cases that can be regulated through existing consumer product regulations. The other covers more sensitive, effectively higher-risk, cases that are flagged for special consideration.

Since many high-risk use cases fall under the purview of existing consumer product regulations, it is useful to review an example. Let's look at the humble tea kettle.

In the EU, electric tea kettles must comply with the Low Voltage Directive (LVD) and electromagnetic compatibility regulations. They also need to meet EN 60335-2-15, the standard covering household electric appliances for heating liquids. While a traditional stovetop kettle may seem simple, it too has a set of regulations, typically focused on material safety, food-grade requirements, heat-resistant handle design, and anti-scalding spout design specifications.

It is true that history can pivot on a good cup of tea, so this level of regulation seems legitimate. I can only imagine the depth of complexity when a similarly detailed regulation gets applied to an AI system.

Yet AI is not a tea kettle, a car, or an electric saw. It may not even be a tool. According to a recent paper by lead authors Robert Long and Jeff Sebo titled "Taking AI Welfare Seriously", there is a realistic possibility that some AI systems will be "conscious and/or robustly agentic in the near future."

The authors argue "that there is substantial uncertainty about these possibilities, and so we need to improve our understanding of AI welfare and our ability to make wise decisions about this issue. Otherwise, there is a significant risk that we will mishandle decisions about AI welfare, mistakenly harming AI systems that matter morally and/or mistakenly caring for AI systems that do not."

How do we craft regulations for a tea kettle that may be capable of feeling hurt? What about the kettle that takes pleasure in assisting with your Earl Grey or offers a suggestion on how to make it just so?

Long and Sebo don't predict that AI will become sentient, and they acknowledge that the complexity surrounding the science of consciousness makes any determination of sentience fraught. They do say there is a possibility that, whether intentionally or unintentionally, a conscious AI will be developed, and that ignoring this possibility risks inflicting great moral harm.

The regulatory frameworks for AI being developed now consider the impact of technological tools on moral beings with rights. If the tools themselves also have rights, the vectors of governance would need to change.

For now, considering how to regulate a tea kettle with moral patienthood may get in the way of the governance we need to develop. Current AI regulatory efforts rightly prioritize protecting the public from harm. However, the possibility that some AI systems may develop consciousness or agency in the near future adds a new dimension to our governance challenges. We must start grappling with difficult questions about the moral status of AI. One day soon, the tea kettle we've been monitoring may have a lovely whistle we haven't quite heard before.
