Can AI be regulated?
International standards panels would be key, and Congress should have a role

AI is a broad concept that has been around since at least the 1970s, when computer programs attempted to perform medical diagnostics and other tasks. Those early attempts performed poorly, and artificial intelligence languished.
The immense growth of computing power, the development of neural network algorithms and parallel processors, and the mountains of data collected by government and big tech changed that.
In 2023, the introduction of highly impressive chatbots captured the public's awareness and imagination. Now, AI is a prominent public issue even as it creeps into everyday life and activities.
In March 2023, many tech moguls, including Elon Musk, published an open letter calling for a moratorium on AI development. But AI is far past the point of being controlled by moratoria. For one thing, given the prevailing lack of trust and accountability, who would believe that tech moguls would abide by their own call to halt development?
Some of the concerns about AI held by a large number of people, including the technology moguls who published the open letter, are legitimate. Will AI become intrusive and controlling? Will AI bias harm ethnic or economic groups? Will AI widen the already vast chasm between rich and poor? Will AI take my job or cause mass unemployment? Will AI enable the development of ghastly biological weapons? In the end, will AI be the end of humanity?
These and other questions are in tension with the drive for beneficial AI. That tension has set off a scramble to create legislation erecting guardrails around AI. At least 40 AI bills have been proposed in Congress so far. The European Union just passed a major AI regulatory bill, and many other countries are considering or have already passed AI legislation.
The rush to develop AI is also being fueled by international competition to be the world leader in the technology. The countries that lead in AI will have strong economic and military advantages over those that do not.
There is a precedent that illustrates the difficulty of federal AI legislation: privacy.
Big tech, government, health care agencies, public safety agencies and other entities all hold massive amounts of data on individuals. Federal privacy law should precede and lay the groundwork for federal AI law. Congress has attempted to pass privacy legislation, but it has been hung up by political wrangling, and not necessarily along party lines. States that have their own privacy laws are blocking a federal privacy law because they do not want to be preempted. Similarly, states are now passing their own AI laws (more than 140 bills have been introduced so far), and they will not want those preempted by federal law.
Another problem with AI legislation is that well-intended laws may produce negative unintended consequences. And even though AI is currently a bipartisan issue, I doubt it will stay that way once serious legislation is on the table.
Fortunately, President Joe Biden issued a comprehensive executive order on AI on Oct. 31 that begins the process of creating federal regulations. An executive order does not take the place of legislation passed into law, but this one can bring some order to the technology and give Congress time to craft effective legislation. Over the longer term, however, executive orders are likely to be challenged in the courts and can be overturned by successive administrations.
There is a way AI can be controlled: through AI standards created by international standards committees, working with a broad range of stakeholders and overseen by appropriate standards-setting bodies. AI touches many issue areas, including national security, job displacement, bias, energy use, public safety, education, intellectual property and watermarking.
Each issue area needs a standards committee to do the hard work of creating effective and enforceable standards. That will take time, but the work can move forward in the window the president's order has created.
Congress has a significant role to play in creating and enforcing standards. First, it can authorize and fund the appropriate standards-setting bodies. Then it can require AI developers and applications to comply with the resulting standards. Once standards are in place, standards committees can react quickly to changing technology, something Congress and the regulatory process cannot do. Moreover, industry-created standards will help control AI because providers that do not meet them will not find a home in the marketplace or in government procurements.
Jerry McNerney was a Democratic member of Congress from 2007 to 2023, representing California's 9th and 11th Districts. He was the chair of the Congressional AI Caucus. He is now a policy adviser at Pillsbury Winthrop Shaw Pittman LLP and CEO of The AI Trust Foundation, a nonprofit created to promote the beneficial use of AI. McNerney is also a candidate for the California state Senate.