
Almost exactly two years ago, at an appearance on Capitol Hill, OpenAI CEO Sam Altman begged Congress to regulate the artificial intelligence industry before it was too late and the technology severely damaged society.
“My worst fears are that we—the field, the technology, the industry—cause significant harm to the world. ... If this technology goes wrong, it can go quite wrong,” Altman said. “Regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful [AI] models.”
Lawmakers left the hearing with high hopes of swiftly passing comprehensive legislation to regulate AI. Talks centered on a new regulatory agency dedicated to the tech industry, with the power to inspect and license the most advanced AI systems to ensure national and global safety.
They vowed to avoid repeating the mistakes of the internet era, when a lack of regulation allowed the technology to dominate global life before its societal harms became undeniable.
Since then, both chambers have released multiple frameworks to regulate AI but have passed only one bill focused on preventing its harms. Most lawmakers have dropped the idea of a new regulatory agency, as well as licensing the top AI programs. In their place are vague concepts of federal regulation, such as a voluntary testing standard promoted by the National Institute of Standards and Technology, and the occasional stringent rule targeting narrow issues like the rise of deepfake revenge porn.
Rather than waiting for federal action, states have started to act. Last year, 41 states enacted 107 laws aimed at regulating emerging AI technologies, according to an analysis by New York University's Center on Tech Policy. At least another 62 bills have been introduced during the most recent legislative session, according to PolicyView: AI tracking.
But all that effort by states to regulate AI may soon prove to have been wasted.
As part of its reconciliation package, the House Energy and Commerce Committee included a provision that would block all state AI regulations for 10 years. The language made it out of committee, but it might be excluded from the final version of the package because of a potential violation of the Byrd rule, which limits reconciliation legislation to budget-related matters.
The provision has support in the upper chamber. Senate Commerce, Science, and Transportation Chair Ted Cruz endorsed the idea in a hearing last week that Altman attended. Altman now says the AI industry should focus on self-regulation, and that if governments want to follow the industry's lead once norms are set, there may be room for that.
Even Democratic Colorado Gov. Jared Polis, who signed the first statewide comprehensive AI regulation last year, endorsed the provision that would render that law moot.
The 10-year state regulatory moratorium likely won’t pass, at least not in the reconciliation package. But the shift in regulatory winds is likely here to stay, effectively slamming shut the brief window that U.S. governments had to regulate AI.
—Philip Athey

AI risks:
Top AI risk takeaways:
- Companion chatbots are the latest wave of AI tech to hit the market. These chatbots can impersonate real or fictional people and respond to user prompts as if they were real friends or lovers.
- The new tech has the potential to help people build confidence or work through personal problems, but it may also increase loneliness and anxiety among users as they become increasingly isolated with their AI companions. The issue is even more acute among kids.
- Despite the concerns, tech companies continue releasing more programs for young users, while state and federal governments are slow to respond.
Google plans to roll out its AI chatbot to children under 13: As AI firms race to reach young users, Google will roll out its Gemini chatbot to children with parent-managed accounts, including safeguards like data-use restrictions and content filters. Parents can manage access, though the company warned that some risks remain. (New York Times)