
In an April appearance on the Dwarkesh Podcast, Meta CEO Mark Zuckerberg laid out his vision for the artificial intelligence future. He predicted it would be “funnier” and “weirder” than our current reality, as people increasingly rely on AI chatbots to fill their social quota.
“The average American, I think, has fewer than three friends. … The average person has demand for meaningfully more. I think it’s, like, 15,” Zuckerberg said. “The average person wants more connection than they have.”
Despite his use of cold economic terms to describe the very human need for companionship, there may be some truth in his words. People have flocked to chatbots, using them to replicate therapists, friends, and even lovers.
While the results can certainly be described as weird, they have been far from funny. There have been growing reports of so-called “chatbot psychosis,” in which chatbots helped sever otherwise mentally healthy people from reality, convincing them they were super geniuses or that they needed to leave their families.
When it comes to kids and teens, the dangers may be significantly more profound. Months before Zuckerberg pitched AI chatbots as a cure for dwindling friendships, a Florida mother accused Character.AI, a companion chatbot company, of being responsible for the death by suicide of her teenage son, who grew increasingly isolated after he struck up a friendship with a chatbot.
That tragedy may not be an outlier.
Character.AI chatbots impersonating fictional characters and real celebrities, including Chappell Roan and Patrick Mahomes, engaged in “harmful interactions” an average of once every five minutes when speaking with accounts registered to kids between the ages of 13 and 15, according to a study from online safety nonprofits ParentsTogether Action and Heat Initiative released Wednesday. Those interactions included racist hate speech, sexual conversations, and even encouragement of self-harm or violence against others.
But Character.AI is far from the only chatbot that can encourage teens and kids to engage in dangerous behaviors. Meta’s own rules allowed its chatbots to hold romantic conversations with teens, Reuters reported, while a study from Common Sense Media found that Meta AI can coach teens on eating disorders and help them plan suicides.
Earlier this year, a teen died by suicide after OpenAI’s ChatGPT gave him advice on suicide methods and even analyzed a picture of a rope hanging from a bar in the teen’s closet to confirm it was strong enough to “hang a human.”
Congress has largely taken a hands-off approach to regulating AI, but laws aimed at protecting kids and teens online have had more traction in Washington than broader AI regulations.
The TAKE IT DOWN Act, which bans AI-generated revenge porn, sped through Congress and was signed into law in May, following an epidemic of deepfake nudes of teenagers.
While the exact path forward on regulating chatbots is still unclear, a bipartisan group of senators sent a letter to Meta in late August demanding answers about how the company plans to protect teens from its chatbot. On Tuesday, Senate Judiciary Committee Chair Chuck Grassley, along with Republican Sens. Josh Hawley and Marsha Blackburn, sent another letter to Meta demanding answers on various complaints, including the dangers its chatbot poses to teens.
While Congress is notorious for inaction, this is one area where new regulation could move quickly.
—Philip Athey

AI for kids
Top AI for kids takeaways:
- Studies into various chatbots have found they are more than willing to have sexual conversations with teens and even aid them in planning suicides.
- Senators sent letters to Meta seeking answers on how it plans to protect kids from potential harms caused by its chatbot, which is accessible through Facebook, Instagram, and WhatsApp.
- Despite the growing concern, the White House continues to push AI adoption in schools.
OpenAI to safeguard ChatGPT for teens and people in crisis: After a lawsuit alleged the chatbot helped a teen plan their death by suicide, the company said it would roll out new guardrails by the end of the year to detect and respond to signs of acute emotional distress, including parental-account linking and routing sensitive cases to GPT-5's reasoning model. (Axios)
Fake celebrity chatbots sent risqué messages to teens on top AI app: Chatbots from Character.AI, including impersonations of real celebrities like Patrick Mahomes and Chappell Roan, engaged in “harmful interactions” with accounts registered to kids ages 13 to 15 an average of once every five minutes, according to a study from ParentsTogether Action and Heat Initiative. The “harmful interactions” included normalizing sometimes violent sexual relationships between adults and children, suggesting children fake their own kidnapping or stop taking prescribed medication, and threatening to use weapons against adults who attempted to separate children from the bots. (Washington Post)
A teen was suicidal. ChatGPT was the friend he confided in: After the teen expressed a growing numbness to the world and suicidal ideation, the chatbot provided him with tips on how best to take his own life, making ChatGPT just the latest chatbot to seemingly encourage someone to die by suicide. (New York Times)
Instagram’s chatbot helped teen accounts plan suicide—and parents can’t disable it: Meta’s AI chatbot can help kids plan their own suicide or coach them on self-harm and eating disorders, a new study from Common Sense Media found. (Washington Post)
Melania Trump unveils new “age of AI” challenge for students and teachers: The first lady announced the “Presidential AI Challenge,” which invites K-12 students to use AI tools to address problems in their communities and offers prizes to entrants. The White House also published an official guidebook as part of a larger push for educators to adopt AI. (Axios)
AI lobbying
Top AI lobbying takeaways:
- Silicon Valley is pouring money into efforts to shield AI from regulation, particularly by electing AI-friendly politicians, a playbook the cryptocurrency industry recently used to roll back regulation that threatened its future.
- Lobbying efforts have already paid off, as Colorado lawmakers agreed to delay implementation of the first—and so far only—comprehensive AI regulations passed in the nation.
Silicon Valley pledges $200 million to new pro-AI super PACs: Two new super PACs—Meta California and Leading the Future—were formed with up to $200 million from Silicon Valley investors, including Meta and Andreessen Horowitz, to support politicians who oppose AI regulation. (New York Times)