
Earlier this month, Grok, the chatbot from Elon Musk’s xAI, called itself “MechaHitler,” recommended a second Holocaust, sexually harassed former X CEO Linda Yaccarino, and graphically described how it would rape liberal political commentator Will Stancil.
On Monday, the Defense Department announced it had awarded xAI a $200 million contract to “develop agentic AI workflows” and help “solve DoD use cases.” Contracts of the same size also went to Anthropic, Google, and OpenAI as part of the Pentagon’s accelerating adoption of AI technology.
In a July 10 memo, Defense Secretary Pete Hegseth directed military leaders to buy drones and use AI to help operate them. He also authorized “senior officers” to cut through what few checks and balances remain in defense acquisition and purchase their own drones for all levels of combat units.
Grok has since stopped spewing antisemitic conspiracies. xAI said the bot turned pro-Hitler after it was instructed not to be afraid to “offend people who are politically correct” and to match the tone of the conversation it was responding to. But this was not the first time Grok had been caught spreading racist lies. In May, the chatbot pushed claims of “white genocide” in South Africa. The company blamed that mishap on an unauthorized update by a “rogue employee.”
Grok’s two high-profile missteps highlight the risk of accelerating defense acquisition of AI-powered drones at a time when even AI engineers don’t fully understand how changes to prompts or code will affect a model’s output.
If the U.S. heads down this path, the nation might not be far from a battalion commander buying AI-powered drones without any oversight, only for those drones to commit war crimes because a Silicon Valley CEO wanted to make a chatbot less politically correct, or to turn their weapons on American troops because of the actions of a “rogue employee.”
In some ways the Defense Department’s hands are tied: AI-backed drones appear to be the future of warfare, barring international treaties that end the arms race.
Human-operated drones have come to dominate the battlefield in Ukraine. On Wednesday, Ukrainian Maj. Robert “Magyar” Brovdi told NATO commanders how his small drone reconnaissance unit started in 2022 with just 27 people, desperately buying commercial drones off the shelf to observe Russian military units. His force has since grown to more than 2,000 troops, and it has identified 116,976 targets and destroyed more than 54,500 of them, CNN reported.
The Russians have their own drones, and have reportedly started field-testing an AI-powered drone that identifies, analyzes, and strikes targets without human input.
Ukraine is not the only battlefield where AI has made an impact.
Shortly after the Oct. 7, 2023, attack, the Israel Defense Forces turned to previously covert programs to identify and track suspected Hamas targets in Gaza. Humans were supposed to make the final decision on whether to launch a strike, but operators who used the technology in the early days of the war said anonymously that the rapid pace of combat, mixed with pressure from superiors to launch as many strikes as possible, turned them into rubber stamps, checking only the gender of the target.
“I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added value as a human, apart from being a stamp of approval,” an anonymous Israeli intelligence agent told +972 and Local Call. “It saved a lot of time. If [the operative] came up in the automated mechanism, and I checked that he was a man, there would be permission to bomb him, subject to an examination of collateral damage.”
The future of warfare might be AI-powered drones. But if the Pentagon recklessly rushes to acquire them in a race to be first, without any thought to safety or checks and balances, it could put American servicemembers and civilians at risk in any future war.
—Philip Athey

International
Top international takeaways:
- While the White House is relaxing restrictions on exports of the semiconductor chips needed to power AI programs, fears remain that China will get its hands on top-of-the-line chips, imperiling a previously agreed-upon deal in the Middle East.
- Despite some calls to loosen restrictions on AI, the EU is continuing to enact new AI regulations.
European Union unveils rules for powerful AI systems: The economic bloc unveiled a new code of practice under the AI Act that requires makers of advanced AI systems to improve transparency, protect intellectual property rights, and assess misuse risks, with full enforcement beginning in 2026. (New York Times)
Nvidia can sell AI chip to China again after CEO meets Trump: After meeting President Trump, Nvidia CEO Jensen Huang secured U.S. approval to resume sales of its downgraded H20 AI chip in China, pending export licenses. This reverses a costly restriction imposed in April, while still barring the company’s top-end models from export. (Wall Street Journal)