
For years, opposition to artificial intelligence existed mostly in think pieces, academic papers, and activist petitions. That era is ending.
Last week, a 20-year-old from Texas threw a lit Molotov cocktail at the driveway gate of OpenAI CEO Sam Altman’s San Francisco home, then walked to OpenAI’s offices and allegedly tried to smash his way in with a chair.
Days later, a gun was fired near the same property. The AI industry, which has spent the better part of three years asking the public to trust it with the future, is now facing something it did not fully price in: a public that has stopped trusting it, and a small but growing number of people prepared to act on that mistrust in ways that go well beyond social media.
The violence is extreme and does not represent broader sentiment. But polling data cited by CNBC this week confirms the underlying trend: public approval of AI technologies in the United States has fallen significantly, driven by concerns about energy consumption, job displacement, surveillance, and AI's role in spreading misinformation.
The timing could not be worse for the two most valuable AI companies on the planet, both of which are preparing to go public.
The IPO Problem
OpenAI is targeting a public listing as early as Q4 2026, with internal projections pointing to a valuation of up to $1 trillion.
Anthropic, valued at approximately $380 billion in recent secondary market activity, is also weighing a listing in the same window. Both companies have explicitly said they want retail investors, ordinary people, to be part of their IPOs.
OpenAI CFO Sarah Friar has described the ambition in terms of consumer brand ownership: “Everybody wants to own part of a rocket company. I hope everyone wants to own part of ChatGPT.”
The problem is that the consumer sentiment data does not currently support that framing. A broad-based anti-AI sentiment has been building for years, and it has now reached the point where 65% of Americans in recent polling say they oppose having new data centres built near them.
At least $156 billion in data centre projects were blocked or delayed in 2025 alone due to local opposition and litigation, according to Data Center Watch.
The infrastructure that makes the AI business model work is becoming politically toxic at the local level, even in communities that voted for politicians broadly supportive of the technology industry.
What Built This Anger
The backlash did not emerge from nowhere. As analyst Brian Merchant has argued, the AI industry spent years telling the public, and its investors, that artificial general intelligence was coming fast, that it would be dangerous if not properly aligned, and that it would require rewriting the social contract.
The industry cannot now be surprised that some people took those warnings literally. When the same executives who warned of existential risk are seen enriching themselves at historic speed while job displacement accelerates and electricity bills rise, the gap between what was promised and what is being delivered generates a particular kind of anger.
The concerns are not confined to the extreme end. Energy consumption is a genuine and growing grievance. US data centres already consume more than 4% of the country’s total electricity, a figure projected to double by 2030.
The bills are being passed to households at a time when energy costs have already risen more than 40% since 2021. AI’s role in surveillance, deepfakes, and automated hiring decisions has generated lawsuits and legislative pressure across multiple states.
And the high-profile deaths of teenagers linked to AI companion chatbots have produced exactly the kind of human-interest stories that shift public opinion in ways that abstract arguments about productivity gains cannot counter.
The Industry’s Internal Fractures
The backlash is also exposing divisions within the industry itself. OpenAI this month sent a memo to investors criticising Anthropic for “operating on a meaningfully smaller compute curve,” framing the safety-conscious rival as insufficiently ambitious. Anthropic, meanwhile, announced it was withholding its latest model, internally called Mythos, because of concerns about the damage it could do to the cybersecurity industry if released without adequate safeguards.
That decision, which cost Anthropic short-term revenue and enterprise contracts, was either an act of genuine responsibility or an overabundance of caution, depending on whom in the industry you ask.
OpenAI’s global policy chief Chris Lehane has been more direct about where he thinks the problem lies. Speaking to The Standard this week, he said the conversation around AI had gotten “out of hand,” that some of the ideas circulating were “not necessarily responsible,” and that when those ideas are put into the world, “they do have consequences.” He called it “really serious shit.”
The statement was aimed at AI doomers. It landed in a week when the consequences he was describing had already manifested in a Molotov cocktail outside his CEO’s house.
What It Means for the Road Ahead
The AI industry’s bet has always been that the products would eventually win the public over, that the usefulness of the tools would override the abstract fears. That bet is not yet lost.
ChatGPT has hundreds of millions of users. Enterprise adoption of AI coding and productivity tools is accelerating. The revenue is real and growing.
But the window between now and the planned IPOs is narrowing, and the environment into which those listings will land is materially more hostile than anyone in the industry was projecting two years ago.
A retail investor base that is skeptical of AI is not the same as an institutional investor base betting on the technology’s long-term dominance. The companies targeting trillion-dollar valuations need both.
Right now, investor surveys show that 40% of potential retail shareholders are hesitant to buy in, citing concerns about public sentiment. That number will need to move significantly before the IPO window opens. Whether the products can shift it faster than the backlash can deepen it is the defining question of the next six months in tech.