On Tuesday evening, Micron Technology reported the most profitable quarter in the company’s 46-year history.
Revenue hit $23.86 billion, nearly triple what it made in the same quarter a year ago.
Earnings per share came in at $12.20, against Wall Street’s forecast of $9.31. Gross margins hit 74.9%. The company beat on every single metric that matters.
CEO Sanjay Mehrotra called it “record revenue and significant margin expansion across every business unit.” He wasn’t exaggerating.
The stock fell 5.8% the next morning.
Not because the results were bad. They were exceptional. The stock fell because Micron announced it was spending $25 billion on capital expenditure this year, up $5 billion from previous guidance, to keep up with demand it says isn’t slowing down.
And Wall Street looked at that number and blinked.
What Micron Actually Does, and Why It Suddenly Matters This Much
Memory chips have always been the unglamorous part of the semiconductor industry.
Commodity products. Thin margins. Violent boom-bust cycles driven by oversupply.
Three years ago Micron was losing money. A year ago it was recovering. Now it’s the only company among the ten most valuable US tech firms whose stock has gone up in 2026, up 63% since January, 357% over the past year.
The reason is simple and structural. Every Nvidia GPU that gets built into a data center needs memory, high-bandwidth memory (HBM), the kind that can feed data to AI chips fast enough to keep them running at capacity.
The more powerful the GPU, the more memory it requires. Nvidia’s latest Blackwell chips use significantly more HBM than the previous generation.
And Micron, along with Samsung and SK Hynix, is one of only three companies on earth that can make HBM at scale. Three. In the entire world.
So when the AI buildout accelerates, when Meta commits $65 billion in capex, when Microsoft signs another data center deal, when Google expands its TPU clusters, all of that compute needs memory.
And the memory market is tight. Really tight. Prices are rising. Supply is constrained. Contracts that used to be signed quarter-to-quarter are now being locked in for years.
Micron’s cloud memory revenue alone, the chunk that goes into data centers, rose 160% to $7.75 billion. That’s one segment. In one quarter.
The Paradox That Spooked Investors
Everything about Micron’s results should make investors happy. The numbers are enormous. Demand is real. The competitive moat, being one of three companies that can make the specific memory AI needs, doesn’t get much wider than that.
But $25 billion in capital spending is a serious number. Building semiconductor fabs is not like building software. It takes years. It costs billions before a single chip rolls off the line.
The question investors are quietly asking is: what if Micron builds all this capacity and AI spending slows? What happens to a company that just committed $25 billion to a boom that hasn't shown cracks yet, when booms always eventually do?
Memory has been here before. In 2021 and 2022, chip companies expanded aggressively to meet pandemic-era demand.
Then consumers stopped buying laptops and phones. Inventory piled up. Prices crashed. Micron lost $5.8 billion in fiscal 2023.
The muscle memory of that cycle is still fresh for anyone who’s been in semiconductors for more than three years.
AI demand feels different, more structural, driven by enterprise spending rather than consumer whims, but it’s not immune to pullbacks.
Microsoft paused some data center leases earlier this year. Meta’s capex commitments spooked its own investors when announced. The signal from the AI buildout is not unanimously bullish, even if Micron’s results this quarter were.
What Mehrotra Said That Matters Most
“As AI evolves, we expect compute architectures to become more memory-intensive,” Micron’s CEO said on the earnings call. He’s right, and it’s worth sitting with that for a second.
Every step forward in AI capability tends to require more memory, not less. Bigger models. Longer context windows. More agents running simultaneously.
Agentic AI workflows, where models take multi-step actions autonomously, are significantly more memory-intensive than simple chatbot interactions. If that trend holds, and there’s no structural reason to think it won’t, Micron isn’t chasing a wave. It’s positioned at the base of one.
Guidance for next quarter: $33.5 billion in revenue at the midpoint. Against this quarter’s $23.86 billion. That’s 40% sequential growth being forecast. In one quarter. If that lands anywhere close to target, the 5.8% stock drop this week will look like noise.
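The sequential-growth figure follows directly from the two revenue numbers in the piece; a quick sanity check (figures taken from the article, rounding mine):

```python
# Sanity-check Micron's forecast sequential growth from the article's figures.
this_quarter = 23.86   # reported revenue, $B
guide_mid = 33.5       # next-quarter guidance midpoint, $B

seq_growth = (guide_mid - this_quarter) / this_quarter
print(f"{seq_growth:.1%}")  # → 40.4%
```

That ~40% quarter-over-quarter jump is what the "40% sequential growth" claim refers to.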
The Bigger Picture
Micron’s results are a clean window into what the AI buildout actually looks like from the inside, not the model releases and the benchmark scores and the press conferences, but the physical, industrial, capital-intensive reality of building AI at scale.
It takes chips. Those chips take memory. That memory takes fabs. Those fabs take years and billions of dollars and a handful of companies that know how to build them.
Revenue almost tripled. The stock fell. The company is spending $25 billion to keep up with demand that shows no sign of easing. And the three companies that make the memory the entire AI industry depends on are already running flat out.
That’s not a bubble. That’s a supply chain scrambling to catch up with a transformation that isn’t waiting for it.
