When Sriram Krishnan, a senior White House policy advisor on artificial intelligence, appeared onstage at an event in Washington last month, he listed the Trump administration’s priorities for advancing the AI revolution.
At the top of the list? More construction.
“Let’s make sure we build our infrastructure,” Krishnan said. “‘Build, baby, build’ is what we tell people.”
That rallying cry is echoing across Silicon Valley. Executives at Meta say they expect to spend $600 billion on AI infrastructure, including massive data centers, through 2028. OpenAI and Oracle have announced plans to put $500 billion into a data center project dubbed Stargate, while Amazon plans to spend more than $30 billion on capital expenditures, or capex, in each of the next two quarters.
The problem: The business case for AI remains untested, and it’s unclear whether the revenue from AI products will justify the ever-growing spending. If it does, it could push the economy onto a higher growth curve and transform entire industries. If it doesn’t, the fallout could reshape the economy — from stock market crashes to communities left with vast, vacant data centers.
Earlier this year, Business Insider published an investigation into the data center industry, creating the most comprehensive map to date of where data centers are in the US. The investigation found 1,240 data centers in America already built or approved for construction at the end of 2024, an almost fourfold increase since 2010. The data didn’t include any projects that received permits this year.
This year, four of the five biggest energy users in our tally (Amazon, Meta, Microsoft, and Google) could spend an estimated $320 billion on capex, primarily for AI infrastructure, according to a Business Insider analysis of their financial statements. That's more than the GDP of Finland and just shy of the total revenue ExxonMobil earned in 2024.
The scale of the investment is sparking concerns about a bubble and the potential for a pop that could bring the stock market crashing down from record heights.
So far, the concerns haven't spooked investors. The tech-heavy Nasdaq is up 19% this year, with Nvidia (the world's largest company by market capitalization), Google, and Microsoft each up more than 25%. Oracle's stock has risen 75% this year.
The money spent this year on AI infrastructure and software has contributed more to GDP growth than consumer spending, according to Renaissance Macro Research’s reading of Bureau of Economic Analysis data.
It’s a construction boom unlike any in living memory, transforming landscapes, drawing on scarce water resources, and taxing America’s already strained electricity grid. The sheer ambition has drawn comparisons to the Apollo space program, the interstate highway system of the mid-1900s, and the fiber-optic bust around the turn of this century that left miles of “dark fiber.”
Sarah Friar, OpenAI's finance chief, underscored the size of the industry's goals when she spoke on the sidelines of a Goldman Sachs conference in September. AI spending, she said, wasn't like the completed 19th-century railroad system; it was like its early days.
“A lot of people have compared the AI era to things like the railway build-out, because it is a very capital-intensive build-out,” Friar told Yahoo Finance. “I think we are just beginning. We’ve maybe laid some track from New York to Baltimore, but we’re ultimately going to blanket the US, and ultimately blanket the world.”
If the bet pays off, the returns could make the money sunk into data centers seem small. The data-center boom would become the foundation of the next tech cycle, letting Amazon, Microsoft, Google, and others rent out intelligence the way they rent cloud storage now. AI agents and custom models could form the basis of steady, high-margin subscription products.
If it doesn't pay off? Look no further than the railroads. Overinvestment in rail lines, including the Transcontinental Railroad, sparked not one but two banking crises in the latter half of the 1800s. The industry overbuilt and overspent, eventually leaving investors and the banks that had backed them with heavy losses.
Despite the investment, the business case for AI is still untested.
The AI infrastructure boom is built on a fundamental assumption about how machines learn best. The approach involves feeding massive amounts of data into large language models, which then look for patterns, or signals, buried in the noise. To improve, the theory goes, the models must ingest and analyze increasing amounts of data, which requires more and more computing power.
That need for scale has shaped the competitive landscape. If better models depend on more computing power, the obvious next step is to secure as much of it as possible. And that means more data centers — large warehouses filled with graphics processing units, or GPUs, that handle the complex computations required.
In a recent blog post, OpenAI CEO Sam Altman imagined what might be possible if AI systems had virtually unlimited computing power. AI could cure cancer or give every student on Earth a personalized tutor. “If we are limited by compute,” he wrote, “we’ll have to choose which one to prioritize; no one wants to make that choice, so let’s go build.”
Gary Marcus, a New York University professor emeritus of psychology and neural science and a vocal critic of Silicon Valley's approach to developing AI, has pushed back against the premise that bigger is better. He has repeatedly argued that the fundamental theory driving Big Tech companies to build computing capacity, known as the scaling laws, is wrong.
"The meaning of the word scaling has been greatly devalued," Marcus wrote. "'Scaling' used to mean 'I can predict pretty much exactly how well a given LLM will do, by knowing the size of its database and the amount of compute being used to train it, and those increases will be exponential.'" Not anymore, he wrote.
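The predictive claim Marcus is describing has a concrete published form. One well-known example, sketched below, is the fit from Hoffmann et al.'s 2022 "Chinchilla" paper, in which predicted loss falls smoothly as model parameters and training data grow. The constants are that paper's reported values, included here only to illustrate what a scaling law predicts; they are not part of this article's reporting.

```python
def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Predicted training loss under the Hoffmann et al. (2022) fit.

    E, A, B and the exponents are the paper's reported constants;
    treat them as illustrative rather than authoritative.
    """
    E, A, B = 1.69, 406.4, 410.7
    alpha, beta = 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# Bigger models trained on more data are predicted to reach lower loss,
# but with diminishing returns as both terms shrink toward the floor E.
small = chinchilla_loss(1e9, 20e9)     # ~1B parameters, 20B tokens
large = chinchilla_loss(70e9, 1.4e12)  # ~70B parameters, 1.4T tokens
assert large < small
```

The floor term is the crux of the debate: under this kind of fit, each added dollar of compute buys a smaller loss improvement than the last.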
OpenAI's August launch of GPT-5, its latest model, lent weight to Marcus' skepticism: many perceived the model as an incremental step and a challenge to the idea that adding more computing power makes better models. The leading language models still make basic errors, known as hallucinations, that hurt their reliability. The problem has persisted even as companies have poured exponentially more computing power into training them, according to Bain & Company.
Several studies have concluded that big-spending corporations have yet to show meaningful results from AI. Researchers at the Massachusetts Institute of Technology published a report this summer suggesting that 95% of early corporate AI initiatives had yet to deliver a return. Tech stocks briefly slumped, partly in response to the report.
Researchers at BetterUp Labs and the Stanford Social Media Lab coined the term "workslop" to describe the substandard AI-assisted work that more and more employees are producing. Because the models still lack accuracy and nuance, the task of fact-checking and cleaning up the output falls to colleagues. Writing in the Harvard Business Review, the researchers said 40% of the 1,150 workers they surveyed reported having received "workslop" from colleagues over the past month.
With AI's utility still in question, it's not clear when, or whether, the revenue from AI products will justify the spending. Bain estimates that meeting the industry's computing needs by 2030 will require $500 billion in annual capex. To justify that, companies would need to generate $2 trillion in annual revenue, Bain said. That's about $800 billion more than companies can save by using AI to make sales, marketing, customer support, and R&D more efficient. In other words, the consulting firm concluded, the industry will come up short unless it finds new products or services to sell.
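Bain's arithmetic can be laid out directly. The sketch below uses only the figures reported above; the $1.2 trillion savings number is the value those figures imply, not a separately reported statistic.

```python
# Back-of-the-envelope version of Bain's math, from the figures above:
# $500B in annual capex by 2030, $2T in annual revenue needed to justify
# it, and an $800B shortfall beyond what efficiency savings can cover.
annual_capex = 500e9
revenue_needed = 2_000e9
shortfall = 800e9

# Savings implied from AI-driven efficiency in sales, marketing,
# customer support, and R&D: the revenue target minus the shortfall.
implied_savings = revenue_needed - shortfall

print(f"Implied efficiency savings: ${implied_savings / 1e12:.1f} trillion")
# -> Implied efficiency savings: $1.2 trillion
```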
In September, Friar, OpenAI's CFO, said the company will triple revenue this year to about $13 billion. That same month, the company agreed to pay Oracle an average of $60 billion a year for data center capacity — nearly five times what it expects to bring in.
McKinsey & Company acknowledged, in an April report about capex spending, the difficult position the industry is in: Spend too little and risk missing out on a technology that may be one of the most transformative in history. Spend too much and risk wasting tens or hundreds of billions of dollars.
“The stakes are high,” McKinsey wrote in the report. “Overinvesting in data center infrastructure risks stranding assets, while underinvesting means falling behind.”
Meta CEO Mark Zuckerberg put it more bluntly when he acknowledged how much money he was prepared to waste to win the AI arms race. "If we end up misspending a couple of hundred billion dollars, I think that that is going to be very unfortunate," Zuckerberg told a podcast interviewer. At the same time, he underscored the risk of being left behind if AI adoption takes off.
“I actually think the risk is higher on the other side,” he said.
A quarter of a century ago, a similar pattern emerged when firms rushed to lay the backbone of the emerging internet. Companies like WorldCom and Global Crossing spent tens of billions of dollars in the five years through 2001 laying fiber optic cables and installing other networking equipment, only to crash the following year when the dot-com bust led the lenders who had financed the construction to demand their money back.
Shareholders in the companies leading the fiber optic build-out lost $2 trillion in value, and 500,000 workers lost their jobs, according to an analysis by the Brookings Institution. While the fiber optic cable was eventually put to use, largely by the streaming video revolution Netflix pioneered, many of the companies that laid it weren't around to see it.
Like the fiber optic cables and railroads of earlier eras, data center construction and GPU purchases are being financed by Wall Street.
Some of the money is coming through familiar channels. Oracle recently sold $18 billion worth of bonds to finance its data center expansion plans, while data center upstart CoreWeave, which went public in March, has tapped the public debt and equity markets to the tune of $25 billion since last year to fund its own expansion.
Increasingly, companies are also turning to less traditional lenders. Meta recently raised $29 billion for its latest data center project from firms including the Pacific Investment Management Company and Blue Owl Capital. The deal allowed Meta to raise billions without having to report the full amount on its balance sheet.
When Meta CFO Susan Li was asked about the company’s plans for non-traditional financing methods at a recent conference, she said the company had built its own data centers for most of its history. Now, she said, as “the ambition of our infrastructure capacity unfolds ahead of us, it kind of dwarfs what we’ve built before, and we need to be sort of more expansive in the way that we are thinking about this.”
Other data center developers are using the securitized bond markets to keep the boom going. Once a project is built and leased, they turn the leasing revenue into bonds and sell them to investors eager to be part of the AI boom. That frees up more money for future projects and fuels the spending, even if demand for mature data center projects starts to wane, said Paul Kedrosky, a financial blogger and advisor to hedge funds who recently wrote about the practice.
“Developers build data centers, lease access to hyperscalers, and then package the rental income into bonds. Proceeds are recycled into the next round of construction,” Kedrosky wrote on his website. “In short: the structured-credit world, not the real estate equity one, now finances the data-center boom.”
Increasingly, Silicon Valley itself is helping to finance the boom. In September, Nvidia agreed to invest $100 billion in OpenAI to help with its data center build-out, with the expectation that OpenAI would turn around and use the money to purchase Nvidia's GPUs for its data centers. Microsoft invested in OpenAI early, and OpenAI used the money to rent Microsoft's computing capacity. On Monday, chipmaker AMD announced that OpenAI had agreed to buy GPUs to power as many as 6 gigawatts of computing capacity in return for a warrant that would give OpenAI a 10% stake in the chipmaker if certain milestones are hit.
“This partnership is a major step in building the compute capacity needed to realize AI’s full potential,” Altman said in the statement announcing the deal.
The circular flow of money is yet another element of the AI infrastructure bet that underscores its scale, as well as its risks. If the hype around AI proves fleeting — if more computing power doesn’t equate to better models, or the industry can’t find more ways to make more money — the interlocking investments could magnify the fallout.
Even proponents of AI say the math is daunting.
“We are bullish on this in a big way,” said David Crawford, the chairman of Bain’s technology practice. “The point we were trying to make is that it’s expensive.”