OpenAI Adds 3GW Compute Capacity in Last 90 Days Alone

OpenAI has exceeded its initial commitment of ten gigawatts of AI infrastructure by 2029, surpassing that milestone just over a year after announcing the plan in January 2025. The company reports adding more than three gigawatts of compute capacity in the last 90 days, a substantial build-out that demonstrates the current intensity of the AI race and suggests earlier demand estimates fell well short of actual need. This expansion is part of “Stargate,” OpenAI’s long-term effort to establish the compute foundation for artificial general intelligence. “The only responsible way to meet this demand is to build more compute, faster,” OpenAI stated, emphasizing that increased capacity is essential for training better models, lowering costs, and delivering the benefits of AI to a wider audience.
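To put the reported pace in perspective, a quick back-of-the-envelope calculation converts the headline figures into an average daily build rate. The numbers come straight from the article; the even spread over 90 days is a simplifying assumption, since real capacity comes online in lumps as sites are energized.

```python
# Rough scale check on the reported build-out pace.
# Figures are from the article; assuming an even spread is illustrative only.
added_gw = 3.0   # compute capacity reportedly added
days = 90        # reporting window

avg_mw_per_day = added_gw * 1000 / days
print(f"Average build rate: {avg_mw_per_day:.1f} MW/day")
```

That works out to roughly 33 MW of new capacity per day, on the order of a mid-sized power plant's worth of load added every few weeks.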

Stargate Initiative Surpasses 10GW AI Compute Capacity

OpenAI has dramatically exceeded expectations for its artificial intelligence infrastructure build-out, surpassing the ten gigawatt milestone it initially projected for 2029, just over a year after the January 2025 announcement. This rapid expansion is anchored by “Stargate,” the company’s long-term initiative focused on establishing a robust compute foundation for advanced AI development and deployment. The company’s strategy centers on scaling compute resources to fuel increasingly sophisticated AI models. OpenAI asserts that increased capacity allows for better model training, improved performance, and reduced costs over time, creating a positive feedback loop where better models drive increased usage, revenue, and further investment in infrastructure. OpenAI emphasizes a partner-centric approach, recognizing that no single entity can build the necessary infrastructure alone, and is collaborating with utilities, chipmakers, and construction firms.
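The feedback loop OpenAI describes — more compute enables better models, which drive usage and revenue, which fund more compute — is a compounding process. The toy model below sketches that dynamic; every coefficient is an invented illustrative assumption, not an OpenAI figure.

```python
# Toy model of the compute flywheel: compute -> usage -> revenue -> reinvestment.
# All coefficients are illustrative assumptions, not OpenAI figures.
compute = 1.0            # arbitrary compute units
usage_per_compute = 2.0  # usage generated per unit of compute
revenue_per_usage = 0.5  # revenue per unit of usage
reinvest_rate = 0.4      # fraction of revenue converted back into compute

for year in range(1, 4):
    usage = compute * usage_per_compute
    revenue = usage * revenue_per_usage
    compute += revenue * reinvest_rate  # reinvestment grows the compute base
    print(f"year {year}: compute={compute:.2f}, revenue={revenue:.2f}")
```

With these particular assumptions, compute grows by a fixed 40% each cycle — the point being only that reinvestment makes growth multiplicative rather than additive.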

Beyond simply increasing capacity, OpenAI is actively evaluating new data center locations across the United States, with a focus on responsible development. “AI infrastructure should create clear local benefits,” the company states, outlining commitments to job creation, local revenue, and responsible resource management. The Abilene, Texas site exemplifies this approach, utilizing closed-loop cooling to minimize water consumption; the initial fill required the equivalent of two Olympic-sized swimming pools, with annual usage comparable to a medium-sized office building. OpenAI’s latest model, GPT‑5.5, was trained at this Abilene facility, demonstrating a direct link between infrastructure investment and AI capability.
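The "two Olympic-sized swimming pools" comparison can be made concrete. A nominal Olympic pool holds about 2,500 cubic meters of water (a standard figure); the pool count is from the article, and the conversion below is a rough estimate rather than a reported measurement.

```python
# Rough size of the one-time closed-loop fill described above.
# Olympic pool volume (~2,500 m^3) is a standard nominal figure;
# the "two pools" count comes from the article.
pool_m3 = 2500                    # nominal Olympic pool volume, cubic meters
fill_liters = 2 * pool_m3 * 1000  # two pools, converted to liters
fill_gallons = fill_liters / 3.78541  # liters per US gallon
print(f"Initial fill: ~{fill_liters/1e6:.1f}M liters (~{fill_gallons/1e6:.2f}M US gallons)")
```

That is about 5 million liters — a one-time fill, which is the point of the closed-loop design: after that, ongoing consumption stays at office-building scale.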

Partner-Centric Ecosystem Drives Infrastructure Expansion

OpenAI’s ambitious infrastructure build-out, dubbed “Stargate,” is increasingly reliant on collaborative partnerships to meet rapidly escalating demand for artificial intelligence compute. This expansion isn’t solely about scale; OpenAI is actively cultivating an ecosystem encompassing utilities, chipmakers, construction firms, and even skilled trades unions. The strategy acknowledges that “no single company can build the infrastructure for the Intelligence Age alone,” and emphasizes shared success. A recent donation to the Port Washington-Saukville Education Foundation in Wisconsin, alongside Vantage Data Centers and Oracle, exemplifies this commitment to community investment. This approach extends to workforce development, partnering with North America’s Building Trades Unions to create pathways for skilled workers into the emerging AI economy. The company believes that “building with communities” is essential, and that the benefits of the Intelligence Age should be shared by all.

Abilene, Texas: Model for Responsible Data Center Development

OpenAI’s rapid expansion of artificial intelligence infrastructure is increasingly exemplified by its approach in Abilene, Texas, where the company is demonstrating a commitment to responsible data center development alongside significant compute gains. This isn’t simply about scaling capacity; it’s about building an ecosystem that benefits local communities. The Abilene facility serves as a blueprint for this philosophy, prioritizing water-efficient resource management and community engagement. This attention to detail extends beyond water, encompassing a broader vision for local economic development. Crucially, the latest model, GPT‑5.5, was trained at the Abilene site, operating on Oracle Cloud Infrastructure and utilizing NVIDIA GB200 systems, demonstrating that responsible infrastructure directly translates into more capable AI systems and wider access to their benefits.

GPT-5.5 Training & Closing the AI Capability Gap

The arrival of increasingly sophisticated artificial intelligence is now directly linked to tangible infrastructure development, with OpenAI’s latest model, GPT‑5.5, demonstrably benefiting from a massive expansion of compute capacity. Trained at the company’s Stargate facility in Abilene, Texas, GPT‑5.5 represents a significant step towards bridging what OpenAI terms the “capability gap,” the disparity between those who effectively utilize AI and those who do not. This isn’t simply about algorithmic advancement; it’s about the physical resources underpinning that advancement.

This rapid build-out underscores the intensity of the current AI race and suggests earlier demand estimates fell well short of actual need. The company frames this expansion as “Stargate,” a long-term effort to establish a robust compute foundation for broadly delivering the benefits of artificial general intelligence. “More compute enables better models, better models drive more usage, more usage improves products and revenue, and that allows us to reinvest in more infrastructure,” explains OpenAI, outlining a self-reinforcing cycle of improvement. “From infrastructure to intelligence,” the company asserts, highlighting the direct correlation between physical capacity and AI capabilities, and emphasizing that the communities involved in building this infrastructure “should share in the upside.”

“Compute is the critical input that makes advanced AI possible. It is what allows us to train better models, serve them reliably, improve performance, lower costs over time, and bring more powerful tools to more people.”

The Neuron

With a keen intuition for emerging technologies, The Neuron brings over 5 years of deep expertise to the AI conversation. Coming from roots in software engineering, they've witnessed firsthand the transformation from traditional computing paradigms to today's ML-powered landscape. Their hands-on experience implementing neural networks and deep learning systems for Fortune 500 companies has provided unique insights that few tech writers possess. From developing recommendation engines that drive billions in revenue to optimizing computer vision systems for manufacturing giants, The Neuron doesn't just write about machine learning — they've shaped its real-world applications across industries. Having built real systems used worldwide by millions of users, they draw on that deep technical base to write about current and future technologies, whether AI or quantum computing.