On Monday, OpenAI agreed to purchase $38 billion worth of capacity from Amazon Web Services. Amazon stated in a press release that OpenAI will utilize hundreds of thousands of Nvidia graphics processing units (GPUs) in the United States to run workloads on AWS infrastructure, with plans to expand that capacity in the coming years.
Amazon stated that the rapid advancement of AI technology has created an unprecedented demand for computing power. According to Amazon, the deal's initial phase will utilize existing AWS data centers, with additional infrastructure for OpenAI to be built out afterward.
In an interview, AWS Vice President of Compute and Machine Learning Services Dave Brown said, "It's entirely separate capacity that we're putting down," adding that OpenAI is already utilizing some of the capacity that is currently available.
OpenAI strengthens cloud ties beyond Microsoft
Amazon and OpenAI’s partnership comes less than a week after the ChatGPT developer altered its relationship with Microsoft. Microsoft, which has invested $13 billion in OpenAI since its first stake in 2019, long held an exclusive cloud relationship with the artificial intelligence (AI) startup. In January, Microsoft announced that it would no longer be OpenAI’s exclusive cloud provider, switching instead to an arrangement giving it the right of first refusal on new capacity requests.
Microsoft’s special standing under its recently agreed commercial terms with OpenAI ended last week, allowing the developer of ChatGPT to expand its partnerships with the other hyperscalers. The AI startup had already signed cloud agreements with Google and Oracle, but AWS leads the market by a wide margin.
Amazon stated in a press release that AWS’s infrastructure deployment for OpenAI features an advanced architectural design built for AI processing performance and efficiency. Clustering NVIDIA GPUs, including both GB200s and GB300s, on the same network using Amazon EC2 UltraServers enables low-latency performance across linked systems.
The clusters also enable the ChatGPT developer to run workloads effectively and at peak performance. The press release explained that the clusters are designed to handle a range of workloads, from training next-generation models to providing inference for ChatGPT with the ability to adapt to OpenAI’s evolving requirements.
“Scaling frontier AI requires massive, reliable compute. Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”
–Sam Altman, OpenAI CEO
Altman said that as OpenAI continues to expand the realm of what is feasible, its AI goals will be supported by AWS’s world-class infrastructure. He added that the scale and immediate availability of optimized compute illustrate why AWS is uniquely positioned to handle OpenAI’s extensive AI workloads.
Amazon exceeded analyst forecasts in its earnings release last week, announcing a more than 20% year-over-year increase in AWS sales. However, Microsoft and Google reported faster cloud growth of 40% and 34%, respectively.
OpenAI expands global AI infrastructure projects
Recently, OpenAI has been making numerous deals. The ChatGPT developer announced buildout agreements worth approximately $1.4 trillion with companies such as Nvidia, Broadcom, Oracle, and Google. OpenAI plans to construct 30 gigawatts of computing power, equivalent to the electricity used by nearly 22.5 million U.S. homes. The ChatGPT developer will also be completing a corporate reorganization that values the company at $500 billion and positions it for what could be Silicon Valley’s most significant IPO.
In January, President Trump, Oracle CEO Larry Ellison, and SoftBank’s Masayoshi Son announced OpenAI’s $500 billion data center plan, Stargate. Since then, the project for the AI data center has undergone significant growth.
In September, OpenAI announced five additional data center locations in the United States, bringing the total capacity to approximately 7 gigawatts. Over the next three years, the company plans to invest over $400 billion in U.S.-based data center and AI infrastructure projects.
Oracle delivered the first Nvidia GB200 racks in June, and OpenAI is already running early training workloads for next-generation research at the flagship site in Abilene, Texas. In addition to announcing Stargate Argentina in October, OpenAI is developing additional AI computing infrastructure sites in Shackelford County, Texas.