After confirming partnerships with NVIDIA, AMD, and Oracle, OpenAI has now announced a collaboration with Broadcom to develop its own custom AI accelerator chips and systems, with deployment expected to begin in the second half of 2026 and be completed by 2029.
The announcement confirms earlier market reports of a partnership between OpenAI and Broadcom. The custom AI accelerators developed with Broadcom’s assistance will power OpenAI’s core infrastructure as well as the data centers of its strategic partners.
According to OpenAI, the collaboration involves a multi-billion-dollar agreement covering 10 gigawatts of computing capacity. This aligns with earlier remarks from Broadcom CEO Hock Tan, who revealed that the company had received a $10 billion order from an unnamed client, now confirmed to be OpenAI.
Prior to this, OpenAI had entered into major agreements with NVIDIA and AMD. Its partnership with NVIDIA includes an investment of up to $100 billion tied to the delivery of 10 gigawatts of AI infrastructure, while its agreement with AMD covers 6 gigawatts of computing power and includes warrants that could ultimately give OpenAI roughly 10% of AMD’s equity in a multi-billion-dollar deal.
Additionally, in July, OpenAI signed a partnership with Oracle under the Stargate Project, securing access to 4.5 gigawatts of data center resources.
In a statement to employees, CEO Sam Altman revealed his ambition to build a total of 250 gigawatts of computing capacity within the next eight years, a dramatic leap from the roughly 2 gigawatts expected by the end of this year. An expansion on that scale could require up to $10 trillion in total investment, and Altman acknowledged that significant new funding will be essential to realize the vision.
Even with the financial backing of NVIDIA, Microsoft, and other key industry players, sustaining such colossal funding demands remains an immense challenge. Given that OpenAI’s projected annual revenue for this year is approximately $13 billion, the company will likely need to pursue additional financing channels to meet its ambitious goals.
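To put the scale of that gap in perspective, the back-of-envelope sketch below totals only the figures quoted in this article; the implied cost per gigawatt and the revenue multiple it prints are illustrative estimates derived from those public numbers, not figures OpenAI has disclosed.

```python
# Back-of-envelope sketch using only the figures quoted in this article.
# All values are rough public estimates, not OpenAI disclosures.

announced_deals_gw = {
    "Broadcom": 10.0,          # custom accelerators, deployment from H2 2026
    "NVIDIA": 10.0,            # tied to a reported investment of up to $100B
    "AMD": 6.0,
    "Oracle (Stargate)": 4.5,
}

target_gw = 250                # Altman's stated eight-year ambition
target_cost_usd = 10e12        # reported upper-bound estimate (~$10 trillion)
annual_revenue_usd = 13e9      # OpenAI's projected revenue for this year

announced_gw = sum(announced_deals_gw.values())
cost_per_gw = target_cost_usd / target_gw

print(f"Capacity announced so far: {announced_gw:.1f} GW of a {target_gw} GW target")
print(f"Implied cost per gigawatt: ~${cost_per_gw / 1e9:.0f} billion")
print(f"Estimated build-out cost vs. this year's revenue: ~{target_cost_usd / annual_revenue_usd:.0f}x")
```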
OpenAI’s move toward developing its own AI accelerator chips and systems also signals a strategic effort to reduce dependence on external suppliers such as NVIDIA, while aligning its hardware more closely with its own AI computational demands. In the long term, OpenAI could commercialize these custom computing systems, opening up new revenue streams.
Nevertheless, whether the broader expansion of AI applications can truly generate the sustained market growth needed to support such massive infrastructure investments remains uncertain. With mounting competition from Google, Meta, and other major players, OpenAI now faces the dual challenge of transforming its AI technologies into substantial revenue drivers while simultaneously financing the enormous costs of building the next generation of computational power.