Under the near-monopoly that NVIDIA holds in the AI acceleration market, Amazon has unmistakably carved out a path of its own. According to CEO Andy Jassy, AWS’s in-house AI compute chip business built around Trainium has already reached multi-billion-dollar revenues, signaling strong market endorsement of Amazon’s strategy to drive down the cost of AI computation.
Jassy noted that the momentum behind the Trainium2 line is particularly strong. AWS’s official figures show that production has surpassed one million units, and more than 100,000 enterprises now rely on Trainium2—primarily through the Amazon Bedrock platform—to power their AI workloads.
Jassy was blunt: Trainium’s rise within AWS’s vast cloud customer base stems from its superior price-performance ratio compared with competing GPU offerings. In other words, by providing a solution that is cheaper than NVIDIA’s while delivering comparable—and, for certain workloads, even superior—performance, Amazon has become an attractive option for cost-sensitive enterprises. AWS CEO Matt Garman confirmed in an interview that the company’s strategic partner Anthropic has played a pivotal role.
Garman revealed that under the joint Project Rainier initiative, Anthropic deployed more than 500,000 Trainium2 chips to train and build the next generation of Claude models. This heavy commitment explains Amazon’s willingness to invest billions in Anthropic in exchange for Anthropic’s pledge to use AWS as its primary model-training platform. At AWS re:Invent, Amazon also unveiled Trainium3, boasting four times the performance of its predecessor along with improved energy efficiency.
Yet faced with the formidable moat NVIDIA has built around the CUDA software ecosystem, Amazon appears to be embracing a more flexible strategy. The upcoming Trainium4 is being designed to interoperate seamlessly with NVIDIA GPUs within the same system via NVLink Fusion, breaking down the traditional silo between proprietary chips and GPU ecosystems and offering customers a more versatile hybrid architecture.
This signals a shift away from a direct “replacement” strategy toward a hybrid compute model, allowing customers on AWS to combine the universality of NVIDIA’s platform with the cost advantages of Trainium.
Judging from its current revenue trajectory, Amazon’s in-house chip business is no longer an experiment: it is rapidly becoming a firmly established pillar of its cloud strategy.