
In April 2025, OpenAI unveiled its reasoning model, o3, which achieved top-tier results across a wide range of benchmark tests. The model also supports integration with external tools, such as web browsing and a Python interpreter.
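For context, here is a minimal sketch of what calling o3 with a built-in tool might look like through the official OpenAI Python SDK's Responses API. The model name, tool type, and prompt are assumptions for illustration; check OpenAI's current documentation for the exact options available to your account.

```python
# Minimal sketch: calling o3 with the built-in web search tool via the
# OpenAI Python SDK's Responses API. Assumes OPENAI_API_KEY is set in the
# environment and that the account has access to the o3 model.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="o3",
    tools=[{"type": "web_search_preview"}],  # allow the model to browse the web if needed
    input="Summarize the latest changes to OpenAI's API pricing in two sentences.",
)

print(response.output_text)  # convenience accessor for the final text output
```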
Although its performance was commendable, the API pricing was notably steep at launch: $10 per million input tokens and $40 per million output tokens, substantially higher than competing AI models.
However, OpenAI has since reduced o3's pricing significantly. The cut follows a strategic collaboration with Google Cloud Platform (GCP) and an expansion of infrastructure capacity, and this partnership likely contributed to lower operational costs.
The revised pricing for the OpenAI o3 model is as follows:
- Input tokens: reduced from $10 to $2 per million
- Output tokens: reduced from $40 to $8 per million
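To put the cut in concrete terms, the short sketch below estimates the cost of a single request under the old and new rates. The per-million rates come from the figures above; the token counts are hypothetical example values chosen purely for illustration.

```python
# Rough cost comparison for the o3 price cut.
# Per-million-token rates are taken from the article; the token counts
# below are hypothetical example values.

OLD_RATES = {"input": 10.00, "output": 40.00}  # USD per 1M tokens (launch pricing)
NEW_RATES = {"input": 2.00, "output": 8.00}    # USD per 1M tokens (revised pricing)


def request_cost(input_tokens: int, output_tokens: int, rates: dict) -> float:
    """Return the USD cost of one request at the given per-million-token rates."""
    return (input_tokens / 1_000_000) * rates["input"] + (
        output_tokens / 1_000_000
    ) * rates["output"]


# Example: a request with 5,000 input tokens and 2,000 output tokens.
old = request_cost(5_000, 2_000, OLD_RATES)  # $0.05 + $0.08  = $0.130
new = request_cost(5_000, 2_000, NEW_RATES)  # $0.01 + $0.016 = $0.026
print(f"old: ${old:.3f}, new: ${new:.3f}, reduction: {1 - new / old:.0%}")  # 80% cheaper
```

At these rates the same request costs one-fifth of what it did at launch, an 80% reduction for both input and output tokens.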
This substantial price adjustment renders the o3 model far more appealing to developers. Considering its impressive performance and now significantly reduced pricing, o3 presents a compelling option, likely intensifying competitive pressure on rivals such as Google’s Gemini and Anthropic’s Claude. Whether these companies will respond with price cuts of their own remains to be seen.
Additionally, OpenAI introduced the o3-pro model today. For now, it is available exclusively to ChatGPT Pro and Team subscribers under a limited quota, with API access for o3-pro expected to follow later.