We've launched GPT-4o in the API, our new flagship model that's as smart as GPT-4 Turbo and much more efficient. We're passing the model's efficiency gains on to developers, including:

  • 50% lower pricing. GPT-4o is 50% cheaper than GPT-4 Turbo, across both input tokens ($5 per 1 million tokens) and output tokens ($15 per 1 million tokens); see the cost sketch after this list.
  • 2x lower latency. GPT-4o is twice as fast as GPT-4 Turbo.
  • 5x higher rate limits. Over the coming weeks, GPT-4o's rate limits will ramp to 5x those of GPT-4 Turbo, up to 10 million tokens per minute for developers with high usage.
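
To make the new rates concrete, here is a minimal back-of-the-envelope cost sketch in Python; the helper name and the example token counts are illustrative, and the constants simply encode the prices listed above.

```python
# Published GPT-4o rates: $5 per 1M input tokens, $15 per 1M output tokens.
INPUT_PRICE_PER_TOKEN = 5.00 / 1_000_000
OUTPUT_PRICE_PER_TOKEN = 15.00 / 1_000_000

def estimate_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of a single GPT-4o request in USD."""
    return (input_tokens * INPUT_PRICE_PER_TOKEN
            + output_tokens * OUTPUT_PRICE_PER_TOKEN)

# Example: a request with 10,000 input tokens and 1,000 output tokens.
print(f"${estimate_cost_usd(10_000, 1_000):.4f}")  # -> $0.0650
```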

GPT-4o in the API currently supports text and vision. Compared to GPT-4 Turbo, it has stronger vision capabilities and improved support for non-English languages. It has a 128k-token context window and a knowledge cutoff of October 2023. We plan to launch support for GPT-4o's new audio and video capabilities in the API to a small group of trusted partners in the coming weeks.
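
As a quick illustration of the vision support, here is a minimal sketch using the official openai Python SDK; the image URL is a placeholder, and the client assumes OPENAI_API_KEY is set in your environment.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Send a text prompt alongside an image URL in a single user message.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},  # placeholder URL
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```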

We recommend that developers using GPT-4 or GPT-4 Turbo consider switching to GPT-4o. You can access GPT-4o in the Chat Completions API and Assistants API, or in the Batch API, where you get a 50% discount on batch jobs completed asynchronously within 24 hours.
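
For most applications, switching is a one-line model change. A minimal Chat Completions call with the openai Python SDK might look like this (again assuming OPENAI_API_KEY is set in your environment):

```python
from openai import OpenAI

client = OpenAI()

# Point an existing GPT-4 Turbo call at GPT-4o by changing the model name.
response = client.chat.completions.create(
    model="gpt-4o",  # previously "gpt-4-turbo"
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize GPT-4o's API pricing in one sentence."},
    ],
)
print(response.choices[0].message.content)
```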

To get started, test the model in the Playground, which now supports vision capabilities, and check out our API documentation. To learn how to use vision to input video content with GPT-4o today, check out the Introduction to GPT-4o cookbook. If you have questions, please reach out in the OpenAI developer forum.
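
As a taste of the cookbook's approach, the sketch below samples frames from a video with OpenCV and passes them to GPT-4o as base64-encoded images; the file path, sampling rate, and frame cap are all illustrative.

```python
import base64

import cv2  # pip install opencv-python
from openai import OpenAI

client = OpenAI()

# Decode the video and base64-encode each frame as a JPEG.
video = cv2.VideoCapture("path/to/video.mp4")  # placeholder path
frames = []
while True:
    success, frame = video.read()
    if not success:
        break
    _, buffer = cv2.imencode(".jpg", frame)
    frames.append(base64.b64encode(buffer).decode("utf-8"))
video.release()

# Send a handful of evenly spaced frames as image inputs.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what happens in this video."},
                *(
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{f}"},
                    }
                    for f in frames[::60][:10]  # roughly every 60th frame, max 10
                ),
            ],
        }
    ],
)
print(response.choices[0].message.content)
```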