Sam Altman-led OpenAI has launched a new cost-efficient small AI model. Dubbed GPT-4o mini, it is 60% cheaper than GPT-3.5 Turbo and is priced at 15 cents per million input tokens and 60 cents per million output tokens.
GPT-4o mini features
In a press release, the Microsoft-backed company said that the new GPT-4o mini outperforms GPT-4 on chat preferences on the LMSYS leaderboard. It scored 82% on the Massive Multitask Language Understanding (MMLU) benchmark, OpenAI said.
OpenAI’s GPT-4o mini enables a broad range of tasks: applications that chain or parallelize multiple model calls (e.g., calling multiple APIs), pass a large volume of context to the model (e.g., a full code base or conversation history), or interact with customers through fast, real-time text responses (e.g., customer support chatbots).
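For developers, a task like parallelizing multiple model calls maps directly onto the Chat Completions API. The sketch below is illustrative only: it assumes the official openai Python SDK (v1+), an OPENAI_API_KEY environment variable, and hypothetical prompts.

```python
# Illustrative sketch: fan out several GPT-4o mini calls in parallel with asyncio.
# Assumes the official `openai` Python SDK (v1+) and OPENAI_API_KEY set in the environment.
import asyncio

from openai import AsyncOpenAI

client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment


async def ask(prompt: str) -> str:
    """Send one prompt to GPT-4o mini and return the text of the reply."""
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


async def main() -> None:
    # Hypothetical prompts; in practice these might be per-customer support queries.
    prompts = [
        "Summarise this ticket: my order arrived damaged.",
        "Summarise this ticket: I was charged twice for one item.",
    ]
    answers = await asyncio.gather(*(ask(p) for p in prompts))
    for prompt, answer in zip(prompts, answers):
        print(prompt, "->", answer)


if __name__ == "__main__":
    asyncio.run(main())
```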
Currently, GPT-4o mini supports text and vision in the API. The company plans to add support for text, image, video and audio inputs and outputs in the future.
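Because the model already accepts image inputs in the API, a vision request looks much like a text one, with the image passed by URL in the message content. The snippet below is a minimal sketch under the same SDK assumption as above; the image URL is a placeholder.

```python
# Illustrative sketch: send an image URL to GPT-4o mini for description.
# Assumes the official `openai` Python SDK (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is shown in this image."},
                # Placeholder URL; replace with a real, publicly reachable image.
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```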
According to OpenAI, GPT-4o mini scored 87.0% on math reasoning, compared with 75.5% for Gemini Flash and 71.7% for Claude Haiku. Similarly, GPT-4o mini scored 87.2% on HumanEval, which measures coding performance, compared with 71.5% for Gemini Flash and 75.9% for Claude Haiku.
Availability and pricing
GPT-4o mini is available as a text and vision model in the Assistants API, Chat Completions API, and Batch API. Developers pay 15 cents per 1M input tokens and 60 cents per 1M output tokens (a million tokens is roughly the equivalent of 2,500 pages in a standard book).
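At those rates, a rough per-request cost estimate is simple arithmetic on the token counts the API reports. The sketch below assumes only the prices quoted above; the example token counts are hypothetical.

```python
# Back-of-the-envelope cost estimate at GPT-4o mini's quoted rates:
# $0.15 per 1M input tokens, $0.60 per 1M output tokens.
INPUT_PRICE_PER_TOKEN = 0.15 / 1_000_000
OUTPUT_PRICE_PER_TOKEN = 0.60 / 1_000_000


def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in US dollars for one request."""
    return input_tokens * INPUT_PRICE_PER_TOKEN + output_tokens * OUTPUT_PRICE_PER_TOKEN


# Hypothetical example: a 2,000-token prompt with a 500-token reply
# costs about $0.0006.
print(f"${estimate_cost(2_000, 500):.6f}")
```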
In ChatGPT, Free, Plus and Team users will be able to access GPT-4o mini starting today, in place of GPT-3.5. Enterprise users will also have access starting next week.
Safety measures in GPT-4o mini
OpenAI says GPT-4o mini has the same built-in safety mitigations as GPT-4o. More than 70 external experts in fields like social psychology and misinformation tested GPT-4o to identify potential risks. Insights from these expert evaluations, the company says, have helped improve the safety of both GPT-4o and GPT-4o mini.