OpenAI has announced a significant update to its popular ChatGPT platform. This update introduces GPT-4o, a powerful new AI model that brings “GPT-4-class chat” to the app. Chief Technology Officer Mira Murati and OpenAI employees showcased their newest flagship model, capable of real-time verbal conversations with a friendly AI chatbot that can speak like a human. OpenAI’s announcement comes just a day before Google’s annual developer conference, Google I/O, which starts on May 14.
“GPT-4o provides GPT-4 level intelligence but is much faster,” Murati said on stage. “We think GPT-4o is really shifting that paradigm into the future of collaboration, where this interaction becomes much more natural and far easier.”
What the ‘o’ in GPT-4o stands for
In GPT-4o, the “o” stands for “omni”: the model combines voice, text and vision into a single model, allowing it to be faster than its predecessor. The company said that the new model is twice as fast and significantly more efficient.
GPT-4o pricing and availability
OpenAI CTO Mira Murati announced the model in a livestream on Monday. It’ll be free for all users, and paid users will continue to “have up to five times the capacity limits” of free users, Murati added at the launch event in San Francisco. The update brings a number of features to free users that so far have been limited to those with a paid subscription to ChatGPT. These include the ability to search the web for answers to queries, speak to the chatbot and hear responses in various voices, and command it to store details that the chatbot can recall in the future. OpenAI is beginning to roll out GPT-4o’s new text and image capabilities to some paying ChatGPT Plus and Team users today, and will offer those capabilities to enterprise users soon. The company will make the new version of its “voice mode” assistant available to ChatGPT Plus users in the coming weeks.
GPT-4o’s key capabilities
GPT-4o goes beyond traditional text-based communication and can now “see” the world around it through vision capabilities. These capabilities include:
* Desktop screenshots: GPT-4o can analyse screenshots taken directly on your Mac.
* Mobile app integration: An iPhone app (with a Windows app coming soon) allows uploading videos and screenshots for GPT-4o to process.
How GPT-4o is “more human than ever”
The OpenAI demo mainly featured the company’s employees asking questions to the voiced ChatGPT, which responded with jokes and human-like banter.
* Real-time interaction: Unlike previous models, GPT-4o supports back-and-forth conversation; users can interrupt it without waiting for the model to finish speaking.
* Harmonised speech synthesis: GPT-4o can generate different voices and even harmonise them for a more natural dialogue experience.
* Sophisticated conversations: GPT-4o offers “normal” conversational interactions, translations, and more, all with the sophistication expected from GPT-4 level technology.
Source: ChatGPT users are getting GPT-4’o’ free: What are new features, availability and more