ChatGPT is soon getting ‘eyes and ears’. Microsoft-backed OpenAI has launched GPT-4o, its latest flagship model, which the company says is much better than any existing model at understanding and discussing the images users share. According to OpenAI, GPT-4o delivers GPT-4-level intelligence at a much faster pace, with improved capabilities across text, voice, and vision.
Think of ChatGPT powered by the GPT-4o model as working conversationally, much like Google Assistant and Apple Siri, in a human-like voice, while retaining the prowess of the familiar ChatGPT that people use in their everyday lives.
ChatGPT with GPT-4o availability
OpenAI announced that it is rolling out GPT-4o to ChatGPT Plus and Team users, with availability for Enterprise users coming soon. Additionally, those on the free version of ChatGPT will also get to use the flagship AI model, though with limited access.
Plus subscribers will have a message limit up to five times greater than that of free users, and Team and Enterprise users will have even higher limits. Those interested can download the ChatGPT app on their Android smartphones and iPhones.
ChatGPT with GPT-4o features
GPT-4o essentially brings voice and vision capabilities to ChatGPT. Previously, ChatGPT conversed only in text; GPT-4o gives the AI chatbot the ability to speak in a natural way. For example, users can take a picture of a menu in a different language and ask GPT-4o to translate it. They can also learn about a dish’s history and significance, and even get recommendations.
“In the future, improvements will allow for more natural, real-time voice conversation and the ability to converse with ChatGPT via real-time video,” the company said.
During the live event, OpenAI CTO Mira Murati demonstrated how people can talk to the chatbot much as they do with Google Assistant and Apple Siri. She used “Hey, ChatGPT” to invoke the chatbot and asked a question in Italian, to which ChatGPT responded in English in real time.
OpenAI says it plans to launch a new Voice Mode with new capabilities in an alpha in the coming weeks. ChatGPT Plus users will get early access.
ChatGPT gets Indian language support
ChatGPT with GPT-4o is getting improved language capabilities in both quality and speed. ChatGPT now supports more than 50 languages across sign-up and login, user settings, and more, including several Indian languages: Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Punjabi, Tamil and Telugu.
Murati also demonstrated that, beyond supporting different languages, the AI chatbot can convey emotion and modulate its voice. It can perceive a user’s emotions from a live image through the phone’s camera and respond in different tones, such as a dramatic or robotic voice.
When using GPT-4o, ChatGPT Free users will now have access to features such as:
- GPT-4 level intelligence
- Get responses from both the model and the web
- Analyse data and create charts
- Chat about photos you take
- Upload files for assistance summarising, writing or analysing
- Discover and use GPTs and the GPT Store
- Build a more helpful experience with Memory
Source: ChatGPT maker OpenAI launches new AI model, GPT-4o: It has voice, vision and more – Times