OpenAI has been drip-feeding information about the future of its frontier AI models, and about whether the next one will be called GPT-5, GPT-5o, or something completely different.
The latest remarks from CTO Mira Murati suggest that within two years we’ll have something as intelligent as a professor. This would likely build on the GPT-4o technology announced earlier this year, with its native voice and vision capabilities.
“If you look at the trajectory of improvement, GPT-3 was maybe toddler level intelligence, systems like GPT-4 are smart high schooler intelligence and in the next couple of years we’re looking at PhD level intelligence for specific tasks,” she said during a talk at Dartmouth.
Some took this to suggest we’d be waiting two years for GPT-5. But looking at other OpenAI revelations, such as a graph showing ‘GPT-Next’ this year and ‘future models’ beyond it, and CEO Sam Altman’s refusal to mention GPT-5 in recent interviews, I’m not convinced.
The release of GPT-4o was a game changer for OpenAI: a model built from scratch to understand not just text and images but also native voice and vision. While the company hasn’t yet unleashed all of those capabilities, I think the power of GPT-4o has led to big changes.
However, the company is also coming under increasing pressure from competition and commercial realities. In recent tests, Anthropic’s Claude seems to be beating ChatGPT, and Meta is increasing its investment in building advanced AI.
ChatGPT: What can we expect from the next generation?
The previous-generation model, GPT-4, came out in March last year, followed by a few minor updates. Then GPT-4o launched earlier this year as a new type of truly multimodal model.
Since the success of ChatGPT, OpenAI has become both more cautious and more product focused, and Altman has recently begun talking about converting it into a for-profit company with the intention of working toward a public listing.
Apparently the focus is still on building Artificial General Intelligence, but Murati’s comment that in some areas AI is already as intelligent as humans seems to suggest a shift in definition, toward competence at specific tasks rather than broadly general systems.
How will OpenAI get to the next generation?
Murati says there is a simple formula for creating advanced AI models: take compute, data and deep learning and put them together. Scaling both data and compute leads to better AI systems, and this discovery will drive significant leaps going forward.
“We are building on decades and decades of human endeavour. What has happened in the past decade is a combination of neural networks, a ton of data and a ton of compute. You combine these three things and you get transformative systems that can do amazing things,” said Murati.
Murati said it isn’t currently clear how these systems actually work internally, only that they do, something the company has learned from building them over the years and watching improvements over time.
“It understands language at a similar level we can,” she said. “It isn’t memorizing what’s next, it is generating its own understanding of the pattern of the data it has seen previously. We also found it isn’t just language. It doesn’t care what data you put in there.”
Over the next couple of years, Murati says, we’ll get PhD-level intelligence for specific tasks, and we could even see some of this within the next 12 to 18 months. That means that within two years you could have a conversation with ChatGPT on a topic you know well and it would appear smarter than you, or your professor.
What happens when ChatGPT exceeds all human intelligence?
Murati says safety work around future AI models is vital. “We’re thinking a lot about this. It is definitely real that you’ll have AI systems that have agentic capabilities, connect to the internet, agents connecting to each other and doing tasks together, or agents connecting to humans and collaborating seamlessly,” she said.
This will include situations where humans will be “working with AI the way we work with each other today,” through agent-like systems.
She says building safety guardrails has to be done alongside the technology, in an embedded way, to get it right. “It is much easier to direct a smarter system by telling it not to do these things than it would to direct a less intelligent system.”
“Intelligence and safety go hand-in-hand,” Murati added. Safety matters at deployment, she said, but in terms of research, improving capability and improving safety advance together.
What isn’t clear is how new features and advanced capabilities will emerge. That uncertainty has required a new science of capability prediction, to assess how risky a new model might be and what can be done to mitigate those risks.
Source: ChatGPT could be smarter than your professor in the next 2 years