OpenAI is reportedly working on a cutting-edge approach to its artificial intelligence models in a project code-named "Strawberry", which is claimed to give the models advanced, human-like reasoning capabilities.
The Strawberry project was earlier known as Q*, which was widely reported on in 2023 and seen as a breakthrough in AI development. The project has now generated renewed buzz, surrounded by secrecy and speculation.
In a general statement reported by Reuters last week, an OpenAI spokesperson said: "We want our AI models to see and understand the world more like we do. Continuous research into new AI capabilities is a common practice in the industry, with a shared belief that these systems will improve in reasoning over time."
What can it do?
According to the Reuters exclusive report, OpenAI's Strawberry models are aimed at "enabling the company's AI to not just generate answers to queries but to plan ahead enough to navigate the internet autonomously and reliably to perform what OpenAI terms 'deep research'."
Further, based on an exclusive view of internal OpenAI documentation, the report stated: "among the capabilities OpenAI is aiming Strawberry at is performing long-horizon tasks (LHT), the document says, referring to complex tasks that require a model to plan ahead and perform a series of actions over an extended period of time, the first source explained."
OpenAI specifically wants its models to use these capabilities to conduct research by browsing the web autonomously with the assistance of a "CUA," or a computer-using agent, that can take actions based on its findings, according to the document and one of the sources.
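The report does not describe how such a computer-using agent would be built. As a purely conceptual illustration of the observe-decide-act loop the description implies, the Python sketch below fetches a page, asks a stand-in decision function for the next step, and stops once a goal is met; every function name and URL in it is hypothetical, and it does not reflect OpenAI's implementation.

```python
# Conceptual sketch only: the report describes a "computer-using agent" (CUA) that
# browses the web and takes actions based on its findings. No details of OpenAI's
# actual agent are public; this loop only illustrates the general observe/decide/act
# pattern such an agent implies. All function names here are hypothetical.
import requests

def decide_next_action(page_text: str, goal: str) -> str | None:
    """Stand-in for a model call that would plan the next step toward the goal.
    Here it simply stops once the goal phrase appears in the page."""
    return None if goal.lower() in page_text.lower() else "https://example.com"

def research(start_url: str, goal: str, max_steps: int = 5) -> None:
    url = start_url
    for _ in range(max_steps):
        page_text = requests.get(url, timeout=10).text  # observe: fetch the page
        next_url = decide_next_action(page_text, goal)  # decide: plan the next step
        if next_url is None:                            # act: stop or follow a link
            print("goal satisfied at", url)
            return
        url = next_url
    print("gave up after", max_steps, "steps")

research("https://example.com", goal="Example Domain")
```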
Strawberry involves a specialized way of performing what is known as "post-training" on OpenAI's generative AI models. The post-training phase of developing a model involves methods such as "fine-tuning," a process used on nearly all language models in which humans give the model feedback on its responses and feed it examples of good and bad answers.
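The report does not say how Strawberry's post-training differs from standard practice. As a rough, hypothetical illustration of the "examples of good and bad answers" idea, the toy PyTorch sketch below trains a small scorer to rank good answers above bad ones with a pairwise preference loss, the kind of objective commonly used in reward modelling for human-feedback fine-tuning; it is not OpenAI's method, and all data in it is made up.

```python
# Illustrative sketch only: post-training with human feedback often involves
# teaching a scorer to prefer "good" answers over "bad" ones. This toy PyTorch
# script shows that idea with made-up numeric features instead of real text.
import torch
import torch.nn as nn

# Hypothetical feedback data: for each prompt, features of a good and a bad answer.
good_features = torch.tensor([[0.9, 0.8], [0.7, 0.9], [0.8, 0.6]])
bad_features = torch.tensor([[0.2, 0.1], [0.3, 0.4], [0.1, 0.2]])

# A toy "reward model" that scores an answer; real systems score text with an LLM.
scorer = nn.Linear(2, 1)
optimizer = torch.optim.Adam(scorer.parameters(), lr=0.05)

for step in range(200):
    good_scores = scorer(good_features)
    bad_scores = scorer(bad_features)
    # Pairwise preference loss: push the good answer's score above the bad one's.
    loss = -torch.nn.functional.logsigmoid(good_scores - bad_scores).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("good answers now score higher:",
      (scorer(good_features) > scorer(bad_features)).all().item())
```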
Differences from current models
While the spokesperson did not directly address Reuters' questions about Strawberry, two sources described having viewed, earlier this year, "what OpenAI staffers told them were Q* demos, capable of answering tricky science and math questions out of reach of today's commercially-available models."
While large language models (LLMs) can already perform various functions like summarisation, translation, information processing and content creation at extremely high speeds, the report stated "the technology often falls short on common sense problems whose solutions seem intuitive to people, like recognizing logical fallacies and playing tic-tac-toe. When the model encounters these kinds of problems, it often 'hallucinates' bogus information."
More human?
OpenAI CEO Sam Altman said earlier this year that in AI "the most important areas of progress will be around reasoning ability." Researchers interviewed by Reuters reportedly said that "reasoning is key to AI achieving human or super-human-level intelligence."
Improving reasoning in AI models is seen as the key to unlocking their ability to do everything from making major scientific discoveries to planning and building new software applications.
Stanford professor Noah Goodman, when asked about the technology by Reuters, said, "I think that is both exciting and terrifying…if things keep going in that direction we have some serious things to think about as humans." Goodman is reportedly not affiliated with OpenAI and is not familiar with Strawberry.