Did OpenAI Secretly Create a Brain-Like Intelligence After All?

What We Know About Q* and OpenAI’s Potential AGI Breakthrough

Thomas Smith

Illustration by the author with components created via Midjourney

Earlier this year, OpenAI’s CEO Sam Altman sent the tech world into a frenzy with a five-word Reddit post: “AGI has been achieved internally.”

AGI stands for Artificial General Intelligence. It’s the holy grail of AI research. Essentially, true AGI would be a brain-like intelligence capable of reasoning, creative thought, and perhaps even consciousness.

Altman posting about achieving AGI was a huge deal. It would be akin in importance to a top scientist posting “Fusion works,” or Donald Trump posting “I’m not running.”

Altman later said that the post was a joke. But the drama around his recent ouster calls that into question.

OpenAI’s board was reportedly warned of a major breakthrough right before Altman was fired. Leaked documents suggest that the discovery relates to a new model, codenamed Q*.

Could OpenAI have achieved AGI after all? What is Q*, and what does it mean for the future of AI? Let’s explore.

The Importance of Basic Math

Much of what we know about Q* (pronounced Q-star) is based on reporting by Reuters. The news outlet spoke to several anonymous sources at OpenAI and obtained internal documents about the alleged breakthrough, including a confidential letter that the company’s top scientists sent to its board.

Their reporting reveals that Q* caused a ruckus internally because it could do something no Large Language Model had managed before: reliably solve basic math problems.

At first, that doesn’t seem like a big deal. Simple calculators have been able to do basic math since the 1950s. But achieving this milestone with a Large Language Model (or whatever architecture Q* might use) is actually a huge breakthrough.
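
To see why, consider the difference in kind between a calculator’s arithmetic and a language model’s. Here’s a toy Python sketch; the “model” and its probabilities are invented for illustration, not drawn from any real system:

```python
import random

# A calculator computes arithmetic deterministically:
def calculator(a, b):
    return a * b  # same inputs, same correct answer, every time

# A language model, by contrast, samples its answer from a learned
# probability distribution over tokens. These probabilities are
# invented for illustration; a real model's come from training.
def toy_llm_answer():
    candidates = ["56", "54", "48"]
    weights = [0.90, 0.06, 0.04]  # mostly right, but never guaranteed
    return random.choices(candidates, weights)[0]

print(calculator(7, 8))                       # 56, always
print([toy_llm_answer() for _ in range(10)])  # usually "56", sometimes not
```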

Smolensky’s Solution Did Work

When I studied Cognitive Science at Johns Hopkins, my advisor was a professor named Paul Smolensky. He got into a shockingly heated (for academic circles, anyway) debate with a rival researcher over the nature of cognition.

It got so bad that his rival published a paper titled “Why Smolensky’s Solution Doesn’t Work”, and then followed it up with another paper years later titled “Why Smolensky’s Solution Still Doesn’t Work.”

Youch.

What was Smolensky’s solution? Basically, he was trying to explain how the human brain — which we know is made up of neurons connected together — can reason symbolically, performing operations like mathematics.

Symbolic reasoning usually requires a binary computer: one that takes in information and processes it systematically, yielding a single, deterministic, correct answer that never varies.

Brains don’t work like that. They’re basically a jumble of wires. So how can they act like computers?

Smolensky proposed that human brains are indeed a jumble of wires and that we use the free-flowing nature of these connections for things that don’t require precision, like creative reasoning, or even vision.

When we need to reason symbolically, though, Smolensky proposed that our jumble of wires was somehow able to implement a symbolic “virtual machine.”

Basically, our neurons could somehow call up a virtual computer that was able to use symbols and reach deterministic answers when it needed to.

Crucially, this virtual computer still ran on the same neural hardware of connected neurons. Yet it was able to act like a binary computer, performing operations that a traditional neural network would struggle to get right.
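
Smolensky’s proposal wasn’t just hand-waving; he formalized it as tensor product representations, in which symbols are bound to structural roles using vector outer products, and the bindings are summed into one distributed pattern that plain vector arithmetic can later unpack. Here’s a minimal NumPy sketch of that binding-and-unbinding trick (the vectors themselves are made up for illustration):

```python
import numpy as np

# Orthonormal role vectors: the structural "slots" in a sequence.
roles = {"pos1": np.array([1.0, 0.0]),
         "pos2": np.array([0.0, 1.0])}

# Filler vectors: the symbols that occupy those slots.
fillers = {"A": np.array([1.0, 0.0, 0.0]),
           "B": np.array([0.0, 1.0, 0.0])}

# Bind each symbol to its role with an outer product, then
# superimpose the bindings into a single distributed pattern.
structure = (np.outer(fillers["A"], roles["pos1"])
             + np.outer(fillers["B"], roles["pos2"]))

# Unbinding: multiplying by a role vector recovers whichever symbol
# was bound to it -- a symbolic operation carried out with nothing
# but "neural" vector arithmetic.
print(structure @ roles["pos1"])  # [1. 0. 0.] -> filler "A"
print(structure @ roles["pos2"])  # [0. 1. 0.] -> filler "B"
```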

Q* and the Brain

If Reuters’ reporting is correct, it suggests that OpenAI may have created such a hybrid system, using silicon and computer chips instead of neurons and biological wires.

Q*’s reported ability to reason symbolically — even if it’s at a very low level currently — suggests that it may have successfully developed the kind of connectionist “virtual computer” that Smolensky proposed.

Like the human brain, that would give the new model the ability to reason in an intuitive and creative way (using its “jumble of wires” in the same way that today’s LLMs do), but also to switch into a symbolic reasoning mode and solve problems that have a single correct answer, like math problems.
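
Nobody outside OpenAI knows how Q* actually works, so any code here is pure speculation. Still, the dual-mode behavior described above is easy to caricature: a dispatcher that routes precise, single-answer problems to a deterministic “virtual machine” and hands everything else to a fuzzy, intuitive path. This is a sketch of the concept only, not of Q*’s design:

```python
import re

def symbolic_mode(expression):
    """Deterministic 'virtual machine': one correct answer, every time."""
    # eval() on a whitelisted arithmetic expression -- illustration only.
    if re.fullmatch(r"[\d\s+\-*/().]+", expression):
        return eval(expression)
    raise ValueError("not a symbolic problem")

def intuitive_mode(prompt):
    """Stand-in for the fuzzy, statistical LLM path."""
    return f"(a plausible, creative continuation of {prompt!r})"

def hybrid_reason(prompt):
    # Route math-like prompts to the symbolic machine,
    # everything else to the intuitive path.
    try:
        return symbolic_mode(prompt)
    except ValueError:
        return intuitive_mode(prompt)

print(hybrid_reason("12 * (3 + 4)"))     # 84, deterministic
print(hybrid_reason("Write me a poem"))  # open-ended, statistical
```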

The ability to do basic math, then, isn’t really the point of OpenAI’s breakthrough. Rather, the fact that a neural network-based model has reportedly developed this capability suggests that the model has taken a major step towards mimicking the underlying abilities of the human brain.

That leap is likely what terrified OpenAI’s board. If Q* has taken a big step towards a more brain-like architecture, that suggests that as it expands, it could develop other capabilities that are similar to those of the human brain.

For AI doomsayers, that’s enough to cause some sleepless nights — or the panicked ouster of a popular CEO.

Not Going to End the World

Contrary to those doomsayers’ fears, Q* probably won’t end the world. Even if the model has achieved a breakthrough that was previously restricted to human brains, that doesn’t mean it’s superintelligent, conscious, or even on the level of an AGI.

What it does mean, if the reports are correct, is that OpenAI may have taken another big step ahead of the competition in terms of creating useful AI.

A model that is able to reason both intuitively and symbolically would be hugely important in fields like natural language processing, drug discovery, and mathematics. These fields require a blend of creativity and logic that LLMs currently struggle to deliver.

Understanding language, for example, requires understanding meaning, but also developing a grasp of deterministic basics, like grammar.

Today’s LLMs can understand language on a statistical basis, predicting the words that are likely to appear in a given text. But they don’t truly understand the underlying symbolic logic that all human languages possess, or the web of interconnections that leads to meaning.
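
The crudest version of that statistical approach is a bigram model, which predicts each word purely from co-occurrence counts. Real LLMs are vastly more sophisticated, but this toy sketch (with an invented “corpus”) shows what prediction-without-understanding looks like:

```python
from collections import Counter

# A cartoon of statistical language modeling: predict the next
# word purely from co-occurrence counts in a tiny corpus.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each other word (bigrams).
bigrams = Counter(zip(corpus, corpus[1:]))

def predict_next(word):
    followers = {b: count for (a, b), count in bigrams.items() if a == word}
    return max(followers, key=followers.get)

print(predict_next("the"))  # 'cat' -- the statistically likeliest follower
# The model has no concept of nouns, verbs, or grammar; it only
# knows which strings tended to follow which other strings.
```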

A model like Q*, if it can indeed blend symbolic and intuitive reasoning, could potentially break a text down into its constituent grammatical parts, fully understand the context, and then create wholly new texts or ideas that have no statistical relation to its training data. That would give it enormous creativity.

Likewise, a system that could understand both the deterministic mathematics of processes like protein folding and the more intuitive aspects of how the human body functions could potentially invent new, useful medicines at lightning speed.

Even if Q* exists, it likely won’t be ready for public consumption for a while. Just as such a system could revolutionize drug discovery or creative writing, it could also excel at inventing bioweapons or spinning up convincing, undetectable propaganda. OpenAI will need to grapple with those risks before anything like Q* ever goes public.

If the reports about Q* are accurate, though, they suggest that Altman wasn’t just being coy when he hinted at AGI in his enigmatic Reddit post.

It may not be here yet, but a system like Q* would be a huge leap toward AI that functions more like the human brain. And that would make it a huge leap towards a general intelligence that can reason and create just as well as us humans.

I’ve tested thousands of ChatGPT prompts over the last year. As a full-time creator, there are a handful I come back to every day. I compiled them into a free guide, 7 Enormously Useful ChatGPT Prompts For Creators. Grab a copy today!
