Analogy, Grossmann, Intelligence
It’s coming, they say. AGI here, AGI there. Can you feel it, they say. Of course you can’t, you’re not in SF.
AGI is such an ill-defined term that no one shares a clear understanding of it, and people would rather talk about when we will have AGI than what it even means.
AGI, the “G” is probably for Genius
One emerging trend in recent definitions of AGI is to associate it with the very highest levels of human intellectual achievement. For example, Sam Altman (see this discussion) has suggested that if “GPT-8 figured out quantum gravity and could tell you its story, that might qualify as true AGI”. Dario Amodei, in his essay Machines of Loving Grace, describes powerful AI—not strictly AGI—as a system that would be “smarter than a Nobel Prize winner across most relevant fields: biology, programming, math, engineering, writing, and so on.”
Along similar lines, Demis Hassabis suggested the following three-step thought experiment to determine true AGI:
- Train an AI only on knowledge available up to around 1911 (before Einstein’s theory).
- Then ask whether the system could independently derive general relativity, as Albert Einstein did in 1915.
- If it could rediscover the theory from earlier physics knowledge, that would be strong evidence of genuine AGI.
But there are only so many Einsteins in the world1. Requiring AGI to match the achievements of the most exceptional scientific minds may simply set the bar too high. Progress has certainly been astonishing: GPT-2 was useless for any real-world task, GPT-3.5 became useful to beginners on some tasks, and GPT-5’s performance may match that of many professionals in several domains. We could measure this as a percentile on some imaginary scale of ability across the full population. But expecting AGI to sit at the very frontier of human intelligence—the extreme tail of the distribution—is the most demanding possible definition.
Climbing the Ladder: Three Levels of Intelligence
Before jumping directly to “Einstein-level” intelligence, it may be useful to think of scientific capability as a ladder of cognitive abilities, each building on the previous one.
Level 1 — Induction and Deduction
Induction emerges from large-scale statistical pattern recognition, and modern LLMs have become remarkably strong at it. This capability is well illustrated by systems such as AlphaEvolve2. Deduction is increasingly within reach thanks to reasoning models and the scaling of formal languages; a prominent example is AlphaProof.
Level 2 — Analogical reasoning
Analogical reasoning allows a system to transfer relational structure from one domain to another: recognizing that two situations share the same underlying pattern even if their surface details differ.
This ability enables systems to reuse conceptual structures, map ideas across fields, and generate new hypotheses.
Evidence that current language models possess robust analogical reasoning remains limited. They can often produce convincing analogies in language, but whether they truly perform structural analogy at the level required for scientific discovery is still debated.
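To make “transferring relational structure” concrete, here is a toy sketch of structural analogy as relational matching, using the classic solar-system/atom example from Gentner’s structure-mapping theory. This is an illustration of the concept only, not a claim about how LLMs internally operate; all names and the brute-force matcher are illustrative.

```python
from itertools import permutations

# Each domain is a set of relational facts: (relation, subject, object).
# Surface details differ (sun vs. nucleus), but the relational structure matches.
solar_system = {
    ("more_massive", "sun", "planet"),
    ("attracts", "sun", "planet"),
    ("revolves_around", "planet", "sun"),
}
atom = {
    ("more_massive", "nucleus", "electron"),
    ("attracts", "nucleus", "electron"),
    ("revolves_around", "electron", "nucleus"),
}

def entities(facts):
    """Collect the distinct entities mentioned in a domain."""
    return sorted({x for _, a, b in facts for x in (a, b)})

def best_mapping(source, target):
    """Brute-force the entity mapping that preserves the most relations."""
    src, tgt = entities(source), entities(target)
    best, best_score = None, -1
    for perm in permutations(tgt, len(src)):
        mapping = dict(zip(src, perm))
        # Count source facts whose translated form also holds in the target.
        score = sum((rel, mapping[a], mapping[b]) in target
                    for rel, a, b in source)
        if score > best_score:
            best, best_score = mapping, score
    return best, best_score

mapping, score = best_mapping(solar_system, atom)
print(mapping, score)  # {'planet': 'electron', 'sun': 'nucleus'} 3
```

The matcher ignores what the entities are called and scores only shared relational patterns, which is the essence of structural (as opposed to surface) analogy; real structure-mapping systems and, plausibly, capable models must do this over far richer and noisier representations.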
Level 3 — Full scientific creativity
At the highest level lies scientific creativity: the ability to invent entirely new conceptual frameworks and evaluate competing theories. This is the territory of mythical figures like Albert Einstein or Alexander Grothendieck, where new mathematics or physics reshapes the field itself.
Analogical reasoning may be the missing middle step between statistical learning and genuine scientific invention.
AGI, the “G” is probably for Grossmann
Breakthrough discoveries are often told as the triumph of a solitary genius: eureka moments in the bath or during a nap. In practice, the story is sometimes less romantic but more instructive, because the breakthrough depends on something much more prosaic: the discovery of the right (mathematical) framework.
By the early 1910s, Einstein had already reached several of the essential physical insights behind the theory of general relativity. The equivalence principle suggested that gravity and acceleration were fundamentally related, and he suspected that gravity might not be a force in the Newtonian sense but rather a property of spacetime itself. The difficulty was mathematical. Einstein initially tried to express the theory using extensions of the formalism of special relativity and classical field theory, but these approaches had no natural way to describe curved spaces.
Einstein reached out to his old classmate from Zurich, Marcel Grossmann3:
“Grossmann, you must help me, otherwise I’ll go crazy.”
And Grossmann pointed Einstein towards Riemannian geometry and tensor calculus…
A similar pattern appears in the work of another genius, Richard Feynman. In the 1940s, he wondered whether quantum mechanics could be formulated directly from the action functional rather than through the Schrödinger equation or the operator formalism of Heisenberg.
In his 1965 Nobel Prize in Physics lecture, Feynman recounts that, while stuck on this problem, he met a colleague at a beer party by the most random chance, and that colleague pointed him to the right path—Dirac’s, of course:
“Listen, do you know any way of doing quantum mechanics, starting with action — where the action integral comes into the quantum mechanics?” “No,” he said, “but Dirac has a paper in which the Lagrangian, at least, comes into quantum mechanics. I will show it to you tomorrow.”
And this is what eventually led to the path integral formulation.
In both stories the decisive step was neither induction nor deduction, but representation: recognizing the framework in which the problem becomes solvable. The problem became tractable only once it was written in the right language.
This suggests a lower bar than the usual AGI rhetoric: we might not need a system capable of reproducing Einstein, but one that reliably plays the role of Grossmann, identifying relevant frameworks and proposing the right reformulation.
1. Though we were recently promised “a country of geniuses in a data center” by Dario Amodei. ↩
2. As well as its cousins, such as ShinkaEvolve or the latest AdaEvolve. ↩
3. See the excellent Marcel Grossmann and his contribution to the general theory of relativity. ↩