Slow Is Fast, Fast Is Slow
AGI’s Forgotten Heritage in Classical Computational Complexity
Introduction
In “Slow Is Fast, Fast Is Slow: AGI’s Forgotten Heritage in Classical Computational Complexity,” we explore the intricate interplay among George Pólya’s dual-mode human mathematical problem-solving (Pólya, 1954), Daniel Kahneman’s dual-system human cognition (Kahneman, 2011), and the dual-step process of NP (Non-deterministic Polynomial-Time) problems as framed in computational complexity theory. We highlight a paradox where ‘fast’ intuitive guesswork, integral to human System 1 thinking, aligns with the ‘slow’ solution search step of NP problems — a significant challenge in computational complexity. Conversely, ‘slow’ and methodical System 2 thinking, used for logical proof demonstration, parallels the ‘fast’ solution verification step in NP problems.
Advances in AI, especially in Large Language Models (LLMs), highlight this duality. LLMs predominantly operate in a manner akin to System 1 — fast, intuitive, and often automatic. Yet they increasingly employ techniques such as Chain-of-Thought prompting (Wei et al., 2022) to elicit more analytical, deliberate, System 2-like reasoning.
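To make the contrast concrete, here is a minimal sketch in the spirit of Wei et al. (2022): the same question is posed once as a direct prompt and once with a worked, step-by-step exemplar. The question, the exemplar, and the variable names are illustrative only; no particular LLM API is assumed.

```python
# Minimal sketch contrasting a direct prompt with a chain-of-thought prompt,
# in the spirit of Wei et al. (2022). Illustrative only; no LLM API assumed.

QUESTION = (
    "Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?"
)

# System 1-style prompt: ask for the answer in one shot.
direct_prompt = f"Q: {QUESTION}\nA:"

# System 2-style prompt: show a worked exemplar with intermediate steps,
# nudging the model to reason step by step before answering a new question.
chain_of_thought_prompt = (
    f"Q: {QUESTION}\n"
    "A: Roger starts with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
    "Q: <a new, unseen question goes here>\n"
    "A: Let's think step by step."
)

print(direct_prompt)
print()
print(chain_of_thought_prompt)
```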
This leads us to the neuro-symbolic approach in AI, exemplified by Google DeepMind’s AlphaGeometry (Trinh et al., 2024). In this approach, the neural network handles the ‘slow’ solution search of NP problem-solving — a challenge firmly rooted in computational complexity — while the symbolic engine efficiently handles the ‘fast’ solution verification. This alignment suggests that the neuro-symbolic method, though initially inspired by human cognition, has matured into a natural approach to computational problem-solving. The continued relevance of computational complexity in the era of AGI suggests that AGI’s development may find its heritage in these foundational principles and benefit from this understanding.
From AlphaGeometry to the Universal Theorem Prover
As AGI looms on the horizon, the potential evolution of systems like Google DeepMind’s AlphaGeometry (Trinh et al., 2024) becomes a compelling subject for speculation. Known for adeptly solving high-school-level geometry problems in International Mathematical Olympiad (IMO) competitions, AlphaGeometry represents a specialized form of AI-driven problem-solving intelligence. Could the future development of such systems pave the way to capabilities akin to the universal theorem prover envisioned in the Entscheidungsproblem?
The Entscheidungsproblem, a foundational question in computational theory posed by David Hilbert and profoundly explored by Alan Turing and Kurt Gödel, asked whether a universal method exists to decide the provability of any mathematical statement. Turing’s work established that no such universal theorem prover can exist: it would have to address an unlimited range of mathematical statements, rendering it too powerful to exist. Gödel, from another perspective, reinterpreted the Entscheidungsproblem in computational terms, shifting the question from existence to computational feasibility and situating the universal theorem prover within the realm of NP problems, characterized by their ‘hard to solve but easy to verify’ nature.
In the context of AGI, systems like AlphaGeometry might rekindle aspirations of achieving aspects of a universal theorem prover. Currently navigating the complex territory of NP problem-solving, these systems encompass both the solution search and verification steps inherent in NP problems, mirroring the theoretical challenges posed by Hilbert but within computational feasibility limits.
This exploration invites us to ponder how advancements in AI, particularly neuro-symbolic computing, might align with the core challenges of computational complexity. Could the future of AGI see the emergence of systems approaching universal theorem-proving capabilities, thereby bridging the gap between theoretical computational challenges and modern AI capabilities?
Understanding Computational Complexity
The computing revolution has ushered us into a new era of problem-solving by computers, giving rise to computational complexity theory, which studies the computational resources — particularly time and space — required to solve problems.
Central to this field are three classifications of problems: P, NP, and EXP:
- P (Polynomial-Time): This category includes problems solvable within a time frame that scales polynomially with input size, deemed ‘feasible’ for practical applications.
- NP (Non-deterministic Polynomial-Time): Encompassing problems whose solutions, once discovered, can be quickly verified, NP represents a realm of challenges that are hard, yet intriguing and potentially attainable. The defining characteristic of NP problems is that their solution search space tends to grow exponentially with the size of the input, even though checking a candidate solution remains fast (see the sketch after this list). It is this balance of challenge and attainability that places NP problems at the heart of computational complexity theory.
- EXP (Exponential-Time): These problems require resources that grow exponentially with input size, typically considered ‘hopelessly infeasible’ for large inputs.
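To make the ‘hard to solve, easy to verify’ asymmetry concrete, the minimal sketch below uses Subset Sum, a well-known NP-complete problem: checking a proposed certificate takes a single pass, while the naive search may have to examine up to 2^n subsets. The numbers and function names are illustrative only.

```python
from itertools import combinations

# Illustrative sketch of the NP asymmetry, using Subset Sum:
# given numbers and a target, is there a subset summing to the target?

def verify(numbers, target, indices):
    """'Fast' step: checking a certificate (a choice of indices)
    takes time roughly linear in the input size."""
    return sum(numbers[i] for i in indices) == target

def search(numbers, target):
    """'Slow' step: brute force tries every subset, and the number of
    subsets doubles with each added element (2^n in total)."""
    n = len(numbers)
    for size in range(n + 1):
        for indices in combinations(range(n), size):
            if verify(numbers, target, indices):
                return indices           # a certificate, if one exists
    return None

numbers = [3, 34, 4, 12, 5, 2]
certificate = search(numbers, target=9)
print(certificate)                       # (2, 4): 4 + 5 == 9
print(verify(numbers, 9, certificate))   # True: checking is the easy part
```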
NP-complete problems, the hardest problems in the NP class, are particularly interesting in this classification. Efficiently solving any one NP-complete problem would imply the ability to solve every problem in NP efficiently. Essentially, a single efficient NP-complete solver could theoretically address all NP problems.
The pivotal question in computational complexity theory is the P vs. NP problem. It asks whether problems that can be quickly verified (NP) can also be solved just as quickly (P). This question centers on the search for a single efficient NP-complete solver. The existence of such a solver would revolutionize fields like cryptography, algorithm design, and process planning, and would profoundly impact the foundations of AI, especially AGI. Much like the universal theorem prover, which is too powerful to exist due to its boundless requirements, a fast NP-complete solver, given its implications, is widely suspected to be elusive.
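This ‘one solver fits all’ property rests on polynomial-time reductions: every problem in NP can be translated cheaply into an instance of an NP-complete problem such as Boolean satisfiability (SAT). The sketch below is purely hypothetical; reduce_to_sat stands for such a translation (known to exist by the Cook-Levin theorem), while fast_sat_solver stands for the efficient solver whose existence is exactly what the P vs. NP question asks.

```python
# Hypothetical sketch of why a single efficient NP-complete solver would
# suffice for all of NP. Both arguments are placeholders supplied by the
# caller:
#   reduce_to_sat   - a polynomial-time translation into a SAT formula
#                     (such translations are known to exist: Cook-Levin)
#   fast_sat_solver - an efficient SAT decider, whose existence is exactly
#                     the open P vs. NP question
def decide_any_np_problem(instance, reduce_to_sat, fast_sat_solver) -> bool:
    sat_formula = reduce_to_sat(instance)    # cheap translation step
    return fast_sat_solver(sat_formula)      # the hypothetical 'fast' step
```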
This backdrop sets the stage for our next exploration: How does the neuro-symbolic approach in AI relate to NP problem-solving, particularly through the lens of human problem-solving? In the following section, we will delve into the parallels and intersections between humans and computers in solving complex problems.
Understanding Human Problem-Solving
Central to our understanding of human problem-solving are the concepts of System 1 and System 2 thinking, as popularized by psychologist Daniel Kahneman (Kahneman, 2011). System 1 represents our intuitive, automatic, and often subconscious mode of thinking. It’s fast and emotion-driven, enabling quick responses and decisions. In contrast, System 2 is our more deliberate and logical mode of thinking. It’s slower, requiring conscious effort and analysis, and it’s responsible for handling complex calculations and logical reasoning. These two systems of thought, although distinct, work together in our cognitive process and are integral to how we approach and solve problems.
Henri Poincaré (1854–1921), a notable French mathematician and physicist, emphasized the role of intuition in mathematical development, recognizing its importance despite its potential to mislead. Poincaré identified two types of mathematicians: analysts driven by logic and geometers guided by intuition. He famously articulated, ‘Logic, which alone can give certainty, is the instrument of demonstration; intuition is the instrument of invention’ (Poincaré, 1969).
Building on this, George Pólya (1887–1985), a Hungarian mathematician influential in modern mathematical education, highlighted plausible reasoning as a critical component of mathematical intuition. Pólya’s approach to problem-solving involved making educated guesses, a process pivotal not only in mathematics but also in broader problem-solving contexts. He observed, ‘The result of the mathematician’s creative work is demonstrative reasoning, a proof; but the proof is discovered by plausible reasoning, by guessing’ (Pólya, 1954). Pólya’s methods, like specialization and analogy, guide learners from general concepts to specific cases.
Poincaré and Pólya both recognized that human problem-solving involves two modes: intuitive and disciplined guesswork (akin to System 1 thinking), followed by articulate, step-by-step demonstration (resembling System 2 thinking).
AI and Classical Computational Complexity
AI’s progress in problem-solving is closely tied to how humans solve problems. If we look at theorem proving as a computer would handle it (finding a proof of bounded length), it is a classic example of an NP problem in computational complexity. In NP problems, the hard part is guessing or searching for a solution, but once a candidate solution is in hand, checking it is relatively easy. This mirrors the two modes of human theorem proving.
By drawing parallels between human theorem proving (with its intuitive and logical steps) and computational theorem proving (with its search and verification steps), we can see how neuro-symbolic AI, such as Google DeepMind’s AlphaGeometry (Trinh et al., 2024), fits into the picture. The neural network corresponds to the first mode of human problem-solving and therefore relates to the solution-search step of NP problem-solving. Similarly, the symbolic component mirrors the second mode of human problem-solving and therefore corresponds to the solution-verification step in a computational setting.
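To make this mapping concrete, the following is a minimal, hypothetical sketch of a propose-and-verify loop. It is not AlphaGeometry’s actual implementation: neural_propose stands in for a learned model that guesses the next auxiliary construction (the intuitive, System 1-like move), and symbolic_prove stands in for an exact deduction engine that checks whether the goal now follows (the fast, verifiable step).

```python
from typing import Callable, List, Optional

# Hypothetical propose-and-verify skeleton for a neuro-symbolic prover.
# NOT AlphaGeometry's implementation; 'neural_propose' and 'symbolic_prove'
# are placeholder callables supplied by the caller.

def solve(problem: str,
          neural_propose: Callable[[str, List[str]], Optional[str]],
          symbolic_prove: Callable[[str, List[str]], Optional[List[str]]],
          max_steps: int = 16) -> Optional[List[str]]:
    """Alternate a 'slow' search step (guessing constructions) with a
    'fast' verification step (exact symbolic deduction)."""
    auxiliary: List[str] = []               # constructions guessed so far
    for _ in range(max_steps):
        proof = symbolic_prove(problem, auxiliary)
        if proof is not None:               # fast: the proof checks out
            return proof
        guess = neural_propose(problem, auxiliary)
        if guess is None:                   # the proposer is out of ideas
            return None
        auxiliary.append(guess)
    return None                             # give up after max_steps guesses
```

The loop itself is trivial; in such a design the difficulty sits entirely in the quality of the guesses, which is exactly where the NP-style search challenge lives.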
This shows that AI’s methods are not just influenced by human thinking but are also deeply intertwined with the core challenges of solving complex computational problems.
Conclusion
The trajectory of AI research has traditionally been marked by a focus on application-specific solutions. This trend stems partly from the nature of AI, which thrives on specific, well-defined problems and datasets. However, the emergence of large language models (LLMs) has initiated a shift toward more general-purpose capabilities in AI. These models represent a departure from the highly specialized tools of the past, offering a broader range of applications and problem-solving potential.
As we move closer to the realization of AGI, this trend towards generality is expected to accelerate. AGI, by its very definition, implies a system capable of understanding, learning, and applying its intelligence across a wide array of tasks and domains, akin to human cognitive abilities. In this context, terms like ‘general’ and foundational computational questions such as the P vs. NP problem might regain prominence. This potential shift suggests that the future of AI is not only about refining specific applications but also about embracing and integrating more generalized, versatile problem-solving approaches. Such an evolution in AI would not only represent technological advancement but also a reconnection with the broader, more theoretical questions that have long been at the heart of computational complexity.
References
- Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
- Trinh, T. H., Wu, Y., Le, Q. V., He, H., & Luong, T. (2024). Solving Olympiad geometry without human demonstrations. Nature. https://doi.org/10.1038/s41586-023-06747-5
- Pólya, G. (1945). How to Solve It: A New Aspect of Mathematical Method. Princeton University Press.
- Pólya, G. (1954). Mathematics and Plausible Reasoning, Volume I: Induction and Analogy in Mathematics. Princeton University Press.
- Poincaré, H. (1969). Intuition and Logic in Mathematics. The Mathematics Teacher, 62(3), 205–212.
- Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q., & Zhou, D. (2022). Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. arXiv. https://arxiv.org/abs/2201.11903