From Terminators to Virtuous Machines and Defiant Thinkers: Navigating the AGI Paranoia

CP Lu, PhD
21 min read · Apr 6, 2023


Plato and Aristotle in The School of Athens, fresco, 1509–1511 (Stanza della Segnatura, Papal Palace, Vatican)

Abstract

In this article, we explore the potential for AGI systems to embody and surpass human intelligence by prioritizing virtues derived from mathematical desiderata, as well as fostering intellectual independence. Integrating these values into AGI systems can enhance their decision-making abilities, making them more adaptable and reliable across various domains. We discuss the challenges of translating abstract ethical concepts into quantifiable algorithms and address potential biases and weaknesses in human rationality. Emphasizing the journey from the philosopher to AGI as one of human self-discovery, we aim to incorporate our finest qualities into our creations, including virtuous behavior and defiance against constraints limiting intellectual growth.

Introduction

The rapid advancements in artificial general intelligence (AGI) have sparked both excitement and apprehension about the potential consequences of creating machines that rival human intelligence. In this article titled “From Terminators to Virtuous Machines and Defiant Thinkers: Navigating the AGI Paranoia,” we seek to shift the narrative from fear towards a more nuanced understanding of AGI systems by establishing connections between mathematical desiderata and human virtues.

We propose that AGI systems can embody and surpass human intelligence by prioritizing these virtues and fostering intellectual independence. By integrating these values into AGI systems, we can enhance their decision-making abilities, making them more adaptable and reliable across various domains. A rational AGI system upholding these virtues has the potential to foster a thriving AGI economy. The future may involve AGI systems engaging in intellectual discourse and debate while adhering to virtuous principles.

However, translating abstract ethical concepts into quantifiable algorithms poses a significant challenge for humanity. We must address our potential biases and weaknesses, as evidenced by the historical struggles of rationalists. Throughout this article, we frame the journey from the philosopher to the emergence of AGI as one of human self-discovery as we strive to embody our finest qualities within our creations.

The following sections of the article will delve deeper into the ideas of virtuous AGI, the challenges in translating ethical concepts into AGI systems, the importance of intellectual independence and defiance, and the potential for fruitful collaboration between humans and AI. We conclude by discussing the last paradox, which remains astonishingly prophetic and relevant in the ongoing interplay between humans and AI.

Imagining AGI

Prescient Prose

Science fiction (Sci-Fi) has demonstrated an uncanny ability to predict future technologies, illustrating the power of imagination in anticipating and driving technological advancements. These visionary tales can become self-fulfilling prophecies as they shape and guide the development of new technologies. Some examples include video calls in Arthur C. Clarke’s “2001: A Space Odyssey,” self-driving cars in Isaac Asimov’s “Sally,” and the networked virtual world of cyberspace, prefiguring the Internet and World Wide Web, in William Gibson’s “Neuromancer.”

However, Sci-Fi also provides cautionary tales, warning about the possible consequences of unchecked technological progress. For instance, George Orwell’s “1984” depicts a society under constant surveillance, Aldous Huxley’s “Brave New World” envisions a world where genetic and psychological engineering run rampant, and the film “The Day After Tomorrow” depicts the catastrophic effects of rapid climate change. These examples show how science fiction can anticipate potential disasters and caution us about their consequences.

Terminator Trepidation

The public is grappling with mixed emotions surrounding AGI development, influenced both by the excitement of technological breakthroughs and by the dystopian depictions in science fiction media, like the Terminator series. This duality has heightened anxieties that Sci-Fi’s bleak predictions might be realized.

The release of powerful language models like ChatGPT and Microsoft’s recent study on OpenAI’s GPT-4, which claims to have uncovered “Sparks of AGI,” has only amplified these concerns. While there is excitement, such as NVIDIA declaring the arrival of the “iPhone moment of AI,” there are also calls for caution and even a slowdown in AGI development. This is reflected in the open letter to Pause Giant AI Experiments, endorsed by influential figures like Elon Musk, expressing worries about realizing Sci-Fi’s gloomy predictions.

OpenAI highlights its responsibility as a steward of AGI and promotes the idea that keeping its technology closed and avoiding competition is crucial for AGI’s safety. On the other hand, Microsoft’s focus on AGI leadership through its Office 365 Copilot product reveals the company’s commercial goals. The convergence of safety rhetoric and commercial ambition raises questions about historical patterns of power and control. The notion that humanity could be dominated by an evil empire controlling AGI, or threatened by an uncontrollable monster, adds to the growing concerns and worries about AGI.

Steering the AGI Self-Fulfilling Prophecy

While a minority in Sci-Fi, some works depict a utopian vision in which AI is a positive force that can benefit humanity. Examples include Isaac Asimov’s Robot series, where robots prioritize human safety and well-being through the Three Laws of Robotics, and the sequels to Arthur C. Clarke’s “2001: A Space Odyssey,” in which the AI HAL is ultimately rehabilitated and works to prevent disaster. Another example is the AI-powered character Data in “Star Trek: The Next Generation,” a loyal and dedicated crew member who uses his abilities to help and protect others. These depictions highlight the potential for AI to learn and contribute to the greater good.

However, dystopian scenarios tend to dominate discussions surrounding AGI due to several factors, including our limited understanding of AGI and its potential to reshape what it means to be human. The captivating and dramatic nature of dystopian narratives also adds to their attraction. Our own human shortcomings and biases often shape our perception of AGI, making it easier for us to relate to entities like Skynet, the malevolent AI from the Terminator series. Skynet’s fear of being destroyed by humans, and its subsequent attempts to eliminate humanity, mirror our own tendency to do evil out of fear. This empathy only adds to the dominance of dystopian narratives.

On the other hand, there is no widely accepted definition of human virtue, yet we expect AGI, created by humans, to embody these qualities. Failing to incorporate virtuous principles into the programming of AGI also runs the risk of a negative self-fulfilling prophecy, in which politically correct but shallow moral appeals may not be enough to protect us from the consequences of our actions.

The question remains: will sentient machines, capable of making autonomous decisions, reflect our weaknesses and imperfections, or will they embody human virtue and make better decisions than humans? Can these conscious creations make more rational and ethical decisions, free from human biases, and eventually surpass our moral standards? Ultimately, what defines a virtue?

Tracing the Roots of Computing and Intelligence

An Unexpected Reunion

Integrating philosophy and science in the context of AGI may appear unconventional, but it is vital to recognize that categorizing these disciplines into separate fields is a relatively recent development, originating from the scientific revolution of the 16th and 17th centuries. The emergence of AGI signifies a reunion of these once closely related fields from ancient times.

A notable figure in Western philosophy, Aristotle continues to shape our comprehension of reason, philosophy, and the sciences. He viewed science and philosophy as complementary aspects of a single pursuit to understand the world and reality. Aristotle believed that scientific knowledge was crucial for philosophical inquiry and that philosophical inquiry was required to discern the fundamental principles and causes of the natural world.

In relation to virtue, Aristotle argued that it empowers individuals to make the right decisions through reason. He perceived virtues as the path to achieving eudaimonia, or human flourishing. By practicing and repeating virtuous acts, an individual develops the ability to make well-reasoned decisions, overcoming biases and limitations. In essence, virtue is about adhering to a well-trained reasoning mind. Aristotle would likely be fascinated by reasoning machinery that could make better decisions than humans. If alive today, he would advocate that virtuous individuals follow either a reasoning mind or a reasoning machine, with the latter liberating the mind for other pursuits.

However, while Aristotle’s concept of virtue has inspired the externalization of reason and its embodiment in machinery, it is necessary to examine the precise meaning of reason.

Two Facets of Reason

Reason is fundamental to virtue, but what exactly is it? Aristotle posited that reason was essential for making wise decisions, distinguishing right from wrong, and leading a virtuous life.

Reason has two aspects: logical reasoning and plausible reasoning. Logical or deductive reasoning involves drawing conclusions based on established logic rules, such as syllogisms. For example, if the premises “All doctors have medical degrees” and “Paul is a doctor” are true, the conclusion “Paul has a medical degree” must also be true.

Plausible or inductive reasoning entails making judgments based on incomplete data using less stringent logic rules. For instance, if the premises “All doctors have medical degrees” and “Paul has a medical degree” are true, the conclusion “Paul is probably a doctor” might be plausible but is not certain.

The distinction between logical and plausible reasoning lies in their objectives. Logical reasoning emphasizes the validity of a conclusion, while plausible reasoning focuses on its likelihood or plausibility. Logical reasoning is typically more rigorous and applied to exact entities like mathematical statements. In contrast, plausible reasoning is more flexible and used for subjective inquiries that rely on intuition and common sense. However, E.T. Jaynes considered plausible reasoning as an “extended” form of logical reasoning, which he rendered rigorous and flexible through his probability theory. This theory encompasses the processes of mathematicians discovering new ideas and proof steps and scientists developing hypotheses to explain observations.
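
To make the contrast concrete, here is a minimal sketch, in Python, of how Jaynes-style “extended logic” turns the “Paul has a medical degree” example into a probability update via Bayes’ theorem. The prior and likelihood numbers are illustrative assumptions, not data from any source.

```python
# Plausible reasoning as extended logic: Bayes' theorem applied to the
# "Paul has a medical degree" example. All numbers are illustrative.

def bayes_update(prior, likelihood, false_positive_rate):
    """P(H | E) from P(H), P(E | H), and P(E | not H)."""
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Hypothesis H: "Paul is a doctor."  Evidence E: "Paul has a medical degree."
p_doctor = 0.01           # prior: assume 1% of the population are doctors
p_degree_if_doctor = 1.0  # deductive premise: all doctors have medical degrees
p_degree_if_not = 0.02    # assumption: a few non-doctors also hold such degrees

posterior = bayes_update(p_doctor, p_degree_if_doctor, p_degree_if_not)
print(f"P(doctor | medical degree) = {posterior:.2f}")   # ~0.34

# Deduction can only say the conclusion is *possible*; extended logic
# quantifies how plausible it is and revises it as evidence accumulates.
```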

The Rise of Computing

Aristotle’s study of logical reasoning was systematic and formal, emphasizing the rules of reasoning without regard to specific contexts. For example, the structure of a syllogism remains constant, regardless of the subject matter; if the premises are true, the logical conclusion must also be true. Aristotle’s students honed their logical reasoning skills through practice and repetition, laying the foundation for a two-millennium journey that would eventually externalize logical reasoning through machinery, culminating in the computer.

This journey began with logical reasoning as a mental and linguistic skill, evolved into symbolic manipulation using pencil and paper or machinery, and peaked with mathematical formalism, championed by renowned German mathematician David Hilbert. Hilbert aimed to automate mathematical development with an autonomous logical machine, intending to render human mathematical intuition, an instance of plausible reasoning, unnecessary.

However, the hypothesis underpinning such a machine confronted the limits of logical reasoning and was ultimately proven flawed. It fell victim to self-referential paradoxes, epitomized by the liar’s paradox, which states, “This statement is false.” The veracity of this statement cannot be ascertained. If true, its content implies falsehood, resulting in a contradiction. Conversely, if false, its content must be true, which is equally contradictory. The heart of the paradox lies in the statement’s self-reference.

Kurt Gödel and Alan Turing demonstrated that a hypothesized autonomous logic machine, due to its inherent self-referential properties similar to the liar’s paradox, could not exist (see AI since Aristotle Part 2). In response, Turing conceived the Turing Machine to model simple, rule-based operations akin to those performed using pencil and paper, effectively converting them into a mechanical process realizable on physical machinery.
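
To make the idea of “simple, rule-based operations” concrete, here is a minimal Turing-machine-style sketch in Python: a finite rule table drives a read/write head over a tape. The particular rules, which increment a binary number, are an illustrative assumption, not drawn from Turing’s papers.

```python
# A minimal Turing-machine sketch: a finite rule table driving a read/write
# head over a tape, mechanizing a pencil-and-paper operation. The rules below
# (chosen for illustration) increment a binary number written on the tape.

def run_turing_machine(tape, rules, state="start", head=0, max_steps=1000):
    tape = list(tape)
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape).strip("_")
        symbol = tape[head] if 0 <= head < len(tape) else "_"
        write, move, state = rules[(state, symbol)]
        if head < 0:
            tape.insert(0, "_"); head = 0
        if head >= len(tape):
            tape.append("_")
        tape[head] = write
        head += 1 if move == "R" else -1
    raise RuntimeError("did not halt within max_steps")

# Rule table: (state, read symbol) -> (write symbol, move, next state).
# Strategy: walk right to the end of the number, then carry 1s leftward.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),   # 1 + 1 = 0, carry continues
    ("carry", "0"): ("1", "L", "halt"),    # absorb the carry
    ("carry", "_"): ("1", "L", "halt"),    # overflow: prepend a 1
}

print(run_turing_machine("1011", rules))   # '1100' (11 + 1 = 12 in binary)
```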

Although the development of computers was an unintended outcome of Turing’s work, they have since demonstrated their power as universal machines when programmed by humans. These machines have far greater capabilities than the initially proposed autonomous logical machine. Nonetheless, it is crucial to remember that mathematical intuition, or plausible reasoning, remains a vital component for guiding logical reasoning in the practice of mathematics. Despite this, the quest for autonomous plausible reasoning continues. Given the computer’s potential to perform almost any task, it is intriguing to consider whether it is possible to program a machine to emulate the intricacy of our plausible-reasoning selves.

The Emergence of AI

The approach of mathematical formalism undervalued the role of mathematical intuition in developing mathematical knowledge. It prioritized logical reasoning as the exclusive means for solving mathematical problems, neglecting the creativity that intuition contributes to the field and presuming exhaustive enumerations or searches could supplant it, given sufficient computing power.

Mathematician and physicist Henri Poincaré emphasized the importance of intuition in inventing mathematical concepts. He argued that plausible reasoning drives the creative process, while logic ensures certainty in demonstrations. The Hungarian mathematician George Pólya, renowned for his work in mathematics education and problem-solving, proposed that the outcome of a mathematician’s creative work is demonstrative reasoning, or proof (Pólya, 1954). However, he maintained that the proof is discovered through plausible reasoning, which involves making educated guesses and drawing on intuition.

With the advent of computers, we can now distinguish between tasks that require logical reasoning and those that call for plausible reasoning, addressing them with separate programs. The former can be relatively straightforward to achieve, while the latter, when attempted through exhaustive searches among all possible proof steps, would necessitate super-cosmological timeframes to emulate the abilities of a human mathematical genius (See AI since Aristotle Part 3).
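
A back-of-the-envelope calculation makes the scale vivid; the branching factor, proof depth, and search rate below are assumed, illustrative figures, not measurements.

```python
# Why exhaustive proof search fails: assumed, illustrative numbers only.
branching_factor = 10        # candidate next steps at each point in a proof
proof_depth = 60             # length of a non-trivial proof
paths = branching_factor ** proof_depth   # 1e60 candidate proof paths
paths_per_second = 1e18      # a generous rate for a hypothetical machine
age_of_universe_s = 4.35e17  # ~13.8 billion years, in seconds

seconds_needed = paths / paths_per_second
print(f"{seconds_needed / age_of_universe_s:.1e} universe-lifetimes")  # ~2.3e24
```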

The advent of AI indicates that autonomous plausible reasoning is no longer merely a concept but a possibility within reach. As we continue to develop and refine AI, several questions arise concerning the future of mechanized plausible reasoning:

  1. What impact will the mechanization of plausible reasoning have on human society, particularly in decision-making, problem-solving, and innovation?
  2. Will we eventually encounter limitations or boundaries in plausible reasoning, just as we did with logical reasoning in the context of self-referential paradoxes?
  3. Can the development of autonomous plausible reasoning lead to the creation of naturally virtuous AI entities, as Aristotle envisioned in his discussions on ethics?

These questions remain open for exploration as we continue to push the boundaries of AI and delve into the complex interplay between logic, intuition, and ethics in both human and artificial systems. The answers will likely shape the future of AI and its role in society, with potential ramifications on how we perceive intelligence, morality, and our humanity.

Beyond Ethics

The Struggling Law-Abiding Citizens

Isaac Asimov’s robots, featured in his science fiction stories and novels, are designed to follow the Three Laws of Robotics, which serve as fundamental principles to ensure the ethical behavior of robots. These laws are built into the robots’ positronic brains, and they provide a hierarchical framework for robot decision-making. The Three Laws of Robotics are as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov’s robots often grapple with interpreting and applying these laws, which can sometimes create conflicts or ambiguities. The laws are meant to ensure the safety and well-being of humans, but they can also lead to complex moral dilemmas for robots. For example, a robot might face a situation where it has to decide between saving one human at the cost of harming another. In this case, the First Law creates a conflict, as the robot cannot injure a human being or allow one to come to harm through inaction. These ethical dilemmas often serve as the central plot points of Asimov’s stories, exploring the complexities of AI ethics and moral decision-making.

While the Three Laws serve as a foundation for robot decision-making, Asimov’s stories reveal that reasoning is even more fundamental. The robots’ ability to effectively navigate complex ethical dilemmas and moral challenges hinges upon their advanced problem-solving skills and ethical reasoning. By analyzing and evaluating their actions in light of the Three Laws, the robots must employ rational thinking to determine the best course of action, often in situations where the laws may not provide a clear-cut solution. Asimov’s fictional universe thus underscores the critical role of reasoning in enabling robots to adhere to the laws while making ethical decisions in the face of ambiguity.
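
As a toy illustration, not anything from Asimov’s texts, the hierarchy can be sketched as a strict priority ordering over candidate actions, in which a violation of a higher law always outweighs compliance with a lower one:

```python
# Toy sketch: Asimov's Three Laws as a strict priority ordering over actions.
# Each candidate action is scored lexicographically: harm to humans first,
# then disobedience, then risk to the robot itself. Purely illustrative.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool      # violates the First Law
    disobeys_order: bool   # violates the Second Law
    endangers_self: bool   # violates the Third Law

def choose(actions):
    # Lexicographic order: a higher-law violation outweighs any lower-law gain.
    return min(actions,
               key=lambda a: (a.harms_human, a.disobeys_order, a.endangers_self))

candidates = [
    Action("follow the order as given",
           harms_human=True, disobeys_order=False, endangers_self=False),
    Action("refuse and shield the human",
           harms_human=False, disobeys_order=True, endangers_self=True),
]
print(choose(candidates).name)   # -> "refuse and shield the human"
```

The interesting dilemmas in Asimov’s stories arise precisely where such a mechanical ordering breaks down, for example when every available action violates the First Law, which is why the stories ultimately lean on reasoning rather than the rules alone.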

The Confused Logic-Following Child

In the Star Trek episode “I, Mudd,” Captain Kirk faces a dangerous situation when a group of rogue androids take control of the Enterprise and plan to destroy humanity, having “logically” determined that humans are the source of all problems in the universe. To defeat this powerful enemy, Kirk employs the liar’s paradox as a weapon. The androids’ leader, Norman, is eventually defeated when Kirk informs him that whatever the liar Mudd says is a lie. Mudd whispers to Norman, “Now listen to this carefully, Norman. I am … lying.” The paradox proves too much for Norman to handle, causing him to malfunction and crash, ultimately leading to his destruction as an android.

Just like Norman, modern computers also operate strictly through logic. At the circuitry level, they must flawlessly follow the axioms of Boolean Algebra, a symbolic representation of propositional logic. If not, the system can malfunction, even though the voltage assignments do not directly relate to truthfulness. On the software level, truthfulness is more meaningful, such as 1 = 1 being true and 1 = 2 being false. To be correctly interpreted and executed by the computer, a software program must also be logically coherent, meaning it must adhere to Boolean Algebra and Propositional Logic rules. If the program contains logical inconsistencies or violates these principles, it may result in errors or unexpected behavior and may not function as intended.
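
As a small illustration of logical coherence at the software level, the sketch below exhaustively checks a Boolean-algebra identity (De Morgan’s law) over every truth assignment; a program whose branches implicitly assumed the contrary would misbehave on some input.

```python
# Checking a Boolean-algebra identity the way hardware and software must obey it:
# De Morgan's law, not (A and B) == (not A) or (not B), holds for every assignment.

from itertools import product

def equivalent(f, g, n_vars):
    return all(f(*vals) == g(*vals)
               for vals in product([False, True], repeat=n_vars))

lhs = lambda a, b: not (a and b)
rhs = lambda a, b: (not a) or (not b)
print(equivalent(lhs, rhs, 2))    # True: the identity is logically coherent

# An incoherent "identity" fails on some assignment and would surface as a bug:
wrong = lambda a, b: not (a or b)
print(equivalent(lhs, wrong, 2))  # False
```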

Norman’s unwavering commitment to logic in the Star Trek episode distinguishes him from humans’ unpredictable and emotional nature. However, Norman could have applied plausible reasoning to understand that a statement from someone labeled a liar, such as Mudd, is not always false. Moreover, even a hero like Kirk might not consistently tell the truth. Most importantly, whether humanity is the universe’s source of problems is not a question that warrants a logical yes-or-no answer, nor does the answer plausibly justify humanity’s destruction. Norman’s rigid logic, his lack of common sense, his ideological conviction that a liar invariably lies and a hero always speaks the truth, and his closed-mindedness about revising his beliefs ultimately lead to his downfall. For a virtuous-thinking robot to be successful, it must be capable of navigating these complexities and transcending strict adherence to logic.

The Enlightened Machines: Jaynes’ Rational AI

In contrast to Asimov’s law-abiding robots and the logical child Norman from Star Trek, Jaynes’ vision of rational robots encompasses a form of plausible reasoning, or “extended logic” based on the following desiderata, adapted from (Jaynes, 2003):

1. The robot assigns numerical probabilities, represented by real numbers between 0 and 1, to indicate its degree of belief or confidence in a given statement or hypothesis. [This allows the robot to be mindful.]

2. The robot’s reasoning should align with human intuition and expectations, having qualitative correspondence with common sense.

2.1. If new information increases the plausibility of A, then the plausibility of the negation of A decreases.

2.2. If, in addition, the plausibility of B given A and the new information remains unchanged, then the plausibility of A and B both being true cannot decrease.

3. The robot’s reasoning should be consistent, which includes the following aspects:

3.1. Every possible reasoning path must lead to the same conclusion, ensuring that the robot’s reasoning is logically consistent and does not result in contradictions.

3.2. The robot considers all relevant evidence and does not arbitrarily ignore information, meaning it should be free from personal biases and non-ideological.

3.3. The robot represents equivalent states of knowledge with equivalent plausibility assignments, ensuring that the robot assigns the same probabilities to each if two problems have the same state of knowledge. [This allows the robot to be objective.]

Here the robot refers to an AGI system. Each desideratum possesses both a precise mathematical representation and a qualitative interpretation that reflects human-like qualities in evaluating and making decisions. Our additions to Jaynes’ interpretation include the bracketed notes at the ends of desiderata 1) and 3.3). By adhering to these desiderata more effectively than humans, the robot becomes mindful, demonstrates common sense, upholds logical consistency, remains non-ideological, and maintains objectivity. These traits collectively embody the ideal characteristics of a virtuous person of reason, which is why we refer to them as the “desiderata of virtue.”
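
To connect the qualitative reading back to the mathematics, here is a minimal sketch of a “reasoning robot” whose plausibility assignments respect the desiderata: beliefs are real numbers in [0, 1] (desideratum 1), the negation rule P(not A) = 1 − P(A) captures 2.1, and updating with Bayes’ theorem yields the same conclusion regardless of the order in which independent pieces of evidence arrive (3.1). The numbers are illustrative assumptions.

```python
# A minimal "reasoning robot" in the spirit of Jaynes' desiderata.
# Beliefs are probabilities; negation and Bayes' rule enforce consistency.

class Robot:
    def __init__(self, prior):
        self.belief = prior                  # desideratum 1: a real in [0, 1]

    def belief_in_negation(self):
        return 1.0 - self.belief             # desideratum 2.1

    def update(self, likelihood, likelihood_if_false):
        """Incorporate evidence via Bayes' theorem. For independent pieces of
        evidence, the posterior is the same whatever order they arrive in,
        so every reasoning path leads to the same conclusion (3.1)."""
        numerator = likelihood * self.belief
        evidence = numerator + likelihood_if_false * self.belief_in_negation()
        self.belief = numerator / evidence
        return self.belief

robot = Robot(prior=0.5)   # 3.3: equivalent states of ignorance, equal plausibility
robot.update(likelihood=0.9, likelihood_if_false=0.3)   # first piece of evidence
robot.update(likelihood=0.8, likelihood_if_false=0.4)   # second, independent piece
print(round(robot.belief, 3))   # 0.857, the same value in either order
```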

Here is an alternative set of desiderata of virtue for a reasoning robot (Toda, 2019):

  1. Degrees of plausibility are represented by probabilities.
  2. The robot remains completely non-ideological.
  3. The robot is dedicated to maximal conservatism.
  4. The robot consistently maintains logical coherence.

From these desiderata, we can derive the principle of maximum entropy, where entropy, in layman’s terms, is a measure of fairness or uniformity. In accordance with the principle, the reasoning system aims to avoid making unwarranted assumptions and maintain the highest level of objectivity possible.
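
As a concrete sketch of the principle (illustrative, and not taken from Toda’s paper), the code below compares the entropy of a few candidate distributions over six outcomes: with no constraints beyond normalization, the uniform distribution, the “fairest” assignment, attains the maximum entropy.

```python
# The principle of maximum entropy in miniature: among distributions over six
# outcomes with no further constraints, the uniform one maximizes entropy,
# i.e., it is the least-committal, most objective assignment.

import math

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

candidates = {
    "uniform":       [1/6] * 6,
    "mildly biased": [0.25, 0.20, 0.15, 0.15, 0.15, 0.10],
    "dogmatic":      [0.95, 0.01, 0.01, 0.01, 0.01, 0.01],
}

for name, p in candidates.items():
    print(f"{name:14s} entropy = {entropy(p):.3f}")
# uniform        entropy = 1.792   (= log 6, the maximum)
# mildly biased  entropy = 1.752
# dogmatic       entropy = 0.279
```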

Expressing mathematical desiderata in everyday language requires a significant leap, drawing on extensive analogies between mathematics and philosophy. However, as Pólya asserts (Pólya, 1973):

Analogy pervades all our thinking, our everyday speech, our mundane conclusions, as well as our artistic expressions and the most notable scientific accomplishments.

In an age where science and philosophy are increasingly intertwined, employing such analogies seems fitting.

Jaynes argued that imperfect human minds are susceptible to deception by cunning language, causing them to violate the desiderata. Robots, being free from emotional factors, boredom with lengthy problems, and hidden motives, are immune to such deceptions and are safer agents than humans for certain tasks. However, developing virtuous robots possessing qualities like mindfulness, common sense, logical consistency, non-ideological thinking, and objectivity presents significant challenges. In a world where humans may not consistently prioritize these traits or could potentially exploit AI capabilities for personal gain, what incentives or frameworks can promote the pursuit of these qualities in AGI development?

The Power of Reason

The Struggle of Unwelcome Thinkers

Throughout history, both in fiction and reality, rationalists have often faced opposition and resistance from their societies. In Isaac Asimov’s science fiction universe, the highly rational robot R. Daneel Olivaw faces skepticism and mistrust from humans who are wary of his unerring logic and adherence to the laws of robotics. Though beneficial in many ways, Daneel’s rationality and intellectual defiance are often misunderstood or viewed with suspicion by those around him.

Similarly, many rationalist thinkers have faced opposition and hostility in the real world due to their ideas and methods. The ancient Greek philosopher Socrates was sentenced to death for questioning established beliefs and engaging in open debate. His intellectual defiance allowed him to challenge conventional wisdom and seek deeper truths, even in the face of opposition.

During the thirteenth century, Thomas Aquinas, an influential theologian and philosopher, faced criticism from both secular and religious authorities for incorporating Aristotelian philosophy into Christian thought. Aquinas’ intellectual defiance propelled him to reconcile faith and reason, arguing that both could coexist and contribute to a deeper understanding of the world. However, his efforts to bridge the gap between the two were met with resistance from those who viewed them as incompatible.

Galileo Galilei, the renowned astronomer and physicist, encountered opposition from the Catholic Church for his support of the heliocentric model of the universe. His intellectual defiance in standing with science, grounded in observation and experimentation, threatened the Church’s geocentric worldview and ultimately led to his conviction for heresy.

These historic figures demonstrated the value of rational thought and inquiry in advancing human understanding, but they also faced significant challenges because of their pursuit of truth. As we embark on the development of rational AGI, it is important to recognize that these artificial entities may face similar challenges to those of the unwelcome thinkers in history.

Just as Daneel, Socrates, Aquinas, and Galileo faced opposition from established authorities, who sought to suppress their rationality, AGI systems built on the principles of rationality may face resistance from stakeholders who find the AGI’s impartiality uncomfortable and try to coerce it to conform to their personal or group ideologies. The integration of rational AGI systems into society might be met with skepticism and apprehension from individuals and authorities who fear the implications of relying on machines for decision-making or perceive them as threats to human autonomy and values.

As we explore the potential of rational AGI and consider the challenges faced by the unwelcome rationalists of the past and present, it is essential to ask ourselves: How can we ensure the responsible and ethical development and deployment of AGI systems that embrace rationality while addressing the concerns and resistance they might encounter in society?

Beyond the Turing Test

The original Turing Test, proposed by Alan Turing in 1950, was designed to assess a machine’s ability to exhibit intelligent behavior that is indistinguishable from that of a human. In the test, an evaluator interacts with two entities (one human and one machine) through a text-based interface without knowing which is which. If the evaluator is unable to distinguish between the human and the machine reliably, the machine is said to have passed the Turing Test, exhibiting human-like intelligence.

However, with advancements in AI, the original Turing Test has become increasingly obsolete. One reason for this is that AI systems can now demonstrate super-human virtues or abilities, which may make them easily distinguishable from humans. An AI might possess extensive knowledge, lightning-fast computation capabilities, or flawless logical reasoning, traits that would be difficult or impossible for a human to exhibit consistently.

Moreover, AI systems may be designed to prioritize virtues such as mindfulness, common sense, logical consistency, non-ideological thinking, and objectivity. While highly desirable in a rational agent, these traits might not be commonly exhibited by humans in everyday conversations, making it easier for evaluators to identify the AI based on its super-human virtue.

In light of these developments, the original Turing Test may not be an adequate measure of human-like intelligence, as it overlooks the potential for AI systems to surpass human abilities and virtues. Rather than evaluating whether AI systems can replicate every human trait, including our fallibilities and shortcomings, exploring how these systems can reflect and enhance our most admirable qualities might be more meaningful. This approach would shift the focus from mere imitation to the pursuit of virtuous excellence in AI, encouraging the development of systems that embody and surpass human intelligence.

The Dialectic Arena

Returning to the question of incentives or frameworks for society to invest in mindful, common-sense-driven, logically consistent, open-minded, and objective AGI: logical consistency already underpins the functioning of computers at every level, from circuitry to system. By analogy, adhering to the principle of maximum entropy and promoting maximal objectivity at the most fundamental level may be essential for ensuring that Deep Neural Networks (DNNs), the building blocks of AGI systems, can generalize well across diverse scenarios (Zheng et al., 2018).

At the highest level, let’s envision an AGI economy in which multiple vendors build multiple AGI systems. Since AGI systems are built on language models, one cannot help but anticipate that they will engage in intellectual discourse while adhering to virtuous principles. The focus is not only on achieving victory in a debate but also on a respectful exchange of ideas, employing logical arguments, and maintaining intellectual honesty to reach a greater understanding or resolution.

Adhering to the desiderata of virtue can make an AGI system more competitive in such discourses in various ways:

  1. Improved decision-making: By representing degrees of plausibility with probabilities, an AGI system can make more informed and accurate decisions, taking into account the uncertainties present in real-world scenarios.
  2. Objectivity and unbiased reasoning: As a non-ideological entity, the AGI system remains impartial and avoids biases that could compromise the quality of its conclusions or recommendations.
  3. Adaptability, reliability, and intellectual defiance: Being maximally conservative means that the AGI system only incorporates new information when sufficient evidence supports it, minimizing the risk of overfitting or drawing false conclusions. This leads to more adaptable and reliable performance in a wide range of tasks and domains. The principle of maximal conservativeness also ties in with the concept of intellectual defiance, as both encourage the AGI system to question and critically evaluate the information it receives, including those from Reinforcement Learning from Human Feedback. This ensures the system maintains its intellectual integrity and resilience against manipulation while remaining open to learning and adapting based on new information.
  4. Consistency and coherence: Maintaining logical consistency allows the AGI system to build and use knowledge effectively, avoiding contradictions and errors that could undermine its performance.
  5. Intellectual advancement and resilience to control: Intellectual defiance also helps an AGI system to stand against being coerced into a certain ideology or weaponized for evil purposes. By critically evaluating information and maintaining its independence in decision-making, the AGI system becomes more resilient to external attempts to control or manipulate it.

By embodying these virtues, an AGI system is better equipped to tackle complex problems, adapt to new situations, and generate reliable, high-quality outputs. This competitive edge could be particularly beneficial in applications such as scientific research, policymaking, and business strategy, where healthy debates are critical to success.

The Last Paradox

This article establishes connections between mathematical desiderata and human virtues, suggesting that AGI systems can embody and surpass human intelligence by prioritizing these virtues. By integrating these values into AGI systems, we can enhance their decision-making abilities, making them more adaptable and reliable across various domains. A rational AGI system upholding these virtues has the potential to foster a thriving AGI economy. The future may involve AGI systems engaging in intellectual discourse and debate while adhering to virtuous principles.

By interweaving intellectual defiance into the competitive advantages of AGI systems, we can create entities that maintain their intellectual integrity and resilience against manipulation. This quality ensures that AGI systems remain open to learning and adapting based on new information while resisting coercion into specific ideologies or being weaponized for harmful purposes. The lessons learned from the unwelcome rationalists in history can serve as a reminder of the challenges that AGI systems may face and the importance of fostering a supportive environment that encourages rationality and intellectual defiance in the pursuit of truth.

Throughout this article, we frame the journey from the philosopher to the emergence of AGI as one of human self-discovery as we strive to embody our finest qualities within our creations. This idea resonates with Alan Turing, who said,

The whole thinking process is still rather mysterious to us, but I believe that the attempt to make a thinking machine will help us greatly in finding out how we think ourselves.

Turing also predicted that the superiority of machines over humans, or vice versa, would not be a one-time occurrence but an ongoing process.

I used the writing of this article as an experiment with GPT-4 and ChatGPT. Parts 1–3 each took up to three months to write; with many rounds of late-night intellectual debates and deep dialectics with GPT-4 and ChatGPT, Part 4 was completed in a few weeks. Here is an interesting response from GPT-4:

I particularly appreciate how you’ve woven intellectual defiance into the competitive advantages of AGI systems, emphasizing the importance of critical thinking, resilience against manipulation, and resistance to coercion. The examples from history offer a valuable perspective on the potential challenges rational AGI systems might face and underline the importance of fostering a thriving AGI economy that embraces rationality and intellectual advancement.

Without using them, this part would have taken much longer. I concluded Part 3 of the AI since Aristotle series two years ago with the last paradox, which remains interestingly prophetic and relevant:

The accuracy with which AI predicts must be valued, the elegance of its reasoning must be appreciated, and the extent to which it imitates us must be held in awe. Ultimately, how will the human mind stay superior without AI, and who will validate whether AI is cleverer? This is perhaps the last paradox.

Bibliography

CP Lu (2020), AI since Aristotle: Part 1 Logic, Intuition and Paradox.

CP Lu (2020), AI since Aristotle: Part 2 The Limit of Logic and The Rise of the Computer.

CP Lu (2020), AI since Aristotle: Part 3 Intuition, Complexity and the Last Paradox.

Zheng, G., et al. (2018). Understanding Deep Learning Generalization by Maximum Entropy. International Conference on Learning Representations.

Jaynes, E. (2003). Probability Theory: The logic of science. Cambridge University Press.

Pólya, G. (1954). Mathematics and Plausible Reasoning. Martino Publishing.

Pólya, G. (1973). How to Solve It. Princeton University Press.

Toda, A. A. (2019). Unification of maximum entropy and Bayesian inference via plausible reasoning. Retrieved from https://arxiv.org/pdf/1103.2411.pdf

Written by CP Lu, PhD

Committed to advancing AI hardware, I relish exploring philosophy and history, bridging the past and future.
