Can Machines Develop Morality?
A Causal-Theoretic Perspective
This is an extended excerpt from the main article, A Brief History of Causality from Homo Sapiens to AGI, which explores the evolution of causal reasoning from ancient philosophy to modern artificial intelligence.
Aristotle’s Distinction of Causes: The Foundation for Scientific Inquiry
Aristotle’s contribution to our understanding of causality is foundational. In his framework, he brilliantly distinguishes four types of causes, each explaining a different aspect of why things happen:
- Material Cause: What something is made of.
- Formal Cause: The shape or pattern something takes.
- Efficient Cause: The mechanism or process by which something happens — how it happens.
- Final Cause: The purpose or reason why something happens.
Though modern science has largely set aside the first two causes, the distinction between the last two allowed it to focus on efficient causes — the how — without getting entangled in philosophical discussions of the why. As Aristotle put it, “The final cause, then, is the purpose, that for the sake of which something exists or is done” (Aristotle, Metaphysics, 1998, p. 41).
For example, the efficient cause of a smartphone is how it’s made — through design, engineering, and manufacturing. The final cause is why it exists — to provide users with communication, entertainment, and access to information.
This separation helped establish modern science. But today, as we expect AI to one day participate in ethical decision-making, it is time to bring Aristotle’s final causes back into the analysis of human action. For machines to develop morality, they must learn to understand not only mechanisms (efficient causes) but also purposes (final causes).
Judea Pearl’s Causal Theory: The Backbone of Modern Causal Reasoning
Judea Pearl revolutionized the theory of causal reasoning by developing a structured framework for representing cause-and-effect relationships through causal graphs. These graphs visually represent the relationships between variables, showing how one variable influences another through direct or indirect mechanisms — efficient causes (Pearl & Mackenzie, 2018).
The Smoking-Lung Cancer Causal Graph: An Example of Efficient Causes
Let’s take a simple example using smoking and lung cancer to illustrate how Pearl’s causal graphs work. We can map the causal relationships as follows:

- Smoking → Tar Deposition: Smoking causes harmful substances, like tar, to accumulate in the lungs.
- Smoking → Lung Cancer: Smoking increases the likelihood of developing lung cancer.
- Tar Deposition → Lung Cancer: Tar accumulation damages lung cells, leading to cancer.
- Genetic Predisposition → Lung Cancer: Genetic factors predispose certain individuals to cancer, regardless of smoking.
This causal graph illustrates a chain of efficient causes. Pearl’s framework allows us to model and analyze these relationships mathematically, helping us predict the outcomes of interventions such as quitting smoking.
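To make this concrete, here is a minimal sketch in Python of how such a graph might be represented: a directed acyclic graph encoded as a plain dictionary mapping each variable to its direct causes. The variable names and the encoding are illustrative choices of this excerpt, not part of Pearl’s formal machinery.

```python
# A minimal sketch of the smoking-lung cancer graph as a DAG.
# Each key maps a variable to the set of its direct causes (parents).
# Variable names are illustrative, not from any standard library.

parents = {
    "smoking": set(),
    "genetic_predisposition": set(),
    "tar_deposition": {"smoking"},
    "lung_cancer": {"smoking", "tar_deposition", "genetic_predisposition"},
}

def ancestors(var, graph):
    """Return every variable with a directed path into `var`."""
    found, stack = set(), list(graph[var])
    while stack:
        cause = stack.pop()
        if cause not in found:
            found.add(cause)
            stack.extend(graph[cause])
    return found

# All efficient causes that can influence lung cancer:
print(ancestors("lung_cancer", parents))
# -> {'smoking', 'tar_deposition', 'genetic_predisposition'} (set order varies)
```

A dictionary like this captures only the graph’s shape; a full causal model also attaches a mechanism or probability distribution to each variable, as the sketch in the next section does.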
Pearl’s Causal Hierarchy: Understanding Different Levels of Causal Reasoning
Pearl developed the Causal Hierarchy, which categorizes reasoning into three levels, each representing a deeper understanding of causality (a toy simulation contrasting the first two levels follows this list):
- Association (Seeing): At this level, AI systems recognize patterns and correlations. For instance, an AI system might observe that smokers are more likely to develop lung cancer, but it cannot tell from correlations alone whether smoking causes the disease.
- Intervention (Doing): The second level involves understanding how actions affect outcomes. AI can predict the effect of a specific action, such as how quitting smoking reduces lung cancer risk.
- Counterfactual Reasoning (Imagining): The highest level involves asking “what if” questions. For example, What if the person had never smoked? This enables AI to simulate alternate realities and explore the consequences of different decisions.
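The gap between the first two levels can be illustrated with a toy structural causal model of the smoking graph. All probabilities below are invented for illustration; the point is simply that conditioning on observed smoking (level one) and forcing smoking off with the do-operator (level two) are different computations.

```python
import random

random.seed(0)

def sample(do_smoking=None):
    """One draw from a toy structural causal model of the graph above.
    All probabilities are invented for illustration."""
    genes = random.random() < 0.1
    smoking = (random.random() < 0.3) if do_smoking is None else do_smoking
    tar = smoking and random.random() < 0.9
    cancer = random.random() < 0.05 + 0.1 * smoking + 0.25 * tar + 0.2 * genes
    return smoking, cancer

# Level 1 (association): passively observe, then condition on smoking.
observed = [sample() for _ in range(100_000)]
smokers = [cancer for smoking, cancer in observed if smoking]
print("P(cancer | smoking observed):", round(sum(smokers) / len(smokers), 3))

# Level 2 (intervention): do(smoking = False) overrides the smoking
# mechanism for everyone, severing the arrows into that variable.
forced = [sample(do_smoking=False) for _ in range(100_000)]
print("P(cancer | do(no smoking)):",
      round(sum(cancer for _, cancer in forced) / len(forced), 3))
```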
The Importance of Counterfactuals for Moral AI
Counterfactual reasoning is central to moral decision-making. It allows AI systems to simulate different possibilities and evaluate what happened and what could have happened under different circumstances. Pearl emphasizes that counterfactuals are crucial for assigning moral responsibility, stating: “Counterfactuals allow us to distinguish between correlation and causation, and to define responsibility, blame, and credit, which are vital for moral decisions” (Pearl & Mackenzie, 2018, p. 218).
In criminal law, for example, counterfactual reasoning could help determine whether a crime would have occurred under different circumstances. This is essential for evaluating responsibility and intent, which are central to moral judgments. Pearl argues that counterfactual reasoning allows us to answer crucial moral questions like, What if I had done something differently?
Without counterfactuals, AI would be limited to understanding the immediate outcomes of actions without being able to assess alternative possibilities, which is fundamental for making ethical decisions.
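Pearl formalizes counterfactual queries as a three-step procedure: abduction (infer the hidden circumstances consistent with what actually happened), action (override the variable of interest), and prediction (re-run the model for that same individual). Here is a minimal sketch, reusing the toy model above with its randomness made explicit as noise terms; the rejection-sampling abduction is a simplification for illustration.

```python
import random

def outcome(u, do_smoking=None):
    """Deterministic structural equations driven by exogenous noise `u`.
    Same invented numbers as the previous sketch; purely illustrative."""
    genes = u["genes"] < 0.1
    smoking = (u["smoke"] < 0.3) if do_smoking is None else do_smoking
    tar = smoking and u["tar"] < 0.9
    return u["cancer"] < 0.05 + 0.1 * smoking + 0.25 * tar + 0.2 * genes

random.seed(1)

# Step 1 (abduction): find exogenous noise consistent with the facts,
# here a person who smoked and developed cancer.
while True:
    u = {k: random.random() for k in ("genes", "smoke", "tar", "cancer")}
    if u["smoke"] < 0.3 and outcome(u):
        break

# Step 2 (action): override the structural equation for smoking.
# Step 3 (prediction): re-run that same individual's equations.
print("Factual world (smoked):", outcome(u))                     # True
print("Counterfactual (never smoked):", outcome(u, do_smoking=False))
```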
Why Purpose Matters
While Pearl’s framework is powerful for understanding the how — the efficient causes of actions — it doesn’t address the why. In ethical reasoning, final causes — the motivations or purposes behind actions — are just as important. For example, two people may both smoke, but their reasons for doing so may vary: one might smoke to cope with stress, while another does so out of social pressure or addiction. These final causes shape our moral judgment of their actions.
To capture the full scope of moral decision-making, we need to modify our causal graph for smoking and lung cancer by adding final causes:
- Smoking → Tar Deposition (Efficient Cause): Smoking leads to tar deposition in the lungs.
- Why is the person smoking? (Final Cause): Is the person smoking to cope with stress, out of addiction, or due to social pressure? Each motivation carries different moral implications.
- Smoking → Lung Cancer (Efficient Cause): Smoking increases cancer risk.
- Genetic Predisposition → Lung Cancer (Efficient Cause): Some individuals are more susceptible to lung cancer due to genetics.
By incorporating final causes, AI can consider not just the mechanics of actions but also the intentions and motivations behind them. For example, if AI understands that someone smokes due to stress, it can recommend psychological support or stress-reducing interventions rather than focusing solely on nicotine cessation.
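One speculative way to encode this extended graph is to add a motivation variable as a parent of the action and let the recommended intervention depend on it. To be clear, this is a sketch of the proposal above, not standard Pearl machinery: the node name, the motivation categories, and the policy table are all assumptions.

```python
# A speculative extension: a final-cause node attached to the action.
# "motivation" and the policy table are hypothetical; Pearl's framework
# does not prescribe this, and real systems would need far richer models.

parents = {
    "motivation": set(),                  # final cause: why the person smokes
    "smoking": {"motivation"},
    "tar_deposition": {"smoking"},
    "genetic_predisposition": set(),
    "lung_cancer": {"smoking", "tar_deposition", "genetic_predisposition"},
}

interventions = {
    "stress": "offer psychological support and stress-reduction programs",
    "addiction": "offer nicotine-replacement or cessation therapy",
    "social_pressure": "work on the social environment and peer norms",
}

def recommend(motivation):
    """Choose an intervention that targets the purpose, not just the mechanism."""
    return interventions.get(motivation, "gather more context before intervening")

print(recommend("stress"))
# -> offer psychological support and stress-reduction programs
```

Note the design choice: the efficient-cause edges stay untouched, while the final cause changes which intervention we select.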
Are We Ready to Address Machine Morality?
Consider the potential applications of moral AI. In criminal justice, for instance, AI could assess the intent behind crimes. Two individuals may commit the same criminal act, but their motives may differ. Understanding these final causes is crucial for assigning moral and legal responsibility.
In healthcare, AI systems could consider patients’ motivations and emotional needs when weighing ethical decisions. For example, if a patient refuses treatment, AI could explore why (whether due to fear, financial constraints, or other personal factors) and help healthcare professionals provide more compassionate care.
These applications may sound futuristic, but they could be closer than we think. Pearl himself notes that moral reasoning requires understanding not just what happened but why it happened. The question we now face is whether AI systems are ready to incorporate these final causes into their decision-making processes.
We can extend Pearl’s causal framework by integrating final causes to create AI systems capable of making ethical decisions. Here is how we might move forward (a minimal sketch of such a system follows the list):
- Incorporating Final Causes into Causal Models: AI systems need to model both efficient and final causes — understanding the motivations behind actions as well as their mechanical outcomes.
- Enhancing Context Sensitivity: AI must account for the social, cultural, and personal contexts that shape human motivations. Final causes don’t exist in a vacuum, and AI needs to be sensitive to these broader influences.
- Building Human-AI Interfaces: AI should provide insights into both efficient and final causes, allowing humans to contribute empathy, intuition, and moral judgment to the decision-making process.
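Here is one hypothetical shape such a system could take: a container that keeps efficient causes, final causes, and context side by side and surfaces all three to a human decision-maker. Every name in this sketch is an assumption; it mirrors the three steps above rather than any established API.

```python
from dataclasses import dataclass, field

@dataclass
class CausalAccount:
    """A hypothetical container pairing efficient causes, final causes,
    and context, mirroring the three steps above. Every name here is
    an assumption for illustration, not an established API."""
    efficient_causes: dict            # variable -> set of parent variables
    final_causes: dict                # actor -> stated or inferred purpose
    context: dict = field(default_factory=dict)  # social/cultural factors

    def explain(self, outcome):
        """Surface both kinds of cause so a human can apply moral judgment."""
        return {
            "outcome": outcome,
            "mechanisms": sorted(self.efficient_causes.get(outcome, set())),
            "purposes": self.final_causes,
            "context": self.context,
        }

account = CausalAccount(
    efficient_causes={"lung_cancer": {"smoking", "tar_deposition"}},
    final_causes={"patient": "smokes to cope with workplace stress"},
    context={"setting": "primary care consultation"},
)
print(account.explain("lung_cancer"))
```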
Conclusion
Aristotle’s distinction between efficient and final causes allowed scientific inquiry to focus on the how of actions. For AI to take on legal and ethical decision-making roles, it is time to revisit the why. By integrating Pearl’s causal reasoning with Aristotle’s final causes, we can create machines that grasp both the mechanics of actions and their underlying purposes.
The question is: Are we ready to equip machines with the tools they need to navigate moral and ethical challenges, and are we ready to embrace the changes that will follow?
References
- Aristotle. Metaphysics. Translated by Hugh Lawson-Tancred, Penguin Classics, 1998.
- Pearl, Judea. Causality: Models, Reasoning, and Inference. 2nd ed., Cambridge University Press, 2009.
- Pearl, Judea, and Dana Mackenzie. The Book of Why: The New Science of Cause and Effect. Basic Books, 2018.