The Ethics of the Ethics of AI?
Sudhir Tiku
Fellow & Founder Member
Vice President for Asia Pacific and China, Bosch Singapore
In a given situation, where we need to act or decide, the question our inner voice asks us is: “What should I do?”
Will this question change in the age of Artificial Intelligence or AI?
But what exactly is Artificial Intelligence? Is it a blind, curve-fitting probabilistic model? Is it bounded optimality? Is it the idea of a rational agent that perceives and acts to maximize its expected utility? Or is it the science of making machines do things that would require intelligence if done by humans?
As machine learning-based AI systems become mainstream and more disruptive, they will create unprecedented risk and reward for humans. There is the risk of humans losing the race for intelligence. There is also the potential reward of solving global challenges, like climate change, through intelligent systems.
The study of Artificial Intelligence is meaningful because it merges fields like computer science, neuroscience, behavioral economics, and social physics. Birds inspired flight, and now airplanes carry thousands of people. Likewise, AI, inspired by our neural networks, could evolve into something bigger, better, or potentially more dangerous than us. In that sense, AI is also an effort to understand what it means to be human and how we behave.
To build a meaningful AI ethics framework, it’s essential to create a common understanding of concepts like values, morals, principles, and morality.
- Values are what individuals and cultures hold important, like integrity.
- Principles are universal, permanent guardrails that guide behavior.
- Morals are context-specific behavioral norms.
- Morality is a psychological adaptation that enables cooperation.
Ethics in artificial intelligence is about determining what is right to do, and what rights we have. It addresses normative questions about what behavior is morally right.
AI ethics emerged in response to the harms caused by misuse, poor design, or unintended consequences of AI technologies. It involves not just engineering but also moral leadership.
AI ethics frameworks guide developers using widely accepted ethical principles:
- Beneficence: promoting well-being, preserving dignity, and sustaining the planet
- Non-maleficence: ensuring privacy, security, and caution
- Autonomy: preserving human choice
- Justice: ensuring fairness and equity
- Explicability: enabling understanding and algorithmic transparency
These align with bioethics principles and offer a foundation for global AI governance.
For deeper value alignment, we must consider:
- Deontological ethics: focus on right action regardless of outcome
- Consequentialism: judge actions by their outcomes
- Virtue ethics: character and values determine morality
AI Ethics, then, is the set of values, principles, and beliefs that employ widely accepted standards to guide moral conduct in the development and use of AI technologies. The Ethics of the Ethics of AI, in turn, is the task of choosing and implementing the values, principles, and guardrails that guarantee human wellbeing.

Phenomenology is the philosophy of experience; for phenomenology, the ultimate source of meaning is the lived experience of human beings. Teleology is the study of purpose and of purposeful, goal-seeking behaviour. For a meaningful value alignment between humans and machines, both phenomenological and teleological frameworks must be satisfied to set the right benchmarks and execute against them. We are a diverse species with unique socio-cultural leanings and varied views of what is good and bad; in the applied Ethics of AI, we therefore need to explore diverse types of ethical frameworks.
The selected ethical framework for AI depends on the use case, model robustness, and explainability. A useful thought experiment here is the Veil of Ignorance, which prevents bias by removing personal context when making moral judgments.
However, the core question remains: Who decides these values? In a pluralistic society, we need global convergence on values, similar to how the Universal Declaration of Human Rights emerged after World War II.
Academic and technical work over recent years has led to agreement on key AI ethical principles. These principles help guide AI development that supports human well-being, prevents harm, and ensures fairness and transparency.
In practice, we must ask:
- Will the AI system enhance individual dignity?
- Will it promote open inclusion and richer human connection?
- Will it create sustainable well-being?
If value alignment in AI is neutral, honest, and humane, it has the potential to drive positive societal impact. Without ethics, AI systems may amplify inequality, create new problems, or cause unintended harm.
Therefore, to steer AI toward positive outcomes, we don’t just need regulations; we need an actionable ethical AI framework grounded in shared human values.
That, precisely, is the Ethics of the Ethics of AI.
