Can AI Be Ethical?
Or Do We Need Ethical Humans First?
By Sudhir Tiku
Fellow AAIH & Editor AAIH Insights

The contents presented here are based on information provided by the authors and are intended for general informational purposes only. AAIH does not guarantee the accuracy, completeness, or reliability of the information. Views and opinions expressed are those of the authors and do not necessarily reflect our position or opinions. AAIH assumes no responsibility or liability for any errors or omissions in the content.
Artificial intelligence has moved quickly from science fiction to daily life. Cars drive themselves on busy roads, apps give medical advice, and chatbots answer questions with an ease that once belonged only to teachers, friends, or parents. With every advance comes the same uneasy question: can these machines be trusted to make the right choices?
Put simply, can AI be ethical?
It is an attractive idea. When a self-driving car brakes before hitting a pedestrian, it seems as though the machine has acted responsibly. When a chatbot refuses to promote hate speech, it looks as though it has chosen kindness. But beneath the surface, something important is missing. Machines do not care. They do not wrestle with guilt, pride or compassion.
They predict, they calculate, they reproduce patterns in the data we feed them. If they appear moral, it is only because humans have built morality into their training. AI is not a moral agent; it is a mirror. And mirrors reflect what is put before them.
Long before algorithms, humans struggled with how to decide what is right or wrong. The great philosophers did not agree with each other, but they offered frameworks that still shape our thinking. Aristotle in ancient Greece believed morality came from building good habits of courage, honesty and moderation. You became good by practicing goodness.
Immanuel Kant in Europe insisted that morality was about duty, following universal rules that apply in every situation. Jeremy Bentham and John Stuart Mill, the utilitarians, argued that the best action is the one that brings the most happiness or the least harm. In African thought, the philosophy of Ubuntu teaches that we are ethical only in community with others: “I am because we are.”
These approaches reappear in the way we build AI. Reinforcement learning, where systems are “rewarded” for good behavior and “punished” for bad, echoes utilitarian ideas. Training a model with ongoing feedback to improve its answers looks a little like Aristotle’s vision of habit and practice. Efforts to design AI that responds to community values recall Ubuntu. But there is a gap.
A machine can mimic the structure of an ethical theory. It cannot feel the weight of choice. It cannot regret. It cannot decide differently tomorrow because its conscience has grown.
Making Philosophy Simple
Philosophy can seem abstract, but its lessons can be explained in ordinary life. Think about Kant’s duty ethics. Imagine a playground rule: always share your toys. Even if you don’t feel like sharing today, the rule applies and there are no excuses. That’s Kant’s idea of duty.
Utilitarianism is closer to a family dinner. If five people want pizza and two want pasta, utilitarian thinking says order pizza because it makes more people happy overall. The “greater good” matters more than individual preference.
Ubuntu, in contrast, is like a neighborhood where everyone looks out for each other. If one house burns, the whole community feels the loss. Your well-being depends on mine, and mine on yours.
AI designers try to use these moral logics, but for machines, they are recipes without meaning. A chatbot that follows rules does not believe in duty. A system that maximizes happiness does not know joy. They are clever imitations but not moral commitments.
Whose Morality Gets Embedded?
This leads to a harder question: Whose ethics are we embedding in AI?
When an American company designs a chatbot, it often reflects Western views on free speech and safety. When an Asian company builds one, it carries a different set of cultural and political values. An African or Latin American team might design around community or solidarity rather than individual freedom. What counts as ethical is not the same everywhere. Yet global technology companies often act as though it is.
We can see this in content moderation. What you are allowed to post on social media is determined by AI systems built largely in Silicon Valley. Their standards may make sense in California but feel alien or unfair elsewhere. The risk is that AI becomes not just a tool of convenience but a subtle tool of cultural dominance, spreading one set of values across the world in the name of “universal ethics.”
Take a simple example: humor. A joke that is harmless in one country may be offensive in another. An AI trained mainly on Western data might misjudge, flagging local humor as hate speech or letting offensive material pass because it doesn’t register as such in American culture. What looks like fairness in one place can be unfair in another.

The Invisible Labor Behind AI
Another layer of the ethics puzzle is hidden in plain sight: the people who make AI work. Behind the curtain of automation is an army of human workers, often in the Global South, labeling images, reviewing chatbot responses and filtering toxic content. They spend hours staring at violent or disturbing material so that AI systems can learn to block it. Many are paid only a few dollars an hour. Some experience long-lasting trauma.
A 2023 report on AI moderation work in Kenya revealed that workers earning less than $2 an hour had to read streams of violent and abusive posts to “teach” an AI what to block. The companies selling the AI as “safe” rarely mentioned these workers, who bore the emotional scars of exposure to content the rest of the world would never see.
So when companies claim that their AI systems are ethical because they produce polite answers or avoid dangerous advice, we should ask: what about the ethics of the system that employs invisible workers in difficult conditions?
Can an AI product be considered ethical if its creation depends on exploitation? This question is rarely asked, yet it should be central.
The Real-World Consequences of Flawed AI
Despite careful processes and due diligence, it is not hard to find examples of AI systems gone wrong.
A major technology company once tested an automated resume screener that learned to prefer male candidates because past data reflected biased hiring.
Facial recognition systems misidentified people of color at much higher rates than white people, leading to wrongful arrests. None of these harms were accidents of mathematics. They were human biases baked into data, then scaled up by machines that treated them as truth.
Healthcare offers more cases. In Asia, an AI system built to predict tuberculosis outbreaks failed because the data came mainly from urban hospitals, ignoring rural realities where the disease was most severe. In the US, a health risk algorithm used by insurers underestimated the needs of Black patients because it used past spending as a proxy for need.
Since less money had historically been spent on their care, the algorithm assumed they needed less help. The result was less support for people who were already underserved.
The lesson is clear: if we do not fix the human systems that produce the data, AI will magnify injustice. The mirror does not change the face; it enlarges it.
The idea of AI as a mirror may be the simplest way to explain its ethical limits. A mirror does not smile if you frown. It does not invent a new image. It reflects faithfully, and sometimes cruelly.
If you want the reflection to change, you must change yourself. In the same way, AI shows us who we are. If we have built fair societies, AI may reflect fairness. If we live in unjust systems, AI will reflect injustice, and it will do so with speed, scale, and apparent authority.
The danger is not that machines will suddenly become evil. It is that we will look at their reflection of our own biases and mistake it for truth.
The Displacement of Responsibility
Talking about “ethical AI” can be a convenient distraction. It allows governments and corporations to shift the debate from their own responsibilities to the supposed agency of machines. It is easier to say, “How do we make the algorithm fair?” than to ask, “Why did we design a system that benefits from unfairness in the first place?”
A government using AI surveillance against protesters can claim the system is neutral. A company outsourcing data cleaning to underpaid workers can claim the AI product is safe and responsible. A bank using credit algorithms that deny loans to the poor can claim it is simply following the data. In each case, the human decision is hidden behind a screen of technology.
But the truth remains: machines cannot carry moral responsibility. Only humans can.
Global Perspectives on AI Ethics
Different parts of the world are wrestling with AI ethics in different ways.
The European Union has passed the AI Act, which sets strict rules for “high-risk” AI systems like facial recognition and health diagnostics. It is the most ambitious attempt so far to regulate AI in the name of human rights.
China has focused on using AI for social stability, regulating algorithms that affect public opinion and requiring companies to align systems with state values. Critics may see this as control rather than ethics, but it reflects a different vision of what "responsibility" means.
In Africa, scholars have emphasized Ubuntu, the philosophy of interdependence. The idea is that AI should be built for collective well-being, not just individual convenience. This perspective is underrepresented in global debates but could offer a richer, more inclusive path forward.
The United States, meanwhile, has preferred voluntary guidelines and innovation first, regulation later. This reflects the influence of large corporations and the culture of the tech industry.
The deeper challenge is not technical but moral. It is about courage. It is about whether leaders will slow down product launches if they are unsafe, whether regulators will resist lobbying pressure, whether societies will demand accountability.
Technology magnifies intent. If the intent is careless or greedy, AI will multiply harm. If the intent is thoughtful and inclusive, AI can multiply good. But the choice comes first from us. Machines cannot decide our values. They can only project them.
Imagining Two Futures
Consider two possible futures. In the first, ethics is ignored. AI systems trained on biased data determine who gets jobs, who receives medical care, and who is watched by police. Surveillance spreads quietly. The poor are denied credit because of “neutral” algorithms. Human workers continue to be exploited in the shadows. The machines do not rebel; they simply make unfair systems more efficient.
In the second future, ethics is prioritized. Governments create strong oversight, companies design AI with transparency and accountability, and citizens are involved in shaping values. AI helps doctors in rural clinics, supports teachers in crowded classrooms, and enables fairer access to financial services. In this world, AI does not solve morality but supports it.
Which future we move toward depends not on machines but on us.
The Question Turned Back on Us
So we return to the starting question: can AI be ethical?
The honest answer is no, not on its own. AI is not a moral being. It is a tool, and like any tool, it reflects the hands that wield it.
The real question is harder: are we willing to be ethical in the way we build and use AI?
That is a question about politics, economics, and culture, not about code. If our institutions are not fair, if our priorities are profit above all, then no algorithm will save us. If we are willing to choose dignity, fairness and justice, then AI may help us extend those values.
Machines will never carry the burden of morality. That weight belongs to us and always will.
Author: Sudhir Tiku - Refugee, TEDx Speaker, Global South Advocate. Fellow AAIH & Editor, AAIH Insights

