
The Ethics of the Ethics of AI?

Sudhir Tiku
Fellow & Founder Member
Vice President for Asia Pacific and China, Bosch Singapore

 

In a given situation, where we need to act or decide, the question our inner voice asks us is:

“What should I do?”

Will this Question change in the age of Artificial Intelligence or AI?

But what exactly is Artificial Intelligence or AI?

Is it a model-blind, curve-fitting probabilistic tool, or is it bounded optimality? Or is it the idea of a rational agent that perceives and acts to maximise its expected utility? Or is it the science of making machines do things that would require intelligence if done by men and women? As machine-learning-based artificial intelligence models become mainstream and more disruptive, they will create unprecedented risk and reward for humans. There is the risk of humans losing the race of intelligence. There is also the possible reward that our hardest problems, like climate change, may get a credible solution. The study of Artificial Intelligence is also becoming meaningful as it creates a collaborative overlap among the fields of computer science, social physics, neuroscience and behavioural economics. Birds gave us the inspiration to fly, and we ended up building sophisticated long-haul aeroplanes which now carry thousands of people across continents. In a similar manner, AI, which draws the inspiration for its design from the neural networks of our brains, may end up creating a digital something which is bigger, better or bitter than us. Hence, we can say that AI is the understanding of what it means to be a human and how humans behave. Knowing is awareness, while understanding is the interpretation, processing and application of that awareness.

To create a framework of ethics for the design, development and delivery of Artificial Intelligence, it is important to create a common understanding of terms like values, morals, principles and morality. Values are what we hold important (for example, integrity); they are our personal sense of right and wrong, and are understood as ideals shared by members of a culture about what is good or bad. Principles are rules that are broadly unchanging, permanent and universal in nature; they are the guardrails that guide our behaviour. Behaviour is the specific way in which we act or conduct ourselves. Morals are context-specific behavioural norms formed through peer pressure and individual experience, and morality is a set of psychological adaptations that allows otherwise selfish individuals to reap the benefits of co-operation. Connecting these terms together, we can summarise that Ethics is what is RIGHT to do and what we have a RIGHT to do. Ethics investigates normative questions about what people ought to do or which behaviour is morally right.

AI Ethics emerged as a response to the range of individual and societal harms that the misuse, abuse, poor design or negative unintended consequences of Artificial Intelligence may cause. The Ethics of AI involves engineering and, most importantly, moral leadership. AI Ethics is the set of values, principles and beliefs that employs widely accepted standards to guide moral conduct in the development and use of AI technologies. Hence, the Ethics of the Ethics of AI is to choose and implement values, principles and guardrails that guarantee human wellbeing. Phenomenology is the philosophy of experience; for Phenomenology, the ultimate source of meaning is the lived experience of human beings. Teleology emphasizes the study of purpose and purposeful behaviour in seeking a goal. For a meaningful value alignment of humans and machines, both phenomenological and teleological frameworks have to be satisfied for a correct benchmark and execution. We are a diverse species with unique socio-cultural leanings and varied views of what is good and bad; hence, in the applied Ethics of AI, we need to explore diverse types of ethical frameworks.

 

a. Deontological Framework: Right action is not solely determined by consequences. Moral action is the key.
b. Consequentialist Framework: Right action is solely determined by consequences, and the larger good is a practical guideline.
c. Virtue Ethics Framework: For the right action, character and values matter above all.

Which framework is selected for the ethical development of AI models depends on downstream factors like the use case of the model, the users of the model, the robustness of the model and the explainability of the model. The applicability of a particular model can be analysed by a thought experiment in which the impacted stakeholders make their choice from behind a “Veil of Ignorance” – a device that prevents them from knowing their own moral beliefs or their position in relation to the model. While we can choose an ethical framework for the value alignment of an AI system, it is also crucial to be ready with the values that we wish to encode in the models.

But who decides these values?

We are a plural society with multi-stakeholder interests, different levels of economic development and different perceptions of morality. Hence a global convergence and overlapping consensus on values is needed, much as we reached consensus on universal basic human rights after the world wars. In the last few years, a great deal of academic and technical work has been done in the applied AI Ethics space, and we find global convergence around the following five ethical principles for the AI use ecosystem.

These are:

  • Beneficence: promoting wellbeing, preserving dignity and sustaining the planet.
  • Non-maleficence: privacy, security and capability caution.
  • Autonomy: the power to decide, or the choice not to decide.
  • Justice: promoting prosperity, preserving solidarity and avoiding unfairness.
  • Explicability: enabling the understanding of the model through intelligibility and accountability.

The four principles of bio-ethics, namely Autonomy, Justice, Non-maleficence and Beneficence, find resonance with these five ethical principles, and that lends credibility to their acceptance. Respect for human autonomy, prevention of harm, fair outcomes and explainability provide a good starting point for the convergence needed for value alignment. The Ethics of the Ethics of AI starts with a vocabulary which should enable AI designers to answer the following questions:

 

a. Will the AI enhance the dignity of individual persons who use it?

b. Will it enhance the richness of human connection with open inclusion?

c. Will it create wellbeing? 

These questions can be answered if the value alignment of AI is neutral, honest and humane. The development and extensive use of AI holds the potential for both positive and negative impact on societies and communities, as it can amplify existing inequalities, cure old problems or, in the worst case, cause new ones. To steer towards a solution where AI creates positive societal impact, we need not only regulation and standards but also a framework of ethical principles within which concrete next steps and actions can be articulated.

That is exactly what the ETHICS of the Ethics of AI is.

