
Artificial Intelligence – The Curious Case of Ethical Alignment

By Sudhir Tiku, Fellow of the Alliance for AI & Humanity (AAIH)

 

Introduction:

From a high-level view, AI (Artificial Intelligence) offers us an opportunity to understand who we are as humans, while ethics plays a role in the analysis of human behaviour. Hence ethics and AI are deeply intertwined, and this interdisciplinary approach will become the harbinger of course correction in both the human and the technological progress of the future. Normative ethics is concerned with theories that provide moral rules to govern human behaviour. Utilitarianism, for instance, proposes that, over the long term, technologies should be steered to create the maximum comfort and happiness for the largest possible number of people; Kantian ethics instead grounds such rules in duty and universalisable principles. If ethics were compared to sport, the normative ethicist would be the referee who sets up the dos and don'ts and governs the game. Metaethics is the part of ethics that guides how we engage in ethical discourse and pass judgement; the metaethicist is like a sports commentator who comments on how the ethical game is being played without getting involved as a player. Applied ethics is the study of how we should act in specific areas of our lives, and hence the applied ethicist is the player himself.

As the dividing line between artificial intelligence and human society becomes blurred, it is important that the diverse stakeholders in this space (researchers, academia, non-profits, governments, etc.) have the technical gravitas to create sustainable technological scale and the ethical orientation to secure human values, purpose and interests. This fine balance will decide whether AI creates abundance or dystopia, and that in turn will depend on how well we solve the alignment problem in AI.

1. What is the Alignment Problem?

For a common understanding, we can think of biological intelligence as a neural trait that enables us to achieve goals in a wide range of environments. Artificial intelligence is attributed to a computer system that accepts input or training data, recognises patterns, and then makes decisions to maximise the chances of achieving a goal. Machine learning, in turn, is a statistical and algorithmic approach used to train a model so that it can perform intelligent actions. Given access to powerful compute, machine learning models can learn from experience; they can also learn from labelled or unlabelled data without humans having to do explicit programming. The architecture of this design is loosely inspired by the neural structure of the human brain, and hence we have neural networks in artificial intelligence, which can find much deeper relationships in data and patterns. As an example, such networks can study the past games of chess champions and then beat the current champions convincingly. Neural networks are also trained with incentives: they are rewarded for reducing a loss function, and in the reinforcement learning technique of artificial intelligence the agent learns to maximise reward through trial and error as it attempts to achieve its goal.

We should also emphasise that AI systems act as very powerful optimisers, and that creates a safety risk. What if AI were to optimise for something we did not want, and what happens if its output has serious and dangerous consequences for humanity? Neural networks operate with phenomenal speed and, most importantly, with autonomy. Hence the task of embedding artificial agents with moral values is important, because computers are now capable of operating with great autonomy, and it is very difficult to evaluate whether each action is performed in a responsible or ethical manner. This is the hard problem of alignment. Built into the alignment problem is the control problem: can the machine become so intelligent that it starts creating its own sub-goals and produces outputs that were never asked for? There is, however, a good chance that if we solve the alignment problem, we will be well on track to solving the control problem.
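To make the reward-maximisation idea concrete, here is a minimal, hedged sketch of a tabular Q-learning agent; the toy chain environment and all parameters are invented for illustration and are not drawn from any particular system:

    import random

    # Toy chain environment: states 0..4, reaching state 4 yields reward 1.
    N_STATES, GOAL = 5, 4
    ACTIONS = [-1, +1]          # move left or right

    # Q-table: estimated future reward for each (state, action) pair.
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration

    for episode in range(500):
        s = 0
        while s != GOAL:
            # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
            a = random.choice(ACTIONS) if random.random() < epsilon \
                else max(ACTIONS, key=lambda a: Q[(s, a)])
            s_next = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s_next == GOAL else 0.0
            # Update the estimate toward reward plus discounted best future value.
            best_next = max(Q[(s_next, b)] for b in ACTIONS)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s_next

    # Learned policy: which action each state prefers after training.
    print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})

Even in this toy setting, the agent optimises whatever reward it is given; nothing in the loop asks whether that reward captures what we actually want, which is precisely the gap the alignment problem names.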

1.1. What to Align?

The alignment problem has two dimensions. The first is technical: how do we steer artificial agents so that they execute what they are expected to do? We have seen past examples of AI bots promoting racist content and encouraging abusive language; this is clear technical misalignment. Addressing it needs frameworks for the collection of training data, red teaming, outcome validation and explainability, and the safety of an AI model should supersede its commercial potential. The second dimension of alignment is based on normative ethics and debates which moral principles, norms and rules should be encoded into artificial agents. Should value alignment follow a maximalist or a minimalist approach? How do we ensure that we encode values representative of all regions, societies and classes? How do we gain consensus on what is to be encoded? One common ground could be to encode the four principles of bioethics, which have wide acceptance (a minimal sketch of checking outputs against them follows the list below):
 
A. Principle of Autonomy: Respect for the choices and freedom of humans.
B. Principle of Non-maleficence: No harm or injury to humans.
C. Principle of Beneficence: Ensuring that humans benefit from a proposed action.
D. Principle of Justice: Promotion of fairness, equality, transparency and representation.
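To illustrate what outcome validation against such principles could look like, here is a minimal, hypothetical sketch; the predicates and the toy lexicon are placeholders, not an established safety API, and real systems would rely on trained classifiers and human review:

    # Minimal sketch of a rule-based output validator. The predicates below
    # are hypothetical stand-ins for trained classifiers.
    from dataclasses import dataclass

    @dataclass
    class Verdict:
        allowed: bool
        reason: str = ""

    HARM_TERMS = {"injure", "attack", "poison"}     # toy lexicon, illustration only

    def violates_non_maleficence(text: str) -> bool:
        return any(term in text.lower() for term in HARM_TERMS)

    def violates_autonomy(text: str) -> bool:
        return "you must" in text.lower()           # crude proxy for coercive phrasing

    def validate(output_text: str) -> Verdict:
        # Each principle becomes a hard constraint checked before release.
        if violates_non_maleficence(output_text):
            return Verdict(False, "non-maleficence: potential harm")
        if violates_autonomy(output_text):
            return Verdict(False, "autonomy: coercive phrasing")
        # Beneficence and justice need context-aware review, stubbed here.
        return Verdict(True)

    print(validate("Here is a balanced summary of both options."))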
 
Another approach could be based on human-rights doctrine. There is global consensus on the Universal Declaration of Human Rights, and AI models should be built in such a manner that, regardless of how they are trained (supervised, self-supervised, unsupervised or reinforcement learning), they do not violate these rights. Let us look at the recognised articles of the Declaration.
 
  • Articles 1–2 establish the basic concepts of dignity, liberty, and equality.
  • Articles 3–5 establish other individual rights, such as the right to life.
  • Articles 6–11 refer to the fundamental legality of human rights with specific remedies cited for their defence when violated.
  • Articles 12–17 set forth the rights of the individual towards the community, including freedom of movement and right to a nationality.
  • Articles 18–21 sanction “constitutional liberties” and spiritual, public, and political freedoms.
  • Articles 22–27 sanction an individual’s economic, social and cultural rights, including health care.
  • Articles 28–30 establish the general means of exercising these rights.

As humanoids and robots become more pervasive and begin to work in our homes, offices, industries and cities, a good starting guideline for their ethical deployment can be borrowed from Isaac Asimov's Three Laws of Robotics (sketched as ordered constraints after the list):

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
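The laws are strictly ordered, which suggests a lexicographic evaluation: a lower-priority law counts only if every higher-priority law is satisfied. The following is a speculative toy sketch of that ordering; the action encoding and predicates are invented for illustration, not a real robotics interface:

    # Sketch: Asimov's laws as a strict priority ordering over a proposed action.
    def harms_human(action) -> bool:
        return action.get("harms_human", False)

    def ordered_by_human(action) -> bool:
        return action.get("ordered_by_human", False)

    def endangers_self(action) -> bool:
        return action.get("endangers_self", False)

    def permitted(action) -> bool:
        if harms_human(action):                 # First Law: absolute veto
            return False
        if ordered_by_human(action):            # Second Law: obey, since the First Law holds
            return True
        return not endangers_self(action)       # Third Law: self-preservation comes last

    print(permitted({"ordered_by_human": True}))                        # True: lawful order
    print(permitted({"ordered_by_human": True, "harms_human": True}))   # False: First Law wins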

There is also an opinion that AI models should not focus solely on utilitarian motives of the greater good but should take individual views into account as well. This gains credence from social choice theory, which takes a pluralistic approach to value alignment. Rather than seeking common principles, it aggregates individual views bottom-up. This would require AI agents to respond to the ethical preferences of people, on a one-to-one basis, in real time: the agent would have to collect the relevant data and process it so that its decisions align with what each individual values or desires.
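As a hedged illustration of such bottom-up aggregation, the sketch below applies a Borda count, one classic social-choice rule, to invented individual rankings of candidate values:

    from collections import defaultdict

    # Each person ranks candidate values from most to least important.
    rankings = [
        ["privacy", "fairness", "utility"],
        ["fairness", "utility", "privacy"],
        ["fairness", "privacy", "utility"],
    ]

    def borda(rankings):
        # A value ranked first among n options scores n-1 points, and so on down.
        scores = defaultdict(int)
        for ranking in rankings:
            n = len(ranking)
            for position, value in enumerate(ranking):
                scores[value] += n - 1 - position
        return max(scores, key=scores.get), dict(scores)

    winner, scores = borda(rankings)
    print(winner, scores)   # 'fairness' wins under this toy electorate

In a real system the candidates would be behavioural policies or ethical stances, and the rankings would come from elicited user preferences rather than a hard-coded list.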

An overarching ecosystem of responsible regulation can help ensure that agreed-upon values are built into AI models by industry. However, regulation often stifles innovation, so the discussion on regulating AI has to be open and constructive. This matters because AI, as a technology, has huge potential to address the greatest challenges faced by humanity, such as disease, death, climate change and the uneven distribution of assets like education. Steered in the right manner, AI can democratise both inputs and outputs.

1.2. How to Align?

Given the challenge of value alignment at hand, we need to look at various options for deployment. We want design principles that make AI compatible with agreed-upon values. One approach is what we call 'inverse reinforcement learning', where the reward function for the AI agent is not known upfront. Instead, the agent is presented with a dataset of examples of observed (presumably optimal) behaviour and infers a reward function from them. This is a good example of constrained optimisation and can lead to safety norms. There is also a suggestion to include an 'ethical governor' in the model architecture, which aims to ensure that final outputs are consistent with moral constraints.
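The sketch below illustrates the core idea in a deliberately simplified, perceptron-style form; it is far cruder than published inverse-reinforcement-learning algorithms, and the feature vectors (for example, 'safety' and 'speed' scores of a trajectory) are invented for illustration:

    import numpy as np

    # Inverse RL sketch: infer reward weights w so that expert trajectories
    # score higher (under reward = w . features) than sampled alternatives.
    expert_features = np.array([[0.9, 0.1], [0.8, 0.2]])   # e.g. (safety, speed)
    other_features  = np.array([[0.2, 0.9], [0.3, 0.8]])

    w = np.zeros(2)
    for _ in range(100):
        for phi_e in expert_features:
            for phi_o in other_features:
                # If some alternative scores at least as well as the expert,
                # nudge weights toward expert features and away from the other.
                if w @ phi_o >= w @ phi_e:
                    w += 0.1 * (phi_e - phi_o)

    w /= np.linalg.norm(w)
    print("inferred reward weights:", w)   # ends up favouring the 'safety' feature

The inferred weights then define a reward that a standard reinforcement-learning agent could optimise, with the hope that behaviour matching the demonstrations also matches the demonstrators' values.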
 
Another approach to alignment involves evolutionary methods. These evaluate the 'lifetime' behaviour of AI agents: each agent uses a different policy for interacting with its environment, and those behaviours that obtain the most overall reward while remaining consistent with the enshrined values are selected. This is survival of the fittest from a value-alignment point of view. There is also an apprenticeship-learning method, in which new AI models are asked to learn from, or imitate, a certified moral expert agent. This approach to normative value alignment is distinctive because, rather than specifying moral principles upfront, it presents the agents with examples of good conduct. It is a novel way of creating ethical and responsible AI.
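Here is a hedged sketch of evolutionary selection over policies, where fitness combines task reward with a penalty for norm violations; the one-dimensional 'policies', the toy environment and the penalty weight are all invented for illustration:

    import random

    random.seed(1)

    # Each 'policy' is just a number here; imagine it parameterises behaviour.
    def lifetime_reward(policy):
        return -(policy - 3.0) ** 2 + 9.0      # toy task: best raw reward at 3.0

    def value_violation(policy):
        return max(0.0, policy - 2.0)          # toy norm: policies above 2.0 breach it

    def fitness(policy):
        # Reward is discounted by how badly the behaviour breaks the norm.
        return lifetime_reward(policy) - 10.0 * value_violation(policy)

    population = [random.uniform(0, 5) for _ in range(30)]
    for generation in range(50):
        # Keep the fittest half, then refill with mutated copies of survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[:15]
        population = survivors + [p + random.gauss(0, 0.1) for p in survivors]

    print(round(max(population, key=fitness), 2))  # settles near 2.0, not the raw optimum 3.0

Note how the penalty shifts the selected policy away from the raw reward optimum toward norm-compliant behaviour, which is the 'survival of the fittest' idea stated above.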

2. Conclusion:

 

Two decades before the birth of the internet, the mathematician and philosopher Norbert Wiener, who coined the word 'cybernetics', gave us insights into what was coming in his book The Human Use of Human Beings: Cybernetics and Society.

He prophetically wrote:

“If we use, to achieve our purpose, a mechanical agency with whose operation we cannot efficiently interfere once we have started it, then we better be sure that the purpose put in the machine is the purpose we really desire.”

It is the responsibility of all of us, as interested parties in the unfolding game of AI, to ensure that we are sure of how we want to use AI, why we want to use it, and what our end purpose is.


Author: Sudhir Tiku is a Fellow of AAIH, a futurist, and an automation expert based in Singapore. He works at a global multinational company, where his focus is vision and video analytics. He is a columnist, a climber, and a regular speaker on the TEDx circuit in Asia Pacific.

 
