What is AI Ethics and Governance?
Dr Anton Ravindran, CEng (UK), FBCS, FSCS
Author of the book “Will AI Dictate the Future?”

Bias in AI systems typically arises at three levels:
- Data bias
- Algorithm bias
- Human or cognitive bias
AI Governance

Explainable AI (XAI): Making the Black Box Transparent
Explainable AI (XAI), sometimes known as Interpretable AI, refers to methods and techniques that enable humans to understand the results generated by an AI algorithm, thereby strengthening the governance and ethical dimensions of AI. It is a fast-emerging area that provides transparency and creates trust in AI. XAI explains, in terms comprehensible to non-technical end-users, how the AI model works and why a particular result was generated. What data did the model use? Why did the AI model make a specific prediction or decision? Are there any biases? When does an AI model have enough confidence in its decision to form a basis for trust? How can the AI algorithm correct errors that arise?
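To make this concrete, the sketch below (not drawn from this article) shows one widely used post-hoc explanation technique, permutation feature importance. It assumes scikit-learn and a synthetic dataset, and simply asks which inputs a trained model actually relied on.

```python
# Minimal sketch of a post-hoc XAI technique: permutation feature importance.
# Assumes scikit-learn; the data here is synthetic and purely illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic tabular data standing in for a real decision problem (e.g. loan approval).
X, y = make_classification(n_samples=1000, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# "What data did the model use?": measure how much test accuracy drops when each
# feature is randomly shuffled; a large drop means the model relied on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: mean accuracy drop when shuffled = {importance:.3f}")
```

Techniques such as LIME and SHAP answer the same question with per-prediction attributions rather than a single global ranking.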
Explainability is an intuitively appealing concept but hard to realise fully because of the complexity of advanced AI algorithms. Dr. Lance B. Eliot (2021), a renowned expert on AI and ML, emphatically says, “Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphise AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and unarguable fact that no such AI exists as yet.” In sum, AI is not yet sentient, and until we reach the stage of artificial superintelligence (ASI), XAI will have limitations for deep neural network-based models.
According to NIST, the four key principles of XAI are:
Explanation: Systems should provide evidence or reason(s) for all output.
Meaningful: Systems should provide explanations that are understandable to individual users.
Explanation accuracy: The explanation should correctly reflect the system’s process for generating the output.
Knowledge limits: The system should operate only under the conditions for which it was designed, or when it has sufficient confidence in its output.
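The “Knowledge limits” principle in particular lends itself to a simple illustration. The sketch below is a hypothetical example, assuming scikit-learn, of a classifier that abstains when its predicted confidence falls below an arbitrary threshold rather than returning an answer it cannot support.

```python
# Hypothetical illustration of the "Knowledge limits" principle: the system
# abstains when its confidence is too low. Assumes scikit-learn; the 0.8
# threshold is an arbitrary, illustrative policy choice.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=1)
model = LogisticRegression().fit(X, y)

CONFIDENCE_THRESHOLD = 0.8  # assumed value; in practice set by the deploying organisation

def predict_or_abstain(sample):
    """Return (prediction, confidence), or abstain below the threshold."""
    proba = model.predict_proba(sample.reshape(1, -1))[0]
    confidence = float(proba.max())
    if confidence < CONFIDENCE_THRESHOLD:
        return "abstain: outside the system's knowledge limits", confidence
    return int(proba.argmax()), confidence

print(predict_or_abstain(X[0]))
```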
However, explainability doesn’t provide all the answers with respect to “fairness”. Lily Hu (2021), a PhD candidate in applied mathematics at Harvard University who studies algorithmic fairness, states that “the use of algorithms in social spaces, particularly in the prison system, is an inherently political problem, not a technological one”. To develop models that avoid bias and ensure “fairness”, we must first agree precisely on what it means to be fair, a definition that may vary with societal, cultural and political norms, before we build the algorithms.
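The following toy sketch (invented numbers, assuming only NumPy) illustrates why that agreement matters: the same set of predictions can satisfy one common statistical definition of fairness, demographic parity, while violating another, equal opportunity.

```python
# Toy illustration (invented numbers) of two fairness definitions disagreeing:
# demographic parity compares selection rates across groups, while equal
# opportunity compares true positive rates. Assumes NumPy only.
import numpy as np

group  = np.array(["A"] * 4 + ["B"] * 4)
y_true = np.array([1, 1, 0, 0,  1, 1, 1, 0])   # actual outcomes
y_pred = np.array([1, 1, 0, 0,  1, 1, 0, 0])   # model decisions

for g in ("A", "B"):
    mask = group == g
    selection_rate = y_pred[mask].mean()          # demographic parity view
    tpr = y_pred[mask & (y_true == 1)].mean()     # equal opportunity view
    print(f"group {g}: selection rate = {selection_rate:.2f}, "
          f"true positive rate = {tpr:.2f}")

# Both groups are selected at the same rate (demographic parity holds), yet
# qualified members of group B are approved less often (equal opportunity fails),
# so whether the model is "fair" depends entirely on the definition chosen.
```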
Conclusion
The rapid digitisation and penetration of AI have led to the rise of customer-centricity. Businesses collect vast amounts of data about customers’ needs, preferences and wants; data is the new oil, as the saying goes. AI helps businesses identify which customers are more receptive to marketing campaigns and messages than others. Responsible AI has rewarded businesses with opportunities to deliver personalised products and services while upholding customer values and doing good for society. Implementing measures to avoid human bias is necessary for developing solutions that are accurate, fair and transparent; it is not only a moral and ethical issue but also good for business. Simply put, it can set a business apart from the competition by improving customer confidence, trust and loyalty, and this advantage will become far more significant as AI becomes more pervasive.
References
Andrews, E. L. (2020, October 13). Using AI to Detect Seemingly Perfect Deep-Fake Videos. HAI, Stanford University. Available at: https://hai.stanford.edu/news/using-ai-detect-seemingly-perfect-deep-fake-videos.
Bathaee, Y. (2018, May 5). The Artificial Intelligence Black Box and the Failure of Intent and Causation. Harvard Journal of Law & Technology, 31(2). Available at: https://jolt.law.harvard.edu/assets/articlePDFs/v31/The-Artificial-Intelligence-Black-Box-and-the-Failure-of-Intent-and-Causation-Yavar-Bathaee.pdf.
Burrell, J. (2016, January 6). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 1-12. DOI:10.1177/2053951715622512. Available at: https://journals.sagepub.com/doi/pdf/10.1177/2053951715622512.
Eliot, L. (2021, April 24). Explaining Why Explainable AI (XAI) Is Needed For Autonomous Vehicles And Especially Self-Driving Cars. Forbes. Available at: https://www.forbes.com/sites/lanceeliot/2021/04/24/explaining-why-explainable-ai-xai-is-needed-for-autonomous-vehicles-and-especially-self-driving-cars/?sh=3c2a2c921c5a.
Singh, A. & Mutreja, S. (2022, February 15). Autonomous Vehicle Market Statistics 2030. Allied Market Research. Available at: https://www.alliedmarketresearch.com/autonomous-vehicle-market.
Zicari, R. V. (2022, February 7). On Responsible AI. Interview with Ricardo Baeza-Yates. ODBMS Industry Watch. Available at: http://www.odbms.org/blog/2022/02/on-responsible-ai-interview-with-ricardo-baeza-yates/.