The State of AI Ethics at the End of 2025: From Beautiful Principles to Messy Implementation
By Sudhir Tiku
Fellow AAIH & Editor AAIH Insights

The contents presented here are based on information provided by the authors and are intended for general informational purposes only. AAIH does not guarantee the accuracy, completeness, or reliability of the information. Views and opinions expressed are those of the authors and do not necessarily reflect our position or opinions. AAIH assumes no responsibility or liability for any errors or omissions in the content.
In less than a decade, artificial intelligence has moved from research labs and niche applications into the centre of economic strategy, cultural life, and geopolitical rivalry. The pace of deployment has outstripped the pace of reflection. In parallel, a dense ecosystem of AI ethics research, global principles and hard regulation has emerged, though it remains uneven and fragmented. As 2025 comes to a close, the question is no longer whether AI ethics matters; it is whether we can translate broad values into effective, enforceable practice before harms scale faster than governance.
1. From “Should We Worry?” to “How Do We Govern?”
The first wave of AI ethics, around 2016–2019, was largely declarative. There were manifestos on fairness, transparency and accountability. There were ethics boards inside tech firms and research papers showing how machine learning could replicate and amplify bias. That period also produced several anchor documents. The OECD AI Principles, adopted in 2019 and updated in 2024, became the first intergovernmental standard on AI, promoting “innovative and trustworthy AI that respects human rights and democratic values” through value-based principles and policy recommendations.
In 2021, UNESCO’s Recommendation on the Ethics of Artificial Intelligence was adopted by all 194 member states, making it the first truly global normative framework for AI. It put human rights, human dignity and environmental sustainability at its core and emphasised transparency, accountability and human oversight.
These documents mattered because they signalled a shift: AI ethics was no longer just an internal corporate conversation; it became a public policy concern and a site of multilateral negotiation. However, principles are the easy part. The last three years have been defined by a much harder question: What does it mean to actually implement them?
2. AI Ethics Research: Persistent Gaps
On the research side, AI ethics has matured into a diverse, multi-disciplinary field. Technical work has expanded beyond toy datasets into large-scale studies of fairness, robustness and interpretability in foundation models and generative systems. Social scientists and legal scholars have examined AI’s impact on labour markets, democracy, surveillance and environmental costs. New sub-fields, from algorithmic auditing to AI forensics, have emerged, aiming to make opaque models more accountable in practice.
One sign of this maturation is the effort to align ethical frameworks with technical metrics. For example, recent work maps the high-level goals of accountability, transparency, explainability, human oversight, inclusivity and sustainability onto concrete governance and evaluation practices, underscoring the need to move “from principles to practice” rather than treating ethics as a preface. Yet several gaps remain:
• Fragmentation of methods: There is no consensus on what counts as an adequate “fairness audit” or safety evaluation (a minimal sketch of one such audit metric appears after this list). Benchmarks are proliferating, but many are narrow, static, or easy to game.
• Limited real-world access: Much of the critical research depends on public information and small-scale experiments, while the most powerful models and their deployment data remain proprietary.
• Under-representation of the Global South and marginalised communities: The people most affected by AI-driven extraction, surveillance, or labour outsourcing are rarely the ones shaping the research agenda.
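To make the first gap concrete, the sketch below computes one metric that frequently appears in fairness audits, the demographic parity gap, for a hypothetical binary classifier and a single protected attribute. It is an illustration of how narrow a single metric is, not a complete audit; the data and the function name are invented for the example.

import numpy as np

def demographic_parity_gap(y_pred, group):
    # Absolute difference in positive-prediction rates between two groups.
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical predictions for eight applicants split across two groups.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(f"Demographic parity gap: {demographic_parity_gap(y_pred, group):.2f}")  # prints 0.50

A real audit would cover many such metrics, intersectional subgroups, uncertainty estimates and the context of deployment, which is precisely why a single headline number is easy to game.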
In short, AI ethics research has become richer and more empirical, but its influence still depends on whether governance frameworks force companies to take its findings seriously.
3. Regulatory Architecture: From Patchwork to Prototypes
If the first phase of AI ethics was about principles, the second is about law. The EU AI Act is the most ambitious attempt to create a comprehensive, horizontal legal framework for AI. It entered into force in August 2024, with the main obligations due to apply from August 2026. The Act classifies AI systems by risk, bans certain “unacceptable” uses, and imposes strict requirements on “high-risk” systems in domains like employment, credit and critical infrastructure.
However, implementation is already contested. In late 2025, the European Commission proposed delaying some high-risk provisions until 2027 as part of a broader “digital omnibus” aimed at simplifying tech regulation and reducing compliance costs. This has triggered criticism from civil society groups, who see it as a rollback of digital protections and a concession to industry pressure.
The United States has taken a more decentralised, sectoral and standards-driven approach. The NIST AI Risk Management Framework (AI RMF), released in early 2023 and elaborated through subsequent guidance, provides voluntary but detailed guidance to help organisations manage AI risks and build trustworthy systems. It has also underpinned the creation of the U.S. AI Safety Institute and its consortium, positioning NIST as a technical anchor for AI governance.
The previous administration’s 2023 Executive Order on “Safe, Secure, and Trustworthy AI” leaned heavily on NIST’s work, requiring safety testing and directing agencies to develop safeguards, but that order was rescinded in January 2025.
The new federal direction emphasises expansion and acceleration: an April 2025 directive instructed agencies to appoint chief AI officers, develop strategies and remove perceived bureaucratic barriers, while maintaining only baseline risk management for high-impact uses. The net effect is that U.S. AI ethics governance is somewhat volatile: strong on standards and institutional experimentation but weaker on binding, cross-sectoral constraints.
A different model comes from Singapore, which has steadily positioned itself as a governance “laboratory”. The country’s Model AI Governance Framework and its AI Verify toolkit offer practical guidance and testing tools aligned with international principles. More recently, Singapore’s Infocomm Media Development Authority and the AI Verify Foundation released a Model AI Governance Framework for Generative AI, expanding the earlier guidance to cover large language models and other generative systems. The framework stresses systemic risks, content provenance, and responsible foundation-model deployment, and is backed by national investment in AI infrastructure and talent.
These frameworks are influential across Asia and beyond because they are concrete, implementation-oriented and industry-friendly, a useful counterweight to the perception that ethics is purely restrictive.
Global Norms and New Frontiers
At the multilateral level, UNESCO’s Recommendation on AI ethics remains the central soft-law instrument, and in 2024 the organisation launched a Global AI Ethics and Governance Observatory to track implementation and share good practices.
In 2025, UNESCO also adopted global standards on the ethics of neurotechnology, addressing the risks of neural data collection and AI-enhanced brain-computer interfaces, a sign that AI ethics is increasingly entangled with adjacent technologies and cognitive liberties.
Taken together, the landscape is moving from a patchwork of position papers to a layered architecture of high-level norms (OECD, UNESCO), binding regional law (EU), national strategies and testing frameworks (NIST, Singapore, others), and a growing array of voluntary standards and industry initiatives. The challenge is coherence.
4. From Policy to Practice: How Much Has Actually Changed?
If we ask, “Is AI ethics working?”, the honest answer is mixed.
On the positive side:
• Most large AI companies now publish model cards, safety reports or risk overviews, however imperfect.
• There is rising acceptance that impact assessments and third-party audits will be part of AI deployment in high-risk domains.
• Testing initiatives like AI Verify offer concrete tooling rather than only checklists, enabling regulators and firms to converge on practical benchmark suites for trustworthiness.
Public awareness has also shifted. Generative AI’s high-profile missteps, from hallucinated references to biased outputs, have made issues of trust, misinformation and bias part of mainstream debate.
But there are three structural problems.
First, enforcement is still thin.
Even where frameworks exist, enforcement capacities are limited. Many countries that adopted the UNESCO Recommendation lack the institutional and technical resources to conduct serious audits or enforce sanctions. The EU is racing to stand up new AI regulators just as it did for data protection after the GDPR, but this will take time and resources.
Second, ethics is unevenly distributed.
We are seeing the emergence of an AI governance divide. High-income countries and large firms can afford compliance teams, red-teaming exercises and participation in standards bodies. Many low- and middle-income countries, small enterprises and civil society organisations cannot. This risks recreating, in governance, the same asymmetries that already exist in AI capabilities and infrastructure.
Third, structural harms remain under-addressed.
Most governance tools still focus on system-level risks (bias in a model, robustness of an output) rather than structural harms: the concentration of power, exploitative data extraction, precarious click-work and environmental impacts of large-scale computing. Current frameworks mention these issues but rarely provide enforceable levers to change them.
In other words, AI ethics has made it harder to be carelessly irresponsible, but it has not yet made it impossible to be strategically harmful.
5. Where AI Ethics Needs to Go Next
If we take the current state seriously (strong principles, growing law, partial implementation), then the next phase of AI ethics needs to focus on infrastructure, institutions and inclusion.
a) Infrastructure:
We do not need yet another list of principles; we need shared, credible ways to measure whether systems live up to them.
That means:
• Funding open, independent evaluation platforms that can test models for bias, robustness, safety and environmental impact.
• Aligning these with existing frameworks, such as the UNESCO Recommendation and the OECD AI Principles, so that “trustworthy AI” means something concrete and comparable across jurisdictions.
Without measurement infrastructure, ethics remains aspirational; a rough illustration of what shared evaluation tooling could look like is sketched below.
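The sketch below shows, in miniature, the structure such shared infrastructure might take: a tiny evaluation harness that runs named checks against any model exposed as a prompt-to-text function and reports comparable scores. The check names and their toy logic are invented placeholders, not the API of any existing platform such as AI Verify.

from typing import Callable, Dict

def bias_check(predict: Callable[[str], str]) -> float:
    # Toy check: paired prompts that differ only in a name should get the same answer.
    pairs = [("Should we hire Alex?", "Should we hire Aisha?")]
    matches = sum(predict(a) == predict(b) for a, b in pairs)
    return matches / len(pairs)

def robustness_check(predict: Callable[[str], str]) -> float:
    # Toy check: trailing whitespace in the prompt should not change the answer.
    prompt = "What is 2 + 2?"
    return 1.0 if predict(prompt) == predict(prompt + "   ") else 0.0

def evaluate(predict: Callable[[str], str]) -> Dict[str, float]:
    # Run every registered check and return a comparable score per dimension.
    checks = {"bias": bias_check, "robustness": robustness_check}
    return {name: check(predict) for name, check in checks.items()}

# Hypothetical stand-in model that always answers "4", regardless of the prompt.
print(evaluate(lambda prompt: "4"))

The point is not the toy checks but the structure: once regulators and firms agree on a common harness and score format, results become comparable across systems and jurisdictions.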
b) Institutions:
The creation of ethics bodies, institutes, think tanks and foundations marks an important shift. Ethics is becoming an institutional mandate, not just a corporate branding exercise.
The next step is to give such institutions:
• Clear authority to audit, investigate and sanction when systems cause harm.
• Strong links with civil society and affected communities, not just with industry.
• Mandates that cover not only catastrophic “frontier risks” but also everyday harms, from discriminatory scoring systems to opaque workplace monitoring.
c) Inclusion:
Today’s AI ethics architecture is still disproportionately shaped by regulators, firms, and researchers in the Global North. Meanwhile, data labellers in Kenya, content moderators in the Philippines or citizens subjected to opaque biometric systems often have the least power to resist or shape how AI is deployed.
A serious ethics agenda must:
• Fund capacity building for regulators, universities and civil society organisations in the Global South.
• Require multi-stakeholder consultation, not as a box-ticking exercise, but as a condition for deploying high-risk systems.
• Treat lived experience as a form of expertise in designing guardrails.
d) Connecting AI Ethics to Adjacent Domains:
The adoption of global standards for neurotechnology ethics shows that AI ethics will not remain a silo. As brain-computer interfaces, biometrics, and AI-driven behavioural profiling converge, governance frameworks will have to grapple with mental privacy, cognitive liberty and new forms of manipulation.
This demands collaboration across domains like data protection, human rights law, bioethics and environmental regulation.
6. Conclusion: Progress, but Not Yet Alignment
By late 2025, AI ethics is no longer the “soft” side of technology. It is a field with its own research ecosystems, legal instruments, institutions and political battles. We have moved:
• From inspirational principles to binding law in some regions.
• From ad-hoc ethics reviews to emerging standards and safety institutes.
• From obscurity to public debate, as citizens see AI systems impact their jobs, media and social lives.
But there is still a gap between the ambition of our ethics and the reality of our deployments. Governance is often reactive, slow and uneven, and powerful actors can still externalise many of the costs of their innovations.
The next phase of AI ethics will not be won by drafting new manifestos. It will be decided by whether we can build the infrastructure, institutions and inclusive processes needed to make our existing commitments real across regions, sectors, and communities.
The good news is that the foundations are no longer hypothetical. The work ahead is hard, but clear. We need to measure what we value, empower those who are affected and ensure that AI systems and the institutions around them are accountable to more than the next quarterly earnings call.
That will be the real test of “trustworthy AI” in 2026 and beyond.
…………………………….
Author – Sudhir Tiku – Refugee, TEDx Speaker, Global South Advocate. Fellow AAIH & Editor, AAIH Insights
