AI: Myths, Neuroscience & Ethics
By Sudhir Tiku
Fellow & Founder Member
Vice President for Asia Pacific and China, Bosch Singapore
The contents presented here are based on information provided by the authors and are intended for general informational purposes only. AAIH does not guarantee the accuracy, completeness, or reliability of the information. Views and opinions expressed are those of the authors and do not necessarily reflect our position or opinions. AAIH assumes no responsibility or liability for any errors or omissions in the content.
There were ideas in history that changed everything.
Fire.
The printing press.
Electricity.
The Internet.
And now we have Artificial Intelligence.
The promise of AI is immense. AI can accelerate knowledge, enhance human capability, and solve problems once thought unsolvable. For example, it can help us defeat disease. AI reflects, imitates, generates, calculates and increasingly decides. It predicts your decisions, shapes your thoughts, and even anticipates your needs. AI has positive use cases in healthcare, education, finance, transportation, retail, manufacturing, entertainment, smart cities and more. However, there is a shadow too: the risk of deepfakes, bias, manipulation, and unchecked autonomy. AI challenges how we understand thought, labour, ethics, identity and even reality.
AI is an idea worth spreading. But it is also an idea worth questioning.
AI is not just the tale of our technology. It is also the tale of humans. And this tale did not begin in our labs but in our brains, in our myths, in our philosophy and in our economics. This intersection, and how we steer it, will decide the future of AI. We stand to weave seemingly disparate threads – the biological complexity of neuroscience, the allegorical richness of mythology, the profound questions of philosophy, and the transformative power of economics – all converging at the fascinating nexus of artificial intelligence.
Artificial intelligence (AI) is deeply inspired by neuroscience, which shapes its learning, reasoning, and adaptability. The human brain is an intricate network of roughly 100 billion neurons; its ability to learn from experience and its mechanisms for processing information have played a crucial role in shaping AI development. In the brain, neurons transmit signals through synapses, strengthening connections with learning and experience. Artificial neural networks (ANNs) mimic this process, adjusting the weights between nodes to improve performance on tasks like image recognition and natural language processing. In deep learning, models stack multiple layers of artificial neurons, mirroring how different brain regions process information hierarchically. Just as the human visual cortex processes basic shapes before recognizing objects, convolutional neural networks (CNNs) analyse simple patterns before identifying complex images. Neuroscientists study synaptic plasticity, the brain’s ability to adapt and rewire based on experience; AI employs similar principles through reinforcement learning, where algorithms adjust strategies based on feedback. The brain makes decisions through neurotransmitter systems like dopamine, which drives reward-based learning, and reinforcement learning likewise refines actions by maximizing positive outcomes. Recurrent neural networks (RNNs) and transformers mimic memory recall, allowing AI models to track context in language tasks.
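The weight-adjustment idea can be sketched in a few lines. The following is a minimal, illustrative single-neuron example (not any specific production architecture): an artificial neuron learns the logical OR function by nudging its connection weights after each error, loosely analogous to synaptic strengthening.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy dataset: learn the OR function from two binary inputs.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

# One artificial neuron: weights play the role of synaptic strengths.
w = [0.0, 0.0]
b = 0.0
lr = 0.5  # learning rate

for _ in range(2000):
    for inputs, target in data:
        # Forward pass: weighted sum plus bias, squashed by a sigmoid.
        z = sum(wi * xi for wi, xi in zip(w, inputs)) + b
        out = sigmoid(z)
        # Error-driven update: connections that led to a wrong answer
        # are weakened, helpful ones strengthened.
        err = target - out
        for i in range(2):
            w[i] += lr * err * out * (1 - out) * inputs[i]
        b += lr * err * out * (1 - out)

predictions = [round(sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b))
               for x, _ in data]
print(predictions)  # learned OR: [0, 1, 1, 1]
```

Real networks stack millions of such units in layers, but the principle of strengthening and weakening connections based on feedback is the same.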
Artificial intelligence shares striking parallels with our myths. AI is trained on the digital data left behind by humans, and this data is not always diverse, representative or inclusive, and is hence biased. Like Narcissus, who fell in love with his own reflection and mistook it for reality, AI can become trapped in self-reinforcing biases, amplifying societal prejudices. Without diverse datasets and ethical oversight, AI risks becoming an echo chamber that never evolves beyond its flawed reflections. Prometheus, the Titan who granted fire to humanity, represents the transformative power of AI, gifting society advancements in medicine, automation, and communication. But just as fire can both illuminate and destroy, AI’s impact is dual-sided, offering progress while raising ethical dilemmas such as privacy invasion and loss of human autonomy. Pandora, whose curiosity unleashed unforeseen chaos, reflects the unpredictable consequences of AI: systems created with optimism but no guardrails may lead to misinformation, surveillance, and economic upheaval. AI-driven disinformation spreads like the ills from Pandora’s opened box, eroding trust in truth itself. Cassandra, who could see the future but was cursed so that no one would believe her, represents the AI experts who warn of AI’s dangers, such as biases in hiring algorithms, autonomous weapons, and deepfake misinformation, yet whose predictions are often ignored. These myths collectively serve as cautionary tales, reminding humanity that AI must be wielded with foresight and responsibility. To avoid repeating these ancient tragedies, ethical AI development must prioritize fairness, transparency, and safeguards against unintended harm. We need AI that enhances human progress rather than mirroring the dangers foretold in myth.
Artificial intelligence is changing the way we work. AI is not merely reshaping industries; it is reconstructing the fundamental frameworks of economic productivity, labour dynamics and market competition. The integration of AI into business processes is not just about efficiency; it is a fundamental shift in production functions. Unlike previous technological revolutions, which primarily mechanized manual labour, AI expands the cognitive bandwidth of organizations and can also replace human work at scale. Reskilling the workforce that AI will displace needs policy attention and political mindshare, and concepts like universal basic income need deeper scrutiny. AI’s contribution to productivity remains asymmetrical: firms with robust digital infrastructure absorb AI advancements more efficiently, while smaller organizations struggle with adoption due to capital constraints, talent shortages, or regulatory hurdles. This disparity risks amplifying economic inequality. While AI’s promise of enhanced efficiency and decision-making is well documented, its deeper implications for macroeconomic structures and geopolitical power shifts remain underexplored.
AI implementation entails significant capital investment in data infrastructure, computational power and algorithmic governance. This comes at a cost, and a huge one. AI’s environmental and energy footprint is a growing concern: the computational power required to train large-scale models is increasingly scrutinized for its carbon emissions. Balancing AI’s economic advantages with sustainable practices will require targeted policies that incentivize energy-efficient AI development.
AI processes information using algorithms and data patterns, but it does not truly understand meaning as humans do. Generative AI produces lyrical words and captivating images, and speaks the same language that you and I speak, yet it has no sense of what it means. Philosophers and AI researchers are exploring whether AI can achieve genuine understanding by merely following logical rules, probabilistic inference or model-blind curve fitting. Alan Turing, a pioneer of AI and computing, posed a fundamental question about machine intelligence: can a machine be considered intelligent if it can convincingly imitate human responses and fool human judges into believing it is human? This is the famous Turing Test. However, passing the test does not imply genuine awareness. Perhaps a new kind of benchmark, a successor to the Turing Test, is needed.
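A toy illustration of prediction without understanding: the bigram model below is a deliberately simplistic stand-in for how statistical language models work. It picks the next word purely from co-occurrence counts in its tiny invented corpus, with no grasp of cats, mats or fish.

```python
from collections import Counter, defaultdict

# Train a bigram "language model" on a tiny corpus: it only counts
# which word follows which, with no notion of meaning.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def predict_next(word):
    # Return the statistically most frequent follower of `word`.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — chosen by frequency, not comprehension
```

Modern systems use vastly richer statistics over vastly more data, but the philosophical question stands: at what point, if any, does better prediction become understanding?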
So let us confront the harsh reality. AI is being used to:
- Predict crime—based on biased data.
- Approve loans—based on zip codes.
- Filter resumes—based on gender or names.
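A minimal sketch, using invented data, of how such proxy bias arises: a naive model that simply repeats the historical majority decision for each zip code faithfully reproduces whatever bias is baked into its training records.

```python
from collections import defaultdict

# Hypothetical historical loan decisions: (zip code, approved?).
# Zip "10001" was historically favoured; "20002" was not.
history = [("10001", 1), ("10001", 1), ("10001", 1), ("10001", 0),
           ("20002", 0), ("20002", 0), ("20002", 1), ("20002", 0)]

# A naive "model": approve whatever the historical majority did per zip.
counts = defaultdict(lambda: [0, 0])  # zip -> [denials, approvals]
for zip_code, approved in history:
    counts[zip_code][approved] += 1

def decide(zip_code):
    denied, approved = counts[zip_code]
    return 1 if approved > denied else 0

print(decide("10001"), decide("20002"))  # 1 0 — the old bias, automated
```

Real systems are far more sophisticated, but the failure mode is the same: if zip code correlates with a protected attribute, a model can discriminate without ever seeing that attribute directly.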
Downsides of AI
- Bias and Discrimination: AI systems rely on training data, and if that data contains biases, the AI will replicate and even amplify them. This can lead to biased hiring practices, discriminatory law enforcement, and inequitable access to financial and healthcare services.
- Privacy Violations: AI-powered surveillance threatens personal privacy. Powerful institutions can use AI to track individuals, analyse their behavioural patterns, and predict their actions without consent. AI’s ability to collect and process vast amounts of data raises ethical concerns about the right to privacy and digital autonomy.
- Autonomous Systems: Autonomous systems, such as AI-driven drones or self-driving vehicles, present ethical dilemmas regarding responsibility in cases of harm. If an AI system causes injury or death, who is held accountable—the developer, the operator, or the algorithm itself?
- Spread of Misinformation: AI-generated deepfakes and misinformation are becoming tools for political propaganda, fraud, and social manipulation. AI can synthesize convincing fake videos, create misleading narratives, and automate deceptive social media campaigns, threatening trust in democratic institutions and public discourse.
So, what do we do?
We don’t need to unplug the machine. We need to plug humanity back in.
We need algorithmic accountability laws, data dignity frameworks, public AI infrastructure, open-source AI, and participatory design councils. We need ethical solutions for AI development:
- We need Fair and Transparent AI Development: AI engineers must ensure diverse, unbiased datasets and develop fairness-aware algorithms that minimize discrimination. Regular audits of AI systems can help detect and correct biases before they cause harm.
- We need Strong Privacy Protections and Regulations: Governments must enforce fair data protection laws, ensuring AI systems respect privacy rights. Users should have control over their personal data, and companies should be held accountable for unethical data collection practices.
- We need Reskilling and Workforce Adaptation: Policymakers and businesses should invest in AI literacy and retraining programs to help workers transition into new roles in an AI-driven economy. Encouraging AI-human collaboration rather than outright replacement can mitigate economic displacement.
- We need Human-in-the-Loop AI Systems: AI should complement human decision-making rather than replace it, particularly in sensitive fields such as healthcare and criminal justice. Ensuring human oversight can help prevent unethical AI decisions and improve accountability.
- We need AI Governance: AI development requires global oversight and regulatory frameworks to prevent its misuse. International cooperation can establish ethical AI guidelines, preventing the proliferation of harmful AI applications such as autonomous weaponry and deepfake disinformation campaigns.
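The audits mentioned above can be made concrete. One common check is demographic parity: comparing a system's approval rates across groups and flagging large gaps for human review. A minimal sketch with hypothetical audit data:

```python
from collections import defaultdict

# Hypothetical audit log: (applicant group, model decision).
decisions = [("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
             ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0)]

totals = defaultdict(lambda: [0, 0])  # group -> [count, approvals]
for group, decision in decisions:
    totals[group][0] += 1
    totals[group][1] += decision

# Approval rate per group; the gap is the demographic-parity difference.
rates = {g: approved / count for g, (count, approved) in totals.items()}
gap = abs(rates["group_a"] - rates["group_b"])
print(rates, round(gap, 2))  # a large gap flags the system for review
```

Demographic parity is only one of several fairness metrics, and the metrics can conflict with one another; the point of an audit is to surface such gaps so that humans, not the algorithm, decide what counts as fair.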
We stand at a pivotal moment. By embracing transparency, fairness, clear guidelines, collaboration, and a commitment to responsible innovation, we can steer this powerful technology towards a future that truly benefits all of humanity. The future of AI is not predetermined; it is our duty, together, to shape it responsibly and ethically.