My near-philosophical musings about the world in general, its problems, and possible ways out.

2025-12-29

The danger is us humans - not AI

Every technological revolution exposes a deeper truth about us. Artificial intelligence is no different. In an interview published in Nature on 12 November 2025 [1], no less than Yoshua Bengio, one of the field’s pioneers, warned that modern AI may pose civilisation-level risks:

Whoever will control very advanced AIs in the future will have huge power. And they could use that power in ways that are good for them, maybe, but not for most of us. Democracy is about sharing power. If the power is concentrated in the hands of a few, that is not democracy — that is a dictatorship.

But there are other existential risks due to the power of AI in the wrong hands. There are people who, for ideological or mental health reasons, would press the red button — ask an AI to do something terrible that could cause the death of billions of people, like unleashing a new kind of extremely potent virus, or even mirror bacteria that could kill all animal life on this planet.

Previously, concerns and fears mainly revolved around a super-AI gone rogue, which might turn out to be humanity’s last invention and act as the “great filter”, one of the proposed explanations of the Fermi paradox [2], by eliminating its creators. Nick Bostrom’s work, for example, sharpened this picture by introducing existential risk as a serious, albeit uncertain, category of analysis [3]. Stephen Hawking made the idea vivid to a global audience, suggesting that “the development of full AI could spell the end of the human race” [4]. “AI godfather” Geoffrey Hinton, once sceptical of extreme timelines, has publicly increased his own probability estimates that AI could drive catastrophic outcomes in coming decades [5]. The precautionary camp’s flagship, the 2023 “Pause” letter, called for a verifiable moratorium on training systems more powerful than GPT-4 to secure time for alignment research and governance [6].

Yet I don’t completely buy their stories. Nor do others, most notably Yann LeCun. He argues that catastrophic fears of rogue AI are overblown, more science fiction than science, and distract from tractable safety and misuse problems [7]. His counter-argument is that current systems are brittle, narrow, and far from autonomous agency. LeCun urges attention to real near-term harms (misinformation, bias, safety), insisting that existential claims are “preposterous” and that progress, not panic, will reduce today’s risks [8]. Both camps agree on one thing: AI is accelerating, and policy must catch up.

AI systems are trained on human language, images and actions. They inherit our biases, conflicts and contradictions. When such systems scale, they amplify human impulses—good and bad. Weapons proliferate faster than ethics. The crux is not whether AI could be dangerous, but whether humans in competitive environments will deploy it dangerously. We have historical precedent: nuclear technology, bioweapons research, and information warfare all show that capability races outpace norm-building. Bostrom’s catalogue of existential hazards was never only about machines; it was about human systems under stress [9].

A curious paradox arises: the moment AI is roughly human-level may be the most dangerous—not because machines are godlike, but because they are fallible in human ways while extending our reach. They turbo-charge disinformation, cyber-offence, and bio-design, yet still lack robust common sense and values. In short, AI becomes as clever—and as stupid—as we are, but massively faster and cheaper. It’s the Stone-Age morality / Space-Age tools problem. Bengio explicitly worries about misuse and deception long before fully autonomous "superintelligence" [10]. LeCun’s rebuttal—that better AI will reduce present-day harms—may be true on certain margins, but only if deployment incentives and guardrails are aligned with the public interest [11].

As early as 2020 I had contemplated these risks [12], which have since been discussed widely. Back then I wondered: could it not well be that a real super-AI, once created and enjoying an independent life, would be the saviour rather than the terminator of humanity?

Saving Humanity from itself in the end?

AI could:

  • Out-optimise our worst biases in governance (evidence-led policy, model-checked laws).
  • Accelerate science (drug discovery, climate modelling, materials), buying time against other existential risks.
  • Detect manipulations and information warfare in real-time.
  • Stress-test critical infrastructure and offer resilient designs.

These are not mere fantasies; they echo UN and EU governance aims to harness AI for the Sustainable Development Goals while mitigating systemic risks [13]. Whether AI becomes a saviour, however, hinges on incentives, transparency, access for independent researchers, and global coordination; in short, on its socialisation.

Nevertheless, the most dangerous point in time could be when AI becomes as intelligent as we humans are, which also means: as stupid as we are. It would serve as a powerful weapon in the hands of a humanity whose morality has barely evolved since the Stone Age, while its technology has evolved spectacularly. This comes close to Yoshua Bengio’s position.

Yet, for now, we cannot know.

The decisive variable is not “AI’s essence”, but our choices—what we build, how we deploy it, and whether we can align private incentives with public goods.

The danger is us - and therefore, so could be the solution.


[1] Castelvecchi, D., & Thompson, B. (2025, November 12). ‘It keeps me awake at night’: machine-learning pioneer on AI’s threat to humanity. Nature.

[2] Walther, H. (2025, November). The Fermi paradox and humanity’s place in the universe. Horst Walther (Blog).

[3] Bostrom, N. (2013). Existential risks: An interdisciplinary analysis (PDF).

[4] Sample, I. (2014, December 2). Stephen Hawking: AI could be the ‘worst event in the history of our civilisation’. The Guardian.

[6] Future of Life Institute. (2023). Pause giant AI experiments: An open letter.

[7] Time. (2024). Yann LeCun on AI, AGI, and the future: Interview.

[8] See item 7 above

[9] See item 3 above

[10] See item 1 above

[11] Clark, M. (2025, March). Yann LeCun on AGI, superintelligence, and whether humans will be boss of AI. Business Insider.

[12] Walther, H. (2020, January 14). The Big AI controversy. LinkedIn Pulse.

[13] United Nations Secretary-General’s High-level Advisory Body on Artificial Intelligence. (2024). Governing AI for humanity: Final report. United Nations.
