Superintelligent AI – are we getting closer to the distant future?

Dr Keith Darlington
A few months ago, I wrote an article for Nation Cymru about Artificial General Intelligence (AGI). AGI is a stage of AI that can match a human at any intellectual task, and it is seen by many as the ultimate goal of AI progress. However, many in the AI community believe that achieving AGI could lead on to artificial superintelligence (ASI).
Unlike humans, sufficiently advanced AI programs could design and build improved copies of themselves. They would also have access to all the knowledge on the Internet, far surpassing human capabilities. The mathematician I. J. Good, who worked on breaking the Enigma code at Bletchley Park during World War Two, understood this.
In 1965, he wrote an article about the likely consequences of machines reaching parity with human intelligence. In it, he wrote:
“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”
Good was one of the first to realise that machine intelligence on a par with human intelligence could lead to an intelligence explosion with far-reaching consequences. Many high-profile AI researchers and other academics have since discussed artificial superintelligence and the consequences of an intelligence explosion that could follow.
Ways to get to ASI
In his well-known book, Superintelligence: Paths, Dangers, Strategies, published in 2014, the Oxford philosopher Nick Bostrom describes two possible routes to creating superintelligence: whole brain emulation (also called mind uploading) and seed AI.
Whole Brain Emulation (WBE)
Whole brain emulation, as the name suggests, would be achieved by scanning and modelling the computational structure of the biological brain. The reason this could lead to ASI is that, by emulating a human brain, scientists could create a digital copy that runs at much faster speeds than its biological original. It could also be backed up as further digital copies and experimented on without the limitations of a biological body.
However, creating a WBE would be a formidable task because the human brain contains approximately 100 billion neurons, each of which can be connected to thousands of other neurons. The complexity and cost of the equipment needed to perform such a task are huge. Nevertheless, since Bostrom's book was written over a decade ago, several major international brain-research projects have begun, and in some cases been completed, that aim to understand, map, and potentially even emulate the human brain.

For example, the European Union funded a 10-year project called the Human Brain Project (HBP), which brought together over 500 neuroscientists, clinicians, and engineers from across Europe to study the complexity of the human brain through computational methods and advanced technologies. The project concluded in 2023, reporting successful outcomes for neuroscience research. Nevertheless, full brain emulation remains a considerable distance away.
The Path from AI to ASI
Another possible path to ASI, known as seed AI, is to utilise existing AI paradigms to mimic human cognition. AI has been dominated to date by single-task applications, such as image recognition, medical diagnostics, or self-driving cars. AI can perform these tasks well, often surpassing human capabilities. Still, it is limited to a single task, unable to switch to another task requiring a different kind of knowledge.
Single-task AI has been dominant because it involves simulating human thinking only in a particular, limited context. Until a few years ago, it seemed unlikely that incremental improvement of these AI paradigms would lead to ASI.
However, the emergence of ChatGPT and other large language models (LLMs) over the last few years has dramatically changed this thinking. There is now a greater likelihood of human-level general intelligence emerging in AI systems. As I said in my earlier article, while it is unlikely that LLMs alone can achieve AGI, current research could open pathways that integrate LLMs with other capabilities, such as memory, reasoning, and embodiment, which could take them there – with superintelligence to follow some time later.
Conclusions
Only a few years ago, eminent AI experts were saying that ASI was unlikely to happen for at least another 50 to 100 years. The recent advances in AI have changed that mindset. In particular, the rise of LLMs, combined with other AI techniques, may significantly reduce that timeline.
Of course, there will be profound ethical and social implications for a society dominated by a machine intelligence superior to our own. It is brain power that has given humans dominion over the other animal species on this planet; otherwise, animals with greater physical power than us, like lions, would not now be facing extinction. Policymakers around the world need to be alert to these future AI developments, which bring many benefits as well as challenges for humankind.
Dr Keith Darlington is a retired AI university lecturer and author of five books on AI and computing topics, as well as over 70 magazine articles on AI and related subjects