Support our Nation today - please donate here
Opinion

ChatGPT has given AI a massive boost but there’s much more to come

04 Oct 2025 7 minute read

Dr Keith Darlington

Over the years, I have written several articles for nation.cymru about AI. The first was written in 2018 when I predicted that AI would make an impact in the 2020s.

At that time, I would frequently meet people who had never heard of it or didn’t know what it was. Not any more: hardly a week passes without AI featuring in the news.

It has become well and truly ubiquitous in recent years, finding uses in the smart devices in our homes, our smartphone notifications, NHS diagnostics, robotics, driverless cars, and much more.

AI has suddenly become something that everybody talks about. However, the surge in AI activity in recent years is mainly due to a paradigm shift in AI, towards the creation of Large Language Models. In this article, I describe what they are, their benefits and downsides, and how they may evolve in the future.

Large Language Models

AI use erupted in December 2022, when ChatGPT, developed by OpenAI, was released to the public. Within five days it had over a million users, and it now receives 2.6 billion prompts daily. ChatGPT was the first widely used example of a kind of AI known as a Large Language Model (LLM).

It interacts with the user through a chatbot, its primary purpose being to deliver content in the form of text. A chatbot is simply a means of conversing with human users: questions or prompts are usually typed on a keyboard, and ChatGPT’s answers appear in the chatbot window.

ChatGPT is expected to surpass a billion regular users before the end of this year, and many other LLMs compete with it for a share of the AI market. Since their inception, LLMs have been continually improving and evolving at a phenomenal rate. Google has now incorporated its own LLM, called Gemini, into its search engine. Microsoft has built an LLM called Copilot into Microsoft Office, the productivity suite many readers will know for everyday business tasks. There are many other LLMs in everyday use, and new variants are frequently released.

Google Gemini

How they work

LLMs work by first being trained on vast amounts of text data drawn from the Internet, including books, websites, and articles. In doing so, they learn word patterns and the relationships between words. The amount of training data used is enormous, estimated to span over 3 billion web pages. Once trained, they can answer questions on almost every topic imaginable, replying almost instantly and providing as much detail as the user requests in their prompt.

For example, suppose you asked the question “What is the purpose of a carburettor?”. The LLM analyses your query, retrieves relevant patterns from its training data, and generates a coherent, contextually appropriate answer by predicting the most likely sequence of words describing a carburettor’s function. LLMs have also been trained to give carefully curated responses appropriate to the educational level and style the user requires.
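For readers curious about what “predicting the most likely sequence of words” looks like in practice, here is a deliberately tiny sketch in Python. It is not a real LLM: the probability table is hand-written rather than learned from billions of pages, but the loop — look at the recent context, pick the most probable next word, repeat — is the same basic idea.

```python
# Toy illustration of next-word prediction (not a real LLM).
# A trained model assigns probabilities to possible next words;
# here we hard-code a tiny table standing in for what training learns.
next_word_probs = {
    ("a", "carburettor"): {"mixes": 0.7, "is": 0.3},
    ("carburettor", "mixes"): {"air": 0.9, "fuel": 0.1},
    ("mixes", "air"): {"and": 1.0},
    ("air", "and"): {"fuel": 1.0},
}

def generate(prompt_words, steps):
    words = list(prompt_words)
    for _ in range(steps):
        context = tuple(words[-2:])      # the last two words form the context
        candidates = next_word_probs.get(context)
        if not candidates:
            break
        # Greedy decoding: always take the most likely next word
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(generate(["a", "carburettor"], 4))
# a carburettor mixes air and fuel
```

A real model uses a neural network over thousands of words of context rather than a lookup table of two, but the generation loop is recognisably this one.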

Statcounter AI Chatbot global market share (May 2025)

Towards AGI

In June 2025, I wrote an article about Artificial General Intelligence (AGI). AGI is the stage of AI evolution considered by many in the AI community to be the Holy Grail of success: the point at which AI can match a human at any intellectual task. AGI is a long way off at present, but progress with LLMs is seen as a necessary condition for achieving it.

Progress with LLMs has been rapid since the release of ChatGPT. The original ChatGPT was built on a model called GPT-3.5; GPT-4 followed soon after, and a more powerful version, GPT-5, was released in August 2025. Not only are these newer versions trained on larger datasets, but they can also deliver content as voice responses and draw images of anything – real or imagined. More importantly, newer emerging features, such as a limited ability to reason, are now incorporated. This means they can transcend their original purpose of simply creating content: they can help to automate tasks that save us time and effort. This is known as Agentic AI.

Agentic AI

Agentic AI uses LLMs as a “brain” to perform actions through AI tools, called agents, that they can access. The emphasis is on taking action rather than just delivering knowledge and information. Agentic AI can work independently and autonomously, performing specific tasks without the need for human oversight.

This is very different from the non-agentic use of an LLM as a passive AI tool. In non-agentic use, the user has to perform each search manually, read each result, and then ask the LLM to summarise it. In an agentic system, the LLM serves as the autonomous driver, managing the entire workflow from start to finish. In other words, in Agentic AI the LLM is not just a chatbot; it becomes a reasoning engine that plans, uses AI agents, and synthesises the required information.

For example, suppose I wanted to start a wine-making business in South Wales. I might give the LLM a prompt such as “Research the wine-making market in South Wales for the next two years, identify opportunities and pitfalls, and report a strategy, with specifics about the best area to choose, dates to start, and so on.” An LLM with Agentic AI doesn’t just search for this phrase. It reasons about what needs to be done, breaking the task into subtasks and getting AI agents to go off and find the answers to those subtasks.
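The plan-then-dispatch pattern just described can be sketched in a few lines of Python. Everything here is hypothetical scaffolding: in a real agentic framework the `plan` step would itself be generated by the LLM, and each agent would call live search tools or APIs rather than returning a placeholder string.

```python
# Minimal sketch of the agentic pattern: a planner breaks a goal into
# subtasks, an agent handles each one, and the results are synthesised.
# All function names here are illustrative, not a real framework's API.

def plan(goal):
    # In a real system the LLM generates this breakdown itself.
    return [
        "research the South Wales wine-making market",
        "identify opportunities and pitfalls",
        "recommend locations and a launch date",
    ]

def run_agent(subtask):
    # Stand-in for an agent that would call search tools or APIs.
    return f"findings for: {subtask}"

def agentic_run(goal):
    results = [run_agent(t) for t in plan(goal)]
    # The synthesis step, which an LLM would normally perform.
    return "\n".join(results)

print(agentic_run("Research the wine-making market in South Wales"))
```

The point of the sketch is the division of labour: the user supplies one high-level goal, and the system, not the user, decides which subtasks exist and runs them.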

Research in Agentic AI has only just begun, but there is more to come, with subsequent developments likely to deliver agents that collaborate autonomously with very little human input. This would have been unthinkable only a few years ago. However, there is a downside to Agentic AI that is attracting much controversy at present.

The Environmental Cost of LLMs 

One of the downsides of LLMs is the immense computational power needed for both training and ongoing operations, which means they consume large amounts of energy and water. The technology giants are building AI datacentres in many parts of the world to accommodate the training and operational needs of LLMs, so much so that environmentalists are now raising concerns about AI’s environmental impact. The AI technology companies, however, present AI as a solution to the most significant challenges of our time, including the climate crisis. It remains to be seen whether their claim is true.

One final point: much was made of the massive £31 billion of AI datacentre investment in the UK during President Trump’s September visit. The technology giants Microsoft, NVIDIA, and Google have all committed to building an AI infrastructure, but that will require access to copious amounts of energy. The Nvidia CEO, Jensen Huang, said that the UK could be an AI superpower, but added that in addition to sustainable power such as nuclear, wind, and solar, he hopes gas turbines will also contribute. It remains to be seen whether that can align with the UK government’s net zero goals.

Dr Keith Darlington is a retired AI university lecturer and author of five books on AI and computing topics, as well as over 75 magazine articles on AI and related subjects



6 Comments
smae
2 months ago

I expect data centers to be launched into space at some point. Quite frankly there’s no need for them to be on earth and they would have both a limitless amount of energy from the sun (no night time) and also plenty of cooling (space is cold).

What needs to be harvested is tidal energy. There are many places, particularly along the Welsh coast, that could provide huge benefits, not just for securing energy but also for decreasing coastal erosion through the use of lagoons and such; this could provide much of the power we need.

Adrian
1 month ago
Reply to  smae

Tidal energy is too expensive to implement and maintain: that’s why there’s so little take-up. If it were viable the private sector would be doing it.

Mawkernewek
1 month ago
Reply to  smae

This would be quite a long way away at this point, it would require a substantial amount of infrastructure to be launched into space, with a larger mass than current satellites, either to launch them from Earth, or build the infrastructure to manufacture them on the Moon or the asteroids.

Adrian
2 months ago

It’s woefully woke, so not worth engaging with.

Mawkernewek
1 month ago
Reply to  Adrian

Is anything you don’t understand ‘woke’ whatever you mean by that?

Though of course, there are a lot of people who don’t understand AI who talk about it a lot anyway saying it will either save the world, or destroy the world, so not engaging because you don’t have anything useful to say could be a positive choice.

Mawkernewek
1 month ago

I think one of the problems is the industry seems to think that a Large Language Model is the one and only way to do AI and make progress by throwing more and more data and compute time at it.

This might work well for certain specific tasks but it doesn’t really bring general intelligence.

