AI and social media today: The fear of missing out

Ben Isaac-Evans
“Social media is a want, not a need.” Those were the words of my wife as we discussed the current state of social media.
She has a point.
It’s easy to forget that, however, when you’ve got a device in your hand that can connect you to people and places on the other side of the world and give some kind of answer to whatever question you ask of it.
It’s addictive.
The algorithms we constantly hear people talk about are, in a sense, transferring from our mobile phone handsets into our minds.
They demand our attention, and the makers of the platforms we use intentionally make them this way.
Mark Zuckerberg spent more than three hours in a Los Angeles courtroom in recent weeks facing questions over whether his company, Meta, causes addiction and harm in children.
The jury in the trial heard an email read out from former UK deputy prime minister Nick Clegg, who worked for Meta as its president of global affairs until last year, warning that “The fact that we have age limits which are unenforced makes it difficult to claim we are doing all we can.”
The problem is obvious at this stage. With the recent Epstein files story updating us daily on the horrific abuse of children, surely it’s time for social media companies to be held accountable for their actions.
What we see time after time though is that these companies are getting away with it.
Nation Cymru recently published an article by Emily Price explaining why she was deactivating her X account for good. I’m surprised that many more haven’t followed suit, especially politicians.
Elon Musk, the owner of X, has been seen throwing up a Nazi salute, and is now expanding his empire with Artificial Intelligence (AI) data centres to power his Grok AI.
As someone whose grandfather travelled all over Europe during WW2 fighting the Nazis, I find it alarming to see this behaviour accepted as if it were nothing, whether a joke or not.
Maybe we are becoming too desensitised to this behaviour. We are, after all, viewing a live-streamed, ongoing genocide in Palestine every day via our mobile handsets.
It can feel as though X is a place for our voice to be heard, and like most social media platforms it trades on a fear of missing out, or, as it’s now known, FOMO.

Artificial Intelligence
What many of us in Wales might not see is the detrimental effect Musk’s Grok AI data centres are having on the environment.
In Memphis, Tennessee, Musk has built a huge AI gigafactory featuring the world’s largest supercomputer, which he has named Colossus (after the 1970 film of the same name, in which a supercomputer becomes sentient and gains total control of the world).
What he hasn’t done, by the look of it, is engage with local environmental regulations. To power the facility, including the cooling its chips require, he has installed 35 gas turbines, which are now polluting the air for those living nearby.
When the site first opened, the smell was such that people reported waking up in the middle of the night thinking they’d left their gas ovens on.
With Musk on course to become the world’s first trillionaire, it’s no surprise that he isn’t concerned about any potential fines. They are spare change to him.
In the few months since Grok’s AI technology was implemented on Musk’s X platform, huge concerns have been raised about its being used to digitally strip women and children of their clothing.
The contents page alone of Laura Bates’s book, The New Age of Sexism, highlights a growing number of concerns AI is forcing us to face, including deepfakes, sex robots and AI partners.
There are also the more widely discussed concerns: the copying of authors’, illustrators’ and musicians’ work; the risk of robots taking people’s jobs; the impact on climate and water use; the effect on mental health; and AI taking control of our so-called nuclear deterrents.
Only yesterday I saw an article reporting that AI toys for children misread emotions and respond inappropriately. The list is growing at an alarming rate, it seems, along with AI’s development.
The question, then, is how we as a society determine the future of this technology and make sure that we the people decide its trajectory, not tech billionaires.
A group called Pull the Plug are trying to answer that very question, and on 28 February they held the biggest march the UK has ever seen in relation to AI.
The demonstration, called March against the Machines, was organised by a coalition of groups including Pull the Plug, Pause AI, Mad Youth Organise, Blaksox, and Assemble. These organisations are calling for AI technology to be made safe, and democratically controlled by the public.
The march passed the offices of Open AI, Meta and Google DeepMind in London.
The group emphasise that they are not trying to pull the plug on AI in a literal sense; rather, they demand a pulling of the plug on the AI billionaires (and, in Musk’s case, a soon-to-be trillionaire) who are making decisions for the rest of the world.
The group are planning a protest in Cardiff in the next few months but have no definite date yet.

Regulation
The main cause of concern for many parents is the regulation, or lack of it, of social media platforms.
A recently released Channel 4 documentary, Molly vs the Machines, brings the issue to the forefront through one father’s experience of the heartbreaking suicide of his 14-year-old daughter.
The film tells the story of Ian Russell, father of the late Molly Russell, who took her own life in 2017 after viewing more than 2,000 images of self-harm and suicide on Instagram in the last few months of her life.
Many things struck me while watching the programme, but the main thing was the hold these platforms now have over us.
Even after finding out what Molly was viewing on Instagram, her close friends admit they still use the app. This is not a criticism of them, just an observation that the felt need to connect via these apps persists even when a friend has been severely harmed by that very platform.
That is just one story among many, of course. In the US last year there was the case of 16-year-old Adam Raine, who also took his own life. He had been using the popular ChatGPT app, and after months of conversations the chatbot encouraged Adam to keep his intentions secret and offered to draft his suicide note.
His family are now suing OpenAI, the maker of ChatGPT. According to the Raine family’s lawyer, the safeguards ChatGPT needed were not in place because the company was in a race with Google to advance its technology. The case is ongoing, and we’ll no doubt hear more about it in the coming months.
In recent days I read a report showing that the government are using AI tools to process asylum claims, and that those tools have been shown to make factual errors. The report also states that asylum seekers haven’t been informed that AI is being used to process their claims, which raises the question of whether this is a General Data Protection Regulation (GDPR) violation.
On Wednesday, the UK Government backtracked on its position on copyright and AI. Among the artists opposed to allowing AI companies to use copyrighted material to train their models were Sir Elton John and Dua Lipa.
It feels very much to me that AI in its current form is creating more problems than solutions. Vast amounts of money are being invested in it, and many are predicting that the bubble will sooner or later burst – with an outcome similar to the banking crash of 2008.
With the technology developing at an incredible pace our politicians are in an unenviable place when it comes to shaping its future.
They must, however, listen to the voices of the most vulnerable and not the richest when it comes to that future.

