


Natalie: Hello and welcome to Safeguarding Soundbites, with me Natalie.

Colin: And me, Colin. If you haven’t tuned in before, this is the podcast that brings you all the latest safeguarding news, advice and updates.

Natalie: But today’s episode is an extra special edition as it’s our bumper summer special.

Colin: That’s right, Natalie. We’re going to be catching up on the top stories from the last month, including our regular social media updates and the latest news about AI-generated child sexual abuse imagery and the Online Safety Bill.

Natalie: Let’s start off with social media where there’s been a major change over on Twitter…

Colin: [ahem], do you mean over on….X?

Natalie: Oh gosh, this is going to be difficult! But yes, Elon Musk has announced that the social media platform formerly known as Twitter is going to be called X from now on. As of recording, the logo has been changed to an X and there’s no more Larry the bird!

Colin: I’m sorry, Larry the who?

Natalie: The bird! Did you not know the Twitter bird logo was called Larry?

Colin: I didn’t! You learn something new every day.

Natalie: Every day’s a school day!

Colin: It is indeed. So it’s now called X, is that confirmed?

Natalie: It seems to be! We’ll keep you listeners up to date with any more changes in the future. And I’ve also got a quick update for you on Threads.

Colin: Threads is Meta’s competitor version of Twitter – sorry, X – yes?

Natalie: Yes. Launched last month in July, it saw instant success, with over 100 million users signing up in the first few days. However, it looks like that success may have been short-lived, as it’s been reported that the platform’s daily active user numbers have dropped by around 70% since launch.

Colin: Oof. Do you think it’s…hanging by a thread?!

Natalie: I’m going to ignore that pun, Colin! I think it’s too early to tell and, as X keeps making changes, we don’t know how that might impact people’s decisions to hop onto an alternative. We do know that Meta plans to add Threads to the fediverse, which is going to be a sort of grouping of several social media apps, including Mastodon.

Colin: Natalie, before we go any further – for those who are super confused right now…we have X, we have Threads, now we have the fediverse. Can you just explain in simple terms what the fediverse is?

Natalie: Basically, the fediverse is made up of thousands of different social media servers that can communicate with each other. Rather than your standard social media platforms like Instagram and Twitter, right now it’s platforms like Mastodon, PeerTube and Lemmy – less mainstream ones – and it also includes platforms for blogging, podcasting and so on. The important part is that users can all communicate with each other, as if they were all on a single social media network.
The concern there is that a recent study found high amounts of child sexual abuse materials on Mastodon – 112 instances of known child sexual abuse material across 325,000 posts.

Colin: Wow. And Mastodon is a decentralised social media platform – which means it’s not just one server, one place, hosting everything but people can set up their own servers. So essentially there’s less oversight on what’s being hosted and posted there.

Natalie: Exactly. And a server might only have one moderator – one person to check what’s being posted there. So if Threads joins this fediverse, it means people can cross-communicate between apps. People on Threads can chat to people on Mastodon. Which, from a safeguarding point of view, is worrying. But Threads isn’t currently part of the fediverse, it’s just a plan. So another ‘watch this space!’

Colin: And watch it we shall! Or listen to it anyway, here on Safeguarding Soundbites! Thank you for our social media updates, Natalie.

Moving on now…a BBC investigation has found that artificial intelligence (AI) is being used to create and sell child sexual abuse material. By using AI software designed to generate images, predators are able to create realistic materials. The Internet Watch Foundation has also confirmed their own investigations found evidence of AI-generated child sexual abuse imagery. They have called on the Prime Minister, as well as AI companies, to do more to prevent the abuse of AI tools and protect users from the spread of this type of content.

Natalie: Wow – and to be clear, Colin…even though it’s AI-generated, this type of content is still illegal, there’s no ‘grey area’ around this because it’s AI?

Colin: Yes, AI-generated images of child sexual abuse are illegal in the UK. And the Internet Watch Foundation also very rightly pointed out that this isn’t a victimless crime – having this type of imagery out there can a) normalise it and b) make it harder to spot when real children are in danger.

Natalie: It’s frightening, actually. Especially as AI imagery becomes more realistic. So, what’s the solution?

Colin: Well, firstly, there need to be changes in how AI models work so that, quite simply, this type of content cannot be created. There also needs to be enough development in the technology so that AI-generated content can be identified easily. And, of course, the platforms and websites that predators use to share and sell this content need to keep improving their systems to identify it, remove it quickly and address the people accessing it. Secondly, the growth in CSAM is also directly linked to the government’s failure to ensure there are meaningful deterrents. We need to remember this is about people’s behaviour, not the technology – it’s people telling the technology what to create.

Natalie: Thank you, Colin, for those important points and reminders. Now, that actually brings me on to our next story, which is an update on the Online Safety Bill. A loophole, apparently discovered by The Telegraph, has now been closed. The loophole meant that, whilst the senior management of tech companies could be held criminally responsible for persistent failures to protect children from online harms like content on eating disorders and self-harm, this didn’t cover child sexual abuse material.

Colin: That’s a big oversight. So they’ve closed that loophole now?

Natalie: Yes, exactly. The government have now added a new amendment to the bill so that tech executives will be held to account if they fail to tackle child sexual exploitation and abuse content.

Colin: And was there another amendment? Something to do with algorithms?

Natalie: Yes, so basically the Online Safety Bill – which, for anyone listening who’s not sure, is designed to regulate how tech companies and services like social media platforms safeguard their users – puts laws in place that mean they have to protect users from harmful and illegal content. That has previously only applied to content, like videos, images, messages, etc. But this other new amendment will now hold those same companies to account for algorithms that push users towards harmful content too.

Colin: So algorithms work by suggesting content to a user based on what they’ve liked or visited previously, or what someone with a similar user profile has liked before – someone the same age, for example.

Natalie: Yes and so the issue has been that once a user has, for example, looked at harmful eating disorder content on their social media account, that algorithm will continue suggesting more content about eating disorders to them.

Colin: And if it’s something a young person is struggling with – eating disorders, for example – every time they go on social media, that content is there again. And it might feel like there’s no escape from seeing it.

And, actually, while we’re on the subject, there’s been new research from Childline showing that more than 100 children in Northern Ireland reached out for help about body image issues and eating disorders over the last year, with 40% of their counselling sessions taking place over the summer months. So we know young people are struggling with these issues – which aren’t helped if every time they go on social media, it’s there again.

Natalie: Exactly. So we will see what happens with this new amendment and how it all works out.

Colin: It sounds like some positive steps are being made.

Natalie: It does! And speaking of good news, Colin…

Colin: Yes! It’s time for our safeguarding success story of the week!

Natalie: Take it away!

Colin: So this episode’s safeguarding success story is that the UK games industry has announced plans to restrict access to loot boxes in games for children. Loot boxes are basically those in-game purchases that assign the contents at random. So, for example, you buy a loot box and you might get something really valuable or a really rare item, like limited edition clothes for your game character or loads of experience points. Or you might get something that’s common or just not that exciting.

Natalie: So it’s a bit like gambling?

Colin: That’s the concern and you’re certainly introducing that as a concept to children. So these new plans, which are a set of guidelines, are a step to protecting children and young people from that.

Natalie: Do we know what any of the new guidelines are?

Colin: Yes, so there are eleven of them, but I’ll just outline the key principles – they include preventing anyone under 18 from getting access to loot boxes without the permission of a parent or carer. There are also plans to launch a public information campaign to raise awareness.

Natalie: To quote you a few minutes ago, sounds like positive steps!

Colin: Indeed! Well, that is everything from us today. We won’t be back again next week…

Natalie: Boo!

Colin: But we will be back again next month…

Natalie: Hooray!

Colin: And, of course, you can keep up to date on everything we’re doing and any safeguarding updates on the free Safer Schools NI App and our website saferschoolsni.co.uk.

Natalie: And you can also follow us on social media by searching for Safer Schools NI. Until next time, goodbye and…

Both: Stay safe!


