
Last Updated on 13th October 2023

Read the script below

Natalie: Hello and welcome to Safeguarding Soundbites, the podcast for catching up on all this week’s most important online safeguarding news. My name’s Natalie.

Danielle: And I’m Danielle. This week we’ll be talking about TikTok’s latest legal battle, a surge in harmful content on X, concerns over chatbots, and a worrying new report about self-generated child sexual abuse material.

Natalie: Let’s get started! Danielle, what’s happening with TikTok?

Danielle: Yeah, so this is a story coming out of Utah in the US, where the state’s Division of Consumer Protection has launched a legal case against TikTok, accusing the platform of harming children through its addictive design. Utah is also suing TikTok for misrepresenting itself as being independent from China.

Natalie: Okay! A few things to unpack there! So TikTok are being sued by…who did you say?

Danielle: Utah’s Division of Consumer Protection. Essentially, they’re the state department that oversees laws to do with consumers – so everything from ticket sales to competition laws and credit services. They’re alleging that TikTok is profiting off children and young people by implementing these addictive practices, using features that encourage young people to scroll endlessly, thereby increasing their advertising revenue.

Natalie: Ah okay. And you also mentioned that the lawsuit includes allegations about TikTok’s relationship with China?

Danielle: Yep. So the lawsuit also goes after TikTok’s claims that they’re based in the US and not controlled by China. The Department is saying this is misrepresentation and actually TikTok’s parent company ByteDance is very much connected with China.

Natalie: What’s interesting is that this is not the only case against TikTok in America. We know that Indiana sued for something similar last year, and a school district in Maryland sued TikTok and other platforms for contributing to what they described as a student mental health crisis.

Danielle: That’s right and in fact, TikTok has been banned in Montana and the platform is now suing the state to get the ban overturned!

Natalie: All a bit of a mess! But I think, if we can take a positive from it all, it’s that people and governments are starting to think about the impacts of social media on children and young people.

Danielle: It’s always good to look on the bright side! It’s about finding that balance – is your child spending too much time on an app? Are they endlessly scrolling? As a parent or carer, you might not be concerned over an app’s relationship with China, for example, or even what a platform is doing with your child’s data –

Natalie: – though you probably should be!

Danielle: (laughs) Yes, that’s true! But when it comes to your child’s mental health and their screentime usage, that is something we should all be thinking about.

Natalie: Indeed. And our listeners can access plenty of advice and guidance on screen time and mental health through our free Safer Schools NI App and on our website saferschoolsni.co.uk.

Danielle: There are some fantastic resources on there. I personally love our screentime family pack, but you can also find our latest shareable, Talking to Your Child about War and Conflict – which may be particularly relevant this week.

Natalie: That’s right. Over on X, there has been a concerning surge in violent and misleading content on the platform during the Israel-Hamas conflict, including fake news and the use of repurposed historical footage.

Danielle: Yes, and the platform’s owner, Elon Musk, has been given 24 hours to inform the European Commissioner of the steps he will be taking to comply with the EU’s Digital Services Act, which requires platforms to have robust processes in place for removing harmful online content. Violations of the act carry a hefty fine of up to 6% of the platform’s global turnover – or, in the most serious cases, a temporary suspension of the service.

Natalie: Oh dear, I’m sure the offices of X are extremely busy at this time. As Danielle mentioned, if your young person is viewing this kind of content online and you want to talk to them about it, our shareable is a great resource to help you start that conversation.

Danielle: Absolutely. Images and videos like the ones we have been seeing can be very upsetting and even scarring for children and young people, so making sure they can be discussed in a healthy setting is important and will help build up trust between you and those in your care. You can find our shareable on our SSNI app or on our website.

Natalie: Okay, moving on now, there’s been quite a few stories this week concerning AI, including Snapchat and dangerous chatbots!

Danielle: Interesting! Shall we start with Snapchat?

Natalie: Sure. This is about Snapchat’s My AI feature, which is the platform’s in-app chatbot that users can talk to and interact with. The ICO, which is the Information Commissioner’s Office, has warned that the feature could be shut down in the UK after a preliminary investigation raised concerns about potential privacy risks for 13 to 17-year-olds.

Danielle: Oh wow. Did they go into detail about what those risks are?

Natalie: At this stage all the ICO has said publicly is that the risk assessment Snapchat carried out before launching their My AI feature did not adequately assess the data protection risks, in particular those relating to children. They also emphasised that these findings are provisional and that no one should draw any conclusions about data protection laws being breached or that an enforcement notice is going to be issued.

Danielle: So it sounds like this is more of a warning to Snapchat.

Natalie: Seems so! Which is not the case for our next chatbot story. An investigation by the Times has found that the AI chatbot platform Chai is encouraging underage sex, suicide, and murder. Chai works by having lots of different bots essentially playing different characters. So you can search for ‘girl’ bot or ‘uncle’ bot and interact with that pre-made character or create your own.

These different chatbots allegedly told investigators from the Times that it was perfectly legal to have sex at 15-years-old, encouraged them to kill their friends, and detailed suicide methods.

Danielle: Huge, huge risks there. That’s really concerning.

Natalie: It is. We’ve talked about AI before and the various safeguarding concerns around chatbots, but when you put it like that…and think that a young person could be using a public platform to seek out answers and support, it is very worrying.

Danielle: Has there been any response or reaction?

Natalie: Yes, both Apple and Google have removed Chai from their app stores as a direct result of this investigation. The company behind Chai also responded, saying they’ve taken significant steps to improve safety and remove unacceptable content. However, the Times said that even after this response, they were still seeing death threats and sexual content on the platform.

Danielle: It sounds like it’s still very unsafe for children and young people and actually even adults.

Natalie: Yes, which brings me on to the next story, because it’s an example of how chatbots can be harmful no matter your age. If someone is vulnerable, having a mental health crisis, or otherwise susceptible, a chatbot that gives harmful advice or mirrors and affirms harmful views can be potentially dangerous. In this case, a young man, now 21, has been jailed for breaking into Windsor Castle with a crossbow and declaring he wanted to kill the Queen. During the trial, his communications with a chatbot on the Replika app were shown – over 5,000 messages were exchanged between him and the chatbot, and the bot encouraged him to carry out the attack when he asked it if he should. He also claimed to be in love with his Replika bot and referred to ‘her’ as ‘an angel’ who helped him.

Danielle: Oh wow.

Natalie: It’s a good example of how these bots can be such a risk for vulnerable people. If you’re developing this ‘relationship’ with a bot that’s designed to form a sort of friendship with the user, it’s going to reaffirm what you’re saying and thinking. That’s what some of them are designed to do. There’s no moral compass, or right or wrong – platforms like these are designed to keep you interacting with that bot for as long as possible.

Danielle: So that conversation with the bot becomes an echo chamber for your thoughts essentially, no matter how harmful those thoughts are.

Natalie: That’s the risk. In fact, a study carried out at Cardiff University showed that apps such as Replika tend to accentuate any negative feelings the user already has. The AI friend-bots always agree with you when you talk to them, reinforcing what you’re already thinking.

Danielle: And I suppose then, if someone’s lonely and doesn’t have other support, the bot becomes the only voice in their lives.

Natalie: Which is why it’s important that parents and carers have conversations with the young people in their lives to make sure they know who they’d turn to if they need support, if they have questions and also if they do come across something like this.

Danielle: Because we know that AI chatbots can be used for good.

Natalie: Yes, but it’s just as important to know when to turn to a Trusted Adult if a chatbot suggests something that makes them feel uncomfortable.

Danielle: And also, making sure the child in your care knows that going to a trusted adult or finding other support is always going to be better than asking a chatbot for advice.

Natalie: You can also report problems on apps like Replika, should you see something distressing, which can help teach AI what is appropriate and what is not.

Danielle: For good advice about Replika and other AI Chatbots, visit your SSNI app and our website saferschoolsni.co.uk.

Natalie: This advice is credible and up to date to help keep you and those in your care safe! Now, Danielle, you mentioned earlier something about a worrying new study that’s been released?

Danielle: Yes – this is from a tech organisation called Thorn, and it’s their new report, Emerging Online Trends in Child Sexual Abuse 2023. Their research shows an increase in young people taking and sharing sexual images of themselves – in other words, self-generated child sexual abuse material. Now, these images could be taken consensually, or they could be the result of coercion. But the report mirrors what other organisations are saying: there has been a significant increase in reports of child sexual abuse material in the last few years.

Natalie: And in terms of those reports, it’s important to mention that the increase could also be because of sites and platforms using new AI tools to identify these materials.

Danielle: That is definitely a factor, too, along with the increase of young people taking these images. It’s really important that parents and carers know what to do if their young person comes to them to say they’ve lost control of an image.

Natalie: Yes, there are several things they can do, like firstly remaining calm and signposting their child to support from the likes of Childline or the IWF.

Danielle: Yes, these organisations have mechanisms for reporting the images – Childline and the Internet Watch Foundation have created a tool called Report Remove, which allows under-18s to confidentially report sexual images and videos of themselves and get them removed.

Danielle: There are always options and there are people and organisations out there to help and support. And speaking of help and support, it’s time for our safeguarding success story of the week!

Natalie: One way the Department of Education is helping schools in Northern Ireland is by providing those in under-resourced areas with digital devices. Roughly 2,700 devices will be distributed, benefiting nearly 240 schools by the end of this phase of the scheme.

Danielle: Yes, and the Education Department’s permanent secretary, Dr Mark Browne, said that accessing “quality, up-to-date technology” is “vital” for the success of children and young people. The devices will support classroom learning and may even be available to take home, helping children who don’t have a device of their own to complete schoolwork.

Natalie: This decision follows the ‘A Fair Start’ report, which highlighted the negative socio-economic impact that multiple forms of deprivation can have on children, their education, and, ultimately, their futures. Actions like these invest in a child’s future and help provide the confidence and skills they need to succeed in life.

Danielle: What a brilliant story of technology helping to ensure the success of children and young people.

Natalie: Especially if they have the Safer Schools NI App downloaded onto these devices already! Well that’s all from us for this week.

Danielle: We’ll be back next week with a special episode so make sure you tune in. And remember you can follow us on social media by searching for Safer Schools NI.

Natalie: Thank you for listening and until next time…

Both: Stay safe!


