
AI-assisted online grooming

As a former headteacher with a deep passion for safeguarding, I am greatly troubled by the increasing use of artificial intelligence (AI) by sex offenders to masquerade as children or young adults online, preying on vulnerable young individuals. This malicious and deceitful form of online grooming demands immediate action.

Online grooming is a significant concern for children and young people. It takes many forms, often involving adults who use fake profiles to impersonate young people, gain their trust, and manipulate them into sexual activity. The use of AI to create these personas, however, brings a new level of risk.

AI technology can create remarkably realistic avatars and chatbots that simulate human conversation and interaction. Sexual predators can exploit these tools to build fake profiles that are indistinguishable from those of real children or young adults, then use them to engage young people in conversation, win their trust, and develop relationships over time.

This form of grooming is particularly dangerous because it’s difficult for young people to distinguish between an AI persona and a real person. They may believe they’re talking to someone who understands and empathises with them, while, in reality, they’re being manipulated by a sex offender.

Another alarming aspect of this technology is its potential to write as a child or young adult. Advanced AI language models can produce text that convincingly imitates a real person's style of writing, including that of a child or young adult.

This raises the risk that predators will use AI to produce messages or emails in a child's or young adult's voice, deepening their relationships with young people online and making it even harder for those young people to detect the true motives of the person behind the AI-generated messages.

According to the UK’s National Crime Agency (NCA), the COVID-19 pandemic saw a significant surge in online child sexual abuse cases, with a 31% increase in reported incidents between April and September 2020 compared to the same period in the previous year. The NCA also noted an increase in the use of technology by predators, including the use of AI and machine learning to produce and distribute child sexual abuse material.

The UK-based charity the Internet Watch Foundation (IWF) reported a rise in AI-generated child sexual abuse material in 2020, highlighting that predators are using AI to generate and disseminate such material in greater volumes and at a faster rate than ever before.

To prevent such abuse, social media platforms and other online communication tools must invest in systems capable of identifying and blocking AI-generated messages. That means significant investment in AI and machine learning technology that can detect the patterns and language characteristic of machine-generated text.
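To illustrate the idea, the sketch below shows one very simple form such detection could take: a text classifier trained to separate human-written from machine-generated messages. Everything in it, from the handful of example messages to the feature and model choices, is an illustrative assumption rather than a description of how any real platform works; production systems are trained on vast labelled datasets and combine many signals beyond wording alone.

```python
# A minimal sketch of the kind of AI-text detection a platform might build.
# The tiny training set, feature choices, and model here are illustrative
# assumptions, not a production system: real detectors train on millions of
# labelled messages and combine many signals beyond character patterns.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = suspected AI-generated, 0 = human-written.
messages = [
    "hey lol u coming to the park later or nah",
    "omg that maths homework was soooo boring",
    "Hello! I completely understand how you feel. It can be difficult.",
    "That sounds wonderful. Tell me more about your day and your friends.",
]
labels = [0, 0, 1, 1]

# Character n-gram TF-IDF features feed a simple linear classifier.
detector = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
detector.fit(messages, labels)

# Score a new message: a higher probability suggests machine-like phrasing.
new_message = "I truly value our conversations. You can trust me completely."
probability = detector.predict_proba([new_message])[0][1]
print(f"Estimated probability of AI-generated text: {probability:.2f}")
```

Even a toy like this makes the central trade-off visible: detectors of this kind are only as good as the examples they are trained on, and they must be retrained continually as the language models they are trying to catch improve.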

Young people must also be educated about the risks of online grooming involving AI-generated messages. They need to be taught to spot red flags, such as messages that don't match how the sender has written before, or that seem too advanced for the sender's apparent age.

Parents, caregivers, and educators also have a critical role in protecting young people from the hazards of AI-assisted online grooming. They can monitor young individuals’ online activities, discuss online grooming risks, and encourage them to report any suspicious messages or behaviour to a trustworthy adult.

Preventing sex offenders from using AI to groom children online requires a comprehensive approach involving education, technology, law enforcement, and support services. Working together, we can create a safer online environment for young people, shielding them from the perils of AI-assisted online grooming.

If you’d like to discuss this matter further, please don’t hesitate to contact me at info@aboutsafeguarding.co.uk

www.aboutsafeguarding.co.uk

 
