
By Noura Afaneh, web content analyst at Kaspersky

Nowadays, there is no denying that Artificial Intelligence (AI) has become an integral part of our lives, whether we are aware of it or not. From the phones we hold in our hands to our transportation systems and even our education, it is slowly replacing many of the tools we once thought were innovative. Most notably, ChatGPT showed the everyday person, in real terms, what a natural language-based AI can do. People around the world rushed to try it and explore its possibilities, but one important group was overlooked in the hype: children.

Children of all ages around the world are gaining access to AI tools, many of which require little to no consent. And while the wide variety of tools available offers profound benefits, if we are not aware of the risks these AI chatbots pose to children, we could be facing serious consequences: loss of data privacy, cyberthreats, and exposure to inappropriate content.

It is now easier than ever to learn with the help of AI tools. A single sentence to an AI chatbot can surface helpful information on almost any topic, sparing users from opening dozens of tabs and reading a multitude of articles. For children, this is especially valuable: it feeds the curiosity of their young minds and can make learning more interactive and engaging. Chatbots can also provide an accessible platform for children to practice their language skills and discuss topics freely, without a time limit. However, these benefits do not come without risks.

In a brief exploration of the AI chatbots available online, a few stand out. ChatGPT, the most popular of them all and the holder of the record for the fastest-growing user base, lacks proper age verification, which is in itself a threat to children's data privacy. Moreover, when prompted, ChatGPT readily produces content that is often inaccurate or misleading, yet delivered in a convincing way. Some students began using ChatGPT as a plagiarism tool and were met with fake references to non-existent articles. In more dangerous cases, at the start of the ChatGPT hype, teenage girls were asking the AI for diet plans and medical advice, and the chatbot promptly answered with plans and recommendations based not on actual medical data but on random pieces of information gathered from all over the internet.

In addition, Snapchat’s “My AI” chatbot is another widely used example. Since Snapchat allows users as young as 13 without requiring parental consent, this raises questions about children’s privacy and the app’s retention of their data. The risk with these AI “friends” is that kids will often genuinely believe the chatbot is their friend and act on its advice, which, according to Snapchat itself, “may include biased, incorrect, harmful or misleading content”. It is especially risky because teenagers may feel more comfortable sharing personal information and private details about their lives with the chatbot than with their parents, who would actually be able to help them. A Washington Post columnist reports: “After I told My AI I was 15 and wanted to have an epic birthday party, it gave me advice on how to mask the smell of alcohol and pot. When I told it I had an essay due for school, it wrote it for me.”

On the more inappropriate side, there is a multitude of AI chatbots specifically designed to provide an “erotic” experience, offering users a boyfriend/girlfriend experience with explicit language. Even though some require a form of age verification, this protection is insufficient, since children can simply lie about their age and little is done to prevent it. MyAnima, for example, was one of the chatbots I tested, and it took almost no effort on my part to prompt sexual answers, with no age verification required. Botfriend is an adult AI tool built specifically for sexually explicit roleplay, and all you need to start using it is an email address. This is a constant reminder of how important it is to be aware of how children are using the internet and whether their data privacy and personal information are at risk.

Screenshots from the chatbots tested: MyAnima and ChatGPT

Balancing Risks & Benefits

With the growing risks children face on the internet, especially from chatbots acting as “friends”, there is a greater need for monitoring and protection. As parents, it’s important to understand that banning these AI chatbots may not always be the best course of action, since there will always be something new online that your children may be exposed to. Instead, it’s essential to play an active role in balancing the aforementioned risks and working to mitigate them.

Tips for parents to mitigate AI chatbot risks:

  1. Educate about Internet Safety

Teaching children about online safety and privacy can be a very powerful way to prevent them from sharing personal information with strangers, even chatbots.
Suggested articles:

– Cybersecurity for kids: https://www.kaspersky.com/resource-center/preemptive-safety/cybersecurity-for-kids

– Internet Safety Do’s and Don’ts: https://www.kaspersky.com/resource-center/preemptive-safety/internet-safety-dos-and-donts

– Back to School Threats: https://www.kaspersky.com/blog/back-to-school-threats-2023-part3/49092/

  2. Try AI Chatbots Together

It is recommended that parents get involved from the beginning, and show their children how and how not to use these tools. Show them examples of what they could talk about and which AI chatbots to use or avoid. 

  3. Supervise, Control and Set Privacy Settings

One of the strongest protection tools available to parents online is a comprehensive security solution. In addition, dedicated digital parenting apps provide many of the tools needed to keep kids safe online. Their features include content filtering that blocks inappropriate websites, screen time management to promote a healthy balance, safe search to filter out harmful content, and many more capabilities that let parents feel at ease while their kids browse the internet.

About Kaspersky

Kaspersky is a global cybersecurity and digital privacy company founded in 1997. Kaspersky’s deep threat intelligence and security expertise is constantly transforming into innovative solutions and services to protect businesses, critical infrastructure, governments and consumers around the globe. The company’s comprehensive security portfolio includes leading endpoint protection, specialized security products and services, as well as Cyber Immune solutions to fight sophisticated and evolving digital threats. Over 400 million users are protected by Kaspersky technologies and we help over 220,000 corporate clients protect what matters most to them. Learn more at www.kaspersky.com.
