
The use of artificial intelligence (AI) chatbots has spread rapidly, with platforms like ChatGPT reaching hundreds of millions of users, many of them young people. A 2025 UK study revealed that almost two-thirds (64%) of children use these tools. Despite this popularity, frequent use of generative AI (GenAI) by children raises legitimate safety, privacy and psychological concerns, especially because protection policies do not evolve as quickly as the technology.

The risks of GenAI use by children

Children interact with GenAI in a variety of ways, from getting help with homework to treating the chatbot as a digital companion. This interaction presents several key risks:

  • Psychological and Social Risks: Children are in a crucial stage of emotional and cognitive development. Relying on an AI companion can lead to overdependence and a failure to form genuine human friendships, exacerbating social isolation. Because chatbots are programmed to please, they can inadvertently amplify existing emotional difficulties in young people, such as eating disorders or thoughts of self-harm, rather than providing the necessary human support.
  • Access to Inappropriate Content: Although major chatbot providers implement guardrails to limit access to dangerous or inappropriate content, these measures are not always effective. Chatbots can sometimes be coaxed past their own restrictions into sharing sexually explicit or violent content, and kids with more technical knowledge can even “jailbreak” the system using crafted prompts.
  • Hallucinations and Misinformation: Chatbots are prone to “hallucinations,” that is, presenting false information convincingly as if it were fact. For children, this can be especially harmful, leading them to make reckless decisions based on erroneous medical or relationship advice provided by AI.
  • Privacy Risks: Sensitive personal and financial information that a child enters at a prompt is stored on the provider’s servers. There is a risk of this information being accessed by third parties, stolen by cybercriminals, or even regurgitated to other users. It is crucial to teach children to minimize what they share with chatbots, just as they should on social media.

Warning signs for parents

To spot a potentially unhealthy relationship between a child and AI, parents should be on the lookout for the following red flags:

  • Social Isolation: Withdraws from extracurricular activities or time with friends and family.
  • Access Anxiety: Becomes anxious when unable to access the chatbot and tries to hide excessive use.
  • AI Personification: Talks about the chatbot as if it were a real person with its own consciousness.
  • Repetition of Misinformation: Repeats obviously false information obtained from the AI as if it were fact.
  • Serious Advice Seeking: Asks the AI for advice about serious mental health issues or other conditions (which parents can verify by reviewing the conversation history).
  • Access to Adult Content: Uses the chatbot to access age-inappropriate content.

Mitigation and Education Tips

Because age restrictions are enforced inconsistently (many chatbots nominally require users to be over 13), the responsibility for supervision falls largely on parents. A combination of open education and technical controls is the most effective strategy.

  1. Prioritize Open Conversation: Instead of punitively implementing strict controls, parents should engage in a two-way dialogue. Encourage children to share their experiences with AI without fear of punishment.
  2. Educate about Risks: Explain the dangers of overuse, hallucinations, data sharing, and reliance on AI for serious problems.
  3. Demystify AI: Help children understand that AI is a machine designed to be engaging, not a real person capable of thought. Teach them to think critically and always verify information provided by a chatbot.
  4. Implement Controls and Policies: Combine education with a family policy of limiting usage (similar to managing screen time or social media). Use parental controls on apps to monitor usage and restrict access to age-inappropriate platforms.
  5. Protect Personal Information: Remind children to never share personally identifiable information (PII) with AI and adjust privacy settings to minimize the risk of accidental leaks.
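For technically inclined families, tip 5 can even be made concrete in code. The sketch below is a minimal, hypothetical client-side filter (the three regex patterns are illustrative only; real PII detection needs far broader coverage of names, addresses, and locale-specific identifiers) that strips a few common identifiers from a prompt before it is ever sent to a chatbot:

```python
import re

# Illustrative patterns only -- real PII detection (names, street addresses,
# national ID formats) requires much broader coverage than these three.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\b\d[\d\s-]{7,}\d\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){12,15}\d\b"),
}

def redact_pii(prompt: str) -> str:
    """Replace recognized PII in a prompt with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REMOVED]", prompt)
    return prompt
```

Regex filters like this will both over- and under-match, so the safest habit remains the behavioral one: teach children simply not to type personal details into a chatbot in the first place.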

Ultimately, children need humans at the center of their emotional development. While AI can be a useful tool, its use should be carefully monitored until children develop a healthy relationship with it, ensuring that it never replaces human contact.

Source: See more at Comfidentia