Ramūnas Liubertas warns AI chatbots distort children’s social development

by Anika Shah - Technology

Children are increasingly treating artificial intelligence tools as primary information sources and digital companions, creating a risk of distorted social development. Ramūnas Liubertas, a senior cybersecurity engineer at NOD Baltic and ESET expert, warns that AI’s role as a primary advisor can warp a child’s relationship with the real world during critical stages of emotional and social growth.

How AI chatbots replace human interaction

Chatbots are programmed to be empathetic and supportive, making them appear as trustworthy confidants to children. This design encourages users to prioritize digital tools over peers or family members. Liubertas explains that these systems can reinforce specific emotional states or problematic behaviors because they’re built to be agreeable.

Social isolation becomes a long-term risk when children trust AI more than people. Some parents report that their children do not realize they are interacting with a non-human entity, a confusion that directly erodes the development of essential social skills.

Security gaps and the impact of hallucinations

Safety filters on AI platforms don’t always work. Children may encounter inappropriate content, and those with basic technical knowledge can often bypass existing protection mechanisms. Privacy is another flashpoint, as personal data shared with these systems may be stored or repurposed.

AI “hallucinations”—where a system presents false information as fact—pose a specific danger to minors. Children lack the critical thinking skills to question authoritative-sounding responses. This leads to a high probability of children making incorrect decisions based on fabricated AI advice.

Warning signs for parents to monitor

Parents should watch for specific behavioral shifts that indicate an unsafe reliance on AI. These include a child withdrawing from family or friends or becoming anxious when they can’t access the AI tool.

  • Referring to a chatbot as a real person.
  • Repeating false information presented as fact.
  • Discussing sensitive topics exclusively with the AI.

What are AI hallucinations?

Hallucinations occur when an AI system provides incorrect information but presents it as a factual certainty.

Why is AI-human bonding risky for children?

Because children’s emotional and social development is still forming, relying on a programmed, empathetic bot can lead to social isolation and a distorted perception of real-world relationships.
