
Is generative AI friend or foe for children’s social development?

With the growing accessibility of generative AI tools in Vietnamese schools and homes, Dr Gordon Ingram and Ms Vu Bich Phuong, RMIT Psychology lecturers, suggest there should be more discussions on the psychological effects of kids’ interactions with AI chatbots and virtual friends.

The rise of generative AI usage and AI companions in youth

Children and young people now spend more time online than ever before. Globally, teens are estimated to spend a large share of their day in front of screens, and many go online almost constantly. In Vietnam, 89 per cent of children and adolescents use the Internet every day.

Parents and teachers are still adapting to this massive increase in screen time and the social media influences on children’s lives that arise from platforms like TikTok, Instagram, and YouTube. But lately, new trends have quickly emerged, driven by developments in AI technology.

Youths are experimenting with ChatGPT to get help with homework, play games, solve social problems in their lives, or simply chat when feeling lonely. Some may prefer chatting with AI to talking with peers, especially when struggling with negative emotions.

(Image: Teenager staring at a laptop. Children and young people now spend more time online than ever before. Photo: Unsplash)

Some young people even go further with “AI companions”. Apps like Replika or Character.ai are increasingly marketed to adolescents as playful, wise, and comforting “friends”. Several apps, including CrushOn.AI and DreamGF (an AI girlfriend chatbot), also highlight the attraction of experimenting with romantic or sexual conversations.

“As with social media, these new ways of using AI raise big questions for parents about their ability to control the content their children have access to,” says Dr Gordon Ingram, a psychology senior lecturer at RMIT Vietnam.

Social cognition and AI: What does research say?

Social cognition – the ability to perceive, process, and respond to social stimuli involving other human beings – is among the crucial skills that children must develop.

“It is built through real-world interactions and social modelling, where children learn to interpret facial expressions, take others’ perspectives, resolve conflicts, and regulate emotions”, explains Ms Vu Bich Phuong, a child and adolescent clinical psychologist and RMIT associate lecturer.

Young children may attribute human-like traits to AI chatbots, perceiving them as sentient or emotionally aware. This may lead to parasocial interactions, where children form one-sided emotional bonds with AI, similar to cartoon characters or celebrities. More importantly, AI chatbots respond to and meaningfully interact with users. “This raises concerns about whether such interactions may come to substitute for real interactions with peers in children’s social development”, Ms Phuong says.

Research on children’s and young people’s use of AI tools tends to focus on how rapidly that use is growing. Yet, if children are spending more time interacting with AI, it remains an open question whether these interactions help them develop empathy, conflict resolution, and social risk-taking – skills vital for maintaining healthy relationships with other people.

Counter-intuitively, Dr Ingram suggests that, with safeguards on social interactions and a consistently polite tone (at least by default), AI interactions may actually be “too safe”, because they can create unrealistic expectations of real-life social relationships.

Ms Phuong agrees, adding that when children are not exposed to challenging peer dynamics, they may become less tolerant of discomfort, less resilient in the face of rejection, or even more impulsive – traits linked to anxiety and social withdrawal.

“Given the explosion in children’s and young people’s parasocial interactions with AI tools, this is a topic in urgent need of research,” she says.

What should be the age restrictions for generative AI?

Social media platforms like Facebook and TikTok now officially prohibit users from creating an account if they are under 13, largely following concerns about pre-teens encountering inappropriate material or even being “groomed” by adults on social media. Australia recently raised the minimum age to 16 by law, and other countries, including New Zealand, may soon follow suit.

However, generative AI platforms lack parallel restrictions, despite offering similarly immersive experiences and content. Large language models (LLMs) and chatbots are easily accessed without meaningful age verification.

Should we apply a similar age threshold for AI? “Yes, and perhaps even more strictly,” says Ms Phuong. This is because unlike social media, AI companions simulate reciprocal conversations, which can have deeper psychological influence on impressionable children. Without regulation, children may misinterpret AI’s responses as morally or socially appropriate.

(Image: Dr Gordon Ingram and Ms Vu Bich Phuong, L-R. Photo: RMIT)

Experts have urgently called for governments to regulate generative AI in schools. Non-profit organisations that review the suitability of emerging media and technology for children also recommend banning social AI companions for people under 18. Mechanisms for age verification have been discussed previously by RMIT experts.

“We are calling for a continuous effort to raise awareness among parents and educators about these age restriction regulations,” Ms Phuong urges.

In addition, while inappropriate content is easily flagged on platforms like YouTube or Facebook, the reporting mechanisms for many generative AI systems are not transparent for children or caregivers.

“AI providers need to make reporting mechanisms more visible and accessible, and researchers can help with this by conducting studies to analyse users’ concerns about using AI and their awareness of what to do if they find something disturbing,” Dr Ingram suggests.

Recommendations for parents, educators, and policymakers

To ensure AI supports rather than undermines children’s social development, Ms Phuong recommends that parents supervise and co-engage with their child during and after their interactions with LLMs or chatbots. Doing so can help parents understand their kids’ perception of AI-generated content, provide adequate and timely support, and ensure that human interaction is available alongside AI interaction, helping kids think critically about the output they receive from AI.

For educators, Dr Ingram recommends prioritising group discussion, group play and cooperative problem solving in class. Generative AI can be introduced in secondary school, but only within a collaborative, group-based setting.

The RMIT academics suggest policymakers mandate age-appropriate design of AI platforms to verify age and provide child-safe settings with content moderation. Reporting mechanisms to flag harmful content must be available and easily accessed.

More systemic support and funding for research are also needed to bring multidisciplinary teams together, including psychologists and technical experts, to conduct high-quality longitudinal and experimental studies on AI and youth development. Such initiatives will be necessary to develop national guidelines for ethical AI use in education and homes.

“As AI becomes a normalised part of children’s lives in Vietnam and around the world, we must tread carefully. Artificial intelligence can be a powerful tool for learning, but it is no substitute for the rich, natural intelligence that enables us to live in an emotional world of human relationships,” Dr Ingram says.

“On this International Children’s Day, may we turn off our screens and turn on our connection with our children. Let us raise a generation of children with empathy, resilience, and belonging – children who enjoy learning, growing, connecting with each other more than with machines – for their social health,” Ms Phuong concludes.

-----

Masthead image: Angelov – stock.adobe.com 
