Some young people even go further with “AI companions”. Apps like Replika or Character.ai are increasingly marketed to adolescents as playful, wise, and comforting “friends”. Several apps, including CrushOn.AI and DreamGF (an AI girlfriend chatbot), also play up the appeal of experimenting with romantic or sexual conversations.
“As with social media, these new ways of using AI raise big questions for parents about their ability to control the content their children have access to,” says Dr Gordon Ingram, a senior lecturer at RMIT Vietnam.
Social cognition and AI: What does research say?
Social cognition – the ability to perceive, process, and respond to social stimuli involving other human beings – is among the crucial skills that children must develop.
“It is built through real-world interactions and social modelling, where children learn to interpret facial expressions, take others’ perspectives, resolve conflicts, and regulate emotions”, explains Ms Vu Bich Phuong, a child and adolescent clinical psychologist and RMIT associate lecturer.
Young children may attribute human-like traits to AI chatbots, perceiving them as sentient or emotionally aware. This may lead to parasocial interactions, where children form one-sided emotional bonds with AI, similar to those they form with cartoon characters or celebrities. Unlike those figures, however, AI chatbots respond to and meaningfully interact with users. “This raises concerns about whether such interactions may come to substitute for real interactions with peers in children’s social development”, Ms Phuong says.
Research on children’s and young people’s use of AI tools tends to focus on how rapidly that use is growing. Yet, if children are spending more time interacting with AI, it remains an open question whether these interactions help them develop empathy, conflict resolution, and social risk-taking – skills vital for maintaining healthy relationships with other people.
Counter-intuitively, Dr Ingram suggests that AI interactions may actually be “too safe”: because chatbots build in safeguards on social interactions and always maintain a polite tone (at least by default), they may create unrealistic expectations of real-life social relationships.
Ms Phuong agrees, adding that when children are not exposed to challenging peer dynamics, they may become less tolerant of discomfort, less resilient in the face of rejection, or even more impulsive – traits linked to anxiety and social withdrawal.
“Given the explosion in children’s and young people’s parasocial interactions with AI tools, this is a topic in urgent need of research,” she says.
What should be the age restrictions for generative AI?
Social media platforms like Facebook and TikTok now officially prohibit users under 13 from creating an account, largely in response to concerns about pre-teens encountering inappropriate material or even being “groomed” by adults online. Australia recently raised the minimum age to 16 by law, and other countries, including New Zealand, may soon follow suit.
However, generative AI platforms lack parallel restrictions, despite offering similarly immersive experiences and content. Large language models (LLMs) and chatbots are easily accessed without meaningful age verification.
Should we apply a similar age threshold to AI? “Yes, and perhaps even more strictly,” says Ms Phuong. Unlike social media, she explains, AI companions simulate reciprocal conversations, which can exert a deeper psychological influence on impressionable children. Without regulation, children may misinterpret AI’s responses as morally or socially appropriate.