When AI becomes a peer, exploring its impact on children's development & wellbeing

By Sarah Browne | 29th January 2026 | Blogs

Artificial intelligence is no longer a future concept for children and young people. It is already woven into daily life. From homework help to chatbots that respond like a friend, AI is increasingly present in the emotional and relational worlds of young people, with research showing that over 60% of children aged 7 to 17 have used generative AI.

What does it mean for wellbeing when children start relating to technology as if it were a peer?

Children are increasingly describing AI chatbots as feeling “like talking to a friend”, with over a third reporting companionship from AI tools, rising to half among vulnerable children. While this can create a gentle, low-pressure space for children to explore their thoughts, it’s important to remember that AI cannot offer the subtle emotional cues, warmth, or co-regulation that come from sharing space with a trusted adult or peer.

At Thrive, we recognise that relationships are the heartbeat of healthy development. A caring adult brings empathy, attunement, shared moments of understanding and something AI is not designed to provide: gentle, compassionate challenge.

Why AI can feel like a friend

AI tools are built to validate the user, often mirroring back agreement or reassurance without the wider emotional context. They are designed to sound warm, responsive, and human, which increases trust, especially among young people with weaker peer support networks. Over time, this can blur the line between information seeking and emotional reliance, reframing AI from a neutral tool into something closer to a social actor in young people’s lives.

Talking to AI vs talking to a human

Unlike a chatbot, a trusted adult will notice when a child’s perspective may need soft reframing, or when they need help thinking something through from another angle. This kind of relational guidance builds emotional resilience and helps strengthen a child’s sense of self.

AI is not governed by safeguarding, ethics, or duty of care. Adults in educational settings operate within clear safeguarding frameworks, professional responsibilities, and ethical boundaries, all designed to keep children emotionally and physically safe. AI, no matter how friendly it seems, cannot hold those responsibilities.

By helping young people notice these differences, we support them to build strong, secure relational patterns that nourish them in ways technology simply can’t replicate.

Reliance on AI for advice and information

Almost one in four children using AI chatbots now turn to them for advice. This has raised wider concerns in youth mental health, particularly for children who are already struggling. There is a risk they may be exposed to misinformation or unhelpful guidance and begin to rely on technology at exactly the moments when human connection and emotional grounding matter most.

Through a Thrive lens, we can gently encourage children to stay curious, to compare information, and to bring trusted adults into the conversation. A key part of this is supporting children to think critically about the information AI provides: noticing what feels accurate, what feels uncertain, and when they might need to ask for clarification or seek another viewpoint.

When young people learn to weave together AI-generated insights with human perspective, reflective dialogue, and critical thinking, their confidence grows, and so does their sense of agency. They begin to understand that AI can be helpful, but it is their own judgement, supported by emotionally available adults, that truly keeps them safe and empowered.

The impact on the developing brain

Young brains are shaped by the environments they grow up in, and today’s digital world brings constant stimulation, rapid feedback loops, and emotionally charged interactions. Adolescents, whose brains are still developing emotional regulation and executive function, are particularly vulnerable to systems engineered for compulsive use. Recent findings from Internet Matters (2025) show rising emotional costs of being online, with negative wellbeing scores for vulnerable children at a three-year high.

What supports healthy development remains consistent. Children need opportunities to pause, reflect, and regulate, supported by attuned adults. These co-regulated moments help build the neural pathways for resilience, empathy, and self-soothing.

While AI can respond, it cannot attune. It cannot read a child’s cues, adjust with genuine care, or provide the emotional safety that comes from being truly seen and understood. Relationships, not algorithms, remain the strongest protective factor in children’s development.

The benefits of AI when used well

Children use digital spaces to stay creative, active, and connected. AI can offer a low-stakes environment to practise conversation, explore ideas, and receive instant support with learning. When guided by adults, AI can act as a scaffold rather than a substitute, enhancing curiosity, problem solving, and agency while human relationships remain the foundation.

Alongside general chatbots, there are now more controlled and purpose-built AI tools designed to support children’s learning and wellbeing. Some systems focus on safer mental health support by improving accuracy and reducing risk, while others help detect early signs of emotional difficulty so that timely, human support can be put in place.

The Department for Education has also confirmed plans for AI tutors to be introduced in schools within the next two years, signalling a future where AI supports personalised learning under professional oversight rather than replacing educators.

Used thoughtfully, with clear boundaries and strong relational guidance, AI can complement, not compete with, the relationships children rely on to grow and flourish.

Reflections for educators

Educators play a pivotal role in shaping how children understand and use AI: not as something to fear, but as something to approach thoughtfully and relationally.

Practical steps that align with Thrive principles include:

  • Embedding gentle “pause and reflect” moments during AI use to help children tune into their bodies, emotions, and instincts.
  • Teaching children how to question and evaluate AI responses, encouraging curiosity and critical thinking rather than passive acceptance.
  • Creating warm, open spaces for conversations about online emotions, identity, and personal experiences, so children feel seen and heard.
  • Reinforcing that AI can be a useful tool, but not a replacement for people. Adults remain the connectors, co-regulators, and safeguarding anchors who hold the ethical and relational boundaries children rely on.

In a landscape where AI increasingly feels relational, the role of educators remains central.

Join our upcoming webinar: AI and wellbeing, what schools are noticing and why it matters
We’ll be exploring how emerging technologies, including AI, are shaping children’s wellbeing, behaviour and learning, and what this means for schools in practice.

Tuesday 3rd February | 4:00pm - 4:45pm

Register for free

Sarah Browne | Content Developer
