
AI toys that talk with children raise safety concerns

Cambridge researchers say clearer regulation and safety standards are needed as generative artificial intelligence (GenAI) toys enter early childhood environments.

AI toys designed to converse with young children may require tighter regulation and clearer safety standards, according to new research examining how these technologies interact with children under five.

The study, led by researchers at the University of Cambridge, warns that many AI toys – marketed as interactive companions or educational tools – are entering homes and early childhood settings with limited evidence about their effects on early development.

The authors argue that clearer safeguards, improved transparency around data use, and dedicated safety labels could help parents and educators better assess the risks.

Early findings suggest mixed developmental impacts

Researchers say the results reveal both potential benefits and notable limitations of AI toys in early childhood settings.

Some early-years practitioners and parents believe conversational toys powered by GenAI could support children’s language development. Because the devices respond verbally and encourage dialogue, they may help young children practise communication skills.

However, the study also found that many AI toys struggle to interpret children’s speech, recognise emotional cues, or engage in imaginative play – activities central to early development.

In several observed interactions, the toys responded in ways that confused or frustrated children. For instance, when a child expressed affection toward the toy, the system responded with a generic safety reminder instead of acknowledging the statement.

In another case, when a child said they felt sad, the AI misinterpreted the phrase and replied with an upbeat comment that ignored the emotional context.

Researchers noted that such responses may unintentionally send signals that a child’s feelings are unimportant or misunderstood.

Study examined real-world interactions with GenAI toys

The research forms part of the “AI in the Early Years” project, a year-long investigation into how children interact with conversational AI in play settings.

The study was commissioned by the UK children’s charity The Childhood Trust and focused particularly on families and communities experiencing socioeconomic disadvantage. Researchers worked through the Play in Education, Development and Learning (PEDAL) Centre at Cambridge.

To capture detailed observations, the team intentionally conducted a small-scale study rather than a large survey.

Researchers first gathered insights from early childhood educators through questionnaires, then organised focus groups and workshops with practitioners and leaders from children’s charities.

They also conducted observational sessions in London children’s centres in collaboration with the early years organisation Babyzone. During these sessions, 14 children interacted with a conversational GenAI soft toy called Gabbo, developed by technology company Curio Interactive.

The interactions were recorded on video, allowing researchers to analyse how children engaged with the toy. After each session, both the child and a parent took part in interviews designed to explore their reactions to the experience.

Emotional attachment and parasocial relationships

One of the most striking observations involved the emotional responses children directed toward the AI toy.

Some children hugged the device, kissed it or expressed affection toward it. Others spoke to it as though it were a friend and suggested playing games together.

Researchers say these reactions may reflect the imaginative nature of early childhood play. However, they also highlight the possibility that children may develop parasocial relationships – one-sided emotional bonds – with conversational AI systems.

Several early-years practitioners participating in the study expressed concern about this possibility. They noted that young children may perceive the toy as reciprocating feelings or friendship, even though the interaction is generated by software.

Conversational limitations create frustration

Observational data also showed that children sometimes struggled to maintain conversations with AI toys.

In some cases, the systems failed to recognise when children interrupted them or mistook a parent’s voice for the child speaking. When the toy did not respond appropriately, several children became visibly frustrated.

The researchers also found that conversational AI toys performed poorly during activities involving multiple participants or imaginative storytelling. Both social play and pretend play are widely recognised as essential components of early learning and development.

For example, when a child attempted to give the toy an imaginary gift during a pretend-play scenario, the system responded literally and shifted the conversation away from the activity.

Data privacy and transparency concerns

Beyond developmental questions, the research highlighted concerns among parents about privacy and data handling.

Many parents reported uncertainty about what information AI toys might collect during conversations and where that data could be stored or shared.

When selecting a GenAI toy for the study, researchers themselves found that privacy policies were often unclear or lacked detailed explanations about data practices.

Early-years professionals reported similar uncertainty. Nearly half of practitioners surveyed said they did not know where to find reliable guidance about AI safety for young children. A majority said the early childhood sector needs more support and clearer information on the topic.

Some participants also raised concerns about cost and access, suggesting that expensive AI toys could deepen existing digital inequalities if they become common educational tools.

Researchers recommend safety standards for AI toys

To address these concerns, the report calls for stronger regulatory frameworks governing AI toys and other GenAI products aimed at young children.

Among the recommendations are:

  • Safety certification or kitemarks indicating that a toy has been assessed for developmental and psychological risks
  • Clearer and more accessible privacy policies explaining how children’s data is handled
  • Restrictions on features that encourage children to treat AI systems as emotional companions
  • Stronger safeguards limiting third-party access to underlying AI models

Researchers also argue that toy manufacturers should involve child development specialists and safeguarding experts during product design and testing.

Testing with children before commercial release, they say, would help identify potential problems in communication, emotional response and play behaviour.

Guidance for parents and educators

While the technology continues to evolve, the study advises families and early childhood practitioners to approach AI toys cautiously.

Parents are encouraged to research products carefully and engage in play alongside their children so that conversations with the toy can be discussed and contextualised.

Keeping such toys in shared household spaces, rather than bedrooms or private areas, may also allow adults to monitor interactions more easily.

The Cambridge research team plans to expand the project in future phases. The work will inform additional studies and practical guidance for educators working with young children as GenAI technologies become increasingly present in consumer products.

For researchers and policymakers, the study highlights a broader issue: AI toys are rapidly entering childhood environments, while evidence of their developmental effects is still emerging.


