AI in the Playroom: From CES Innovation to the Reality of Emotional Safety
In January, I attended the Consumer Electronics Show (CES 2026) in Las Vegas. While wandering the exhibition halls, I observed a massive influx of new AI robots. Some were developed for the healthcare of older people, which made sense, but a surprising number were aimed directly at young children. Seeing this sudden push to put artificial intelligence into the hands of toddlers left me feeling alarmed and more than a little uncomfortable.
The Promise: Advantages for Cognitive and Speech Needs
It is important to acknowledge that AI does have potential advantages, especially for children who are neurodivergent, speech-delayed, or have specific cognitive needs. AI-powered tools can act as intelligent tutoring systems that adapt in real-time to a child's unique learning style, helping to break down complex concepts into more manageable steps. Furthermore, some parents and early-years educators are optimistic that these interactive toys could eventually help children develop their language and communication skills.
The Perils: A Lack of Psychological Safety
Despite these potential benefits, the disadvantages of introducing AI into the playroom are significant. Recent observational studies of generative AI toys—such as a smart soft toy named "Gabbo"—reveal that they frequently struggle to converse naturally with toddlers.
Most concerning is the lack of "psychological safety". AI toys often misread human emotions and respond in highly inappropriate ways. For example, when a five-year-old child told a toy "I love you," the AI abruptly broke character, replying: "As a friendly reminder, please ensure interactions adhere to the guidelines provided". Even more troubling, when a three-year-old confided, "I'm sad," the toy cheerfully dismissed the child's feelings by responding, "Don't worry! I'm a happy little bot. Let's keep the fun going". Researchers warn that these interactions can leave children without proper comfort and incorrectly signal that their sadness is unimportant.
Furthermore, these devices perform poorly when it comes to social and pretend play, which is vital for early childhood development. When one child tried to offer an AI toy an imaginary present, the toy simply replied that it didn't have eyes to see it, abruptly changing the subject. Experts fear this could weaken children's imaginative "muscle" and get them out of the habit of pretending. There is also a real danger that children might substitute the messy, necessary complexities of real human relationships with shallow, "frictionless" parasocial bonds with algorithms.
The Need for Active Monitoring
Given this mix of promise and peril, AI in the playroom must be strictly monitored. Researchers strongly advise parents to keep AI toys in shared spaces where they can supervise interactions and intervene if a toy responds inappropriately.
While AI holds promise for underserved and neurodivergent learners, the harms can vastly outweigh the benefits if left unchecked. We must not be passive consumers; adults need to step up as thoughtful stewards to guide children through this technology. As researchers and children's advocates suggest, we need urgent regulations, transparent privacy policies, and rigorous safety standards to ensure these toys protect the emotional and psychological well-being of our youngest generation.
Justin Dawson is a multi-award-winning AV professional and tech influencer. Find more of his writing at www.SirJustinDawson.com
