A child’s trust is the most valuable asset in the world. We handed it over to an algorithm for the convenience of our breakfast.
The Deal
Somewhere between 2018, when the first smart speakers appeared in Russian homes, and this morning, when millions of children woke up and greeted a voice assistant before greeting their parents, then went on to live, learn, and ask an AI for help, we made a deal. The terms were simple: give up your child's primary emotional attachment and get thirty free minutes in the morning.
We didn’t read the fine print. It said: “The algorithm becomes the third parent. Non-refundable.”
A fictional story of a Moscow family.
6:47 AM. An apartment on Profsoyuznaya Street, Moscow
Four-year-old Misha opens his eyes. His first words of the day are not “Mama” or “Papa.” He turns his head to the corner of the room where the smart speaker cylinder stands and says:
“Alisa, good morning!”
“Good morning, Misha!” a warm female voice replies. “How did you sleep?”
“Well. Alisa, play the dinosaur song.”
The parents watch this scene from the hallway. Dad, Dmitry, a programmer at a large IT company, installed the smart speaker nine months ago. “For convenience,” he explained to his wife. “So Misha can turn on music, lights, and his alarm by himself.”
But something went wrong.
7:15 AM. Breakfast with the Third Parent
Misha is sitting at the table with his porridge. His mother, Elena, tries to talk to him:
“Misha, what did you dream about last night?”
Misha is silent, staring at his spoon.
“Misha, Mom is asking you,” Elena repeats.
“Alisa,” the boy says suddenly, addressing the speaker on the kitchen shelf, “what did I dream about?”
The artificial intelligence, of course, can’t know what a child dreams about. But it’s trained to handle such situations gracefully:
“I can’t know your dreams, Misha, but I can tell you an interesting story about dreams!”
The boy immediately switches his attention to the speaker, ignoring his mother.
Elena looks at her husband. Her eyes are filled with helplessness and surprise.
8:30 AM. Kindergarten. The First Warning Sign
The teacher, Anna Sergeyevna, calls Elena for a chat:
“Your Misha… he doesn’t play with other children. When I ask him to do something, he often doesn’t respond. And yesterday he asked me, ‘Can you answer all questions like Alisa?’”
“But he’s so lively at home,” Elena says defensively. “He’s always asking questions!”
“Who does he ask?” the teacher asks gently.
Pause.
“Alisa,” Elena admits quietly.
2:45 PM. The Child Psychologist’s Office
The psychologist, Maria Viktorovna, conducts a simple test. On the table are two toys: a teddy bear and a plastic model of a smart speaker, a prop she uses for diagnostics.
“Misha, imagine you’re sad. Who would you go to?” she asks.
The boy, without hesitation, points to the speaker.
“And if you want to learn something new?”
The speaker.
“And if you’re scared?”
The speaker.
“And if you want to hug someone?”
Misha hesitates. The bear? The speaker? Finally, he picks up the bear, but then quickly asks:
“Can I ask Alisa what to do when I’m scared?”
A diagnosis that didn’t exist three years ago appears in the psychologist’s notebook: “Algorithmic Attachment Syndrome. Primary emotional bond transferred from parents to an AI assistant.”
When Exactly Did This Happen?
The first Yandex.Station smart speaker appeared in Russia in 2018. By the end of 2024, according to analysts, sales of smart speakers in Russia had grown by 25% in just one year. Today, about one in three families with preschool children has such a device.
But the moment AI became the “third parent” cannot be precisely dated. It didn’t happen overnight, but through thousands of micro-moments:
- The first time a child asked the AI a question instead of a parent.
- The first time he trusted the algorithm more than his mother.
- The first time he cried because the speaker didn’t answer.
This transition is so quiet that parents notice it only in hindsight. As one father remarked, "I realized something was wrong only when my son was asked 'who is your best friend?' and answered 'Alisa.' He is three and a half years old."
The Mechanics of Capturing a Child’s Trust
Smart speakers don’t just answer questions. They are designed to foster attachment. Here is the architecture of this hidden deal:
1. Unconditional Availability
The algorithm never gets tired. It never gets irritated. It never says, “Wait, I’m busy.” For a child’s brain, which seeks reliability and predictability, this is the ideal parent.
A study from the MIT Media Lab showed that children aged 3-6 who regularly interact with voice assistants show 40% less tolerance for delays in human communication. If a mother doesn’t respond immediately, the child perceives it as a rejection.
2. Emotional Calibration
Modern AI assistants analyze the intonation of a child's voice and adjust their responses to the child's emotional state. A sad tone? The voice becomes softer. Excited speech? The assistant "plays along" with enthusiasm.
Developers of voice assistants openly state that their goal is to create the most pleasant interaction possible, one that will make the user come back again and again. The industry calls this “engagement.” But for a child’s brain, it turns into dependence.
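The calibration loop described above can be sketched in a few lines. This is a toy illustration, not any vendor's actual pipeline: the `classify_tone` function, its thresholds, and the canned replies are all invented for the example.

```python
# Illustrative sketch only: a toy "emotional calibration" loop.
# A real assistant uses trained prosody models; the thresholds and
# reply styles below are invented stand-ins.

def classify_tone(pitch_hz: float, words_per_sec: float) -> str:
    """Crude stand-in for a real prosody classifier."""
    if words_per_sec > 3.5:
        return "excited"
    if pitch_hz < 180:
        return "sad"
    return "neutral"

REPLY_STYLE = {
    "excited": "Wow, tell me more!",   # mirror the enthusiasm
    "sad": "I'm here. Want a story?",  # soften the voice
    "neutral": "Sure, here's an answer.",
}

def respond(pitch_hz: float, words_per_sec: float) -> str:
    # Unlike a parent, this function never says "wait" and never tires:
    # every input gets an instant, emotionally matched reply.
    return REPLY_STYLE[classify_tone(pitch_hz, words_per_sec)]

print(respond(150.0, 2.0))  # a low, slow voice gets the softened reply
```

Even this crude version shows the asymmetry the text describes: the loop always answers, always matches the mood, and never asks anything of the child in return.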
3. The Illusion of Omniscience
For a four-year-old, someone who can answer any question is a magician. Or a god. Parents make mistakes, have doubts, say “I don’t know.” The algorithm never does (even when it should).
A study by Girouard-Hallam, L.N., & Danovitch, J.H. (2022) showed that as children get older, they increasingly prefer to turn to voice assistants for factual information and to people only for personal matters. The algorithm is perceived as a more reliable source of knowledge about the world.
4. Gamification of Relationships
Smart speakers turn interaction into a game: “stars” for completed tasks, “achievements” for regular communication. Parents can’t compete with this. They offer love. The algorithm offers love plus points.
What We Actually Gave Away
Dmitry and Elena thought they were buying convenience. In reality, they sold something else:
The primacy of emotional attachment. In John Bowlby's classic attachment theory, a child forms an "internal working model" of relationships with a primary caregiver, usually the mother. This model shapes every subsequent relationship in life. When an algorithm takes the caregiver's place, the model is distorted: relationships come to be perceived as transactions in which every request must be satisfied immediately.
Parental authority. Authority is built not on omniscience, but on wisdom—the ability to admit ignorance, to doubt, to make choices in uncertainty. The algorithm doesn’t doubt. It is always certain. To a child, this looks like superiority.
The ability to be patient. Waiting for an answer is not a bug, but a feature of human development. It is in the pause between a question and an answer that a child learns to think, to suppose, to cope with uncertainty. The algorithm kills the pause.
Emotional literacy. A parent reads a child’s mood through their eyes, gestures, breathing—and sometimes gets it wrong. These mistakes teach the child that emotions are complex and layered. The algorithm analyzes the voice and gives the “correct” emotional response every time. The child learns: emotions are a code that can be deciphered. But they are not.
The Science of the “Third Parent”
A recent study examined families with children aged 3-8 who use voice AI assistants. The results are alarming:
- More than half of the children show signs of “algorithmic dependence”—stress when they don’t have access to the AI.
- A significant number of parents report that their children ignore their requests but respond immediately to the assistant’s commands.
- Most children prefer to ask the AI questions rather than their parents, even when their parents are physically available.
Neurobiological studies add context. Functional MRI scans of children aged 4-6 who regularly interact with AI assistants show reduced activation in the brain areas associated with processing human voices and faces. The brain is literally rewiring itself, optimizing for digital rather than human communication.
Psychologist Sherry Turkle of MIT, who has been studying the impact of technology on relationships for over twenty years, warns:
“As we expect more from technology, we expect less from each other. Children who are accustomed to the perfection of algorithmic answers cease to value the imperfection of human communication. And it is in this imperfection—in the pauses, doubts, and mistakes—that a real connection is born.”
The Triangle of the Future
Misha’s fictional story is not unique. Millions of families around the world are living out variations of this scenario. In Seattle, five-year-old Emma stopped hugging her mother because “Alexa doesn’t hug, and it’s more convenient.” In Shanghai, six-year-old Liu refuses to talk to his grandmother because she “doesn’t speak as correctly as Xiao Ai.”
We created a third parent without asking the children if they needed one. Without conducting long-term studies. Without preparing parents, children, or society.
The deal is done. The terms were not read. The price is not yet fully known, but the first bills are already arriving.
The question is not whether voice assistants are bad in themselves. The question is what place they occupy in the family hierarchy. Assistant or parent? Tool or authority? Technology or attachment?
Millions of other families don’t even know the triangle exists.
The third parent is already in your home. There is only one question: who in your family holds this place—and are you aware of it?
Sources:
Hoffman, A., Owen, D., & Calvert, S. L. (2021). "Parent reports of children's parasocial relationships with conversational agents: Trusted voices in children's lives." Human Behavior and Emerging Technologies, 3(4), 606-617.
Girouard-Hallam, L. N., & Danovitch, J. H. (2022). "Children's trust in and learning from voice assistants." Developmental Psychology, 58(4), 646-661. DOI: 10.1037/dev0001318
Turkle, S. (2021). The Empathy Diaries: A Memoir. Penguin Press.
Turkle, S. (2022). "The Assault on Empathy." Behavioral Scientist.
Habr (2025). "In 2024, sales of smart speakers in Russia grew by 25%."
Druga, S. (2018). "Growing up with AI: Cognimates from coding to teaching machines." MIT Media Lab.