By Stuart Kerr, Technology Correspondent
Published: 27 July 2025
Last Updated: 27 July 2025
Contact: liveaiwire@gmail.com | Twitter: @LiveAIWire
Author Bio: About Stuart Kerr
If artificial intelligence is ever to match the flexibility, intuition, and adaptability of human cognition, researchers say it may need to mimic more than just adult logic. It might need to learn like a baby.
From Cradle to Code
For decades, AI models have been trained using massive labelled datasets and brute-force computation. But infants don’t require millions of examples to grasp the world. They learn from limited input, messy environments, and ambiguous feedback—yet still manage to master language, emotion, and problem-solving with astonishing efficiency.
New research is embracing this human developmental blueprint. At Université de Montréal, scientists used machine learning to evaluate brain maturity in newborns via EEG scans, potentially detecting developmental issues in just minutes. It’s a window into how real infant learning works—and how AI might follow.
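The Montréal team's pipeline isn't detailed here, but the generic recipe for this kind of analysis is well established: compute spectral features from short EEG epochs, then fit a supervised model to predict maturity. The sketch below follows that recipe on synthetic data; the band choices, window lengths, and regression model are assumptions for illustration, not the study's actual method.

```python
# Hypothetical sketch: predicting "brain maturity" (age in weeks) from EEG
# band power. Synthetic data stands in for real recordings; this is NOT the
# Université de Montréal pipeline, just the generic recipe.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
FS = 256                       # sampling rate in Hz (assumed)
N_EPOCHS, EPOCH_SEC = 200, 30  # 200 synthetic 30-second epochs

# Fake EEG: slow-wave amplitude shrinks as "age" increases
ages = rng.uniform(30, 44, N_EPOCHS)            # gestational age in weeks
t = np.arange(FS * EPOCH_SEC) / FS
eeg = np.array([
    (2.0 - 0.03 * a) * np.sin(2 * np.pi * 2.0 * t) + rng.normal(0, 1, t.size)
    for a in ages
])

def band_power(signal, fs, bands=((0.5, 4), (4, 8), (8, 13), (13, 30))):
    """Average spectral power in delta/theta/alpha/beta bands."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 4)
    return [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands]

X = np.array([band_power(epoch, FS) for epoch in eeg])
y = ages

model = Ridge(alpha=1.0)
scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
print(f"Mean absolute error: {-scores.mean():.2f} weeks (synthetic data)")
```

The appeal of this setup is speed: once the features are defined, scoring a new recording takes seconds, which is what makes a minutes-long bedside assessment plausible.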
Meanwhile, a study featured on Cognifit revealed that babies learn concepts by breaking them into subcomponents—a technique now mirrored in AI architectures. This kind of "concept decomposition" allows machines to form generalisations from sparse, noisy data.
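Cognifit's write-up doesn't include code, so the following is an illustrative assumption rather than the study's method: a toy version of concept decomposition in Python, where a new concept ("apple") is learned from a handful of noisy examples by reusing previously trained sub-component detectors.

```python
# Toy "concept decomposition": reuse sub-component detectors to learn a
# new concept from only a few noisy examples. Purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def noisy(attrs, n):
    """Generate n noisy observations of a binary attribute vector."""
    return np.clip(attrs + rng.normal(0, 0.3, (n, len(attrs))), 0, 1)

# Stage 1: plenty of data to learn low-level sub-concepts
# (attributes: [roundness, redness, has_stem, softness])
sub_names = ["round", "red", "has_stem", "soft"]
X_big = rng.uniform(0, 1, (500, 4))
sub_detectors = []
for i, name in enumerate(sub_names):
    y = (X_big[:, i] > 0.5).astype(int)          # ground truth per attribute
    sub_detectors.append(LogisticRegression().fit(X_big, y))

def decompose(X):
    """Describe raw input as sub-concept probabilities."""
    return np.column_stack([d.predict_proba(X)[:, 1] for d in sub_detectors])

# Stage 2: learn "apple" (round AND red AND has_stem) from six examples
apple, not_apple = [1, 1, 1, 0], [0, 0, 1, 1]
X_few = np.vstack([noisy(apple, 3), noisy(not_apple, 3)])
y_few = np.array([1, 1, 1, 0, 0, 0])
apple_clf = LogisticRegression().fit(decompose(X_few), y_few)

# The composed concept generalises to unseen noisy inputs
X_test = np.vstack([noisy(apple, 5), noisy(not_apple, 5)])
print(apple_clf.predict(decompose(X_test)))      # expect mostly 1s, then 0s
```

The point of the design is that the heavy lifting happens once, at the sub-concept level; the new concept only has to learn how to combine parts it already knows.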
Learning Through Limitations
In “Rewired Brains”, LiveAIWire explored how AI models are already assisting in trauma recovery by analysing early neural responses. But some researchers want to flip the equation entirely—asking how AI itself can be trained to learn in a human-like developmental arc.
That’s the goal of BabyVLM, a new vision-language model designed to mimic infant learning patterns. Trained on limited data with adaptive feedback, BabyVLM already outperforms larger models on low-data tasks.
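BabyVLM's actual architecture and training recipe aren't reproduced here; the sketch below only shows the general shape of a small vision-language model trained contrastively on a tiny batch of image-caption pairs, one common way to build low-data setups of this kind. The model sizes, toy data, and CLIP-style loss are assumptions for illustration, written in PyTorch.

```python
# Minimal sketch of contrastive vision-language training on tiny data.
# This is a generic CLIP-style setup, not BabyVLM's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
VOCAB, DIM, BATCH = 1000, 64, 8

class TinyImageEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(                 # 32x32 RGB image -> DIM vector
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, DIM),
        )
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

class TinyTextEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.EmbeddingBag(VOCAB, DIM)  # bag-of-words caption encoder
        self.proj = nn.Linear(DIM, DIM)
    def forward(self, tokens):
        return F.normalize(self.proj(self.embed(tokens)), dim=-1)

img_enc, txt_enc = TinyImageEncoder(), TinyTextEncoder()
opt = torch.optim.Adam(list(img_enc.parameters()) + list(txt_enc.parameters()), lr=1e-3)

# Toy "infant-scale" batch: 8 images paired with 8 five-token captions
images = torch.randn(BATCH, 3, 32, 32)
captions = torch.randint(0, VOCAB, (BATCH, 5))

for step in range(10):
    img_emb, txt_emb = img_enc(images), txt_enc(captions)
    logits = img_emb @ txt_emb.T / 0.07           # similarity matrix with temperature
    labels = torch.arange(BATCH)                  # matched pairs lie on the diagonal
    loss = (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels)) / 2
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"step {step}: contrastive loss {loss.item():.3f}")
```

In infant-inspired setups, the interesting part is less the loss function than the data regime: small, curated, and fed in a developmental order rather than scraped at web scale.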
Similarly, Devdiscourse reports on how infant-inspired models improve AI’s ability to adapt, transfer knowledge, and learn new tasks without retraining. Efficiency, it turns out, might lie in early developmental mimicry.
Ethical Nursery
But alongside scientific intrigue lies ethical concern. Should AI be embedded in developmental systems before we fully understand how those systems form? What happens when machines are raised with biased data, missing empathy, or shaped by corporate imperatives?
In “The AI Parent Trap”, we explored how parents are outsourcing everything from sleep training to storytelling to AI companions. But if we raise AI like children—and raise children alongside AI—we risk entangling both in cycles of influence we barely comprehend.
An arXiv paper on AI and child development calls for strict governance and transparency, particularly in systems used around infants. Researchers urge caution in mixing the fragile unpredictability of early childhood with the deterministic logic of machine code.
The Emergent Mind
Ultimately, the most compelling argument for infant-style learning isn’t efficiency or novelty—it’s emergence. Babies don’t just learn facts. They build mental models of the world. They experiment. They fail. They try again.
This gradual scaffolding of thought is exactly what AI struggles with. As LiveAIWire highlighted in "The AI Identity Crisis", human cognition is grounded in a continuous experience of growth and memory. Teaching machines to learn from birth may be the first step in giving them something that looks like a mind.
About the Author
Stuart Kerr is the Technology Correspondent for LiveAIWire. He writes about artificial intelligence, ethics, and how technology is reshaping everyday life.