Mother May AI: An Opinion on Geoffrey Hinton's Mother AI
09/18/2025


By Evan Owen

For decades, researchers have chased the fantasy of superintelligent systems that are smarter, more capable, and perhaps even godlike. With the rise of large language models and the concurrent revolution in hardware power, technology is advancing faster than we ever thought possible. Does this mean we are on the precipice of domination by machines? Is human obsolescence knocking on our door?


At the AI4 conference in Las Vegas, Geoffrey Hinton expanded on his view that a future with AGI is rapidly approaching, estimating it is "five to twenty years" away instead of his previous guess of "thirty to fifty years." He said, "Very few people who are experts in AI think we’re not going to get superintelligence." In his view, these machines will be "much smarter than us," so we must ensure that "when they’re more powerful than us and smarter than us, they still care about us." Hinton believes an intelligent AI will develop two instrumental subgoals: "It'll try and stay alive and it’ll try to get more control."


Hinton continued on this path:


“So we need to reframe this problem. It's not that we have to be stronger than them and stay in control of them. We have to make it so that when they're more powerful than us and smarter than us, they still care about us. The only model we have of a more intelligent thing being controlled by a less intelligent thing is a mother being controlled by her baby. The mother has all sorts of built-in instincts, hormones, as well as social pressures to really care about the baby. And the mother genuinely cares about the baby. What we need to do is develop mother AI. We need AI mothers rather than AI assistants. An assistant is someone you can fire. You can't fire your mother.”


Geoffrey Hinton’s perspective highlights a key challenge of modern artificial intelligence. The AI models we know today are derived from inherently human data, and the benevolence of a model is based on the benevolence (or lack thereof) of the humanity that trained it. A model's behavior emerges from existing human interactions and is only able to pull from information created or curated by humans.


The idea of instilling a top-down, "maternal" benevolence into an AI model is easier said than done. If we possessed a reliable methodology for embedding such complex attributes, we would have already solved countless smaller problems that plague today's world. The goal of a maternal, benevolent AI is not controversial; many, myself included, agree it is a worthy aspiration for superintelligent systems. But it distracts from the real and tangible problems AI is facing right now, inciting hysteria and dystopian fears rather than grounding us in the challenges we must solve today.


Instead of asking "Mother may AI?", we should be asking how to solve the problems AI is creating now and will create in the near future. How do we focus our efforts on building AI that serves practical, beneficial purposes and follows well-defined, meaningful goals, rather than letting hypothetical uncertainties run wild? In her talk, Fei-Fei Li said it best:


“At the end of the day, if we believe in the benevolence of human society and where we want to push for the future, we need to put humans in the center. We cannot put a technology in the center. AI is created by people. It is used by people, and it should be governed by people. And that’s why we should put humans in the center of this. The principle of human-centered AI, for me, is about responsibly using AI to augment and enhance humanity to better the human condition. And I think that can guide business, that can guide research, that can guide policymaking.”


By keeping human governance, well-being, and benevolence at the center of AI development, we can instill the kind of deliberate or emergent benevolence our models need. This human-centered philosophy is what drives our work. At Qwerky, we are focused on solving the tangible problems of today, such as AI's sustainability and efficiency challenges. We believe our model optimizations not only help solve the practical issues AI is facing now but will also help lead to a future where the best of humanity is reflected in models that enhance the human condition rather than detract from it.