This story is part of the 2025 TIME100. Read Jennifer Doudna’s tribute to Demis Hassabis here.
Demis Hassabis learned he had received the 2024 Nobel Prize in Chemistry just 20 minutes before the world did. The CEO of Google DeepMind, the tech giant’s artificial intelligence lab, got the phone call with the news at the last minute, after a failed attempt by the Nobel Foundation to find his contact information in advance. “I would have had a heart attack,” Hassabis quips, had he learned of the prize from the television. Receiving the honor was a “lifelong dream,” he says, one that “still hasn’t sunk in” when we meet five months later.
Hassabis received half of the award alongside a colleague, John Jumper, for the design of AlphaFold: an AI tool that can predict the 3D structure of proteins using only their amino acid sequences, something Hassabis describes as a “50-year grand challenge” in the field of biology. Released freely by Google DeepMind for the world to use five years ago, AlphaFold has revolutionized the work of scientists toiling on research as varied as malaria vaccines, human longevity, and cures for cancer, allowing them to model protein structures in hours rather than years. The Nobel Prizes in 2024 were the first in history to recognize the contributions of AI to the field of science. If Hassabis gets his way, they won’t be the last.
AlphaFold’s impact may have been broad enough to win its creators a Nobel Prize, but in the world of AI it is seen as almost hopelessly narrow. It can model the structures of proteins but not much else; it has no understanding of the broader world, cannot carry out research, and cannot make its own scientific breakthroughs. Hassabis’s dream, and the broader industry’s, is to build AI that can do all of those things and more, unlocking a future of almost unimaginable wonder. All human diseases will be a thing of the past if this technology is created, he says. Energy will be zero-carbon and free, allowing us to transcend the climate crisis and begin restoring our planet’s ecosystems. Global conflicts over scarce resources will dissipate, giving way to a new era of peace and abundance. “I think some of the biggest problems that face us today as a society, whether that’s climate or disease, will be helped by AI solutions,” Hassabis says. “I’d be very worried about society today if I didn’t know that something as transformative as AI was coming down the line.”
This hypothetical technology, known in the industry as Artificial General Intelligence, or AGI, had long been seen as decades away. But the fast pace of breakthroughs in computer science over the past few years has led top AI scientists to radically revise their expectations of when it will arrive. Hassabis predicts AGI is somewhere between five and 10 years away, a rather pessimistic view by industry standards. OpenAI CEO Sam Altman has predicted AGI will arrive within Trump’s second term, while Anthropic CEO Dario Amodei says it could come as early as 2026.
Partly underlying these different predictions is a disagreement over what AGI means. OpenAI’s definition, for instance, is rooted in cold business logic: a technology that can perform most economically valuable tasks better than humans can. Hassabis has a different bar, one centered instead on scientific discovery. He believes AGI would be a technology that could not only solve existing problems but also come up with entirely new explanations for the universe. A test for its existence might be whether a system could come up with general relativity using only the information Einstein had access to, or whether it could not only solve a longstanding conjecture in mathematics but theorize an entirely new one. “I identify myself as a scientist first and foremost,” Hassabis says. “The whole reason I’m doing everything I’ve done in my life is in the pursuit of knowledge and trying to understand the world around us.”

In an AI industry whose top ranks are populated mostly by businessmen and technologists, that identity sets Hassabis apart. Yet he must still operate in a system where market logic is the driving force. Creating AGI will require hundreds of billions of dollars’ worth of investment, dollars that Google is happily plowing into Hassabis’ DeepMind unit, buoyed by the promise of a technology that can do anything and everything. Whether Google will ensure that AGI, if it comes, benefits the world remains to be seen; Hassabis points to the decision to release AlphaFold for free as a symbol of its benevolent posture. But Google is also a company legally bound to act in the best interests of its shareholders, and consistently giving away expensive tools is not a profitable long-term strategy. The financial promise of AI, for Google and for its rivals, lies in controlling a technology capable of automating much of the labor that drives the more than $100 trillion global economy. Capture even a small fraction of that value, and your company could become the most profitable the world has ever seen. Good news for shareholders, but bad news for regular workers who may find themselves suddenly unemployed.
So far, Hassabis has successfully steered Google’s multibillion-dollar AI ambitions toward the kind of future he wants to see: one centered on scientific discoveries that, he hopes, will lead to radical social uplift. But will this former child chess prodigy be able to maintain his scientific idealism as AI reaches its high-stakes endgame? His track record reveals one reason to be skeptical.
When DeepMind was acquired by Google in 2014, Hassabis insisted on a contractual firewall: a clause explicitly prohibiting his technology from being used for military purposes. It was a red line that reflected his vision of AI as humanity’s scientific savior, not a weapon of war. But several corporate restructures later, that protection has quietly disappeared. Today, the same AI systems developed under Hassabis’s watch are being sold, via Google, to militaries such as Israel’s, whose campaign in Gaza has killed tens of thousands of civilians. When pressed, Hassabis denies that this was a compromise made in order to maintain his access to Google’s computing power and thus realize his dream of developing AGI. Instead, he frames it as a pragmatic response to geopolitical reality, saying DeepMind changed its stance after recognizing that the world had become “a much more dangerous place” in the last decade. “I think we can’t take for granted anymore that democratic values are going to win out,” he says. Whether or not that justification is sincere, it raises an uncomfortable question: if Hassabis couldn’t hold his ethical red line when AGI was just a distant promise, what compromises might he make when it comes within touching distance?
To get to Hassabis’s dream of a utopian future, the AI industry must first navigate its way through a dark forest full of monsters. Artificial intelligence is a dual-use technology like nuclear energy: it can be used for good, but it could also be terribly harmful. Hassabis spends much of his time worrying about risks, which generally fall into two buckets. One is the possibility of systems that could meaningfully enhance the capabilities of bad actors to wreak havoc in the world, for example by endowing rogue states or terrorists with the tools they need to synthesize a deadly virus. Preventing risks like that, Hassabis believes, means carefully testing AI models for dangerous capabilities, and releasing them only gradually to more users with effective guardrails. It means keeping the “weights” of the most powerful models (essentially their underlying neural networks) out of the public’s hands altogether, so that models can be withdrawn from public use if dangers are discovered after release. That is a safety strategy Google follows but that some of its competitors, such as DeepSeek and Meta, do not.
The second category of risks may sound like science fiction, but it is taken seriously inside the AI industry as model capabilities advance. These are the risks of AI systems acting autonomously, such as a chatbot deceiving its human creators, or a robot attacking the person it was designed to help. Language models like DeepMind’s Gemini are essentially grown from the ground up, rather than written by hand like old-school computer programs, and so computer scientists and users are constantly discovering ways to elicit new behaviors from what are best understood as highly mysterious and complex artifacts. The question of how to ensure they always behave in ways that are “aligned” with human values is an unsolved scientific problem. Early signs of misaligned behaviors, like strategic lying, have already been identified by researchers working with today’s language models. Those problems are only likely to become more acute as models get better. “How do we ensure that we can stay in charge of those systems, control them, interpret what they’re doing, understand them, and put the right guardrails in place that are not movable by very highly capable self-improving systems?” Hassabis says. “That is an extremely difficult challenge.”
It’s a devilish technical problem, but what really keeps Hassabis up at night are the political coordination challenges that accompany it. Even if well-meaning companies can build safe AIs, that doesn’t by itself stop the creation and proliferation of unsafe ones. Preventing that would require international collaboration, something that is becoming increasingly difficult as western alliances fray and geopolitical tensions between the U.S. and China rise. Hassabis has played a significant role in the three AI summits held by global governments since 2023, and says he would like to see more of that kind of cooperation. He says the U.S. government’s export controls on AI chips, intended to prevent China’s AI industry from surpassing Silicon Valley, are “fine,” but he would prefer to avoid political choices that “end up in an antagonistic kind of situation.”
He may be out of luck. As both the U.S. and China have woken up in recent years to the potential power of AGI, the climate of international cooperation, which reached a high watermark with the first AI Safety Summit in 2023, has given way to a new kind of realpolitik. In this new era, with nations racing to militarize AI systems and build up stockpiles of chips, and with a new cold war brewing between the U.S. and China, Hassabis still holds out hope that competing nations and companies can find ways to set aside their differences and cooperate, at least on AI safety. “It’s in everybody’s self-interest to make sure that goes well,” he says.
Even if the world can find a way to navigate safely through the geopolitical turmoil of AGI’s arrival, the question of labor automation will rear its head. When governments and companies no longer rely on humans to generate their wealth, what leverage will citizens have left to demand the ingredients of democracy and a comfortable life? AGI might create abundance, but it won’t dispel the incentives for companies and states to amass resources and compete with rivals. Hassabis admits he is better at forecasting technological futures than social and economic ones; he says he wishes more economists would take the possibility of near-term AGI seriously. Still, he thinks it is inevitable that we will need a “new political philosophy” to organize society in that world. Democracy, he says, “is not a panacea, by any means,” and might have to give way to “something better.”

Automation, meanwhile, is already on the horizon. In March, DeepMind announced Gemini 2.5, the latest version of its flagship AI model, which outperforms rival models made by OpenAI and Anthropic on many popular metrics. Hassabis is currently hard at work on Project Astra, a DeepMind effort to build a universal digital assistant powered by Gemini. That work, he says, is not intended to hasten labor disruptions; rather, it is about building the necessary scaffolding for the kind of AI that he hopes will one day make its own scientific discoveries. Still, as research into these AI “agents” progresses, Hassabis says, expect them to be able to carry out increasingly complex tasks independently. (An AI agent that can meaningfully automate the job of further AI research, he predicts, is “a few years away.”) For the first time, Google is also now using these digital brains to control robot bodies: in March the company announced a Gemini-powered android that can carry out embodied tasks like playing tic-tac-toe, or making its human a packed lunch. The tone of the video announcing Gemini Robotics was friendly, but its connotations were not lost on some YouTube commenters: “Nothing to worry [about,] humanity, we are only developing robots to do tasks a 5 year old can do,” one wrote. “We are not working on replacing humans or creating robot armies.”
Hassabis acknowledges that the social impacts of AI are likely to be significant. People must learn how to use new AI models, he says, in order to excel professionally in the future and not risk getting left behind. But he is also confident that if we eventually build AGI capable of doing productive labor and scientific research, the world it ushers into existence will be abundant enough to ensure a substantial improvement in quality of life for everybody. “In the limited-resource world which we’re in, things ultimately become zero-sum,” Hassabis says. “What I’m thinking about is a world where it’s not a zero-sum game anymore, at least from a resource perspective.”
Five months after his Nobel Prize, Hassabis’s journey from chess prodigy to Nobel laureate now leads toward an uncertain future. The stakes are no longer just scientific recognition, but potentially the fate of human civilization. As DeepMind’s machines grow more capable, as corporate and geopolitical competition over AI intensifies, and as the economic impacts loom larger, Hassabis insists that we could be on the cusp of an abundant economy that benefits everybody. But in a world where AGI could bring unprecedented power to those who control it, the forces of business, geopolitics, and technological power are all bearing down with growing pressure. If Hassabis is right, the turbulent decades of the early 21st century could give way to a shining utopia. If he has miscalculated, the future could be darker than anyone dares imagine. One thing is certain: in his pursuit of AGI, Hassabis is playing the highest-stakes game of his life.