This year, the Nobel Prizes in both physics and chemistry were awarded for AI achievements. What does this mean, and what impact will it have? Demis Hassabis offered his own views in this exclusive interview.
In October, DeepMind co-founder and CEO Demis Hassabis became one of three co-winners of the Nobel Prize in Chemistry for AlphaFold.
AlphaFold, an AI system, solved a problem the biology community posed 50 years ago: predicting the structure of every known protein.
Groundbreaking as it is, AlphaFold is only part of DeepMind's achievements. In the 15 years since its founding, DeepMind has become one of the most important AI laboratories in the world.
Although commercial considerations entered the picture after the acquisition by Google and the merger with Google Brain, the lab still focuses on the most complex and fundamental problems in science and engineering, with the ultimate aim of designing powerful AI that can imitate, or even replace, human cognitive abilities.
Less than 24 hours after winning the Nobel Prize, Hassabis gave an interview to Financial Times reporter Madhumita Murgia, discussing the major problems DeepMind will tackle next, the role of AI in scientific progress, and his outlook on the road to AGI.
Demis Hassabis at Google DeepMind headquarters in London
AI4Science’s next challenge
Progress on AlphaFold 3 points to DeepMind's next step in biology: understanding interactions within organisms, ultimately modeling entire pathways, and even building a virtual cell.
Meanwhile, through its spin-off Isomorphic Labs, DeepMind is entering drug discovery: designing new compounds, finding binding sites, and predicting these substances' properties, absorption, toxicity, and so on.
Isomorphic is already working with Eli Lilly, Novartis, and other companies on six drug R&D programs, which are expected to reach clinical milestones in the next few years. The hope is to dramatically shorten the time drug discovery requires and thereby help cure some diseases.
Beyond biology, Hassabis said he is also excited about work in materials design.
Last year, DeepMind published a paper in Nature introducing an AI tool called GNoME, which brought materials design to roughly the AlphaFold 1 level and discovered 2.2 million new crystals; the next step is to push toward the AlphaFold 2 level.
Paper: https://www.nature.com/articles/s41586-023-06735-9
In mathematics, AlphaProof and AlphaGeometry reached IMO silver-medal level this year. In the next few years, DeepMind will try to use AI to genuinely crack an important mathematical conjecture.
In the energy and climate fields, the GraphCast model, published in Science last year, can forecast the weather 10 days ahead with unprecedented accuracy in under a minute.
Paper: https://www.science.org/token/author-tokens/ST-1550/full
The underlying technology may also help with climate modeling, which matters greatly for work such as combating climate change and optimizing power grids.
Evidently, DeepMind's roadmap leans toward application and engineering practice, aiming to turn technology into work that affects the real world, rather than toward pure basic research.
On this point, Hassabis cautioned that protein folding was an exceptional, almost unrepeatable kind of grand challenge; one cannot demand that every problem carry that much weight.
Protein folding is so central and important that it is biology's equivalent of Fermat's Last Theorem. Unfortunately, few problems are both important enough and studied for long enough to count as a "grand challenge."
The Nobel Prize will be a watershed moment for AI
This year, the Nobel Prizes in Physics and Chemistry went to AI researchers in quick succession. It is an intriguing choice, though no outsider can say exactly why the committees decided as they did.
How does Hassabis interpret it?
He said it feels very much like a deliberate statement by the committees, and a watershed moment for AI: recognition that the technology is now mature enough to assist scientific discovery.
AlphaFold is the clearest example, while Hinton's and Hopfield's awards recognize more foundational, lower-level algorithmic work.
Hassabis said he hopes that, looking back in ten years, AlphaFold will have heralded a new golden age of scientific discovery across all these fields.
This raises an interesting question: with tools like AlphaFold, scientists no longer need to spend so much time and energy on prediction. Should they redirect that effort toward new fields of exploration, or even change how scientific concepts are learned?
It should be noted that AI systems are a unique new class of tool: they have capabilities of their own and so do not fit the traditional taxonomy of scientific instruments.
Although tools like AlphaFold can, for now, only make predictions, prediction is in a sense part of understanding: being able to predict itself confers a kind of understanding.
And when the predicted output is important enough, as a protein's structure is, the prediction is valuable in itself.
From a broader perspective, science contains many levels of "abstraction."
For example, the entire field of chemistry rests on physics, yet you don't need to master every physical principle, quantum mechanics included, to reason about compounds and understand chemistry at its own level of abstraction.
The same holds in biology: we can study life even though we still don't know how life evolved or emerged, and cannot even properly define what "life" is.
AI works similarly: the people building the programs and networks understand them at the "physics" level, but the predictions that come out are like emergent properties, which scientists can then analyze at their own level of abstraction.
AGI is approaching, and understanding matters
Whether in natural science or in artificial intelligence, "understanding" is crucial.
Artificial intelligence is an engineering discipline: the system must first be built before it can be studied and understood, whereas natural phenomena need no manufacturing; they simply exist.
But being engineered artifacts does not make AI systems easier to study than natural phenomena; they may well prove as hard to understand, take apart, and deconstruct as biological neural networks.
That difficulty is already apparent, but some progress has been made. For example, a dedicated field called "mechanistic interpretability" applies concepts and tools from neuroscience to analyze the "virtual brain" of an AI system.
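To make that concrete, here is a minimal sketch, in PyTorch, of one common mechanistic-interpretability technique: capturing a network's hidden activations with a forward hook and training a linear "probe" to test whether a concept can be read out of them. The toy network, data, and probe below are illustrative assumptions, not DeepMind's actual tooling.

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """A toy network standing in for the model under study."""
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(16, 32)
        self.out = nn.Linear(32, 2)

    def forward(self, x):
        return self.out(torch.relu(self.hidden(x)))

net = TinyNet()
acts = []  # captured hidden-layer activations

# Forward hook: records the hidden layer's output on every forward pass.
net.hidden.register_forward_hook(lambda mod, inp, out: acts.append(out.detach()))

x = torch.randn(256, 16)
concept = (x[:, 0] > 0).long()  # a toy "concept" to probe for
_ = net(x)

# Linear probe: a simple classifier trained on the captured activations.
probe = nn.Linear(32, 2)
opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(probe(acts[0]), concept)
    loss.backward()
    opt.step()

acc = (probe(acts[0]).argmax(dim=1) == concept).float().mean()
print(f"probe accuracy: {acc:.2f}")
```

If the probe reaches high accuracy, the concept is at least linearly encoded in that layer; the same logic, scaled up, underlies much interpretability work on large models.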
Hassabis is very optimistic about AI interpretability and believes major progress in understanding AI systems will come within the next few years.
AI can also learn to explain itself. Imagine pairing AlphaFold with a language-capable system so that it could explain what it is doing even as it makes its predictions.
Many leading labs are currently narrowing their exploration to focus on scaling Transformers. That is undeniably a promising direction, and will likely be a key component of the eventual AGI system, but DeepMind will keep pursuing exploratory, inventive research.
Indeed, DeepMind has arguably the broadest and deepest research bench anywhere for inventing the next Transformer, as part of its scientific heritage.
Such exploration is necessary in part to see how far current ideas can go, so that we know what still needs to be invented.
Pursuing new ideas and pushing exciting existing ideas to their full potential are both important: if you don't understand the absolute limits of your current ideas, you won't know what breakthroughs are needed.
LLMs' long context windows are a good example: the 2M-token context of Google's Gemini 1.5 Pro is an innovation no one else has yet managed to copy.
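For a sense of what that capability looks like in practice, here is a minimal sketch using the google-generativeai Python SDK; the model name string, file path, and prompt are illustrative assumptions, and the API key is a placeholder.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-pro")  # assumed model name

# A very large document -- say, an entire book or codebase -- passed in whole,
# with no chunking or retrieval step, relying on the multi-million-token window.
with open("large_corpus.txt") as f:  # hypothetical file
    corpus = f.read()

# Check how much of the context window the document would consume.
print(model.count_tokens(corpus).total_tokens)

response = model.generate_content(
    [corpus, "Summarize the key claims made across this entire document."]
)
print(response.text)
```

The appeal of the long window is architectural: instead of engineering a retrieval pipeline around a small context, the whole corpus fits in a single call.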
Google DeepMind London Office
Only by understanding AI can we have safe AGI
Hassabis, like many technology leaders, has predicted that AGI is 5 to 20 years away.
Reaching it by scientific method will mean more time, energy, and thought devoted to understanding AI, to analysis tools, and to benchmarking and evaluation, perhaps ten times the current investment.
That investment should come not only from technology companies but also from AI safety institutes, academia, and civil society. We need to understand what AI systems are doing, where their limits lie, and how to control and safeguard them.
"Understanding" is an important part of the scientific method, but it is missing in pure engineering. Engineering just looks on - does this approach work? If it doesn't work just try again, it's full of trial and error.
Science is what can be understood before anything happens. Ideally, this understanding means fewer errors. This is important with AI and AGI because when applying such a powerful technology, you want to make as few mistakes as possible.
Perhaps in a few years, as we approach AGI, a societal question will come to the fore: what values do we want these systems to have, and what goals should we set for them?
That is distinct from the technical question, which concerns how to keep a system on track toward its given goals; it does not help us decide what those goals should be.
A safe AGI system requires getting both the technical and the societal side right, and Hassabis believes the latter may be the harder to achieve.
Questions of goals and values will draw in the UN and geopolitics, even the social sciences and philosophy, and will require extensive discussion with every level of government, academia, and civil society.
Even if AGI is ten years away, that leaves little time to resolve these questions, so the discussion should begin now, bringing voices from many quarters and perspectives to the table.