This transcript has been edited for clarity.
Eric J. Topol, MD: Hello. This is Eric Topol for Medicine and the Machine on Medscape. I have been looking forward to having this conversation with Demis Hassabis for many months, if not years. I look upon Demis as the leading force for artificial intelligence (AI) in the world.
He was a chess prodigy at age 4 and became a chess master at age 13. He was admitted to Cambridge University at age 15 but took a gap year to develop games. He majored in computer science and then earned a cognitive neuroscience PhD at University College London. He started DeepMind Technologies in 2010, which now has about 1000 research scientists and engineers, as well as at least 1000 published papers. Of note, among the AI community, Dr Hassabis is the most prolific author of papers published in Nature and Science.
His mission: "To solve intelligence, shoot for the stars, not be distracted with the practical stuff, [to develop] generalized algorithms relying on reinforcement learning, human-level intelligence across all cognitive tasks, not the narrow stuff."
I hope that's a reasonable summary. Welcome.
Demis Hassabis, PhD: Thanks for having me.
Topol: I want to get into three areas. Let's start with games, your first big foray, building on your younger-age endeavors. This was an interesting direction — a safe sandbox, as you've called it. Without being told the rules of the games, you would move forward with them. AlphaGo in 2016 was a biggie; the game of Go has about 10^170 possible positions. Our Medscape audience may not know the game of Go, but it is ancient and popular. Some 200 million people watched AlphaGo take on the world champion, Lee Sedol, and especially move 37. Can you tell us about that?
Hassabis: Games have been a huge part of my life since I can remember. I grew up playing chess and was captain of various junior teams in England. It was chess that first drew my attention to how we think. I was trying to improve at chess as a promising England chess junior. And of course, you're trying to improve your own decision-making, your own thought processes, your own planning, all these amazing things that chess teaches you. That made me reflect on the nature of thinking itself. What was it? How were we coming up with moves? How do we come up with plans and ideas? What in the brain does that?
Games became my introduction to programming. When I discovered computers, I taught myself how to program on a ZX Spectrum computer, which was huge in the United Kingdom when I was about 8 years old. I fell in love with the computer as this incredible machine that, even back then, I could intuitively understand would be a potentially magical extension of your mind if you could program it in the right way. Then, my love of games and computers naturally combined into designing video games, which was my first career. AI was a big part of that.
Probably the most famous game I wrote professionally was Theme Park, when I was around 17 years old. It sold millions of copies. The cool innovation in that game was the AI. In Theme Park, you basically designed your own Disney World, and then thousands of little people came into your theme park and played on the rides. Depending on how happy they were, you could charge them more for their hamburgers, sweets, and drinks. There was a whole economics model underlying it. For its time — this was in the early 1990s — it was a revolutionary game. I realized that it was popular because every player had a different, unique experience. The game's AI adapted to the way the player was playing the game; no person's game would be the same as another person's game. That stuck with me.
Around that age I decided that my whole career was going to be about advancing AI. I believed we would develop a better understanding of our own minds by trying to build artificial general intelligence and then comparing its capabilities to what we know about the human mind. I took that further with my PhD, studying the brain, specifically the hippocampus, memory, and imagination. I was fascinated by how the brain works, but I wanted to get inspiration from the brain about algorithmic and architectural ideas for AI.
All those things came together in 2010 when we decided to start DeepMind. It's hard to remember now because AI is a popular buzzword now, but in 2010, nobody was talking about AI. In the investment world, we could barely scrape two pennies together for it. It's incredible to see what's happened in the past decade. DeepMind provided the third use of games in my life — as a training ground, proving ground — for AI systems, a convenient testbed.
That came to fruition when AlphaGo beat the world champion at Go, which was a longstanding Mt. Everest problem in AI. We did it in a unique way, using a learning system that learned how to play Go from first principles using reinforcement learning, playing against itself millions of times with no human knowledge programmed into it. So it was able to come up with its own original ideas, including this move 37 that you mentioned, which was a revolutionary idea in Go. Even though we had played as a species — the game was invented 3000 years ago, so it had 3000 years of history — no one had thought to play that type of move before in the history of Go.
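The self-play idea described here can be sketched in miniature. The toy below is not AlphaGo's actual method (AlphaGo combined deep neural networks with Monte Carlo tree search); it is a tabular Q-learner that masters the simple game of Nim purely by playing against itself, with no strategy programmed in. All names and parameters are illustrative.

```python
import random

def legal_moves(n):
    """In this Nim variant you may remove 1-3 sticks; taking the last stick wins."""
    return [m for m in (1, 2, 3) if m <= n]

def train(episodes=20000, eps=0.3, alpha=0.5, seed=0):
    """Self-play Q-learning: one table plays both sides of the game."""
    rng = random.Random(seed)
    Q = {}  # Q[(sticks, move)]: value of `move` from the mover's perspective
    for _ in range(episodes):
        n = 21
        while n > 0:
            moves = legal_moves(n)
            # Epsilon-greedy: mostly exploit the current policy, sometimes explore.
            m = (rng.choice(moves) if rng.random() < eps
                 else max(moves, key=lambda a: Q.get((n, a), 0.0)))
            nxt = n - m
            # Negamax target: win now, or minus the opponent's best value next.
            target = 1.0 if nxt == 0 else -max(
                Q.get((nxt, a), 0.0) for a in legal_moves(nxt))
            q = Q.get((n, m), 0.0)
            Q[(n, m)] = q + alpha * (target - q)
            n = nxt
    return Q

Q = train()

def best(n):
    """Greedy move under the learned values."""
    return max(legal_moves(n), key=lambda m: Q.get((n, m), 0.0))
```

After training, the agent has rediscovered Nim's winning strategy (leave your opponent a multiple of 4 sticks), knowledge it was never given.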
Topol: It's amazing. Then you moved through AlphaGo Zero to the most recent MuZero, where you basically can cut across Go, chess, Atari, and Shogi. Tell us about MuZero.
Hassabis: MuZero is the latest version of our AlphaGo/AlphaZero series. What's unique is that AlphaGo and AlphaZero play chess and Go at higher than world champion level. They learned these games from first principles — that is, with no knowledge about them, just playing against themselves and effectively forming their own ideas about the motifs of the game.
But the thing about board games, even complicated ones like chess and Go, is that the rules are relatively simple and they're specified. They're given to the program. In a computer game, the transition matrix between different states — if I make an action, what's the next state of the world going to look like? — is much more unpredictable. You have to model the pixels on the screen. There isn't a simple, rules-based transition matrix. The big advance of MuZero over the other programs is that it can learn the dynamics, let's call it, of the world it finds itself in and then use that model to improve itself through playing and experiencing that world.
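The idea of learning the transition dynamics rather than being given them can be shown in a tabular toy. MuZero's real dynamics model is a neural network operating on learned latent states; everything below is an illustrative sketch, not DeepMind's code.

```python
import random

class HiddenEnv:
    """A world whose rules the agent is never told: a 7-cell corridor
    with a reward at the far end."""
    def __init__(self):
        self.pos = 0
    def step(self, action):                       # action in {-1, +1}
        self.pos = min(6, max(0, self.pos + action))
        return self.pos, (1.0 if self.pos == 6 else 0.0)

rng = random.Random(0)
env, model, state = HiddenEnv(), {}, 0

# 1) Learn a dynamics model purely from experienced transitions.
for _ in range(500):
    a = rng.choice((-1, +1))
    nxt, r = env.step(a)
    model[(state, a)] = (nxt, r)   # deterministic world: last sample suffices
    state = nxt

# 2) Plan entirely inside the learned model (value iteration);
#    the real environment is never consulted again.
V = {s: 0.0 for s in range(7)}
for _ in range(100):
    for s in range(7):
        V[s] = max(r + 0.9 * V[n]
                   for (s0, a), (n, r) in model.items() if s0 == s)

def best_action(s):
    """Greedy one-step lookahead through the learned model."""
    return max((-1, +1),
               key=lambda a: model[(s, a)][1] + 0.9 * V[model[(s, a)][0]])
```

The agent never reads the environment's rules; it acts, records what happened, and then plans inside its learned model, which is the essence of model-based approaches like MuZero.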
In theory, the big breakthrough was that we could then combine our work on board games with our work on computer games. Playing classic Atari games — Space Invaders, Pong — was our first big breakthrough in 2013-2014. Our deep reinforcement learning system could master those games just from the pixels on the screen and being told to maximize the scores, and not being told anything about the controls or the rules or how to get points; it would have to discover that for itself from first principles. That was the first big proof point in the whole AI industry of a learning system that could scale to something impressive and challenging for humans.
With MuZero, we've almost come full circle and built a system that can now play pretty much every game that we have ever tried and individually cracked. Of course, we are after generality. You can see that with the evolution of our programs. AlphaGo, for example, only played Go, and it needed some human games to learn from to begin with, to bootstrap itself. Then AlphaGo Zero removed the need for human games, so it just played against itself starting from random. AlphaZero, the next version, could play any board game — chess, Go, Shogi, anything you give it — and now MuZero includes computer games. You can see that we try to get to world champion–level performance with a system, and then we try to remove from that system anything that might be specialized to that particular domain so that it becomes more and more general.
Topol: When you started DeepMind in 2010, nobody talked about deep learning. That didn't come about until maybe 2015 or so. You were prescient. Games were a warmup for the big stuff. Before I get into the protein structure story, I want to get a bit of perspective from you on this category, which I'll call language and images, those tasks. You have worked on AlphaCode, Ithaca, Gopher, and Gato. These are big efforts, in parallel to your game and life-science work, alongside the other entities working in this space, such as OpenAI with GPT-3 and -4. You also have Flamingo. What's your sense about this area?
Hassabis: It's obviously one of the most exciting growth areas right now, these large models. Sometimes they're called foundation models. One of my investors, many years afterwards, asked, "Did you name it DeepMind after deep learning?" And I said, "Yes. You really only realized that now?" But in 2010, of course, nobody knew what deep learning was. It had been invented in academia by Geoff Hinton and colleagues — a few of them are now at DeepMind — but no one had heard of it in industry at that point.
Now people have figured out how to scale these models to massive size with transformers, a new version of deep learning. They can be built with up to a trillion parameters, and we're going to see even bigger models than that. With models of that size, one can actually almost read the entire internet.
For 30 years now, billions of users have been putting unbelievable amounts of information on the internet. Most of it is probably nonsense, but there are a lot of facts there, if you can ingest them all. These systems are relatively inefficient. Certainly, the human brain is many, many orders of magnitude more data efficient. That remains a challenge, but even still, these brute-force methods and large models are making huge progress, initially on language understanding and language production (text). But very rapidly it's going to become multimodal. We're seeing the beginnings of that with image and text.
Of course, we have our own state-of-the-art versions of these, as do other companies, including Google. Most of the big research companies and organizations now have their own versions of these models. Where do they go next? In my opinion, they still don't really understand what they're saying. They're quite clever at regurgitating and averaging things, and they can sound sensible for a reasonably long conversation. But they still don't really understand the nature of the world. They don't have models of the physics of the world, or a theory of mind about self and others. They are slightly strange systems. The question is whether continued scaling will be enough on its own, or whether we will need more big breakthroughs like AlphaGo or transformers. This is a hotly debated topic. There are probably not many more needed, but I believe we still need some big innovations to get us to human level.
Topol: You believe that will happen?
Hassabis: I believe eventually that will happen. If you study neuroscience, there's nothing seemingly noncomputable in the brain. I've talked to people like Roger Penrose many times about this. He believes that there's some quantum effect, but as far as neuroscientists can tell, nothing quantum or nonclassical has been shown to be going on in the brain. If that's the case, then we are very sophisticated Turing machines. So are computers. So there must be some way to potentially mimic a lot of those capabilities.
Topol: I want to zoom in on digitizing biology and protein structure, going back 50 years to when Christian Anfinsen won the Nobel Prize in Chemistry. He said that someday we would be able to predict the 3D structure of proteins from the amino acid sequence. And now you've done it. It may be the most important life-science breakthrough in decades. This started with AlphaFold back in 2016. Tell us the story because I'm blown away by it.
Hassabis: It's definitely the most important, impactful thing we've done to date. It's probably also been the most difficult project we've done so far and the most complex system we've produced.
Protein folding is about understanding the 3D structure of proteins, which underpin all of life. Every function in your body is supported by proteins, and their 3D structure governs their function in large part. You start with the amino acid sequence string, the genetic sequence of the protein. It's almost like a puzzle — what's the 3D output going to look like? I've had my eye on this problem for a long time.
I think of it as the biology equivalent of Fermat's Last Theorem. It's that exciting. Christian Anfinsen sounds a bit like Fermat, with a throwaway comment in his Nobel lecture: essentially, this should be possible. And that starts off a whole field. He doesn't say how it should be done. He just says, "In theory, it's possible."
I'm intrigued by those kinds of problems. The other reason we put so much effort into it and picked that problem first is that if it could be cracked, it should unlock whole new branches of life-science research. And I believe it's done that already within less than a year. I first came across the problem at college at Cambridge, because one of the friends in my close circle was obsessed with it. He still works as a structural biologist at the Laboratory of Molecular Biology (LMB) in Cambridge and continues to work on this. He used to talk about it at every opportunity: if we could solve protein folding, it would unlock everything, drug discovery and so on. That stuck in my mind. It's an intriguing problem, and I thought it would be well suited to AI one day. I keep a list of interesting problems that I want to tackle one day, and this stayed in the back of my mind.
It's been fun in the past couple of years. We've had an amazing time in science, not just with AlphaFold but applying AI to all sorts of interesting scientific problems and ticking them off one by one. But this was top of my list. It has been the purpose of DeepMind all along. Of course, we proved ourselves on games — that was the most efficient way to develop our algorithms. But it was always a means to an end. We were not interested in winning the games in and of themselves, although that was a great achievement in AI. In the end, we were trying to use games to develop general algorithms that could then be translated to real-world problems for huge impact. That could include industrial problems or commercial problems.
We do a ton of work with Google. Almost every Google product you use now has some DeepMind technology in it. But the real passion for me was applying it to massive scientific challenges, to use AI to accelerate scientific discovery itself. What's been fun and gratifying in the past year or two is that we finally got to the point where our systems are powerful and sophisticated enough for that to happen. AlphaFold is our first example. We started the project in 2016, almost the day after we got back from the match against Lee Sedol in Seoul.
We won that match 4-1, and move 37 was mind-blowing. For people who are interested in that event, there's an award-winning documentary about it, AlphaGo. Take a look if you want to understand the human story behind that process, which is also quite interesting.
We won, and I was thinking, What's next? We had the ingredients ready to tackle a problem such as protein folding, and the final piece came from games. I first came across the problem in the mid-1990s; the second time was during my postdoc at MIT in 2009, just before I started DeepMind, when a citizen-science game called Foldit came out.
When I was doing my PhD and academic work, I was still intrigued by the idea of creating a game that gamers have fun playing but in which they are actually doing useful science, sort of accidentally, collaterally, in the background. That would be amazing. I still believe that idea has more to run, but I think Foldit is probably the best example of it so far.
For those not familiar with Foldit, it was like a puzzle game — almost like turning protein folding into a Tetris game. You would make moves, bend the backbone of the protein, for example, and then it would give you a score, which is the energy function of the protein. A few amazing gamers, although they weren't biologists, solved the structure of a couple of pretty important proteins, and they published it.
Hassabis: When I looked back on this in 2016, when we were ready to pass go on the project, I was thinking about what we had done with Go. We had mimicked the intuition of the Go masters. The Go masters are incredible. They've played Go since they could walk. It's played obsessively in Asia — Korea, China, and Japan. If you have the talent for it, you go to Go school. We managed to mimic their intuition about the game of Go with AlphaGo.
I thought that whatever was going on in the Foldit gamers' minds with that pattern matching, when they explained what they were doing and somehow were making the right decisions, we should be able to mimic that intuition in an AI system as well. That was the heart of the insight behind why I took on that project.
There was also good training data from the Protein Data Bank (PDB). The past 30 or 40 years of experimental biology had produced about 150,000 structures. That's still relatively small for training AI systems, but it's probably enough to get going. In the end, to solve this, we had to get the system to produce its own predictions and then feed those predictions back in as new training data, because there wasn't quite enough real data.
The other important thing about the problems we tend to pick is that there must be a clear metric that you can optimize against. In protein folding, it's the energy of the system and also the error in the predicted positions of the atoms, so it's very clear whether you're making progress. We had clear goals for hill climbing and making our system better.
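That hill-climbing-against-a-clear-metric pattern looks roughly like this in miniature. A hypothetical toy energy function over a list of angles stands in for the vastly more elaborate losses used in real structure prediction; every name here is illustrative.

```python
import math
import random

def energy(angles, native):
    """Toy stand-in for an energy/error metric: 0 when every angle
    matches the 'native' conformation, higher otherwise."""
    return sum(1 - math.cos(a - t) for a, t in zip(angles, native))

def hill_climb(native, steps=5000, seed=0):
    """Start from a random conformation, propose small local perturbations,
    and keep a proposal only when it lowers the energy."""
    rng = random.Random(seed)
    angles = [rng.uniform(-math.pi, math.pi) for _ in native]
    e = energy(angles, native)
    for _ in range(steps):
        i = rng.randrange(len(angles))
        proposal = list(angles)
        proposal[i] += rng.gauss(0.0, 0.3)       # small local perturbation
        p = energy(proposal, native)
        if p < e:                                # keep only improvements
            angles, e = proposal, p
    return angles, e

native = [0.5, -1.0, 2.0, 0.0]
folded, final_energy = hill_climb(native)
```

Because the metric is explicit, every proposed perturbation can be scored immediately and kept only if it improves the energy, which is what makes such problems well suited to optimization.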
Finally, I should mention the CASP (Critical Assessment of protein Structure Prediction) competitions, which are like the Olympics of protein folding. This is a well-run competition, organized by John Moult and his colleagues, that has been going on for almost 30 years. It is a great benchmark.
Topol: This was AlphaFold 2, which was published in July 2021, with atomic accuracy of less than one angstrom. I work with a lot of colleagues in structural biology. They've spent years trying to determine the structure of a protein, and many times they never solve it. Not only do you produce confidence measures, but anyone can put in their favorite protein and see its predicted structure in seconds, and users can give feedback. You also linked up with the European Bioinformatics Institute (EMBL-EBI). It's open-source and it's free.
There are users from every country in the world now — 500,000, maybe a million. This is like going from 0 to 60 mph in less than 1 nanosecond. You're going from 1 million to 100 million proteins, to every protein, to any model organism. It's mind-blowing. And by the way, in 2021 it was named Breakthrough of the Year by Science and Method of the Year by Nature Methods. Wow. You can also predict RNA structure and gene expression with these deep learning tools. It has a lot of relevance in medicine, whether for neglected diseases, SARS-CoV-2 virus biology, or antibiotic resistance. You're shaking up the world of life science and medicine.
Hassabis: We hoped it would have an impact, but there's no way we could have predicted that it would be a sea change. And we've only just begun. It's hard sometimes to understand the full ramifications, because obviously we're not exactly in that domain ourselves, although a couple of people on the team are. It's a hugely multidisciplinary team, by the way. It's not just machine learners and engineers; it's also biologists, biophysicists, and chemists. One of the things we specialize in at DeepMind is bringing together truly multidisciplinary research teams, and that's what was required to create something like AlphaFold.
We've been giving talks at some of the biggest "cathedrals" in molecular biology. For biologists who use the tool, it's as simple as typing in a Google search. It's a Google search for proteins. Of course, we teamed up with the amazing EBI folks in Cambridge; they already host a lot of the biggest databases, such as UniProt. They were the perfect partners to host all of this data and do it super-professionally. We realized that if we did that, rather than building our own tool, it would plug directly into the main vein of biology researchers so they could just use it as another one of the standard tools that they're already familiar with. That all worked out amazingly well and is one of my most fruitful collaborations.
But you're right. We effectively solved this problem over the summer of 2020, during CASP14, and the results were announced at the end of 2020. Then we published the methods, and all the predictions for the human proteome and 20 other model organisms, in the summer of 2021. This is lightning speed for science, as you know. Because it's a computational tool, it's amazing to see how fast it's been adopted into biologists' workflows. When someone invents a new, amazing technique like CRISPR or optogenetics, we've seen in the past that it still takes maybe 4 or 5 years for people to get trained in that new way of doing things, build their labs in the right way, and figure out how to use it. But with a computational tool, it's instantaneous.
We made that breakthrough in the summer of 2020, and then in December we folded the whole human proteome. Over the holidays, while we were having lunch, the computers were running. That's another thing I love about computers: While you're having lunch, they can be doing useful work for you. You come back and they've solved the problem.
Then we thought, why not do another 20 model organisms: those important for research, such as the zebrafish, fruit fly, and mouse; those important for agriculture, such as rice and wheat; and those important for disease, especially neglected diseases such as malaria. More recently, we worked with the Drugs for Neglected Diseases initiative. We are focused on doing things that have maximum benefit for the world — leishmaniasis, Chagas disease, all these neglected diseases in the developing parts of the world that affect millions of people. Unfortunately, pharma doesn't pay much attention to those diseases, so it's mostly nonprofits.
We thought if we could give them the protein structures, they could start drug development. Having the protein structure really helps because you can see what part of the protein to target. What's incredible is that the rule of thumb used to be that experimentally determining the structure of a single protein took one PhD student their entire PhD, and sometimes they still couldn't crystallize it. In the entire history of experimental biology, the community solved a total of about 150,000 structures. In our first year, we've done a million, including the 20,000 of the human proteome. It's exponential because it's also software. We're going to try to solve all 100 million — all of the proteins known to science — over the next year.
Topol: If there wasn't something convincing about AI shaking up the world before, the work you're doing is it. And by the way, this week it's on the cover of Science yet again, with the cracking of the nuclear pore complex, thanks to your work and that of your collaborators. It's also relevant to future pandemics, with the cracking of the 20 top pathogens. It is extraordinary how much impact this has.
I know you're interested in protein disorder prediction and the effects of point mutations — the functional aspects, not just the shape and 3D structure. That's in the pipeline for you. Another big outgrowth of AI is drug discovery. You started a company called Isomorphic Labs and you open-sourced everything you did. You enabled all these competitors; there must be about 50 companies now doing AI drug discovery. You also have your own efforts. Help us understand that.
Hassabis: With AlphaFold, we decided that the maximum impact we could have, to benefit humanity and the scientific community, was to open-source the work and make it freely available for any use, commercial or academic. Many people were surprised about that — that we would allow pharma companies to use it. We just felt that it was the best way to advance drug discovery, and we've seen the consequences and how much the field has flourished since then. I knew about the nuclear pore complex work, but I didn't realize that it had been published. That's amazing — the biggest complex in the human body, with AlphaFold helping.
But this is just the beginning. Earlier, you said, I believe, we're entering a new era of digital biology. I think at its heart, biology can be thought of in a fundamental way as an information processing system. On a physics level, that's what biology is. DNA is the most obvious example of that. All of biology can be viewed as informational, and if that's true, then AI could be the perfect description language, if you like, for biology, in the same way that math perfectly describes physics; they are sort of in partnership. Biology is an information system — an unbelievably complicated one and an emergent system. It's too complicated to describe with simple mathematical equations. It's going to be much messier than that. You're not going to have a Newton's laws of motion equivalent for a cell. It's just too messy, too emergent, too dynamic.
But AI can potentially make sense of that soup of signals, patterns, and structure that's far too complicated for the human mind to grasp unaided. I do believe that we're in the perfect regime and AlphaFold is the first huge proof of concept of that, otherwise it would be just conjecture. I do think there are many more things to come from AlphaFold, such as small molecule design, protein-protein interaction, point mutation prediction. Isomorphic Labs is our attempt to push forward on that, especially on the drug discovery angle.
AlphaFold was just one piece of the puzzle to help drug discovery. But there are many other pieces of the drug discovery pipeline that I think AI can fundamentally speed up and improve. Maybe it will improve the odds of drug compounds going through clinical trials. So I think there's enormous potential for AI to reimagine or rethink the drug discovery process from first principles, but from an AI computational perspective. Isomorphic Labs is our expression and attempt to do that.
Topol: It's terrific, and it's now getting to this intersection of life science and medicine. We have big problems in medicine. You would consider them narrow. For example, electronic health records have all this unstructured text that we can't deal with. We also have the problem of multimodal AI, whereby people have sensors with continuous data, and we have the genome and microbiome and their records and environmental sensors and so on. But we don't know how to analyze that data. How are we going to move this field forward to where we can understand the individuality and uniqueness of each person?
Hassabis: I agree. We've done our own work in the past on image-scan recognition for mammography and retinal scans. It is almost routine for AI to help process imaging, at least to triage the scans for the doctors and nurses to decide which are the critical or difficult patients. That seems to be a no-brainer to me. As you say, somehow it must be collected multimodally with electronic health records and text and other things. Of course there are questions over respecting privacy, which is vitally important in this area.
The problem in many of the health systems in the world is that the data are in archaic systems and are not well curated, so it's quite difficult for anyone to find or even combine the right sorts of data. How to handle that is a question for politicians and health ministers to figure out. In some countries, such as Singapore, data are more integrated. It may be a bit simpler in smaller countries. Perhaps it will develop in those places first.
Medicine should be personalized, for cancer and other therapies. I believe it's well appreciated now that cancers are a multitude of diseases. If you sequence the cancer itself in an individual, you see how it interacts with that patient in an individual way. Same with the microbiome, which is probably super-important in many diseases that are poorly understood and unique to each individual.
We are sledgehammering cures — giving people a whole cocktail of drugs because we aren't sure which one is going to work for that patient. That does damage to their systems. Perhaps they are cured, but with a lot of collateral damage. I believe treatment can be made much more precise if you understand the individual involved. The problem, then, is that you have to extrapolate from an N of 1, whereas with current medical techniques, you need Ns of hundreds or thousands to be sure.
But let's say we have a generic drug that works in most people. I can imagine a world where, 10 years from now, we have AI systems that go in, find out your genetic details and other things, and you get tested and then the AI tweaks that generic drug slightly for you. Then it can predict the outcome of that treatment, which will have fewer side effects and be more effective. To me that seems like a plausible way that personalized medicine can come to life. That would be amazing for healthcare.
Topol: I'd like to get your perspective on this idea of a digital twin infrastructure. Today, we do these clinical trials and maybe 10 out of 100 patients benefit, yet we treat all the other 90 with the same cockamamie thing. It won't even help. What if you had a planetary digital twin infrastructure whereby you could get nearest neighbors at every level, and you could say precisely "you will respond" and "this is the best treatment and the best outcome or the best prevention." Is that attainable? You're very young. Can we get to that in your lifetime?
Hassabis: I hope so. With our science team and investors at Isomorphic and also at DeepMind, you can think of it as building up the interaction layer, modeling more and more complex parts of biology. You talk about a digital twin. One of my dreams in the next 10 years is to produce a virtual cell. What I mean by virtual cell is you model the whole function of the cell with an AI system. You could do virtual experiments on that cell, and the predictions that come out of that would hold when you check them in the wet lab. Can you imagine, if you had something like that, how much faster and more efficient that would make the whole drug discovery and clinical trials process?
Only 1 in 10 drug candidates makes it through the trials as we do them now, and it takes 10 years even to get to that point, so it costs billions of dollars. That's why we don't have more drugs for more diseases, especially in the poorer parts of the world. The investment risk is huge and the process is too slow, given the aging population and the things we know we have to do about future pandemics. You can think of what we've done with AlphaFold as the first step of the ladder: Can we determine the structure of proteins statically? But of course, biology is a dynamic system. So the next step is proteins interacting with other proteins, and with ligands and small molecules; maybe disordered regions become ordered through those interactions. Then you build up slowly, maybe to pathways, and eventually to cells and ultimately perhaps the whole organism. That's the dream.
Topol: Well, I hope you add that to your checklist, Demis. I've been enthralled by this discussion. You and your team are an amazing force. I can't thank you enough for taking time with us. And I want to congratulate you because you have shaken up life science like no one else, and you're just getting going. Where you're headed, we will follow. Everyone listening should realize that we're talking to a force. Now the only thing you have to do is convince us that you're actually human and not an AI agent. By that, I mean, wow.
Hassabis: Thank you. That is kind of you to say.
Medscape © 2022 WebMD, LLC
Any views expressed above are the author's own and do not necessarily reflect the views of WebMD or Medscape.
Cite this: Eric J. Topol, Demis Hassabis. It's Not All Fun and Games: How DeepMind Unlocks Medicine's Secrets - Medscape - Jun 15, 2022.