This transcript has been edited for clarity.
Eric J. Topol, MD: Hello. This is Eric Topol with the Medscape Medicine and the Machine podcast. I'm thrilled today to welcome Kai-Fu Lee, who is one of the leading artificial intelligence (AI) experts in the world.
Kai-Fu Lee: Thanks for having me.
Topol: It's a joy and an opportune time, what with the publication of your new book, AI 2041: Ten Visions for Our Future, which takes us forward 20 years. Before we get to that, let me give our Medscape audience a little background. You were born in Taiwan. You came to the United States in 1973, went to Columbia University and then Carnegie Mellon University, one of the leading AI centers in the country. You had an amazing career at Apple, Microsoft, and Google, where you led Google China. In many ways you have been a major force for AI around the world, so we're really interested in your perspective.
You and I first converged after I read your book AI Superpowers: China, Silicon Valley, and the New World Order. I was blown away because you had a unique perspective. We had the chance to work on a piece together, published in Nature Biotechnology and called "It Takes a Planet," about the idea of collaboration to advance AI in human health and medicine. More recently, this idea culminated in a new book by you and your colleague, Chen Qiufan, who is a Chinese science-fiction writer.
The book is a combination of sci-fi and your views on AI in medicine. You've gained personal experience having gone through a lymphoma diagnosis, which I know colored your perspective.
Data and AI Superpowers
Topol: Let's start with AI Superpowers, a remarkable book. Give us the skinny on your thinking when you wrote that book, which was published in 2018.
Lee: Back when I wrote AI Superpowers, there were a couple of key concepts that I wanted to get across. The first was that AI, especially deep learning, is a tremendous breakthrough, and this breakthrough is a little different from what most people think it is. It isn't about humans programming machines in an "if-then-else" kind of way. Rather, it's about machines that can learn from a huge amount of data and draw their own conclusions in directions that humans give them. The amazing thing about this technology is that it scales with more computing power and more data. Of course, the challenge is that it doesn't have any self-awareness, understanding, emotions, or creativity as we understand those qualities.
The company or country in possession of an enormous amount of data has a definite advantage. The hypothesis was that China, with such a huge population using the mobile computing platform like no other country, would become an AI superpower. People in China spend many more hours on super-apps, which would become very good at AI and make money for them, thereby encouraging more people to go into engineering and raising the price of AI engineers. At the time that was a possible but questionable hypothesis. In the past few years, we've seen it come to fruition, and data is in fact the most important element. Of course, you still need smart people and fast machines.
Topol: This was also the subject of the 60 Minutes segment, when you addressed these issues. I can't recommend your new book enough. It provided a vantage point from which I approached you to work together on a commentary titled "It Takes a Planet." In that piece, we wrote that if we put aside political and nonmedical issues, we could develop a digital infrastructure that would enable us to find digital twins through nearest-neighbor analysis. A patient with a new diagnosis of cancer would have the whole world of data to draw upon to find a digital twin and the best treatment and outcomes. That digital twin could live in China or in Africa, and, of course, there could be many. But you would learn in a whole different way than we learn today, which is from clinical trials, trial and error, and all sorts of very rudimentary approaches that don't involve AI. As you pointed out in our article, the chance of doing that now is enhanced because of federated learning and homomorphic encryption. Could you explain what those are for the uninitiated?
AI and Privacy
Lee: Sure. I personally believe that healthcare is the most important application of AI that we'll see in the next 20 years because so much data is being collected on each individual. That includes things we didn't have before: multiomics, very high-accuracy MRI imaging, detailed blood and biomarker analysis, and also wearable computing data. These technologies usher in a data-driven process of understanding the human body, its measurements, how they relate to our health, and how to tweak various parameters, whether sleep, exercise, medicines, or nutrients, to make us healthier.
Getting that data is of great importance. Yet many people don't want their health data to be public, and people from some cultures and countries care about that greatly. A number of mechanisms can potentially get around that. The most rudimentary method is to anonymize the data: hide the name, age, race, zip code, and so on, and then aggregate it. But some of that information is useful in understanding the patient's history. So a number of technologies, generally called privacy computing, are being investigated now. Privacy computing means that we put our data in a place that we trust, and it will not leave that place. We hope this will allow us to have our cake and eat it too.
AI aggregation and training can take place across a number of such trusted entities. For example, if there were a thousand hospitals, each with a thousand patients, each patient's data is already in the hospital; therefore the data is, by definition, accessible to that hospital. Each hospital would train AI models based on its own patients and no others. Then, when the models are combined across the thousand hospitals, we get the effect of training on a million patients without the trainer ever seeing any data from any hospital. That's the promise we could realize if the whole planet worked together in that kind of arrangement.
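To make the arrangement Lee describes concrete, here is a minimal, single-round sketch of federated averaging in Python. The synthetic data, the simple linear model, and the function names are hypothetical illustrations rather than anyone's production system: each simulated hospital fits a model on its own patients, and a coordinator combines only the resulting weights, never the raw records.

```python
# Minimal federated-averaging sketch (hypothetical data and model).
# Each "hospital" fits a simple linear model on its own patients;
# only the model weights leave the hospital, never the raw records.
import numpy as np

rng = np.random.default_rng(0)

def local_train(features, outcomes):
    """Least-squares fit on one hospital's private data."""
    X = np.hstack([features, np.ones((len(features), 1))])  # add bias term
    weights, *_ = np.linalg.lstsq(X, outcomes, rcond=None)
    return weights, len(features)

def federated_average(local_results):
    """Combine per-hospital weights, weighted by patient count."""
    weights, counts = zip(*local_results)
    counts = np.array(counts, dtype=float)
    return np.average(np.stack(weights), axis=0, weights=counts)

# Simulate 1,000 hospitals, each with ~1,000 patients (toy synthetic data).
true_w = np.array([0.5, -1.2, 2.0])  # hidden relationship we hope to recover
local_results = []
for _ in range(1000):
    X = rng.normal(size=(1000, 2))
    y = X @ true_w[:2] + true_w[2] + rng.normal(scale=0.1, size=1000)
    local_results.append(local_train(X, y))

global_model = federated_average(local_results)
print(global_model)  # close to true_w, learned without pooling raw data
```

In practice, federated learning usually iterates this exchange over many rounds with neural networks, secure aggregation, and differential privacy layered on top, but the core idea is the same: the models travel while the data stays put.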
Topol: The federated learning that you just explained is one of these privacy computing methods that we didn't have some years ago. Encryption has also reached new levels, as in the example of homomorphic encryption. Can you speak to that?
Lee: Homomorphic encryption allows us to encrypt data in such a way that the parties doing the computation cannot decrypt it, yet AI training can still take place on the encrypted data, creating models without anyone ever seeing the original data. These two technologies are at different stages of maturity. Federated learning is ready to be deployed in some scenarios. We have yet to see how much attenuation we get by aggregating models rather than training directly on the million patients, but we think it should be a small amount. Homomorphic encryption still has some fundamental algorithmic issues; it does not yet work well with nonlinear functions like those used in deep learning. But people are working on enhancing that approach.
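As a toy illustration of the property Lee is describing, the following Python sketch implements a Paillier-style additively homomorphic scheme with deliberately tiny, insecure parameters; it is an assumption-laden demo, not a production library. It shows that two ciphertexts can be combined so that the decrypted result equals the sum of the hidden values, and it hints at the limitation Lee mentions: only additions (and scalar multiplications) of plaintexts are supported, which is why nonlinear deep-learning operations remain an open challenge.

```python
# Toy Paillier-style additively homomorphic encryption (insecure,
# illustration only): anyone can add encrypted values together,
# but only the key holder can decrypt the result.
import math
import random

# Tiny primes for demonstration; real deployments use 2048-bit moduli.
p, q = 293, 433
n = p * q
n_sq = n * n
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
mu = pow(lam, -1, n)  # modular inverse of lambda mod n (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    x = pow(c, lam, n_sq)
    return ((x - 1) // n * mu) % n

c1, c2 = encrypt(40), encrypt(2)
c_sum = (c1 * c2) % n_sq   # multiplying ciphertexts adds the plaintexts
print(decrypt(c_sum))      # 42, computed without decrypting c1 or c2
```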
AI and Precision Medicine
Topol: That's great. This is a hot area for further development. Chapter four of your new book is called "Contactless Love," and it's actually about the pandemic: what we have learned from the pandemic, and the process of automation acceleration. There's a sci-fi story in each chapter, along with your extraordinary perspective. Can you recall some of the key points you made in this chapter? Everyone should read this; it's extraordinary.
Lee: It's a bit of an outside, nonmedical expert's opinion from the AI side; I hope I got most things right. A couple of things are discussed in that chapter. One is the fact that healthcare has become digitized and how that creates a starting point for the future of precision medicine. That is, the power of AI lies not only in its ability to optimize outcomes such as the overall health of a population, the likely cure rate, or even the healthy survival rate of patients, but also in its ability to individualize treatment. Just as when we go to Amazon, the page I see and the page you see are different. It's the same with TikTok and Facebook: It knows who I am and what I like and shows me videos that I might click on but you might not.
For the same reason, in the future it might be possible to determine different diagnoses and treatments initially for different clusters of people (young people, old people, men and women, and so on), but eventually perhaps a different treatment for every person based on that person's allergies, family history, and so forth. Doctors already do that to some extent using rules, but AI can target each individual, using data that human doctors don't yet have the time or the means to fully interpret, such as genetic sequencing. That's incredibly exciting.
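One way to picture the first step, tailoring care by cluster, is ordinary patient clustering. The sketch below uses scikit-learn's KMeans on entirely synthetic, hypothetical features (age, a lab marker, a genetic risk score); a real system would draw on far richer multimodal data, and the end state Lee describes is effectively one cluster per patient.

```python
# Minimal sketch: cluster synthetic patients into groups that could
# each receive a differently tailored protocol (hypothetical features).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Columns: age, lab marker, genetic risk score (all synthetic).
patients = np.column_stack([
    rng.normal(55, 15, 500),    # age
    rng.normal(1.0, 0.3, 500),  # lab marker
    rng.random(500),            # genetic risk score
])

features = StandardScaler().fit_transform(patients)
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

# Each cluster could map to a candidate protocol.
for k in range(4):
    print(f"cluster {k}: {np.sum(clusters == k)} patients")
```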
AI and Robotics
Lee: A second topic in that chapter is the use of robotics. We are already seeing this in China because China has so many people. During a pandemic, the ability to do predictable, fast-turnaround testing is extremely important. Nucleic acid testing in China can now be done for entire cities within 2-3 days, with 5-10 million tests conducted in that period. That is accomplished by machines that can run the tests 100 times faster than people; this is robotic technology at work.
One of the companies we invested in does this, as do others. We discovered somewhat by luck that the work of medical laboratory technicians is relatively easy to replicate compared with, say, the assembly of an iPhone, because assembling an iPhone requires strong vision, hand-eye coordination, and dexterity. But much of the lab technician's work is pouring liquids, and robots can dispense a tiny droplet of liquid much more accurately and precisely than humans.
Projecting further, we can see that robotics can take over a lot of the work of medical technicians, which is not just about replacing humans and saving money but about improving turnaround. Machines can work 24/7. They don't make mistakes. There's no risk of contagion or contamination. Robotics is also perfect for drug discovery and for the laboratory experiments used in academia. One of the companies we fund is now building a platform that can be sold to academic labs and pharmaceutical companies to accelerate the pace of experiments, discovery, and other scientific uses. This grew out of the initial product design for COVID tests, and now it's being expanded to many domains in the medical sciences.
We all know about Intuitive Surgical and the various types of robotics being used for dental implants, spinal surgery, colonoscopy, and the list goes on. I believe we'll see a lot more robotics at work, including very small robots that can do things that surgeons cannot do, but it's a process. We won't have robo-surgeons and robo-doctors from day one. They will be small assistants that do suturing and other things that doctors don't want to do and that AI can do better. But gradually, they'll gain more prominence.
AI in Drug Discovery
Topol: Another topic I want to talk with you about, which I think is very important, is drug discovery. We've all read about DeepMind's breakthrough with AlphaFold 2, which is beginning to predict the structures of nearly all proteins and can be a tremendous starting point for drug discovery. But other parts of drug discovery are also being automated with AI assistance.
Lee: We've invested in a company that has the ability, given the pathogen, to propose likely targets based on previous clinical studies, papers, and other knowledge bases, and to find small-molecule solutions to that illness. These robots wouldn't be working alone but alongside a scientist, each proposing, checking, and approving experimentation. Through an iterative process, that phase of drug discovery has been accelerated by a factor of three, with costs coming down by a factor of 10.
So the promise is that, if this can work for many drugs and many phases of drug discovery, the big reduction in research and development (R&D) costs could make it possible for scientists to pursue treatments for rare diseases. Such treatments aren't economically feasible at present because the one- or two-billion-dollar price tag for R&D cannot be recouped; the diseases are not common enough. With AI-enhanced R&D, more treatments can be discovered, and rare diseases can potentially be cured.
I'm particularly excited about this area because it doesn't bring the medical profession and the AI and data profession into any conflict; it doesn't change the medical process; and it doesn't threaten anybody's job. It's a "one plus one equals three" kind of opportunity. We're hopeful that many more drugs will be developed with human scientists and AI working symbiotically.
Dark Forces of AI
Topol: That's great. We will track where AI can take us in the next two decades. In your book, you wrote, "If we get the dance between AI and human society right, it would unquestionably be the single greatest achievement in human history." But as you also pointed out, we are the masters of our fate, and no technological revolution will ever change that. One of the worries that you're in touch with, as much as anyone, is that AI has a dark side: not only privacy and security, but also potentially worsening inequities, a lack of explainability and transparency, and so forth. How do we tackle these counterforces? As you say, AI could be the single greatest achievement, but it also could make our situation and healthcare worse.
Lee: It requires a few enhancements and changes in mentality. First of all, I do believe that regulations are needed, but they need to be more targeted. Rather than just looking at large companies and thinking about breaking them up, I believe more specific measures are needed. For example, are there environmental, social, and governance (ESG) requirements for AI companies, so that they are not spreading fake news or deepfakes and are being transparent? There should be a scoreboard, so that if they don't do well they will be shamed; people won't use their products or buy their stock. There can also be serious punitive measures, as in the Facebook/Cambridge Analytica situation. I don't think the US and UK laws are strict enough, because the Cambridge Analytica people have gone off and done more startups without paying a price.
Topol: Very important point.
Lee: Companies that are not trusted or that draw too many complaints can be targets of an AI audit, just like an IRS tax audit. Our governments can't afford to check every detail of every company, but maybe one out of a thousand potentially serious offenders could be audited. How that would be done needs to be worked out, but all of these measures would provide finer-grained and stronger deterrents for companies to do better.
The second important approach is for technologists to work on solutions. Historically, all technology platforms have created problems. Electricity caused people to be electrocuted. The internet and personal computers brought about all these viruses. But eventually circuit breakers largely solved the electrocution problem at home and at work, and antivirus software largely solved the virus problem. So, for the other problems you brought up, there are technological solutions.
We've talked about the use of privacy computing to protect personal data. There are ways to improve explainability. There are tools to reduce bias and address fairness issues. I wish more technologists, researchers, and startup companies would target these issues rather than trying to come up with the next breakthrough AI algorithm or the next way to make money with an AI algorithm. Think about these externalities and how they can be fixed. I'm hopeful that technologists will play a big part in finding the solutions.
Many of these issues occur when the interests of the app or the internet company are out of alignment with those of the users. For example, if a social network company wants more minutes from the user, it will train the AI to keep showing enticing videos, which may or may not be good for the user but will keep that person interested. Then we can end up in a very bad situation where this misalignment of interests has negative effects, for example, causing viewers to become more extreme in their views or to watch videos that are not good for them but are entertaining in some way.
A better way would be to develop business models that are more consistent with an alignment of interests. An example is Netflix. I use Facebook and Netflix, and this is no criticism of either company, but their business models drive their behavior. Netflix is naturally long-term oriented and incentivized because the user pays and subscribes on an annual basis, so Netflix must keep the user entertained and make sure the content is worthwhile year after year; the alignment of interests is much closer. The Facebook business model, by contrast, is oriented toward advertising and eyeballs. They're incentivized to keep showing us content to get us to click and watch. As investors and entrepreneurs, we should think more about business models that align the large company's interests with its users'.
Topol: For AI scientists, the answer to every AI problem is always more AI. But instead of incentivizing the hero stories, we should incentivize work on technological solutions to the problems of bias, fairness, explainability, and all these things that can be approached technologically.
AI and Health
Topol: There are two other things I want to get into. One is the status of healthcare AI in China now. You're a first mover in many respects. It's still early, but what do you see in terms of AI and medicine in China?
Lee: China enjoys some advantages in data. Obviously, strong data regulations have come out in China, as they have in the United States and the European Union; the regulatory side is comparable. But Chinese hospitals are larger. They have many times more beds than American hospitals. Another interesting thing is that there is usually one authoritative hospital for each area of medicine, such as brain sciences. So when hospitals around the country have really tough cases, they send them to that hospital. Thus, one hospital usually possesses a lot of data, because it's the authority on that specialty. The data aggregation is more natural; it's larger without having to resort to federated learning and the like. That's one potential advantage.
The second is that China's medical practice is behind that of the United States, but that creates some opportunities too, because some startup companies are forward-thinking about data, and as they gather data, they're thinking about using AI directly. For example, we are investors in a pharmacy benefit management company that is now accumulating amazing data from users about their illnesses, drawn from insurance claims and outcome data. I suspect US companies, say a CVS, would probably not be as attuned to the value of that data.
The aggregation of this increasing volume of data, and the increasing number of young people who study bioinformatics, which sits at the intersection of the two disciplines, are playing a significant role in China. We're investors in a couple of companies that come from bioinformatics and are adding AI. One of the issues with AI plus medicine is the cultural difference between the two disciplines. AI people tend to think of everything as statistics: If the numbers are better, then do it. But people trained in medicine follow the Hippocratic Oath, under which every human life is sacred; their approach is more conservative, more case by case, less statistical.
In China, this emergence of people who have studied bioinformatics and can cross over to both sides plays an important role in the AI startups trying to get into medicine and in the medical companies trying to get into AI. I see more crossover now, and the people tend to be younger. The older generation in China was educated and trained before these technologies were popular. The younger generation provides more of the momentum.
I see opportunities in China. In the medical domain, China is still significantly behind the United States. A lot more can be learned and improved there.
Topol: I have seen, for example, radiology algorithms put in place to guide screenings in China. I wonder whether remote monitoring and preempting the need for hospitalization will come first in China. It's a big advantage to be able to deal with the massive amount of data you have, to develop and validate algorithms, and to have the technical ability to deal with multimodal AI, which is yet another new frontier.
AI and Humanity
Topol: The last thing I want to ask you about is in an area where you and I converge. You obviously have the expertise in AI, but we both understand how technology can enhance humanity and compassion. How can technology make medicine more humane and compassionate?
Lee: One of the stories I often tell is about an entrepreneur who developed an elderly care robot and deployed it. To his surprise, he found that the elderly who used the product used the customer service feature far more often than any other feature. He asked me why that was, and we looked into it. It turns out that it was not because the product was faulty or hard to use or that people were looking for that kind of assistance. Rather, the elderly person would click on the customer service link and a representative would say, "How can I help you with the product?" And the user would say, "Let me show you a photo of my granddaughter" or "Why didn't my son call today?"
People clearly want and need the human side of the healthcare equation. That's just as important as an accurate diagnosis and treatment. In my own case, as I battled lymphoma, my confidence went up at the times when I felt people were taking care of me, when I felt I had more medical knowledge, and when I felt I would likely be cured. It's kind of a placebo effect; my mental state, confidence, and optimism played a big part in my recovery. Doctors can play a much bigger role in that. There's no doubt that AI will increasingly help the doctor make the initial diagnosis in certain vertical sectors like radiology or breast cancer, while the doctor still possesses much broader medical knowledge.
Over time, AI will get better. I'm an optimist, and I believe AI will become better than humans at the entire process of diagnosis. Whether I'm right or wrong, AI can be a very good assistant that helps a human doctor make the final call and deliver a better diagnosis, treatment, and outcome. AI frees the doctor from having to spend all that time reading every medical journal for new treatments, studying the statistics, and determining which of two treatments is more efficacious. Doctors can rely more and more on AI. Then the doctor can spend more time with patients: understanding the family history, teasing out conditions and data useful for the diagnosis, and even visiting the patient at home to provide more background and build confidence. This also increases the likelihood of recuperation. That combination, in which the human doctor spends more time giving the patient confidence while AI improves the doctor's capabilities and becomes a better assistant, seems to predict a good symbiotic future for both.
Topol: You just summarized the title of this podcast, Medicine and the Machine. No one could do that better. I'm indebted to you for your work and for the opportunity to write together. I'm sure your contributions will continue; they're immense. Your background has put you in an enviable position because you have experienced this field across many cultures and at many different leading companies, the tech titans. We'll be very interested to follow you. I suspect a third book will be coming out in the future. Thank you so much for spending time with us today and enlightening us about your perspective, not just in healthcare, of course, but in the broad aspects of AI.
This podcast is intended for US healthcare professionals only.
Medscape © 2021 WebMD, LLC
Any views expressed above are the author's own and do not necessarily reflect the views of WebMD or Medscape.
Cite this: How Machines Bring Humanity Back to Medicine - Medscape - Oct 04, 2021.