NHS 'Must Catch Up With the Pace of Technological Change'

Peter Russell

January 28, 2019

The NHS needs to catch up with the pace of technological change before artificial intelligence (AI) is used more widely in medicine, a report by the Academy of Medical Royal Colleges (AoMRC) said.

The authors urged healthcare providers and regulators to find answers to clinical, ethical, and practical questions over its implementation.                                    

The report, Artificial Intelligence in Healthcare, said that examples of AI could already be seen in the system: from Google's DeepMind, which had taught machines to read retinal scans with at least as much accuracy as an experienced junior doctor, to the health app Babylon, which said its chatbot had the capacity to pass GP exams, although that claim was contested by the Royal College of General Practitioners.

However, the report said, debate continued over whether AI would bring benefits to patients and clinicians or pose a risk to patient safety, health equity, and data security.

It called for doctors and others in the health system to take an active role in the development of technology.

However, it cautioned against politicians and policy makers believing that AI would alleviate pressure on the NHS any time soon.

Health Improvements or a Dystopian Future?

"We could see a utopian world, where health inequalities are reduced, where access to care is dramatically improved and quality and standards of care are continuously driven up as machines learn more about the conditions of the people they are treating," the authors wrote.

"The dystopian, but also feasible outcome is that health inequalities increase, or the system becomes overwhelmed by 'the worried well' who have arrived at their GPs' surgery or the Emergency Department because they have erroneously been told to attend by their AI enabled Fitbit or smartphone."

Another version of the future was a world in which only the well-off could afford the best data from AI-delivered healthcare.

Recommendations on the Future of AI

The report made a number of recommendations. These were:

  • Politicians and policymakers should avoid thinking that AI is going to solve all the UK's health and social care problems

  • Patient safety must remain paramount and AI must be developed in a regulated way in partnership between clinicians and computer scientists, without stifling innovation

  • Clinicians can and must be part of the change that will accompany the development and use of AI – this could affect medical education and careers

  • For those who meet information handling and governance standards, accurate data should be made more easily available across the private and public sectors

  • Joined up regulation is key to making sure that AI is introduced safely

  • External critical appraisal and transparency of tech companies are necessary for clinicians to be confident that the tools these companies provide are safe to use

  • Artificial intelligence should be used to reduce, not increase, health inequality – geographically, economically and socially

A Q&A With One of the Report's Authors

We asked Dr Jack Ross, a clinical fellow at the AoMRC and one of the report's authors, to summarise the findings.

Dr Jack Ross: I think the overarching view is that artificial intelligence is very exciting but it's early days, and there's been a lot of hype that has been unsubstantiated at the moment. 

What we have realised is that the tech companies are way ahead of society, and healthcare providers, and regulators, at the moment and there's a lot of questions we need to start thinking about relatively soon – clinical, ethical and practical questions – before AI starts to be used more widely in medicine.

Medscape UK: We're used to hearing reports about problems with technology in the NHS, such as systems that are incompatible with each other. Is there a lot more work that needs to be done on infrastructure before we can take advantage of what AI might contribute?

Dr Ross: Absolutely. On the one hand you're talking about bringing in AI, and on the other hand you're talking about a major user of fax machines.

There needs to be a lot more work on interoperability, and making sure there's high quality data available, before we think about how we're going to use that for machine learning.

Medscape UK: How will artificial intelligence affect doctors and other clinicians? Will there be a big learning curve for them now and in the future?

Dr Ross: I think there's quite a big learning curve about what the strengths and limitations of AI are.

I think for the foreseeable future there won't be any AI decision-making tools because that takes quite a lot of accountability and responsibility that tech companies won't want.

But we'll start to see them brought in as clinical support tools, and there's a good parallel with drug companies, in that while their developments are very exciting, we need to make sure they're well evaluated and critically appraised.

Medscape UK: From a patient's point of view, will it be the case in the future that 'the robot will see you now'?

Dr Ross: I think there will always be a need for humans, and one thing we've learnt from this report is just how many tasks that doctors do are very human-centric. A lot of GP work requires picking up on subtle cues that will get lost in machine learning and big data, and being able to see the patient in front of you – safety netting, safeguarding, and all of that.

For the patient-doctor relationship, the dream is that AI can automate more of the drudge work that doctors do and free up more time to spend with patients and listen to patients and give advice.

Medscape UK: We've heard in recent years about the so-called 'worried well' with their Google searches of health conditions, Fitbit data, and healthcare apps. Is there a danger that technology in the hands of an untrained patient could have a negative effect?

Dr Ross: I don't want to overstate this because this is the angle taken by the newspapers.

I'm very keen for patients to take control of their health, and I think things like Fitbits and Apple watches have really encouraged people to be healthier.

I do think that apps that are marketed as 'wellness apps' that are unregulated are slightly dodging some of the responsibility here, and there is a question of whether they are giving reliable information that leads to a lot of unnecessary concern for patients.
