Despite this, we may soon start to see things moving forward. The recent government review of AI and machine learning technologies in the UK was rightly upbeat about the potential of this technology in areas like health care. But even if AI were advanced enough to be a routine part of a care pathway (it is not), we are even further away from machines being able to operate independently of clinicians.
Instead, the technology currently being trialled supplements the role of clinicians, acting as a prompt to suggest further investigation rather than challenging the clinician’s judgment or taking their place.
In three areas where human clinicians make decisions – screening, diagnosis and treatment planning – technology could attempt to do what a clinician does: assess complex and often conflicting data about a patient and use existing best practice and evidence to try to identify the best course of action.
More regulation for machine learning tools?
In many ways, software informed by machine learning – where it can improve over time based on experience – is just like any medical device in that it’s a tool a clinician uses to help them do their job. Devices without a machine learning component are regulated by the Medicines and Healthcare products Regulatory Agency (MHRA) and each machine learning device will need to be registered with that organisation – but machine learning tools may need more regulation than is currently in place for non-learning devices.
At its most advanced, machine learning learns while it collects data, which means it can change its recommendations after time ‘in the field’. Also, some machine learning algorithms struggle to spell out exactly how they have come to a decision, and with these ‘black box’ algorithms it can be hard to judge how much weight to give their recommendations. These twin concerns could prove a headache for regulators that are used to dealing with more static clinical decision support tools.
The first problem – recommendations changing over time – is one of the new and interesting strengths of this emerging technology. Constantly learning from new data even while it’s deployed offers opportunities for the technology to keep improving its recommendations. However, this opportunity also presents a risk to regulators: what happens if the machine gets a bit less accurate rather than a bit more accurate? This is not a problem with current clinical decision support as it relies on a tried and tested iteration of the software.
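One practical response to this risk – sketched below purely for illustration – would be to check any retrained version of a model against a fixed, held-out audit set before its recommendations reach clinicians, and to fall back to the approved version if accuracy has slipped. The function names, the audit-set approach and the tolerance threshold are assumptions made for the sake of the example, not anything specified by the government review or the MHRA.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

# Illustrative sketch only: gate a continually retrained model behind the
# currently approved version by comparing both on a fixed audit set.
@dataclass
class GateResult:
    approved_accuracy: float
    candidate_accuracy: float
    accept_candidate: bool

def gate_update(
    approved_model: Callable[[object], int],
    candidate_model: Callable[[object], int],
    audit_cases: Sequence[object],
    audit_labels: Sequence[int],
    tolerance: float = 0.0,
) -> GateResult:
    """Accept the retrained model only if it is at least as accurate as the
    approved one on a held-out audit set (hypothetical policy, not MHRA's)."""
    def accuracy(model: Callable[[object], int]) -> float:
        correct = sum(model(case) == label
                      for case, label in zip(audit_cases, audit_labels))
        return correct / len(audit_labels)

    approved_acc = accuracy(approved_model)
    candidate_acc = accuracy(candidate_model)
    return GateResult(
        approved_accuracy=approved_acc,
        candidate_accuracy=candidate_acc,
        accept_candidate=candidate_acc >= approved_acc - tolerance,
    )
```

Run regularly against the same audit set, a check like this would also give regulators a simple, auditable record of whether a deployed model is getting better or worse over time.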
The second problem – algorithms that cannot articulate their decision process – is a growing concern in machine learning. Current clinical decision support, without machine learning, offers a rationale for each recommendation that a clinician can interrogate. More advanced algorithms do not always do this, sometimes because they simply cannot provide such a rationale. Current regulations make this a difficult scenario to deal with, but more to the point it could put an unreasonable onus on the clinician to place a level of faith in the technology. Medicine is built on evidence, not faith.
The solution to both of these issues is easy to articulate but more challenging to deliver. Algorithmic transparency would soothe both concerns. It would, by definition, eliminate the black box problem by opening the inner workings of the box up to clinician scrutiny. Transparency might also make regulators less nervous about a learning machine being let loose in clinical settings by making its decisions ultimately auditable by the clinician. If this still feels too unsafe, there might be an option for the ‘new’ version of the algorithm to offer its recommendation and decision process next to those of the tested version. This would be a sort of A/B testing, where the clinician plays the arbiter.
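To make that arrangement concrete, the sketch below shows one possible shape for it, with the tested and the new versions of an algorithm each offering a recommendation and the clinician’s final decision recorded alongside them. All of the names and structures here are hypothetical; they illustrate the idea of the clinician as arbiter rather than describe any existing system.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical sketch of the 'A/B' arrangement described above: the tested
# algorithm and the newly retrained one both offer a recommendation, and the
# clinician records which (if either) they acted on. All names are invented.
@dataclass
class SideBySideRecommendation:
    patient_id: str
    tested_recommendation: str
    new_recommendation: str
    clinician_decision: Optional[str] = None  # filled in by the clinician

def recommend_side_by_side(
    patient_id: str,
    patient_record: dict,
    tested_model: Callable[[dict], str],
    new_model: Callable[[dict], str],
) -> SideBySideRecommendation:
    """Present both versions' outputs so the clinician can act as arbiter."""
    return SideBySideRecommendation(
        patient_id=patient_id,
        tested_recommendation=tested_model(patient_record),
        new_recommendation=new_model(patient_record),
    )
```

In practice, clinicians’ decisions recorded in this way would themselves become evidence: if they consistently overrode the new version, that would be a signal back to the developer and the regulator.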
Software engineers may yet find a way to crack black box algorithms in health care, and we should certainly insist on a safety-first approach for clinicians who work with these technologies. However, if the box remains black, it could prove a serious roadblock to machine learning being realised in clinical care. It is hard to see a future within the current regulatory framework where a clinician could take advice from a tool that cannot justify its decisions. To compound matters, the General Data Protection Regulation, coming into force in May, puts some of these algorithmic transparency requirements into law.
Ensuring clinical governance standards are set and upheld
If the NHS does find acceptable ways of letting clinicians use black-box learning algorithms to help decide a patient’s care, a more diverse range of regulators will need to think about the issues this raises for clinicians. Who is accountable when things go wrong? With current clinical decision support tools, the clinician remains responsible for safety incidents, but with machines that are dynamic, changing and sometimes opaque, does responsibility sit with the clinician or with the tool? These questions are for the future, but if the NHS is to realise the full potential of machine learning, they are questions that need answers. Organisations such as the Care Quality Commission and NHS Digital will need to think about their role in ensuring that standards of clinical governance are set and upheld. As has been argued elsewhere, the regulators of medical professionals, such as the General Medical Council, should consider how machine learning can fit within current clinical standards.
Calls for further regulation may dismay those who wrote the government’s report into AI, who argued that AI governance should not be treated as distinct from more general data governance. However, clinical governance might be seen as different: regulation, accountability and governance provide an all-important safety net for patients and clinicians. And that safety net does not allow for a faith in technology beyond what auditable evidence supports.
In the final short story of Isaac Asimov’s 'I, Robot', the machines that plan the Earth’s economy start to make small errors in their calculations. When questioned about these discrepancies, the machines respond enigmatically: ‘The matter admits of no explanation.’ The mistakes are in fact carefully calculated to deliver the best possible outcomes for humankind. We don’t know what this best possible outcome is: ‘Only the machines know, and they are going there and taking us with them.’ There is a data-driven logic to this, but it runs counter to the paradigm of evidence-based medicine. As important as outcomes are, it’s the explanatory power and the accountability that we really value.
Harry Evans, Researcher, The King's Fund