Empathy, data and machine learning: the promise and limits of AI in healthcare

Dr Bayju Thakar, a former NHS doctor and founder of the digital health company Doctor Care Anywhere, writes about the role of AI in digital health.

There’s an almost ceaseless stream of media coverage now on AI in digital health. Whether it’s wide-eyed forecasts that it’ll replace human clinicians in the foreseeable future, or alarm bells warning it’s a danger to public health, AI is barely out of the news.

I’m somewhat bemused by all this. I’m certainly reluctant to see chatbots as a viable alternative to clinician care – on which more below – but I’d rather we didn’t make a bogeyman of these technologies either. I think a dose of patience is needed here: we mustn’t rush into adopting such revolutionary tech, but equally let’s not scare ourselves witless over what it can do.

I see AI as rather like a child prodigy. It’s already doing amazing things – and not just in routine triage and workflows, to which its true sceptics occasionally reduce it. In radiology, for instance, one trial saw AI beat 17 of 18 radiologists at spotting pulmonary nodules; another, in A&E, saw a 47% reduction in misinterpreted wrist-fracture scans. That’s pretty massive.

There’s been some real progress in cardiology, too: in another case I saw recently, an algorithm scored 92% accuracy, against the 79% achieved by echocardiographers. That’s a really significant difference in such a serious area of medicine.

These early wins aren’t just the way the cookie crumbled, though – they’re the result of strong and sustained R&D. And the lifeblood of that R&D – apart from money, of course – is quality, transparent data. That’s something you only get with widespread buy-in and trust from patients and clinicians alike – and, let’s face it, that simply isn’t where we are now.

If we raise AI right – if we nurture it on the diet it needs of abundant, properly labelled data – it can do things we can barely conceive of now. But we must be realistic. AI is a child prodigy, but it’s still a child. And like any other child prodigy, while there’s a huge amount of promise, there’s quite a bit that can go wrong too.

As the adage goes, it takes a village to raise a child. And it takes a thoughtful and patient village to raise a child prodigy – especially if that child is to achieve the amazing feats of which they’re capable without going off the rails and hurting those around them.

I’m a digital healthcare provider myself, and a willing evangelist where the tech fulfils its promise. But sometimes turkeys need to vote for Christmas when it’s the right thing to do – and I don’t think we’re quite there yet with AI. As with any emerging technology – particularly one where people’s safety is at stake – we need to walk before we can run.

So why do we rush ahead like this? People in this sector aren’t fools, after all. What’s the draw?

Fear. Fear of spiralling costs, ever-increasing demand, complexity of disease, ageing populations and all the other grave challenges health systems face. The thought of a “cure-all” silver bullet is comforting – but illusory.

One of healthcare’s biggest challenges is variance in care – and one of AI’s big promises is reduction of variance, by standardising the source of that care.

I happen to think the idea of trying to eliminate variance by automating clinical interactions is rather a sad one. But, more than that, I’m not convinced it would even work.

Why? What does AI lack that human clinicians can provide?

The main thing is empathy. What we call AI – as Ian Jackson wrote recently in these very pages – is better described as machine learning. It can spot patterns very well indeed, and in time it’ll be identifying trends and unlocking research that will enable transformational preventative care which, for the moment, we can only dream of.

But if we can’t condense something into properly labelled data, there’s no way of teaching it to a machine. And empathy remains mysterious to even the brightest minds on earth.
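
To make that point concrete, here’s a minimal sketch of what “teaching a machine” actually involves – a toy supervised-learning example in Python (using scikit-learn), with entirely made-up features, values and labels rather than anything from a real clinical model. The machine learns only from the labelled column we hand it; a quality with no such column gives it nothing to learn from.

```python
# A toy illustration, not a real clinical model: supervised learning
# needs every training example paired with a human-supplied label.
# The features and values below are invented purely to show the mechanics.
from sklearn.tree import DecisionTreeClassifier

# Each row is a hypothetical patient: [age, systolic BP, cholesterol]
features = [
    [54, 140, 6.1],
    [37, 118, 4.2],
    [61, 155, 6.8],
    [45, 122, 4.9],
]

# The crucial ingredient: a label for every row (1 = condition present,
# 0 = absent), condensed from clinical records by humans.
labels = [1, 0, 1, 0]

model = DecisionTreeClassifier(random_state=0).fit(features, labels)
print(model.predict([[50, 150, 6.5]]))  # e.g. [1]

# Note there is no "empathy" column we could add here: until a quality
# can be condensed into labelled data like this, a machine can't learn it.
```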

Empathy isn’t just a nice-to-have – it’s a critical ingredient in quality care. That’s not to say all doctors have stacks of empathy, but what empathy they do have can be a vital part of what they do.

One study, for instance, showed that diabetic patients of physicians with higher scores on the Jefferson Empathy Scale suffered far lower rates of acute metabolic complications than those whose doctors were more aloof. That was based on a sample of 21,000 participants, with the results significant at the 95% confidence level – it’s no anecdote.

In another, cold sufferers rated their doctors for empathy on the Consultation and Relational Empathy (CARE) measure, with white blood cell counts taken 48 hours later. Perceived clinician empathy, and the length of the interaction, were both positive indicators for rate of recovery.

This wasn’t subjective – the effect could be seen under a microscope, in a fourfold difference in neutrophil count. That’s an observable demonstration, I think, that empathy should be considered a key component of clinical competence.

So let’s not try to use AI as a replacement for things it can’t replace – let’s focus on what it can do, and help it raise its game there. For that to happen – to yield the data we need to realise AI’s full promise – people need to trust these tools.

I recently commissioned some polling from YouGov to test the appetite for AI chatbots as an alternative to contact with a GP. Just 7% of respondents opted for the chatbot. That doesn’t suggest to me that people trust such tools as things stand; it suggests we have a lot more to do first.

So, what should we do? What do we need in place for AI to win that trust?

First, it needs to be credible. We need clear, detailed oversight processes in research and governance that will allow us to validate, regulate and implement these technologies as they develop. There’s a surprising lack of accountability and stringent testing of AI solutions in healthcare, and as a result we’re still at the shallow end of the evidence curve.

Then there’s what I’ll call intimacy – between the people designing these tools and the people who’ll face them in their most vulnerable moments. Have the developers engaged thoughtfully with the patients and clinicians who’ll be using them, to really understand what makes them tick – or are they foisting the tools on people based on their own preconceptions?

If you want the appetite, adoption, outcomes and data on which AI’s future relies, you can’t take that second path. My own experience in digital healthcare, and in medical practice more broadly, tells me it simply won’t work.

Then there’s self-orientation, which could make or break the whole thing. I’m thinking here of those who feed into this process: the big data players, the social media giants, the search engines, and some of the more starry-eyed tech-heads. Where they pursue an agenda at odds with delivering the best health outcomes – or even just fail to prioritise those outcomes – they profoundly endanger the public trust on which this whole endeavour relies.

But, really, the big one is transparency. AI can’t live up to its promise if the data, and the algorithms that run on it, are hidden away in silos and black boxes. That data needs to be available to designers, practitioners, testers and regulators – and if we need to found a new agency to that end, then that’s something we should look at.

So if we’re going to raise the wonder child of AI right, let’s also get our own expectations of it right: what it can do, and do well, and what it shouldn’t even try to do. And, as it grows, let’s make sure we have the right institutional infrastructure around it, so it makes the best use of the diet we feed it – abundant, transparent data which, I hope, will grow in quality over the years ahead.

If that happens – so that everyone can learn from it, and it can serve all of us, not just those who can afford it – these tools can improve at a rate that will make the true sceptics who remain sing a rather different tune.


