At the Nuance Healthcare Partner Event in Berlin, web content editor Ian Bolland caught up with Dr Simon Wallace, chief clinical information officer at Nuance Communications, to talk about what speech recognition and AI technology can offer to doctors.
A solution to clinician burnout, a reduction in the administrative chore of clinical documentation, and the opportunity to enrich the consultation between clinician and patient are key reasons why speech recognition technology is held in such high regard at Nuance.
Dr Wallace explains the future development of this technology within a clinical consultation. He describes it as the ‘Clinic Room of the Future’, built on the concept of ambient clinical intelligence (ACI): “A new word that’s come into my vocabulary is ‘diarise’. It’s that part of ACI where the whole of the conversation is recorded, a bit like that of a stenographer in court or that person that sits in the House of Commons just taking everything down. But then you need the smarts, what Nuance calls clinical language understanding, which pulls out the key clinical components of the story and provides a SNOMED CT-coded summary note of the consultation.
“Doctors have different ways of creating their notes – some document as they go, others get the whole story and then type it up. Either way, a structured note (e.g. presenting complaint, history, examination) is the final output and, if you like, the ACI is doing this automatically behind the scenes, which is really quite impressive. I get the best of both worlds: a fully diarised note as well as a codified summary without even touching the keyboard.”
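To make the idea concrete, the pipeline Dr Wallace describes, a full diarised transcript plus a structured, coded summary produced behind the scenes, can be sketched as below. This is a minimal illustration only: the phrase table, section names, and codes are hypothetical placeholders, not Nuance's clinical language understanding or real SNOMED CT codes.

```python
# Hypothetical lookup table: phrase -> (note section, placeholder code).
# A real system would use a clinical NLP model and genuine SNOMED CT codes.
CODED_PHRASES = {
    "fell off a ladder": ("presenting complaint", "PC-001"),
    "hit their head": ("presenting complaint", "PC-002"),
    "hypertension": ("history", "HX-001"),
    "tender scalp": ("examination", "EX-001"),
}

def summarise(transcript: str) -> dict:
    """Bucket recognised phrases from a diarised transcript into a
    structured note, keeping the raw transcript as the diarised record."""
    note = {"presenting complaint": [], "history": [], "examination": []}
    lowered = transcript.lower()
    for phrase, (section, code) in CODED_PHRASES.items():
        if phrase in lowered:
            note[section].append({"finding": phrase, "code": code})
    # Both outputs the doctor describes: the diarised note and the summary.
    return {"diarised": transcript, "summary": note}

result = summarise(
    "The patient fell off a ladder and hit their head. "
    "Past history of hypertension. On examination: tender scalp."
)
print(result["summary"])
```

The point of the sketch is the shape of the output, not the matching logic: the clinician speaks once, and both the verbatim record and the codified structured note fall out of the same conversation.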
This new way of working aims to save time, allow a more thorough consultation between doctor and patient, and reduce some of the administrative burden dominating doctors’ working lives. It also gives them stronger contact with the patient, letting them focus on the doctor-patient relationship rather than on the keyboard.
And documenting more thoroughly in less time means the consultation can be recorded more accurately. For example:
“It’s really important to record pertinent negatives. So if someone comes in having hit their head, the pertinent negatives, e.g. no loss of consciousness, no vomiting, are just as important to record as the positive findings. When typing our records we are sometimes not as good at recording these, simply due to time constraints, as typing can take so much longer.
“So, by being able to diarise a full conversation in which all these key questions have been asked, the clinical language understanding ensures that these pertinent negatives are included in the final summary note. It’s the way all doctors have been trained to tease out the ‘patient’s story’, and this technology allows that to take place while taking away the admin chore of documenting.
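The pertinent-negatives point above can be sketched as a simple check: given a presenting complaint, were the negatives a clinician would expect actually captured in the summary? The complaint-to-negatives table here is illustrative only and does not reflect any real clinical decision-support rules.

```python
# Hypothetical table of pertinent negatives expected per presenting complaint.
EXPECTED_NEGATIVES = {
    "head injury": ["no loss of consciousness", "no vomiting"],
}

def missing_negatives(complaint: str, summary_findings: list) -> list:
    """Return the pertinent negatives expected for this complaint that
    do not appear among the summary note's recorded findings."""
    recorded = {finding.lower() for finding in summary_findings}
    return [
        negative
        for negative in EXPECTED_NEGATIVES.get(complaint, [])
        if negative not in recorded
    ]

# A summary that recorded only one of the two expected negatives:
findings = ["hit head on door frame", "no vomiting"]
print(missing_negatives("head injury", findings))
```

In the spirit of the interview, a capability like this would flag gaps for the clinician to review rather than documenting anything on their behalf.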
“However, this approach is new and it’s inevitably going to evolve as we use it more. How does the surgical specialty consultation compare to the mental health interview for example? It will mature as doctors get more experienced with this new way of documenting.
“So in summary, the artificial intelligence is taking out the administrative chore of having to document, and allowing a richer, fuller consultation to take place in the manner in which we’ve been trained to do. It’s giving us the smart tools to do it better.”
So, is it something we’re likely to see on a wider scale within the National Health Service? The first step on this journey towards the ‘clinic room of the future’ is doctors using cloud-based voice recognition to create their notes and letters. This is happening today in hospitals such as Homerton University Hospital Foundation Trust and Oxford University Hospitals NHS Foundation Trust, where doctors have been completing their outpatient letters using speech recognition. This has cut letter turnaround times from 12-17 days to 2-3 days, with a significant reduction in the cost of outsourced transcription, as well as less need to recruit extra secretarial and bank staff to cope with backlogs.
“I think the other thing that is really timely about this technology is that using speech is out there in our everyday lives. This means doctors, nurses and other healthcare professionals may be using it at home or in their cars, for example. Then, when they come to work and their organisation has gone down this track, they say: “ah great.” Unlike some other digital technologies, voice recognition is a technology that is mature, is industrially robust, and is ready to go, for a population that has already experienced it in their non-working life.”