The Problem: Current Neural Network Models are Confining and Expensive
What is a neural network or deep learning? Simply put, neural networks are computing systems built from layers of simple processing nodes that learn tasks from data, and they underpin machine learning and natural language processing. MIT’s definition of neural networks is the following:
“Neural nets are a means of doing machine learning, in which a computer learns to perform some task by analyzing training examples. Usually, the examples have been hand-labeled in advance. An object recognition system, for instance, might be fed thousands of labeled images of cars, houses, coffee cups, and so on, and it would find visual patterns in the images that consistently correlate with particular labels.” Existing neural networks used to support natural language processing require significant time and resources to build the data models that train NLP engines for their desired focus.
Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are the two most common neural network architectures used in deep learning. CNNs are a class of deep neural network most often applied to analyzing visual imagery, such as medical images. RNNs use their internal state, or memory, to process variable-length sequences of inputs. RNNs underpin the speech recognition products of companies like Suki, Nuance, and MModal. The technique works well when analyzing the relationship between words that are close together, but it loses accuracy in modeling the relationship between words at either end of a long sentence or paragraph. Current natural language processing applications struggle because they analyze words one at a time, without context.
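To make the RNN idea concrete, here is a minimal sketch (not any vendor's implementation) of a single recurrent cell in NumPy: a hidden state carries memory forward one input at a time, which is why sequences of any length can be processed, and also why information from early inputs fades over long spans.

```python
import numpy as np

# Minimal RNN cell sketch: the hidden state h is the network's memory.
rng = np.random.default_rng(1)
d_in, d_h = 3, 5
W_x = rng.normal(scale=0.1, size=(d_h, d_in))  # input-to-hidden weights
W_h = rng.normal(scale=0.1, size=(d_h, d_h))   # hidden-to-hidden weights
b = np.zeros(d_h)

def rnn_forward(sequence):
    h = np.zeros(d_h)                    # start with empty memory
    for x in sequence:                   # one step per sequence element
        h = np.tanh(W_x @ x + W_h @ h + b)
    return h                             # fixed-size summary of the sequence

short_seq = rng.normal(size=(2, d_in))
long_seq = rng.normal(size=(10, d_in))
# Sequences of different lengths yield the same-size hidden state.
print(rnn_forward(short_seq).shape, rnn_forward(long_seq).shape)
```

Because each step compresses everything seen so far into one fixed-size vector, distant words must survive many updates to influence the output, which is the long-range weakness the article describes.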
The Solution: Transformer Neural Networks Emerge to Improve Deep Learning Efficiency
Transformer neural networks emerged from the concept of attention in Google AI research. Attention refers to mathematical models of how the elements of a sequence relate to, complement, and modify each other. Recent Google research focuses on sparse attention, which computes similarity scores for only a limited selection of pairs in a sequence rather than for all possible pairs, as full attention models do.
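The core attention computation can be sketched in a few lines of NumPy. This is a generic illustration of scaled dot-product attention, the building block of transformers, not Google's production code: every token scores its similarity to every other token, and those scores become weights for mixing the sequence.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V — each row attends over all tokens."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # all-pairs similarity scores
    # Row-wise softmax turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Three toy token embeddings of dimension 4, attending to themselves.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
output, weights = scaled_dot_product_attention(X, X, X)
print(weights.sum(axis=-1))  # each token's weights sum to 1
```

The `scores` matrix is exactly the all-pairs computation the sparse-attention work tries to avoid: for a sequence of n tokens it holds n² entries, so computing only a limited selection of them is what makes sparse attention cheaper on long documents.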
Two key transformer models are leading the effort to drive improved AI services: Google BERT and OpenAI GPT-3.5. These two models have significantly increased the accuracy of natural language processing applications. Transformer models have two advantages. First, they understand the relationship between sequential elements that may be far from each other. Second, they process sequences faster because they pay attention only to the important parts. Researchers can now attack the problem of words with more than one meaning, called polysemes, along with ambiguous pronouns. An example is the word “it,” whose referent depends on context (Clinton’s famous parsing aside). Transformer neural networks remember nearby words but also connect them to words much farther apart in a document.
While transformer neural networks are currently expensive to build and maintain, the models can be repurposed to support many industry processes. In healthcare, the ability to read medical record text, create guidance on protocol compliance or variance, and suggest potential alerts regarding a patient’s care will continue to drive the adoption and use of AI by frontline providers.
The Justification: Accurate NLP Will Drive Better Outcomes and Improved Patient Safety
NLP in healthcare continues to advance provider capabilities for capturing and evaluating text. NLP has been used to extract key words from clinical documentation to improve service coding. It is now being advanced to capture the physician’s speech during a patient intervention and create the clinical documentation in real time. Advances in NLP are driving significant market events: Microsoft just acquired Nuance to ensure its products can compete with NLP products emerging from Google and Amazon.
As NLP capabilities continue to improve in accuracy and relevancy, healthcare providers will see improvements in patient safety and outcomes from NLP that is integrated in patient care applications. These improvements will be necessary to support the long-term viability of provider organizations as they move to value-based care reimbursement.
The Players: Large Technology Companies Have First Mover Advantage
Google and OpenAI, with models such as BERT and GPT-3.5, are the large technology organizations focused on advancing transformer neural networks. As these companies advance their NLP models based on transformer neural networks, vendors of healthcare applications will integrate these NLP solutions to provide improved guidance and support for providers in real time. In many cases, these applications will be mobile, which will further support the convenience of using these solutions.
As AI deep learning models continue to advance, healthcare will benefit from more accurate ambient voice and NLP solutions that will make EHR documentation less effortful, medical coding more accurate, and the navigation of clinical applications more intuitive. As an example, a physician could order medication for a patient by speaking the following words: medication order, ampicillin, 500 mg, q4 hours, 14 days, send a prescription to patient’s pharmacy on file. The order would not only be converted to a prescription and sent to the pharmacy but also be placed on the medication list. When the physician ordered ampicillin, clinical decision support tied to the NLP engine would evaluate the order for medication conflicts. The medication order would be converted to a CPT4 code and added to the billing file for the encounter. This scenario may happen within the next few years as AI NLP algorithms built on transformer models are tested in consumer smart-device environments hosted by Amazon, Google, Microsoft, and Apple. After that, it is but a short step to healthcare applications. Look ma, no hands!
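The back half of that scenario, turning a transcribed utterance into a structured order, can be sketched with simple pattern matching. This is a hypothetical illustration: the field names and the order template are assumptions for the example, not any EHR vendor's API, and a production system would rely on an NLP model rather than a regular expression.

```python
import re

def parse_medication_order(utterance: str) -> dict:
    """Hypothetical sketch: extract structured fields from a spoken order."""
    pattern = re.compile(
        r"medication order,\s*(?P<drug>[\w\s-]+?),\s*"
        r"(?P<dose>\d+\s*mg),\s*q(?P<interval>\d+)\s*hours,\s*"
        r"(?P<duration>\d+)\s*days",
        re.IGNORECASE,
    )
    m = pattern.search(utterance)
    if not m:
        raise ValueError("utterance does not match the order template")
    return {
        "drug": m.group("drug").strip(),
        "dose": m.group("dose"),
        "frequency": f"every {m.group('interval')} hours",
        "duration_days": int(m.group("duration")),
        "send_to_pharmacy": "pharmacy on file" in utterance.lower(),
    }

order = parse_medication_order(
    "medication order, ampicillin, 500 mg, q4 hours, 14 days, "
    "send a prescription to patient's pharmacy on file"
)
print(order["drug"], order["dose"], order["frequency"])
```

The structured record is the hand-off point the article describes: from here, hypothetical downstream services could check drug interactions, route the prescription, and map the order to a billing code.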
Photo credit: Elnur, Adobe Stock