Artificial intelligence will never replace a doctor. However, researchers at the Department of Energy’s Pacific Northwest National Laboratory took a big step toward the day when AI can help physicians predict medical events. A new approach developed by PNNL scientists improves the accuracy of patient diagnosis by up to 20 percent when compared to other embedding approaches. The PNNL approach seeks to capture and re-create the types of connections physicians make naturally when they apply a lifetime of learning and knowledge to the patient standing in front of them in the exam room. The goal: use the laboratory’s robust AI capabilities in machine learning and deep learning to improve patient care and save lives. At the heart of PNNL’s breakthrough is a dataset the lab created in collaboration with Stanford University of over 300,000 medical concepts defined by SNOMED Clinical Terms, a collection of standard medical terms, codes, synonyms, and definitions used by medical researchers and practitioners. PNNL developed a graph-based learning method based on these terms that outperformed current models.
Pods of Science | Episode 4 | How to Predict Your Next Doctor’s Appointment
Welcome. I’m your host, Jess Wisse. On today’s episode we’ll be talking about how artificial intelligence could take your doctor’s care to the next level.
Stay tuned to learn more.
JW: PNNL scientists have found a way to improve the accuracy of patient diagnosis by up to 20 percent! How? By using artificial intelligence. A PNNL project, called DeepCare, looked at ways to use AI to improve medical outcomes for patients.
Meet the project lead, Robert Rallo.
RR: I joined the lab three years ago, coming from Barcelona. My background is chemistry, but I was a professor in computer science for more than 20 years before joining the lab. My main area of expertise is machine learning and applications of machine learning in different areas, one of them being computational toxicology. The team working on this includes Khushbu Agarwal and Sutanay Choudhury, two computer scientists in the data sciences group at PNNL. We also have strong collaborations with Virginia Tech and Stanford, and some of the students who have been summer students here at PNNL have been involved in this type of biomedical work as well.
JW: We asked Robert why he got into the field of computer science. And here’s what he had to say:
RR: The fact that a computer is able to learn by itself from data is something that is really interesting for me, really intriguing, and it is what triggered my interest in this field.
JW: Robert and his team at PNNL created a new embedding approach. The approach seeks to capture and re-create the types of connections physicians make naturally, in their heads, when they apply a lifetime of learning and knowledge to the patient standing before them in the exam room.
What’s embedding? Basically, it’s translation for computers. Using embeddings, computer scientists can take a piece of information that only humans can understand and then transform it into something a computer can use.
RR: A medical concept is, for instance, a specific diagnosis. You have fever, you have high blood pressure; these are concepts. And then, for a machine learning algorithm or a computer to process these concepts, they need to be codified in a certain numerical way. So one of the ways in which we are doing this coding is by developing a continuous numeric representation of these concepts that somehow captures the similarities, the relationships, between each of these individual concepts. This idea of transforming a textual set of concepts into a representation which is suitable for machine learning is the embedding process. And what we want is that this numeric representation conveys the same semantics, the same information, as the original concepts.
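The embedding idea Robert describes can be sketched in a few lines: each concept becomes a numeric vector, and a similarity measure over those vectors stands in for the relationships between concepts. The vectors and concept names below are invented for illustration; real embeddings, like the ones PNNL trains, are learned from data rather than written by hand.

```python
import math

# Hypothetical hand-made embeddings: each medical concept maps to a vector.
# Related concepts are given nearby vectors on purpose, to mimic what a
# learned embedding would produce.
embeddings = {
    "fever":                [0.90, 0.10, 0.20],
    "high blood pressure":  [0.10, 0.80, 0.30],
    "elevated temperature": [0.85, 0.15, 0.25],
}

def cosine_similarity(a, b):
    """Similarity between two concept vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Synonymous concepts should score higher than unrelated ones.
sim_synonyms  = cosine_similarity(embeddings["fever"],
                                  embeddings["elevated temperature"])
sim_unrelated = cosine_similarity(embeddings["fever"],
                                  embeddings["high blood pressure"])
```

The point of the sketch is only the shape of the idea: once concepts are vectors, "how related are these two concepts?" becomes an arithmetic question a model can answer.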
JW: One of the hardest parts about using AI in the medical field is the difficulty of combining multiple types of data. Think of all the information that’s captured when you go to the doctor. Now think of all the different forms it comes in. Computer-friendly data like blood work numbers or diagnosis codes are easier to process than unstructured data like chart notes or images from X-rays and MRIs.
RR: Well, everybody knows it’s a known fact that understanding handwritten doctors’ notes is, like, impossible. (laughing) And I say this because my sister is a medical doctor. But no, I'm joking now. But essentially, if we are looking at different types of information, you have structured information, in which everything is well classified, well cataloged, and it’s very easy to use. And then you have all this unstructured information: maybe recordings of patient interviews for something related to mental health, doctors' notes that can be written in different narrative styles, different types of imaging data from X-rays to MRIs. Each one of these modalities of data has its own complexities. And the challenge is being able to process all this data in the proper way. But the important thing is finding how we can combine all this information in the proper way to develop our models.
JW: OK. Doctors have been known to have less than ideal handwriting. But deciphering handwritten notes is not the end goal for computer scientists like Robert. The goal is to create models that can take multiple pieces of data, in many forms, and make connections between them. The models can even detect connections that a doctor may not consider.
RR: What we are trying to do is go beyond these single concepts and capture them in context. We have different concepts, different entities, and the relationships between all of them. And this representation is richer than the single concept because it contains the relationships between all the concepts. The way we are extracting this from the clinical notes is by using natural language processing techniques: identifying which elements in these clinical notes correspond to specific concepts, and which elements correspond to specific relationships. So, for instance, if in the clinical note we say that the patient has a fever and we are prescribing six milligrams of Advil, whatever, for this patient, we are extracting all this information and saying: this is one of the diagnoses for this patient, we are administering this drug, we are giving this dose of the drug, and all of this together is a medication event for a specific symptom.
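The extraction step Robert describes, finding the parts of a note that name concepts and the parts that name relationships, then linking them into a "medication event", can be sketched as a toy pipeline. A real system would use trained NLP models rather than a regular expression; the note text and the pattern below are invented for illustration, echoing his fever-and-Advil example.

```python
import re

# A made-up clinical note in the spirit of Robert's example.
note = "Patient has fever. Prescribing 6 mg of Advil for this patient."

# A toy pattern standing in for a trained NLP model: it tags the symptom,
# dose, and drug mentions that participate in the medication event.
pattern = re.compile(
    r"Patient has (?P<symptom>\w+)\..*?"
    r"Prescribing (?P<dose>\d+\s*mg) of (?P<drug>\w+)",
    re.DOTALL,
)

match = pattern.search(note)
# Link the extracted concepts into one structured medication event.
medication_event = {
    "symptom": match.group("symptom"),
    "drug": match.group("drug"),
    "dose": match.group("dose"),
}
```

The output is no longer free text but a structured record that ties a symptom, a drug, and a dose together, exactly the kind of relationship a downstream model can consume.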
JW: All of this work is to create one thing: a knowledge graph. Robert describes these knowledge graphs as what’s naturally happening inside your doctor’s head when they see patients. Their medical knowledge and experience allow doctors to quickly make connections between symptoms and diseases.
RR: These knowledge graphs are what medical doctors have in their minds when they are diagnosing you, right? They have all these relationships, and based on years of experience and training they are able to make these connections because they have this mental model of the relationships between symptoms, diseases, and everything. So this is what we are capturing with these knowledge graphs, and what we are feeding to the machine learning algorithm together with the data.
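A knowledge graph of the kind Robert describes can be sketched as a set of (subject, relation, object) triples; looking up a concept's neighbors is the machine analogue of the connections a doctor makes mentally. The concepts and relations below are invented examples, not actual SNOMED CT content.

```python
# Hypothetical knowledge-graph triples: (subject, relation, object).
triples = [
    ("fever",     "is_symptom_of", "influenza"),
    ("cough",     "is_symptom_of", "influenza"),
    ("ibuprofen", "treats",        "fever"),
    ("influenza", "is_a",          "viral infection"),
]

def related_to(concept, triples):
    """Return every concept directly connected to `concept` in the graph."""
    neighbors = set()
    for subj, _rel, obj in triples:
        if subj == concept:
            neighbors.add(obj)
        elif obj == concept:
            neighbors.add(subj)
    return neighbors

# A doctor-like lookup: what does the graph connect "fever" to?
fever_links = related_to("fever", triples)
```

In a real system these triples number in the millions and the "lookup" is replaced by learned graph embeddings, but the structure being fed to the model is the same: concepts plus the relationships between them.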
JW: But artificial intelligence will never replace a doctor. Robert and his team envision that new AI models, like the one developed at PNNL, will be powerful tools for physicians.
RR: We are trying to help the doctors with tools that can provide more information to them, because these tools will have access to larger databases and larger amounts of information than the amount of information we can store in our brains. And probably these systems can provide clues about things that a medical doctor, based on a given set of symptoms, might not initially consider, but that maybe are the answer to what is happening. As the amount of information that we have on patients increases, not just in quantity but also in complexity, it will be more common to have all the omics analyses for patients when treating a disease. All this information, and understanding the relationships between all this information, is going to become more and more complex, and these types of tools can help doctors establish all these connections.
JW: Having a tool like this could literally save lives. Doctors are humans. Which means they can make mistakes. Think of their heavy workloads, the high-stress environments, and long hours—all of these things combined could cause your doctor to unintentionally miss something. You could chalk this up to having an “off day.”
RR: Professionals may make mistakes. And these mistakes can happen for a number of reasons: stress at a given point in time, information that is not complete enough, or maybe at a given point in time they are tired and do not see something which could be evident. So hopefully these systems will help in this space. But the perception and the experience of a human, the things that a human can sense when interacting with a patient, perhaps cannot be captured properly by just some analytical methods or things like that. So what I am trying to say is that maybe the doctor can have some perception and see something in the patient that the machine learning, together with all the lab analyses or whatever we are doing, cannot see. I think this is the power of this combination, right? Having this human/machine teaming, in which we have these two components playing together, is when we have a win-win situation.
JW: This allows doctors to give even better care than they do now and has the potential to completely change the way we receive care as patients.
RR: Probably in the future this technology, as I said, the first step will be like a medical assistant, where the system will help the doctor in the reasoning process to establish a diagnosis and will provide indications, probably quantitative indications, in terms of the probabilities of different diagnoses, and also recommendations regarding the best course of treatment. As this progresses, the systems will become more and more intelligent, and probably they will be able to forecast in the longer term what is going to happen for a patient.
And maybe in the future, with all these biometric devices that we have, if this is connected with some medical system, the medical system itself can alert the doctor about a patient, saying, “the vitals for this patient are going in this direction, and we have seen in other patients with the same characteristics, and maybe the same genetic profile, that they are prone to this type of disease, so it is good to be proactive.” And this is probably what is going to have the most impact on the patient side: moving from medicine which is mostly diagnostic to medicine which is much more predictive, in which we are looking at the longer term and trying to forecast what is going to happen to the patient before it happens, instead of curing what is happening right now.
JW: The benefits of using AI for medical care are far reaching. Not only will these models give doctors a look into a patient’s medical future, they could also gain insight into solving current medical mysteries.
RR: I'm hopeful that by doing the analysis on the large database that the VA has for all the veterans, and focusing on specific diseases, we are going to gain an incredible amount of novel insight into some of these diseases. I'm sure that we are going to be discovering things about cardiovascular diseases or prostate cancer that we didn't know just a few years ago. And this will open, for sure, windows into this combination of artificial intelligence and medicine, and its applications for the health of the veterans and for mitigating some of the diseases which specifically target this population.
JW: What’s next for this research? A new dataset that’s part of a collaboration between the Department of Energy and the Department of Veterans Affairs. The VA-DOE Big Data Science Initiative includes investigating better ways of predicting suicide risk and cardiovascular disease, and better approaches to treating prostate cancer.
These are important issues. Someday we hope artificial intelligence will help increase both the length and quality of people’s lives.
JW: Thanks for listening to Pods of Science. Want to learn more? Follow us on social media at PNNLab. We're on Twitter, Instagram, Facebook, and LinkedIn. You can also visit our website at pnnl.gov. Thanks for listening.