Interview with Julianna Ianni
Vice President of Artificial Intelligence Research and Development at Proscia
BIOSKETCH: Julianna Ianni is VP of AI Research and Development at Proscia. She leads a team of engineers and scientists developing AI systems to help laboratories and research organizations improve quality and efficiency using digital pathology. Prior to her work at Proscia, Julianna earned her Ph.D. from Vanderbilt University in Biomedical Engineering in 2017, developing methods that enable faster and safer MRI, from fast image reconstruction techniques to predicting patient-tailored RF shim parameters for high field scanners.
Interview by Jonathon Tunstall – 09 Dec 2022
Published – 14 Mar 2023
JT – Welcome to Pathology News. Today I’m interviewing Julianna Ianni, Vice President of Artificial Intelligence Research and Development at Proscia.
Julianna, welcome to Pathology News.
JI – Thanks. Nice to be here.
JT – I see from your profile that you have a background in biomedical informatics, but perhaps you could tell us what drew you specifically to artificial intelligence in pathology.
JI – Sure. I have been interested in the medical field and medical technology for a very long time, but my interest in AI began during my undergraduate years at Vanderbilt. While pursuing a degree in biomedical engineering, I did an internship in bioinformatics. I saw the potential of data to unlock new insights and was drawn to imaging, which struck me at the time as one of the biggest sources of data.
I later pursued medical imaging and went on to do my Ph.D. in biomedical engineering, with a focus on MRI. I was building algorithms to process MRI images, as well as algorithms on the acquisition side, for obtaining the images in the first place. Much of that optimization process is not so different from what we call AI today, and it draws on many of the same principles that underpin machine learning and deep learning.
Meanwhile, deep learning was taking off, and my goal after finishing my Ph.D. was to pursue it specifically in medical imaging. Up to that point, I had barely even heard of pathology. Then I came across Proscia, which was developing AI applications for pathology. I hadn't known that pathology was going digital, so it was exciting to find that match with my skills and interests.
JT – Great. So obviously you are now with Proscia, and you have a role managing AI research and development. What does Proscia aim to deliver, and how do you and your team fit into that process?
JI – Proscia’s mission is to change the way the world practises pathology. We have a software platform called Concentriq, which is powering routine pathology operations for research organisations and diagnostic laboratories. It’s also AI-ready.
My team develops the AI applications that plug into Concentriq to transform pathology images and data into information that makes pathologists', technologists', and lab managers' jobs easier and the lab more efficient.
Very broadly, we work on two different sets of applications. The first set comprises applications for a diagnostic setting. One example is DermAI, which is currently available for research use only. It categorises skin specimen cases, enabling each case to be routed to the appropriate pathologist, and it flags suspected melanoma cases so that they can be prioritised. Our second broad area of AI development is process automation applications that help to ease the burden of tedious, manual tasks. One example is our Automated Quality Control application, which is focused on detecting artefacts in digitally scanned slides.
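To make the routing idea concrete, here is a minimal, purely hypothetical sketch of triage logic of the kind described above. The category names, probability threshold, and function are illustrative assumptions, not DermAI's actual behaviour.

```python
# Hypothetical triage sketch: route a categorised skin case to a queue and
# assign a priority. The category names and 0.5 threshold are assumptions.
def route_case(predicted_category: str, melanoma_probability: float) -> dict:
    priority = "urgent" if melanoma_probability >= 0.5 else "routine"
    queue = ("dermatopathology"
             if predicted_category == "melanocytic"
             else "general")
    return {"queue": queue, "priority": priority}

# A suspected melanoma is flagged and prioritised ahead of routine cases.
print(route_case("melanocytic", 0.83))
# {'queue': 'dermatopathology', 'priority': 'urgent'}
```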
JT – Okay. So how does Automated Quality Control work? What would you say are the benefits it delivers to users?
JI – I should start by noting that quality control of slides is not a new process. It’s been performed for quite some time, since before pathology started to go digital. There can be many challenges with slide prep that lead to quality artefacts, or issues. These include tissue folds on the slide or tissue that has fallen off the edge of the slide.
However, many of the labs we talk to are ready to address quality control now that pathology has gone digital. This is because they’re dealing not only with artefacts due to prep but also issues that are due to aspects of the scanning process. There are also some issues that are only present in scanned images. For example, sometimes the tissue of interest can be missing from the slide, or it can get cut off at the edge of the scanning area. Scanners can also be out of focus in some regions. Our Automated Quality Control application detects all of those. We are finding that typically 10 to 30% of scanned slides have some kind of quality issue, although that can vary a lot depending on the setting and the application.
JT – Yeah, because of course we could be talking about stain variation across the slide, or it could be a tiling scanner that has incorrectly stitched the tiles together at their boundaries. Obviously, this is a single application that is trying to detect multiple issues. Does it also work across multiple scanning platforms?
JI – Yes, it does. It was developed and tested with tens of thousands of slides and annotations. It covers a wide array of scanned images.
JT – Well, one of the challenges here is that the development of AI algorithms, and ultimately their success, depends very much on the variability and quality of the inputs.
JI – This is actually one of the considerations that got us thinking about developing Automated Quality Control. We were seeing quality issues affect the development of our own AI algorithms, so a solution that can detect them is a great step towards being able to accurately assess how much they may be impacting an algorithm. We can then potentially screen out some of the regions containing issues from data analysis as well as when training and testing the applications.
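Once a QC step has flagged artefacts, the screening she describes can be a simple filter. A hypothetical sketch, assuming a tile table with an artefact column (the column names are illustrative):

```python
# Hypothetical sketch: screen QC-flagged tiles out of a training set.
import pandas as pd

tiles = pd.DataFrame({
    "tile_id": [101, 102, 103, 104],
    "qc_artifact": [None, "tissue_fold", None, "out_of_focus"],
})

# Keep only artefact-free tiles for analysis, training, and testing.
clean_tiles = tiles[tiles["qc_artifact"].isna()]
print(clean_tiles["tile_id"].tolist())  # [101, 103]
```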
JT – I do think that’s a very powerful approach to have the quality control in conjunction with the AI itself, as a precursor. Now I know that your team’s been very active in publishing your recent research findings, so perhaps you could break down some of the takeaways for us.
JI – Absolutely. We’ve had a few recent works published. One of those focuses on an AI system that we developed to predict melanoma concordance. It essentially predicts when experts are most likely to disagree on a melanoma diagnosis, and we presented our findings at the AIMIA Workshop at ECCV 2022.
When it comes to melanoma diagnosis, there can be a very high degree of disagreement among pathologists in terms of a diagnosis. For example, one pathologist might say it’s melanoma, and another pathologist might say it’s just a lesion with some dysplasia. That disagreement can be as high as 40% in some cases of melanocytic lesions. There are a few works out there already supporting this, and that’s what we saw in our own data.
We really wanted to be able to predict agreement among pathologists on a specific case because it might be correlated with any number of factors. For example, when pathologists tend to agree, there may be more certainty that it is melanoma. So, we're interested in how that level of agreement plays out in the diagnosis itself. Are the lesions that are predicted to have low concordance truly melanoma? Can we find a way to flag them? Could we catch the cases where some pathologists said they were not melanoma, but we predicted a high level of disagreement, and get them reviewed again? And vice versa: could we catch cases that were diagnosed as melanoma but predicted to have high agreement for being benign?
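One simple way to turn multiple reads into a trainable per-case target is sketched below, only to illustrate the idea of a concordance score; it is not the published system's formulation, and the label names are assumptions.

```python
# Illustrative sketch: per-case concordance as the fraction of reads that
# agree with the majority diagnosis.
from collections import Counter

def concordance(reads: list) -> float:
    """Fraction of pathologist reads agreeing with the majority call."""
    counts = Counter(reads)
    majority_count = counts.most_common(1)[0][1]
    return majority_count / len(reads)

# Three reads of the same melanocytic lesion: two call melanoma, one does not.
print(round(concordance(["melanoma", "melanoma", "dysplastic_nevus"]), 2))  # 0.67
# An AI system could then be trained to predict this score from the image alone.
```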
JT – It’s a general industry phenomenon, certainly a phenomenon in diagnostic pathology. We know from the studies that have been done that there are both inter- and intra-pathologist variability in interpretation. And, not just in melanoma, but across a lot of diseases and tissue types. It seems to me that this discordance must impact AI development.
JI – It does, and it’s led us down this road. I think we were a little shocked in the beginning by just how high the discordance rate is. This disagreement impacts both the training and development of the algorithms. It also has an impact during testing because it’s difficult to get to ground truth in the first place. You need samples from a few different pathologists reading each image, and you need those during both training and testing.
You may think ‘oh, it’s just random noise, and it probably averages out. Does it matter if one pathologist has a different take on a given case overall if we are working with a large amount of data?’ But it tends not to average out because there’s often some bias involved.
We presented findings along these lines back in December at the Digital Pathology and AI Congress in London. We demonstrated that there are two components of this sort of diagnostic discordance that we can assess with AI. One of those components is the random noise, and the other component is the bias. Some pathologists have a higher threshold for diagnosing cancer than others. We tend to see different groups emerging, one with a high bias for diagnosing cancer, and others who diagnose cancer a lot less frequently. So, we cannot assume that what seems like random noise is, in fact, random noise.
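A toy simulation makes the point that biased disagreement does not average out. It assumes the threshold picture she describes, with made-up numbers for the reader groups.

```python
# Toy simulation: readers apply different thresholds to the same underlying
# severity score. If thresholds were equal, disagreement would be symmetric
# noise; with biased groups, it is not. All numbers are assumptions.
import numpy as np

rng = np.random.default_rng(0)
severity = rng.uniform(0, 1, 10_000)      # latent severity of each case
aggressive = [0.40, 0.45, 0.42]           # readers quick to call cancer
conservative = [0.60, 0.58, 0.62]         # readers who call it less often

def call_rate(thresholds) -> float:
    """Fraction of cases labelled 'cancer', averaged across a reader group."""
    return float(np.mean([(severity > t).mean() for t in thresholds]))

print(call_rate(aggressive))     # ~0.58: systematically more cancer calls
print(call_rate(conservative))   # ~0.40: systematically fewer
# Pooling all six readers does not recover an unbiased consensus: the
# per-case majority label depends on how many readers come from each group.
```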
JT – Well, obviously here we're thinking about labs adopting AI in routine clinical practice, and this discordance in the ground-truth data is going to play a role in that. What are your thoughts here? What do you think is the impact of AI moving into routine clinical practice?
JI – If your lab is looking to purchase AI or build AI for use in routine practice, you want to pay attention to how it handles discordance. Your lab might have a different bias toward diagnosing cancer compared to whatever the AI application is meant to do. Your lab might also have a different bias than the data on which the algorithm was trained. So, you generally want to know as much as you can about the data the AI system was built on.
With that said, I think it's more important to know what data it has been tested on and understand as much as you can about it. You want the algorithm to be tested on data from experts. So, if you normally have dermatopathologists diagnose your cases, you want subspecialty pathologists to be the ones who provide the ground truth for the test data, and you want more than one of them reviewing each image. You also want to know what they did with the images that weren't agreed on. Did they toss them aside, or did they return results on those images as well? You obviously can't toss cases aside in clinical practice, so you'll want to assess how similar these factors are to your real-life lab setting.
JT – Are perceptions changing in the market? You’ve got me thinking about a new pathologist who has not been involved in the development of an algorithm. We put this AI in front of him or her in a clinical setting and they have a perception that this is somehow going to help them with their work. Is this pathologist thinking, ‘is this algorithm better than me?’ Especially now that there’s a huge amount of work going on through Proscia and other companies, I wonder what you are hearing in the market in terms of objections from pathologists?
JI – I’ve seen a lot of changes in the past few years, and those changes are coming faster as AI adoption spreads. I used to hear a lot of high-level misconceptions about AI. Some people’s most immediate worry was, ‘will it take my job away?’ Others were convinced ‘AI will take a really long time to take off.’ Or ‘it’s way too dangerous for it to be used in diagnosis.’ That was one end of the spectrum. At the other end you heard people saying, ‘Great, AI can sign out all my cases and create reports for me.’
So, there used to be a lot of over-pessimism and over-optimism, but I really haven’t heard a lot of these comments recently. Those more extreme views have gone away and been replaced by more of a nuanced understanding of AI itself and what AI can do for both the pathologist and the lab. This is important because you’re only going to adopt an application if you understand how it’s going to help you. And I see this nuanced understanding deepening as pathologists get even more experience with a variety of AI systems.
I also think it’s starting to become more obvious that applications that aren’t diagnosing but are automating tasks, like our process automation applications, are very valuable and can help to make both pathologists and labs more efficient.
JT – The way I see it, it's more of a synergy between the algorithm and the pathologist, and it seems to me that people are more accepting of that situation now. I think we've reached the stage where we can start asking: what are the next steps for AI in pathology?
JI – Agreed. We’re already seeing software platforms increasingly focus on AI, and I think they are helping routine operations to catch up with the fast pace of growth that we’ve seen in AI applications. From here, I expect a lot more to be unlocked in terms of functionality and utility when you have multiple AI applications interacting with one another in your lab.
On the clinical side, we'll see a growing realisation of the impact of workflow applications, especially in the diagnostic area, along with more narrowly focused applications. In the research space, we'll see a lot of 'no brainer' applications really start to take off. These are the applications that don't sound as shiny on the surface as diagnostics but could have an incredible impact on efficiency and workflow.
JT – Well, there must be barriers if we are considering larger scale adoption, regulation for example. What do you see as those wider barriers to adoption?
JI – One barrier becomes clear when you consider that the lab must first go digital. Perhaps it goes without saying, but your lab needs to be digital before it can deploy AI.
I also think data is still a barrier. It’s becoming less and less of one, but the data to build AI applications is still somewhat scarce. It’s also both hard and slow to come by. There’s a lot of room for innovation in this area, and we’re seeing a lot of progress. But it’s certainly still an issue.
JT – I’ve spoken recently to a couple of companies that are currently working on unsupervised AI. There is a multitude of supervised learning projects going on at the moment, each creating an algorithm specific to a single disease. The human brain is incredible because you can put a slide in front of a pathologist and he or she knows instantly whether it’s breast or GI or prostate. Do you think there’s any future in unsupervised learning, where an algorithm will first decide on the tissue type and then go on to assess the disease state?
JI – I do think there is a lot of room for unsupervised learning. I don’t know how far it will go in terms of deciding on a tissue type right now, but unsupervised learning is exploding in AI for pathology and for other fields as well. I didn’t talk about it earlier, but all our recent work has been making use of unsupervised learning. It’s amazing to see how much it can improve an AI application’s performance. As another benefit, unsupervised learning will certainly accelerate development in the field because the components are more reusable.
JT – It is a very exciting time, isn’t it?
JI – It’s extremely exciting and filled with a lot of opportunity ahead.
JT – I’m not a pathologist myself, but let me ask you: what would you want me to know about making the most of AI?
JI – I’d tell you that you really need to work with AI and be part of the design and building process. AI is not a choice, and it’s already here. You have to work with it so that you don’t end up working for it. What I mean by this is that you don’t want to end up working for AI the way that healthcare professionals have, in some ways, ended up working for EMRs. You want to have the AI working for you, and that only happens if you’re involved. It’s not just pathologists, whom I would say should be involved, but it’s also the lab techs, the histotechs, and the lab managers. Everyone who is part of the lab needs to be part of the design of these systems so that they can be adapted for easy use and don’t end up becoming a burden to you. This is why our team collaborates so closely with pathology practitioners. We want to empower them with the AI that we build, and we know that’s only possible through deep interaction.
JT – That sounds like a powerful message. Get involved and help to shape the future.
JI – You summed it up well.
JT – Well, thank you, Julianna. We’ll bring that to an end there, but thanks for your time today.
JI – Thank you.