Study Pinpoints Challenges for AI Explainability in Digital Pathology

Artificial intelligence (AI), particularly machine learning technology, is transforming pathology workflows. By automating steps in tissue analysis, AI algorithms accelerate diagnosis and improve diagnostic accuracy. However, AI algorithms are difficult for humans to interpret, which complicates their regulatory approval for clinical use and limits their widespread clinical adoption. To overcome these interpretability challenges, researchers have developed explainable AI (xAI) models. Nevertheless, there are no widely accepted criteria for determining the explainability of AI algorithms, and studies assessing clinicians' experience with AI models are lacking.
Researchers from the Distributed Artificial Intelligence Laboratory at the Technical University of Berlin recently conducted a first-of-its-kind mixed-methods study to assess how pathologists interpret and use AI-assisted image analysis.1 The study shows that cognitive biases influence the interpretation of state-of-the-art xAI approaches, and that expectations of AI-assisted pathology are often unreasonable.
“Although AI assistance in pathology offers incredible benefits for patients and diagnosticians, it is important for AI vendors to consider both the social and psychological aspects of building explainability for their solutions. To adequately do so, explainable AI solutions must be developed with feedback and validation from real user studies,” said Theodore Evans, researcher at the Distributed Artificial Intelligence Laboratory and first author of the study.
Commenting on the implications of their findings for the development of AI models for pathology, Evans noted: “It is important for stakeholders in the development of regulatory aspects of clinical AI certification to carefully define the requirements for transparency and explainability, and to be aware of the second-order effects that mandating these as components of clinical AI systems may have.”
The study will appear in the August 2022 issue of Future Generation Computer Systems.

Study Rationale: Understanding the xAI Explainability Paradox
“While identifying promising research directions in explainable AI for medical imaging applications, we discovered that little work in this domain was grounded in an understanding of the potential user interactions with xAI systems,” Evans said. “Instead, the majority of state-of-the-art research came solely from the machine learning domain, based only upon the intuitions of researchers on what information about internal model workings might be valuable to any given stakeholder,” he added.
Evans also noted that, although studies have been conducted to investigate explainability requirements for AI systems in pathology, no studies have directly assessed the impact of existing xAI approaches on target users in this domain.
He explained that rather than adding yet another algorithmic approach to this already extensive body of work, they set out to better understand the implications of integrating existing methods into interactions between humans and AI in the context of digital and AI-assisted pathology.

Survey
To better understand patterns of use and trust in existing xAI algorithms among clinicians, the team surveyed 25 pathologists and neuropathologists (12 consultants, six researchers, four pathologists in training, and three pathology technicians) using an online questionnaire. Additionally, six board-certified pathologists were questioned in video interviews.1

Gaps and Expectations of AI Solutions
The study revealed key gaps in participants' understanding of what AI is and how it can be leveraged to improve or accelerate diagnosis. Many participants indicated that AI could primarily be used to assist with simple, time-consuming tasks such as cell counting.1
The participants also noted that AI algorithms could be used to overcome inter-pathologist variability. In addition to their reproducibility, their ability to integrate multidimensional data (e.g., biomarker expression, tissue architecture, and cell morphology) was described as another key advantage of AI systems.1
When asked about their expectations of AI algorithms, participants identified speed as the highest priority. The long time required for slide scanning was often cited as a factor hindering the clinical adoption of AI-assisted pathology solutions. Other limitations of current AI solutions included concerns about accuracy, data protection, and a lack of standardized training data.1

Trust in AI Solutions
The participants indicated that their own judgment and experience from extensive use and testing of AI systems formed the basis for assessing the accuracy and trustworthiness of those systems.1 Thorough external validation of AI model performance was suggested as a way to increase pathologists' trust in AI-assisted diagnostic systems.
Four of the six pathologists questioned in video interviews indicated that they trusted giving AI systems control over decision making and were open to an AI system giving a different result from their own.1 These findings suggest an overall trust in the reliability of AI.1

Expectations of AI Explainability
When asked about their expectations of AI explainability, most participants expressed a preference for simple visual explanations.1
All participants also indicated that they would be more likely to trust AI solutions if explanations could be related to images from training data, drawing parallels with a pathologist justifying a diagnosis by referring to previous cases.1
“Our findings highlight the importance of being aware of the cognitive biases to which this preference for simplicity makes us susceptible,” Evans said. “We, as users, tend to prefer relatable and easily digestible rationales for why a certain outcome was reached. This also applies to our interactions with AI and xAI systems, specifically the interaction between pathologists and AI-assisted clinical decision support systems,” he added.
The study also showed that, when presented with xAI ‘explanations’, pathologists were prone to accepting their prior expectations of which diagnostic features are important to an AI system.1 “This risks lulling pathologists into a false sense of confidence over AI results or general performance, and leading to unreasonable expectations of AI assistance in general,” Evans explained.

Unanswered Questions
“The design of the study was limited to qualitative assessments of the impact of xAI on users in pathology. It identified a number of important effects of which stakeholders in AI development must be aware, but leaves the door open for novel approaches to avoiding, mitigating, or taking advantage of these,” Evans said.
He added that there is scope for more quantitative investigations of the impact of different explanation modalities on their tendency toward particular cognitive biases. “The EMPAIA consortium and our research partners continue to conduct this research to better guide regulatory practice and lower hurdles to the safe application of AI to clinical pathology,” he noted.

References
1. Evans T, Retzlaff CO, Geißler C, et al. The explainability paradox: Challenges for xAI in digital pathology. Future Gener Comput Syst. 2022;133:281-296. doi:10.1016/j.future.2022.03.009
