SPIE Conference Part 4: Human-AI Collaboration: How to Make the Relationship More Effective?

by Christos Evangelou, MSc, PhD – Medical Writer and Editor

SAN DIEGO, California – Despite ever-increasing interest in using artificial intelligence (AI) and machine learning models to enhance medical imaging pipelines, many challenges still limit effective collaboration between humans and AI. In a keynote talk presented at SPIE Medical Imaging 2023 (San Diego, February 19–23), Professor Mark Steyvers from the University of California, Irvine, discussed the promises and pitfalls of AI-assisted decision making. He also presented data from his research on the effectiveness of AI-assisted decisions and on how humans and machines can form mental models of one another to establish a more effective collaboration.

The notion that AI was set to replace human expertise has since given way to the concept of “human-centered AI” and the idea that, when used effectively, AI can augment human expertise.

“Humans and machines have different strengths and weaknesses. AI is not quite ready yet to operate on its own, and for various legal and ethical reasons, humans still need to be in charge, especially for tasks like diagnosis,” Dr Steyvers noted.

This concept has fueled the development of hybrid systems, in which humans and AI algorithms work together, combining their complementary strengths to offset each other’s weaknesses.

Convolutional neural networks (CNNs), which classify images based on patterns learned from a training dataset, are an integral part of many human-AI hybrid systems. One of Dr Steyvers’s research programs focuses on combining different CNNs with human knowledge to assess whether their complementary strengths can enhance the performance of the overall system.
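For readers less familiar with the architecture, a CNN is essentially a stack of learned convolutional filters followed by a classifier head that maps the extracted features to class probabilities. The minimal PyTorch sketch below is purely illustrative; the layer sizes are arbitrary choices, and it is not one of the networks from Dr Steyvers’s studies.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """A deliberately small image classifier, just to make the idea concrete."""

    def __init__(self, n_classes: int = 2):
        super().__init__()
        # Convolutional filters learn local image features from training data.
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # A linear head maps the pooled features to class scores.
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.classifier(h).softmax(dim=-1)  # class probabilities

# One grayscale 64x64 image in, one probability per class out.
probs = TinyCNN()(torch.randn(1, 1, 64, 64))
```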

“CNNs can perform pretty well but sometimes make mistakes that a human would not make. On the other hand, CNNs perform very well on images that are challenging for humans to classify and are very confident about their decisions,” Dr Steyvers noted.

To leverage the complementary strengths of humans and machines, researchers have developed simple statistical techniques for combining human and AI decisions. However, studies have shown that combining human expertise with AI does not always improve the performance of the system. “There are some boundary conditions that need to be met. One condition is that the differences between humans and AI cannot be too large,” Dr Steyvers explained.
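The talk summary does not spell out the technique, but one simple statistical combination, sketched below under stated assumptions, treats the human and the AI as independent sources of evidence and multiplies their calibrated class probabilities before renormalizing (a naive-Bayes-style fusion; the function name and numbers are illustrative, not taken from the talk).

```python
import numpy as np

def combine_decisions(p_human: np.ndarray, p_ai: np.ndarray) -> np.ndarray:
    """Fuse two calibrated probability vectors over the same classes.

    Assumes the human and AI judgments are conditionally independent
    given the true class -- an illustrative simplification, not
    necessarily the model analyzed in the talk.
    """
    combined = p_human * p_ai         # element-wise product of evidence
    return combined / combined.sum()  # renormalize to a distribution

# Example: the human favors class 0, the AI favors class 1; the fused
# estimate ends up following the more confident source.
p_human = np.array([0.6, 0.3, 0.1])  # hypothetical human confidence
p_ai = np.array([0.2, 0.7, 0.1])     # hypothetical AI confidence
print(combine_decisions(p_human, p_ai))  # ~[0.353, 0.618, 0.029]
```

Under a rule like this, the fused estimate beats the better source only when each source contributes information the other lacks, which is one way to see why the accuracy gap between the two cannot be too large.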

Dr Steyvers presented a mathematical analysis based on a Bayesian approach, showing that the complementarity zone, the range of conditions under which the combined human-machine classifier is more accurate than either alone, is larger when the difference in accuracy between humans and AI is not too great.
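In symbols, the standard Bayesian fusion rule behind this kind of analysis, shown here as an illustration consistent with the sketch above rather than as Dr Steyvers’s exact model, combines a human judgment d_H and an AI judgment d_A about a label y as

$$
P(y \mid d_H, d_A) \;\propto\; P(y)\, P(d_H \mid y)\, P(d_A \mid y),
$$

assuming the two judgments are conditionally independent given y. When one source is far more accurate than the other, its likelihood term dominates the posterior and the weaker source adds almost nothing, which shrinks the complementarity zone.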

Dr Steyvers also discussed aspects of human-AI interaction design that may help improve the performance of AI-assisted decision making. “AI provides advice to a human, and the human makes the final decision. The timing of AI advice and the type of information provided are factors that may influence the final decision by a human,” he said.

Behavioral studies from his laboratory showed that AI-assisted decision making using a sequential paradigm (i.e., the human makes an initial judgment, the AI algorithm provides advice, and then the human makes the final decision) yields better performance than either the human or the AI algorithm alone. “We believe that calibrating the confidence of the AI algorithm and showing AI confidence scores are very important for the performance of AI-assisted decision making,” Dr Steyvers said.
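As a concrete illustration of the sequential paradigm, the sketch below models the human’s revision as a confidence-weighted average of the initial judgment and the AI’s advice. The revision rule, function name, and numbers are assumptions for illustration, not the protocol from the laboratory studies.

```python
def sequential_decision(human_initial: float,
                        ai_advice: float,
                        ai_confidence: float) -> float:
    """Three-step sequential paradigm for a binary decision.

    1. The human commits to an initial probability (human_initial).
    2. The AI presents its advice plus a calibrated confidence score.
    3. The human revises -- modeled here as weighting the AI's advice
       in proportion to its stated confidence.
    """
    return (1 - ai_confidence) * human_initial + ai_confidence * ai_advice

# Example: the human leans negative (0.35); the AI advises positive (0.80)
# with 0.70 confidence, so the revised estimate lands at ~0.665.
print(sequential_decision(human_initial=0.35, ai_advice=0.80, ai_confidence=0.70))
```

Displaying a calibrated confidence score is what lets the human decide how heavily to weigh the advice, which is exactly the calibration point made in the quote above.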

Dr Steyvers also discussed the role of AI explanations in the performance of AI-assisted decision making. Contrary to popular belief, studies have shown that AI explanations do not always improve the accuracy of decision making.

“This is very disappointing,” Dr Steyvers said. He explained that cognitive overload, the tendency to mistake explanations for confidence, and the tension between complementarity and human understanding of AI might all contribute to the limited effect of AI explanations on the accuracy of decision making.

“I think there are lots of questions about how useful it is to show these explanations. The whole point of AI-assisted decision making is that AI can help you in situations that you don’t fully understand. But that’s exactly the case where it might have trouble explaining the unexplainable, and why there’s a tension between complementarity and human understanding of the AI,” he argued.

Dr Steyvers concluded that human-AI collaboration could be very effective, but he stressed that algorithms must provide humans with effective signals so that the information is useful without being overwhelming. The use of natural language may help enhance human-AI collaboration by providing a means of more effective communication between the two, he suggested, although he noted that natural language dialogue should be used with caution.

“I’m very wary of the developments of natural language dialogue because AI can be very persuasive. Further efforts are needed to ensure that the information communicated by AI algorithms using natural language dialogue is accurate and reliable,” he said.

The research was supported by various institutions, including the Irvine Initiative in AI, Law, and Society and the National Science Foundation (NSF) Information and Intelligent Systems (IIS) Program.


SPIE Medical Imaging 2023

SPIE Medical Imaging 2023 started on Sunday, 19 February, with conference presentations kicking off the next day. The meeting in San Diego offered a great opportunity to hear the latest advances in image processing, physics, computer-aided diagnosis, perception, image-guided procedures, biomedical applications, ultrasound, informatics, radiology, and digital and computational pathology.
