What we have to learn from day one when we design these AI applications is that pathology has to come with us. We cannot just design a network as computer scientists and then go to the pathologists only when we need to validate it. The pathologist has to be with us from the start.
Interview with Dr. Hamid Tizhoosh
Founder of KIMIA Lab, University of Waterloo, Waterloo, Ontario, Canada.
BIOSKETCH Dr. Hamid R. Tizhoosh has been a Professor in the Faculty of Engineering at the University of Waterloo since 2001, where he leads the KIMIA Lab (Laboratory for Knowledge Inference in Medical Image Analysis). Before joining the University of Waterloo, he was a research associate at the Knowledge and Intelligence Systems Laboratory at the University of Toronto, where he worked on AI methods such as reinforcement learning. His research activities encompass artificial intelligence, computer vision, and medical imaging. He has developed algorithms for medical image filtering, segmentation, and search, and he has introduced the concept of “Opposition-based Learning”. Dr. Tizhoosh has received more than $6.0M in funding since 2001 for his research and commercialization activities through NSERC, OCE, FedDev, MITACS, MaRS, HTX, IRAP, ORF-RE and industry partners. He is the author of two books, 14 book chapters, more than 150 journal and conference papers, and multiple patents. He has more than three decades of industry experience and has worked with numerous companies.
Interview by Jonathon Tunstall – 21 June 2021
Published – 6 September 2021
JT – Dr. Tizhoosh, you are a career computer scientist and a specialist in image analysis. When did you come across the domain of pathology and begin applying your skills specifically to the imaging of pathology tissue samples?
HT – I started working with radiology initially. As a PhD student back in Germany, I joined a European project in 1996. This was a collaboration between multiple European universities, including Lyon, Manchester, Liverpool and Magdeburg, and the purpose was to improve radiation therapy. That was the beginning of my familiarity with medical imaging. At that time, we were using specific types of neural networks to look at live images captured during radiation therapy. Back then, the image quality was really bad. Later, when I emigrated from Germany to Canada, I continued to do radiology, although it was fully digitized by then. In some ways radiology is quite a tough field, because it doesn’t normally give you a specific diagnosis for cancer or another serious chronic disease. You need to see things after biopsy under the microscope. Eventually, I switched from radiology to pathology around 2012, by which time I had already worked for around 15 years in digital radiology. I found pathology more interesting because the nature of the data was very different. There were color images, they were huge, and they offered a lot of challenges and possibilities. I also liked the fact that pathology is basically the end of the line with respect to diagnosis. In some ways it’s the ultimate diagnosis, and that fact increases its significance. That was all very enticing for me as a computer scientist.
JT – Was there a specific event that pushed you to make the switch to pathology, maybe you came across a scanning platform or a piece of software?
HT – I had a failed start-up in radiology, I was searching for a new task, and I wanted to change fields. I was looking for new challenges, and from a conceptual perspective I chose to focus on image search. Before that I had looked at detecting and segmenting things in radiology. Then I realised that image search is an area where a lot of work had already been done, but we had failed to make it available in hospitals. Searching for patterns, morphology and similar structures is quite a challenging thing with many obvious benefits for diagnosis, for triaging diagnoses, for prognosis, for drug discovery etc., and we had not looked at it at all as a community. So, at this point, I was changing focus from detection and segmentation to search and matching, and I thought, ‘why not go to the ultimate point of diagnosis, which is pathology, and take on a task which can open up many possibilities for a better understanding of human anatomy under the microscope?’
JT – There are still hundreds of labs which are not digital. What would you say to me if I was a pathologist and I felt skeptical about the benefits of digitization? How would you describe the key benefits to me?
HT – That is a difficult discussion that has been going on for some time. I guess after Covid, it has become easier to see the benefits of going digital, but some skepticism is understandable because the foundation of the science using microscopes is almost 400 years old. Also, the microscope is a symbol of science; it is not something that you can just take away and replace with a monitor. That does not carry the same scientific stature, reputation and prestige. These are psychological objections for any scientist, but pathologists have serious concerns of their own. Can I make the same diagnosis on a monitor that I can make using the microscope? Is it reliable? Can I see everything? Can I really go to the highest magnification without changing lenses? When you move from 10x to 20x on a microscope, you can adjust the focus, but if a computer is doing that for you and patches of the image are blurry, can that affect the diagnosis?
So, there are some serious concerns beyond just the psychology of losing the microscope, but many of those concerns have already been addressed. There is virtually no reason not to accept a digital analysis nowadays. There are many clinical reports that show agreement and concordance, and that diagnoses made with a microscope and with digital pathology are virtually the same. From an administrative perspective, going digital also requires a lot of investment. Acquiring images and saving them on high-performance storage devices is one of the biggest expenses that we have to incur to go digital, and that is a huge concern, not just for the pathologist, but also for the lab director. That is a real concern, but then we also have to weigh it against the cost of glass slides and the requirement to store millions of those slides in hospital basements.
So, the barriers to entry particularly after the pandemic are not really about, ‘can I do the same diagnosis with a digital as I can do with the microscope,’ it’s more about being able to see the benefits and understanding that when you go digital, image analysis and AI now open up a lot of horizons for quantification of the results which were not available via the microscope. I feel the main obstacle now, at least for the young and middle-aged generation of pathologists, is that many hospitals are not going digital yet because of the cost.
JT – I agree with you. In my mind there are clear and now established benefits to digitization in pathology. When you speak with a pathologist and ask the question as to why they went digital, the primary use case seems to be secondary consultation. Digitization provided the ability to share images with other people. Then quickly people learned that they could carry out their MDTs digitally and then other use cases came into the mix, such as education and research. I think the use cases and the benefits around digital pathology have gradually accrued and there are strong justifications, but now I am thinking specifically about image analysis. So, let’s assume that we have digitized slides, a digital operation, digital workflow. Now image analysis becomes available as almost a second layer of technology. Why would I add that capability? What are the specific benefits of adding image analysis to my existing digital operation?
HT – There are a huge number of tasks that pathologists cannot do without image analysis, starting with simple tasks like counting cells. If you count manually, it is very tedious. Image analysis also helps with detecting things, like detecting mitotic cells or specific regions of interest. Detecting any specific disease is a tedious task when you do it under the microscope, but when you go digital and apply image analysis tools and computer vision, suddenly you can do it really fast. The pathologist gets quantifications in a very short time, applied over the entire image, not just over selected regions visible through the field of view of a microscope. For example, if you’re doing liquid aspirate biopsies, you have to find specific places on a gigantic image, and when you put it under the microscope you have to go back and forth to find regions on the slide that are suitable to distinguish cell types and to separate them from each other. After that you have to start counting them until you can say, ‘okay, I have this many of these cell types, so it must be this diagnosis.’ It’s all very tedious and, of course, a good pathologist can do targeted but random sampling. Give it to a computer, and it can do hundreds of these measurements in a fraction of the time, and then provide you with the numbers and the statistics that you need. You as a pathologist are still in the pilot seat and can make the final decision. Image analysis does the dirty work for you, the finding, counting and detection, but the human still calls the shots and remains in control as the decision maker.
JT – So we have established that there are clear benefits to image analysis as an aid for the pathologist and that we can view this currently as a synergy between man and machine. That is because, at least for the moment, pathologists are still better at some tasks and computer systems are superior at counting, finding things, enumerating and storage.
HT – Finding things is probably the biggest argument for image analysis because imagine if you are looking at a glass slide of a biopsy, a tissue sample and you are thinking to yourself, ‘I know I had a case like this before,’ and it may be a rare case. At the moment you cannot find that case. You cannot take your glass slide and go into the basement of the hospital and compare your slide with thousands or tens of thousands of other slides to see if you already had a similar case. You just can’t do that; it would take years. Matching morphology in this way is actually the same thing as consulting. It is the same as asking a colleague, ‘have you seen a case like this before?’ and he or she, as a human being, has a lot of images stored in the brain. They can say, ‘yes, I had a case like that a couple of years ago, let me pull out the file and see how we treated it.’ However, this type of matching and searching for very rare cases is something that is absolutely impossible with our microscopes and our glass slides. So, you are missing a lot of opportunities if you are not able to digitally search huge archives of medical images of past cases. These are things that humans cannot possibly find, because of the vast size of the data.
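[Editor's note: the archive search Dr. Tizhoosh describes is typically implemented as content-based image retrieval: each slide or patch is summarized as a numeric feature vector, and past cases are ranked by their distance to the query. The sketch below uses entirely synthetic vectors; the embedding step and the five-case "archive" are illustrative placeholders, not KIMIA Lab code.]

```python
import numpy as np

def nearest_cases(query_vec, archive_vecs, k=3):
    """Rank archived cases by Euclidean distance to the query embedding."""
    dists = np.linalg.norm(archive_vecs - query_vec, axis=1)
    order = np.argsort(dists)[:k]
    return list(zip(order.tolist(), dists[order].tolist()))

# Toy archive: 5 past cases, each summarized as a 4-dimensional feature vector.
rng = np.random.default_rng(0)
archive = rng.normal(size=(5, 4))
query = archive[2] + 0.01 * rng.normal(size=4)  # a case very similar to case #2

matches = nearest_cases(query, archive, k=3)
print(matches[0][0])  # index of the most similar past case
```

In a real system the feature vectors would come from a learned image encoder and the linear scan would be replaced by an approximate nearest-neighbour index, but the ranking idea is the same.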
JT – Okay, so let’s say this skeptical pathologist is now completely convinced of the advantages and benefits which can be brought about by both whole slide imaging and by the application of image analysis. Tell me something about your own work and how you apply image analysis at the KIMIA lab.
HT – KIMIA Lab, short for the Laboratory for Knowledge Inference in Medical Image Analysis, is located in the Faculty of Engineering at the University of Waterloo, and I founded the lab back in 2013. It started with three students; now we are 25 researchers, and we are working with five hospitals and three other research groups. Our operation now involves more than 50 researchers doing computational pathology of some sort. At KIMIA Lab, I decided from the beginning that we would not do supervised AI, and that was a major decision. It was also a very risky decision, but as we speak, eight years after the founding of the KIMIA Lab and around ten years after the first practical success stories of AI, it still holds. I think it was the right decision not to put the emphasis on supervised AI.
With supervised AI you train a deep network for specific tasks. In the past, we have seen that we can achieve really accurate results, and that is what has attracted a lot of researchers and a lot of companies to the field. There are many published articles now claiming that they can equal or even beat the pathologist at analysis and diagnosis. Unsupervised AI does not work like that. Often, it doesn’t need training, and it is not specific to one single task. Unsupervised AI is more general. In the AI community, we know that what we have at the moment is weak AI, and what we want to get to, down the road, is strong AI. Strong AI, like the human brain, has to be unsupervised, because we as humans learn mostly on our own. If you put us in the wilderness, in an unknown place, we will learn things as we go. We have the hardware in our brain to learn, and we do just that. Unsupervised technologies are not very popular at the moment in the mainstream of computational pathology, because their validation is more difficult and they don’t provide those impressive numbers, 98% accuracy and so on. However, we do know that unsupervised learning is the future.
In the case of supervised AI, when you train a specific network for a specific task, for example detecting mitotic cells, finding all the cells in the image, or making a binary decision, most of the time you are doing detection and segmentation. Even deep networks (the most successful branch of machine learning, and of AI) are, most of the time, only making yes-or-no decisions. Is this cancer? What is the grade of this cancer? For sure, this is very valuable, and many people are doing great work here. With supervised AI, you can certainly address a lot of small packages of problems in the pathology workflow, but this will not bring about a revolution in pathology or in medicine.
What I would say to others is, ‘AI belongs to all of us, supervised, unsupervised. You focus on this, I’ll focus on that, and let’s see where we get to; but unsupervised AI, which is a collection of techniques of clustering, grouping, matching, searching and visualization, this is the future!’ These are techniques that do not specialize. When I search, it does not matter what I’m searching for, I am simply searching for information. Is this a carcinoma or an infection? With supervised AI you may train your algorithm for Covid, but it will not work on tuberculosis or carcinoma. Of course not, because you have trained it just for Covid. It’s as if from all the billions and billions of small circuits we have in the human brain we select just one tiny circuit. That is why we call it weak AI. It is impressive in that it can report 99% accuracy, but it is very limited, because you have narrowed down the application. Of course, when you narrow down and focus on one task, you become really good at that task, but pathologists are not like that, they have a broad knowledge and experience. Pathologists understand human anatomy and they have a lot of background knowledge to support their decisions.
At KIMIA Lab we are entirely focused on searching and matching as the forefront of unsupervised AI, and we are happy that many others are doing supervised AI. That’s great, and you may need a supervised technique as a small component in an unsupervised package. With searching, matching, grouping, clustering and visualisation, you can provide techniques which give the pathologist a lot of quantification and analytics in a very user-friendly way. This supports his or her decision making, and I am convinced that it is the way to go. Unsupervised learning will provide a suggestion to the pathologist, but at the end of the day, it is the pathologist who has to sign off and say, ‘I made this diagnosis.’ We want to support the pathologist in that way.
JT – So can we see supervised learning as a series of point solutions for defined environments, whereas in the case of unsupervised learning, we can continue to enhance the capabilities of the algorithms over multiple datasets, and it is ultimately learning from its own experience of what it has been exposed to?
HT – Yes, and if you have supervised learning for a point here and a point there, you cannot generalize. We see this when we do external validation on a deep network: it collapses. You train something in your own hospital and then test it on the images of a new hospital, and it collapses; the 98% accuracy becomes 72%. What does that tell us? It tells us that supervised AI, for many reasons, is not generalized enough to give a good picture of human anatomy. There could be bias in the data from the hospital, hidden noise, irrelevant visual clues, artifacts, many reasons.
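[Editor's note: the "collapse" under external validation can be reproduced in a toy experiment. A simple classifier is fit on data from one site; the external test site's features carry a systematic offset, standing in for stain or scanner differences. The nearest-centroid model, the two-class data and all numbers below are illustrative, not from the interview.]

```python
import numpy as np

rng = np.random.default_rng(42)

def make_site(n, shift=0.0):
    """Two classes of 2-dim 'features'; `shift` mimics a site-specific bias."""
    x0 = rng.normal(loc=0.0 + shift, scale=1.0, size=(n, 2))
    x1 = rng.normal(loc=2.0 + shift, scale=1.0, size=(n, 2))
    return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

def accuracy(centroids, X, y):
    """Classify each point by its nearest class centroid and score it."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return float((d.argmin(axis=1) == y).mean())

# "Train" a nearest-centroid classifier on hospital A.
Xa, ya = make_site(500)
centroids = np.vstack([Xa[ya == c].mean(axis=0) for c in (0, 1)])

internal = accuracy(centroids, *make_site(500))             # same distribution
external = accuracy(centroids, *make_site(500, shift=1.5))  # shifted hospital B
print(internal > external)  # the shifted site scores clearly worse
```

The model has not learned anything wrong; it has simply absorbed site-specific properties of hospital A's data, which is exactly the bias-and-artifact problem described above.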
JT – Or perhaps it tells us that there is a lot of heterogeneity at the bottom end and that we haven’t yet standardised the input processes such as staining and slide preparation?
HT – Absolutely, yes, that is true, and so when you go unsupervised you cannot report 98% accuracy, because unsupervised learning can only be validated by humans. This is the nature of the “Turing Test”. Nothing has changed; Alan Turing taught us that the only way to validate AI is to let it be judged by a human. What does that tell us? It tells us that AI can never be smarter than a human being. That is what Alan Turing assumed, and justifiably so.
JT – Does that really hold true in the future? If we think about unsupervised learning, then does there come a point at which the human being can no longer understand the software or the algorithms?
HT – Difficult question. The mechanism suggests that, if we carry out unsupervised learning, what we infer may not be achievable by a human. If you run it on data from one million biopsy samples, you are doing something that no pathologist can possibly do. So yes, the sheer amount of data being processed and analyzed by some computerized methods is beyond what is achievable for a human being, but does that mean that it is smarter than a human being? I doubt it. Then we get into a discussion about what intelligence is, and then we need to discuss the “Chinese Room” that John Searle talked about.
The very nature of deep networks, the fact that they are gigantic structures with millions of parameters, violates a basic principle of computer science: ‘Occam’s Razor’, the idea that the simplest solution tends to be the right solution, which is well recognised by the computer science community. So, we should find the smallest deep network that solves the problem. However, deep networks are susceptible to complexity, because our nature is to go deep, to try to achieve amazing, surprising stuff and report 99% accuracy. Then we find that when we test with the data from an unseen hospital it collapses to 65%. That is because the AI didn’t really learn; it memorized. We know that if we have a lot of parameters, it is easy to just swallow the data. We are aware of that in the AI community; it is a well-known problem.
What we tend to forget in the AI community, however, is that AI has remained foreign to external validation as the medical community practices it. That is a huge problem for the deployment of AI solutions and is the main thing hindering us from getting AI into hospitals sooner. We have externally validated many concepts and we have now created a huge repository of capable AI techniques. We have the technology; we just need to adjust it to the very sensible requirements of the pathologist. We can do it, but we have to learn how to validate things in a medical sense, not in a computer science sense. What we have to learn from day one when we design these AI applications is that pathology has to come with us. We cannot just design a network as computer scientists and then go to the pathologists only when we need to validate it. The pathologist has to be with us from the start. We should say, ‘let’s do it this way, let’s design it this way, and let’s test it this way together.’ The next generation of pathologists has to be on board; they should study basic computer science principles in medical school.
JT – Doesn’t that mean that we are asking the pathologist to start an unknown journey? In 20 years’ time, how does this technology change the role of the pathologist? How good do these algorithms become?
HT – They will become very good. I am extremely optimistic about that, though we are seeing some setbacks, and sometimes things fall apart when validated. We do have the technology; we have just not been doing our testing in the right way. I also see that supervised AI, as impressive as it is, will not bring about a revolution in medicine. We need to go unsupervised to be able to process a large amount of data in a short time, without the tedious task of the pathologist sitting down to give us “labelled” information. The AI has to be able to go in on the raw data and figure it out. That is when the revolution will come; it will be the equivalent of inventing the microscope, and it will change everything. We are not there yet, but I’m hopeful that we have a good chance post-pandemic, in the next five to ten years.
JT – Does the pathologist then become relegated to signing off cases with all the tasks pretty much handled by computer systems?
HT – It’s a shift, like when the fax machine was invented. It is very difficult to speculate about the future applications of an invention, but we do know that any time technology comes along and helps us, we move to a higher level. So, a pathologist will be able to dedicate himself or herself to other tasks, research, going into deeper analysis etc. Maybe we will delegate common cases to a computer, and the pathologist will just sign off as correct or not. At the same time, the pathologist will still do things that the computer cannot do. It will be very different, but I am not worried that computers will replace humans.
JT – It’s an exciting future though, isn’t it?
HT – It definitely is, but because there are so many stakeholders involved, I am not worried that the robots will take over. It will happen step by step; it will happen after proper validation; it will happen after we have published thousands of white papers on the benefits, the pros and cons, the limitations and the restrictions. We have done this sort of thing many times since the industrial revolution, and nothing is different about AI. It is definitely exciting right now in 2021. Digital pathology is taking off, AI has already had a lot of success, and we are just about to leave a pandemic behind us. We have things to do, and we have the technology to do it; we just need to organise ourselves around the right tasks.
JT – I’m old enough to remember the digital revolution in the newspaper industry and the print workers at that time protesting outside the newspaper headquarters for months. Ultimately digitization either wiped out their jobs or they had to learn new skills. For the next few years in pathology, will we see that at the laboratory level, in that there will be labs that are digitized and labs that are not digitized? Will the labs using digital technology simply outcompete the traditional labs based on microscopy and destroy them?
HT – We need to think about this from the human perspective again, and at the moment I don’t have any substitute for the Turing Test in my mind. I have not read of any alternative that another smart colleague has proposed. At the moment we have to stick with Alan Turing, which means there is an upper limit to the intelligence level of AI, which is the intelligence of the human being. AI cannot be more intelligent than a human, and humans at any time will be able to assess the quality of AI decisions. That is what I am assuming, unless another genius like Alan Turing comes along and teaches us something else. Until then, I can only assume that it doesn’t matter how capable AI gets; what we will see in the next 10-20 years will be graphical and conversational AI acting as a digital assistant to the pathologist. The pathologist will go into their room and there will be a nice display. The pathologist will start talking to the display. ‘Find this patient! Open the file for me! Go to that image! Can you count this? Can you compare this with that and then bring back the result for me? Okay, so now write a report that this is lung adenocarcinoma grade whatever!’ The human brain has an exponential sphere of decision-making capability. We will not be able to match that for the foreseeable future, and it doesn’t matter how smart we make AI, it will always be subordinate to human intelligence, at least for the foreseeable future. Beyond that, your question is rather like asking me, ‘do we put stop signs on Mars?’ It is irrelevant when we haven’t been to Mars yet.
JT – Personally, I think in the future we will see shared cloud-based platforms. If a pathologist needs a screening algorithm, say for prostate screening, and wants to see all his or her Gleason grade threes and above, then they will just apply an algorithm from a cloud-based system. There will also be a lot of standardization because this type of platform will work best with standardization of image types and tissue quality. There will also be connectivity into the pathology community, so you could give instructions such as ‘I’ve got this patient here, I don’t know what this is, can you send this to Dr. so and so.’
HT – It’s interesting you say that, because as human beings, we have a shared consciousness. Our needs and deepest desires are the same, and the psychological profile of Homo sapiens is the same anywhere you go. This cloud system that you mention would be a new manifestation of human consciousness. Yes, this will happen. No one can prevent it, because it is what happens when you have shared wisdom and knowledge. When humans invented fire, it was quickly shared around the planet, and in the same way, we publish papers today, share our knowledge and influence each other. With the cloud as a new concept in shared consciousness and knowledge, no one knows in which direction it will take us, but we will go that way, and nobody can prevent it. It will definitely bring its own requirements and conditions, and it will open up new problems, so we will see things that we have not seen before. Diagnosing carcinomas will become almost trivial; it will be like suddenly being able to access the fourth dimension from our three-dimensional perception. Can we get to this point just by talking about the data? Maybe we can, but we have to come up with a more efficient way of sharing our knowledge.
JT – We are also discussing here a quite profound change to human behaviour as we have to move from a competitive scientific community to a more co-operative one. At the moment, we have laboratories competing against each other to publish, we have patents and proprietary image formats. This competitive and commercial nature in people needs to evolve as well, doesn’t it?
HT – It seems that competition is not compatible with intelligence. We are Homo sapiens, and we are not just responsible for ourselves, but for the entire planet. The nature of competition means that you separate yourself as a researcher or lab or country from others. That is a fake separation, because in fact we are deeply connected by biology and also connected socially, economically and culturally. Competition is not compatible with sharing wisdom and sharing knowledge, and maybe that is because for a long time we have focused on the quantity of intelligence and not the quality of intelligence. Competition always goes with the quantity of intelligence, but it doesn’t pay attention to the quality of intelligence. This is something beyond AI; this is something we need to accomplish in human society.
JT – Dr. Tizhoosh, we’ll leave it there. Thank you for your time today.