We can try to understand the mechanisms of our diagnosis through large-scale machine-learning projects, but a big wave is about to hit us, and it is one which will completely transform the field of digital pathology.
Interview with Professor Jeff Chuang
The Jackson Laboratory for Genomic Medicine, Farmington, Connecticut, USA
BIOSKETCH: Jeff Chuang, Ph.D. is a professor at The Jackson Laboratory for Genomic Medicine (JAX-GM) and the University of Connecticut Health Center. He leads a computational group of 12 scientists, postdocs and students that studies problems at the intersection of cancer, evolution, and machine learning. His group’s projects focus on large scale data analysis from patient tumor samples and cancer model systems, especially for patient-derived xenografts. He is currently PI for the NCI PDXNet Data Commons and Coordination Center (an NCI Cancer Moonshot program), in which he coordinates data sharing, analysis, and collaborative project development for hundreds of xenograft models in partnership with 6 U54 teams and the NCI Patient Derived Models Repository. He has published extensively in the fields of cancer genomics, intratumoral heterogeneity, gene regulation, cancer image analysis, and molecular evolution.
Interview by Jonathon Tunstall – 21 July 2021
Published – 29 September 2021
JT – Professor Chuang, you began your career as a physicist, but can you tell me something about how you came into the specific domain of digital pathology? When was that, and was there a particular event which triggered your interest?
JC – I started off as a theoretical physicist by training and then around 2001, I switched to computational biology. That didn’t lead me immediately into digital pathology, but one of the problems I’ve been working on for around a decade or so now is tumor heterogeneity. That is, in the context of trying to gain an understanding of the principal cell populations within tumors, based on information coming from groups which are looking specifically at genetic data. Many groups have attempted to look at tumors treated with a particular therapy, such as chemotherapy, radiation, or a targeted therapy, in the hope of finding a distinctive tumor cell population which can be identified. One of the big challenges of this approach is that the profiling methods are primarily based on small-scale tissue analysis. For example, you would sequence a tumor from a small fragment, maybe a millimeter across, and then you would treat the patient according to the information gleaned from the tissue analysis. After the treatment, the patient would have a recurrence or hopefully a better outcome, and you would then do a second biopsy and sequence another piece, maybe a millimeter across. One of the basic science questions that comes up over and over again is whether the small pieces we are looking at are actually representative of the entire tumor cell population.
We started looking at this question using a lot of evolutionary approaches, really thinking from the perspective of fundamental science and from a genetics point of view. How fast can a tumor evolve? How fast can cells replicate? Can we identify things like different immunological behaviors among different cell populations? Eventually, we started to think more and more that the central issue was that we really didn’t understand the spatial organization of the tissue. We always came back to this same question: is this fragment of the tumor representative of the tumor as a whole? So, about three years ago, we were thinking about this and feeling frustrated, and what my group decided to do was to look at the only source of spatial data that we knew of, and that was histopathology images.
So, I could say I came to digital pathology from an outside perspective. My group at the time had started to work with some of the deep learning approaches, especially the convolutional neural networks that had become popular in other fields. This was also a time when ImageNet had just partnered with Google and this type of technology seemed to be making huge leaps forward. So, we came up with the idea of trying out these algorithms on digital pathology data. I think a number of research groups were already thinking about this, but there was now an opportunity to access the large-scale data sets which had been collected by organizations such as the Cancer Genome Atlas and which had primarily been built from a genetic standpoint. Now, all of a sudden, people started to realize that there was a lot of histopathology data in these collections and that it was very valuable. So, we started analyzing that data, but we were really coming at it from a basic science point of view, with questions such as: what underlies resistance to treatment? My lab is unusual in the digital pathology field in that we are not pathologists, but we work with a lot of pathologists, and we have really a different point of view. We keep trying to combine the analysis we are doing on H&E with genetic data and with new kinds of genetic characterizations to get the best predictions from those data combinations.
JT – My understanding is that most tumors are proving to be highly heterogeneous and that you can sample a tumor in two different areas and find a different set of cell surface markers or even a completely different set of mutations.
JC – That is absolutely right. I think anecdotally, people have known this for a long time, and when you talk to practicing clinical oncologists, they will say, ‘of course, it looks different in this region than in this other,’ but from a genetics point of view, people have struggled with that problem because there have been limitations in how much you can profile a single tumor. Starting around a decade ago, there were a few projects set up to do large scale spatial characterizations of tumors. One of the biggest of those was in England, and they were looking at different regions of lung cancer. They profiled several hundred patients, looking at metastases from different fragments, and what they found was that there is quite a lot of variability. The questions then are, how do you characterize that variability? What is its phenotypic relevance? They found a few things which are important, such as some relevant oncogenes, but in the context of digital pathology, the variation that was most important was in immune activity. So, we not only need to think about the presence of things like lymphocytes but also the relevant changes at the genetic level, such as a loss of HLA, which seemed to vary a lot in these lung tumors. Beyond these early observations, people are still struggling with what to do with a full set of genetic data. However, I think the great thing about the pathology field when combined with the genetic information is that pathologists have so much expertise when considering whether something is actually relevant to the outcome. On the genetics side, we don’t know how to do that. We don’t have this much expertise, and so I think the combination of the two perspectives leads to a really exciting time in this field.
JT – Would it be fair to say that there is so much variation between cell populations from the same tumor that we struggle to determine the predominant cell type and so to define the standard characteristics of the tumor?
JC – Yes, every region is different. We see a similar thing in Coronavirus: we have all these variants, and so how do we decide if a variant is interesting or not? In practice, do we really need to know the mechanism of those variants? In some cases we don’t, it’s just a matter of how it affects us in terms of the severity of the disease and the rate of transmission. If we had a tool to go after all the genetic differences and specifically target those, then that would be important, but at this point we are not there yet, and we don’t know the stability of those variants and how they are going to affect the population. So this is really an empirical question. You can have those discussions without knowing anything about the mechanism, and I think that is the first step. The second step is going back to the genetics and trying to target the specific mechanisms and biological pathways. So the way I think about this is to consider that pathology has assessed these variations mostly at the level of outcome (not completely of course). In terms of the mechanism, that’s much harder to trace back. So, what I think can be done with all these digital pathology ‘packets’ is that we can go back and learn the mechanisms. One way to do that is through large scale machine learning projects, but I think a much bigger wave is about to hit us, and it is one which will completely transform the field. That is spatial profiling, through techniques such as spatial transcriptomics and protein level and metabolite level characterizations. All these things are about to happen, and it’s just a question of how much data will be generated and the costs of handling and interpreting all the data, but certainly there is a wave coming.
JT – I agree with you. I think there is a very interesting merger of ‘omics’ on the horizon and that there will be a focal point of those technologies in tumor cell analysis.
JC – The way I think about it is imagine that at the moment we only see in red, green and blue. What we will soon be able to do is to see in infrared, ultraviolet and every wavelength of the electromagnetic spectrum. We will be able to see in 50,000 colors and associate a color with every gene, every transcript, every protein and even some of the small molecules. It’s a very exciting time, and I think the nature of data analysis will change dramatically. Then there will be the question of how to merge it all with clinical practice, because you wouldn’t do that kind of profiling for every case; it’s expensive, at least in the near term.
JT – Tell me a little about how you are handling your images at the moment. Do you scan slides yourselves? Have you built your own analytics platforms?
JC – My lab is a computational biology lab, and we don’t get directly involved in the image scanning. We are fortunate to have a lot of collaborators all over the place, and one of the things that has helped is that I run the data coordination center for the NCI cancer consortium on xenografts (PDXNET, pdxnetwork.org). We are working with many hundreds of samples that our teams are generating from primary cancers and from xenografts. We characterize those and follow up with treatment studies. We are getting a lot of data from those groups at the moment, and we are leaving it up to the individual teams to do the scanning. We do work on questions around stain normalization computationally, but mostly we look at that topic at the empirical level. We want to know if we can make predictions that are robust across different groups and across slides scanned by different methodologies, as opposed to taking a fundamental approach to normalization.
JT – Your input then is the digital slide, and those come to you from many sources. You talk about normalization, but I wonder if you find that the low-level heterogeneity of slide preparation method, staining protocol etc., actually impacts the consistency of the analytics.
JC – I would say in some cases, yes, and in others, no. We had a paper published last year on this topic, and what we showed is that for a simple task that a pathologist might look at, for example, ‘are there regions of tumor tissue and regions of non-tumor tissue?’, if we trained convolutional neural networks for that task on, say, lung adenocarcinoma on frozen tissues, we could also do the same task on FFPE. So, that works even though there is quite a lot of tissue difference. We also saw that if we took slides from different sources, then we could do the same thing, and it also worked. That means there is some robustness, at least for relatively straightforward tasks. We also saw that with the same task, if we trained tumor/normal classification on lung adenocarcinoma and then tested it on breast cancer, a totally different tumor type, then we could still do very well and make a very accurate prediction. These accuracies were all around 90%, even across different types of cancer. For the question of frozen versus FFPE, the numbers were higher, which was kind of surprising to us. This was something that was almost like a geneticist’s approach to computational biology, which would be looking for commonalities across cancers, but it’s a little different to how you would look at things as a practicing pathologist.
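The cross-domain evaluation protocol described here — train a tumor/normal classifier on one tissue-preparation "domain" and test it on another — can be sketched in a toy form. In this illustration, a nearest-centroid classifier over two synthetic features stands in for the CNN, and the class centers, feature shift, and domain names are all invented for the example; none of this reflects the actual study.

```python
# Toy sketch of cross-domain evaluation: fit on one domain, test on a
# shifted domain. A nearest-centroid classifier over two synthetic
# features stands in for the CNN; all numbers are illustrative.
import random

random.seed(0)

def make_domain(shift, n=200):
    """Synthetic (feature1, feature2, label) triples; 'shift' mimics
    systematic differences such as frozen vs. FFPE preparation."""
    data = []
    for _ in range(n):
        if random.random() < 0.5:  # 'tumor' class centered near (2, 2)
            data.append((random.gauss(2, 0.5) + shift,
                         random.gauss(2, 0.5) + shift, 1))
        else:                      # 'normal' class centered near (0, 0)
            data.append((random.gauss(0, 0.5) + shift,
                         random.gauss(0, 0.5) + shift, 0))
    return data

def train_centroids(data):
    """Per-class mean of the two features."""
    cents = {}
    for label in (0, 1):
        pts = [(a, b) for a, b, l in data if l == label]
        cents[label] = (sum(p[0] for p in pts) / len(pts),
                        sum(p[1] for p in pts) / len(pts))
    return cents

def accuracy(cents, data):
    """Fraction of points assigned to the nearest class centroid."""
    correct = 0
    for a, b, label in data:
        pred = min(cents, key=lambda l: (a - cents[l][0]) ** 2
                                        + (b - cents[l][1]) ** 2)
        correct += (pred == label)
    return correct / len(data)

frozen = make_domain(shift=0.0)   # training domain
ffpe = make_domain(shift=0.3)     # shifted test domain
model = train_centroids(frozen)
acc = accuracy(model, ffpe)       # robustness across the domain shift
```

If the shift is small relative to the class separation, accuracy stays high on the unseen domain — the same empirical robustness question the interview describes, reduced to its simplest form.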
The next step is to try to look for more specific targets like particular genetic markers. There is a lot of activity in that space, and these problems may be partially mitigated by the size of the data, but they may also be affected by certain auxiliary variables. For example, if I’m looking at a genetic status like microsatellite instability, that may have associations with tumor infiltration by lymphocytes. I might pick up the tumor infiltration rather than the genetics of the cancer cell, so there are a lot of factors mixed together.
JT – I’m interested in how this all eventually fits together, because on the one hand, we could see this as a series of point solutions, that is, computational algorithms which are dedicated to a single task such as finding lung adenocarcinoma or counting mitoses etc. On the other hand, we could envisage a future of unsupervised learning where we let a single application run across our samples and tell us what is going on. That seems very sci-fi, of course, but do you think this type of unsupervised solution is ever possible?
JC – It may be possible eventually, but it is not the primary goal right now. We still have a lot of questions about the capability of these algorithms and what sorts of decisions they are effective for. There has been a lot of activity around these questions that started in the past couple of years, and that work is still ongoing. What people are trying to do is twofold. Firstly, to reproduce what has been possible for classical pathology assessments, but by using simpler data types than you might expect. For example, we and other groups are using convolutional neural networks to predict various markers for which the standard is IHC, but we would do it using only H&E data. Secondly, to tweak those tools to look for things that are slightly more advanced and more difficult to determine. Questions like, can I identify the likelihood of treatment response from an H&E image, which is not directly measurable from IHC? These types of questions are related to stain normalization and staining protocols, the scanner technician, etc. So I agree with you that supervised is better right now, but eventually we will go into an era of unsupervised learning, once we have a handle on the robustness of the methods and their stability to different staining protocols. I think that the reason that we will definitely go into that space is that we will see the marriage of the traditional pathology IHC image with these more open profiling techniques such as spatial transcriptomics. On the spatial transcriptomics side, most of the action there has been by geneticists, because geneticists are very used to these unsupervised problems where they are completely in discovery mode, trying to understand the underlying mechanisms.
We will likely see a lot of completely open research in the near term from the spatial transcriptomics side, what we can call ‘unsupervised research.’ Then, maybe very soon after that, we will see many more collaborations between pathologists and geneticists and eventually those fields will merge. We will then have to think how we classify those different disciplines but that is when I think unsupervised learning will be more in what we consider the domain of pathologists.
JT – This is interesting because only around 15 or so years ago, people were sounding the death knell of histological analysis in favour of molecular genetics and molecular profiling, and I think there has been a realization in recent years that the two-dimensional spatial information afforded by histology is a critical component of the analysis. Also, as you say, there are all these other domains which are now converging on cancer biology and cancer profiling, and each one is adding a new piece to the complex picture of cancers at both the tissue and the molecular level. Thinking about the convergence of these technologies through a future unsupervised AI driven interface, Alan Turing said that AI can never be smarter than the human, as the human has to be the validator. Do you think that will remain true? Could we reach a point where AI becomes a complete black box which is operating in a way which is unknown and maybe even unknowable to us?
JC – This is an important issue because it gets to the heart of how we should use AI. In my conversations with people who have been trained as pathologists, this topic comes up very frequently, because pathologists have this critical responsibility that we as scientists don’t have, and that is, that they need to make a decision and that decision will affect a patient. I am often humbled by that notion, because the algorithm has to be good and needs to help us make a critical decision which will affect someone dramatically. So, how does this relate to this question about AI? I think that for AI designed for specific tasks, the question is, can the AI make a better decision than a human? Well, the AI has to be trained by people, so ultimately it will not make a better decision than a single person would have made in a particular instance. However, computationally we can integrate the data, so this will lead to less uncertainty for the AI. It will continue to have further training, and so I expect it will continue to perform better, but will the AI have a better conceptual ability to discover things? I think that comes back to your question about unsupervised analysis. That is very hard to say. We are currently in a space where we have a lot of uncertainties regarding the quality of the data. I think the fact that neural networks are black boxes is a major problem, because we don’t have a way to tell when we are getting closer to the answer, except from an empirical perspective. I think for the moment, empirical evaluation is what people are going to be doing. We do the same thing in my lab, but we are also exploring the possibility of decomposing images into interpretable units. You can think of it like traditional signal processing, such as Fourier decomposition or wavelet decomposition, and mostly people have not used that approach in pathology. This has been because such decompositions ignore what we think of as the natural units in a tissue, namely cells.
There is resistance to the idea of a signal processing-based decomposition because it may not be interpretable in the way we can interpret what a cell is doing. Now, what we have been seeing is that it is possible to decompose images into interpretable units, much more so than we would have thought, using these CNNs, and we are finding that out empirically. I guess I can’t truly answer your question, but I think it is a big question which impacts all fields of AI.
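The wavelet decomposition mentioned above can be illustrated with the simplest member of the family. One level of a 1-D Haar transform splits a signal into coarse averages and detail coefficients, from which the original is reconstructed exactly. This is a generic textbook sketch of the signal-processing idea, not the specific decomposition used by any particular lab.

```python
# Minimal Haar wavelet step: split a signal into pairwise averages and
# differences (details), then reconstruct it exactly. Purely illustrative.

def haar_forward(signal):
    """One level of the (unnormalized) Haar transform on an even-length signal."""
    averages = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    details = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return averages, details

def haar_inverse(averages, details):
    """Exact reconstruction from averages and details."""
    signal = []
    for a, d in zip(averages, details):
        signal.extend([a + d, a - d])
    return signal

row = [9, 7, 3, 5, 6, 10, 2, 6]        # e.g., one row of pixel intensities
avgs, dets = haar_forward(row)          # coarse structure vs. local contrast
assert haar_inverse(avgs, dets) == row  # lossless round trip
```

The averages capture coarse structure and the details capture local contrast, which is exactly why such bases are interpretable to a signal processor but, as the interview notes, do not line up with the biologically natural unit of a cell.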
JT – I’m reminded of the fact that whenever we give a pathologist an algorithm they will say, what is it doing? And specifically what features is it looking at? Is it looking at the same things I am looking at? And the answer is often that we don’t know. There are many instances where algorithms are using feature sets that we wouldn’t necessarily expect them to use, so doesn’t the notion of explainable AI become critically important in the future?
JC – Yes, I think for any AI activity where you are making clinical or economic decisions, you have to have some explainability because of the need to have generalizable models of risk. If your algorithm is completely a black box, then it becomes complex from a legal perspective. That’s why we are working on the explainable aspects of the science by relating the deep learning features from, say, image data, to something which geneticists would consider interpretable, such as gene expression data. Interpretable associations with patient outcome are more complex. However, I think we will improve in that space also, because there are fundamental genetic mechanisms we will eventually be able to identify.
JT – It’s hard to see the FDA signing off on an algorithm if we are unable to explain what it does. Personally, I think we will see a series of supervised learning point solutions which are brought up to the clinical level through an LDT process and approved for use only in the single lab in which they have been validated. Surely it’s difficult to envisage general regulatory approvals for widespread use of multiple diagnostic algorithms any time soon?
JC – We will have a lot of progress in the basic science very soon because of the technologies of spatial transcriptomics and imaging mass cytometry and other protein profiling techniques. These are advancing very quickly, so in terms of explaining genetic mechanisms using AI, these technologies will put the algorithms to the test and are likely to soon provide much better explanations. However, for clinical outcome predictions of, for example, ‘how is this patient going to respond to this drug’ and making a decision only from an H&E — that has a number of hurdles, because the explanations are more complex and there is also greater need for those explanations to be understandable by doctors and patients.
JT – What happens to the pathologist in all of this? Do we still need pathologists in the future?
JC – I think it is a tremendously exciting time to be a pathologist, but change is inevitable; that is true in all fields. I think the scope of the questions that a pathologist will consider in the future will be broadened, and there will be an opportunity for the pathologist to be more involved in the basic research. However, that does in itself create some pressure to do things in a slightly different way. On balance, I think the new technologies will provide both challenges and also great opportunities for pathologists.
JT – It’s rather like flying an aircraft by computers, isn’t it? We all still want to see a human pilot sitting up front.
JC – I think that certainly the overall responsibility will continue to be that of the pathologist, in the same way that the central responsibility for any patient treatment is that of the doctor. However, I think for pathologists, they are in this unique space, because even though they are making clinical decisions, what they do is actually quite close to basic research. Therefore, there are many opportunities; the research side is exploding at the moment, and so there will be lots of ways for pathologists to do new and exciting things. Maybe for them it is not just flying the plane but actually going to new planets.
JT – Professor Chuang, we’ll leave it there. Thank you for your time today.