It’s quite possible that in the future we will be able to use whole block imaging in the same way that we use whole slide imaging today.
Interview with Dr. Yukako Yagi
Director of Pathology Digital Imaging, Department of Pathology at Memorial Sloan Kettering Cancer Center.
BIOSKETCH: Dr. Yukako Yagi started her career as an electrical engineer at Nikon, Japan, developing robotic microscopes and their applications, and received her Ph.D. in 2007 from Tokyo Medical University. She was later Director of Technologies Management at the University of Pittsburgh Medical Center, where she developed and implemented telepathology systems for organ transplantation, frozen sections and second-opinion consultations, and also established clinical and technical standards for whole slide imaging systems. Dr. Yagi came to Massachusetts General Hospital, Boston, in 2007 and became Director of the MGH Pathology Imaging and Communication Technology Center. She later took up an Assistant Professor of Pathology role at Harvard Medical School and became the first President of the International Academy of Digital Pathology. Dr. Yagi is now Director of Pathology Digital Imaging, Department of Pathology at Memorial Sloan Kettering Cancer Center.
Interview by Jonathon Tunstall – 28 June 2021
Published – 13 September 2021
JT – Dr. Yagi, I know that you have a long history in digital pathology. What year would you say that you first started working in this field?
YY – It’s hard to say. I came to the United States in 1995, and before that I was already doing many types of telepathology and telemedicine in Japan (static, dynamic, and hybrid, with and without remote control, over phone line, satellite, and NTSC, HDTV and Super HDTV). This was all through a collaboration between Nikon and the National Cancer Center.
I then had an opportunity to join the radiology department at Georgetown University Medical Center. They were looking for someone with experience, as no one there worked with pathology at that time. I was at GUMC for two years, but at that time we only had a single-frame camera on a regular microscope, a commercial off-the-shelf system accessed via a web application (straightforward, so anyone could use it). I was using a film scanner to make a low-power WSI as a sample. I quickly learned that being able to view the entire slide was very important. A pathologist works from a whole slide perspective (even before putting the slide on the microscope), and I wanted to create that same capability in a simple, straightforward way.
In 1997, I moved to the University of Pittsburgh and joined the Division of Pathology Informatics. I was then making virtual slides using a robotic microscope-based imaging system, but the pathologists were not so interested initially. They didn’t see the benefits as, at that time, it took several hours to make one virtual slide. However, once I showed them complete whole slide images made at 20X through tiling and stitching, they really liked the concept and realized the potential of having a scanner.
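(For readers who want a concrete picture of the tiling-and-stitching step Dr. Yagi describes, here is a minimal illustrative sketch in Python. The tile size, grid layout and file names are assumptions for the example, not details of the Pittsburgh system; real pipelines also correct for stage error, overlap and illumination, and write pyramidal formats.)

```python
# Minimal sketch: assemble a grid of microscope field-of-view tiles into one
# mosaic image. Assumes tiles are pre-aligned on a regular grid and named
# tile_{row}_{col}.png (illustrative names, not any vendor's actual format).
from PIL import Image

TILE_W, TILE_H = 1024, 1024   # pixel size of each captured field of view
ROWS, COLS = 4, 6             # small illustrative grid; real slides need thousands of tiles

def stitch_tiles(tile_dir: str) -> Image.Image:
    """Paste each tile at its grid position to build the full mosaic."""
    mosaic = Image.new("RGB", (COLS * TILE_W, ROWS * TILE_H))
    for row in range(ROWS):
        for col in range(COLS):
            tile = Image.open(f"{tile_dir}/tile_{row}_{col}.png")
            mosaic.paste(tile, (col * TILE_W, row * TILE_H))
    return mosaic

if __name__ == "__main__":
    wsi = stitch_tiles("scanned_tiles")
    wsi.save("stitched_region.tiff")  # production scanners use tiled pyramidal formats instead
```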
JT – Did you build this initial scanner system yourself or was it purchased? You must have needed a software component to handle the stitching of the tiled images.
YY – Initially we were working partially with Nikon and Olympus. I had experience with stitching together the image tiles, as that is what I had been doing in Japan. In Pittsburgh, UPMC formed a company around 1998. That was InterScope Technologies, and they focused on developing a scanner. By 2000, the images were good quality, and the scanner was very fast. I would even say that the software interface and the database were much better than many current systems, but unfortunately that company no longer exists. I was a consultant rather than an employee of InterScope, and when we received an Air Force grant to support Air Force hospitals around the world using whole slide imaging and telepathology, I had to make a decision. I had to pick a scanner, and at that time, Aperio had just started. I visited San Diego and met the CEO and their engineers. They didn’t have a product yet, but I believed in their vision and felt that this was the right choice of scanner for the project.
JT – So you bought the Aperio scanner without seeing the product, just from some pictures? You must have had great belief in them?
YY – Well, we were able to take a prototype back to the engineers in Pittsburgh, who tested it thoroughly. The project was starting the next year and would continue for five years. Although it was a difficult decision, I decided that the Aperio scanner was the most suitable for the project, as Aperio had already accomplished some of their promised developments. For example, they had already adapted static image telepathology for organ transplantation to support real-time frozen section work. Also, we were able to modify their scanner for our general pathology work. At this time, WSI was more of an ongoing research project than a real clinical application, and so it was important that we could adapt the scanner for our own work.
JT – So what was the initial use case in those days?
YY – We were using WSI mainly for education and testing of medical students and for validation work. With the Air Force project, of course, we were also supporting the Air Force bases.
JT – And were you able to manage MDTs using the scanner at that time?
YY – Yes, that too.
JT – And were there other early use cases?
YY – Even before I went into whole slide imaging, I was working on telepathology for organ transplants between UPMC and a new UPMC transplant center in Italy. In the first year, all frozen sections were consultations from a pathologist in Italy to pathologists in Pittsburgh, handled in real time using a static telepathology system we had developed. After I left, I heard that this system moved first to a robotic system and then to a whole slide scanner. Also, we were using a robotic microscope for telepathology consultations between rural areas in Pennsylvania and Pittsburgh. At that time, we were using the Nikon Coolscope, which was a very simple robotic system, for remote diagnosis between pathologist and non-pathologist. That was a very popular system at the time, but it was later discontinued by Nikon.
JT – After your experiences in Pittsburgh, I believe you moved to Harvard Medical School?
YY – Yes, I was in Pittsburgh for nine years and then, after I finished my PhD, I moved to MGH and continued working with whole slide imaging systems. I worked with the pathology informatics division director at MGH. We started a pathology informatics program. At first, when I started work there, we didn’t have a budget to buy a scanner. We were doing spectral imaging, using basic H&E, but rendering the images in different spectra. We worked with a few companies including DMetrix, 3D Histech, Hamamatsu and Philips who all loaned us equipment for our research.
JT – Did you struggle with connectivity and networking issues in those early days?
YY – Yes, of course. Most people were using 40x, and at that magnification there were a lot of latency issues. So, we switched to 20x, which is fine for tissue imaging unless you are doing detailed work such as counting mitoses. 20x worked out fine for most of our work.
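(A rough back-of-envelope calculation shows why magnification mattered so much for early networks. Assuming the common approximations of about 0.25 µm/pixel at 40x and 0.5 µm/pixel at 20x, and an uncompressed RGB image, a 40x scan is roughly four times the data of a 20x scan of the same tissue; the tissue dimensions below are illustrative.)

```python
# Back-of-envelope: uncompressed size of a 20 mm x 15 mm tissue area scanned
# at 40x (~0.25 um/pixel) versus 20x (~0.5 um/pixel), 3 bytes per RGB pixel.
# These resolutions are common approximations, not any specific scanner's spec.
def uncompressed_gigabytes(width_mm, height_mm, um_per_pixel):
    px_w = width_mm * 1000 / um_per_pixel
    px_h = height_mm * 1000 / um_per_pixel
    return px_w * px_h * 3 / 1e9

print("40x:", round(uncompressed_gigabytes(20, 15, 0.25), 1), "GB")  # ~14.4 GB
print("20x:", round(uncompressed_gigabytes(20, 15, 0.5), 1), "GB")   # ~3.6 GB
```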
JT – There are a lot of scanners on the market now. Have you seen a lot of changes in specifications and capabilities?
YY – I don’t think there have really been big technical changes in the last 10 years. I know that the vendors have been working hard on improvements, but as I see it, a lot of that energy has gone into seeking FDA approvals and increasing slide capacity. Certainly, compared to the previous 10 years, the rate of technical improvement seems to have slowed considerably. I have seen very few innovations although the speed of scanning has improved considerably.
JT – Well, I guess then that you see technical change on the clinically validated side of the market but not in the instruments designed and sold solely for research purposes?
YY – Yes, that is true, but I am not interested in FDA approved instruments as they are not necessarily stable in my environment. For clinical use we perform our own internal validation to establish an LDT status, then we can use an instrument in our own lab for clinical purposes. We also have to remember that the scanner is only one component of the clinical workflow, and we are not concerned with developing a product, only validating its use in our clinical process. Some vendors might be able to help or collaborate with us for actual clinical implementation and as such move towards FDA approval for their product.
JT – If you are using multiple instruments across your network, do you have any issues with image file formats and the ability to open and share the images from different scanners?
YY – There are many file formats, including DICOM as a potential standard, but I respect the fact that each vendor has worked hard and put a lot of thought into their own file formats. For this reason, I often prefer to do some development work with the vendors’ SDKs.
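(As a practical illustration of working across vendor formats, the open-source OpenSlide library reads many proprietary WSI formats, such as Aperio SVS, Hamamatsu NDPI and 3DHISTECH MRXS, through one interface. The sketch below is illustrative; the file name is an assumption, and vendor SDKs as Dr. Yagi describes expose additional, format-specific capabilities.)

```python
# Sketch: open a vendor whole slide image and read a region at a chosen
# pyramid level using OpenSlide (https://openslide.org). The file path is
# illustrative; OpenSlide detects the vendor format automatically.
import openslide

slide = openslide.OpenSlide("example_case.svs")

print("Vendor:", slide.properties.get("openslide.vendor"))
print("Dimensions at level 0:", slide.dimensions)
print("Pyramid levels:", slide.level_count)

# Read a 1024 x 1024 region at level 0; the location is always given in
# level-0 pixel coordinates regardless of the level requested.
region = slide.read_region((10000, 10000), 0, (1024, 1024)).convert("RGB")
region.save("region.png")
slide.close()
```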
JT – You say that there haven’t been major technical advances in the past 10 years or so, but I imagine your use cases have changed. What are your main uses of digital pathology currently compared to the early days?
YY – Speed has certainly improved, but most scanners are still using standard 20x and 40x with a relatively low NA. One scanner from 3D Histech now uses water immersion to achieve a 1.3 NA, and the quality is really good. We used to use lower NAs, around 0.5, which gives a longer depth of focus, but image quality has become the most important issue, and the 1.3 NA gives us an image which is as close as possible to what we see under the microscope.
Regarding use cases, when I was at MGH, I started doing 3D imaging. I was using hundreds of slides, scanning them all, then reconstructing in 3D. We even had some special equipment to section slides which meant we were able to get very high-quality reconstructions. We actually found some new things using 3D imaging, but it was so expensive. You can imagine if you have 200 slides and, in some cases, we also needed to do IHC and fluorescence on the same slide, so the cost is quite prohibitive for any routine use.
Due to these financial limitations and the time required, I had to plan another way, and I discovered Micro-CT (computed tomography). So right now, I am combining the Micro-CT image with the whole slide image. With Micro-CT we can find vascular invasion, or different lymph nodes which are metastatic, which pathologists cannot find on the clinical slide. At the moment, I am focusing on moving this approach to a clinical application.
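(The serial-section 3D reconstruction Dr. Yagi describes hinges on aligning consecutive sections before stacking them into a volume. The sketch below shows one simple approach, translation-only alignment by phase correlation with scikit-image; the file names are illustrative, and real 3D histology pipelines also correct rotation, flipping and tissue deformation.)

```python
# Sketch: stack downsampled serial-section images into a 3D volume, aligning
# each section to the previous one with phase correlation (translation only).
import numpy as np
from skimage import io, registration
from scipy import ndimage

def build_volume(section_paths):
    """Return a (num_sections, H, W) aligned grayscale volume."""
    sections = [io.imread(p, as_gray=True) for p in section_paths]
    volume = [sections[0]]
    for current in sections[1:]:
        # Estimate the (row, col) shift that best aligns this section
        # to the previously aligned one, then apply it.
        shift, _, _ = registration.phase_cross_correlation(volume[-1], current)
        volume.append(ndimage.shift(current, shift))
    return np.stack(volume)

# Illustrative file names for a few consecutive H&E sections of equal size.
vol = build_volume([f"section_{i:03d}.png" for i in range(5)])
print(vol.shape)
```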
JT – Are you doing any work on tumor boundaries to render them in 3D or via the Micro-CT approach?
YY – Yes, we are creating 3D histology using Micro-CT on H&E and using the whole block. With Micro-CT, we can scan the whole block or the whole case, like an entire organ. So, sometimes, depending on the case, we scan the whole tissue.
I was also working with the company MicroDimensions for a while to help develop their WSI-based 3D software. That company doesn’t exist anymore, but we have access to the original application, and we managed to get some rights to continue using it.
JT – Let me move you on to talk about your current use of image analysis. Are you using image analysis for diagnostic purposes?
YY – So, remember that I am a user and also a developer. My group develops AI and image analysis applications, and at the same time we look at the market to see if there is an existing application that we can purchase. I prefer to purchase, but if there is no existing application, we develop it ourselves. One application we are working on is for when we first evaluate the tissue in H&E and then we order IHC and then perhaps also FISH, depending on the score of IHC. In this case we have to count signals per nucleus (around 20-40 for H&E and around 50-100 for FISH), and that’s a hard job by eye under the microscope. The hospital also wants to switch from FISH to CISH, and so we are developing applications that a pathologist can use for counting signals in a nucleus under the different staining conditions. The region selected for counting is also important, as in some cases, we may want to count a simple, straightforward area, but in other cases, we may need to count nuclei in a complex invasive region. So, we are developing AI algorithms for breast cancer to first select the invasive areas in the H&E, then we analyse the same area in IHC, CISH and FISH. This is an application that we are validating, and we have already done around 100 cases. We can then integrate that application into our clinical workflow so that it can be used by everyone who has access to whole slide images. They can then access cases as usual through the LIS and we can connect the application into their viewer.
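(At its core, the signal-counting task Dr. Yagi describes is a segmentation and connected-components problem: find nuclei, find signals, and count the signals falling inside each nucleus. The sketch below, using scikit-image, is only a simplified illustration; the two-channel layout and Otsu thresholds are assumptions, not the MSK group’s actual algorithm.)

```python
# Sketch: count bright ISH-like signals per segmented nucleus in a two-channel
# image (nuclei_channel = nuclear stain, signal_channel = ISH signal).
# Thresholds and channel layout are illustrative assumptions only.
import numpy as np
from skimage import filters, measure

def signals_per_nucleus(nuclei_channel: np.ndarray, signal_channel: np.ndarray):
    """Return {nucleus_label: number of distinct signal spots overlapping it}."""
    nuclei_mask = nuclei_channel > filters.threshold_otsu(nuclei_channel)
    nuclei_labels = measure.label(nuclei_mask)

    signal_mask = signal_channel > filters.threshold_otsu(signal_channel)
    signal_labels = measure.label(signal_mask)

    counts = {}
    for nucleus in measure.regionprops(nuclei_labels):
        region_mask = nuclei_labels == nucleus.label
        # Distinct signal components whose pixels fall inside this nucleus.
        overlapping = np.unique(signal_labels[region_mask])
        counts[nucleus.label] = int(np.count_nonzero(overlapping))
    return counts
```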
JT – So, it’s a multi-modal approach which you can validate as an application and then move to the clinical side to assist clinical evaluation?
YY – Yes, during validation, until the point it becomes clinical, we are using the study ID to analyse the test cases. Once we are ready to use it clinically, we move to an LIS (or test LIS) case number and integrate into the viewer. It still then takes time to implement into the clinical workflow and to have all pathologists using the application.
JT – So you can put together an integrated system in increments according to your needs?
YY – Yes, we can build it gradually, with our own applications working alongside other applications which have been purchased from a vendor. For example, we are working with one vendor on the research side at the moment. As they have a cloud-based system, it has been difficult to get hold of the annotations. However, it looks like they will be able to supply those annotations as XML files so that we can then integrate them with our own applications.
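(For the XML annotation hand-off mentioned here, a minimal sketch of reading exported annotations into plain Python coordinates might look like the following. The Annotation/Region/Vertex tag names are hypothetical; each vendor defines its own schema, so the element paths would need to be adapted to the actual export.)

```python
# Sketch: load region annotations from an exported XML file into Python
# polygons. The <Region>/<Vertex> tag and attribute names are hypothetical.
import xml.etree.ElementTree as ET

def load_annotations(xml_path: str):
    """Return a list of (label, [(x, y), ...]) polygons."""
    root = ET.parse(xml_path).getroot()
    polygons = []
    for region in root.iter("Region"):
        label = region.get("Text", "unlabeled")
        vertices = [
            (float(v.get("X")), float(v.get("Y")))
            for v in region.iter("Vertex")
        ]
        polygons.append((label, vertices))
    return polygons

# Illustrative usage with an assumed export file name.
for label, poly in load_annotations("case_annotations.xml"):
    print(label, len(poly), "vertices")
```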
JT – So if we look forward 10 years or 20 years, do you think that these types of algorithms combined with AI will take over a lot of the roles of the traditional pathologist?
YY – I think this is more likely for scoring and grading rather than actual decision-making. Computational pathology can, for example, help to find a slide with cancer among many biopsy slides of one case. It saves a lot of time and effort for the pathologist.
JT – You can imagine a situation where you have a lot of prostate samples, and you say to the computer, ‘Please show me all cases of a particular Gleason grade and above.’ That seems very possible and reduces the workload, doesn’t it?
YY – Yes, so in prostate that would probably be useful. If you found cancer you could move to the next case, but if the algorithm does not detect cancer, then you have to check for false negatives. We have to define what is the acceptable rate of false negatives.
JT – Some people envisage a future where the pathologist will just sit signing off cases and the cases themselves are completely handled by a computer system. Personally, I think we will see more of a synergy between pathologist and machine. It’s fair to say, though, isn’t it, that the pathologist is superior at some tasks and the machine is superior at others?
YY – Classifications will change, guidelines and scoring systems will change, and we will all have to follow the guidelines. Due to regulatory requirements, AI will not be able to keep up with these changes, particularly when you consider that they are evolving across multiple countries simultaneously. So, in a world where we can keep the same diagnostic rules, AI can continue to improve and to take over specific tasks. AI may even suggest new rules to us, as it can discover features which the pathologist cannot detect by eye. If at some point AI becomes a part of the classification process, then we can all move forward together using AI.
It is similar to the way we have developed 3D imaging and how we want to integrate that into the clinical work. When we started, I was spending four hours to produce a single virtual slide, but right now we have that down to 20 seconds. It’s quite possible that in the future we will be able to use whole block imaging in the same way we use whole slide imaging today. As an example, we are developing AI for whole block imaging and whole tissue imaging to discover how many vessels or how many lymph nodes are present, and the spatial relationship between the tumor and the vessels. So, if we can scan the block quickly enough, when the pathologist signs on, they will already have the tumor location and know exactly where the lymph nodes are. We could also use this method to map vascular invasion, so if we find invasion of a different grade, we could quickly give a different diagnosis and order different treatment. Collaboration with radiology is also very useful here.
JT – Well, that is an interesting point, because pathology imaging and radiological imaging exist as two separate fields at the moment, and for a long time people have spoken about a merger of these two disciplines for cancer detection and diagnosis.
YY – In truth, I don’t work with many radiologists myself, but I think that is due more to the physical separation of our departments (labs). I work with endoscopists, and we evaluate the tissue and publish papers together.
JT – It sounds as though you are in some ways pushing the boundaries in your work; you are at the forefront of the field and developing new diagnostic techniques. It’s interesting to think that at the same time, there are still many labs with no digital pathology at all. I wonder if in the future, as younger people come into this science, they will want to work in the traditional or the digital labs. Do you think that we will see two parallel paths, two classes of laboratory, or will the digital labs simply outcompete the traditional labs?
YY – I think to get to that point, we need even better scanners. Image quality from digital slide scanners is very good, but if you compare with the microscope, there is still a big difference. I want young researchers and pathologists to see the microscope before using a digital image. Though it is not always the case, when we see a digital image, we can normally guess how it looks under the microscope. That is why we are happy to use most current scanners.
JT – And digital also requires a certain standard of input preparation. Staining, mounting, sectioning, that’s all very important for digital pathology, isn’t it?
YY – I am working on this kind of thing too: how to section and how to ensure that we have a high standard of staining quality on the cases we are receiving from surgery. This is something we have had to focus on with the 3D imaging. If one section is flipped, it’s very hard to make a 3D reconstruction, and so the quality of sectioning becomes critical. There are many new technologies for standardization of color, scoring, etc. There are many nice new applications around, and it’s important to connect all the pieces together to make it all work effectively.
JT – So it sounds like you don’t see algorithms taking over the world?
YY – I think computer systems will have a bigger and bigger role. Certainly AI can help us, but there will always be plenty of things that the pathologist needs to do. In any case, we just have to make sure that it is a future of improved patient care; that is the critical point to always remember.
JT – Dr. Yagi, we’ll leave it there. Thank you for your time today.