by Christos Evangelou, MSc, PhD – Medical Writer and Editor
SAN DIEGO, California – In a new study, researchers at the University of Manchester, UK, developed a transformer-based computational approach to facilitate the translation of hematoxylin and eosin (H&E) stains to multiplexed immunohistochemistry (IHC) stains. The researchers compared three deep learning models and found that the TransUNet model provided superior performance in translating H&E to multiplexed IHC stains when applied to a translation dataset. The study findings were presented at SPIE Medical Imaging 2023, which took place in San Diego on February 19–23.
Histological analysis of surgical or biopsy tissue sections plays a key role in cancer diagnosis. H&E and IHC are the most commonly used stains for the pathological examination of tissues and provide complementary information. H&E staining provides information on the spatial distribution of cells, whereas multiplexed IHC provides information on the tumor microenvironment.
To overcome the high costs and complex tissue preparation required for multiplexed IHC, the researchers developed a deep learning-based computational approach for translating H&E-stained input images into multiplexed IHC-stained images of the same tissue section.
“We established a translation dataset that contained pixel-wise registered H&E and multiplexed IHC staining images. After image-block matching, image cropping into 1735 patches of 512 × 512 pixels, and color normalization, we trained deep learning models to translate H&E inputs into their corresponding multiplexed IHC image versions,” explained Chang Bian, PhD researcher at the University of Manchester.
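The cropping step described above can be sketched in a few lines. This is a simplified illustration, not the team's actual preprocessing code: the function name and the non-overlapping tiling strategy are assumptions, and the study's image-block matching and color normalization steps are omitted.

```python
import numpy as np

def crop_into_patches(image, patch_size=512):
    """Crop a registered image array of shape (H, W, C) into
    non-overlapping patch_size x patch_size tiles, discarding
    incomplete border regions. A minimal sketch only; the study's
    exact tiling and registration procedure is not specified here."""
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return patches

# A synthetic 1024 x 1536 RGB image yields 2 x 3 = 6 patches.
img = np.zeros((1024, 1536, 3), dtype=np.uint8)
print(len(crop_into_patches(img)))  # 6
```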
Using the U-Net and pix2pix models as baseline methods, researchers evaluated the translation performance of TransUNet. Structure similarity index (SSIM) and Pearson correlation were used as performance metrics. They also compared the impact of different losses on the performance of the models.
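For readers unfamiliar with the two metrics, the sketch below shows how Pearson correlation and a simplified SSIM can be computed between a translated image and its ground truth. These are illustrative implementations only: standard SSIM (Wang et al., 2004) averages the statistic over local sliding windows, whereas this version computes it once over the whole image, and the study's actual evaluation code is not shown here.

```python
import numpy as np

def pearson_corr(a, b):
    """Pearson correlation between two image arrays (flattened)."""
    a = a.ravel().astype(float) - a.mean()
    b = b.ravel().astype(float) - b.mean()
    return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def global_ssim(a, b, data_range=255.0):
    """Simplified single-window SSIM. Standard SSIM averages this
    statistic over local windows; this global variant is for
    illustration only."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float((2 * mu_a * mu_b + c1) * (2 * cov + c2)
                 / ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)))

# Identical images score 1.0 on both metrics.
x = np.random.default_rng(0).uniform(0, 255, (64, 64))
print(pearson_corr(x, x), global_ssim(x, x))
```

In practice, windowed SSIM (e.g. `skimage.metrics.structural_similarity`) is the usual choice, since it is sensitive to local structural differences that a single global statistic can miss.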
The study confirmed the feasibility of using deep learning-based algorithms to translate H&E-stained images into multiplexed IHC-stained images. However, U-Net and pix2pix showed poor performance in translating H&E image patches where the cells were densely clustered together. The team hypothesized that introducing a transformer architecture into the network structure could overcome these limitations of the transformer-free models.
Indeed, they found that the TransUNet model was superior to the U-Net and pix2pix models in terms of translation performance, providing SSIM scores of 0.862 for L1 loss and 0.805 for L2 loss. Pearson correlation scores across various IHC markers were also higher with TransUNet than with U-Net and pix2pix when L1-L2 combined loss results were taken into account.
“Based on the fact that the transformer architecture improves the translation performance, we intend to develop a multiscale transformer-based network structure for stain translation tasks, because we believe that images of different magnifications may contain different levels of information,” Bian noted. The team also plans to validate the performance of the transformer-based algorithm using a clinical dataset from patients with colon cancer.
SPIE Medical Imaging 2023
SPIE Medical Imaging 2023 started on Sunday, 19 February with conference presentations kicking off the next day. The meeting in San Diego offered a great opportunity to hear the latest advances in image processing, physics, computer-aided diagnosis, perception, image-guided procedures, biomedical applications, ultrasound, informatics, radiology, and digital and computational pathology.