Researchers from the Department of Pathology, Memorial Sloan Kettering Cancer Center (New York, USA) and Weill Cornell Graduate School of Medical Sciences (New York, USA) recently developed an innovative deep learning-based method for multi-class breast cancer image segmentation. This new deep multi-magnification network outperforms other single- and multi-magnification-based tissue segmentation methods and is likely to improve breast cancer diagnosis.
This work paves the way for the use of deep learning-based models to enhance pathology workflows and increase diagnostic accuracy for the detection of malignant breast lesions. The study was published in the journal Computerized Medical Imaging and Graphics in March 2021.
New trends in pathological diagnosis of breast cancer
Pathology plays a critical role in the diagnosis of different subtypes of breast cancer. By examining morphological tissue features, tumor growth patterns, and cytologic features, pathologists can determine whether excised breast tissues contain malignant cells or not.1
Additionally, pathological evaluation of specimens surgically removed from patients with breast cancer can help determine the completeness of tumor resection based on surgical excision margins. In turn, complete or incomplete tumor resection can influence post-surgical disease management to prevent local tumor recurrence.1
Traditional pathology methods to diagnose breast cancer entail manual observation of formalin-fixed, hematoxylin and eosin (H&E)-stained tissue slides under a microscope. However, tissue processing and staining are time-consuming and laborious. Additionally, diagnosis based on manual observation of stained tissues is subjective and lacks standardization.2
Digital technologies, in particular whole slide imaging (WSI), can enhance the efficiency of pathological diagnosis based on H&E staining of tissues. The use of deep learning-based computational approaches to analyze whole slide images can automate digital pathology pipelines to further improve the accuracy and reliability of diagnosis.3
In particular, semantic segmentation—also known as pixel-wise classification—of whole slide images is a deep learning-based computational method that can assist pathological diagnosis by segmenting histologic structures in digital slides and providing information on the location, size, and shape of objects in tissue images.4
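As a minimal illustration of pixel-wise classification (a toy sketch, not the study's implementation), a segmentation model produces one score map per tissue class, and each pixel is assigned the class with the highest score. The class names below are hypothetical placeholders:

```python
import numpy as np

# Hypothetical per-class score maps for a tiny 4x4 image patch,
# with three illustrative classes: 0=background, 1=benign, 2=carcinoma.
rng = np.random.default_rng(0)
scores = rng.normal(size=(3, 4, 4))  # (num_classes, height, width)

# Pixel-wise classification: each pixel gets the class with the highest score,
# yielding a label map with the same height and width as the input patch.
segmentation = scores.argmax(axis=0)

print(segmentation.shape)  # (4, 4)
```

The resulting label map directly encodes the location, size, and shape of each segmented region, which is what makes this output format useful for pathological assessment.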
Although patch-based approaches to analyze whole slide images using convolutional neural networks (CNNs) overcome certain challenges of integrating deep learning with WSI, most currently available patch-based methods provide a narrow field of view and fail to incorporate morphological information from low magnifications.2
To overcome these limitations, a team led by Dr. Thomas J. Fuchs developed a deep multi-magnification network (DMMN) for tissue segmentation. This patch-based segmentation CNN could help enhance the efficiency and reproducibility of breast cancer diagnosis.2
Deep multi-magnification network architecture
In contrast to deep single-magnification networks (DSMNs), DMMNs incorporate the morphological features of tissue patches at different magnifications (5×, 10×, and 20× magnification). The integration of tissue features from multiple magnifications dramatically increases the field of view and enhances the accuracy of multi-class tissue segmentation. The proposed DMMN architecture has multiple encoders, decoders, and concatenations between decoders to produce rich feature maps in the intermediate layers.2
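One way to picture the multi-magnification input is a set of co-centered patches: the 10× and 5× views cover 2× and 4× wider fields of view around the same point, downsampled to a common pixel size. The sketch below is a simplified illustration under those assumptions, not the authors' pipeline (real WSI readers fetch pre-computed pyramid levels rather than striding over the full-resolution array):

```python
import numpy as np

def multi_magnification_patches(slide, cy, cx, size=256):
    """Extract co-centered patches at 20x, 10x, and 5x (a simplified sketch).

    `slide` is a 2-D array standing in for a whole slide image at 20x.
    Lower magnifications cover wider regions, then are downsampled to the
    same pixel size, so each patch has shape (size, size).
    """
    patches = {}
    for name, factor in (("20x", 1), ("10x", 2), ("5x", 4)):
        half = size * factor // 2
        region = slide[cy - half:cy + half, cx - half:cx + half]
        # Naive downsampling by striding; stands in for reading a pyramid level.
        patches[name] = region[::factor, ::factor]
    return patches

slide = np.zeros((4096, 4096), dtype=np.uint8)  # placeholder slide array
patches = multi_magnification_patches(slide, 2048, 2048)
print({k: v.shape for k, v in patches.items()})
```

Feeding such a set of patches through separate encoders, as the DMMN does, lets the network combine cellular detail from the 20× view with architectural context from the 5× view for the same location.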
“Deep Multi-Magnification Network (DMMN) looks at a set of patches from multiple magnifications to fully utilize morphological features from high magnification and low magnification,” says Dr. David Joon Ho, the first author of the study. “By partially annotating whole slide images to train the model, Deep Multi-Magnification Network (DMMN) can accurately segment multiple tissue subtypes.”
Researchers trained three DMMN models with multi-encoder, multi-decoder, and multi-concatenation architecture to perform tissue segmentation in different clinical settings: 1) in breast cancer specimens to identify malignant and benign margins, 2) in lung cancer samples to identify tumor subtypes based on histological patterns, and 3) in osteosarcoma specimens to assess tissue necrosis from multiple slides for pre-operative treatment response assessment.5
All three DMMN models were trained using a set of partially annotated patches in digitized whole slide images at multiple magnifications. After training, DMMN models were evaluated for their ability to perform automated multi-class tissue segmentation.2,5
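Training from partial annotations can be implemented by simply excluding unannotated pixels from the loss, so they contribute no gradient. The following is a minimal NumPy sketch of such a masked pixel-wise loss, assuming a sentinel label of -1 for unannotated pixels (the sentinel value and function name are illustrative, not from the paper):

```python
import numpy as np

def masked_pixel_loss(probs, labels, ignore_label=-1):
    """Mean negative log-likelihood over annotated pixels only (a sketch).

    `probs`: (num_classes, H, W) softmax probabilities.
    `labels`: (H, W) integer labels; `ignore_label` marks unannotated pixels,
    so partially annotated slides contribute loss only where labels exist.
    """
    mask = labels != ignore_label
    if not mask.any():
        return 0.0
    ys, xs = np.nonzero(mask)
    picked = probs[labels[ys, xs], ys, xs]  # probability of the true class
    return float(-np.log(picked).mean())

# Toy 2-class example on a 2x2 patch; the right column is unannotated.
probs = np.array([[[0.9, 0.5], [0.2, 0.5]],
                  [[0.1, 0.5], [0.8, 0.5]]])
labels = np.array([[0, -1], [1, -1]])
print(masked_pixel_loss(probs, labels))
```

Only the two annotated pixels enter the average here, which is what allows pathologists to annotate a few representative regions per slide instead of the entire image.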
Performance of the network
The proposed DMMN could provide information on both cellular features (e.g., nuclear characteristics) from high magnifications and architectural growth patterns (e.g., distribution of tissue types) from low magnifications. This feature of the proposed CNN architecture opens new avenues for high-resolution and accurate segmentation of different tissue regions.2
The trained DMMN was used to analyze whole slide images from patients with triple-negative breast cancer (TNBC) and high-grade invasive ductal carcinoma (IDC). A second dataset contained whole slide images of lumpectomy and breast margin specimens from patients with IDC and ductal carcinoma in situ (DCIS). The algorithm could successfully identify malignant margins in whole slide images of breast tissues, providing feature maps with smooth and clear boundaries. Additionally, the breast DMMN model proved to be a sensitive method to identify tumor regions in breast tissues.2
In line with the high diagnostic accuracy in breast tissues, DMMN-based tissue segmentation could accurately identify necrotic regions in osteosarcoma specimens, providing an error rate similar to that of manual assessment by a pathologist. Furthermore, multi-class tissue segmentation was able to identify histological patterns in lung cancer specimens.5
Notably, the multi-encoder, multi-decoder, and multi-concatenation architecture described in this study outperformed other single and multi-magnification-based tissue segmentation architectures, providing a higher mean intersection-over-union.5
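Mean intersection-over-union (IoU) is the standard metric behind that comparison: for each class, the overlap between predicted and annotated regions is divided by their union, and the per-class scores are averaged. A minimal implementation for label maps:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes (standard definition).

    For each class c: IoU = |pred==c AND target==c| / |pred==c OR target==c|.
    Classes absent from both prediction and annotation are skipped.
    """
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

# Toy 2x2 label maps: class 0 overlaps 1/2, class 1 overlaps 2/3.
pred   = np.array([[0, 0], [1, 1]])
target = np.array([[0, 1], [1, 1]])
print(round(mean_iou(pred, target, 2), 4))  # (0.5 + 2/3) / 2
```

A higher mean IoU indicates that the predicted regions match the pathologists' annotations more closely across all tissue classes, not just the dominant one.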
“DMMN achieves outstanding multi-class tissue segmentation of whole slide images by utilizing morphological features from multiple magnifications,” says Dr. Ho. “DMMN outperforms other deep single-magnification networks using a patch from a single magnification for segmentation.”
Future perspectives
The findings of this study strongly suggest that DMMN-based tissue segmentation models can be used to objectively and accurately diagnose different types of cancer, including breast cancer. By integrating histological information from different magnifications, DMMNs surpass the ability of deep single-magnification networks to segment biological structures in tissue images.
Typically, a large set of annotations is required for supervised training of deep learning models; this can significantly increase the time and effort needed for model development. The possibility to train DMMN-based tissue segmentation models using partially annotated whole slide images can reduce the annotation burden for pathologists.
“Tissue segmentation of whole slide images is a prerequisite step for objective diagnosis and assessment of cancers,” says Dr. Ho. “DMMN can be used as a screening tool to select potential malignant slides for efficient assessment of shaved margins for breast lumpectomy specimens. Additionally, DMMN can be used as a reproducible system to estimate necrosis ratio from multiple osteosarcoma slides, which is known to correlate with patients’ survival.”
However, a key limitation of the proposed model was its inability to accurately segment well-differentiated carcinomas. This may have resulted from the underrepresentation of well-differentiated carcinomas in the training set. Moreover, the model was relatively sensitive to background noise, which could result in mis-segmentation of background regions.2 Future efforts are warranted to develop and validate more accurate DMMN models to segment whole slide images of various cancer subtypes while recognizing background noise patterns.
Despite these limitations, the clinical implementation of multi-magnification networks to segment tissues on whole slide images may reduce the workload of pathologists, speed up diagnosis, and improve diagnostic accuracy in breast cancer.
References
- Leong AS-Y, Zhuang Z. The changing role of pathology in breast cancer diagnosis and treatment. Pathobiology. 2011;78(2):99-114. doi:10.1159/000292644
- Ho DJ, Yarlagadda DVK, D’Alfonso TM, et al. Deep Multi-Magnification Networks for multi-class breast cancer image segmentation. Comput Med Imaging Graph. 2021;88:101866. doi:10.1016/j.compmedimag.2021.101866
- Aeffner F, Zarella MD, Buchbinder N, et al. Introduction to Digital Image Analysis in Whole-slide Imaging: A White Paper from the Digital Pathology Association. J Pathol Inform. 2019;10:9. doi:10.4103/jpi.jpi_82_18
- Wang S, Yang DM, Rong R, Zhan X, Xiao G. Pathology Image Analysis Using Segmentation Deep Learning Algorithms. Am J Pathol. 2019;189(9):1686-1698. doi:10.1016/j.ajpath.2019.05.007
- Ho DJ. Deep learning-based whole slide image segmentation for efficient and reproducible assistance in pathology. In: Pathology Visions; 2021.