Typically, a pathologist stains tissue samples and then examines the tissue by their chosen means. However, the histochemical staining procedure has multiple drawbacks: it requires skilled technicians, specialised laboratory equipment, and dedicated infrastructure, and it is very time consuming. Furthermore, the handling of tissue slides by histology technicians and the staining process itself can introduce errors that lead to misdiagnosis. Adding to the complications, histochemical staining irreversibly alters the tissue, meaning the original sample is not preserved and cannot undergo alternative analyses to aid the patient’s diagnosis.
As artificial intelligence (AI) advances, research groups are using AI to streamline the pathology workflow and address the problems facing the field. In a recent study from the University of California, Los Angeles (UCLA), published in Intelligent Computing, researchers used deep neural networks to virtually stain microscopic images of unlabelled tissue.
Previous research has already worked to artificially stain unlabelled tissue section images using deep neural networks, avoiding the time-consuming histochemical staining processes while preserving the tissue for further analysis. This process does, however, carry its own limitations. “In all the label-free virtual staining methods, the acquisition of in-focus images of the unlabelled tissue sections is essential. In general, focusing is a critical but time-consuming step in scanning optical microscopy used to correct focus drifts caused by mechanical or thermal fluctuations,” the authors said.
The most widely used autofocusing method demands many focus points across the tissue slide area, each with high focusing precision, and the best focal plane is determined by an iterative search algorithm, which is time consuming and may cause photodamage and photobleaching of the samples.
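The iterative focus search described above can be illustrated with a minimal sketch. The sharpness function, stage positions, and search method (a golden-section search) are invented stand-ins for illustration, not the specific autofocusing procedure used in the study; the point is that every evaluation of the focus metric corresponds to capturing an extra image, which costs time and exposes the sample to more light.

```python
import numpy as np

def sharpness(z, best_z=12.0, width=4.0):
    # Stand-in focus metric: in a real microscope this would be an
    # image-contrast measure (e.g. variance of Laplacian) computed on a
    # frame captured at stage position z. Here it is modelled as a
    # smooth peak centred on the true focal plane.
    return np.exp(-((z - best_z) / width) ** 2)

def iterative_autofocus(z_lo, z_hi, tol=0.01):
    """Golden-section search for the z position maximising sharpness.

    Each metric evaluation stands for one captured image, so the
    evaluation count is a proxy for acquisition time and light dose.
    """
    phi = (np.sqrt(5) - 1) / 2
    evals = 0
    while z_hi - z_lo > tol:
        a = z_hi - phi * (z_hi - z_lo)
        b = z_lo + phi * (z_hi - z_lo)
        evals += 2
        if sharpness(a) < sharpness(b):
            z_lo = a  # maximum lies in the upper part of the bracket
        else:
            z_hi = b  # maximum lies in the lower part of the bracket
    return (z_lo + z_hi) / 2, evals

best, n = iterative_autofocus(0.0, 30.0)
```

Even this idealised search needs dozens of image captures per focus point to reach micron-scale precision, which is why repeating it at many points across a whole slide dominates the acquisition time.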
To overcome these problems, the authors present a new deep learning-based fast virtual staining framework. They say that “this framework uses an autofocusing neural network (termed Deep-R) to digitally refocus the defocused autofluorescence images. Then a virtual staining network is used to transform the refocused images into virtually stained images.”
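The data flow of that two-stage pipeline can be sketched as follows. The function names and placeholder transforms are hypothetical stand-ins (the real Deep-R and virtual staining networks are trained deep models); the sketch only shows how the stages compose.

```python
import numpy as np

def deep_r_refocus(defocused_af: np.ndarray) -> np.ndarray:
    # Stage 1: would map a defocused autofluorescence image to a
    # digitally refocused one. Identity placeholder here.
    return defocused_af

def virtual_stain(refocused_af: np.ndarray) -> np.ndarray:
    # Stage 2: would map a single-channel autofluorescence image to a
    # 3-channel RGB image resembling a histochemically stained slide.
    # Channel replication is a placeholder for the learned mapping.
    return np.repeat(refocused_af[..., None], 3, axis=-1)

def fast_virtual_staining(defocused_af: np.ndarray) -> np.ndarray:
    """Two-stage pipeline: digitally refocus first, then virtually stain."""
    return virtual_stain(deep_r_refocus(defocused_af))

# One coarsely focused autofluorescence tile in, one RGB tile out.
tile = np.random.rand(256, 256).astype(np.float32)
stained = fast_virtual_staining(tile)
```

The key design point is the ordering: because refocusing happens digitally after acquisition, the microscope itself no longer needs to capture sharply focused images.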
Compared to the standard virtual staining framework, the new framework uses fewer focus points and relaxes the focusing precision required at each point, acquiring coarsely focused whole-slide autofluorescence images of the tissue.
This new virtual staining framework can significantly reduce the time needed for autofocusing and for the entire image acquisition process. The authors say that “the deep learning-based framework decreases the total image acquisition time needed for virtual staining of a label-free whole slide image (WSI) by ~32%, also resulting in a ~89% decrease in the autofocusing time per tissue slide.”
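The two quoted figures can be sanity-checked against each other with a small back-of-the-envelope calculation. Assuming (an assumption not stated in the paper) that essentially all of the saved time comes from the shortened autofocusing step, they imply how large a share of the original acquisition time autofocusing must have been:

```python
total_reduction = 0.32      # ~32% less total acquisition time
autofocus_reduction = 0.89  # ~89% less autofocusing time

# If all savings come from autofocusing, its share of the original
# acquisition time is (total saved) / (fraction of autofocus removed).
autofocus_share = total_reduction / autofocus_reduction
```

Under that assumption, autofocusing accounted for roughly a third of the original whole-slide acquisition time, which is consistent with the article's description of focusing as a dominant bottleneck.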
Despite some loss of image sharpness and contrast compared to the standard virtual staining framework, the new framework still produces high-quality staining that closely matches the corresponding histochemically stained ground-truth images. It can also be used as an add-on module to improve the robustness of the standard virtual staining framework.
The authors see further development prospects for this fast virtual staining framework. “This fast virtual staining workflow can also be expanded to many other stains, such as Masson’s Trichrome stain, Jones’ silver stain, and immunohistochemical (IHC) stains,” the authors said. “Although the virtual staining approach presented here was demonstrated based on the autofluorescence imaging of unlabeled tissue sections, it can also be used to speed up the virtual staining workflow of other label-free microscopy modalities.”
Yijie Zhang et al, Virtual Staining of Defocused Autofluorescence Images of Unlabeled Tissue Using Deep Neural Networks, Intelligent Computing (2022). DOI: 10.34133/2022/9818965
Artificial intelligence methods may replace histochemical staining (2022, October 31)