The citations below are from CAP TODAY.
See also The Pathologist, January 2026, on solving the cytopathology staffing challenge:
https://thepathologist.com/issues/2026/articles/january/solving-the-cytopathology-staffing-challenge
### CAP TODAY:
https://www.captodayonline.com/ai-bottleneck-putting-the-algorithms-to-work/3/
AI bottleneck: putting the algorithms to work
Mayo Clinic in Rochester is evaluating potential applications of AI-enhanced pipelines for analyzing measurable residual disease (MRD) flow cytometry data, with the goal of accelerating interpretation and throughput in its cell kinetics clinical laboratory.
Dr. Seheult and his colleagues first introduced AI into clinical MRD testing in 2024, when a deep neural network (DNN)-assisted, human-in-the-loop workflow was deployed for the chronic lymphocytic leukemia (CLL) MRD assay (Seheult JN, et al. Cancers (Basel). 2025;17[10]:1688). The laboratory incorporated that model into the existing CLL workflow using a scripting interface that calls the AI model from the software already in use. “Our technologists were familiar with that software solution and they didn’t want to move away from it at that time,” he says. The manual approach for the CLL assay takes about 15 minutes.
With the AI-enhanced workflow, “we dropped that analysis time to under two to four minutes,” Dr. Seheult says.
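The article doesn't include implementation details, but the integration pattern it describes (existing analysis software calling a DNN through a scripting interface, with the technologist kept in the loop) might look something like the minimal Python sketch below. The model interface, field layout, and confidence threshold are illustrative assumptions, not the published Mayo Clinic implementation.

```python
# Hypothetical sketch of a human-in-the-loop DNN assist for flow cytometry
# gating, invoked as a script from existing analysis software. The model
# interface and threshold are assumptions, not the Mayo Clinic workflow.
import numpy as np

REVIEW_THRESHOLD = 0.90  # events below this confidence go to the technologist


def classify_events(events: np.ndarray, model) -> tuple[np.ndarray, np.ndarray]:
    """Run the DNN over per-event measurements.

    events: (n_events, n_channels) array exported from the acquisition software.
    model: any classifier exposing a scikit-learn-style predict_proba().
    Returns predicted class labels and per-event confidence scores.
    """
    probs = model.predict_proba(events)  # (n_events, n_classes)
    labels = probs.argmax(axis=1)
    confidence = probs.max(axis=1)
    return labels, confidence


def triage_for_review(labels: np.ndarray, confidence: np.ndarray):
    """Keep the human in the loop: flag low-confidence events for manual gating."""
    needs_review = confidence < REVIEW_THRESHOLD
    return labels, needs_review
```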
##
How we implemented a digital Pap test system
https://www.captodaymag.com/captoday/library/item/january_2026/4316649/
A retrospective review by Doxtader et al., published in 2025, compared the Hologic Genius Digital Diagnostics System with manual glass slide review.5 They had 596 cases and a washout period of four weeks, which met the criteria for SR No. 1 and SR No. 3, respectively. The authors found that digital interpretation was concordant with the original interpretation in 578 of 596 (97 percent) cases, while manual interpretation was concordant with the original interpretation in 577 of 596 (97 percent) cases, which met the SR No. 2 criteria. The study also showed that digital review had higher sensitivity than manual review for detection of high-grade disease (100 percent versus 86 percent) but was less specific (93 percent versus 98 percent). Similar to the results of the Cantley et al. study, the average digital review time per case was statistically significantly shorter than the manual review time (194.5 seconds versus 485.0 seconds, p < .001).
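As a worked illustration (not the authors' code), the concordance figures above reduce to simple proportions, and the sensitivity/specificity figures to counts from a 2×2 table:

```python
# Worked illustration of the metrics reported by Doxtader et al. (not the
# authors' code). Concordance is simple agreement with the original sign-out.
def concordance(n_agree: int, n_total: int) -> float:
    return n_agree / n_total


print(f"digital: {concordance(578, 596):.1%}")  # ~97.0%
print(f"manual:  {concordance(577, 596):.1%}")  # ~96.8%


def sens_spec(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP),
    here relative to detection of high-grade disease."""
    return tp / (tp + fn), tn / (tn + fp)
```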
In 2024, a validation study by Cantley et al. using the Genius Digital Diagnostics System was published.4 Their study consisted of 319 ThinPrep Pap test cases, which met the SR No. 1 criterion of at least 60 cases, and had a two-week washout period between light microscopic and digital evaluation, which met SR No. 3. While their results showed higher diagnostic accuracy for digital review than for light microscopy, agreement with the ground-truth diagnosis was only moderate for each review method (62.1 percent for digital and 55.8 percent for light microscopy), falling short of the recommended 95 percent concordance (SR No. 2). Their study investigated and acknowledged multiple factors that may have affected the agreement. Time to evaluate cases was shorter for digital review (mean = 3.2 min, SD = 2.2) than for manual review (mean = 5.9 min, SD = 3.1) (t(352) = 19.44, p < 0.001).
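For readers who want to sanity-check the review-time comparison, a two-sample test can be recomputed from the reported summary statistics alone. The unpaired Welch version below is a sketch; the published t(352) = 19.44 presumably reflects the authors' own (possibly paired) formulation, so the numbers will differ.

```python
# Illustrative recomputation of the review-time comparison from the summary
# statistics reported by Cantley et al. (means/SDs in minutes). This unpaired
# Welch test is a sketch, not a reproduction of the authors' analysis.
from scipy.stats import ttest_ind_from_stats

n = 319  # ThinPrep Pap test cases in the study
result = ttest_ind_from_stats(
    mean1=5.9, std1=3.1, nobs1=n,  # manual light-microscopy review
    mean2=3.2, std2=2.2, nobs2=n,  # digital review
    equal_var=False,               # Welch's t-test
)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.2e}")
```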
#
#
Path Informatics Abstracts
TITAN
Summary
TITAN, a multimodal whole slide foundation model, outperforms existing slide encoders in various diagnostic and prognostic tasks, including rare disease classification. While promising, TITAN’s limitations, such as partial image representation and domain shift, necessitate further validation and workflow integration before clinical use. A study comparing seven high-throughput whole slide imaging devices revealed significant variability in scan times, image quality, and technician involvement, emphasizing the importance of in-house evaluation before procurement.
Editors: Liron Pantanowitz, MD, PhD, MHA, chair of the Department of Pathology and professor of pathology, University of Pittsburgh Medical Center, and Matthew G. Hanna, MD, vice chair of pathology informatics and associate professor, Department of Pathology, University of Pittsburgh Medical Center.
Assessment of a multimodal whole slide foundation model for pathology
January 2026—Traditional digital pathology models have largely operated on small image patches, limiting their ability to capture global slide context and often ignoring accompanying textual data, such as pathology reports. The authors conducted a study to address these limitations by introducing a unified model that can learn from gigapixel whole slide images (WSI) and associated text, thereby offering broad generalization capabilities across diagnostic and prognostic tasks. The authors’ large-scale, multimodal foundation model, named TITAN (Transformer-based Pathology Image and Text Alignment Network), represents a major advancement in computational pathology. The authors pretrained TITAN on the Mass-340K data set, which comprises more than 335,000 WSI spanning 20 organ types and multiple staining protocols. Their pretraining pipeline included three key stages: first, vision-only self-supervised learning on large (8,192 × 8,192 pixel) image crops; second, region-level vision–language alignment using more than 420,000 synthetic image–caption pairs generated by a multimodal generative model; and third, slide-level alignment with more than 180,000 real slide–report pairs. The model’s transformer architecture incorporates attention mechanisms that can handle the long-range dependencies inherent in WSI.

The authors rigorously evaluated TITAN and found that it outperformed such state-of-the-art slide encoders as Prism, Prov-GigaPath, and CHIEF across a wide range of downstream tasks, including morphological classification, molecular prediction, survival analysis, and text–image retrieval. TITAN also demonstrated strong few-shot and zero-shot learning capabilities, particularly in rare disease classification, for which it achieved substantially higher accuracy compared with other models. Moreover, its cross-modal functionality, which enables slide-to-report and report-to-slide retrieval and automatic report generation, marked a significant step toward integrated digital pathology workflows.

The authors emphasized TITAN’s potential to act as a general-purpose foundation model for pathology that is capable of powering a variety of diagnostic and research tasks without extensive retraining. Its ability to handle multimodal data and its robustness with limited training examples make it particularly promising for applications in rare diseases and low-resource settings. However, it does have noteworthy limitations. For example, even though TITAN was trained on a massive data set, each image crop still represents only a portion of a whole slide image. Furthermore, domain shift issues, such as variation in scanners, staining, or institutions, may hinder real-world deployment. The authors acknowledge that larger and more diverse data sets, as well as clinical validation studies, are necessary to ensure the generalizability and reliability of the technology. While further validation and workflow integration are needed before this technology can be used in a clinical setting, the authors have established a strong technical and conceptual foundation for the future of AI-augmented pathology.
Ding T, Wagner SJ, Song AH, et al. A multimodal whole-slide foundation model for pathology. Nat Med. 2025;31:3749–3761.
Correspondence: Dr. Long Phi Le at long.le@mgh.harvard.edu or Dr. Faisal Mahmood at faisalmahmood@bwh.harvard.edu
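The slide-report alignment stage described in the abstract follows the general contrastive (CLIP-style) vision-language recipe. Below is a minimal sketch of that kind of symmetric objective, with placeholder embedding dimensions and temperature rather than anything taken from the paper:

```python
# Minimal sketch of CLIP-style contrastive slide-report alignment, the general
# technique behind TITAN's vision-language pretraining stages. Dimensions and
# temperature are placeholder assumptions, not the paper's hyperparameters.
import torch
import torch.nn.functional as F


def contrastive_alignment_loss(slide_emb: torch.Tensor,
                               text_emb: torch.Tensor,
                               temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of matched (slide, report) pairs."""
    slide_emb = F.normalize(slide_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = slide_emb @ text_emb.t() / temperature    # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)  # diagonal match
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


# Toy usage: random embeddings for a batch of 8 slide-report pairs, 512-dim.
loss = contrastive_alignment_loss(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())
```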
#
#
Digital slide scanning at scale: comparison of whole slide imaging devices
Advances in digital pathology have led to the availability of many commercial whole slide imaging systems. The problem with having multiple options is determining which one best meets a laboratory’s needs. Choosing a scanner is an important decision because scanner throughput and image quality are critical to efficient clinical operations. The authors conducted a study in which they provided a detailed comparison of seven high-throughput whole slide imaging devices used in real-world conditions, focusing on system performance, scan times, image quality, and operational requirements. The study, conducted at Memorial Sloan Kettering Cancer Center, focused on 347 glass slides representing a variety of pathology subspecialties and stain types that were scanned sequentially on 16 scanners from the seven vendors (Leica, 3DHistech, Philips, Hamamatsu, Hologic, Huron, and Pramana). Scanning was performed at 40× or 20× magnification, depending on the device. Technician time for pre- and post-scan activities, as well as manual image quality review, was recorded.
The study found that scan times, including the technician’s time, ranged from 13.5 to 47 hours, highlighting significant variability among devices. Image quality errors—for example, those related to missing tissue, blurred or out-of-focus images, barcode failures, tiling, or overexposure—were detected in eight to 61 percent of slides per run.
Technician involvement ranged from five to 52 percent of total scan time, depending on scanner automation and ease of use. Inter-instrument variability was notable, even among identical models, underscoring the importance of calibration and quality control. While automated features such as continuous loading and automated focusing can reduce technician time, this study indicated that they may introduce image errors if not properly managed. Given that high-throughput whole slide imaging devices differ substantially in throughput, image quality, and operational requirements, successful implementation of digital pathology requires careful consideration of scanner performance and workflow integration. Real-world performance data, rather than vendor specifications, should guide procurement decisions. Therefore, the authors recommend evaluating scanners in-house before making a purchase.
Ardon O, Manzo A, Spencer J, et al. Digital slide scanning at scale: Comparison of whole slide imaging devices in a clinical setting. J Pathol Inform. 2025. doi.org/10.1016/j.jpi.2025.100446
Correspondence: Dr. Orly Ardon at ardono@mskcc.org or Allyne Manzo at manzoa@mskcc.org
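A laboratory running the recommended in-house evaluation could tabulate the same kinds of metrics reported in the study (total scan time, technician-time fraction, and image quality error rate) from its own scan logs. A minimal sketch, with an assumed record schema rather than the authors' data layout:

```python
# Minimal sketch of tabulating per-scanner metrics from an in-house evaluation
# log, in the spirit of the study's comparison. The record fields are assumed
# for illustration and are not the authors' data schema.
from dataclasses import dataclass


@dataclass
class ScanRecord:
    scanner: str
    scan_minutes: float   # wall-clock scan time for the slide
    tech_minutes: float   # pre-/post-scan technician handling time
    quality_error: bool   # failed manual image quality review


def summarize(records: list[ScanRecord]) -> dict[str, dict[str, float]]:
    by_scanner: dict[str, list[ScanRecord]] = {}
    for r in records:
        by_scanner.setdefault(r.scanner, []).append(r)
    out: dict[str, dict[str, float]] = {}
    for name, recs in by_scanner.items():
        total = sum(r.scan_minutes + r.tech_minutes for r in recs)
        tech = sum(r.tech_minutes for r in recs)
        out[name] = {
            "total_hours": total / 60,          # 13.5-47 h per run in the study
            "tech_fraction": tech / total,      # 5-52% in the study
            "error_rate": sum(r.quality_error for r in recs) / len(recs),  # 8-61%
        }
    return out
```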