Publication highlights - Methods for Intelligent Systems

Deployment of Image Analysis Algorithms under Prevalence Shifts

Main contribution: We systematically analyze the effects that prevalence shifts can have on deployed image classification algorithms and comprehensively compensate for them with a novel, overarching workflow.

Domain gaps are among the most relevant roadblocks in the clinical translation of machine learning (ML)-based solutions for medical image analysis. While current research focuses on new training paradigms and network architectures, little attention is given to the specific effect of prevalence shifts on an algorithm deployed in practice. Such discrepancies between class frequencies in the data used for a method's development/validation and those in its deployment environment(s) are of great importance, for example in the context of artificial intelligence (AI) democratization, as disease prevalences may vary widely across time and location. Our contribution is twofold. First, we empirically demonstrate the potentially severe consequences of missing prevalence handling by analyzing (i) the extent of miscalibration, (ii) the deviation of the decision threshold from the optimum, and (iii) the ability of validation metrics to reflect neural network performance on the deployment population as a function of the discrepancy between development and deployment prevalence. Second, we propose a workflow for prevalence-aware image classification that uses estimated deployment prevalences to adjust a trained classifier to a new environment, without requiring additional annotated deployment data. Comprehensive experiments based on a diverse set of 30 medical classification tasks showcase the benefit of the proposed workflow in generating better classifier decisions and more reliable performance estimates compared to current practice.
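
The core compensation step for a pure prevalence (prior) shift can be illustrated as follows. This is a minimal sketch under the assumption of calibrated posteriors, not the authors' full workflow; all function names and prevalence values are illustrative.

    import numpy as np

    def adjust_posteriors(probs, dev_prev, dep_prev):
        """Re-weight classifier posteriors for a prevalence (prior) shift.

        probs    : (n_samples, n_classes) posteriors from the development model
        dev_prev : (n_classes,) class prevalences in the development data
        dep_prev : (n_classes,) estimated class prevalences at deployment
        """
        probs = np.asarray(probs, dtype=float)
        weights = np.asarray(dep_prev, dtype=float) / np.asarray(dev_prev, dtype=float)
        adjusted = probs * weights                        # Bayes' rule: scale by prior ratio
        adjusted /= adjusted.sum(axis=1, keepdims=True)   # renormalize to valid posteriors
        return adjusted

    # Illustrative values: a binary task developed with balanced classes,
    # deployed in a setting where the positive class is rare (10 %).
    probs = np.array([[0.4, 0.6], [0.8, 0.2]])
    adjusted = adjust_posteriors(probs, dev_prev=[0.5, 0.5], dep_prev=[0.9, 0.1])
    decisions = adjusted.argmax(axis=1)                   # argmax now reflects deployment prevalences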

Godau, P., Kalinowski, P., Christodoulou, E., Reinke, A., Tizabi, M., Ferrer, L., Jäger, P. F., & Maier-Hein, L. (2023). Deployment of Image Analysis Algorithms Under Prevalence Shifts. Medical Image Computing and Computer Assisted Intervention – MICCAI 2023 (pp. 389–399). [MICCAI 2023 straight accept, oral, shortlisted for Best Paper and Young Scientist Award] [pdf] [video]

Unsupervised Domain Transfer with Conditional Invertible Neural Networks

Main contribution: First domain transfer approach that combines the benefits of conditional Invertible Neural Networks (exact maximum likelihood estimation) with those of GANs (high image quality).

Synthetic medical image generation has evolved as a key technique for neural network training and validation. A core challenge, however, remains in the domain gap between simulations and real data. While deep learning-based domain transfer using Cycle Generative Adversarial Networks and similar architectures has led to substantial progress in the field, there are use cases in which state-of-the-art approaches still fail to generate training images that produce convincing results on relevant downstream tasks. Here, we address this issue with a domain transfer approach based on conditional invertible neural networks (cINNs). As a particular advantage, our method inherently guarantees cycle consistency through its invertible architecture, and network training can efficiently be conducted with maximum likelihood training. To showcase our method's generic applicability, we apply it to two spectral imaging modalities at different scales, namely hyperspectral imaging (pixel-level) and photoacoustic tomography (image-level). According to comprehensive experiments, our method enables the generation of realistic spectral data and outperforms the state of the art on two downstream classification tasks (binary and multi-class). cINN-based domain transfer could thus evolve as an important method for realistic synthetic data generation in the field of spectral imaging and beyond. The code is available at https://github.com/IMSY-DKFZ/UDT-cINN.
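
As a rough illustration of the training principle, the following sketch shows maximum-likelihood training of a single conditional affine coupling block. It is a generic, simplified stand-in for the cINN architecture described in the paper (the actual implementation is in the linked repository); all dimensions and names are illustrative.

    import torch
    import torch.nn as nn

    class ConditionalAffineCoupling(nn.Module):
        """Simplified conditional affine coupling block (invertible by construction)."""
        def __init__(self, dim, cond_dim, hidden=64):
            super().__init__()
            self.half = dim // 2
            self.net = nn.Sequential(
                nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, 2 * (dim - self.half)),
            )

        def forward(self, x, cond):
            x1, x2 = x[:, :self.half], x[:, self.half:]
            s, t = self.net(torch.cat([x1, cond], dim=1)).chunk(2, dim=1)
            s = torch.tanh(s)                    # keep scales well-behaved
            z2 = x2 * torch.exp(s) + t           # affine transform of the second half
            log_det = s.sum(dim=1)               # log |det J| of the transform
            return torch.cat([x1, z2], dim=1), log_det

    # Maximum-likelihood training: map data to a standard normal latent and
    # maximize log p(x | cond) = log N(z; 0, I) + log |det J|.
    dim, cond_dim = 16, 2                        # e.g. spectral bands + one-hot domain label
    flow = ConditionalAffineCoupling(dim, cond_dim)
    optimizer = torch.optim.Adam(flow.parameters(), lr=1e-3)

    x = torch.randn(32, dim)                     # placeholder batch of spectra
    cond = torch.eye(cond_dim)[torch.randint(0, cond_dim, (32,))]

    z, log_det = flow(x, cond)
    nll = (0.5 * (z ** 2).sum(dim=1) - log_det).mean()   # negative log-likelihood (up to a constant)
    optimizer.zero_grad()
    nll.backward()
    optimizer.step()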

Dreher, K. K., Ayala, L., Schellenberg, M., Hübner, M., Nölke, J.-H., Adler, T. J., Seidlitz, S., Sellner, J., Studier-Fischer, A., Gröhl, J., Nickel, F., Köthe, U., Seitel, A., & Maier-Hein, L. (2023). Unsupervised Domain Transfer with Conditional Invertible Neural Networks. Medical Image Computing and Computer Assisted Intervention – MICCAI 2023 (pp. 770–780). [pdf] [video]

Self-distillation for surgical action recognition

Main contribution: We present the winning solution to the 2022 Surgical Action Triplet Detection challenge, which uses the concept of self-distillation as a means of addressing class imbalance and potential label ambiguity in surgical video analysis.

Surgical scene understanding is a key prerequisite for context-aware decision support in the operating room. While deep learning-based approaches have already reached or even surpassed human performance in various fields, the task of surgical action recognition remains a major challenge. With this contribution, we are the first to investigate the concept of self-distillation as a means of addressing class imbalance and potential label ambiguity in surgical video analysis. Our proposed method is a heterogeneous ensemble of three models that use Swin Transformers as backbone and the concepts of self-distillation and multi-task learning as core design choices. According to ablation studies performed with the CholecT45 challenge data via cross-validation, the biggest performance boost is achieved by the use of soft labels obtained by self-distillation. External validation of our method on an independent test set was achieved by providing a Docker container of our inference model to the challenge organizers. According to their analysis, our method outperforms all other solutions submitted to the latest challenge in the field. Our approach thus shows the potential of self-distillation for becoming an important tool in medical image analysis applications.
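
The self-distillation idea can be sketched as follows; the linear "models" and loss weighting below are illustrative placeholders, not the Swin Transformer ensemble used in the challenge submission.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, hard_labels, T=2.0, alpha=0.5):
        """Mix soft teacher targets with the original hard labels."""
        soft_targets = F.softmax(teacher_logits / T, dim=1)
        soft_loss = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                             soft_targets, reduction="batchmean") * T * T
        hard_loss = F.cross_entropy(student_logits, hard_labels)
        return alpha * soft_loss + (1.0 - alpha) * hard_loss

    # Self-distillation: the "teacher" is a previously trained copy of the same
    # architecture; its frozen predictions serve as soft labels for retraining.
    teacher = torch.nn.Linear(128, 10)           # stand-in for the trained first-round model
    student = torch.nn.Linear(128, 10)           # same architecture, trained anew
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

    features = torch.randn(32, 128)              # placeholder video-frame features
    labels = torch.randint(0, 10, (32,))

    with torch.no_grad():
        teacher_logits = teacher(features)
    loss = distillation_loss(student(features), teacher_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()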

Yamlahi, A., Tran, T. N., Godau, P., Schellenberg, M., Michael, D., Smidt, F.-H., Nölke, J.-H., Adler, T. J., Tizabi, M. D., Nwoye, C. I., Padoy, N., & Maier-Hein, L. (2023). Self-distillation for Surgical Action Recognition. Medical Image Computing and Computer Assisted Intervention – MICCAI 2023 (pp. 637–646). [pdf] [video]

Why is the winner the best?

Main contribution: First comprehensive multi-center study on 80 international benchmarking competitions, which investigated winning solution characteristics and common participation strategies.

International benchmarking competitions have become fundamental for the comparative performance assessment of image analysis methods. However, little attention has been given to investigating what can be learnt from these competitions. Do they really generate scientific progress? What are common and successful participation strategies? What makes a solution superior to a competing method? To address this gap in the literature, we performed a multi-center study with all 80 competitions that were conducted in the scope of IEEE ISBI 2021 and MICCAI 2021. Statistical analyses based on comprehensive descriptions of the submitted algorithms, linked to their rank as well as the underlying participation strategies, revealed common characteristics of winning solutions. These typically include the use of multi-task learning (63%) and/or multi-stage pipelines (61%), and a focus on augmentation (100%), image preprocessing (97%), data curation (79%), and postprocessing (66%). The "typical" lead of a winning team is a computer scientist with a doctoral degree, five years of experience in biomedical image analysis, and four years of experience in deep learning. Two core general development strategies stood out for highly ranked teams: the reflection of the metrics in the method design and the focus on analyzing and handling failure cases. According to the organizers, 43% of the winning algorithms exceeded the state of the art but only 11% completely solved the respective domain problem. The insights of our study could help researchers (1) improve algorithm development strategies when approaching new problems, and (2) focus on open research questions revealed by this work.

Eisenmann, M., Reinke, A., Weru, V., Tizabi, M. D., Isensee, F., Adler, T. J., ... & Maier-Hein, L. (2023). Why is the winner the best? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 19955–19966). [pdf]

Task Fingerprinting for Meta Learning in Biomedical Image Analysis

Main contribution: First study in the context of Surgical Data Science to compare methods for task similarity quantification as meaningful gateways for knowledge transfer between data sets.

Shortage of annotated data is one of the greatest bottlenecks in biomedical image analysis. Meta learning studies how learning systems can increase in efficiency through experience and could thus evolve as an important concept to overcome data sparsity. However, the core capability of meta learning-based approaches is the identification of similar previous tasks given a new task - a challenge largely unexplored in the biomedical imaging domain. In this paper, we address the problem of quantifying task similarity with a concept that we refer to as task fingerprinting. The concept involves converting a given task, represented by imaging data and corresponding labels, to a fixed-length vector representation. In fingerprint space, different tasks can be directly compared irrespective of their data set sizes, types of labels or specific resolutions. An initial feasibility study in the field of surgical data science (SDS) with 26 classification tasks from various medical and non-medical domains suggests that task fingerprinting could be leveraged for both (1) selecting appropriate data sets for pretraining and (2) selecting appropriate architectures for a new task. Task fingerprinting could thus become an important tool for meta learning in SDS and other fields of biomedical image analysis.
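
One simple way to realize such a fixed-length representation, shown purely for illustration (the paper's fingerprint additionally incorporates label information and is not necessarily computed this way), is to average embeddings of a random sample of a task's images and compare tasks by cosine similarity.

    import numpy as np

    def task_fingerprint(images, embed, n_samples=256, seed=0):
        """Summarize a task as a fixed-length vector: the mean embedding of a
        random sample of its images (one simple fingerprinting variant)."""
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(images), size=min(n_samples, len(images)), replace=False)
        feats = np.stack([embed(images[i]) for i in idx])   # (n, d) embeddings
        return feats.mean(axis=0)

    def task_similarity(fp_a, fp_b):
        """Cosine similarity between two task fingerprints."""
        return float(np.dot(fp_a, fp_b) /
                     (np.linalg.norm(fp_a) * np.linalg.norm(fp_b) + 1e-12))

    # Usage: `embed` would be any fixed feature extractor, e.g. a network
    # pretrained on natural images; tasks with the most similar fingerprints
    # are candidates for pretraining or architecture transfer.
    embed = lambda img: img.reshape(-1)[:512]     # placeholder extractor for illustration
    task_a = [np.random.rand(32, 32) for _ in range(300)]
    task_b = [np.random.rand(32, 32) for _ in range(300)]
    print(task_similarity(task_fingerprint(task_a, embed), task_fingerprint(task_b, embed)))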

Godau, P., & Maier-Hein, L. (2021). Task Fingerprinting for Meta Learning in Biomedical Image Analysis. Medical Image Computing and Computer Assisted Intervention – MICCAI 2021 (pp. 436–446). [MICCAI 2021 straight accept, oral] [pdf]
