Medical Image Computing Research

Helmholtz Metadata Collaboration (HMC) - Hub Health


The Helmholtz Metadata Collaboration Platform develops concepts and technologies for efficient and interdisciplinary metadata management spanning the Helmholtz research areas Energy, Earth and Environment, Health, Matter, Information, Aeronautics, Space and Transport. As HMC Hub Health, we support researchers and clinicians in structuring, standardizing, and expanding the collection of metadata to facilitate the re-use, interoperability, reproducibility, and transparency of their data.

More information:

-> Dr. Barbara Port
-> Melanie Forche
-> Christian Koch
-> Lucas Kulla
-> Katharina Rink
-> Dr. Beatrix Tettmann


End-to-end Deep Learning Architectures


Current medical image analysis approaches often comprise a set of separate processing steps such as registration, normalization, segmentation, feature extraction and classification. This project develops techniques for the integration of these components into one end-to-end deep learning architecture. This enables the simultaneous optimization of all components w.r.t. the ultimate clinical task (e.g. disease classification).

 -> Paul Jäger 
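As a toy illustration of the end-to-end idea (our hypothetical sketch, not the project's actual architecture), the following fuses a "normalization", a "feature extraction" and a "classification" step into one pipeline and optimizes the parameters of both stages jointly w.r.t. the final classification loss:

```python
import numpy as np

# Hypothetical toy pipeline: every stage is part of one function, so the
# normalization parameter is tuned for the *final* classification loss
# (here via finite-difference gradients on a scalar parameter per stage).

def pipeline(x, w_norm, w_clf):
    z = x * w_norm                           # "normalization" step
    feat = z.mean()                          # "feature extraction" step
    return 1 / (1 + np.exp(-w_clf * feat))   # "classification" step

def loss(x, y, w_norm, w_clf):
    return (pipeline(x, w_norm, w_clf) - y) ** 2

x, y = np.array([0.2, 0.8, 0.5]), 1.0
w_norm, w_clf, lr, eps = 1.0, 1.0, 0.5, 1e-5
for _ in range(200):
    # numerical gradient of the final loss w.r.t. both stages jointly
    g_n = (loss(x, y, w_norm + eps, w_clf) - loss(x, y, w_norm - eps, w_clf)) / (2 * eps)
    g_c = (loss(x, y, w_norm, w_clf + eps) - loss(x, y, w_norm, w_clf - eps)) / (2 * eps)
    w_norm, w_clf = w_norm - lr * g_n, w_clf - lr * g_c

print(loss(x, y, w_norm, w_clf) < 0.05)  # True: both stages adapted jointly
```

In a separately optimized pipeline, the normalization step would be tuned to its own criterion and could not adapt to the downstream task.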

Machine Learning-based dMRI Processing


This project deals with processing, analysis and visualization of neurological datasets with focus on diffusion-weighted magnetic resonance imaging (dMRI). Major fields of research are the development and implementation of new methods for segmentation or tractography of white matter tracts, as well as tissue segmentation and modelling. Besides the classical methods that are used in this field, we explore the application of machine learning in the context of diffusion-weighted image processing.

-> Peter Neher

-> Jakob Wasserthal

Validation of Fiber Tractography


The quantitative evaluation of fiber tractography is a long-standing challenge for the field that represents an essential prerequisite for widespread application and meaningful interpretation of the approach. In this project we develop phantom-based as well as in-vivo methods to approach this challenge and validate tractography results in large-scale evaluation studies and international challenges.

-> Peter Neher

Learning from Noisy Annotations


The annotation of medical images suffers from high inter- and intra-rater variability, caused by a number of factors such as strong ambiguities in the images and the subjectivity of annotators. This project seeks to develop methods that can handle such ambiguous ground truth labels, e.g. by modelling the distribution over annotations via generative adversarial models.

-> Simon Kohl
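The ambiguity such methods must capture can be made concrete with a minimal example (ours, not the project's generative model): aggregating several raters' binary masks into a per-pixel label distribution whose entropy highlights the ambiguous regions.

```python
import numpy as np

# Three raters annotate the same 2x3 image; their masks disagree in places.
masks = np.array([
    [[0, 1, 1], [0, 1, 0]],   # rater 1
    [[0, 1, 1], [1, 1, 0]],   # rater 2
    [[0, 1, 0], [0, 1, 0]],   # rater 3
])

p_fg = masks.mean(axis=0)     # empirical P(label = 1) per pixel
eps = 1e-12                   # avoids log(0) for unanimous pixels
entropy = -(p_fg * np.log2(p_fg + eps) + (1 - p_fg) * np.log2(1 - p_fg + eps))

print(p_fg)      # disagreement pixels have 1/3 or 2/3
print(entropy)   # highest where the raters are split
```

A model of the annotation distribution would be trained to reproduce exactly this kind of per-pixel variability instead of a single "hard" mask.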

Learning from Weak Annotations


The full annotation of training data with individual labels on each observation can be cumbersome or even impossible in many clinically relevant scenarios. This project aims at developing machine learning methods that can handle weakly annotated data. This can drastically reduce the annotation effort in applications like semantic image segmentation and also enable the application of machine learning in settings where sufficient training data has been missing so far.

-> Michael Götz
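One common way to learn from weak, image-level labels is multiple-instance learning; the sketch below (our illustration, not necessarily the project's method) treats an image as a "bag" of patches and scores the bag by its most suspicious patch.

```python
import numpy as np

# Toy multiple-instance learning: only the bag (image) label is known,
# yet max-pooling over patch scores lets a patch-level scorer be used.

def patch_score(patch, w):
    return float(patch @ w)            # toy linear patch scorer

def bag_score(patches, w):
    return max(patch_score(p, w) for p in patches)   # max over patches

w = np.array([1.0, -1.0])                            # assumed toy weights
positive_bag = [np.array([0.1, 0.2]), np.array([0.9, 0.1])]  # one lesion-like patch
negative_bag = [np.array([0.1, 0.2]), np.array([0.2, 0.3])]

print(bag_score(positive_bag, w) > bag_score(negative_bag, w))  # True
```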

Efficient learning from microscopic images


Microscopic or pathology images are a rich source of data; however, the sheer size of these images makes it difficult and time-consuming to extract all relevant information. Artificial intelligence can become a useful tool to overcome this limitation and help to understand and interpret the data. This project aims at developing the necessary machine learning algorithms with a focus on two aspects: the efficient use of annotations through weakly supervised learning, and the efficient handling of large-size data through large-scale learning algorithms.

-> Michael Götz
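The "large-size data" aspect usually starts with tiling: a gigapixel slide is processed patch by patch so only one small crop is in memory at a time. A minimal sketch of such a tiling scheme (an assumed layout, not a specific library API):

```python
# Iterate over tile origins of a large image; each (y, x) crop can then
# be loaded and processed independently, e.g. in parallel workers.

def iter_tiles(height, width, tile, stride):
    for y in range(0, height - tile + 1, stride):
        for x in range(0, width - tile + 1, stride):
            yield y, x

tiles = list(iter_tiles(1000, 1500, tile=500, stride=500))
print(len(tiles))  # 2 rows x 3 columns = 6 non-overlapping tiles
```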

Multiple Myeloma Image Analysis


The goal of this project is to establish an imaging-based prognostic staging system for patients with multiple myeloma. In symptomatic patients, focal lesions can be present in extensive numbers. The project aims to develop a set of automated detection, segmentation and characterization techniques for whole-body image analysis. The system will quantify and assess the tumor mass trend over time.

-> Andre Klein

-> Amir Kalali
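Once lesions are detected and segmented, the trend quantification itself is simple; the following illustrative example (with made-up numbers) aggregates per-lesion volumes into a total tumor burden per scan date:

```python
# Hypothetical per-lesion volumes (ml) at successive whole-body scans.
scans = {
    "2018-01": [1.2, 0.8, 0.5],
    "2018-07": [1.0, 0.6, 0.4],
    "2019-01": [0.9, 0.5],
}

burden = {date: sum(volumes) for date, volumes in scans.items()}
dates = sorted(burden)
trend = burden[dates[-1]] - burden[dates[0]]   # change since baseline

print(burden)
print("shrinking" if trend < 0 else "growing")
```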

Knowledge-based Large-scale Image Segmentation


This project aims at the development of flexible and accurate methods for automatic semantic image segmentation on large clinical datasets. We focus on a combination of model-based methods with techniques from Active Learning, Online Learning, Transfer Learning and Deep Learning. They allow optimized training on sparse data, continuous learning at runtime and an assessment of segmentation quality, as prerequisites for the successful annotation of large and heterogeneous datasets.

-> Tobias Norajitra
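The Active Learning component can be sketched in a few lines (our toy example, not the project's implementation): from a pool of unlabeled cases, the ones the current model is least certain about are sent to the annotator first.

```python
import math

# Rank unlabeled cases by predictive entropy of the current model.
def entropy(p):
    return 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# Hypothetical model confidences P(foreground) for four unlabeled cases.
pool = {"case_a": 0.97, "case_b": 0.52, "case_c": 0.10, "case_d": 0.45}

ranked = sorted(pool, key=lambda c: entropy(pool[c]), reverse=True)
print(ranked[:2])  # most ambiguous cases (probabilities near 0.5) come first
```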

Radiomics- and Shape-based Detection of Diabetes related Tissue Changes in the German National Cohort


This project aims at an extensive computational analysis of medical data from the multi-center study 'German National Cohort (GNC)'. The GNC offers highly standardized MR image acquisition, holistic documentation of clinical parameters and patient history, as well as large case numbers organized in a nation-wide effort. This makes the examined GNC data an ideal subject for analysis using modern, data-driven machine-learning algorithms. Pre-diabetes and diabetes type 2 are likely associated with quantity and distribution of abdominal tissue types (adipose fat, or fat and iron in the liver) and with liver shape. In this project, information on tissue and shape characteristics will automatically be derived from GNC images and be correlated with pathologic findings from the GNC. The project will focus on strategies for computational quality-assessment of learning-based annotation algorithms, on continuous algorithm re-training, and on complementary radiomics analyses based on hand-crafted and deep learning derived image features.

-> Tobias Norajitra

Image-based Stratification of Glioblastoma Patients


This project focuses on uncertainty in radiomics. Traditionally, a multitude of parameters is computed from images, with which decision support systems are learned. These parameters may, however, be very sensitive to segmentation errors, resulting in overfitting and degraded performance. We make use of deep learning segmentation algorithms that allow uncertainty estimation, based upon which we can learn more robust decision support algorithms.

-> Fabian Isensee
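The principle of sampling-based uncertainty estimation can be illustrated with a toy model (in the spirit of Monte-Carlo sampling; the project itself uses deep segmentation networks): repeated stochastic predictions yield a per-pixel variance map that flags pixels where derived radiomics features would be unreliable.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_prediction(image):
    # stand-in for one stochastic forward pass of a segmentation network
    noisy = image + rng.normal(0.0, 0.1, image.shape)
    return noisy > 0.5

# Toy 2x2 "soft image": two clear pixels, two borderline pixels.
image = np.array([[0.9, 0.55], [0.1, 0.52]])
samples = np.stack([stochastic_prediction(image) for _ in range(200)])

p_fg = samples.mean(axis=0)          # per-pixel foreground frequency
uncertainty = p_fg * (1 - p_fg)      # variance: high where samples disagree
print(uncertainty[0, 0] < uncertainty[0, 1])  # True: borderline pixel is less certain
```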


Radiomics
In Radiomics, high-throughput computing techniques are systematically employed for the conversion of images to higher-dimensional data, i.e. predictive features. The aim is to improve decision support by the subsequent analysis of these features. We study comprehensive MRI phenotypes to link imaging with clinical, biological and genomic parameters in several entities, including prostate cancer, breast cancer and brain tumors.

-> Michael Goetz

-> Paul Jäger

-> Tobias Norajitra
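As a hedged sketch of what "converting an image to predictive features" means (an illustrative first-order feature set, not the project's full pipeline), a region of interest can be summarized as a small feature vector:

```python
import numpy as np

# Toy intensities inside a lesion ROI (hypothetical values).
roi = np.array([12., 15., 14., 30., 28., 13., 15., 29.])

# A few classic hand-crafted first-order radiomics features.
features = {
    "mean": roi.mean(),
    "std": roi.std(),
    "skewness": ((roi - roi.mean()) ** 3).mean() / roi.std() ** 3,
    "energy": float((roi ** 2).sum()),
}
print(features)  # this vector, not the raw image, feeds the decision model
```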

Probabilistic Modeling of Time-Series Data


In this project, we develop algorithms to predict future clinical assessments of glioblastoma patients from existing longitudinal data. We specifically try to estimate spatial invasion probabilities, incorporating novel uncertainty measures that allow us to quantify the confidence of our models’ outputs. This project is carried out in close cooperation with the Department of Neuroradiology at Heidelberg University Hospital, where we have established infrastructure that allows us to test cutting edge algorithms in clinical routine.

-> Jens Petersen

"Deep Probabilistic Modeling of Glioma Growth", Petersen et al., MICCAI 2019

Joint Imaging Platform

Within the German Cancer Consortium (DKTK), the Joint Funding Project "Joint Imaging Platform" will establish a distributed IT infrastructure for image analysis and machine learning in the member institutions. It will facilitate the pooling of analysis methods that can be applied in an automated and standardized manner to patient data in the different centers, allowing for unprecedented cohort sizes. The biggest research challenge is the combination, aggregation and distribution of training data, processes and models for non-shareable sensitive data, as well as the validation of quantitative imaging biomarkers across a multi-institutional consortium. On the implementation side, we investigate distributed learning methods as well as the latest private cloud technologies for a robust deployment of data management and processing.

 -> Jonas Scherer

 -> Klaus Kades

 -> Jasmin Metzger

 -> Peter Neher
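The distributed learning idea can be sketched with a minimal federated averaging loop (conceptual only; the platform itself builds on far more elaborate infrastructure): each center updates a model on its local, non-shareable data, and only the model weights are aggregated centrally.

```python
import numpy as np

# One gradient step of a least-squares model y = w * x on a center's local data.
def local_update(w, local_data, lr=0.1):
    x, y = local_data
    grad = 2 * x * (w * x - y)
    return w - lr * grad.mean()

# Two hypothetical centers; their raw data never leaves the site.
centers = [
    (np.array([1.0, 2.0]), np.array([2.0, 4.0])),
    (np.array([3.0]),      np.array([6.0])),
]

w = 0.0
for _ in range(50):
    local_ws = [local_update(w, data) for data in centers]  # runs at each center
    w = sum(local_ws) / len(local_ws)                       # central aggregation
print(round(float(w), 4))  # converges to the shared optimum w = 2
```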

Anomaly Detection using Unsupervised Deep Learning


A key challenge in deep learning is to optimally leverage not only labeled but also unlabeled data. Our aim in this project is to learn beneficial feature representations using unsupervised methods like generative modelling, density estimation and self-supervised learning. This will allow the application of deep learning in scenarios where data availability has been insufficient so far.

-> David Zimmerer
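The underlying anomaly-detection principle can be shown with a trivial stand-in for a generative model (our illustration; the project uses deep generative models): samples that a model of "normal" data explains poorly receive a high anomaly score.

```python
import numpy as np

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, size=(500, 16))  # unlabeled "normal" training data

# A trivial model of normality: per-feature mean and standard deviation.
mean, std = healthy.mean(axis=0), healthy.std(axis=0)

def anomaly_score(x):
    # average z-score magnitude: distance to the learned model of normality
    return float(np.abs((x - mean) / std).mean())

normal_case = rng.normal(0.0, 1.0, size=16)
abnormal_case = rng.normal(4.0, 1.0, size=16)   # lies far from the training data
print(anomaly_score(normal_case) < anomaly_score(abnormal_case))  # True
```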

Medical Imaging Interaction Toolkit (MITK)

MITK is a modular and versatile open-source software development platform for applications in medical image processing. It has been developed at the German Cancer Research Center since 2002, with contributions and users from an international community in research and industry.

-> MITK Team

-> MITK Homepage

MITK Workbench

The MITK Workbench is a free medical image viewer which supports all common DICOM modalities like CT, MRI, US and RT, as well as a number of research file formats. In addition to 2D, 3D and 3D+t visualization capabilities, numerous plugins for segmentation, registration and other processing steps are available.

-> MITK Team

-> MITK Homepage

Research data management and automated processing

In addition to the interactive processing that MITK focuses on, today's research questions often require standardized and automated processing and easy data access, while reducing the emerging hassles of data transfer, data protection and data storage. We provide scientific cloud and platform solutions and evaluate new exploration and processing capabilities for medical imaging researchers.

 -> Hanno Gao

Internal support and infrastructure

Within the research program "Imaging and Radiooncology", the MICO team develops and provides solutions for scientific software development and testing infrastructure in various departments. Building on the experience of MITK development, we provide source code control and issue trackers as well as custom support for e.g. the integration of applications and data types in MITK-based workflows.

 -> Stefan Dinkelacker

Intraoperative assistance system for mobile C-Arm devices


Intraoperative imaging can help to improve the quality of reduction results in trauma surgery. However, the mobility of the device comes with a lack of information about its orientation relative to the patient. Screws, plates and pathologies further increase the challenge of image understanding. We aim to assist the surgeon in upper ankle surgery by incorporating prior knowledge and information from the contralateral side using 2D/3D reconstruction and deep learning based methods.

 -> Sarina Thomas

Intraoperative assistance for mobile C-arm positioning


Intraoperative imaging guides the surgeon through interventions and leads to higher precision and reduced surgical revisions. For evaluation purposes the surgeon needs to acquire anatomy-specific standardized projections. We aim to replace the current manual positioning procedure of the C-arm involving continuous fluoroscopy by an automatic procedure, thereby reducing the dose and time requirement. We tackle this problem employing data simulation techniques and deep learning based methods.

-> Lisa Kausch


RTToolbox
The RTToolbox is a robust and flexible C++ software library developed to support the quantitative analysis of treatment outcome in radiotherapy. It provides import of radiotherapy data (e.g. plans, dose distributions and structure sets) in the DICOM-RT standard format as well as in ITK-supported data formats. Core features of the RTToolbox include the calculation of DVHs and dose comparison indices, and the computation of radiobiological models (TCP/NTCP). Using the RTToolbox, radiotherapy evaluation applications can be built easily and quickly.

 -> Clemens Hentschke
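The RTToolbox itself is a C++ library; the following is only a conceptual Python sketch of one of its core features, the cumulative dose-volume histogram (DVH): for each dose level, the fraction of the structure's volume receiving at least that dose.

```python
# Cumulative DVH from per-voxel dose values (toy numbers, equal voxel volumes).
def cumulative_dvh(voxel_doses, dose_levels):
    n = len(voxel_doses)
    return [sum(d >= level for d in voxel_doses) / n for level in dose_levels]

doses = [10.0, 20.0, 30.0, 40.0, 50.0]   # Gy, one value per structure voxel
levels = [0.0, 25.0, 45.0]
print(cumulative_dvh(doses, levels))     # [1.0, 0.6, 0.2]
```

At 0 Gy the whole volume qualifies; only one of five voxels receives at least 45 Gy.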

AVID
The aim of AVID is to provide an open solution to orchestrate software components via workflow scripts. The main areas of application are scalable and flexible cohort analysis as well as variation effect and uncertainty quantification.

AVID is especially used in the context of radiotherapy; therefore, many of the integrated components are provided by the RTToolbox. However, it has also been used successfully in other areas like radiological image analysis and image registration error effect studies.

-> Ralf Floca

-> Clemens Hentschke

Forensic Radiology
To make forensic radiology feasible, we develop workflows and processes that combine local and remote radiological infrastructure with the latest technologies of medical image processing. To minimize the resources required at image acquisition sites, we design automated image processing pipelines and secure data transfers that occupy minimal human resources and interfere minimally with pre-existing on-site routine workflows.

This project is a cooperation between the DKFZ, the Institute for Legal and Traffic Medicine (University Clinic HD) and the Institute for Anatomy (University HD).

 -> Ignaz Reicht

MITK-ModelFit and Perfusion

Model fitting plays a central role in the quantitative analysis of medical images. One prominent example is dynamic contrast-enhanced (DCE) MRI, where perfusion-related parameters are estimated using pharmacokinetic modelling. Other applications include mapping the apparent diffusion coefficient (ADC) in diffusion weighted MRI, and the analysis of Z-spectra in chemical exchange saturation transfer (CEST) imaging.

The ready-to-use model fitting toolbox is embedded into MITK and provides tools for model fitting (ROI-based or voxel-by-voxel), fit evaluation and visualization. Any fitting task can be performed given a user-defined model. Being part of MITK, MITK-ModelFit applications can be easily and flexibly incorporated into pre- and post-processing workflows and offer a large set of interoperability options and supported data formats.

A special emphasis is put on the pharmacokinetic analysis of DCE MRI data. Here, a variety of pharmacokinetic models is available. In addition, tools are offered for arterial input function selection and for the conversion from signal to concentration.

-> Ralf Floca
-> Ina Kompan
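The pharmacokinetic fitting task can be sketched with a simplified discrete Tofts-type model (our illustration with toy data and a brute-force grid fit, not the MITK-ModelFit implementation): the tissue curve is the arterial input function (AIF) convolved with an exponential residue, scaled by Ktrans.

```python
import numpy as np

t = np.linspace(0, 5, 50)     # minutes
aif = np.exp(-t)              # toy arterial input function

def tofts(ktrans, ve):
    # discrete convolution of the AIF with an exponential residue function
    kep = ktrans / ve
    dt = t[1] - t[0]
    return ktrans * np.convolve(aif, np.exp(-kep * t))[:len(t)] * dt

# Simulated "measured" concentration curve with a small offset.
measured = tofts(0.25, 0.4) + 0.001

# Brute-force fit over a small parameter grid (stand-in for a real optimizer).
grid = [(k, v) for k in np.linspace(0.05, 0.5, 10) for v in np.linspace(0.1, 0.6, 6)]
best = min(grid, key=lambda p: np.sum((tofts(*p) - measured) ** 2))
print(best)  # recovers parameters close to (0.25, 0.4)
```

In practice this fit is run voxel-by-voxel, producing parameter maps such as Ktrans and ve.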

MRUtilities
Magnetic resonance imaging is moving towards ultra-high field strengths (7 T and higher) to achieve higher resolutions, more enhanced image contrasts and an increased signal-to-noise ratio. This provides the means for improved diagnostic methods and facilitates non-proton MRI such as sodium imaging. However, at ultra-high field, physical and hardware limitations, such as RF field inhomogeneity, manifest as image distortions and artifacts. To overcome these limitations, novel pulse sequences, post-processing procedures and hardware extensions such as parallel transmit coils are required.
The "MRUtilities" toolbox is an extension of MITK which aims to support physicists and physicians conducting research at ultra-high field MRI. Provided applications include B1 mapping and optimized RF shimming for parallel transmit coils.


-> Ina Kompan


Hyppopy
Hyppopy is a Python toolbox for blackbox-function optimization providing an easy-to-use interface for a variety of solver frameworks. Hyppopy gives access to grid search, random and quasi-random, particle swarm and Bayesian solvers. The aim of Hyppopy is to make hyperparameter optimization as simple as possible (in our case, e.g., for the optimization of image processing pipelines or machine learning tasks). It can be easily integrated into existing code bases and provides real-time visualization of the parameter space and the optimization process via a visdom server. The internal design is focused on extensibility, ensuring that custom solvers and future approaches can be integrated.

-> Ralf Floca

-> André Klein
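Hyppopy's own API is not reproduced here; the following is only a minimal random-search sketch of the kind of blackbox optimization such solvers perform: evaluate a black box at sampled points of a search space and keep the best configuration.

```python
import random

random.seed(0)

# Hypothetical blackbox, e.g. the validation error of a pipeline.
def blackbox(params):
    return (params["lr"] - 0.1) ** 2 + (params["depth"] - 3) ** 2

space = {"lr": (0.0, 1.0), "depth": (1, 6)}   # assumed search space

best, best_loss = None, float("inf")
for _ in range(500):
    trial = {"lr": random.uniform(*space["lr"]),
             "depth": random.randint(*space["depth"])}
    loss = blackbox(trial)
    if loss < best_loss:
        best, best_loss = trial, loss

print(best["depth"])  # 3, the optimal depth in this toy landscape
```

Grid, quasi-random or Bayesian solvers differ only in how the trial points are chosen.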

D:cipher
D:cipher is a technology platform for Distributed Computational Image-based PHEnotyping and Radiomics. It is designed to establish a better link between clinical (imaging) data, computational power and methodical tools.

D:cipher supports single-institutional use, where it improves the direct workflow integration of computing tools and the analysis of metadata. It also scales to multi-institutional settings, growing with the available computational resources, the sizes of cohorts and the number of methods. Federated computing capabilities are built in, so no centralization is needed, neither for data nor for methods. By leveraging state-of-the-art open-source technologies, we aim at high interoperability with existing standards and solutions.

-> Ralf Floca
-> Jonas Scherer

HiGHmed
HiGHmed is a highly innovative consortium project in the context of the "Medical Informatics Initiative Germany" that develops novel, interoperable solutions in medical informatics, with the aim to make medical patient data accessible for clinical research and thereby improve both clinical research and patient care. Our image analysis technology (d:cipher) is part of the Omics Data Integration Center (OmicsDIC), which offers sophisticated technologies to process data and to access the information contained in it, from genomics to radiomics. In HiGHmed we also improve the interoperability of image-based information by working on mappings between important standards like DICOM, HL7 FHIR and openEHR.

Link to homepage: HiGHmed

-> Ralf Floca
-> Christian Haux

Trustworthy Federated Data Analytics (TFDA)
Artificial intelligence in medical research can accelerate the acquisition of scientific knowledge, facilitate early diagnosis of diseases and support precision medicine. The necessary analyses of large amounts of health data can only be achieved through the cooperation of a large number of institutions. In such collaborative research scenarios, classic centralized analysis approaches often reach their limits or fail due to complex security or trust requirements.
The goal of the multidisciplinary team in the Trustworthy Federated Data Analytics (TFDA) project is therefore not to store the data centrally, but instead to bring the algorithms for machine learning and analysis to the data, which remain local and decentralized in the respective research centers. As a proof of concept, TFDA will establish a pilot system for federated radiation therapy studies and address the necessary technical, methodological and legal aspects to ensure the quality and trustworthiness of the data analysis and to guarantee the privacy of the patients.

Uncertainty Modeling in Medical Object Detection

The annotation of medical images suffers from high inter- and intra-rater variability, caused by a number of factors such as strong ambiguities in the images and the subjectivity of annotators. This project seeks to develop methods that can handle and model such ambiguity in data and labels.

Machine learning in federations of research institutions

Sharing data between medical research institutions often poses legal and practical problems, although all centers have a common interest in machine learning models with accurate predictions. To overcome these hurdles and make more data available for research, we explore the use of distributed learning algorithms adapted to the conditions in federations of medical centers. The important aspect of protecting sensitive information is addressed by keeping data access limited to the data's owner and by enforcing privacy guarantees on the resulting models through differential privacy.

-> Maximilian Zenk
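The differential-privacy mechanism mentioned above can be sketched conceptually (a simplified scalar illustration, not the project's actual protocol): each center's model update is clipped to bound its sensitivity, and calibrated noise is added before aggregation so that individual records cannot be inferred from the result.

```python
import random

random.seed(42)

def privatize(update, clip=1.0, noise_scale=0.1):
    # 1) clip the update to bound any single contribution's influence
    norm = abs(update)
    clipped = update * min(1.0, clip / norm) if norm > 0 else 0.0
    # 2) add Gaussian noise calibrated to the clipping bound
    return clipped + random.gauss(0.0, noise_scale)

center_updates = [0.8, -2.5, 1.4]                 # hypothetical local updates
private = [privatize(u) for u in center_updates]  # what actually leaves each center
aggregate = sum(private) / len(private)
print(aggregate)
```

Choosing the noise scale relative to the clipping bound is what yields a formal (epsilon, delta) privacy guarantee in real DP mechanisms.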

Robust deep learning applications in cardiology

Cardiovascular diseases are a main cause of death worldwide. Image-based examinations of the heart require time-consuming delineations of the heart's substructures. This is especially obstructive for the analysis of large study cohorts with tens of thousands of patients. In this project we aim to deliver precise and reliable automatic segmentations to facilitate fast and reproducible large-cohort analyses and to deepen the understanding of what differentiates sick from healthy hearts.

-> Peter Full

Derivation of heuristics across multiple data sets

Deep learning based methods are applied to a large number of different medical domains. These methods need to be adjusted for each new data set separately, which requires large computational resources and expert knowledge. This project aims at deriving new heuristics across a large number of data sets to define rules which automatically adjust the hyperparameters of deep learning methods accordingly. In particular, detection algorithms will be examined due to their high number of hyperparameters.

-> Michael Baumgartner



Clinical Natural Language Processing


In the process of diagnostic imaging, it is common for physicians to write down their findings in clinical reports. These findings contain valuable information like observations from the image itself, additional information from the study (e.g. from a physical exam) or diagnostic analyses. We use the reports to extract additional information, generate labels for imaging and address different downstream tasks like patient grading, medical entity recognition or textual similarity. Additionally, we employ multi-modality methods to relate different parts of the text to the corresponding image regions. This allows us to combine the knowledge in the text with the information that is hidden in the images in order to facilitate tasks like cohort retrieval, phenotyping or predictive modelling. Furthermore, by using the d:cipher toolkit we work on solutions to make the developed algorithms easily accessible for other researchers and clinicians by integrating the methods into a standardized research platform.

-> Jan Sellner
-> Klaus Kades
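The textual-similarity task can be illustrated with a deliberately simple baseline (bag-of-words cosine similarity on invented report snippets; the project uses far more advanced NLP models):

```python
from collections import Counter
import math

# Cosine similarity between bag-of-words vectors of two text snippets.
def bow_cosine(a, b):
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb)

r1 = "focal lesion in the left liver lobe"
r2 = "lesion in left lobe of the liver"
r3 = "no cardiac abnormalities detected"

print(bow_cosine(r1, r2) > bow_cosine(r1, r3))  # True: r1 and r2 describe the same finding
```

Such similarity scores are one building block for cohort retrieval: reports close to a query description are returned first.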

Self-Supervised Representation Learning

The acquisition of large amounts of annotated medical imaging data is very expensive and in some cases even impossible due to a lack of attainable ground truth information. Recent advances in deep visual representation learning have shown promising results by formulating a pretext task that exploits the known structure of the raw data itself as a supervision signal. This enables learning representations which yield valuable information for a variety of downstream tasks. In this project, we develop algorithms which can leverage large amounts of unlabeled data in the context of the unique challenges faced in medical imaging.

-> Gregor Köhler
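A classic example of such a pretext task is rotation prediction (one illustration of the self-supervision principle, not necessarily this project's method): labels are generated for free from the raw data by rotating each image and asking the model to predict the applied rotation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_pretext_batch(images):
    # rotate each image by a random multiple of 90 degrees;
    # the rotation index is the "free" supervision signal
    batch, labels = [], []
    for img in images:
        k = int(rng.integers(0, 4))
        batch.append(np.rot90(img, k))
        labels.append(k)
    return batch, labels

images = [rng.random((4, 4)) for _ in range(8)]   # unlabeled toy "images"
batch, labels = make_pretext_batch(images)
print(len(batch), set(labels) <= {0, 1, 2, 3})    # 8 True
```

A network trained to predict these labels must learn image structure, and its features can then be reused for the actual downstream task.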
