Medical Image Computing Research
End-to-end Deep Learning Architectures

Machine Learning-based dMRI Processing

Validation of Fiber Tractography

Learning from Noisy Annotations

Learning from Weak Annotations

Efficient learning from microscopic images

Microscopic or pathology images are a rich source of data; however, the sheer size of these images makes it difficult and time-consuming to extract all relevant information. Artificial intelligence can become a useful tool to overcome this limitation and help to understand and interpret the data. This project aims at developing the necessary machine learning algorithms with a focus on two aspects: the efficient use of annotations through weakly supervised learning, and the efficient handling of large images through large-scale learning algorithms.
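To make the weakly supervised idea concrete, the following sketch (illustrative only, not the project's code) trains a patch scorer with nothing but a slide-level label, using the classic multiple-instance-learning assumption that a slide is positive if at least one of its patches is:

import torch
import torch.nn as nn

class PatchScorer(nn.Module):
    """Scores individual patches; only slide-level labels are needed for training."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 1),
        )

    def forward(self, patches):                      # patches: (num_patches, 3, H, W)
        return self.features(patches).squeeze(-1)    # one score per patch

model = PatchScorer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()

# Toy example: one "slide" decomposed into 64 patches with a single weak label.
patches = torch.randn(64, 3, 32, 32)     # stand-in for tiles cut from a slide
slide_label = torch.tensor(1.0)          # e.g. "contains tumor" at slide level

for _ in range(10):
    optimizer.zero_grad()
    patch_scores = model(patches)
    slide_score = patch_scores.max()     # MIL max-pooling: slide is positive if any patch is
    loss = criterion(slide_score, slide_label)
    loss.backward()
    optimizer.step()
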
Multiple Myeloma Image Analysis

Knowledge-based Large-scale Image Segmentation

This project aims at the development of flexible and accurate methods for automatic semantic image segmentation on large clinical datasets. We focus on a combination of model-based methods with techniques from Active Learning, Online Learning, Transfer Learning and Deep Learning. These allow optimized training on sparse data, continuous learning at runtime and an assessment of segmentation quality, as prerequisites for the successful annotation of large and heterogeneous datasets.
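As a toy illustration of the Active Learning component, the following generic uncertainty-sampling loop (not the project's implementation; placeholder feature vectors stand in for images) repeatedly queries the samples the current model is least certain about:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for a large pool of unannotated images (feature vectors here).
X_pool = rng.normal(size=(1000, 10))
y_pool = (X_pool[:, 0] + 0.5 * X_pool[:, 1] > 0).astype(int)   # hidden "ground truth"

# Small annotated seed set containing both classes.
pos = rng.choice(np.where(y_pool == 1)[0], 5, replace=False)
neg = rng.choice(np.where(y_pool == 0)[0], 5, replace=False)
labeled = list(pos) + list(neg)
unlabeled = [i for i in range(len(X_pool)) if i not in labeled]

model = LogisticRegression()
for round_ in range(5):
    model.fit(X_pool[labeled], y_pool[labeled])
    # Uncertainty sampling: query the samples the current model is least sure about.
    probs = model.predict_proba(X_pool[unlabeled])[:, 1]
    uncertainty = -np.abs(probs - 0.5)
    query = [unlabeled[i] for i in np.argsort(uncertainty)[-20:]]
    labeled += query                                   # "annotate" the queried samples
    unlabeled = [i for i in unlabeled if i not in query]
    print(f"round {round_}: {len(labeled)} labels, "
          f"accuracy {model.score(X_pool, y_pool):.2f}")
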
Radiomics- and Shape-based Detection of Diabetes related Tissue Changes in the German National Cohort

This project aims at an extensive computational analysis of medical data from the multi-center study 'German National Cohort (GNC)'. The GNC offers highly standardized MR image acquisition, holistic documentation of clinical parameters and patient history, as well as large case numbers organized in a nation-wide effort. This makes the GNC data an ideal subject for analysis with modern, data-driven machine learning algorithms. Pre-diabetes and type 2 diabetes are likely associated with the quantity and distribution of abdominal tissue types (adipose tissue, or fat and iron in the liver) and with liver shape. In this project, information on tissue and shape characteristics will be derived automatically from GNC images and correlated with pathologic findings from the GNC. The project will focus on strategies for computational quality assessment of learning-based annotation algorithms, on continuous algorithm re-training, and on complementary radiomics analyses based on hand-crafted and deep-learning-derived image features.
Image-based Stratification of Glioblastoma Patients

Radiomics

In Radiomics, high-throughput computing techniques are systematically employed to convert images into high-dimensional data, i.e. predictive features. The aim is to improve decision support through the subsequent analysis of these features. We study comprehensive MRI phenotypes to link imaging with clinical, biological and genomic parameters in several entities, including prostate cancer, breast cancer and brain tumors.
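The first-order part of such feature extraction can be sketched in a few lines (an illustrative subset only; full radiomics pipelines compute hundreds of shape, intensity and texture features):

import numpy as np

def first_order_features(image, mask, n_bins=32):
    """Compute a few first-order radiomics-style features inside a region of interest."""
    voxels = image[mask > 0].astype(float)
    hist, _ = np.histogram(voxels, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return {
        "mean": voxels.mean(),
        "variance": voxels.var(),
        "skewness": ((voxels - voxels.mean()) ** 3).mean() / voxels.std() ** 3,
        "entropy": float(-(p * np.log2(p)).sum()),
        "p10_p90_range": np.percentile(voxels, 90) - np.percentile(voxels, 10),
    }

# Toy example: a random "MR volume" with a spherical lesion mask.
rng = np.random.default_rng(42)
volume = rng.normal(loc=100, scale=20, size=(64, 64, 64))
zz, yy, xx = np.ogrid[:64, :64, :64]
mask = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 10 ** 2

print(first_order_features(volume, mask))
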
Probabilistic Modeling of Time-Series Data

"Deep Probabilistic Modeling of Glioma Growth", Petersen et al., MICCAI 2019 (https://arxiv.org/abs/1907.04064)
Joint Imaging Platform
Within the German Cancer Consortium (DKTK), the Joint Funding Project “Joint Imaging Platform” will establish a distributed IT infrastructure for image analysis and machine learning in the member institutions. It will facilitate the pooling of analysis methods that can be applied in an automated and standardized manner to patient data in the different centers, allowing for unprecedented cohort sizes. The biggest research challenge is the combination, aggregation and distribution of training data, processes and models for non-shareable sensitive data, as well as the validation of quantitative imaging biomarkers across a multi-institutional consortium. On the implementation side, we investigate distributed learning methods as well as the latest private cloud technologies for a robust deployment of data management and processing.
Anomaly Detection using Unsupervised Deep Learning

Medical Imaging Interaction Toolkit (MITK)
The Medical Imaging Interaction Toolkit (MITK) is a free, open-source software system for the development of interactive medical image processing software. It has been developed at the German Cancer Research Center since 2002 with contributions and users from an international community in research and industry.
MITK Workbench
The MITK Workbench is a free medical image viewer which supports all common DICOM modalities such as CT, MRI, US and RT, as well as a number of research file formats. In addition to 2D, 3D and 3D+t visualization capabilities, numerous plugins for segmentation, registration and other processing steps are available.
Research data management and automated processing
Internal support and infrastructure
Intraoperative assistance system for mobile C-Arm devices

Intraoperative imaging can help to improve the quality of reduction results in trauma surgery. However, the mobility of the device comes with a lack of information about its orientation relative to the patient. Screws, plates and pathologies further increase the challenge of image understanding. We aim to assist the surgeon in upper ankle surgery by incorporating prior knowledge and information from the contralateral side using 2D/3D reconstruction and deep learning based methods.
Intraoperative assistance for mobile C-arm positioning

Intraoperative imaging guides the surgeon through interventions and leads to higher precision and reduced surgical revisions. For evaluation purposes the surgeon needs to acquire anatomy-specific standardized projections. We aim to replace the current manual positioning procedure of the C-arm involving continuous fluoroscopy by an automatic procedure, thereby reducing the dose and time requirement. We tackle this problem employing data simulation techniques and deep learning based methods.
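A deliberately simplified sketch of this simulation-plus-learning idea is shown below (hypothetical code: parallel projections of a random volume stand in for realistic digitally reconstructed radiographs, and a single rotation angle stands in for the full C-arm pose):

import numpy as np
import torch
import torch.nn as nn
from scipy.ndimage import rotate

# A fixed synthetic "patient" volume standing in for a preoperative CT.
rng = np.random.default_rng(0)
volume = rng.random((32, 32, 32)).astype(np.float32)

def simulate_projection(angle_deg):
    """Parallel projection of the rotated volume, a crude stand-in for a fluoroscopic image."""
    rotated = rotate(volume, angle_deg, axes=(1, 2), reshape=False, order=1)
    return rotated.sum(axis=0)                        # (32, 32) projection image

# Build a small simulated training set: projection image -> C-arm angle.
angles = rng.uniform(-30, 30, size=200).astype(np.float32)
images = np.stack([simulate_projection(a) for a in angles]).astype(np.float32)[:, None]

model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.from_numpy(images)
y = torch.from_numpy(angles)[:, None]

for epoch in range(50):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)   # regress the simulated acquisition angle
    loss.backward()
    optimizer.step()
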
RTToolbox

AVID
AVID is especially used in the context of radiotherapy; therefore, many of the integrated components are provided by the RTToolbox. However, it has also been used successfully in other areas such as radiological image analysis and studies of image registration error effects.
CSI-HD
To make forensic radiology feasible, we develop workflows and processes that combine local and remote radiological infrastructure as well as latest technologies of medical image processing. To minimize required resources at image acquisition sites, we design automated image processing pipelines and secure data transfers to minimally occupy human resources and minimally interfere with previously existing on-site routine workflows.
This project is a cooperation between the DKFZ, the Institute for Legal and Traffic Medicine (University Hospital Heidelberg) and the Institute for Anatomy (Heidelberg University).
MITK-ModelFit and Perfusion
Model fitting plays a central role in the quantitative analysis of medical images. One prominent example is dynamic contrast-enhanced (DCE) MRI, where perfusion-related parameters are estimated using pharmacokinetic modelling. Other applications include mapping the apparent diffusion coefficient (ADC) in diffusion weighted MRI, and the analysis of Z-spectra in chemical exchange saturation transfer (CEST) imaging.
The ready-to-use model fitting toolbox is embedded into MITK and provides tools for model fitting (ROI-based or voxel-by-voxel), fit evaluation and visualization. Any fitting task can be performed given a user-defined model. Being part of MITK, MITK-ModelFit applications can be easily and flexibly incorporated into pre- and post-processing workflows and offer a large set of interoperability options and supported data formats.
A special emphasis is put on the pharmacokinetic analysis of DCE MRI data. Here, a variety of pharmacokinetic models is available. In addition, tools are offered for arterial input function selection and for conversion from signal to concentration.
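The ADC example mentioned above illustrates what voxel-by-voxel model fitting means in practice; the following sketch uses SciPy on synthetic data and is not MITK-ModelFit's own API:

import numpy as np
from scipy.optimize import curve_fit

def adc_model(b, s0, adc):
    """Monoexponential diffusion signal model: S(b) = S0 * exp(-b * ADC)."""
    return s0 * np.exp(-b * adc)

# Synthetic diffusion-weighted data: 4 b-values, 16x16 voxels, ADC around 1e-3 mm^2/s.
b_values = np.array([0.0, 200.0, 500.0, 1000.0])            # s/mm^2
rng = np.random.default_rng(1)
true_adc = rng.uniform(0.5e-3, 2.0e-3, size=(16, 16))
signals = 1000.0 * np.exp(-b_values[:, None, None] * true_adc)
signals += rng.normal(scale=5.0, size=signals.shape)         # measurement noise

# Voxel-by-voxel fit, as a model fitting toolbox would do on a per-voxel basis.
adc_map = np.zeros((16, 16))
for i in range(16):
    for j in range(16):
        popt, _ = curve_fit(adc_model, b_values, signals[:, i, j],
                            p0=(signals[0, i, j], 1e-3))
        adc_map[i, j] = popt[1]

print("mean absolute ADC error:", np.abs(adc_map - true_adc).mean())
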
MITK-MRUtilities
Magnetic resonance imaging is moving towards ultra-high field strengths (7T and higher) to achieve higher resolutions, more enhanced image contrasts and an increased signal-to-noise ratio. This provides the means for improved diagnostic methods and facilitates non-proton MRI such as sodium imaging. However, at ultra-high field, physical and hardware limitations, such as RF field inhomogeneity, manifest as image distortions and artifacts. To overcome these limitations, novel pulse sequences, post-processing procedures and hardware extensions such as parallel transmit coils are required.
The "MRUtilities"-toolbox is an extension of MITK which aims to support physicists and physicians conducting research at ultra-high field MRI. Provided applications include B1-mapping and optimized RF-shimming for parallel transmit coils.
Hyppopy

Hyppopy is a Python toolbox for blackbox function optimization providing an easy-to-use interface for a variety of solver frameworks. Hyppopy gives access to grid search, random and quasi-random, particle swarm and Bayesian solvers. The aim of Hyppopy is to make hyperparameter optimization as simple as possible (in our case, e.g., for the optimization of image processing pipelines or machine learning tasks). It can easily be integrated into existing code bases and provides real-time visualization of the parameter space and the optimization process via a visdom server. The internal design is focused on extensibility, ensuring that custom solvers and future approaches can be integrated.
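To make the blackbox setting concrete, the following minimal random-search loop shows the general pattern that Hyppopy wraps behind a unified solver interface; it deliberately does not use Hyppopy's actual API:

import random

def black_box(params):
    """Stand-in objective, e.g. validation error of an image processing pipeline."""
    return (params["learning_rate"] - 0.01) ** 2 + (params["smoothing"] - 3.0) ** 2

search_space = {
    "learning_rate": lambda: 10 ** random.uniform(-4, -1),   # log-uniform sampling
    "smoothing": lambda: random.uniform(0.0, 10.0),
}

def random_search(objective, space, max_iterations=200):
    """Evaluate randomly sampled configurations and keep the best one."""
    best_params, best_loss = None, float("inf")
    for _ in range(max_iterations):
        candidate = {name: sample() for name, sample in space.items()}
        loss = objective(candidate)
        if loss < best_loss:
            best_params, best_loss = candidate, loss
    return best_params, best_loss

print(random_search(black_box, search_space))
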
D:cipher
D:cipher is a technology platform for Distributed Computational Image-based PHEnotyping and Radiomics. It is designed to establish a better link between clinical (imaging) data, computational power and methodical tools.
D:cipher supports single-institutional use, where it improves the direct workflow integration of computing tools and the analysis of metadata. However, d:cipher also scales to multi-institutional settings, growing with the available computational resources, the sizes of cohorts and the number of available methods. Federated computing capabilities are built in, so no centralization is needed, neither for data nor for methods. By leveraging state-of-the-art open-source technologies we aim at high interoperability with existing standards and solutions.
HiGHmed
HiGHmed is a highly innovative consortium project within the "Medical Informatics Initiative Germany" that develops novel, interoperable solutions in medical informatics with the aim of making medical patient data accessible for clinical research, in order to improve both clinical research and patient care. Our image analysis technology (d:cipher) is part of the Omics Data Integration Center (OmicsDIC), which offers sophisticated technologies to process data and to access the information contained in it, from genomics to radiomics. In HiGHmed we also improve the interoperability of image-based information by working on mappings between important standards such as DICOM, HL7 FHIR and openEHR.
TFDA
Artificial intelligence in medical research can accelerate the acquisition of scientific knowledge, facilitate early diagnosis of diseases and support precision medicine. The necessary analyses of large amounts of health data can only be achieved through the cooperation of a large number of institutions. In such collaborative research scenarios, classic centralized analysis approaches often reach their limits or fail due to complex security or trust requirements.
The goal of the multidisciplinary team in the Trustworthy Federated Data Analytics (TFDA) project is therefore not to store the data centrally, but instead to bring the algorithms for machine learning and analysis to the data, which remains local and decentralized in the respective research centers. As a proof of concept, TFDA will establish a pilot system for federated radiation therapy studies and deal with the necessary technical, methodological and legal aspects to ensure the quality and trustworthiness of the data analysis and to guarantee the privacy of the patients.
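The "bring the algorithm to the data" pattern can be sketched with a toy federated-averaging example (purely illustrative, not the TFDA system itself): each center trains on its private data, and only model parameters are aggregated.

import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])

def make_local_dataset(n):
    """Each center keeps its own data; only model parameters ever leave the site."""
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

centers = [make_local_dataset(n) for n in (200, 500, 300)]

def local_update(w, X, y, lr=0.05, epochs=5):
    """Gradient descent on one center's private data, starting from the global model."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_global = np.zeros(3)
for communication_round in range(20):
    local_models = [local_update(w_global, X, y) for X, y in centers]
    sizes = np.array([len(y) for _, y in centers], dtype=float)
    # Federated averaging: aggregate parameters, weighted by local dataset size.
    w_global = np.average(local_models, axis=0, weights=sizes)

print("recovered weights:", np.round(w_global, 2))
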
Uncertainty Modeling in Medical Object Detection
The annotation of medical images suffers from high inter- and intra-rater variability, caused by a number of factors such as strong ambiguities in the images and the subjectivity of annotators. This project seeks to develop methods that can handle and model such ambiguity in data and labels.
Machine learning in federations of research institutions
Sharing data between medical research institutions often poses legal and practical problems, although all centers have a common interest in machine learning models with accurate predictions. To overcome these hurdles and make more data available for research, we explore the use of distributed learning algorithms adapted to the conditions in federations of medical centers. The important aspect of protecting sensitive information is addressed by keeping data access limited to the data's owner and enforcing privacy guarantees on the resulting models by differential privacy.
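The differential privacy aspect can be illustrated as follows: before a model update leaves a center, it is clipped and perturbed with calibrated Gaussian noise (a simplified sketch of the Gaussian mechanism; the parameter values are arbitrary, and a real deployment would track the exact privacy budget).

import numpy as np

rng = np.random.default_rng(0)

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1):
    """Clip a center's model update and add Gaussian noise before sharing it.
    The noise scale relative to the clipping bound determines the privacy guarantee."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# Toy updates from three centers (e.g. differences between local and global model weights).
updates = [rng.normal(size=10) for _ in range(3)]
shared = [privatize_update(u) for u in updates]
aggregated = np.mean(shared, axis=0)   # the server only ever sees noisy, clipped updates
print(aggregated)
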
Robust deep learning applications in cardiology
Cardiovascular diseases are a main cause of death worldwide. Image-based examinations of the heart require time-consuming delineations of the heart's substructures. This is especially obstructive for the analysis of large study cohorts with tens of thousands of patients. In this project we aim to deliver precise and reliable automatic segmentations to facilitate fast and reproducible large-cohort analyses and to deepen the understanding of what differentiates sick from healthy hearts.
Derivation of heuristics across multiple data sets
Deep learning based methods are applied to a large number of different medical domains. These methods need to be adjusted for each new dataset separately, which requires large computational resources and expert knowledge. This project aims at deriving new heuristics across a large number of datasets in order to define rules that automatically adjust the hyperparameters of deep learning methods. Detection algorithms in particular will be examined due to their high number of hyperparameters.
Uncertainty estimation in deep learning
Clinical Natural Language Processing

In the process of diagnostic imaging, it is common that physicians write down their findings in clinical reports. These findings contain valuable information such as observations from the image itself, additional information from the study (e.g. from a physical exam) or diagnostic analyses. We use the reports to extract additional information, generate labels for imaging and address different downstream tasks such as patient grading, medical entity recognition or textual similarity. Additionally, we employ multi-modal methods to relate different parts of the text to the corresponding image regions. This allows us to combine the knowledge in the text with the information that is hidden in the images in order to facilitate tasks like cohort retrieval, phenotyping or predictive modelling. Furthermore, by using the d:cipher toolkit we work on solutions to make the developed algorithms easily accessible for other researchers and clinicians by integrating the methods into a standardized research platform.
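As a small, hypothetical illustration of how labels for imaging can be generated from report text, the rule-based sketch below derives weak labels sentence by sentence; the actual work relies on learned NLP models rather than hand-written rules:

import re

# Hypothetical, greatly simplified rules; real systems use trained NLP models
# and proper negation/uncertainty handling.
FINDING_PATTERNS = {
    "pleural_effusion": re.compile(r"pleural effusion", re.IGNORECASE),
    "pneumothorax": re.compile(r"pneumothorax", re.IGNORECASE),
}
NEGATION = re.compile(r"\b(no|without|keine?)\b", re.IGNORECASE)

def weak_labels_from_report(report_text):
    """Derive image-level labels from a free-text report, sentence by sentence."""
    labels = {name: 0 for name in FINDING_PATTERNS}
    for sentence in re.split(r"[.\n]", report_text):
        negated = bool(NEGATION.search(sentence))
        for name, pattern in FINDING_PATTERNS.items():
            if pattern.search(sentence) and not negated:
                labels[name] = 1
    return labels

report = "Small right-sided pleural effusion. No evidence of pneumothorax."
print(weak_labels_from_report(report))   # {'pleural_effusion': 1, 'pneumothorax': 0}
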
Self-Supervised Representation Learning
The acquisition of large amounts of annotated medical imaging data is very expensive and in some cases even impossible due to a lack of attainable ground truth information. Recent advances in deep visual representation learning have shown promising results by formulating a pretext task that exploits the known structure of the raw data itself as a supervision signal. This enables learning representations that carry valuable information for a variety of downstream tasks. In this project, we develop algorithms which can leverage large amounts of unlabeled data while addressing the unique challenges of medical images.
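A minimal example of such a pretext task is rotation prediction on unlabeled images (one of several possible pretext tasks, shown here only to illustrate the principle):

import torch
import torch.nn as nn

# Pretext task: predict by how much an (unlabeled) image was rotated.
# The rotation label comes for free from the data itself, so no annotation is needed.
encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)
rotation_head = nn.Linear(32, 4)          # 4 classes: 0, 90, 180, 270 degrees
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(rotation_head.parameters()), lr=1e-3)

unlabeled_images = torch.randn(128, 1, 32, 32)   # stand-in for unannotated medical images

for _ in range(10):
    k = torch.randint(0, 4, (unlabeled_images.shape[0],))
    rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                           for img, r in zip(unlabeled_images, k)])
    logits = rotation_head(encoder(rotated))
    loss = nn.functional.cross_entropy(logits, k)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After pretraining, the encoder can be fine-tuned on a small annotated downstream dataset.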