
Medical Image Computing Research

Helmholtz Metadata Collaboration (HMC) - Hub Health

© dkfz.de

The Helmholtz Metadata Collaboration Platform develops concepts and technologies for efficient and interdisciplinary metadata management spanning the Helmholtz research areas Energy, Earth and Environment, Health, Matter, Information, Aeronautics, Space and Transport. As HMC Hub Health, we support researchers and clinicians in structuring, standardizing, and expanding the collection of metadata to facilitate the re-use, interoperability, reproducibility, and transparency of their data.

More information: https://www.helmholtz-metadaten.de

Dr. Marco Nolden
Lucas Kulla

Predicting Immunotherapy Outcome of Lung Cancer Patients by Composite Radiomics Signatures in CT Scans


Radiomics and deep learning can help extract essential information from medical images to predict future events. The need to predict the potential therapy outcome at an early stage of therapy is underscored by the high proportion (about 50 %) of advanced lung cancer patients who are assigned to receive immunotherapy but do not respond to this treatment and succumb to disease progression. Moreover, the information available in standard clinical settings (e.g. molecular biomarkers, clinical measurements) is sometimes not sufficient for the assignment of immunotherapy. The aim of this project is to make the prediction of the potential treatment outcome in lung cancer patients at an early stage of therapy more precise through the inclusion of radiomics signatures. We classify patients into potential responders and non-responders, thereby increasing potential survival by opening up the possibility of selecting other treatments with a better prognosis for the individual patient.
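As a rough illustration of what a radiomics signature is built from, the following sketch computes a few common first-order features (mean, variance, skewness, histogram entropy) over a tumor region of interest. This is a hypothetical toy example, not the feature set actually used in this project:

```python
import numpy as np

def first_order_features(image, mask, bins=32):
    """Compute simple first-order radiomics features from a masked ROI."""
    roi = image[mask > 0].astype(float)
    hist, _ = np.histogram(roi, bins=bins)
    p = hist / hist.sum()                      # discrete intensity distribution
    p = p[p > 0]
    return {
        "mean": roi.mean(),
        "variance": roi.var(),
        "skewness": ((roi - roi.mean()) ** 3).mean() / roi.std() ** 3,
        "entropy": -(p * np.log2(p)).sum(),    # histogram entropy
    }

# Toy example: a 2D "CT slice" with a bright lesion in the center
img = np.random.default_rng(0).normal(0, 10, (64, 64))
msk = np.zeros((64, 64), dtype=np.uint8)
msk[24:40, 24:40] = 1
img[msk > 0] += 50                             # lesion is ~50 HU brighter
features = first_order_features(img, msk)
```

In a real radiomics pipeline, many such features (including texture and shape descriptors) are combined into a signature and fed to a classifier.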
 

Leveraging Similarity between Learned Representations

Deep learning methods provide us with powerful learned representations. Despite different learning methods, architectures and optimizers, these representations exhibit a surprising amount of similarity when trained on related tasks. So far, this similarity has mostly been used as a tool to gain insights into poorly understood phenomena of deep learning methods. The goal of this project is to utilize the similarity not only to gather insights but also to exploit the redundancy and merge similar representations between trained models. Developing such methods will enable novel life-long learning methods and redundancy reduction schemes in groups of CNNs.
 

Deep Learning for Discovering Predictive Biomarkers in Glioblastoma Imaging

It is believed that certain sub-groups of patients may be more responsive to glioblastoma treatment using bevacizumab (BEV). The overall goal of the project is to identify such subgroups as determined by predictive non-invasive imaging biomarkers. To this end, we develop a deep learning method to learn those predictive biomarkers from pre-treatment imaging data while distinguishing them from prognostic biomarkers, which influence the outcome regardless of treatment status.
 

Learning from multicentric medical imaging data

Sharing data between medical research institutions often poses legal and practical problems, although all centers have a common interest in machine learning models with high accuracy and robustness. To overcome these hurdles and make more data available for research, we explore the use of distributed learning and analysis algorithms in federations of medical centers. By keeping the data within its owners' IT systems and bringing the algorithms to them, such collaborations can provide very diverse datasets. This project aims to develop methods for learning from multicentric data and to make deep learning algorithms more robust against the dataset shifts that occur in such multicentric settings.
 

Self-configuring Medical Object Detection and Instance Segmentation


Simultaneous localization and categorization of objects in medical images, also referred to as medical object detection, is of high clinical relevance because diagnostic decisions often depend on the rating of objects rather than, e.g., individual pixels. For this task, the cumbersome and iterative process of method configuration constitutes a major research bottleneck. This project aims to systematize and automate the configuration process for medical object detection and instance segmentation. The effectiveness of the developed methods is evaluated on a diverse pool of medical datasets.
 

Anomaly Detection Using Unsupervised Learning for Medical Images


An assumption-free automatic check of medical images for potentially overlooked anomalies would be a valuable assistance for radiologists. Deep learning, and especially generative models such as Variational Auto-Encoders (VAEs), have shown great potential in the unsupervised learning of data distributions. By decoupling abnormality detection from reference annotations, these approaches are completely independent of human input and can therefore be applied to any medical condition or image modality. In principle, this allows for an abnormality check and even the localization of the parts of an image that are most suspicious.
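The underlying idea can be sketched without any deep learning machinery: fit a model of the "normal" data distribution, then flag inputs that the model reconstructs poorly. In this toy illustration a one-component PCA reconstruction stands in for the VAE; it is purely conceptual and not the project's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" training data: samples concentrated along one direction in 2D
normal = rng.normal(0, 1, (500, 1)) @ np.array([[1.0, 0.5]])
normal += rng.normal(0, 0.05, normal.shape)

# Fit a 1-component PCA as a stand-in generative model of normal data
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
component = vt[:1]                       # principal direction of "normal"

def anomaly_score(x):
    """Reconstruction error: large for samples off the normal manifold."""
    proj = (x - mean) @ component.T @ component + mean
    return np.linalg.norm(x - proj, axis=-1)

inlier = np.array([[2.0, 1.0]])          # roughly on the learned direction
outlier = np.array([[2.0, -2.0]])        # far off it -> high score
```

A VAE plays the same role for images: it learns the distribution of healthy anatomy, and regions it cannot reconstruct well are flagged as suspicious.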

David Zimmerer

Multitask Segmentation using partially annotated datasets

There are many partially annotated public and private datasets, but only a few multi-organ datasets, since the annotation of medical images is very time-consuming and costly. However, for many applications, especially in radiotherapy, segmentations of all organs at risk are needed. This project aims to exploit the potential of several partially annotated datasets to train a multitask segmentation network.
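One common way to train a single network on several partially annotated datasets is to evaluate the loss only on the classes that are actually labeled in each sample. A minimal NumPy sketch of such a masked loss follows; the class names are made up for illustration and this is not this project's implementation:

```python
import numpy as np

def masked_bce(pred, target, labeled):
    """Binary cross-entropy per class, averaged only over annotated classes.

    pred, target: arrays of shape (num_classes, H, W)
    labeled:      boolean array of shape (num_classes,), True where the
                  dataset provides annotations for that class.
    """
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    bce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    per_class = bce.reshape(bce.shape[0], -1).mean(axis=1)
    return per_class[labeled].mean()   # unlabeled classes contribute nothing

# Toy example: dataset A annotates only "liver" (class 0), not "spleen" (class 1)
pred = np.full((2, 4, 4), 0.9)
target = np.zeros((2, 4, 4))
target[0] = 1.0                        # liver mask present
labeled = np.array([True, False])      # spleen annotation missing
loss = masked_bce(pred, target, labeled)
```

Because unannotated classes are excluded from the average, datasets with different label sets can be mixed freely in one training run.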
 

Transformers and self-attention in medical image analysis

Transformer-based architectures were introduced to computer vision with the Vision Transformer (ViT). Since then, research efforts have sought to replicate the breakthroughs of natural language processing in vision tasks such as classification, detection and segmentation using transformers. Medical image segmentation has seen its share of this trend, with some works incorporating ViT backbones and others combining hierarchical features with transformer layers, promising high performance. This project explores the effectiveness of transformer-based architectures, in comparison to well-understood convolutional nets, in the face of realistic dataset sizes in medical image analysis.

Saikat Roy

Characterization and prediction of COPD as a comorbidity from computed tomography imaging

Chronic Obstructive Pulmonary Disease (COPD) is a common lung disease characterized by persistent or recurrent respiratory symptoms. Different patterns ("phenotypes") of lung damage are observed, with different consequences for individual therapy, e.g. the destruction of the alveolar sacs (emphysema) or bronchial wall thickening and obstruction with mucus (airway disease). Airflow-based lung function tests typically fail to detect subtle changes within the lungs or within sub-regions of the lung. Beyond visual inspection of computed tomography (CT), quantitative CT analysis using computer-aided detection and deep learning techniques promises more insight into the lung and its reactions to disease or medication. We aim to further analyze CT images by exploring unseen patterns and clusters with deep learning in order to find new methods for the classification and monitoring of COPD.

Silvia Dias Almeida

Large scale image analysis and computational pathology

The diagnosis of many diseases is based on the histopathological examination of tissue samples. Various stains are used to evaluate tissue, each intended to visualize different characteristics of the tissue sample. By default, Hematoxylin and Eosin (H&E) staining is applied to visualize the general characteristics of the tissue. However, additional special immunohistochemical (IHC) stains are necessary for an accurate diagnosis. The goal of this project is to use deep learning to predict the IHC expressions of a tissue sample based on its H&E staining. In addition, we want to enable computational pathology workflows on the JIP by providing a standardized infrastructure for pathology data; currently, digital pathology is not well established due to the proprietary file formats of different vendors.
 

Helmholtz Federated IT Services (HIFIS) Consulting

HIFIS offers free-of-charge consulting as a service to research groups under the Helmholtz umbrella. We help you deal with specific licensing issues, pointing out solutions for improving your software or setting up new projects. We are also very happy to discuss other software engineering topics, such as the software engineering process in general. We are a small team that tries to help as many researchers and research groups as possible across all Helmholtz institutes.

Ashis Ravindran

Addressing misalignment for enhanced prostate MRI analysis

The diagnosis of prostate cancer is one of the most challenging tasks in oncology, requiring multi-modal MRI. Deep learning techniques have already been applied successfully to such medical datasets to support various analytical tasks, such as image classification, object detection and semantic segmentation. However, there is not yet a common standard for preprocessing images with the misalignments that naturally occur between the different MRI modalities. The goal of this project is to find optimal strategies for misalignment handling, optimized for clinically applicable tasks such as object detection and semantic segmentation, for enhanced prostate cancer diagnosis.
 

Joint Imaging Platform

Within the German Cancer Consortium (DKTK), the Joint Funding Project "Joint Imaging Platform" will establish a distributed IT infrastructure for image analysis and machine learning in the member institutions. It will facilitate the pooling of analysis methods that can be applied in an automated and standardized manner to patient data in the different centers, allowing for unprecedented cohort sizes. The biggest research challenge is the combination, aggregation and distribution of training data, processes and models for non-shareable sensitive data, as well as the validation of quantitative imaging biomarkers across a multi-institutional consortium. On the implementation side, we investigate distributed learning methods as well as the latest private cloud technologies for a robust deployment of data management and processing.

Jonas Scherer
Klaus Kades

Self-Supervised Representation Learning in Medical Image Analysis

The achievements of supervised deep learning models are highly dependent on plentiful, high-quality annotated data. Despite the availability of large amounts of unlabeled medical images, annotation is especially expensive in the context of Medical Image Analysis. To reduce the costly and time-consuming annotation effort, training models which don't rely on many annotated labels remains an important goal in this field. Self-supervised learning methods allow pretraining networks with unlabeled data. The representations obtained from such pretraining can then serve to fine-tune networks on limited amounts of labeled data. In this project, we investigate which inductive biases these self-supervised methods have to introduce in order to allow efficient downstream training in the context of various Medical Image Analysis tasks.
 

Medical Imaging Interaction Toolkit (MITK)

MITK is a very modular and versatile open-source software development platform for applications in medical image processing. It has been developed at the German Cancer Research Center since 2002, with contributions and users from an international community in research and industry.
 

MITK Modelfit and Perfusion

Model fitting plays a central role in the quantitative analysis of medical images. One prominent example is dynamic contrast-enhanced (DCE) MRI, where perfusion-related parameters are estimated using pharmacokinetic modelling. Other applications include mapping the apparent diffusion coefficient (ADC) in diffusion weighted MRI, and the analysis of Z-spectra in chemical exchange saturation transfer (CEST) imaging.
The ready-to-use model fitting toolbox is embedded into MITK and provides tools for model fitting (ROI-based or voxel-by-voxel), fit evaluation and visualization. Any fitting task can be performed given a user-defined model. Being part of MITK, MITK-ModelFit applications can be easily and flexibly incorporated into pre- and post-processing workflows and offer a large set of interoperability options and supported data formats.
A special emphasis is put on the pharmacokinetic analysis of DCE MRI data, for which a variety of pharmacokinetic models is available. In addition, tools are offered for arterial input function estimation and for the conversion from signal to concentration.
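As an example of such a fitting task, the mono-exponential model used for ADC mapping in diffusion-weighted MRI, S(b) = S0 * exp(-b * ADC), can be fitted voxel-by-voxel with a simple log-linear least-squares fit. This is an illustrative NumPy sketch, not MITK-ModelFit code:

```python
import numpy as np

def fit_adc(signals, bvalues):
    """Voxel-wise log-linear fit of S(b) = S0 * exp(-b * ADC).

    signals: array of shape (num_bvalues, num_voxels)
    bvalues: array of shape (num_bvalues,)
    Returns (S0, ADC) arrays, one value per voxel.
    """
    y = np.log(np.clip(signals, 1e-6, None))        # linearize the model
    A = np.stack([np.ones_like(bvalues), -bvalues], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)  # solve A @ x = y
    s0, adc = np.exp(coeffs[0]), coeffs[1]
    return s0, adc

# Toy example: two noiseless voxels with known ADC values
b = np.array([0.0, 500.0, 1000.0])                  # b-values in s/mm^2
true_adc = np.array([1.0e-3, 2.0e-3])               # ADC in mm^2/s
sig = 100.0 * np.exp(-np.outer(b, true_adc))
s0, adc = fit_adc(sig, b)
```

Real pipelines additionally handle noise floors and may prefer nonlinear fitting, but the log-linear form conveys the structure of the problem.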

Ina Kompan
Ralf Floca

Research data management and automated processing

In addition to the interactive processing that MITK focuses on, today's research questions often require standardized, automated processing and easy data access, while reducing the hassles of data transfer, data protection and data storage. We provide scientific cloud and platform solutions and evaluate new exploration and processing capabilities for medical imaging researchers.

Hanno Gao

Automatic Image Analysis in Patients with Multiple Myeloma

Multiple Myeloma (MM) is a malignancy of bone marrow plasma cells, so-called myeloma cells, which disrupt the production of new blood cells and cause bone breakdown. In recent years, modern imaging technologies such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) have gained a lot of attention in diagnosis and staging of MM and the standardized, comprehensive evaluation of whole-body imaging is of great interest. In this project, we investigate fully automatic image analysis methods for the diagnosis and image staging of myeloma patients. This includes bone marrow segmentation and consecutive radiomics analysis.
 

End-To-End Text Classification of Multi-Institutional Radiology Findings

In the process of diagnostic imaging, physicians commonly write down their findings in clinical reports. These findings contain valuable information such as observations from the image itself, additional information from the study (e.g. from a physical exam) or diagnostic analyses. We use the reports to extract additional information, generate labels for imaging and apply different downstream tasks such as patient grading, medical entity recognition or textual similarity. Additionally, we employ multi-modality methods to relate different parts of the text to the corresponding image regions. This allows us to combine the knowledge in the text with the information that is hidden in the images in order to facilitate tasks like cohort retrieval, phenotyping or predictive modelling. Furthermore, using the Kaapana toolkit, we work on solutions to make the developed algorithms easily accessible to other researchers and clinicians by integrating the methods into a standardized research platform.
 

Intraoperative assistance for mobile C-arm positioning


Intraoperative imaging guides the surgeon through interventions and leads to higher precision and reduced surgical revisions. For evaluation purposes the surgeon needs to acquire anatomy-specific standardized projections. We aim to replace the current manual positioning procedure of the C-arm involving continuous fluoroscopy by an automatic procedure, thereby reducing the dose and time requirement. We tackle this problem employing data simulation techniques and deep learning based methods.

Lisa Kausch

Automatic image-based pedicle screw planning


CT-navigated spinal instrumentation requires intraoperative screw trajectory planning in CT volumes. In current clinical routine this is often performed manually, which is error-prone and time-consuming. This project focuses on the development of deep learning-based methods for automatic image-based pedicle screw planning. Leveraging a large intraoperative planning dataset, the screw planning task is interpreted as a segmentation task, and screw dimensions, location and orientation are automatically predicted based on the image context.
 

Radiological Cooperative Network - RACOON


The RACOON project aims to create a country-wide infrastructure for the structured acquisition of radiological imaging data of COVID-19 cases. By connecting 36 university clinics that provide images including structured reports of the diagnosis, it forms a solid foundation for COVID-19-related radiological research in Germany. In later phases, the created infrastructure and collected datasets can serve early-detection systems and AI-supported medical decision support systems and are therefore an important step towards achieving pandemic preparedness.
The Department of Medical Image Computing provides its expertise in building federated machine learning infrastructures. It contributes the Kaapana software platform, which enables federated learning and image analysis as well as method sharing between the partners by supporting the execution of containerized methods either on-site as part of the local RACOON nodes or centrally as part of RACOON-Central.
 
More information: https://racoon.network/
 

Computational analysis of subclinical comorbidities in clinical routine CT data

Extensive research has been conducted in the field of image-based computational analysis of major clinical diseases. By contrast, little is known about the potential variety of interrelations between typically co-occurring pathologies. During clinical routine, diagnosis and treatment are normally targeted at a primary disease, while co-occurring pathologies often remain undetected and underdiagnosed, despite their expected substantial effect on overall prognoses and treatment outcomes. This project focuses on a more holistic analysis of imaging data from clinical routine as a means to automatically detect and quantify an expected variety of co-occurring diseases. To this end, a well-defined subset of pathologies and datasets at University Hospital Heidelberg serves as a foundation for the development of learning-based image analysis algorithms, with the goal of promoting a deeper understanding of comorbidities encountered in clinical practice. For seamless translation of research methods into the clinical workflow and for future scalability of the project, an integrated system for automated algorithm deployment is being developed based on the Joint Imaging Platform (JIP).

Taisiya Kopytova, Silvia Almeida, Tobias Norajitra

Temporal and Global Consistency Enforcing Segmentation for Real World Radiological Applications

State-of-the-art segmentation frameworks show impressive performance, with Dice scores comparable to human inter- and intra-rater variability. However, these models frequently fail with severe and unexpected errors when brought into the clinical environment. In this project, a model will be developed that incorporates the insights obtained from analyzing the performance of current methods, as well as temporal and global information of samples, in order to improve and stabilize the generated segmentations.
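For reference, the Dice score mentioned above measures the overlap between a predicted and a reference segmentation, Dice = 2|A ∩ B| / (|A| + |B|), and can be computed in a few lines:

```python
import numpy as np

def dice_score(pred, ref):
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy example: two partially overlapping 4x4 masks
a = np.zeros((4, 4)); a[:2, :] = 1       # 8 foreground voxels
b = np.zeros((4, 4)); b[1:3, :] = 1      # 8 foreground voxels, 4 overlapping
score = dice_score(a, b)                 # 2*4 / (8+8) = 0.5
```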

Yannick Kirchhoff

Assisting Breast Cancer Decisions using a Software System utilizing Deep Learning Trained on Diffusion Weighted MRI Data

Breast cancer is the most common invasive cancer in women throughout the world, in both developed and developing countries. This project aims to create robust models for breast lesion detection and classification using diffusion-weighted MR images and to produce a software platform, up to the high standards of medical products on the market, that can interactively assist clinical decision making by providing valuable feedback and diagnostic suggestions. The aspiration is that the deep learning model will be able to correctly identify and classify malignant lesions with high sensitivity, while minimizing the false positive results that are common in standard clinical screening practice.

Dimitrios Bounias
Michael Baumgartner

Trustworthy Federated Data Analytics (TFDA)


Artificial intelligence in medical research can accelerate the acquisition of scientific knowledge, facilitate early diagnosis of diseases and support precision medicine. The necessary analyses of large amounts of health data can only be achieved through the cooperation of a large number of institutions. In such collaborative research scenarios, classic centralized analysis approaches often reach their limits or fail due to complex security or trust requirements. The goal of the multidisciplinary team in the Trustworthy Federated Data Analytics (TFDA) project is therefore not to store the data centrally, but instead to bring the algorithms for machine learning and analysis to the data, which remain local and decentralized in the respective research centers. As a proof of concept, TFDA will establish a pilot system for federated radiation therapy studies and deal with the necessary technical, methodological and legal aspects to ensure the quality and trustworthiness of the data analysis and to guarantee the privacy of the patients.

More information: https://tfda.hmsp.center/

Santhosh Parampottupadam, Kaushal Parekh, Ralf Floca

VISSART: VISualiSation And Ranking Toolkit (joint project with IMSY division)

VISSART is an open-source framework (based on challengeR) for analyzing and visualizing challenge, benchmarking and algorithm results. It offers a set of tools for the comprehensive analysis and visualization of method results, applying a number of computations to simulated and real-life data, such as visualizing assessment data, ranking robustness and ranking stability, in order to demonstrate specific strengths and weaknesses of various algorithms. It supports both single-task and multi-task challenges. Thanks to the online version, there is no need to install interpreters, packages or libraries, making the challengeR framework accessible to developers not familiar with the R language.

Ali Emre Kavur

Hierarchical instance segmentation of mineral particles for automated particle composition identification

The analysis and identification of particle compositions plays a central role in increasing the effectiveness of ore processing and recycling techniques. Such analyses are typically performed by crushing ores of unknown composition into particles, which are then embedded in synthetic resin and CT-scanned at micron resolution. The resulting CT scan is then analyzed manually, in a time-intensive and error-prone manner, to identify the individual composition of many hundreds of particles.
The objective of this project is to automatically generate hierarchical instance segmentations of every particle in order to replace the current time-intensive manual approach and increase the correctness of the analysis.

Karol Gotkowski

DCE/DSC Lexicon as part of the "Open Science Initiative for Perfusion Imaging"

Perfusion-related quantities derived from dynamic contrast-enhanced (DCE) and dynamic susceptibility contrast (DSC) magnetic resonance imaging (MRI) are useful biomarkers of vascular function. To generate perfusion-related quantities, the acquired data is typically analyzed using a sequence of processes that define an image analysis pipeline. Currently, there is a lack of clear reporting guidelines and standardized nomenclature for the applied perfusion analysis pipelines. This means that analysis steps are often not accurately captured, leading to user variability in reporting, which fundamentally limits the reproducibility of perfusion-based research. The Open Science Initiative for Perfusion Imaging (OSIPI) is an initiative of the perfusion study group of the International Society for Magnetic Resonance in Medicine (ISMRM) whose mission is to promote the sharing of perfusion imaging software and to improve the reproducibility of perfusion imaging research. As part of OSIPI, a consensus-based DCE/DSC lexicon and a reporting framework are being developed with the aim of improving reproducibility in perfusion image analysis.

Ina Kompan

Digital Cancer Prevention

To develop a research-supporting risk prediction platform for the National Cancer Prevention Center, we are currently assembling an interdisciplinary digital cancer prevention team. The focus of the working group is on the development of a specific and evidence-based portal for the individual calculation of personal cancer risk. In doing so, existing prediction models will be validated, curated and merged according to a standardized procedure. Interested citizens should be able to use the portal to assess their individual cancer risk and receive information adapted to their personal cancer risk. For example, demographic data, information on lifestyle, family history, and results of previous tests can be included in the calculation. At the same time, these data are used to further optimize the prediction models and maintain a continuously high level of performance. In the long term, the aim is to develop a research-capable platform for sustainable data collection and access to research data in modern prevention research.

Angela Goncalves, Klaus Maier-Hein, Elias Müller

Kaapana


Kaapana is a technology platform for Distributed Computational Image-based PHEnotyping and Radiomics. It is designed to establish a better link between clinical (imaging) data, computational power and methodical tools.
Kaapana supports single-institutional use, where it improves the direct workflow integration of computing tools and the analysis of metadata. However, Kaapana also scales to multi-institutional settings, growing with the available computational resources, cohort sizes and number of methods. Federated computing capabilities are built in, so no centralization is needed, neither for data nor for methods. By leveraging state-of-the-art open-source technologies, we aim at high interoperability with existing standards and solutions.

More information:
https://www.kaapana.ai/
https://github.com/kaapana/kaapana

Jonas Scherer

HiGHmed

HiGHmed is a highly innovative consortium project in the context of the "Medical Informatics Initiative Germany" that develops novel, interoperable solutions in medical informatics with the aim of making medical patient data accessible for clinical research, in order to improve both clinical research and patient care. Our image analysis technology (d:cipher) is part of the Omics Data Integration Center (OmicsDIC), which offers sophisticated technologies to process data and to access the information contained in it, from genomics to radiomics. In HiGHmed we also improve the interoperability of image-based information by working on the mapping between important standards such as DICOM, HL7 FHIR and openEHR.

More information: HiGHmed

Ralf Floca

CCE-DART


The EU-funded project CCE-DART (CCE Building Data Rich Clinical Trials) aims to develop novel methods for the design and implementation of newer, more efficient and effective clinical trials. At DKFZ experts from five departments contribute to this goal. This includes the department of medical image computing which provides its expertise in federated image analysis to build a data sharing and analysis platform. The platform will be based on the Kaapana technology platform and will allow researchers to find relevant imaging data and perform federated image analysis.

More information: https://cce-dart.com/

Philipp Schader
Marco Nolden

CSI-HD

To make forensic radiology feasible, we develop workflows and processes that combine local and remote radiological infrastructure with the latest technologies of medical image processing. To minimize the resources required at image acquisition sites, we design automated image processing pipelines and secure data transfers that occupy as few human resources as possible and interfere minimally with the existing on-site routine workflows.

This project is a cooperation between DKFZ, Institute for Legal and Traffic Medicine (University Clinic HD) and Institute for Anatomy (University HD).

Ignaz Reicht

Hyppopy


Hyppopy is a Python toolbox for blackbox function optimization, providing an easy-to-use interface to a variety of solver frameworks. Hyppopy gives access to grid search, random and quasi-random, particle swarm and Bayesian solvers. The aim of Hyppopy is to make hyperparameter optimization as simple as possible (in our case, e.g., for the optimization of image processing pipelines or machine learning tasks). It can be easily integrated into existing code bases and provides real-time visualization of the parameter space and the optimization process via a visdom server. The internal design is focused on extensibility, ensuring that custom solvers and future approaches can be integrated.
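The flavor of such a blackbox solver can be illustrated with a tiny random-search sketch in plain Python. This is a conceptual example only and does not reproduce Hyppopy's actual API:

```python
import random

def random_search(objective, space, max_evals=100, seed=0):
    """Minimize a blackbox function by sampling uniformly from a search space.

    space: dict mapping parameter name -> (low, high) interval.
    Returns the best parameter set found and its loss.
    """
    rng = random.Random(seed)
    best_params, best_loss = None, float("inf")
    for _ in range(max_evals):
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in space.items()}
        loss = objective(**params)     # the blackbox: only its value is used
        if loss < best_loss:
            best_params, best_loss = params, loss
    return best_params, best_loss

# Toy blackbox objective with its minimum at x=2, y=-1
def blackbox(x, y):
    return (x - 2.0) ** 2 + (y + 1.0) ** 2

best, loss = random_search(blackbox, {"x": (-5, 5), "y": (-5, 5)}, max_evals=500)
```

Grid, quasi-random, particle swarm and Bayesian solvers differ only in how the next candidate `params` is proposed; the surrounding loop and interface stay the same.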

Ralf Floca
