Publication highlights - Intelligent Systems in Photoacoustic Imaging

Unsupervised Domain Transfer with Conditional Invertible Neural Networks

Main contribution: First domain transfer approach that combines the benefits of cINNs (exact maximum likelihood estimation) with those of GANs (high image quality).

Synthetic medical image generation has evolved as a key technique for neural network training and validation. A core challenge, however, remains in the domain gap between simulations and real data. While deep learning-based domain transfer using Cycle Generative Adversarial Networks and similar architectures has led to substantial progress in the field, there are use cases in which state-of-the-art approaches still fail to generate training images that produce convincing results on relevant downstream tasks. Here, we address this issue with a domain transfer approach based on conditional invertible neural networks (cINNs). As a particular advantage, our method inherently guarantees cycle consistency through its invertible architecture, and network training can efficiently be conducted with maximum likelihood training. To showcase our method's generic applicability, we apply it to two spectral imaging modalities at different scales, namely hyperspectral imaging (pixel-level) and photoacoustic tomography (image-level). According to comprehensive experiments, our method enables the generation of realistic spectral data and outperforms the state of the art on two downstream classification tasks (binary and multi-class). cINN-based domain transfer could thus evolve as an important method for realistic synthetic data generation in the field of spectral imaging and beyond.
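The maximum-likelihood training mentioned above rests on the change-of-variables formula: because the network is invertible with a tractable Jacobian determinant, the exact data log-likelihood can be computed and minimized directly. The following is a minimal sketch of that mechanism for a single affine coupling step; the shift and scale values are given directly here rather than produced by a conditioning subnetwork, and all names are invented for illustration, not taken from the paper's implementation.

```python
import numpy as np

def coupling_forward(x, shift, log_scale):
    """One affine coupling step: transform the second half of x with a
    shift and scale (in a real cINN these come from a subnetwork that
    sees the first half plus the conditioning input)."""
    x1, x2 = np.split(x, 2, axis=-1)
    z2 = x2 * np.exp(log_scale) + shift
    log_det = np.sum(log_scale)  # log |det J| of this triangular step
    return np.concatenate([x1, z2], axis=-1), log_det

def coupling_inverse(z, shift, log_scale):
    """Exact inverse of coupling_forward (invertibility gives cycle
    consistency for free)."""
    z1, z2 = np.split(z, 2, axis=-1)
    x2 = (z2 - shift) * np.exp(-log_scale)
    return np.concatenate([z1, x2], axis=-1)

def nll(x, shift, log_scale):
    """Exact negative log-likelihood under a standard normal latent,
    via the change-of-variables formula: log p(x) = log p(z) + log|det J|."""
    z, log_det = coupling_forward(x, shift, log_scale)
    log_pz = -0.5 * np.sum(z ** 2) - 0.5 * z.size * np.log(2 * np.pi)
    return -(log_pz + log_det)
```

Minimizing `nll` over a dataset is the maximum-likelihood objective; no discriminator or cycle loss is needed for the transform to be exactly invertible.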

Dreher, K. K., Ayala, L., Schellenberg, M., Hübner, M., Nölke, J.-H., Adler, T. J., Seidlitz, S., Sellner, J., Studier-Fischer, A., Gröhl, J., Nickel, F., Köthe, U., Seitel, A., & Maier-Hein, L. (2023). Unsupervised Domain Transfer with Conditional Invertible Neural Networks. Medical Image Computing and Computer Assisted Intervention – MICCAI 2023. [pdf] [video]

Photoacoustic image synthesis with generative adversarial networks

Main contribution: First adversarial approach to the simulation of realistic tissue geometries in the specific context of photoacoustic imaging.

Photoacoustic tomography (PAT) has the potential to recover morphological and functional tissue properties with high spatial resolution. However, previous attempts to solve the optical inverse problem with supervised machine learning were hampered by the absence of labeled reference data. While this bottleneck has been tackled by simulating training data, the domain gap between real and simulated images remains an unsolved challenge. We propose a novel approach to PAT image synthesis that involves subdividing the challenge of generating plausible simulations into two disjoint problems: (1) probabilistic generation of realistic tissue morphology, and (2) pixel-wise assignment of corresponding optical and acoustic properties. The former is achieved with Generative Adversarial Networks (GANs) trained on semantically annotated medical imaging data. According to a validation study on a downstream task, our approach yields more realistic synthetic images than the traditional model-based approach and could therefore become a fundamental step for deep learning-based quantitative PAT (qPAT).
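Step (2) of the two-step scheme above can be pictured as a per-pixel lookup: once a semantic label map exists (from the GAN in step 1), each pixel receives the optical and acoustic properties of its tissue class. The sketch below is a toy illustration of that idea; the property values are invented placeholders, not literature values or the paper's actual parameterization.

```python
# Hypothetical tissue-class property table (placeholder numbers):
# mu_a = optical absorption coefficient, speed_of_sound in m/s.
PROPERTIES = {
    "vessel":     {"mu_a": 2.0, "speed_of_sound": 1570.0},
    "background": {"mu_a": 0.1, "speed_of_sound": 1540.0},
}

def assign_properties(label_map, prop):
    """Pixel-wise assignment: replace each semantic label with the
    requested physical property of that tissue class."""
    return [[PROPERTIES[label][prop] for label in row] for row in label_map]

labels = [["background", "vessel"],
          ["vessel", "background"]]
mu_a_map = assign_properties(labels, "mu_a")
```

Decoupling the two steps means the morphology generator never needs to learn physics, and the property assignment stays fully interpretable.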

Schellenberg, M., Gröhl, J., Dreher, K. K., Nölke, J.-H., Holzwarth, N., Tizabi, M. D., Seitel, A., & Maier-Hein, L. (2022). Photoacoustic image synthesis with generative adversarial networks. Photoacoustics, 28, 100402. [pdf]

SIMPA: an open-source toolkit for simulation and image processing for photonics and acoustics

Main contribution: Open-source toolkit for simulation and image processing of optical and acoustic imaging.

Significance: Optical and acoustic imaging techniques enable noninvasive visualisation of structural and functional properties of tissue. The quantification of measurements, however, remains challenging due to the inverse problems that must be solved. Emerging data-driven approaches are promising, but they rely heavily on the presence of high-quality simulations across a range of wavelengths due to the lack of ground truth knowledge of tissue acoustical and optical properties in realistic settings.
Aim: To facilitate this process, we present the open-source simulation and image processing for photonics and acoustics (SIMPA) Python toolkit. SIMPA is being developed according to modern software design standards.
Approach: SIMPA enables the use of computational forward models, data processing algorithms, and digital device twins to simulate realistic images within a single pipeline. SIMPA's module implementations can be seamlessly exchanged as SIMPA abstracts from the concrete implementation of each forward model and builds the simulation pipeline in a modular fashion. Furthermore, SIMPA provides comprehensive libraries of biological structures, such as vessels, as well as optical and acoustic properties and other functionalities for the generation of realistic tissue models.
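The modular-pipeline design described above can be sketched as follows: each stage consumes and returns a shared data container, so a concrete forward model can be swapped for another without touching the rest of the pipeline. This is a generic illustration of the pattern only; the function and key names are invented for this sketch and are not SIMPA's real API.

```python
from typing import Callable, Dict, List

def optical_model(data: Dict) -> Dict:
    """Stand-in optical forward model (placeholder physics)."""
    data["fluence"] = [w * 0.5 for w in data["wavelengths"]]
    return data

def acoustic_model(data: Dict) -> Dict:
    """Stand-in acoustic forward model (placeholder physics)."""
    data["pressure"] = [f * 2.0 for f in data["fluence"]]
    return data

def run_pipeline(stages: List[Callable[[Dict], Dict]], data: Dict) -> Dict:
    """Run the stages in order; any stage with the same interface
    can be exchanged for another implementation."""
    for stage in stages:
        data = stage(data)
    return data

result = run_pipeline([optical_model, acoustic_model],
                      {"wavelengths": [700, 800, 900]})
```

Because every stage abstracts from its concrete implementation, replacing `optical_model` with a different solver requires no change to `acoustic_model` or the pipeline driver.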
Results: To showcase the capabilities of SIMPA, we show examples in the context of photoacoustic imaging: the diversity of creatable tissue models, the customisability of a simulation pipeline, and the degree of realism of the simulations.
Conclusions: SIMPA is an open-source toolkit that can be used to simulate optical and acoustic imaging modalities. The code is available at: , and all of the examples and experiments in this paper can be reproduced using the code available at:

Gröhl, J., Dreher, K. K., Schellenberg, M., Rix, T., Holzwarth, N., Vieten, P., Ayala, L., Bohndiek, S. E., Seitel, A., & Maier-Hein, L. (2022). SIMPA: An open-source toolkit for simulation and image processing for photonics and acoustics. Journal of Biomedical Optics, 27(8), 083010. [pdf]

Semantic segmentation of multispectral photoacoustic images using deep learning

Main contribution: First fully-automatic multi-label semantic image annotation of photoacoustic images with deep learning.

Photoacoustic (PA) imaging has the potential to revolutionize functional medical imaging in healthcare due to the valuable information on tissue physiology contained in multispectral photoacoustic measurements. Clinical translation of the technology requires conversion of the high-dimensional acquired data into clinically relevant and interpretable information. In this work, we present a deep learning-based approach to semantic segmentation of multispectral photoacoustic images to facilitate image interpretability. Manually annotated photoacoustic and ultrasound imaging data are used as reference and enable the training of a deep learning-based segmentation algorithm in a supervised manner. Based on a validation study with experimentally acquired data from 16 healthy human volunteers, we show that automatic tissue segmentation can be used to create powerful analyses and visualizations of multispectral photoacoustic images. Due to the intuitive representation of high-dimensional information, such a preprocessing algorithm could be a valuable means to facilitate the clinical translation of photoacoustic imaging.

Schellenberg, M., Dreher, K. K., Holzwarth, N., Isensee, F., Reinke, A., Schreck, N., Seitel, A., Tizabi, M. D., Maier-Hein, L., & Gröhl, J. (2022). Semantic segmentation of multispectral photoacoustic images using deep learning. Photoacoustics, 26, 100341. [pdf]

Tattoo tomography: Freehand 3D photoacoustic image reconstruction with an optical pattern

Main contribution: Novel approach to 3D photoacoustic image reconstruction from an acquired sequence of 2D photoacoustic image slices.

Purpose: Photoacoustic tomography (PAT) is a novel imaging technique that can spatially resolve both morphological and functional tissue properties, such as vessel topology and tissue oxygenation. While this capacity makes PAT a promising modality for the diagnosis, treatment, and follow-up of various diseases, a current drawback is the limited field of view provided by the conventionally applied 2D probes.
Methods: In this paper, we present a novel approach to 3D reconstruction of PAT data (Tattoo tomography) that does not require an external tracking system and can smoothly be integrated into clinical workflows. It is based on an optical pattern placed on the region of interest prior to image acquisition. This pattern is designed in a way that a single tomographic image of it enables the recovery of the probe pose relative to the coordinate system of the pattern, which serves as a global coordinate system for image compounding.
Results: To investigate the feasibility of Tattoo tomography, we assessed the quality of 3D image reconstruction with experimental phantom data and in vivo forearm data. The results obtained with our prototype indicate that the Tattoo method enables the accurate and precise 3D reconstruction of PAT data and may be better suited for this task than the baseline method using optical tracking.
Conclusions: In contrast to previous approaches to 3D ultrasound (US) or PAT reconstruction, the Tattoo approach neither requires complex external hardware nor training data acquired for a specific application. It could thus become a valuable tool for clinical freehand PAT.
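The compounding step described in the Methods can be reduced to a simple geometric operation: once the probe pose of a slice is recovered from the pattern, every pixel of that slice is mapped into the pattern's global coordinate system. The sketch below shows this for a 2D rigid pose (rotation plus translation); it is a simplified illustration, not the paper's full pose model.

```python
import math

def compound_point(pose, point):
    """Map a pixel from a 2D slice into the global (pattern) frame.
    pose = (theta, tx, ty): recovered slice rotation and translation.
    Applying this to all pixels of all slices compounds them into one
    volume in the shared coordinate system."""
    theta, tx, ty = pose
    x, y = point
    gx = math.cos(theta) * x - math.sin(theta) * y + tx
    gy = math.sin(theta) * x + math.cos(theta) * y + ty
    return (gx, gy)
```

Because every slice pose is expressed relative to the same pattern, no external tracking hardware is needed to place the slices consistently.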

Holzwarth, N., Schellenberg, M., Gröhl, J., Dreher, K., Nölke, J.-H., Seitel, A., Tizabi, M. D., Müller-Stich, B. P., & Maier-Hein, L. (2021). Tattoo tomography: Freehand 3D photoacoustic image reconstruction with an optical pattern. International Journal of Computer Assisted Radiology and Surgery, 16(7), 1101–1110. [IPCAI 2021 Audience Award for Best Innovation: Runner-up] [pdf]

Deep learning for biomedical photoacoustic imaging: A review

Main contribution: Analysis of the state of the art in deep learning applications that address various unresolved issues in photoacoustic imaging, including the acoustic and optical inverse problem, image post-processing, and semantic image annotation.

Photoacoustic imaging (PAI) is a promising emerging imaging modality that enables spatially resolved imaging of optical tissue properties up to several centimeters deep in tissue, creating the potential for numerous exciting clinical applications. However, extraction of relevant tissue parameters from the raw data requires the solving of inverse image reconstruction problems, which have proven extremely difficult to solve. The application of deep learning methods has recently exploded in popularity, leading to impressive successes in the context of medical imaging and also finding first use in the field of PAI. Deep learning methods possess unique advantages that can facilitate the clinical translation of PAI, such as extremely fast computation times and the fact that they can be adapted to any given problem. In this review, we examine the current state of the art regarding deep learning in PAI and identify potential directions of research that will help to reach the goal of clinical applicability.

Gröhl, J., Schellenberg, M., Dreher, K., & Maier-Hein, L. (2021). Deep learning for biomedical photoacoustic imaging: A review. Photoacoustics, 22, 100241. [pdf]
