Computer Assisted Medical Interventions

COMBIOSCOPY: Computational biophotonics in endoscopic cancer diagnosis and therapy

Replacing traditional open surgery with minimally-invasive interventions represents one of the most important challenges in modern healthcare. Minimally-invasive procedures offer numerous advantages over open surgery, including reduced surgical trauma, reduced need for pain medication, earlier convalescence, better cosmetic results, shorter hospital stays and lower costs. Furthermore, they are often the only promising treatment option for patients who are not eligible for surgery, for example due to old age or poor overall medical condition. Conventional medical imaging equipment used in minimally-invasive procedures (e.g. endoscopes, laparoscopes), however, often offers poor tissue differentiation (e.g. healthy vs. (pre)malignant or perfused vs. non-perfused), which results in inadequate treatment, long procedure times and high complication rates. Fusion of interventional imaging data with diagnostic data has shown promise in overcoming some of these issues but suffers from the fact that tissue dynamics (e.g. hemodynamic changes resulting in ischemia) cannot be taken into account. Given these challenges, the goal of the COMBIOSCOPY project was to develop new safe and cost-effective concepts for interventional imaging that are particularly well-suited to supporting endoscopic interventions.

As part of the COMBIOSCOPY project, we developed new imaging concepts that (1) provide real-time discrimination of local tissue with a high contrast-to-noise ratio, (2) are radiation-free, so that neither patients nor staff are exposed to harmful ionizing radiation, and (3) feature a compact, low-cost design for broad applicability and acceptance. Our methodology leverages recent spectral imaging techniques, including multispectral optical and optoacoustic imaging, as well as modern machine learning techniques to enable augmented reality visualization of a range of important morphological and functional parameters invisible to the naked eye. New methods for uncertainty analysis ensure high error awareness and robustness of the approach when applied in a clinical setting. According to a multistage validation process involving ongoing in-human studies, the methodology holds great potential for clinical translation.

Figure 1. Machine learning-based real-time quantification of tissue oxygenation in laparoscopic surgery.
© dkfz.de

The imaging concept we pursued within the COMBIOSCOPY project rests on four pillars: the development of (1) cutting-edge spectral imaging hardware, (2) innovative methods for live monitoring of oxygenation and hemodynamic changes, (3) novel methods for automatic tissue classification, and (4) a framework for uncertainty handling. These four principal pillars were complemented by an unscheduled fifth topic related to meta science and validation. Our contributions to all five topics are detailed in the following:

Spectral imaging hardware

When light enters biological tissue, it undergoes complex interactions, including reflection, absorption, and scattering. In this project we exploited the fact that different tissue components have unique optical scattering and absorption properties. Specifically, we built upon two spectral imaging techniques that make use of multiple bands across the electromagnetic spectrum:


Multispectral imaging (MSI) is a passive technique based on 2D reflection images that requires no contact with the object under investigation (Clancy et al. 2020). It captures the reflectance spectrum of the tissue - i.e. the reflectance for a range of different wavelengths of light - over an entire surface, thereby encoding structural and functional surface and subsurface information that is otherwise invisible to the naked eye. As part of the COMBIOSCOPY project, we developed the first multispectral laparoscopic imaging setup featuring a compact and lightweight laparoscope and the possibility of complementing the conventional endoscopic view of the patient with relevant morphological and functional information in real time (Wirkert et al. 2016, Wirkert 2018b, BVM Award 2019). We further explored approaches that are compatible with flexible medical devices (Lin et al. 2017, Lin et al. 2018, Medical Image Analysis Best Paper Award 2018), as required in colonoscopic procedures, for example. Finally, we proposed new methods for automatically selecting the most relevant wavelengths for a given application (Wirkert et al. 2014, Best Paper Award, runner-up at MICCAI CARE workshop 2014, Wirkert et al. 2018a, Wirkert et al. 2019) to enable (near) real-time acquisition speed for MSI.
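As a concrete illustration of the band-selection idea, the following toy sketch ranks candidate spectral bands by their importance for a downstream regression task using a random forest and keeps only the most informative ones. It is a simplified illustration of the general principle under randomly generated placeholder data, not the published selection method; all variable names and sizes are hypothetical.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n_pixels, n_bands = 5000, 16                 # 16 hypothetical candidate filter bands
    spectra = rng.random((n_pixels, n_bands))    # measured reflectances (placeholder data)
    oxygenation = rng.random(n_pixels)           # functional target parameter (placeholder)

    # Rank bands by how strongly a random forest relies on them for the target task
    forest = RandomForestRegressor(n_estimators=100, random_state=0)
    forest.fit(spectra, oxygenation)

    k = 8                                        # number of bands recordable in real time
    selected = np.argsort(forest.feature_importances_)[::-1][:k]
    print("Most informative bands:", np.sort(selected))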


The penetration depth of MSI is limited to a few millimeters, so the technique can only reveal tissue characteristics at or close to the visible surface. Photoacoustic imaging (PAI) addresses this limited depth range by measuring optical properties via acoustic signals ('light in – sound out' approach) (Gröhl et al. 2021a). In this method, the probe requires contact with the tissue, which is illuminated with light pulses, leading to the absorption of photons and subsequent heating of the tissue. The resulting thermoelastic expansion generates pressure waves, which can be detected by broadband ultrasonic transducers and converted into absorption images. In this manner, tomographic images of optical properties can be produced at high resolution and at depths of up to several centimeters. Leveraging this principle, we developed a hybrid imaging device that simultaneously reconstructs functional (PAI) and structural (ultrasound) information (Kirchner et al. 2016, Kirchner et al. 2018b, Kirchner et al. 2019c, Sattler et al. 2018, Waibel et al. 2018a, Waibel et al. 2018b). To provide additional context information, we further developed a novel approach to compounding individual image slices into a full 3D image, which does not rely on external tracking devices and can thus be smoothly integrated into clinical workflows (Holzwarth et al. 2020, patent P3 pending).
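Converting the recorded pressure signals into images requires beamforming, for which the project contributed real-time methods (Sattler et al. 2018, Kirchner et al. 2018c). The sketch below shows plain delay-and-sum beamforming for a linear transducer array; it uses placeholder geometry and random data and is a simplified stand-in for the signed delay-multiply-and-sum approach developed in the project.

    import numpy as np

    c = 1540.0                        # assumed speed of sound in tissue [m/s]
    fs = 40e6                         # sampling rate of the transducer [Hz]
    n_elements, n_samples = 128, 2048
    pitch = 0.3e-3                    # element spacing [m]
    element_x = (np.arange(n_elements) - n_elements / 2) * pitch
    signals = np.random.randn(n_elements, n_samples)   # recorded pressure data (placeholder)

    # Image grid: lateral position x, depth z
    x_grid = np.linspace(-15e-3, 15e-3, 200)
    z_grid = np.linspace(2e-3, 30e-3, 300)
    image = np.zeros((len(z_grid), len(x_grid)))

    for iz, z in enumerate(z_grid):
        for ix, x in enumerate(x_grid):
            # One-way time of flight from a potential absorber at (x, z) to each element
            # (the illuminating laser pulse is effectively instantaneous in photoacoustics)
            delays = np.sqrt((x - element_x) ** 2 + z ** 2) / c
            idx = np.round(delays * fs).astype(int)
            valid = idx < n_samples
            image[iz, ix] = signals[valid, idx[valid]].sum()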

Live oxygenation monitoring

The primary challenge addressed in this project was to convert the spectral imaging data into clinically relevant information. In this context, we focused on recovering blood volume and tissue oxygenation from MSI data (Wirkert et al. 2016, Wirkert et al. 2017, Ayala et al. 2019a, Ayala et al. 2019b, Bench to Bedside Award MICCAI Workshop OR 2.0 2019, SMIT Young Investigator Award 2019) and photoacoustic data (Kirchner et al. 2018a, Kirchner 2019d, Gröhl et al. 2017, Gröhl et al. 2019b, Gröhl et al. 2020b, Gröhl 2020c, Gröhl et al. 2021a, Thomas Gessmann Award 2017, conhIT Nachwuchspreis 2017) in order to monitor hemodynamic changes in a spatially resolved manner. While previous methods for functional parameter estimation relied on either simple linear models or complex model-based approaches suited exclusively for offline processing, our novel approach combines the high accuracy of model-based approaches with the speed and robustness of modern machine learning methods. In this context, the COMBIOSCOPY project pioneered the idea of training machine learning algorithms on physics-based simulations (patents P1, P2 pending). While the simulations in the training data cover a broad range of possible patient and acquisition conditions, methods for automatic light source estimation (Ayala et al. 2020, patent P4 pending) and domain adaptation (Wirkert et al. 2017) enable the adaptation of the method to a specific clinical setting. According to our validation studies, the award-winning, patent-pending concepts are well-suited for monitoring hemodynamic changes in various tissues including colon (Wirkert et al. 2016), kidney (Wirkert 2018b) and brain (Kirchner et al. 2019a, Kirchner et al. 2019b, Ayala et al. 2019a, Ayala et al. 2019b). An ongoing patient study investigates automatic detection of ischemia in minimally invasive surgery (https://www.drks.de/drks_web/navigate.do?navigationId=trial.HTML&TRIAL_ID=DRKS00020996).
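The following sketch illustrates the 'learning from simulations' principle in strongly simplified form: synthetic reflectance spectra are generated with a Beer-Lambert-style forward model (a stand-in for the full Monte Carlo simulations actually used, cf. Wirkert et al. 2016), a random forest regressor is trained to invert them, and the trained model is then applied to new spectra. All extinction coefficients and parameter ranges below are illustrative assumptions, not tabulated values.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(42)
    n_train, n_bands = 20000, 8

    # Illustrative (not tabulated) extinction spectra of oxy- and deoxyhemoglobin
    eps_hbo2 = np.linspace(1.0, 0.3, n_bands)
    eps_hb = np.linspace(0.4, 1.1, n_bands)

    # Sample tissue states and simulate noisy spectra with a simplified forward model
    so2 = rng.uniform(0.0, 1.0, n_train)               # ground-truth oxygenation
    vhb = rng.uniform(0.5, 2.0, n_train)               # relative blood volume fraction
    absorption = vhb[:, None] * (so2[:, None] * eps_hbo2 + (1.0 - so2[:, None]) * eps_hb)
    reflectance = np.exp(-absorption) * (1.0 + 0.01 * rng.standard_normal((n_train, n_bands)))

    # Normalize each spectrum so the estimate is robust to illumination intensity
    features = reflectance / reflectance.sum(axis=1, keepdims=True)

    model = RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=0)
    model.fit(features, so2)

    # Apply the trained regressor pixel-wise to new (here: simulated) spectra
    print("Estimated oxygenation:", model.predict(features[:1])[0])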

Tissue classification

Accurate and robust local tissue classification can be of great benefit in many clinical applications in which the exact delineation of organs, tumors and other anatomical structures is important for diagnostic or therapeutic purposes. As conventional intraoperative imaging modalities such as ultrasound and laparoscopic video are limited in their ability to differentiate these structures reliably, we developed novel machine learning-based tissue classification approaches specifically designed for multispectral and photoacoustic imaging data (Moccia et al. 2018, Zhang et al. 2017, Gröhl et al. 2021b). As a foundation for this work, we assembled a large database of more than 1,000 spectral images featuring 20 different organs and various pathologies. Based on these data, we were the first to show that (1) the variability in spectral reflectance is primarily explained by the tissue type rather than by the individual from which the measurements are taken or the specific acquisition conditions (paper about to be submitted) and that (2) tissue classification with MSI is substantially more robust than the conventional approach relying on standard RGB video images.
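Finding (1) can be probed with a leave-one-subject-out evaluation, in which a classifier trained on spectra from some individuals is tested on an entirely unseen individual. The sketch below shows such an evaluation with scikit-learn on placeholder data; it illustrates the evaluation scheme only and is not the study code.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

    rng = np.random.default_rng(1)
    n_samples, n_bands = 3000, 16
    spectra = rng.random((n_samples, n_bands))      # one reflectance spectrum per region (placeholder)
    organ_label = rng.integers(0, 20, n_samples)    # 20 organ/tissue classes
    subject_id = rng.integers(0, 10, n_samples)     # individual each sample was recorded from

    # Train on some individuals, test on a held-out individual: high accuracy here
    # indicates that spectra generalize across subjects rather than across acquisitions only
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, spectra, organ_label, groups=subject_id, cv=LeaveOneGroupOut())
    print("Mean accuracy on unseen individuals: %.2f" % scores.mean())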

Uncertainty handling

A key limiting factor for translating research results into clinical practice is often not the accuracy of a method but its robustness. We therefore put a particular focus on the uncertainty awareness of our methodology (Gröhl et al. 2018a, Gröhl et al. 2018b, Ardizzone et al. 2019, Adler et al. 2019a, Adler et al. 2019b). While a lot of research has addressed uncertainty related to the potential intrinsic randomness of the data generation process (so-called aleatoric uncertainty) as well as to insufficient training data (so-called epistemic uncertainty), a type of uncertainty that has received very little attention in the literature is the potential inherent ambiguity of the problem. Converting the measured spectrum in a pixel to a single estimate of oxygenation (a so-called point estimate), for example, neglects the fact that multiple plausible solutions may exist. Consequently, such estimates cannot generally be trusted to be close to the ground truth. We addressed this problem by designing neural network architectures that are inherently invertible (Ardizzone et al. 2019, Adler et al. 2019a, Adler et al. 2019b) and can therefore generate full probability distributions over the predicted values rather than point estimates. Leveraging this new architecture, we showed that the uncertainty in a measurement as well as the ambiguity of a problem depend crucially on the measurement device (Adler et al. 2019b) and pose (Nölke et al. 2020) and can thus be compensated for by adapting device design and measurement protocols. We further demonstrated that invertible network architectures can be leveraged to determine whether an algorithm is qualified to process a given new data set, based on the data it was trained on (Adler et al. 2019a, Adler et al. 2019b).
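The invertible architectures used here (Ardizzone et al. 2019) are composed of coupling blocks whose forward transform can be undone exactly, which is what makes it possible to map from a measurement back to a full distribution over plausible tissue parameters. The minimal numpy sketch below shows one affine coupling block and verifies its exact invertibility; it omits the conditioning on measurements and the latent-space sampling of the full method, and the toy subnetworks are assumptions for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)
    W_s = rng.standard_normal((2, 2))
    W_t = rng.standard_normal((2, 2))

    def scale_net(x):                 # toy scale subnetwork (bounded via tanh)
        return np.tanh(x @ W_s)

    def shift_net(x):                 # toy translation subnetwork
        return x @ W_t

    def coupling_forward(u1, u2):
        # First half passes through unchanged; the second half is transformed
        # with scale and shift parameters computed from the first half
        return u1, u2 * np.exp(scale_net(u1)) + shift_net(u1)

    def coupling_inverse(v1, v2):
        # The transform can be undone exactly, without approximation
        return v1, (v2 - shift_net(v1)) * np.exp(-scale_net(v1))

    u1, u2 = rng.standard_normal((5, 2)), rng.standard_normal((5, 2))
    v1, v2 = coupling_forward(u1, u2)
    r1, r2 = coupling_inverse(v1, v2)
    print("Exact inversion:", np.allclose(u1, r1) and np.allclose(u2, r2))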

Meta Science

An unexpected contribution of the project relates to the validation of biomedical image analysis algorithms. For comparing different methods, international competitions ('challenges') providing common benchmarking datasets have become increasingly customary. Within this project, we developed the hypothesis that there is a substantial discrepancy between the importance attributed to such challenges and their quality. Although not envisioned in the original proposal, we found this hypothesis sufficiently alarming to react to it. In a multi-center study, we revealed major flaws in common practice (Maier-Hein et al. 2018, Reinke et al. 2018), which has already led to changes in the way the most important biomedical image analysis society (the Medical Image Computing and Computer Assisted Intervention Society, MICCAI) conducts its challenges today. To further contribute to good scientific practice in the specific field of computational biophotonics, several members of the COMBIOSCOPY team are active in the International Photoacoustics Standardisation Consortium (IPASC, www.ipasc.science, accessed March 2nd 2021, Bohndiek et al. 2019), partly as work package leads (Gröhl et al. 2019a, Gröhl et al. 2020a).

Alumni

  • Dr. Janek Gröhl (Doctoral Student)
  • Dr. Anant Vemuri (Scientist)
  • Dr. Thomas Kirchner (Doctoral Student)
  • Franz Sattler (Student Assistant)
  • Dr. Sebastian Wirkert (Doctoral Student)
  • Dominik Waibel (Master's student)
  • Angelika Laha (Master's student)
  • Dr. Sara Moccia (PhD intern)
  • Yan Zhang (Master's student)
  • Justin Iszatt (Bachelor's student)

Key collaborators

  • Dr. Daniel S. Elson and Dr. Neil T. Clancy
    Imperial College London, Hamlyn Centre for Robotic Surgery

  • Dr. Peter Sauer
    University of Heidelberg, Interdisciplinary Endoscopy Centre

  • Prof. Dr. Beat P. Müller, PD Dr. Felix Nickel, Dr. Hannes Kenngott, A. Studier-Fischer
    University of Heidelberg, Division of Minimally-invasive Surgery of the Department of General Surgery

  • PD Dr. Dogu Teber
    Städtisches Klinikum Karlsruhe

  • Prof. Dr. Carsten Rother
    Head of Visual Learning Lab Heidelberg

  • Dr. med. Edgar Santos
    University of Heidelberg, University Clinic for Neurosurgery

Funding

This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 637960).

Publications

Adler, T. J., Ardizzone, L., Ayala, L., Gröhl, J., Vemuri, A., Wirkert, S. J., Müller-Stich, B. P., Rother, C., Köthe, U., & Maier-Hein, L. (2019a). Uncertainty handling in intra-operative multispectral imaging with invertible neural networks. International Conference on Medical Imaging with Deep Learning. https://openreview.net/forum?id=Byx9RUONcE

Adler, T. J., Ardizzone, L., Vemuri, A., Ayala, L., Gröhl, J., Kirchner, T., Wirkert, S., Kruse, J., Rother, C., Köthe, U., & Maier-Hein, L. (2019b). Uncertainty-aware performance assessment of optical imaging modalities with invertible neural networks. International Journal of Computer Assisted Radiology and Surgery, 14(6), 997–1007. https://doi.org/10.1007/s11548-019-01939-9

Ardizzone, L., Kruse, J., Wirkert, S. J., Rahner, D., Pellegrini, E. W., Klessen, R. S., Maier-Hein, L., Rother, C., & Köthe, U. (2019). Analyzing Inverse Problems with Invertible Neural Networks. International Conference on Learning Representations. https://openreview.net/forum?id=rJed6j0cKX

Ayala, L. A., Wirkert, S. J., Gröhl, J., Herrera, M. A., Hernandez-Aguilera, A., Vemuri, A., Santos, E., & Maier-Hein, L. (2019a). Live Monitoring of Haemodynamic Changes with Multispectral Image Analysis. OR 2.0 Context-Aware Operating Theaters and Machine Learning in Clinical Neuroimaging, 38–46. https://doi.org/10.1007/978-3-030-32695-1_5

Ayala, L., Wirkert, S., Herrera, M., Hernández-Aguilera, A., Vemuri, A., Santos, E., & Maier-Hein, L. (2019b). Abstract: Multispectral Imaging Enables Visualization of Spreading Depolarizations in Gyrencephalic Brain. Bildverarbeitung für die Medizin 2019, 244–244. https://doi.org/10.1007/978-3-658-25326-4_54

Ayala, L., Seidlitz, S., Vemuri, A., Wirkert, S. J., Kirchner, T., Adler, T. J., Engels, C., Teber, D., & Maier-Hein, L. (2020). Light source calibration for multispectral imaging in surgery. International Journal of Computer Assisted Radiology and Surgery, 15(7), 1117–1125. https://doi.org/10.1007/s11548-020-02195-y

Bohndiek, S., Brunker, J., Gröhl, J., Hacker, L., Joseph, J., Vogt, W. C., Armanetti, P., Assi, H., Bamber, J. C., Beard, P. C., & others. (2019). International Photoacoustic Standardisation Consortium (IPASC): Overview (Conference Presentation). Photons Plus Ultrasound: Imaging and Sensing 2019, 10878, 108781N.

Clancy, N. T., Jones, G., Maier-Hein, L., Elson, D. S., & Stoyanov, D. (2020). Surgical spectral imaging. Medical Image Analysis, 63, 101699. https://doi.org/10.1016/j.media.2020.101699

Gröhl, J., Kirchner, T., & Maier-Hein, L. (2017). Abstract: Quantitative Photoakustische Tomografie durch lokale Kontextkodierung. In Bildverarbeitung für die Medizin 2017 (pp. 153–153).

Gröhl, J., Kirchner, T., Adler, T., & Maier-Hein, L. (2018a). Confidence Estimation for Machine Learning-Based Quantitative Photoacoustics. Journal of Imaging, 4(12), 147. https://doi.org/10.3390/jimaging4120147

Gröhl, J., Kirchner, T., & Maier-Hein, L. (2018b). Confidence estimation for quantitative photoacoustic imaging. Photons Plus Ultrasound: Imaging and Sensing 2018, 10494, 104941C.

Gröhl, J. (2019a). International Photoacoustic Standardisation Consortium (IPASC): Recommendations for standardized data exchange in photoacoustic imaging (Conference Presentation). Photons Plus Ultrasound: Imaging and Sensing 2019, 10878, 108781S.

Gröhl, J., Kirchner, T., Adler, T., & Maier-Hein, L. (2019b). Estimation of blood oxygenation with learned spectral decoloring for quantitative photoacoustic imaging (LSD-qPAI). ArXiv Preprint ArXiv:1902.05839.

Gröhl, J., & Hacker, L. (2020a). International Photoacoustic Standardisation Consortium (IPASC): Progress in the data acquisition and management theme (Conference Presentation). Photons Plus Ultrasound: Imaging and Sensing 2020, 11240, 112401F.

Gröhl, J., Kirchner, T., Adler, T., & Maier-Hein, L. (2020b). Deep learning-based oxygenation estimation for multispectral photoacoustic imaging (Conference Presentation). Photons Plus Ultrasound: Imaging and Sensing 2020, 11240, 112402P.

Gröhl, J. (2020c). Data-driven quantitative photoacoustic imaging [PhD Thesis]. Universität Heidelberg.

Gröhl, J., Schellenberg, M., Dreher, K. K., & Maier-Hein, L. (2021a). Deep learning for biomedical photoacoustic imaging: A review. Photoacoustics. Retrieved March 1, 2021, from https://www.sciencedirect.com/science/article/pii/S2213597921000033

Gröhl, J., Schellenberg, M., Dreher, K. K., Holzwarth, N., Tizabi, M. D., Seitel, A., & Maier-Hein, L. (2021b). Semantic segmentation of multispectral photoacoustic images using deep learning. Photons Plus Ultrasound: Imaging and Sensing 2021 Conference Abstract.

Holzwarth, N., Schellenberg, M., Gröhl, J., Dreher, K. K., Seitel, A., Tizabi, M., Müller-Stich, B. P., & Maier-Hein, L. (2020). Tattoo tomography: Freehand 3D photoacoustic image reconstruction with an optical pattern. Retrieved March 1, 2021, from https://arxiv.org/abs/2011.04997

Kirchner, T., Wild, E., Maier-Hein, K. H., & Maier-Hein, L. (2016). Freehand photoacoustic tomography for 3D angiography using local gradient information. Photons Plus Ultrasound: Imaging and Sensing 2016, 9708, 97083G. https://doi.org/10.1117/12.2209368

Kirchner, T., Gröhl, J., & Maier-Hein, L. (2018a). Context encoding enables machine learning-based quantitative photoacoustics. Journal of Biomedical Optics, 23(5), 056008.

Kirchner, T., Gröhl, J., Sattler, F., Bischoff, M. S., Laha, A., Nolden, M., & Maier-Hein, L. (2018b). Real-time in vivo blood oxygenation measurements with an open-source software platform for translational photoacoustic research (Conference Presentation). Photons Plus Ultrasound: Imaging and Sensing 2018, 10494, 1049407. https://doi.org/10.1117/12.2288363

Kirchner, T., Sattler, F., Gröhl, J., & Maier-Hein, L. (2018c). Signed real-time delay multiply and sum beamforming for multispectral photoacoustic imaging. Journal of Imaging, 4(10), 121.

Kirchner, T., Gröhl, J., Herrera, M. A., Adler, T., Hernández-Aguilera, A., Santos, E., & Maier-Hein, L. (2019a). Photoacoustics can image spreading depolarization deep in gyrencephalic brain. Scientific Reports, 9(1), 1–9.

Kirchner, T., Gröhl, J., Holzwarth, N., Herrera, M. A., Hernández-Aguilera, A., Santos, E., & Maier-Hein, L. (2019b). Photoacoustic monitoring of blood oxygenation during neurosurgical interventions. Photons Plus Ultrasound: Imaging and Sensing 2019, 10878, 108780C.

Kirchner, T., Gröhl, J., Sattler, F., Bischoff, M. S., Laha, A., Nolden, M., & Maier-Hein, L. (2019c). An open-source software platform for translational photoacoustic research and its application to motion-corrected blood oxygenation estimation. ArXiv Preprint ArXiv:1901.09781.

Kirchner, T. (2019d). Real-time blood oxygenation tomography with multispectral photoacoustics [PhD Thesis]. Universität Heidelberg.

Lin, J., Clancy, N. T., Hu, Y., Qi, J., Tatla, T., Stoyanov, D., Maier-Hein, L., & Elson, D. S. (2017). Endoscopic Depth Measurement and Super-Spectral-Resolution Imaging. In: Medical Image Computing and Computer-Assisted Intervention − MICCAI 2017 (pp. 39–47). https://doi.org/10.1007/978-3-319-66185-8_5

Lin, J., Clancy, N. T., Qi, J., Hu, Y., Tatla, T., Stoyanov, D., Maier-Hein, L., & Elson, D. S. (2018). Dual-modality endoscopic probe for tissue surface shape reconstruction and hyperspectral imaging enabled by deep neural networks. Medical Image Analysis, 48, 162–176. https://doi.org/10.1016/j.media.2018.06.004

Maier-Hein, L., Vedula, S. S., Speidel, S., Navab, N., Kikinis, R., Park, A., Eisenmann, M., Feussner, H., Forestier, G., Giannarou, S., Hashizume, M., Katic, D., Kenngott, H., Kranzfelder, M., Malpani, A., März, K., Neumuth, T., Padoy, N., Pugh, C., ... Jannin, P. (2017). Surgical data science for next-generation interventions. Nature Biomedical Engineering, 1(9), 691–696. https://doi.org/10.1038/s41551-017-0132-7

Maier-Hein, L., Eisenmann, M., Reinke, A., Onogur, S., Stankovic, M., Scholz, P., Arbel, T., Bogunovic, H., Bradley, A. P., Carass, A., Feldmann, C., Frangi, A. F., Full, P. M., van Ginneken, B., Hanbury, A., Honauer, K., Kozubek, M., Landman, B. A., März, K., ... Kopp-Schneider, A. (2018). Why rankings of biomedical image analysis competitions should be interpreted with care. Nature Communications, 9(1), 5217. https://doi.org/10.1038/s41467-018-07619-7

Moccia, S., Wirkert, S. J., Kenngott, H., Vemuri, A. S., Apitz, M., Mayer, B., De Momi, E., Mattos, L. S., & Maier-Hein, L. (2018). Uncertainty-Aware Organ Classification for Surgical Data Science Applications in Laparoscopy. IEEE Transactions on Biomedical Engineering, 65(11), 2649–2659. https://doi.org/10.1109/TBME.2018.2813015

Nölke, J.-H., Adler, T. J., Gröhl, J., Ardizzone, L., Rother, C., Köthe, U., & Maier-Hein, L. (2020). Invertible Neural Networks for Uncertainty Quantification in Photoacoustic Imaging. Retrieved March 1, 2021, from https://arxiv.org/abs/2011.05110

Reinke, A., Eisenmann, M., Onogur, S., Stankovic, M., Scholz, P., Full, P. M., Bogunovic, H., Landman, B. A., Maier, O., Menze, B., Sharp, G. C., Sirinukunwattana, K., Speidel, S., van der Sommen, F., Zheng, G., Müller, H., Kozubek, M., Arbel, T., Bradley, A. P., ... Maier-Hein, L. (2018). How to Exploit Weaknesses in Biomedical Challenge Design and Organization. In A. F. Frangi, J. A. Schnabel, C. Davatzikos, C. Alberola-López, & G. Fichtinger (Eds.), Medical Image Computing and Computer Assisted Intervention – MICCAI 2018 (pp. 388–395). Springer International Publishing. https://doi.org/10.1007/978-3-030-00937-3_45

Sattler, F., Kirchner, T., Gröhl, J., & Maier-Hein, L. (2018). Real-time delay multiply and sum beamforming for multispectral photoacoustics (Conference Presentation). Photons Plus Ultrasound: Imaging and Sensing 2018, 10494, 104942Q. https://doi.org/10.1117/12.2285862

Vemuri, A. S., Wirkert, S. J., & Maier-Hein, L. (2018). Hyperspectral Camera Selection for Interventional Health-care. Retrieved March 1, 2021, from https://arxiv.org/abs/1904.02709

Waibel, D., Gröhl, J., Isensee, F., Kirchner, T., Maier-Hein, K., & Maier-Hein, L. (2018a). Reconstruction of initial pressure from limited view photoacoustic images using deep learning. Photons Plus Ultrasound: Imaging and Sensing 2018, 10494, 104942S. https://doi.org/10.1117/12.2288353

Waibel, D., Gröhl, J., Isensee, F., Maier-Hein, K. H., & Maier-Hein, L. (2018b). Abstract: Rekonstruktion der initialen Druckverteilung photoakustischer Bilder mit limitiertem Blickwinkel durch maschinelle Lernverfahren. In Bildverarbeitung für die Medizin 2018 (pp. 201–201).

Wirkert, S. J., Clancy, N. T., Stoyanov, D., Arya, S., Hanna, G. B., Schlemmer, H.-P., Sauer, P., Elson, D. S., & Maier-Hein, L. (2014). Endoscopic Sheffield Index for Unsupervised In Vivo Spectral Band Selection. Computer-Assisted and Robotic Endoscopy, 110–120. https://doi.org/10.1007/978-3-319-13410-9_11

Wirkert, S. J., Kenngott, H., Mayer, B., Mietkowski, P., Wagner, M., Sauer, P., Clancy, N. T., Elson, D. S., & Maier-Hein, L. (2016). Robust near real-time estimation of physiological parameters from megapixel multispectral images with inverse Monte Carlo and random forest regression. International Journal of Computer Assisted Radiology and Surgery, 11(6), 909–917. https://doi.org/10.1007/s11548-016-1376-5

Wirkert, S. J., Vemuri, A. S., Kenngott, H. G., Moccia, S., Götz, M., Mayer, B. F. B., Maier-Hein, K. H., Elson, D. S., & Maier-Hein, L. (2017). Physiological Parameter Estimation from Multispectral Images Unleashed. Medical Image Computing and Computer Assisted Intervention − MICCAI 2017, 134–141. https://doi.org/10.1007/978-3-319-66179-7_16

Wirkert, S. J., Isensee, F., Vemuri, A. S., Maier-Hein, K., Fei, B., & Maier-Hein, L. (2018a). Domain and task specific multispectral band selection (Conference Presentation). Design and Quality for Biomedical Technologies XI, 10486, 104860H. https://doi.org/10.1117/12.2287824

Wirkert, S. J. (2018b). Multispectral image analysis in laparoscopy – A machine learning approach to live perfusion monitoring [PhD Thesis]. Karlsruher Institut für Technologie (KIT).

Wirkert, S. J., Isensee, F., Vemuri, A. S., Ayala, L. A., Maier-Hein, K. H., Fei, B., & Maier-Hein, L. (2019). Task-specific multispectral band selection. ArXiv:1905.11297 [Physics]. http://arxiv.org/abs/1905.11297

Zhang, Y., Wirkert, S. J., Iszatt, J., Kenngott, H., Wagner, M., Mayer, B., Stock, C., Clancy, N. T., Elson, D. S., & Maier-Hein, L. (2016). Tissue classification for laparoscopic image understanding based on multispectral texture analysis. Medical Imaging 2016: Image-Guided Procedures, Robotic Interventions, and Modeling, 9786, 978619. https://doi.org/10.1117/12.2216090

Zhang, Y., Wirkert, S., Iszatt, J., Kenngott, H., Wagner, M., Mayer, B., Stock, C., Clancy, N. T., Elson, D. S., & Maier-Hein, L. (2017). Tissue classification for laparoscopic image understanding based on multispectral texture analysis. Journal of Medical Imaging, 4(1), 015001. https://doi.org/10.1117/1.JMI.4.1.015001

Patents

P1. Maier-Hein, L., Kirchner, T., Groehl, J., Machine learning-based quantitative photoacoustic tomography (PAT). European patent pending under no. EP16177204.1.

P2. Seidlitz, S., Vemuri, A., Wirkert, S., Ayala, L., Kirchner, T., Adler, T., Maier-Hein, L., Method and system for augmented imaging in open treatment using multispectral information. European patent pending under no. EP18186770.8

P3. Holzwarth N., Schellenberg M., Gröhl J., Maier-Hein L. Method and system for context-aware photoacoustic imaging. European patent pending under no. EP20193102.9

P4. Seidlitz, S., Vemuri, A., Wirkert, S., Ayala, L., Kirchner, T., Adler, T., Maier-Hein, L. Method and system for augmented imaging using multispectral information. European patent pending under no. EP18186700.3

 

Awards

Bench to Bedside Award at MICCAI workshop: OR2.0 Context-Aware Operating Theaters (2019)
L. Ayala et al. for his paper "Live Monitoring of Hemodynamic Changes with Multispectral Image Analysis"

SMIT Young Investigator Award (2019)
L. Ayala et al. for his paper "Deep Learning Approach to live Monitoring of Hemodynamic Changes with Multispectral Image Analysis"

First Prize Science Slam SMIT (2019)
K. Dreher and N. Holzwarth for their contribution about photoacoustic imaging

BVM Award (2019)
S. Wirkert for his doctoral thesis "Multispectral Image Analysis in Laparoscopy - A Machine Learning Approach"

Medical Image Analysis/MICCAI Best Paper Award (2018)
J. Lin, L. Maier-Hein, D. Elson, and co-authors for their paper "Dual-modality endoscopic probe for tissue surface shape reconstruction and hyperspectral imaging enabled by deep neural networks"

DKFZ PhD Retreat Best Poster Award (2018)
Janek Gröhl for his poster at the Annual PhD Retreat of the German Cancer Research Center

1st prize conhIT Nachwuchspreis (2017)
Janek Gröhl for his Master's thesis "Machine learning based quantitative photoacoustic tomography"

Thomas-Gessmann-Special-Award (2017)
Janek Gröhl for his Master's thesis "Machine learning based quantitative photoacoustic tomography"

Berlin-Brandenburg Academy Prize (2017)
L. Maier-Hein for outstanding achievements in cancer research

Best Pitch at Science Sparks Startups (2017)
S. Wirkert, A. Vemuri for their contribution "Rainbow Surgery"

Emil Salzer Prize (2016)
Lena Maier-Hein for her contributions to the fields of computer science, physics and medicine

Thomas-Gessmann Prize (2015)
Justin Iszatt for the Bachelor's thesis "Multispektrale Bildgebung in der Medizin - Entwicklung eines multispektralen Laparoskops zur Schätzung des Sauerstoffgehalts in Geweben"

Best Paper Award at the MICCAI CARE workshop (2014)
S. Wirkert et al. for the paper "Endoscopic Sheffield Index for Unsupervised In Vivo Spectral Band Selection"

 

Keynotes and Invited Talks on COMBIOSCOPY

07 / 2018    The International Workshop of Medical Imaging (Moscow, Russia)
07 / 2018    Beyond Gynecological Surgery Congress (Clermont-Ferrand, France)
06 / 2018    GNB 2018 - Sixth National Congress of Bioengineering (Milan, Italy)
02 / 2018    Medical Information, Information Retrieval, and Data Sciences (Toulouse, France)
11 / 2017    AIS Challenge: Live Surgery & The Operating Theatre of the Future (online talk)
03 / 2017    134th Annual Congress of the German Society for Surgery (Munich, Germany)
11 / 2016    First European Workshop of MedTech Alsace (Strasbourg, France)
09 / 2016    Meet and Match on Optical Imaging (Mannheim, Germany)
09 / 2016    European Health Science Match (Heidelberg, Germany)
02 / 2016    3rd EMBO Conference on Visualizing Biological Data (Heidelberg, Germany)
02 / 2016    BioPro Baden-Württemberg Meet and Match: Optical Imaging: Future Trends in Medical Applications (Mannheim, Germany)
02 / 2016    Dutch Society for Pattern Recognition and Image Processing - Spring 2016 Meeting (Rotterdam, The Netherlands)
