The KTI Project

Funded by the BMG from 2021 to 2023
Project management: Tabea Bucher, MSc

AI-based diagnostic assistance systems are being developed in medical research projects to support decision-making tasks, some of which are extremely complex. Various studies have shown that AI-based systems have the potential to support practicing physicians and pathologists in cancer diagnostics. However, high accuracy alone is not sufficient to successfully move such AI-based assistance systems from the research context into clinical routine. These systems must not only be accepted by physicians and patients; they must also be transferable to image material from any source while maintaining a consistently high level of accuracy.

To foster acceptance by medical experts and patients, the black-box approaches from the field of artificial intelligence are to be made more transparent and comprehensible using methods of explainable artificial intelligence (XAI). These give physicians the ability to understand, at least in part, the decisions of the AI assistance system. Similarly, uncertainty is an important aspect that reflects the transparency and credibility of the AI system's classifications: potential errors arising from uncertain predictions can be communicated openly, increasing overall safety for the patient. Especially in medicine, the high performance of computer-assisted diagnosis systems must be ensured on image data from any source, so generalization forms a further research focus.
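
As an illustration only, the following minimal Python sketch shows two techniques of the kind referred to above: a gradient-based saliency map (an XAI method) and Monte Carlo dropout (a simple uncertainty estimate). The tiny network, input shapes, and all names are hypothetical placeholders and do not reflect the project's actual models.

import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """Hypothetical stand-in network, not the project's actual model."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Dropout(p=0.5),  # kept active at inference time for MC dropout
            nn.Linear(16, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def saliency_map(model, x):
    """Gradient-based saliency: which pixels most influence the top class score."""
    model.eval()
    x = x.clone().requires_grad_(True)
    model(x).max(dim=-1).values.sum().backward()
    return x.grad.abs().max(dim=1).values  # per-pixel relevance, shape (N, H, W)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples: int = 30):
    """Repeated stochastic forward passes; the spread of the softmax
    outputs serves as a simple per-class uncertainty estimate."""
    model.train()  # keeps dropout active (freeze batch-norm separately in real models)
    probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.std(dim=0)

model = TinyClassifier()
images = torch.randn(4, 3, 224, 224)  # stand-in for dermoscopic or histology tiles
relevance = saliency_map(model, images)
mean_probs, uncertainty = mc_dropout_predict(model, images)

In such a setup, a high per-class standard deviation would flag cases to be deferred to the physician, which corresponds to the open communication of uncertain predictions described above.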

Building on the Skin Classification Project (SCP) and the Tumor Behavior Prediction Initiative (TPI), this project evaluates transparency in AI procedures for cancer diagnostics and therapy management, as well as methods for generalization, in order to transfer the potential of state-of-the-art AI-based assistance systems from the research context to clinical practice.

 
