Data Science Seminar

Safety, Robustness and Explainability in Supervised Tasks using Invertible Neural Networks

Invertible Neural Networks (INNs) are primarily used for unsupervised generative modelling. However, our recent work demonstrates that INNs offer several unique benefits when applied to supervised tasks. Some of these are especially important for medical applications, such as accurate uncertainty quantification, explainability and interpretability of decisions, and explicit detection of abnormal or unreliable inputs. In the talk, we present two complementary approaches to this, in line with the paradigms of transductive vs. inductive learning.
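For readers unfamiliar with INNs: the core idea is to build the network from blocks that can be inverted exactly, such as affine coupling layers. The following is a minimal illustrative sketch (not the speaker's implementation; all names, shapes, and hyperparameters are assumptions) of one such coupling block in PyTorch:

    # Minimal sketch of an affine coupling block, the basic invertible
    # building block used in INNs / normalizing flows. Illustrative only.
    import torch
    import torch.nn as nn

    class AffineCoupling(nn.Module):
        """Splits the input in two halves; one half parameterizes an
        invertible affine transform of the other, so the block can be
        inverted exactly and its log-determinant computed cheaply."""
        def __init__(self, dim, hidden=64):
            super().__init__()
            self.half = dim // 2
            # Small subnetwork predicting scale (s) and shift (t) for the second half.
            self.net = nn.Sequential(
                nn.Linear(self.half, hidden), nn.ReLU(),
                nn.Linear(hidden, 2 * (dim - self.half)),
            )

        def forward(self, x):
            x1, x2 = x[:, :self.half], x[:, self.half:]
            s, t = self.net(x1).chunk(2, dim=1)
            y2 = x2 * torch.exp(s) + t          # invertible affine transform
            log_det = s.sum(dim=1)              # log|det J|, needed for likelihood training
            return torch.cat([x1, y2], dim=1), log_det

        def inverse(self, y):
            y1, y2 = y[:, :self.half], y[:, self.half:]
            s, t = self.net(y1).chunk(2, dim=1)
            x2 = (y2 - t) * torch.exp(-s)       # exact inverse of the forward pass
            return torch.cat([y1, x2], dim=1)

    # Quick check that the block inverts exactly:
    block = AffineCoupling(dim=4)
    x = torch.randn(8, 4)
    y, log_det = block(x)
    assert torch.allclose(block.inverse(y), x, atol=1e-5)

Stacking such blocks yields a network whose outputs can be mapped back to inputs exactly, which is what enables the likelihood-based uncertainty quantification and input-level explanations discussed in the talk.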

Virtual Talk

Biosketch Lynton Ardizzone

Lynton Ardizzone completed his Bachelor's and Master's degrees in Physics at Heidelberg University between 2012 and 2017. His Master's thesis concerned characterizing and quantifying uncertainties for 3D Lidar measurements used in autonomous driving. In 2018, he began his PhD in computer vision at the Visual Learning Lab led by Prof. Carsten Rother, under the supervision of Prof. Ullrich Köthe. His research centers on Invertible Neural Networks and normalizing flows and their applications to conditional modelling, focusing especially on uncertainty quantification and explainability.

Contact: https://hci.iwr.uni-heidelberg.de/vislearn/people/
