Trustworthy AI for decision support in dermatology

Image credit: Authors

Abstract

Trustworthy and explainable Artificial Intelligence (AI) is especially important in critical applications such as healthcare and in other sectors of society that handle sensitive data which must be protected or cannot be shared. At present, the possibilities for using AI in real use cases are limited by current legislation, even for non-critical purposes. A clear and transparent procedure for understanding the causes behind a model's predictions is therefore of particular importance. We conducted an extensive study on the explainability of skin lesion image classifiers trained to distinguish melanoma from non-melanoma cases. We observed that the commonly used SIIM-ISIC 2020 dataset is highly imbalanced and contains many acquisition artifacts such as rulers and black dermoscopic frames. As a result, networks trained on such biased data yield biased, non-robust, and unfair models. Failure to understand a model can cause real harm to society and erode trust in AI-assisted systems. We explained the model's diagnoses using both local and global techniques of explainable AI. The local explanations operate at the instance level and target mainly difficult or edge cases; they are of particular interest both to the AI developer, as a debugging tool, and to dermatologists when making their final verdict. For the global explanations we used well-defined concepts to interpret which attributes are most determinant for detecting melanomas. The main goal of this work was to understand which features contribute to the model's predictions, in order to increase trustworthiness and ultimately deliver better explanations to physicians and patients.
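The abstract does not name a specific attribution method, but as an illustration of what an instance-level (local) explanation of a lesion classifier looks like, the sketch below computes a vanilla-gradient saliency map for a single dermoscopic image. The ResNet-50 backbone and the file name `lesion.jpg` are placeholders, not the authors' actual model or data.

```python
# Minimal sketch of a local (instance-level) explanation: vanilla-gradient
# saliency for one dermoscopic image. An ImageNet-pretrained ResNet-50 stands
# in for the trained melanoma classifier; "lesion.jpg" is a placeholder path.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("lesion.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

logits = model(img)
target = logits.argmax(dim=1).item()   # class predicted for this lesion
logits[0, target].backward()           # gradient of that score w.r.t. the pixels

# Saliency map: maximum absolute gradient over the colour channels.
# High values mark pixels the prediction is most sensitive to, e.g. whether
# the model looks at the lesion itself or at artifacts such as rulers or
# black dermoscopic frames.
saliency = img.grad.abs().max(dim=1).values.squeeze(0)   # shape (224, 224)
```

Overlaying such a map on the input image is one way to debug whether a prediction relies on clinically meaningful structure or on acquisition artifacts.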

Date
Jul 23, 2022 2:00 PM — 5:30 PM
Location
Messe Wien
Messepl. 1, Vienna, 1021
