The past few years have witnessed a surge of research activity on the explainability of deep neural networks, driven by the need for trust and fairness in high-stakes applications. This has given rise to a research area called XAI (explainable AI). Evaluation is a difficult problem in XAI. In the context of image classification, a commonly used metric is deletion AUC. It considers the ordering of pixels induced by a heatmap, but ignores the heatmap values themselves. Recently, a new metric called deletion cross-entropy (DCE) has been proposed. In this project, the student is expected to evaluate two classes of explanation methods for image classification, namely sensitivity analysis (vanilla gradient, guided backpropagation, Grad-CAM) and attribution (LRP, Integrated Gradients, DeepLIFT), using both deletion AUC and DCE.
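To make the deletion AUC metric concrete, here is a minimal sketch: pixels are removed in decreasing order of heatmap importance, the model's score is recorded after each removal, and the area under the resulting curve is computed. The step count, the zero-baseline "deletion", and the toy stand-in model below are illustrative assumptions, not a reference implementation:

```python
import numpy as np

def deletion_auc(image, heatmap, model, steps=10):
    """Sketch of deletion AUC: zero out pixels in decreasing heatmap
    order, record the model's score after each chunk of deletions,
    and integrate the score-vs-fraction-deleted curve.
    `model` is any callable mapping an image to a scalar score."""
    n = heatmap.size
    order = np.argsort(heatmap.ravel())[::-1]  # most important pixels first
    img = image.copy()
    scores = [model(img)]
    chunk = max(1, n // steps)
    for i in range(0, n, chunk):
        img.ravel()[order[i:i + chunk]] = 0.0  # "delete" by zeroing
        scores.append(model(img))
    # Trapezoidal area under the curve, normalised to [0, 1]
    return sum((scores[i] + scores[i + 1]) / 2
               for i in range(len(scores) - 1)) / (len(scores) - 1)

# Toy stand-in model: score proportional to remaining pixel mass
model = lambda x: float(x.mean())
rng = np.random.default_rng(0)
image = rng.random((8, 8))
heatmap = image  # pretend the heatmap equals the image
auc = deletion_auc(image, heatmap, model)
```

A faithful heatmap makes the score drop quickly, giving a low AUC; note that only the pixel ordering enters the computation, which is exactly the limitation DCE is meant to address.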
The student will study several XAI methods, learn to use the relevant software packages, and evaluate them on ImageNet. The project is intended for a student who has taken a course on Machine Learning.
Students will learn several XAI methods and gain hands-on experience with them.