
Talk: Neural Network Interpretability using Full-Gradient Representation

The IEEE Signal Processing Society, Bangalore Chapter, and the Department of Computational and Data Sciences,
Indian Institute of Science, invite you to the following talk:
 
SPEAKER     : Suraj Srinivas, PhD Scholar, Idiap Research Institute and EPFL, Switzerland
TITLE       : “Neural Network Interpretability using Full-Gradient Representation”
VENUE       : #102, CDS Seminar Hall
DATE & TIME : Jan 07, 2020, 12:00 Noon

ABSTRACT:

In this talk, I will introduce a new tool for interpreting neural network responses, namely full-gradients, which decompose the network's response into input sensitivity and per-neuron sensitivity components. This is the first proposed representation to satisfy two key properties, completeness and weak dependence, which provably cannot be satisfied simultaneously by any saliency map-based interpretability method. For convolutional networks, we also propose an approximate saliency map representation, called FullGrad, obtained by aggregating the full-gradient components. We experimentally evaluate the usefulness of FullGrad in explaining model behaviour with two quantitative tests: pixel perturbation and remove-and-retrain. Our experiments reveal that our method explains model behaviour correctly, and more comprehensively than other methods in the literature. Visual inspection also shows that our saliency maps are sharper and more tightly confined to object regions than those of other methods. This talk is based on our recent NeurIPS 2019 paper, “Full-Gradient Representation for Neural Network Visualization”.
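The aggregation described above is concrete enough to prototype. What follows is a minimal, illustrative PyTorch sketch of a FullGrad-style saliency map, not the authors' reference implementation: it assumes a classification CNN whose biases live in Conv2d layers (BatchNorm biases, which the paper also handles, are omitted for brevity), and the names postprocess and fullgrad_saliency are invented here for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

def postprocess(g, out_size):
    # psi(.): absolute value, per-map min-max rescaling, resize to input size.
    g = g.abs()
    g = g - g.amin(dim=(2, 3), keepdim=True)
    g = g / (g.amax(dim=(2, 3), keepdim=True) + 1e-8)
    return F.interpolate(g, size=out_size, mode='bilinear', align_corners=False)

def fullgrad_saliency(model, x, target_class):
    # Capture the outputs of every biased conv layer so that their gradients
    # (the per-neuron sensitivity components) can be read off after a single
    # backward pass.
    biased, handles = [], []

    def hook(module, inp, out):
        out.retain_grad()
        biased.append((module.bias.detach(), out))

    for m in model.modules():
        if isinstance(m, nn.Conv2d) and m.bias is not None:
            handles.append(m.register_forward_hook(hook))

    x = x.clone().requires_grad_(True)
    model(x)[:, target_class].sum().backward()
    for h in handles:
        h.remove()

    size = x.shape[-2:]
    # Input-sensitivity component: psi(x * grad_x f), summed over channels.
    saliency = postprocess(x.grad * x, size).sum(dim=1, keepdim=True)
    # Per-neuron (bias) components: psi(grad_z f * b), accumulated per layer.
    for bias, z in biased:
        saliency = saliency + postprocess(
            z.grad * bias.view(1, -1, 1, 1), size).sum(dim=1, keepdim=True)
    return saliency

For instance, one might call fullgrad_saliency(model, images, target_class) on a torchvision VGG-16 in eval mode, whose convolutions carry explicit biases; BatchNorm-based architectures would additionally need their implicit biases folded in, as the paper describes.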

BIOGRAPHY:
Suraj Srinivas is a third-year PhD student and research assistant at the Idiap Research Institute and EPFL, Switzerland. He works with Prof. Francois Fleuret on analytically understanding deep learning architectures. Before that, he completed his M.Sc. (Engg.) at CDS, IISc, where he worked with Prof. Venkatesh Babu on neural network compression. His research interests broadly relate to the robustness, adaptability, and interpretability of deep neural networks.

Host Faculty: Prof. Venkatesh Babu
