Journal Club

Humanitarian AI, PyTorch Models, and Saliency Maps


George's paper this week is Sanity Checks for Saliency Maps. This work takes stock of a family of techniques that generate local interpretability explanations and assesses their trustworthiness through two 'sanity checks'. From this analysis, Adebayo et al. demonstrate that several of these tools produce saliency maps that are largely invariant to the model's weights, and so could lead a human observer into confirmation bias. Kyle discusses humanitarian AI and poses the question: how can AI help in a humanitarian crisis? Last but not least, Lan introduces Captum, an extensive interpretability library for PyTorch models.
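The paper's first check, the model parameter randomization test, compares the saliency map from a trained model against the map from the same architecture with randomized weights; if the two maps barely differ, the method is not actually explaining what the model learned. Below is a minimal sketch of that comparison using plain vanilla gradients and Spearman rank correlation (one of the similarity measures the paper uses); the toy model, random input, and target class here are illustrative stand-ins, not the paper's exact setup:

```python
import copy

import torch
import torch.nn as nn
from scipy.stats import spearmanr


def vanilla_gradient_saliency(model, x, target):
    """Saliency map as |d output[target] / d input| (simple gradient method)."""
    x = x.clone().requires_grad_(True)
    model.zero_grad()
    model(x)[0, target].backward()
    return x.grad.abs().squeeze(0)


def randomize_weights(model):
    """Return a copy of the model with re-initialized (random) parameters."""
    randomized = copy.deepcopy(model)
    for p in randomized.parameters():
        nn.init.normal_(p, std=0.01)
    return randomized


# Illustrative model and input; substitute a real trained network and image.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128),
                      nn.ReLU(), nn.Linear(128, 10))
x = torch.randn(1, 1, 28, 28)

trained_map = vanilla_gradient_saliency(model, x, target=3)
random_map = vanilla_gradient_saliency(randomize_weights(model), x, target=3)

# If the two maps correlate strongly, the saliency method is insensitive
# to the model's weights and fails the sanity check.
rho, _ = spearmanr(trained_map.flatten().numpy(), random_map.flatten().numpy())
print(f"Spearman rank correlation (trained vs. random weights): {rho:.3f}")
```

For a trustworthy method, the correlation should drop sharply once the weights are randomized; the paper finds that some popular methods keep producing near-identical, edge-detector-like maps regardless.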
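Captum exposes its attribution algorithms (Integrated Gradients, Saliency, DeepLift, and many more) behind a uniform attribute() API. Here is a minimal sketch of Integrated Gradients on a toy classifier; the model and input are placeholder assumptions, not code from the episode:

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Toy classifier standing in for a real trained model.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
model.eval()

x = torch.randn(1, 16)  # one example with 16 input features

# Integrated Gradients attributes the prediction for `target` to each input
# feature by integrating gradients along a path from a baseline to the input.
ig = IntegratedGradients(model)
attributions, delta = ig.attribute(x, target=0, return_convergence_delta=True)

print("Per-feature attributions:", attributions.squeeze(0))
print("Convergence delta:", delta.item())
```

Swapping in a different algorithm is typically just a matter of constructing a different attribution class around the same model, which is what makes the library convenient for comparing explanation methods.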
