George's paper this week is Sanity Checks for Saliency Maps. This work takes stock of a family of techniques that provide local interpretability and assesses their trustworthiness through two 'sanity checks'. From this analysis, Adebayo et al. demonstrate that the saliency maps produced by several of these tools are largely invariant to randomization of the model's weights, and could therefore lead a human observer into confirmation bias. Kyle brings the question: how can AI help in a humanitarian crisis? Last but not least, Lan brings us the topic of Captum, an extensive interpretability library for PyTorch models.
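As a rough illustration of the randomization test discussed in the paper, the sketch below compares a saliency map from a trained PyTorch classifier against one from a copy whose final layer has been re-initialized, using Captum's Saliency attribution. The model, input, and target class are placeholders, SciPy's spearmanr stands in for the paper's similarity measures, and only a single layer is randomized rather than the paper's full cascading protocol; this is a minimal sketch, not the authors' exact setup.

```python
# Minimal sketch of a model-parameter randomization check with Captum.
# Assumptions: `model` is a PyTorch classifier, `x` is an input batch,
# and `target` is an integer class index.
import copy

import torch
from scipy.stats import spearmanr
from captum.attr import Saliency


def saliency_map(model, x, target):
    # Gradient-of-output-with-respect-to-input attribution for the target class.
    model.eval()
    x = x.clone().requires_grad_(True)
    return Saliency(model).attribute(x, target=target).detach()


def randomization_check(model, x, target):
    # Attribution from the trained model.
    original = saliency_map(model, x, target)

    # Re-initialize the last module's weights; the paper randomizes layer by
    # layer (cascading), which would repeat this step over the whole network.
    randomized = copy.deepcopy(model)
    last = list(randomized.modules())[-1]
    if hasattr(last, "reset_parameters"):
        last.reset_parameters()
    perturbed = saliency_map(randomized, x, target)

    # A method that passes the check should produce a visibly different map
    # once the weights are random, i.e. a low rank correlation here.
    rho, _ = spearmanr(original.abs().flatten().cpu().numpy(),
                       perturbed.abs().flatten().cpu().numpy())
    return rho
```

A high correlation after randomization is the warning sign the paper highlights: the saliency map is telling you more about the input (e.g. acting like an edge detector) than about what the model has learned.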