Visualization for AI Explainability

I recently attended the 6th Annual Visualization for AI Explainability Workshop, which explored innovative ways to use visualization both to explain machine learning concepts and to build trust in ML models.

There were a number of great talks during the event; below, I go into a bit more detail on my two favourites.

Highlights

Do ML Models Memorize or Generalize?

Some interesting research from Google's People and AI Research team. They explore scenarios in which a model initially fails to generalize, memorizing its training data instead, until, after many more training epochs, there is a sudden shift and a sharp jump in performance as the model finally does generalize on the task, a phenomenon known as grokking. The article is a very visual and clean way to present some fascinating work.
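To make the phenomenon a bit more concrete, here is a rough sketch of the kind of experiment the article describes: training a small network on modular addition from only a fraction of all possible input pairs and logging train versus test accuracy over many epochs. The architecture, modulus, and hyperparameters below are my own assumptions for illustration, not the article's setup, and whether the delayed jump in test accuracy actually shows up is quite sensitive to them (the weight decay in particular).

```python
# Hypothetical grokking-style experiment (not the article's exact setup):
# train a small network on (a + b) mod p from a subset of all pairs and
# watch train vs. test accuracy. With strong weight decay and enough
# epochs, test accuracy can jump long after train accuracy saturates.
import torch
import torch.nn as nn

torch.manual_seed(0)
p = 97  # modulus for the toy task (an assumption, chosen for illustration)

# Build every (a, b) pair and its label (a + b) mod p.
pairs = torch.cartesian_prod(torch.arange(p), torch.arange(p))
labels = (pairs[:, 0] + pairs[:, 1]) % p

# Encode each pair as the concatenation of two one-hot vectors.
x = torch.cat([nn.functional.one_hot(pairs[:, 0], p),
               nn.functional.one_hot(pairs[:, 1], p)], dim=1).float()

# Hold out most pairs: memorizing the train split is easy, so good test
# accuracy requires actually learning the underlying rule.
perm = torch.randperm(len(x))
n_train = int(0.3 * len(x))
train_idx, test_idx = perm[:n_train], perm[n_train:]

model = nn.Sequential(nn.Linear(2 * p, 256), nn.ReLU(), nn.Linear(256, p))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

def accuracy(idx):
    with torch.no_grad():
        return (model(x[idx]).argmax(dim=1) == labels[idx]).float().mean().item()

for epoch in range(20000):  # grokking tends to show up late, if at all
    opt.zero_grad()
    loss = loss_fn(model(x[train_idx]), labels[train_idx])
    loss.backward()
    opt.step()
    if epoch % 1000 == 0:
        print(f"epoch {epoch:5d}  train acc {accuracy(train_idx):.3f}  "
              f"test acc {accuracy(test_idx):.3f}")
```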

Of Deadly Skullcaps and Amethyst Deceivers

A unique perspective on how trust can be gained via model explainability. The article focuses on a toy computer vision use case, predicting whether a mushroom is poisonous, and provides two versions of the model: one with explanations and one without. Through an interactive game, it becomes clear that the version with explanations offers a far more comfortable experience for a user faced with potentially eating a poisonous mushroom, and lets the user place much more confidence in the model's predictions.