This material was originally circulated as an internal document on July 11, 2020, and has been revised for this web page.




●Recent trends in XAI

●Method 1: LIME/SHAP

  • Example: Classification

  • Example: Regression

  • Example: Image classification

●Method 2: ABN for image classification



Generally speaking, AI is a black box.

We want AI to be explainable because…

1. Users need to trust an AI system in order to actually use it (both its individual predictions and the model itself)

Ex: medical diagnosis and checkups, credit screening

G. Tolomei et al., arXiv:1706.06691

People want to know why they were rejected by AI screening, and what they should do in order to pass the screening.
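As a toy illustration of this idea (not taken from the reference above), the sketch below defines a hypothetical linear credit-scoring model and searches for a counterfactual explanation: the smallest single-feature change that turns a rejection into an approval. All feature names, weights, and the threshold are made up for illustration.

```python
# Toy counterfactual-explanation sketch: hypothetical linear scoring model.
def score(applicant):
    # Made-up weights; income and debts are in units of 10k.
    return (0.5 * applicant["income"]
            - 2.0 * applicant["debts"]
            + 1.0 * applicant["years_employed"])

THRESHOLD = 10.0  # approved if score >= THRESHOLD

def counterfactual(applicant, step=0.5, max_steps=100):
    """Find the smallest single-feature change (in either direction)
    that achieves approval; returns (feature, new_value) or None."""
    best = None
    for feat in applicant:
        for direction in (+1, -1):
            changed = dict(applicant)
            for k in range(1, max_steps + 1):
                changed[feat] = applicant[feat] + direction * step * k
                if score(changed) >= THRESHOLD:
                    cost = abs(changed[feat] - applicant[feat])
                    if best is None or cost < best[2]:
                        best = (feat, changed[feat], cost)
                    break
    return None if best is None else best[:2]

applicant = {"income": 20.0, "debts": 3.0, "years_employed": 2.0}
print(score(applicant))           # below the threshold: rejected
print(counterfactual(applicant))  # the cheapest change that flips the decision
```

Here the search tells the rejected applicant what to change (pay down debts) rather than merely reporting the rejection, which is exactly the kind of actionable explanation people want from AI screening.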

2. Explanations help us choose a model from among several candidates

Ex: a text classifier that labels documents as “Christianity” or “Atheism”

Both models give the correct classification, but their explanations reveal which words each model relies on, and from those it is apparent that model 1 is better than model 2.
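The situation can be sketched with two hypothetical black-box classifiers that assign the same label to a document, explained by a crude leave-one-out word importance (a simple stand-in for LIME's perturbation-based local explanations; the models and their weights are invented for illustration).

```python
# Two toy "black-box" text scorers that agree on the label "Atheism".
def model1(words):
    # Relies on topical content words (made-up weights).
    weights = {"atheism": 2.0, "god": -1.0, "evidence": 1.0}
    return sum(weights.get(t, 0.0) for t in words)

def model2(words):
    # Relies on spurious e-mail header tokens, mimicking the well-known
    # 20-newsgroups failure mode from the LIME paper.
    weights = {"posting": 2.0, "host": 1.5, "re": 0.5}
    return sum(weights.get(t, 0.0) for t in words)

def explain(model, words):
    """Importance of each word = score drop when that word is removed."""
    base = model(words)
    return {t: base - model([u for u in words if u != t]) for t in set(words)}

doc = ["re", "posting", "host", "atheism", "god", "evidence"]
top1 = sorted(explain(model1, doc).items(), key=lambda kv: -kv[1])[:2]
top2 = sorted(explain(model2, doc).items(), key=lambda kv: -kv[1])[:2]
print(top1)  # content words dominate
print(top2)  # header tokens dominate
```

Both scorers output a positive score for the document, but the explanation exposes that model 2 keys on artifacts of the e-mail format rather than the topic, so we would pick model 1 even though their predictions agree.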