Upol Ehsan ponders on Mastodon:
Explainable AI suffers from an epidemic. I call it Explainability Washing. Think of it as window dressing: techniques, tools, or processes created to provide the illusion of explainability but not delivering it.
Ah yes, slapping feature importance values onto a prediction and asking your users “Are you not entertained?”
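For concreteness, "slapping feature importance values onto a prediction" can be as mechanical as multiplying a linear model's coefficients by the input: numbers that look like an explanation while telling the user nothing about what to do with them. A minimal sketch, using synthetic data and illustrative feature names:

```python
import numpy as np

# Synthetic data: 3 features, linear target (names are illustrative)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=100)

# Fit a linear model, then "explain" one prediction by splitting it
# into per-feature additive contributions
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
x_new = np.array([1.0, 0.5, -2.0])
contributions = coef * x_new          # each feature's share of the prediction
prediction = contributions.sum()

for name, c in zip(["f0", "f1", "f2"], contributions):
    print(f"{name}: {c:+.2f}")
print(f"prediction: {prediction:+.2f}")
```

The numbers are technically correct, yet they answer none of the questions a user actually has: should I trust this prediction, and what should I do differently?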
This thread pairs well with Rick Saporta’s presentation. Both urge you to focus on the decision your user needs to make when deciding what to build.