XAI

A conceptual model to facilitate the planning and communication of explainable/interpretable artificial intelligence (XAI) efforts.

This project is currently under development. A progress snapshot is available here (not recommended for mobile devices or slow connections).

Justification

There is currently considerable interest in making AI more explainable (think summary: insight into causes and outcomes) and/or interpretable (think translation: detailed insight into the logic and process).

There are established techniques for achieving a degree of AI explainability and/or interpretability, such as decision trees, Grad-CAM (Gradient-weighted Class Activation Mapping), SHAP (SHapley Additive exPlanations), and LIME (Local Interpretable Model-agnostic Explanations), amongst others. A brief illustration of one such technique follows.
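
To make the discussion concrete, here is a minimal sketch of one such technique in use: SHAP attributions for a tree-ensemble regressor. It assumes the scikit-learn and shap packages are installed and uses a standard demo dataset; it illustrates the kind of output these techniques produce, and is not part of the conceptual model this project proposes.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train an ordinary "black-box" ensemble model.
X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values for tree ensembles: one additive
# per-feature contribution for each individual prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])  # shape: (5, n_features)

# The attributions for one sample sum (together with the explainer's
# expected value) to that sample's prediction, giving a local explanation.
print(shap_values[0])
```

Each row of `shap_values` explains a single prediction, which is the "summary: insight into causes and outcomes" sense of explainability described above.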

What is less clear is the process and extent by which these techniques deliver explainability/interpretability, or the timelines involved.

I am therefore developing a conceptual model that makes it easier to plan and communicate efforts to make AI more explainable and/or interpretable.

To be clear, my goal in this project is not to make AI more explainable or interpretable; that is what the techniques mentioned above, and others like them, are for. My goal is to create a resource that helps AI developers and implementers plan and/or communicate XAI efforts.

Status

This project is currently under development, with low priority, mainly because I want to think it through thoroughly.