By Patrick Elder
Director, Data and AI Center of Excellence

From the explosive popularity of large language models (LLMs) such as ChatGPT, Bard, and FLAN-T5, to personalized recommendations in ecommerce and autonomous vehicles, AI systems already provide huge value and utility to our daily lives. But all too often, end users lack insight into how these systems’ algorithms generate actionable information in the first place. They operate as “black boxes,” creating significant risks for decision makers who rely on accurate information.

Enter explainable AI (XAI) — the key to gaining transparency into how AI systems make decisions and building trust between these algorithms and their end users. Trust is especially critical when we consider the still-evolving legal landscape surrounding AI. There is growing concern regarding the discriminatory impacts AI may have and how those affected can seek recourse, meaning the technology’s deployer may shoulder legal responsibility for explaining its use and decision inputs.

Different Paths to XAI for Different Models

Before we can break down the value and utility of XAI, it’s important to understand that in the realm of AI and machine learning (ML), models are often classified based on their level of transparency and interpretability: the user’s ability to determine cause and effect from model outputs. These classifications — glass box, white box, and black box — serve as descriptors that help us understand the inner workings of different AI models.

GLASS BOX MODELS

Glass box models, as the name suggests, represent the most transparent and interpretable category. These models are typically rule-based or explicitly programmed, making them highly interpretable and easily explainable. Federal agencies typically gravitate toward glass box models because fielding AI on government systems demands a high degree of transparency.

Example: A 3-day intensive care unit readmission prediction model that helps identify at-risk patients and improves discharge decisions.
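
To make this concrete, here is a minimal sketch of what an explicitly rule-based, glass box prediction might look like. The features, thresholds, and rules are invented for illustration and are not drawn from any real clinical model.

```python
# Hypothetical, hand-written rules for illustration only -- not a validated clinical model.
def icu_readmission_risk(age: int, length_of_stay_days: int, prior_admissions: int) -> str:
    """Return a risk label along with the exact rule that produced it."""
    if prior_admissions >= 2 and length_of_stay_days >= 7:
        return "HIGH (rule: 2+ prior admissions and a stay of 7+ days)"
    if age >= 75 or prior_admissions >= 1:
        return "MODERATE (rule: age 75+ or any prior admission)"
    return "LOW (rule: no risk criteria met)"

print(icu_readmission_risk(age=80, length_of_stay_days=3, prior_admissions=0))
# -> MODERATE (rule: age 75+ or any prior admission)
```

Because every output carries the rule that fired, the model explains itself: there is no gap between what it computes and what it can tell the user.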

WHITE BOX MODELS

White box models share many characteristics with glass box models but are often based on traditional ML algorithms, such as decision trees and linear regression. Although not as transparent as glass box models, white box models still provide a degree of interpretability, enabling users to gain insights into the decision-making process.

Example: Object detection and recognition algorithms in autonomous vehicles such as self-driving cars.

BLACK BOX MODELS

Black box models, which represent the most opaque and complex category and are typically based on deep learning architectures such as neural networks, operate with a high level of abstraction. While black box models can achieve remarkable accuracy and performance in challenging use cases, their lack of interpretability poses challenges when it comes to understanding the reasoning behind their predictions.

Example: The algorithms that drive personalized recommendations from major retailers like Amazon or streaming services like Netflix.

Understanding the distinctions among glass box, white box, and black box models helps clarify how explainability can be achieved for each. Moreover, when ethical and responsible use is in question, a lack of explainability should deter the use of certain models altogether.

XAI: Explainability vs. Interpretability 

In most instances, when someone talks about “explainability” in AI, they actually mean interpretability. Interpretability focuses on understanding causation in an AI model’s results: gaining insight into the relationships between input features, the model’s internal representations, and its output. It does not necessarily require a comprehensive explanation of every individual decision the model makes. In general, full interpretability exists only for a small class of glass box models.

Explainability is quite different: it aims to justify the results and to clearly articulate how a model reached a specific outcome. In essence, explainability provides a clear and justifiable account of the model’s decisions, while interpretability focuses on understanding the model’s behavior and gaining insights into its functioning.

Explainability is possible with white box models, where you have access to the model’s internal information, and, to a lesser extent, with some black box models, although the more complex the model, the harder explainability is to achieve.

What is XAI?

XAI is a set of tools, frameworks, and methodologies to help you bridge the gap between opaque AI systems and human understanding. But how, exactly, can you implement XAI to achieve model transparency?

Effective XAI Techniques

Feature Visualization. XAI methods can visualize and highlight the important features learned by different layers of a black box model during training, the features that most strongly influence the model’s output. Feature visualization can also support near-real-time monitoring of model performance and drift, providing insight into model degradation over time and signaling when it is time to retrain or replace a model.
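
Layer-level feature visualization generally requires framework-specific hooks into the network, but a simpler, model-agnostic relative of the idea is permutation feature importance: shuffle one feature at a time and measure how much performance drops. The sketch below uses scikit-learn on a synthetic dataset purely for illustration.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data standing in for a real tabular problem.
X, y = make_classification(n_samples=500, n_features=8, n_informative=3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

plt.bar(range(X.shape[1]), result.importances_mean)
plt.xlabel("feature index")
plt.ylabel("mean accuracy drop when shuffled")
plt.title("Permutation feature importance")
plt.show()
```

Features whose shuffling hurts accuracy the most are the ones the model leans on; tracking these importances over time is one simple way to watch for drift.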

Rule-Based Explanations. Rule-based models, like decision trees, can be used to explain the decision logic step by step, providing a clear path of reasoning.
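
As a sketch, scikit-learn can print a trained decision tree’s rules directly; the snippet below fits a shallow tree on the bundled Iris dataset and dumps its decision logic as nested if/else statements.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

# Print the learned rules as a readable, step-by-step decision path.
print(export_text(tree, feature_names=list(iris.feature_names)))
```

Each branch in the printed output is a human-readable rule, so any individual prediction can be traced from the root to a leaf.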

LIME and SHAP. Techniques like Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) generate locally interpretable explanations for individual predictions.
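
As one illustration, the snippet below uses the shap package to compute Shapley-value attributions for a single prediction from a tree ensemble; the dataset and model are placeholders, and exact return shapes can vary across shap versions.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
model = RandomForestRegressor(random_state=0).fit(data.data, data.target)

# TreeExplainer computes Shapley-value attributions efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:1])   # attributions for one prediction

# Each value is one feature's additive contribution relative to the expected prediction.
for name, value in zip(data.feature_names, shap_values[0]):
    print(f"{name}: {value:+.3f}")
```

LIME follows a similar pattern (for example, via lime.lime_tabular.LimeTabularExplainer), fitting a simple surrogate model around the individual prediction being explained.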

Layer-wise Relevance Propagation (LRP). LRP assigns relevance scores to each input feature, propagating them back through the neural network to explain the model’s output, essentially following the journey backward to specify which inputs had the greatest impact on the result.
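
Mature implementations exist (for example, LRP in PyTorch’s Captum library), but the core epsilon rule is simple enough to sketch from scratch. The tiny two-layer network below uses random weights purely to show how relevance flows backward from the output to the inputs.

```python
import numpy as np

def lrp_dense(a_prev, W, z, R_out, eps=1e-6):
    """Epsilon-rule LRP for one dense layer: redistribute the relevance R_out of the
    layer's outputs back to its inputs, in proportion to each input's contribution
    a_prev[j] * W[k, j] to the pre-activation z[k]."""
    z_stab = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabilized denominator
    s = R_out / z_stab
    return a_prev * (W.T @ s)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)        # random weights, illustration only
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)

x = rng.normal(size=4)                               # a single input example
z1 = W1 @ x + b1
a1 = np.maximum(0, z1)                               # ReLU hidden layer
z2 = W2 @ a1 + b2                                    # class scores

c = int(np.argmax(z2))                               # explain the top-scoring class
R2 = np.zeros_like(z2)
R2[c] = z2[c]                                        # start from that class's score
R1 = lrp_dense(a1, W2, z2, R2)                       # relevance at the hidden layer
R0 = lrp_dense(x, W1, z1, R1)                        # relevance at the inputs
print("input relevance scores:", R0)
```

The input relevance scores sum (approximately) to the explained class score, which is what makes LRP attributions easy to read as a decomposition of the output.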

Contrastive Explanations. This approach involves explaining a prediction by contrasting it with alternative outcomes, showing what factors led to the final decision.
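
A simple way to approximate this is a counterfactual probe: perturb one feature at a time and report the smallest change that flips the model’s decision. The sketch below is a naive, illustrative version of the idea (synthetic data, arbitrary perturbation sizes), not a production counterfactual method.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in data; in practice these would be named, domain-specific features.
X, y = make_classification(n_samples=400, n_features=5, n_informative=3, random_state=0)
model = LogisticRegression().fit(X, y)

x = X[0].copy()
original = model.predict([x])[0]

# Contrast the prediction with nearby alternatives: nudge one feature at a time
# and report the first change that flips the decision.
for i in range(X.shape[1]):
    for delta in (-2.0, -1.0, 1.0, 2.0):
        x_alt = x.copy()
        x_alt[i] += delta
        if model.predict([x_alt])[0] != original:
            print(f"Shifting feature {i} by {delta:+.1f} flips the prediction "
                  f"from class {original} to class {model.predict([x_alt])[0]}.")
            break
```

The features whose small changes flip the outcome are exactly the factors a contrastive explanation highlights: the model said A rather than B because of these values.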

By incorporating these and other methods, XAI enables users to gain insights into the decision-making process of AI systems, understand their strengths and limitations, and identify potential biases or errors.

ECS: Driving Responsible AI Adoption 

XAI is a necessary goal because increased transparency fosters trust in AI and facilitates its deployment in critical domains such as defense, healthcare, finance, and autonomous vehicles, where clear explanations are essential for user acceptance, regulatory compliance, and, in some cases, life-saving decisions.

At ECS, our data, AI, and ML experts — with more than 1,000 combined certifications, accreditations, and awards — help commercial and federal organizations manage the unmanageable and execute critical missions with AI-fueled insights. As a leading contractor supporting the government’s AI missions, we’re committed to reliable, fair, and transparent AI for all.

 
