
You have questions?
We might just have the answer.

How is Xpdeep's self-explanatory framework different from other explanatory methods like SHAP, DeepLIFT, LIME...?

Current explainability methods (SHAP, LIME, etc.) operate post hoc on models that have already been trained; as a result, their explanations are often incomplete, imprecise and not very robust. In contrast, Xpdeep enables the learning of self-explanatory deep models that integrate explainability directly into the model design. The explanations therefore accurately reflect the internal mechanics and properties of the model, giving a clear and precise understanding of its decision-making process.

What kind of architectures, data types and tasks can be handled by the Xpdeep framework?

Xpdeep is a generic solution applicable to all major AI tasks (supervised, semi-supervised, unsupervised, etc.), standard architectures and data types (tabular, temporal, image, etc.). An outstanding feature of Xpdeep is its native, optimised support for explaining models that work with temporal data (e.g., sensor data, evolving data).

Does Xpdeep require changing the development environment?

No. Xpdeep is used as a PyTorch library: you develop self-explanatory deep models as usual and visualise their understandable explanations through a flexible interface, during training or inference.

Does Xpdeep sacrifice model performance to provide explanations?

No, quite the opposite: the design of self-explanatory models makes it possible to capture precise, intelligible explanations of how the model works without hindering the optimisation of its performance. In fact, explainability has often been observed to help reach better performance more quickly.

Can Xpdeep's visualisation module be understood by non-technical users?

Xpdeep's visualisation interface is flexible and can be configured for AI novices, managers or experienced AI developers. Its accessible explanations make it easy for non-technical stakeholders to understand a model's predictions and reasoning without deep learning expertise. The interface presents explanations as text summaries, dashboards and graphs, and is designed to be easy to use. This fosters collaboration and informed decision-making across the organisation.

Can I use Xpdeep on already trained models?

Yes, absolutely. Xpdeep can explain a model that has already been trained, either directly or through a surrogate model; in both cases, the explanations provided by Xpdeep are more precise and more complete than those of post-hoc approaches (SHAP, LIME, etc.).

AI vs Machine Learning vs Deep Learning: What's the difference?

Machine Learning (ML) is a subset of Artificial Intelligence (AI), i.e., one way of implementing AI: instead of following hand-written rules, ML algorithms learn from data to make intelligent decisions.

Deep Learning (DL) is a subset of Machine Learning that structures algorithms in layers to form an "artificial neural network", which learns patterns on its own (experts and developers do not encode logic into the data) from massive amounts of data. Prior to the advent of Xpdeep's self-explainable Deep Learning engine, such a neural network was a mysterious "black box" that was impossible to fully apprehend.

Both ML and DL models use various forms of learning, including supervised learning (datasets are labelled by humans), unsupervised learning (the algorithm detects patterns in unlabelled data by itself), and reinforcement learning (developers define rewards and penalties that the system learns to maximise over time through trial and error).

Deep Learning technology powers a wide range of everyday products and services, including self-driving cars, digital assistants, financial analytics, and credit card fraud detection. Xpdeep develops the first self-explainable Deep Learning engine.

What does AI explainability mean?

Explainability (or “interpretability”) refers to the ability of an AI system to provide clear and understandable explanations for its decisions or predictions. It enables both developers and users without technical expertise to understand the factors and reasoning behind AI outcomes. Xpdeep is the first self-explainable deep learning engine.

Why does AI explainability matter?

AI-powered systems are used in many processes that impact our lives, such as recruitment, loan approval, disease diagnosis or the detection of national security threats. The problem is that deep learning models are "black boxes": even the algorithm's creators cannot understand how and why the model reached its decision.

Explainability helps developers build models faster and more safely, since they have access to a precise and complete account of the model behind each decision. Issues like bias and overfitting then become easier to detect and fix. Access to the full deep model and its precise, complete and intelligible explanations also allows experts and regulatory bodies to verify the system.

It is also very important for end users (recruiters, bankers, doctors, etc.) to understand the AI's actions or decisions in order to trust AI-powered systems, and ultimately to explain them to their own business stakeholders, regulators and customers.

What is the difference between interpretability and explainability?

Interpretability is typically a post-hoc process based on deductive reasoning, computed outside the model and after its predictions: algorithms make small variations in the input features and observe their impact on the outputs. Explainability as delivered by Xpdeep's engine, by contrast, is produced together with the output, from within the model itself; no additional time or computing power is required.
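To make the perturbation-based approach described above concrete, here is a minimal, generic sketch (not Xpdeep code, and not any particular library's API) that estimates a feature's importance by nudging it slightly and measuring how much the model's output changes:

```python
import numpy as np

def perturbation_importance(model, x, eps=0.1):
    """Estimate per-feature importance by perturbing each input
    feature in turn and measuring the change in the model's output."""
    baseline = model(x)
    importances = np.zeros_like(x)
    for i in range(len(x)):
        perturbed = x.copy()
        perturbed[i] += eps
        # Importance = magnitude of output change per unit of input change.
        importances[i] = abs(model(perturbed) - baseline) / eps
    return importances

# Toy "black-box" model for illustration: a fixed linear scorer.
weights = np.array([2.0, -0.5, 0.0])
model = lambda x: float(weights @ x)

x = np.array([1.0, 1.0, 1.0])
print(perturbation_importance(model, x))  # ≈ [2.0, 0.5, 0.0]
```

Note that this post-hoc probing requires one extra model evaluation per feature, which is precisely the overhead that an explanation produced from within the model avoids.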

What does Explainable Deep Learning provide?

Xpdeep's DL engine reveals how the learned model works by giving, for every decision, the variables involved, their importance, and the direction of each variable's contribution.
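To illustrate what "importance and contribution direction" mean, here is a hedged sketch (not the Xpdeep API; the feature names and weights are hypothetical) computing signed per-feature contributions for a simple linear model, where the sign indicates whether a feature pushed the prediction up or down:

```python
import numpy as np

# Hypothetical linear model: prediction = weights @ x + bias.
weights = np.array([1.5, -2.0, 0.25])
bias = 0.1

def signed_contributions(x, baseline):
    """Per-feature contribution of input x relative to a baseline input.
    Positive values push the prediction up, negative values push it down."""
    return weights * (x - baseline)

x = np.array([2.0, 1.0, 4.0])
baseline = np.zeros(3)
for name, c in zip(["age", "debt", "income"], signed_contributions(x, baseline)):
    direction = "up" if c > 0 else "down"
    print(f"{name}: {c:+.2f} (pushes prediction {direction})")
```

For deep models the decomposition is of course far less trivial, but the shape of the answer is the same: which variables mattered, by how much, and in which direction.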

Our visualization interface employs decision graphs and charts to enhance comprehension. It offers an interactive experience, enabling users to seamlessly switch between global, semi-local, or local views to gain a comprehensive understanding of each individual decision.

Does it take additional time to get the explanation?

No, it doesn't. Unlike other systems, which require extra time to probe an AI model from the outside, Xpdeep's engine produces the deep model and its precise, complete and intelligible explanations simultaneously.

Xpdeep's engine is also often faster and less complex than such external testing approaches.

Can Explainable Deep Learning eliminate biases in models?

Explainable AI does not eliminate biases automatically, but it helps uncover them by providing precise insight into the model's decision-making, so that they can then be corrected.

Can users with non-technical backgrounds understand Deep Learning explanations?

Yes, our goal is to ensure AI explanations are comprehensible for all users, even those without technical expertise. This is why we employ visualization charts, graphs, and plain language to convey the rationale behind AI decisions.


Ready to Revolutionize Deep Learning with Xpdeep?

Request a demo to:

  • Explore Xpdeep's intuitive interface and user-friendly features.
  • See how self-explainability enhances model transparency and accountability.
  • Understand how Xpdeep can address your specific industry needs.

Fill out the form to request a demo, and one of our experts will reach out to you shortly. Embark on a journey toward more interpretable, insightful, and impactful deep learning.

