
The First Self-Explainable Deep Learning Framework

for Business Optimization, Trust and Compliance

  • Build deep models that are explainable by design, without compromising performance
  • Improve the performance and reduce the complexity of deep models through explainability
  • Explain deep models' functioning, decisions, and inferences to users, regulators and stakeholders

Train Deep Models Explainable by Design
or Explain Existing Deep Models

Substitute our components into your code to discover your model's internal decisions

 

Implementation

 

✅ No need to change the preprocessing of data.

✅ API libraries integrated with PyTorch 2.4.

✅ Deployment on cloud and on-premises.

✅ Inferences from the visualization app or via Xpdeep APIs.
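As a rough sketch of this drop-in workflow (the `xpdeep` names below are hypothetical placeholders, not the documented API; only the plain-PyTorch parts are real):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# import xpdeep  # hypothetical package/import, shown for illustration only

# Your existing preprocessing stays unchanged.
X = torch.randn(256, 10)
y = (X.sum(dim=1) > 0).long()
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

# Before: a plain PyTorch model.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

# After (hypothetical): swap in an explainable-by-design equivalent.
# model = xpdeep.ExplainableModel(input_dim=10, hidden_dims=[32], n_classes=2)

# The training loop is also unchanged.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for xb, yb in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(xb), yb)
    loss.backward()
    optimizer.step()
```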

Models & Data Supported

 

✅ Explain pre-trained models.

✅ Train new models explainable by design.

✅ Compatible with tabular, text, images, time series data.

✅ Supports MLPs, transfer learning, supervised learning, and more.


 

Our mission

 


UNDERSTAND


Understand your Deep Model Functioning, Strengths, and Weaknesses

Utilize our Python libraries to train deep models explainable by design:

  • Unlike post-hoc methods (SHAP, LIME, ...), Xpdeep explains both the model and its inferences, at local and global scales, with no additional computational cost,
  • Access unique analyses: counterfactual, fairness, and more,
  • Identify weak points, false positives/negatives, biases, and underrepresented groups.

This enables data scientists and experts to discuss model behavior, improving risk management and ensuring performance and robustness.

 


 


OPTIMIZE


Shape your Model Precisely According to your Specific Needs

At the design stage, data scientists can flexibly adapt models to business constraints:

  • Quickly identify model errors and instabilities for more efficient and rapid optimization,
  • Detect superfluous input factors to build more efficient and less complex models, thus reducing computational demands during inference,
  • Tune model complexity to manage underfitting or overfitting in predictive regions,
  • Bring engineering and other internal clients back into the data-science loop to maximize the fit between the model and its intended use.

This precise optimization ensures the deep model adequately meets business objectives and needs.

 


EXPLAIN


Get Deep Insights for Trust, Compliance, and Risk Management

Provide clear, comprehensible insights that ensure all stakeholders understand the model's decision-making processes, fostering confidence in AI applications:

  • Explain the model and its inferences for adoption and trust purposes,
  • Explain generated inferences to analyze and control future predictions,
  • Explain past predictions when auditing and investigating critical scenarios and incidents,
  • Enable your legal, financial, and insurance teams to mitigate the risks of deploying deep models in customer-facing products,
  • Simplify regulatory compliance by offering tools to analyze, document, and prove how AI models work.

This documentation is crucial for adhering to regulations and demonstrating accountability, minimizing financial and legal risks associated with AI integration.

Unleash the Commercial Potential of your Deep Models


Trust

Transparent and intelligible model explanations enhance trust among developers, internal clients, and end-users.
Internal stakeholders and future users can independently understand how the model works and why certain predictions are made. They can also help analyze and optimize models, correcting misclassified samples and checking for biases against groups and individuals.
 

Compliance

Simplify adherence to regulatory standards with clear, documented explanations.
Until now, even developers struggled to document deep models. Xpdeep's transparent deep models allow compliance and audit teams to test, prove, and document them directly.
 

Risk Management

Facilitate risk management by identifying the weaknesses of your deep models.
Explanations are essential when deploying deep models in software and hardware sold to third parties: they let your financial, legal, and insurance teams identify and mitigate risk factors.
 

Become a Partner

Consultants, Integrators, Application Developers, Independent Software Vendors, OEMs: Partner with Xpdeep.

Take your Customers to the Forefront of AI Innovation, Ethics, and Performance

Unveil Xpdeep's self-explainable deep learning framework to your customers, leaving behind the opacity of the black box. Address their operational and regulatory requirements for trust, explainability, and superior accuracy in deep models.

Our visualization module gives your customers transparent explanations of deep models of a kind they have never seen before, enhancing model performance and opening up new avenues of opportunity.

Let your Customers Finally Access Deep Model Explanations

Deploy Xpdeep-generated self-explainable deep models into your customers' applications or the machines/software you provide.

Let them access explanations through our visualization module or APIs, empowering them to harness the potential of high-accuracy, explainable, and trustworthy AI, uniquely tailored to their needs and complemented by your expertise.

Frequently asked questions

How is Xpdeep's self-explainable framework different from explainability methods like SHAP, DeepLIFT, or LIME?

Current explainability methods (SHAP, LIME, ...) operate on models that have already been trained (they are model-agnostic); as a result, their explanations are often incomplete, imprecise, and not robust. In contrast, Xpdeep learns self-explainable deep models that integrate explainability directly into the model design. This ensures the explanations accurately represent the internal mechanics and properties of the model, providing a clear and precise understanding of its decision-making process.

What kind of architectures, data types and tasks can be handled by the Xpdeep framework?

Xpdeep is a generic solution applicable to all major AI tasks (supervised, semi-supervised, unsupervised, ...), standard architectures, and data types (tabular, temporal, image, ...). A standout feature of Xpdeep is its native, optimized support for explaining models that handle temporal data (sensor data, evolving data, etc.).

Does Xpdeep require changing the development environment?

No. Xpdeep is used as a PyTorch library for developing self-explainable deep models and for visualizing their explanations via a flexible interface, during either training or inference.

Does Xpdeep sacrifice model performance for explainability?

No, on the contrary: the design of self-explainable models captures precise, intelligible explanations of how the model works without hindering the optimization of its performance. In fact, explainability has often been observed to help reach better performance faster.

Can Xpdeep's visualization module be understood by non-technical users?

Xpdeep's visualization interface is flexible and can be configured for AI novices, managers, or experienced AI developers. Xpdeep's accessible explanations make it easy for non-technical stakeholders to understand the model's predictions and reasoning without deep learning expertise. The interface provides explanations in the form of text summaries, dashboards, and graphs, and is designed to be easy to use. This fosters collaboration and informed decision-making across the organization.

Can I use Xpdeep on already trained models?

Yes, absolutely. Xpdeep can explain a model that has already been trained, either directly or via a surrogate model (a simpler model trained to mimic the original, as sketched below); in both cases, the explanations Xpdeep provides are more precise and complete than those of post-hoc approaches (SHAP, LIME, etc.).
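For readers unfamiliar with the surrogate idea, here is a generic, minimal sketch of it in PyTorch; it illustrates the general technique only, not Xpdeep's implementation, and the model shapes and training details are illustrative assumptions:

```python
import torch
import torch.nn as nn

# A frozen, already-trained "teacher": the black box to be explained.
teacher = nn.Sequential(nn.Linear(10, 64), nn.ReLU(),
                        nn.Linear(64, 64), nn.ReLU(),
                        nn.Linear(64, 2))
teacher.eval()

# A simpler "student" (surrogate) that is easier to inspect.
student = nn.Sequential(nn.Linear(10, 8), nn.ReLU(), nn.Linear(8, 2))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
for _ in range(200):
    x = torch.randn(128, 10)            # probe inputs
    with torch.no_grad():
        target = teacher(x)             # teacher behavior to imitate
    loss = nn.functional.mse_loss(student(x), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The student now approximates the teacher and can be analyzed in its place.
```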

AI vs Machine Learning vs Deep Learning: What's the difference?

Machine Learning (ML) is a subset of Artificial Intelligence (AI), i.e., one way of implementing AI, in which algorithms learn from data instead of following hand-coded rules.

Deep Learning (DL) is a subset of Machine Learning that structures algorithms in layers to create an "artificial neural network", which learns patterns on its own (experts and developers do not add logic to the data) from massive amounts of data. Prior to the advent of Xpdeep's self-explainable Deep Learning engine, this neural network was a mysterious "black box" that was impossible to fully understand.

Both ML and DL models use various forms of learning, including supervised learning (datasets are labeled by humans), unsupervised learning (the algorithm detects patterns in unlabeled data by itself), and reinforcement learning (developers define rewards and penalties that the system learns to maximize over time).
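As a toy contrast between the first two paradigms (purely illustrative; the data and the single clustering step below are made up for the example):

```python
import torch

data = torch.randn(100, 3)                 # 100 examples, 3 features each

# Supervised: a human-provided label accompanies every example,
# and the model learns the mapping data[i] -> labels[i].
labels = torch.randint(0, 2, (100,))

# Unsupervised: no labels; the algorithm finds structure on its own.
# One k-means-style step: assign each point to the nearest of 2 centers.
centroids = data[:2].clone()
assignments = torch.cdist(data, centroids).argmin(dim=1)
```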

Deep Learning technology powers a wide range of everyday products and services, including self-driving cars, digital assistants, financial analytics, and credit-card fraud detection. Xpdeep develops the first self-explainable Deep Learning engine.

What does AI explainability mean?

Explainability (or "interpretability") refers to the ability of an AI system to provide clear and understandable explanations for its decisions or predictions. It enables both developers and users without technical expertise to understand the factors and reasoning behind AI outcomes. Xpdeep is the first self-explainable deep learning engine.

Why does AI explainability matter?

AI-powered systems are used in many processes that affect our lives, such as recruitment, loan approval, disease diagnosis, and national-security threat detection. The problem is that deep learning models are "black boxes": even the algorithms' creators cannot understand how and why a model reached its decision.

Explainability helps developers build models faster and more safely, since the precise and complete model behind each decision is accessible; issues like bias and overfitting become easier to solve. Access to the full deep model and its precise, complete, and intelligible explanations also allows experts and regulatory bodies to verify the system.

It is also very important for end users (recruiters, bankers, doctors, ...) to understand an AI's actions and decisions in order to use AI-powered systems, and eventually to explain them to their own business stakeholders, regulators, and customers.

What is the difference between interpretability and explainability?

Interpretability is a deductive process computed outside of, and after, the model: algorithms typically make small variations in the input features and observe their impact on the outputs, as sketched below. Explainability as delivered by Xpdeep's engine, by contrast, is produced together with the output, from within the model; no additional time or machine power is required.
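Here is a minimal, generic sketch of the perturbation-style interpretability described above (the model and step size are illustrative assumptions, not any particular library's algorithm). Note the extra forward passes, one per feature: this is the added cost that an explanation produced from within the model avoids:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
model.eval()

x = torch.randn(1, 4)                      # one input to explain
with torch.no_grad():
    baseline = model(x)                    # original prediction
    effects = []
    for i in range(x.shape[1]):
        perturbed = x.clone()
        perturbed[0, i] += 0.1             # small variation in feature i
        effects.append((model(perturbed) - baseline).item())

print(effects)  # per-feature effect on the output (sign = direction)
```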

What does Explainable Deep Learning provide?

Xpdeep's DL engine reveals how the learned model works by providing, for every decision, the variables involved, their importance, and the direction of each variable's contribution.

Our visualization interface employs decision graphs and charts to enhance comprehension. It offers an interactive experience, enabling users to seamlessly switch between global, semi-local, and local views to gain a comprehensive understanding of each individual decision.
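To make the preceding description concrete, here is a hypothetical shape such a per-decision explanation could take; the field names and values are illustrative assumptions, not Xpdeep's actual output schema:

```python
from dataclasses import dataclass

@dataclass
class FeatureContribution:
    variable: str      # input variable involved in the decision
    importance: float  # magnitude of its contribution
    direction: str     # "+" pushes toward the prediction, "-" pushes away

# One decision's explanation: which variables mattered, how much, which way.
explanation = [
    FeatureContribution("age", 0.42, "+"),
    FeatureContribution("income", 0.31, "-"),
    FeatureContribution("tenure", 0.08, "+"),
]
```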

Does it take additional time to get the explanation?

No, it doesn't. Unlike systems that require extra time to probe an AI model from the outside, Xpdeep's engine produces the deep model and its precise, complete, and intelligible explanations simultaneously.

Xpdeep's engine is therefore often faster, and less complex, than approaches that must repeatedly test the model after the fact.

Can Explainable Deep Learning eliminate biases in models?

Explainable AI does not eliminate biases automatically, but it helps uncover them by providing precise insights into decision-making, so developers can correct them.

Can users with non-technical backgrounds understand Deep Learning explanations?

Yes, our goal is to ensure AI explanations are comprehensible for all users, even those without technical expertise. This is why we employ visualization charts, graphs, and plain language to convey the rationale behind AI decisions.


Start optimizing your deep models now