Comprehensively Understand Your Deep Model

Ensure data scientists and internal clients have the clear understanding and complete trust they need to work efficiently with AI-driven applications.

Understand both the Model and its Inferences

Unlike post-hoc methods (SHAP, LIME), Xpdeep explains both models and inferences at local and global scales without additional computational cost. Gain insights ranging from granular precision to a holistic understanding of feature importance, performance, quality metrics, and predictive regions.

Discover Model Internal Decisions to Enhance Robustness

Utilize our Python libraries to train deep models explainable by design, facilitate discussions between data scientists and experts about model behavior, enhance risk management, and ensure robust performance.


Performance Indicators

There used to be a trade-off between performance and transparency of AI models. Xpdeep disrupts this paradigm by delivering deep learning based solutions that are inherently self-explainable, ensuring high performance without sacrificing transparency.

View standard performance indicators such as accuracy, precision, F1, and confusion matrices at the global level. At the local level, or for groups of interest, access region-level error metrics (local MSE, RMSE), the target feature distribution, and the class distribution within that region.
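As a minimal sketch of the kinds of indicators described above, the following toy code computes global accuracy/precision/F1 from a confusion matrix, and a local RMSE restricted to one group of samples. This is plain Python for illustration, not Xpdeep's API.

```python
def confusion_counts(y_true, y_pred, positive=1):
    """Confusion-matrix counts (tp, fp, fn, tn) for a binary task."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    return tp, fp, fn, tn

def global_indicators(y_true, y_pred):
    """Global-level indicators: accuracy, precision, F1."""
    tp, fp, fn, tn = confusion_counts(y_true, y_pred)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "f1": f1}

def local_rmse(y_true, y_pred, region_mask):
    """Local-level error: RMSE restricted to the samples of one region."""
    pairs = [(t, p) for t, p, m in zip(y_true, y_pred, region_mask) if m]
    mse = sum((t - p) ** 2 for t, p in pairs) / len(pairs)
    return mse ** 0.5
```

In practice such indicators come from the platform; the sketch only shows how global and local views of the same predictions differ.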


Feature Importance

Access attributions for each internal decision, indicating the importance of input variables in each deployed decision, as well as global attributions in the learned model's predictions. For each discriminative feature, the mean value ± its standard deviation within the group is shown, along with an indicator of whether the group is characterized by lower or higher values for that feature.

Feature importance can help refine the input data: eliminating superfluous input factors or shortening the lookback window for time series, ultimately reducing computational demands during inference.
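The per-group summary described above (mean ± standard deviation, plus a lower/higher indicator) can be sketched as follows. This is a toy stand-in for illustration, not Xpdeep's API.

```python
from statistics import mean, pstdev

def group_feature_summary(values, group_mask):
    """Summarize one feature within a group: mean ± std, and whether the
    group sits below or above the overall mean of that feature."""
    group = [v for v, m in zip(values, group_mask) if m]
    g_mean, g_std = mean(group), pstdev(group)
    direction = "higher" if g_mean > mean(values) else "lower"
    return {"mean": g_mean, "std": g_std, "direction": direction}
```

For example, a group whose values cluster at the top of a feature's range would be reported as characterized by "higher" values of that feature.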


Predictive Regions

Model predictions fall into different regions, called predictive regions. Simultaneously visualize, for all decisions and predictive regions: decision descriptions, factor importance, quality measures, and the distribution of target characteristics.

Metrics indicate whether a predictive region is performing well or needs refinement. Additionally, see which important features characterize a given class or discriminate between classes.
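A minimal sketch of a region-level report: group predictions by predictive region, then surface each region's class distribution and error rate so under-performing regions stand out. Toy code for illustration only, not Xpdeep's API.

```python
from collections import Counter, defaultdict

def region_report(regions, y_true, y_pred):
    """Per-region summary: class distribution and error rate."""
    by_region = defaultdict(list)
    for r, t, p in zip(regions, y_true, y_pred):
        by_region[r].append((t, p))
    report = {}
    for r, pairs in by_region.items():
        errors = sum(1 for t, p in pairs if t != p)
        report[r] = {
            "class_distribution": dict(Counter(t for t, _ in pairs)),
            "error_rate": errors / len(pairs),
        }
    return report
```

A region with a high error rate is a candidate for refinement; its class distribution shows which classes it concentrates.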


What If Scenarios

What-if analysis lets data scientists precisely quantify how changing a feature affects the model's confidence in a particular prediction, and observe how such alterations shift the model's output, assessing its sensitivity and robustness.

For users, these scenarios help compare different possibilities to prepare for the future, or check how the model would react to particular data outside the training set or to possibly biased data.
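The core of a what-if scenario (change one feature, re-run the model, compare confidences) can be sketched with a stand-in logistic scorer. The model and weights here are illustrative assumptions, not Xpdeep's implementation.

```python
import math

def predict_proba(x, weights, bias=0.0):
    """Stand-in model: logistic score over a weighted sum of features."""
    z = sum(w * v for w, v in zip(weights, x)) + bias
    return 1 / (1 + math.exp(-z))

def what_if(x, feature_idx, new_value, weights):
    """Set one feature to a new value and report how confidence moves."""
    before = predict_proba(x, weights)
    modified = list(x)
    modified[feature_idx] = new_value
    after = predict_proba(modified, weights)
    return before, after, after - before
```

The returned delta is exactly the quantity a what-if scenario surfaces: how much a single alteration moves the model's confidence.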


Counterfactual Analysis

Through a graphical interface, Xpdeep precisely highlights the features that had the most significant impact on a prediction, and the minimal input change needed to alter that prediction. Counterfactual analysis empowers data scientists and stakeholders to identify exactly the conditions under which desired outcomes can be achieved.

Moreover, because this capability is available both via the API and through a graphical interface, it is no longer reserved for data scientists, which streamlines both model development and model use.
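The underlying idea of a counterfactual (the smallest input change that flips a prediction) can be sketched as a brute-force search over candidate values for one feature. Real counterfactual methods, including Xpdeep's, are far more sophisticated; this is only a conceptual toy.

```python
def counterfactual_1d(predict, x, feature_idx, candidates):
    """Return the (value, distance) pair for the candidate value closest to
    x[feature_idx] that flips predict(x), or None if none flips it."""
    original_label = predict(x)
    best = None
    for value in candidates:
        trial = list(x)
        trial[feature_idx] = value
        if predict(trial) != original_label:
            delta = abs(value - x[feature_idx])
            if best is None or delta < best[1]:
                best = (value, delta)
    return best
```

For a simple threshold classifier, the search recovers the nearest value on the other side of the decision boundary.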


Quality Metrics on the Explanations Themselves

Xpdeep provides metrics on the reliability (infidelity) and robustness (sensitivity) of model explanations at both global and local levels (decisions, predictive regions). The reliability indicator assesses how faithfully an explanation reflects the model's actual behavior, while the robustness indicator measures how stable the explanation remains under minor perturbations within a nearby neighborhood.

In extensive experiments comparing Xpdeep's explanations with state-of-the-art XAI methods (LIME, SHAP, Integrated Gradients, and Saliency), Xpdeep's explanations were found to be comparable or significantly better.
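The two ideas named above can be sketched on a toy linear model, whose exact attribution is simply its weight vector. This is a simplified rendering of the standard infidelity/sensitivity definitions from the XAI literature, not Xpdeep's metric implementation.

```python
import random

def infidelity(model, attribution, x, n_samples=200, scale=0.1, seed=0):
    """Infidelity: expected squared gap between what the attribution predicts a
    perturbation should change and what the model output actually changes."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        pert = [rng.gauss(0, scale) for _ in x]
        x_pert = [xi - pi for xi, pi in zip(x, pert)]
        dot = sum(p * a for p, a in zip(pert, attribution))
        total += (dot - (model(x) - model(x_pert))) ** 2
    return total / n_samples

def sensitivity(explain, x, n_samples=50, radius=0.05, seed=0):
    """Sensitivity: worst-case change in the explanation over small input
    perturbations within a nearby neighborhood."""
    rng = random.Random(seed)
    base = explain(x)
    worst = 0.0
    for _ in range(n_samples):
        x_pert = [xi + rng.uniform(-radius, radius) for xi in x]
        diff = max(abs(a - b) for a, b in zip(explain(x_pert), base))
        worst = max(worst, diff)
    return worst
```

For a linear model with its weights as attribution, infidelity is (numerically) zero and a constant explanation has zero sensitivity, which is the ideal case both metrics measure deviation from.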


Identify Strengths and Weaknesses

Identify weak points, false positives/negatives, biases, and underrepresented groups to facilitate discussions between data scientists and experts about model behavior. This enhances risk management and ensures robust performance. Our platform also allows easy identification of outliers within predictive regions, enabling stakeholders to address unusual patterns effectively.


Share Model Results Precisely with End Users and Stakeholders

Xpdeep extends trust and understanding to diverse stakeholders, from bank customers and marketing officers to doctors and patients.

Ensure they can make informed decisions with complete clarity.

Easier Tasks

  • Collaboration & Feedback

  • Biases & ...

  • What if & ...

  • Counterfactuals

  • Extreme probabilities explanations & ...

  • Overfit & ...

  • Adoption & ...

  • Certification & Compliance