A comparison of explainable artificial intelligence methods in the phase classification of multi-principal element alloys.
ABSTRACT: We demonstrate the capabilities of two model-agnostic, local, post-hoc model interpretability methods, namely breakDown (BD) and Shapley additive explanations (SHAP), to explain the predictions of a black-box classification model that establishes a quantitative relationship between chemical composition and multi-principal element alloy (MPEA) phase formation. We trained an ensemble of support vector machines on a dataset with 1,821 instances, 12 features with low pair-wise correlation, and seven phase labels. Feature contributions to the model prediction are computed by BD and SHAP for each composition. The resulting BD- and SHAP-transformed data are then used as inputs to identify groups of similar compositions via k-means clustering. Explanation-of-clusters by features reveals that the results from SHAP agree more closely with the literature. Visualization of the compositions within a cluster using ceteris-paribus (CP) profile plots shows the functional dependence between feature values and the predicted response. Despite the differences between BD and SHAP in variable attribution, only minor changes were observed in the CP profile plots. Explanation-of-clusters by examples shows that clusters sharing a common phase label contain similar compositions, which clarifies the similar-looking CP profile trends. Two plausible reasons are identified to explain this observation: (1) within the limits of a dataset with independent and non-interacting features, BD and SHAP show promise in recognizing MPEA composition clusters with similar phase labels; (2) there is more than one explanation of the MPEA phase-formation rules with respect to the set of features considered in this work.
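
The abstract describes a pipeline of (i) training an SVM ensemble, (ii) computing per-composition SHAP attributions, (iii) k-means clustering in attribution space, and (iv) CP profiles. The sketch below is a minimal illustration of that pipeline, assuming scikit-learn and the shap package; it is not the authors' code. The synthetic dataset (standing in for the 1,821-composition MPEA set), the bagged-SVC ensemble, and the sample sizes are placeholder assumptions.

import numpy as np
import shap
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC

# Stand-in for the 12-feature, 7-phase-label MPEA dataset (synthetic data).
X, y = make_classification(n_samples=150, n_features=12, n_informative=8,
                           n_classes=7, n_clusters_per_class=1, random_state=0)

# Ensemble of SVMs; bagging is one plausible scheme, the record does not
# specify the authors' exact ensembling or hyperparameters.
model = BaggingClassifier(estimator=SVC(probability=True),
                          n_estimators=10, random_state=0).fit(X, y)

# Model-agnostic SHAP attributions via KernelExplainer on a small background.
explainer = shap.KernelExplainer(model.predict_proba, shap.sample(X, 50))
sv = explainer.shap_values(X, nsamples=100)

# Older shap versions return a per-class list; newer ones a single 3-D array.
if isinstance(sv, list):
    sv = np.stack(sv, axis=-1)          # -> (n_samples, n_features, n_classes)

# Keep each composition's attribution vector for its predicted phase label.
pred = model.predict(X)
attrib = sv[np.arange(len(X)), :, pred]  # -> (n_samples, n_features)

# Cluster compositions in SHAP-attribution space, as described in the abstract.
clusters = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(attrib)
print(np.bincount(clusters))

# Ceteris-paribus profile for one composition: sweep a single feature over a
# grid while holding the others fixed, and record the predicted probability.
xi = X[0].copy()
grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 25)
rows = np.tile(xi, (25, 1))
rows[:, 0] = grid
profile = model.predict_proba(rows)[:, pred[0]]

A BD attribution for the same model could be computed in a similar spirit, e.g. with the dalex package's predict_parts(..., type="break_down"), though the exact tooling used by the authors is not specified in this record.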
SUBMITTER: Lee K
PROVIDER: S-EPMC9270422 | biostudies-literature
REPOSITORIES: biostudies-literature