A comprehensive study on fidelity metrics for XAI

Miquel Miró-Nicolau (a,b,*), Antoni Jaume-i-Capó (a,b), Gabriel Moyà-Alcover (a,b)

(a) UGiVIA Research Group, University of the Balearic Islands, Dpt. of Mathematics and Computer Science, 07122 Palma (Spain)
(b) Laboratory for Artificial Intelligence Applications (LAIA@UIB), University of the Balearic Islands, Dpt. of Mathematics and Computer Science, 07122 Palma (Spain)

Abstract

The use of eXplainable Artificial Intelligence (XAI) systems has introduced a set of challenges that need resolution. Herein, we focus on how to correctly select an XAI method, an open question within the field. The inherent difficulty of this task is due to the lack of a ground truth. Several authors have proposed metrics to approximate the fidelity of different XAI methods. These metrics lack verification and show concerning disagreements. In this study, we propose a novel methodology to verify fidelity metrics, using a well-known transparent model, namely a decision tree. This model allowed us to obtain explanations with perfect fidelity. Our proposal constitutes the first objective benchmark for these metrics, facilitating a comparison of existing proposals and surpassing existing methods. We applied our benchmark to assess the existing fidelity metrics in two different experiments, each using public datasets comprising 52,000 images. The images from these datasets had a size of 128 by 128 pixels and were synthetic data that simplified the training process. All metric values indicated a lack of fidelity, with the best one showing a 30% deviation from the values expected for a perfect explanation. Our experimentation led us to conclude that the current fidelity metrics are not reliable enough to be used in real scenarios. From this finding, we deem it necessary to develop new metrics that avoid the detected problems, and we recommend the use of our proposal as a benchmark within the scientific community to address these limitations.

* Corresponding author
Email addresses: miquel.miro@uib.es (Miquel Miró-Nicolau), antoni.jaume@uib.es (Antoni Jaume-i-Capó), gabriel.moya@uib.es (Gabriel Moyà-Alcover)

Preprint submitted to Elsevier, January 22, 2024. arXiv:2401.10640v1 [cs.CV] 19 Jan 2024
Keywords: Fidelity, Explainable Artificial Intelligence (XAI), Objective evaluation

1. Introduction

Deep learning models have become ubiquitous solutions and are used across multiple fields, yielding astonishing results. These methods outperform other artificial intelligence (AI) models owing to their high complexity and ability to learn from large amounts of data. However, this complexity gives rise to a major drawback: the inability to know the reasons behind their results. This challenge is commonly known as the "black-box problem" [8].

To address this challenge, eXplainable AI (XAI) has emerged. According to Adadi and Berrada [1], the goal of XAI methods is to "create a suite of techniques that produce more explainable models whilst maintaining high performance levels". The growing dynamic around XAI has been reflected in several scientific events and the increase in publications, as highlighted in several recent reviews on the topic [1, 13, 28, 6, 22, 8, 11]. In particular, these methods have been used in sensitive fields such as medical tasks [35, 36, 2], where XAI methods are extensively used to gain a deeper understanding of models, improve them, and prevent life-costing mistakes.

Multiple methods have emerged to achieve this goal. Murdoch et al. [28] proposed categorizing them into two main categories: model-based and post-hoc. Model-based algorithms refer to AI models that inherently provide insights into the relationships they have learned. The main challenge in model-based explainability lies in developing models that strike a balance between simplicity, making them easily understandable to the audience, and sophistication, enabling them to effectively capture the underlying data. Post-hoc techniques are defined as methods that analyse an externally trained model to provide insights into the learned relationships. These techniques focus on understanding the specific model's behaviour rather than directly interpreting the model's internal mechanisms.

Owing to their simplicity compared with model-based approaches, post-hoc methods have gained widespread adoption, as demonstrated in various studies that reviewed the existing state of the art ([14], [24], [34]). However,
a significant challenge with post-hoc methods, as highlighted by Adebayo et al. [3], is that different post-hoc methods can produce varying explanations for the same AI model. Krishna et al. [20] identified and analysed this inconsistency and called it the disagreement problem. This problem emphasises the need to identify correct and incorrect explanations to enhance existing techniques. To achieve this, objective evaluation becomes crucial, as relying solely on subjective human evaluation, as stated by Miller [21], may not yield reliable and consistent results.

Tomsett et al. [33] identified fidelity as the main property for detecting whether an XAI algorithm is correct. According to Mohseni et al. [26], fidelity is "the correctness of an ad-hoc technique in generating the true explanations (e.g., correctness of a saliency map) for model predictions". The main limitation to calculating it is the inability to have a ground truth of the real explanation. To overcome this limitation, most authors rely on assumptions about the relationship between a correct explanation and the model to measure fidelity. While these proposals may differ in several aspects, all of them involve perturbing the input based on an explanation and analysing the resulting differences in the output of the AI model. Bach et al. [7] proposed perturbing individual pixels from most important to least important and analysing how this perturbation modifies the neural network output, generating a curve that relates the number of pixels perturbed and the output. Samek et al. [32] used an approach similar to that of Bach et al. [7]; however, instead of perturbing single pixels, they perturbed regions of pixels. In addition, they calculated the Area over the Perturbation Curve (AOPC). Rieger et al. [31] modified the proposal of Samek et al., defining the regions using a superpixel detection algorithm. Bhatt et al. [9] proposed adding perturbations to the original input, according to the importance given by the explanation, up to a completely perturbed image. To obtain the final metric, they proposed using the Pearson correlation coefficient [15] between the differences in the outputs of the model and the extent of importance that was removed. Alvarez-Melis et al. [4] performed the same calculation as Bhatt et al. [9], with the major difference being that they did not accumulate the perturbation, only perturbing one region at a time. Finally, Yeh et al. [37], instead of calculating fidelity, proposed calculating the reverse, i.e., infidelity. To do this, they proposed using the expected mean squared error between the difference in the output when a region is perturbed and the importance of that region multiplied by the amount of perturbation.
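All of the metrics above share the same perturb-and-observe core. As a minimal illustration only, and not the implementation of any specific published metric, the following sketch computes a perturbation curve in the spirit of Bach et al. [7] and Samek et al. [32]; the function name, parameter names, and block size are our assumptions.

```python
import numpy as np

def perturbation_curve(predict, image, saliency, block=64, baseline=0.0):
    """Occlude pixels from most to least important (according to `saliency`)
    and record the model output after each occlusion step."""
    flat = image.flatten().astype(float)
    order = np.argsort(saliency.flatten())[::-1]     # most important pixels first
    outputs = [predict(flat)]
    for start in range(0, order.size, block):
        flat[order[start:start + block]] = baseline  # occlude the next block of pixels
        outputs.append(predict(flat))
    return np.array(outputs)

# A faithful saliency map should make this curve drop quickly; AOPC-style metrics
# summarise the curve, whereas correlation-based metrics compare the output
# differences with the amount of importance that was removed.
```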
The existence of numerous fidelity metrics and the absence of a consensus among them pose a significant challenge, which is reminiscent of the disagreement issues identified in XAI methods by Krishna et al. [20]. In addressing this challenge, several authors have advocated the assessment of metric quality, with Hedström et al. [18] characterising this evaluation as a meta-evaluation. To this end, Tomsett et al. [33] introduced three sanity checks for fidelity metrics, establishing essential conditions for ensuring their inherent reliability. The application of these checks to the AOPC metric proposed by Samek et al. [32] and the faithfulness metric proposed by Alvarez-Melis [4] revealed that both metrics were deemed "unreliable at measuring saliency map fidelity". Using a methodology similar to that of Tomsett et al. [33], Hedström et al. [18] introduced a set of conditions for accurate measurements, and subsequently converted them into continuous metrics. The application of these metrics to 10 different fidelity measures indicated that Pixel Flipping by Bach et al. [7] performed the best, albeit without achieving perfect results. Both of these assessment approaches can be categorised as axiomatic evaluations, because they establish a set of axioms and assess whether the metrics align with them. However, a noteworthy limitation of these studies lies in the necessity to assume these axioms, especially when a lack of consensus exists between the two proposals. Efforts to reconcile and standardise these axioms are imperative for advancing the field and enhancing the reliability of fidelity metrics.

From the insights gleaned from these studies, it becomes apparent that the comprehensive XAI methodology does not necessarily eliminate the need for blind trust in black-box models; instead, it introduces its own set of non-transparent elements that demand a similar degree of trust. These components include the AI model itself, the XAI method used, and the fidelity metric employed. As visualised in Figure 1, the inclusion of elements aimed at shedding light on the opaqueness of a pipeline actually only adds complexity to the entire system.

In this study, we aim to develop a novel method to verify fidelity metrics. To accomplish this, we used a transparent model that allowed us to have a ground truth for the explanation: the well-known decision tree [10]. This model allowed us to compare fidelity metrics with the real fidelity of the explanation, surpassing the limitations of the axiomatic approaches previously used in the literature.

The rest of this paper is organised as follows. In the next section, we identify the objectives of this research. In Section 3, we propose a methodology to measure and analyse the different fidelity metrics.
Figure 1: Flows of different configurations: (a) a black-box model; (b) a black-box model combined with an XAI method; (c) a black-box model combined with an XAI method and a fidelity metric; (d) a black-box model combined with an XAI method and a verified fidelity metric. The element that must be trusted is shown inside the dashed box.

In Section 4, we specify the experimental environment and describe the fidelity metrics, models, measures, and statistical tests used for experimentation. In Section 5, we discuss the results of the two experiments defined in the previous section to analyse the different fidelity metrics, and the theoretical and practical implications of the results. Finally, in Section 6, we present the conclusions of the study.

2. Research objectives

We propose a novel approach to verify the existing fidelity metrics for XAI methods. These metrics are crucial for a correct XAI system, thereby avoiding the disagreement problem described by Krishna et al. [20] for XAI methods. However, the reliability of these metrics remains an open question in the current state of the art [33, 18].
Therefore, the main goals of the proposed verification methodology are as follows: (1) to introduce a novel objective methodology for verifying the reliability of fidelity metrics via the use of a ground truth, which works as the first benchmark for fidelity metrics, and (2) to analyse the existing metric proposals and identify the degree to which they accurately approximate the actual fidelity.

3. Method

To define our methodology, we first formalise the fidelity problem we aim to address, following the methodology proposed by Guidotti [17]. Let a function $f: X \to Y$ be a model that maps instances $x \in X$, from the set of possible input data $X$, to their respective outputs $y \in Y$, where $Y$ is the set of all ground truths for $X$. We write $f(x) = y$ to denote the AI result for a particular $x \in X$.

These AI models can be classified either as transparent models or black-box models. On the one hand, transparent models are characterised by knowing the cause behind the decision $f(x)$ for an input $x$; this cause is known as the explanation, $e_x \in E$, where $E$ is the set of all possible explanations. On the other hand, black-box models are those for which the explanation is not known. However, the explanation $e_x$ of this kind of model can be approximated by XAI methods, such as $g: X \times Y \to \hat{E}$, where $\hat{E}$ is the set of approximations to the original $E$, and $\hat{e}_x$ is the approximate explanation for an instance $x$.

With this setup, the fidelity of XAI methods becomes a distance between the real explanations and the approximations, $dist(E, \hat{E})$. The main problem with black-box models is that we do not have access to $E$; for this reason, the value $dist(E, \hat{E})$ is calculated via a proxy function, $\widehat{dist}(E, \hat{E})$. This proxy function corresponds to the different fidelity metrics found in the state of the art [7, 32, 31, 9, 4, 37].

We realise the goal of this article by checking whether $\widehat{dist}(E, \hat{E}) \approx dist(E, \hat{E})$. To do so, we use transparent models, in which $E = \hat{E}$; for this reason, we know that $dist(E, \hat{E}) = 0$, and therefore, if $\widehat{dist}(E, \hat{E}) \neq 0$, the fidelity metric is incorrect. In the following section, we define a set of experiments to check whether this requirement is fulfilled by the state-of-the-art fidelity metrics.
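The verification criterion above can be stated very compactly in code. The following is a minimal sketch of the check, under the assumption that a fidelity metric is exposed as a callable taking the model, an input, and its explanation; the function name, signature, and tolerance are our assumptions, not part of the original methodology.

```python
def verify_metric(fidelity_metric, model, images, explanations,
                  perfect_value=0.0, tol=1e-6):
    """Section 3 criterion: on a transparent model the explanation is exact,
    so a reliable fidelity metric must return (approximately) its perfect value."""
    failures = []
    for x, e_x in zip(images, explanations):
        score = fidelity_metric(model, x, e_x)  # proxy for dist(E, E-hat)
        if abs(score - perfect_value) > tol:    # deviation from the known ground truth
            failures.append((x, score))
    return failures  # an empty list means the metric passes the ground-truth check
```

For similarity-style metrics, where 1 rather than 0 is the perfect score, `perfect_value=1.0` would be used instead.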
4. Experimental setup

The experimental setup defined in this section was designed to identify the reliability of the fidelity metrics. To do so, we used a transparent model that allowed us to obtain a ground truth for both the explanations and the fidelity of the metrics.

4.1. Fidelity metrics

In the previous section, we analysed the state-of-the-art fidelity metrics. We selected four metrics for further analysis: Region Perturbation, proposed by Samek et al. [32]; Faithfulness Correlation, proposed by Bhatt et al. [9]; Faithfulness Estimate, proposed by Alvarez-Melis et al. [4]; and Infidelity, first proposed by Yeh et al. [37]. We discarded the rest of the metrics analysed in the previous section for different reasons: either the lack of meaningful differences from the ones selected (Pixel Flipping [7], IROF [31] and Selectivity [27] are similar to Region Perturbation proposed by Samek et al. [32]) or the nature of the metric (Sensitivity-N, proposed by Ancona et al. [5], is a binary metric that only indicates whether one result is correct or not). We used the implementations from Quantus [19].

4.2. AI model

We evaluated the four fidelity metrics discussed previously using a transparent model: the regression decision tree. This model is a well-known supervised and transparent AI model, based on a tree structure. Its goal is to predict the value of a target variable through binary decision rules inferred from the data [10]. These models are extensively used for tabular data; however, in our case, the data were images. To use them, we flatten each image and treat it as a flat vector, in which each pixel is considered a feature.

Decision trees are transparent; however, the usual explanations from these models are global ones, with a single explanation for the whole model instead of an explanation of the decision for one input. Fidelity metrics, in contrast, were designed to analyse local explanations. To obtain a local explanation, we developed a new and simple algorithm. Knowing that the prediction of a decision tree is defined by the path from the root node to a leaf node, and that each step of this path is selected by analysing a single feature, we proposed to set each of these features as important for the prediction. Finally, to quantify this importance, we considered the impurity criterion. As can be seen in Figure 4, where a set of example explanations is depicted, the result of this process is a sparse explanation, with very few pixels carrying any importance. This odd result, compared with the usual saliency maps found in the state of the art, is caused by the differences between convolutional neural networks (the usual models from which saliency maps are extracted) and decision trees: the former detect local patterns, whereas the latter detect global patterns. Therefore, the saliency maps obtained from decision trees do not highlight local and compact structures, but rather different pixels along the entire image. The algorithm and trained models are available at https://github.com/explainingAI/fidelity_metrics/releases/tag/1.0.
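As an illustration of this path-based local explanation, the sketch below assigns to each feature tested along the root-to-leaf path of a sample the impurity decrease of its split node, using scikit-learn's tree internals. The paper describes the weighting only as "the impurity criterion", so the impurity-decrease weighting, the function name, and the variable names here are our assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def local_explanation(tree: DecisionTreeRegressor, x: np.ndarray) -> np.ndarray:
    """Saliency over flattened pixels: every feature tested on the root-to-leaf
    path of `x` receives the impurity decrease of its split node."""
    t = tree.tree_
    saliency = np.zeros(x.shape[0])
    node_ids = tree.decision_path(x.reshape(1, -1)).indices  # nodes visited by x
    for node in node_ids:
        left, right = t.children_left[node], t.children_right[node]
        if left == right:  # leaf node: no split, nothing to attribute
            continue
        n, n_l, n_r = t.n_node_samples[node], t.n_node_samples[left], t.n_node_samples[right]
        decrease = (t.impurity[node]
                    - (n_l / n) * t.impurity[left]
                    - (n_r / n) * t.impurity[right])
        saliency[t.feature[node]] += decrease
    return saliency  # reshape to (128, 128) to visualise it as a saliency map
```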
Owing to the simplicity of decision trees, we have the set of real explanations, $E$, available. Therefore, the fidelity metrics must yield perfect results. In other words, in this case, the approximate distance defined by each metric, $\widehat{dist}$, should be zero because the explanation is perfect.

4.3. Datasets

The experiment presented in this study was based on the use of decision trees, a transparent AI model, which allowed us to obtain explanations with perfect fidelity. This method is not capable of handling complex data such as real images; therefore, we proposed training it using simple synthetic datasets. In particular, we used the AIXI-Shape dataset, proposed by Miró-Nicolau et al. [23], and the TXUXIv3 dataset, proposed by Miró-Nicolau et al. in [25]. The original goal of these two datasets was to provide defined ground truths for the explanations, which highlights their simplicity. Both datasets were made public by the authors at https://github.com/miquelmn/aixi-dataset/releases.

The AIXI-Shape dataset is a collection of 52,000 images of 128 by 128 pixels, built by combining a black background and a set of simple geometric shapes (circles, squares, and crosses). Each image varies depending on the position, size, and number of the figures present in it. The label of each image is calculated using Equation 1,

$$s_{\mathrm{sin}}(x) = \frac{1}{2}\sin\left(\frac{\pi}{2}|x_c|\right) + \frac{1}{4}\sin\left(\frac{\pi}{2}|x_s|\right) + \frac{1}{6}\sin\left(\frac{\pi}{2}|x_{cr}|\right), \quad (1)$$

where $x$ is an image from the AIXI-Shape dataset, and $|x_c|$, $|x_s|$, and $|x_{cr}|$ are the number of circles, squares, and crosses present in the image $x$, respectively.
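Equation 1 depends only on the shape counts, so the label can be transcribed directly; the following is a direct transcription in Python, with argument names of our choosing.

```python
import math

def s_sin(n_circles: int, n_squares: int, n_crosses: int) -> float:
    """Regression label of an AIXI-Shape / TXUXIv3 image from its shape counts (Eq. 1)."""
    return (1 / 2 * math.sin(math.pi / 2 * n_circles)
            + 1 / 4 * math.sin(math.pi / 2 * n_squares)
            + 1 / 6 * math.sin(math.pi / 2 * n_crosses))
```

For instance, an image containing one circle and one square (and no crosses) receives the label 1/2 + 1/4 = 0.75.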
The nature of these images reduces the appearance of out-of-domain (OOD) samples, which is one of the main concerns related to fidelity metrics, owing to the uniform background used. The most common way in which OOD samples are generated is via the addition of black pixel areas during the occlusion process; with an already black background, these occluded areas blend in and do not introduce unfamiliar patterns.

The TXUXIv3 dataset is an extension of the original AIXI-Shape dataset, proposed by the same authors in a different study [25]. The authors aimed to generate synthetic images that share the limitations of real datasets, particularly allowing increased OOD generation due to the non-uniform background. This dataset is also a collection of 52,000 images with simple figures, as in the AIXI-Shape dataset, with random locations and sizes. The main difference is the background: instead of a uniform value, the background was randomly selected from 5,640 images of the Describable Textures Dataset [12]. Similarly to AIXI-Shape, the label is once again calculated with the $s_{\mathrm{sin}}$ function of Equation 1. Examples of these two datasets are shown in Figures 2 and 3.

Figure 2: Sample of images from the AIXI-Shape [23] dataset.

Figure 3: Sample of images from the TXUXIv3 [25] dataset.
4.4. Experiments

We conducted two different experiments to analyse the behaviour of the different fidelity metrics and their reliability. We used the fidelity metrics, AI model, and datasets introduced in the previous sections. Each experiment aimed to analyse the behaviour of the metrics in a different context, as defined by the data used:

Experiment 1. We trained a decision tree [10] on the AIXI-Shape dataset, proposed by Miró-Nicolau et al. [23]. We used the training and testing divisions from the original dataset: 50,000 images for training and 2,000 for validation. We obtained the local explanations of this transparent model, as explained previously, and calculated the four fidelity metrics on the validation set. Figure 4 shows a set of images from this dataset and their corresponding explanations. In this experiment, we analysed the behaviour of the fidelity metrics in an environment with fewer OOD samples than usual, which is one of the main concerns of fidelity metrics. We report the mean and standard deviation of the different metrics.

Experiment 2. Similar to the previous experiment, we trained a decision tree [10]; however, in this case, we used the TXUXIv3 dataset proposed by Miró-Nicolau et al. [25]. We used the training and testing division from the original dataset: 50,000 images for training and 2,000 for validation. This dataset, as already discussed, allows for an increased generation of OOD samples due to the presence of a non-uniform background. We analysed the impact of these OOD samples on the fidelity metrics by repeating the same metric calculation as in the previous experiment, using the same method as before. Examples of images from this dataset and their corresponding explanations are shown in Figure 4.

In both cases, we used the decision tree implementation provided by the scikit-learn library [29], with the default hyperparameter values from this library; these values can be seen in Table 1. We have made the two resulting models publicly available (see https://github.com/explainingAI/fidelity_metrics/releases/tag/1.0).
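A minimal sketch of the training pipeline shared by both experiments is shown below: images are flattened into pixel features, a default scikit-learn regression tree is fitted, and local explanations are produced for the validation set using the `local_explanation` helper sketched earlier. The variable names are ours, and the scoring of the explanations with the four Quantus metrics is omitted here.

```python
from sklearn.tree import DecisionTreeRegressor

# train_images: (50000, 128, 128) array, train_labels: (50000,) array of s_sin values
X_train = train_images.reshape(len(train_images), -1)  # each pixel becomes a feature
model = DecisionTreeRegressor()                         # default hyperparameters (Table 1)
model.fit(X_train, train_labels)

X_val = val_images.reshape(len(val_images), -1)
explanations = [local_explanation(model, x) for x in X_val]  # perfect-fidelity saliencies
# Each (image, explanation) pair is then scored with the four fidelity metrics
# from Quantus [19], and the scores are aggregated as mean and standard deviation.
```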
Hyperparameter | Value
Criterion | Gini impurity
Splitter | Best
Maximum depth | No maximum
Minimum samples split | 2
Minimum samples leaf | 1
Minimum weighted fraction leaf | 0
Maximum features | Number of features
Maximum leaf nodes | Unlimited
Minimum impurity decrease | 0

Table 1: Hyperparameter values for decision-tree training in both experiments.

The performance of the decision trees in both experiments was not particularly important. The fidelity of the method, which is the main topic of analysis of this study, is independent of the performance of the underlying models: a good XAI method must have good fidelity for both good and bad models. In our case, we ensure perfect fidelity because decision trees are transparent models. However, for the sake of scientific openness, it is still interesting to report performance measures for the decision trees. We trained these models for a regression task and used two standard performance measures: Mean Absolute Error (MAE) and Mean Squared Error (MSE) (see Equations 2 and 3, respectively). Finally, Table 2 shows the performance measures for the validation set.

$$\mathrm{MAE} = \frac{\sum_{i=1}^{n} |y_i - \hat{y}_i|}{n}, \quad (2)$$

$$\mathrm{MSE} = \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{n}, \quad (3)$$

where $n$ is the size of the dataset, $i$ the index of the image, $y_i$ the prediction for image $i$, and $\hat{y}_i$ the ground truth of image $i$.

Metric | Experiment 1 | Experiment 2
Mean Absolute Error | 0.265 | 0.256
Mean Squared Error | 0.107 | 0.100

Table 2: Regression performance values for both experiments.
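Equations 2 and 3 correspond to standard regression metrics available in scikit-learn; a short sketch of how the values in Table 2 could be obtained on the validation set (variable names are ours):

```python
from sklearn.metrics import mean_absolute_error, mean_squared_error

preds = model.predict(X_val)                  # decision-tree predictions on validation data
mae = mean_absolute_error(val_labels, preds)  # Eq. (2)
mse = mean_squared_error(val_labels, preds)   # Eq. (3)
print(f"MAE = {mae:.3f}, MSE = {mse:.3f}")    # compare with Table 2
```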
Figure 4: Examples of images from the AIXI-Shape [23] and TXUXIv3 [25] datasets and their respective explanations obtained from decision trees: (a, c) images from AIXI-Shape with (b, d) their perfect explanations; (e, g) images from TXUXIv3 with (f, h) their explanations.

In neither Experiment 1 nor Experiment 2 did we compare our approach with any state-of-the-art baseline, because of the novelty of our approach. To the best of our knowledge, this is the first attempt to assess the reliability of fidelity metrics using transparent models with a ground truth [18, 33].

5. Results and discussion

In this section, we discuss and analyse the results obtained for the experiments defined in Section 4.

5.1. Experiment 1

Table 3 depicts the results obtained in the first experiment. The table shows the aggregated results for the fidelity metrics with two values: the mean and the standard deviation.
Metric | First experiment
Faithfulness Correlation [9] | 0.7866 (±0.2963)
Faithfulness Estimate [4] | 0.7751 (±0.2888)
Infidelity [37] | 5.9897 (±23.6442)
Region Perturbation [32] | 0.2192 (±0.1812)

Table 3: Results of the first experiment, obtained using decision trees [10] on the AIXI-Shape dataset [23]. These results are aggregations of image-wise results: the mean and standard deviation.

To analyse the results, it is crucial to bear in mind that Faithfulness Correlation [9], Faithfulness Estimate [4], and Region Perturbation [32] are similarity measures, where a value of 1 represents a perfect result. Conversely, Infidelity [37] is a distance measure, where a value of 0 indicates perfection, with a possible value range of [0, +∞).

We can see very different results. Whereas Region Perturbation [32] indicated very low fidelity, both Faithfulness Correlation [9] and Faithfulness Estimate [4] indicated much higher fidelity, but still with poor results. The fact that the results of these three fidelity metrics were so different shows a concerning lack of consensus. Furthermore, these diverging results clearly depict that, in addition to the imperfection of the results, the metrics present important problems. Finally, we can see that, because of the unbounded nature of the metric proposed by Yeh et al. [37], it is more complex to identify whether the explanations are faithful to the real explanation or not. Even so, we found a large dispersion, with a standard deviation of 23.6442 and Infidelity values ranging from 0 (the perfect result) to 481.10. This large dispersion also indicates that the results depend on the sample used, revealing a clear lack of consistency between samples in addition to the lack of consensus between metrics.

Although there are large differences between the metrics and some of them are harder to analyse, in our case we know that the explanations have perfect fidelity. Therefore, we consider that the results obtained for all four metrics indicate problems, because none of them showed the actual perfect fidelity. According to the literature [16], the presence of OOD samples is one of the reasons for the erroneous behaviour of occlusion-based approaches, such as fidelity metrics; given that the dataset used in this experiment had fewer OOD samples than are usually found in more typical datasets, we expected an even worse result in a real scenario. To confirm this expectation, in the next experiment, we tested the behaviour of these metrics on a dataset with a larger appearance of OOD samples.
5.2. Experiment 2

Table 4 depicts the results obtained in the second experiment. The table depicts the aggregated metrics for the image-wise results.

Metric | Second experiment
Faithfulness Correlation [9] | 0.2979 (±0.3401)
Faithfulness Estimate [4] | 0.4871 (±0.3532)
Infidelity [37] | 8.63e7 (±1.1e10)
Region Perturbation [32] | 0.2334 (±0.1627)

Table 4: Results of the second experiment, obtained using decision trees [10] on the TXUXIv3 dataset [25]. These results are aggregations of image-wise results: the mean and standard deviation.

We observed that all four metrics yielded significantly poorer results than those in the previous experiment, with less fidelity and larger dispersion. These results were obtained in a context in which we knew that the explanation was obtained from a transparent model, and thus we hypothetically expected perfect metric results for all data. The larger dispersion of all metrics can be seen, for example, in the maximum and minimum values of Infidelity [37], ranging from 3.09 to 4.78e10, which is much worse than in the previous experiment.

In the previous experiment, we tested the metrics in a context with fewer OOD samples. However, in this experiment, we used the TXUXIv3 dataset, proposed by Miró-Nicolau et al. [25], which increases the probability of generating OOD samples because the background is not equal to 0.

Based on these results, and bearing in mind that the explanations were obtained from transparent models, we can conclude that the studied fidelity metrics did not depict the real fidelity of the explanations to the backbone model. The results obtained from these experiments are compatible with those of previous studies that indicated the susceptibility of AI models to OOD samples and the ease with which sensitivity approaches, such as fidelity metrics, generate them [30, 16].
5.3. Theoretical and practical implications

The proposed methodology allows an objective assessment of the reliability of any fidelity metric. Considering the lack of consensus on how to faithfully calculate the real fidelity of an explanation, a meta-evaluation of the metrics that clearly depicts an evaluator's correctness can resolve the disagreement problem existing among them. Our proposed methodology can therefore serve as a quality benchmark for future metric developments.

The experimentation in this study used the proposed methodology to compare and analyse the existing fidelity metrics. The results revealed a high sensitivity to OOD samples and overall unreliable results, similar to the conclusions obtained in previous axiomatic meta-evaluation proposals [33, 18]. All metrics approximated fidelity poorly, obtaining results far from the real value.

6. Conclusion

In this study, we introduced a novel evaluation methodology designed to objectively assess the reliability of fidelity metrics. This evaluation used a transparent model, the decision tree, to serve as a quality benchmark for fidelity, due to the inherent availability of a ground truth for the explanation and, consequently, for the fidelity.

Using this methodology, we conducted a comprehensive analysis of the current state of fidelity metrics. Specifically, we consolidated them into four metrics: Region Perturbation, proposed by Samek et al. [32]; Faithfulness Correlation, proposed by Bhatt et al. [9]; Faithfulness Estimate, proposed by Alvarez-Melis et al. [4]; and Infidelity, first proposed by Yeh et al. [37].

Our experimental setup, comprising two distinct experiments, aimed to determine whether existing fidelity metrics accurately reflect the true fidelity of explanations. We hypothesised that accurate metrics would produce impeccable results for transparent decision-tree explanations. Contrary to our expectations, none of the metrics consistently delivered perfect outcomes across all samples. Moreover, their performance significantly declined when faced with an increased presence of OOD samples in the second experiment, highlighting their sensitivity to such artefacts.

The susceptibility of fidelity metrics to OOD samples renders them impractical in certain real-world scenarios. In many AI models, one class is designated to include any sample not fitting into other categories. Thus, any perturbation applied to these samples generates new ones that are confidently assigned to this catch-all class.
This invariability challenges the explanation of samples within this class through perturbation, revealing an inherent limitation. In light of these findings, we conclude that the existing state-of-the-art fidelity metrics are ill-suited for accurately calculating explanation fidelity in all practical scenarios, particularly in fields such as medical-related tasks, which are usually characterised by problems with few classes or even binary ones, rendering the use of these metrics highly problematic.

As future work, having demonstrated that the current fidelity metrics have serious problems even in a very simple context, our research underscores that it is imperative to develop novel fidelity metrics capable of being correctly used in all scenarios. These new metrics must address the deficiencies inherent in the current approaches and effectively encapsulate the genuine fidelity of explanations. In particular, the lack of reliability of these metrics in the presence of OOD samples must be fixed. These desiderata can be objectively checked using the proposed methodology. We recommend its use as an initial benchmark to avoid generating more unreliable fidelity metrics.

7. Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have influenced the work reported in this study.

8. Funding

Project PID2019-104829RA-I00 "EXPLainable Artificial INtelligence systems for health and well-beING (EXPLAINING)" funded by MCIN/AEI/10.13039/501100011033. Miquel Miró-Nicolau benefited from the fellowship FPI 0352020 from Govern de les Illes Balears.

References

[1] Adadi, A. and Berrada, M. (2018). Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access, 6:52138-52160.
[2] Adarsh, V., Kumar, P. A., Lavanya, V., and Gangadharan, G. (2023). Fair and explainable depression detection in social media. Information Processing & Management, 60(1):103168.

[3] Adebayo, J., Gilmer, J., Muelly, M., Goodfellow, I., Hardt, M., and Kim, B. (2018). Sanity checks for saliency maps. Advances in Neural Information Processing Systems, 31.

[4] Alvarez Melis, D. and Jaakkola, T. (2018). Towards robust interpretability with self-explaining neural networks. Advances in Neural Information Processing Systems, 31.

[5] Ancona, M., Ceolini, E., Öztireli, C., and Gross, M. (2018). Towards better understanding of gradient-based attribution methods for deep neural networks. In 6th International Conference on Learning Representations (ICLR). arXiv:1711.06104.

[6] Anjomshoae, S., Najjar, A., Calvaresi, D., and Främling, K. (2019). Explainable agents and robots: Results from a systematic literature review. In 18th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2019), Montreal, Canada, May 13-17, 2019, pages 1078-1088. International Foundation for Autonomous Agents and Multiagent Systems.

[7] Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.-R., and Samek, W. (2015). On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE, 10(7):e0130140.

[8] Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., and Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58:82-115.

[9] Bhatt, U., Weller, A., and Moura, J. M. (2021). Evaluating and aggregating feature-based model explanations. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, pages 3016-3022.

[10] Breiman, L. (1984). Classification and Regression Trees. Routledge.

[11] Cambria, E., Malandri, L., Mercorio, F., Mezzanzanica, M., and Nobani, N. (2023). A survey on XAI and natural language explanations. Information Processing & Management, 60(1):103111.

[12] Cimpoi, M., Maji, S., Kokkinos, I., Mohamed, S., and Vedaldi, A. (2014). Describing textures in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

[13] Došilović, F. K., Brčić, M., and Hlupić, N. (2018). Explainable artificial intelligence: A survey. In 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), pages 0210-0215. IEEE.

[14] Eitel, F., Ritter, K., and the Alzheimer's Disease Neuroimaging Initiative (ADNI) (2019). Testing the robustness of attribution methods for convolutional neural networks in MRI-based Alzheimer's disease classification. In Interpretability of Machine Intelligence in Medical Image Computing and Multimodal Learning for Clinical Decision Support: Second International Workshop, iMIMIC 2019, and 9th International Workshop, ML-CDS 2019, Held in Conjunction with MICCAI 2019, Shenzhen, China, October 17, 2019, Proceedings 9, pages 3-11. Springer.

[15] Freedman, D., Pisani, R., and Purves, R. (2007). Statistics (International Student Edition), 4th edn. WW Norton & Company, New York.

[16] Gomez, T., Fréour, T., and Mouchère, H. (2022). Metrics for saliency map evaluation of deep learning explanation methods. In International Conference on Pattern Recognition and Artificial Intelligence, pages 84-95. Springer.

[17] Guidotti, R. (2021). Evaluating local explanation methods on ground truth. Artificial Intelligence, 291:103428.

[18] Hedström, A., Bommer, P., Wickstrøm, K. K., Samek, W., Lapuschkin, S., and Höhne, M. M.-C. (2023). The meta-evaluation problem in explainable AI: Identifying reliable estimators with MetaQuantus. arXiv preprint arXiv:2302.07265.

[19] Hedström, A., Weber, L., Krakowczyk, D., Bareeva, D., Motzkus, F., Samek, W., Lapuschkin, S., and Höhne, M. M. M. (2023). Quantus: An explainable AI toolkit for responsible evaluation of neural network explanations and beyond. Journal of Machine Learning Research, 24(34):1-11.

[20] Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., and Lakkaraju, H. (2022). The disagreement problem in explainable machine learning: A practitioner's perspective. arXiv preprint arXiv:2202.01602.

[21] Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267:1-38.

[22] Minh, D., Wang, H. X., Li, Y. F., and Nguyen, T. N. (2022). Explainable artificial intelligence: a comprehensive review. Artificial Intelligence Review, pages 1-66.

[23] Miró-Nicolau, M., Jaume-i-Capó, A., and Moyà-Alcover, G. (2023). A novel approach to generate datasets with XAI ground truth to evaluate image models. arXiv preprint arXiv:2302.05624.

[24] Miró-Nicolau, M., Moyà-Alcover, G., and Jaume-i-Capó, A. (2022). Evaluating explainable artificial intelligence for X-ray image analysis. Applied Sciences, 12(9):4459.

[25] Miró-Nicolau, M., Jaume-i-Capó, A., and Moyà-Alcover, G. (2023). Assessing fidelity in XAI post-hoc techniques: A comparative study with ground truth explanations datasets. arXiv:2311.01961 [cs].

[26] Mohseni, S., Zarei, N., and Ragan, E. D. (2021). A multidisciplinary survey and framework for design and evaluation of explainable AI systems. ACM Transactions on Interactive Intelligent Systems (TiiS), 11(3-4):1-45.

[27] Montavon, G., Samek, W., and Müller, K.-R. (2018). Methods for interpreting and understanding deep neural networks. Digital Signal Processing, 73:1-15.

[28] Murdoch, W. J., Singh, C., Kumbier, K., Abbasi-Asl, R., and Yu, B. (2019). Interpretable machine learning: definitions, methods, and applications. arXiv preprint arXiv:1901.04592.

[29] Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., and Duchesnay, E. (2011). Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.

[30] Qiu, L., Yang, Y., Cao, C. C., Zheng, Y., Ngai, H., Hsiao, J., and Chen, L. (2022). Generating perturbation-based explanations with robustness to out-of-distribution data. In Proceedings of the ACM Web Conference 2022, pages 3594-3605.

[31] Rieger, L. and Hansen, L. K. (2020). IROF: a low resource evaluation metric for explanation methods. In Workshop AI for Affordable Healthcare at ICLR 2020.

[32] Samek, W., Binder, A., Montavon, G., Lapuschkin, S., and Müller, K.-R. (2017). Evaluating the visualization of what a deep neural network has learned. IEEE Transactions on Neural Networks and Learning Systems, 28(11):2660-2673.

[33] Tomsett, R., Harborne, D., Chakraborty, S., Gurram, P., and Preece, A. (2020). Sanity checks for saliency metrics. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 6021-6029.

[34] van der Velden, B. H., Kuijf, H. J., Gilhuijs, K. G., and Viergever, M. A. (2022). Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Medical Image Analysis, 79:102470.

[35] Wang, L., Lin, Z. Q., and Wong, A. (2020). COVID-Net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. Scientific Reports, 10(1):19549.

[36] Wang, X., Peng, Y., Lu, L., Lu, Z., Bagheri, M., and Summers, R. M. (2017). ChestX-ray8: Hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2097-2106.

[37] Yeh, C.-K., Hsieh, C.-Y., Suggala, A., Inouye, D. I., and Ravikumar, P. K. (2019). On the (in)fidelity and sensitivity of explanations. Advances in Neural Information Processing Systems, 32.