Explainable AI (XAI): Where Does It Stand Today?

Stavros Theocharis / November 30, 2022 / Explainable AI

A quick review of its current state

Introduction

Artificial Intelligence is quickly becoming an essential component of business, academia, and society, and Explainable Artificial Intelligence (XAI) is the branch of AI that explains the reasoning behind an intelligent system’s predictions, suggestions, and conclusions. Anyone who wants to ensure that artificial intelligence is used responsibly, in the service of academia and the public interest, should embrace and push for XAI, which helps practitioners to:

  • Ensure that systems comply with regulations and legislation.
  • Avoid potential harm.
  • Provide justifications that are responsible and credible.
  • Reduce the effects of bias, unfairness, and misunderstanding on model accuracy, and understand how to interpret the models.
  • Verify the accuracy of both the models and the explanations themselves.

Basic XAI taxonomies

There is a difference between explicitly training explainable models and explaining an opaque model after it has been trained. The first category is referred to as transparent models, or ante-hoc explainability, whereas the second is referred to as post-hoc explainability (Speith T., 2022). Linear regression, decision trees, k-NN, and similar models are well-known examples of transparent models, and they perform adequately across a wide range of tasks and data sets. However, such models do not always deliver the accuracy required in practice; this is where the concept of a performance-explainability trade-off becomes relevant (Arrieta A. B. et al., 2020). If a simpler model fails to provide satisfactory results, a more complex model is used instead. These sophisticated, opaque models are known as “black boxes” because their internal logic is not directly interpretable.
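
To make this trade-off concrete, here is a minimal sketch using scikit-learn; the dataset and model choices are illustrative assumptions, not taken from any of the cited works. A shallow decision tree exposes its full decision logic, while a random forest typically scores higher but offers no comparable human-readable structure.

```python
# Illustrative sketch of the performance-explainability trade-off.
# Dataset and models are arbitrary choices for demonstration only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Ante-hoc / transparent: a shallow tree whose reasoning is fully readable.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("tree accuracy:", tree.score(X_test, y_test))
print(export_text(tree, feature_names=list(X.columns)))  # the model's entire logic

# Opaque "black box": usually more accurate, but with no readable structure.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("forest accuracy:", forest.score(X_test, y_test))
```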

XAI strategy

Ridley M. (2022) argues that the three main questions to answer concern what makes an explanation effective, who exactly the explanation is intended for, and in what manner the explanation will be presented. Explanations depend on the surrounding circumstances: as with almost all software applications, the quality of an explanation is judged by the end user’s needs and goals, as well as by the XAI system that produces it.

Post-hoc XAI methods

Some of the well-known post-hoc XAI methods are:

  • LIME, which approximates a black-box prediction locally with a simple interpretable surrogate model.
  • SHAP, which attributes a prediction to individual features using Shapley values from game theory.
  • Counterfactual explanations, which describe the smallest change to an input that would flip the prediction.
  • Saliency and attribution maps (e.g., Grad-CAM), which highlight the input regions a deep network relies on.
  • Permutation feature importance and partial dependence plots, which probe a model’s global behavior (a sketch follows this list).
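
As one concrete example, permutation feature importance is a model-agnostic post-hoc method available directly in scikit-learn; methods such as LIME and SHAP live in their own libraries. The sketch below is illustrative, with an arbitrary dataset and model assumed for demonstration.

```python
# Illustrative sketch of a model-agnostic post-hoc method:
# permutation feature importance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Any fitted estimator works; the explainer never looks inside it.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the drop in score;
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])
for name, mean in ranked[:5]:
    print(f"{name}: {mean:.3f}")
```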

Human-centered AI

The difficulty of explaining AI is fundamentally a human-centered issue, one that researchers of expert systems struggled with decades ago. The current resurgence of interest in explanations is the result of two developments happening at the same time: first, the recognition of the increasing complexity and interdependence of modern AI products, and second, the growing recognition of the crucial roles they play as arbiters in numerous facets of human life. Current social science findings on how individuals explain, interpret, and construct shared meaning through interaction should form the basis for XAI procedures. To understand the social and technological requirements of human-AI interaction, some researchers also discuss how to systematically apply Human-centered design (HCD) principles to the XAI development and design processes (Neerincx M. A. et al., 2019).

Discussion

Many XAI algorithms have been created, but are they all effective? Since XAI spans a wide variety of settings, a single answer is difficult to give. The answer is complex because it depends on how humans take in, interpret, and act on AI-provided explanations. Human-computer interaction (HCI) studies, and human-subject research more generally, are essential for assessing XAI in its context of use, determining its limitations, and generating human-centered solutions. Many studies do report a better understanding of models through XAI as it currently stands; on the other hand, XAI does not always bring about the desired results.

Conclusion

XAI, its many components, and the ways in which they can be applied to explaining deep neural networks are of high significance. Model-agnostic post-hoc explainability algorithms have attracted much of the attention in XAI because of their convenience and broad applicability. The current state of research in XAI assessment shows that the field is still developing, which calls for caution in the creation and selection of XAI methodologies. Recent advances in human-centered assessment aim to improve how XAI is designed and evaluated.

References

Arrieta A. B., Díaz-Rodríguez N., Del Ser J., Bennetot A., Tabik S., Barbado A., García S., Gil-López S., Molina D., Benjamins R., Chatila R., Herrera F. (2020). Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges Toward Responsible AI. Information Fusion, 58, 82–115.
