What Is Explainable AI and What Is It Used For?

When we talk about problems with artificial intelligence, trust is often the first thing that comes to mind. AI is now used in many fields, including those with a direct impact on human lives, such as healthcare and justice. To trust machine decisions in these fields, we need machines to explain why they make the decisions they do.

In this post, we’re going to show how to do that. We’ll talk about explainable artificial intelligence (XAI), its main methods and principles, and how industry can benefit from it.

What is explainable artificial intelligence?

Explainable AI (XAI) is a set of methods that make it possible for humans to unpack the ‘black box’ of computer decision making.

Explainable AI models can belong to any of the following categories:

  • Those that are inherently explainable. They are designed from the start to be easy to understand (see the sketch after this list).
  • Black-box models. These weren’t designed with XAI principles in mind, so special post-hoc techniques are needed to decipher their decisions.
  • Replicable models. If the research results achieved with a machine learning model can be reproduced, that’s evidence the model works as described. However, it can be hard to verify that a replication was carried out correctly.
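
To make the first category concrete, here is a minimal sketch of an inherently explainable model: a shallow decision tree whose entire decision logic can be printed and read directly. It uses scikit-learn; the loan-style features and data are invented for illustration.

```python
# A minimal sketch of an inherently explainable model: a shallow
# decision tree whose full decision logic can be printed and read.
# The loan-style features and data are made up for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

features = ["income", "debt", "years_employed"]
X = [
    [55_000, 20_000, 4],
    [32_000, 30_000, 1],
    [78_000, 5_000, 10],
    [41_000, 25_000, 2],
]
y = [1, 0, 1, 0]  # 1 = loan approved, 0 = denied

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The whole "reasoning" of the model is visible as human-readable rules.
print(export_text(model, feature_names=features))
```

A black-box model, such as a deep neural network, offers no such printout; that is where the special post-hoc techniques mentioned above come in.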

Explainable AI toolkits are provided by leading tech companies like Amazon and IBM. They help describe the AI model pipeline, its expected outcome, and potential biases that can interfere with the model’s performance. Today, no company that uses artificial intelligence can afford to skip integrating XAI principles into its development process if it wants to manage compliance risks. Companies that fail to account for their models’ decisions risk multimillion-dollar lawsuits, as has happened to Facebook and Google more than once.

Overall, explainable AI helps promote accuracy, fairness, and transparency in your organization. It helps you adopt a responsible approach to AI development.

Explainability vs. interpretability

In machine learning, two terms are often used interchangeably: explainability and interpretability. In fact, they mean two different things, and the distinction is worth making:

  • Explainability describes the extent to which the internal mechanism of an AI system can be explained in terms humans understand. In a perfect world, we would simply ask the machine in plain English, ‘Why did you do that?’ and get a clear list of factors behind the decision.
  • Interpretability describes the extent to which you can predict the output given the input. Interpretability treats the model as a black box: if interpretability is good, you can predict the results the model will give, but not why it gives them.

Both are important for making AI more approachable for humans. The sketch below illustrates the difference.
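
Here is a minimal sketch of the interpretability side: we probe a black-box model purely through its inputs and outputs, without ever looking inside. The model and data are toy stand-ins we invented for illustration.

```python
# A minimal sketch of probing a black-box model for interpretability:
# we never look inside the model, we only vary an input and watch the
# output. The model and data are toy stand-ins for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

black_box = RandomForestRegressor(random_state=0).fit(X, y)

# What-if probe: sweep feature 0 while holding the others fixed.
probe = np.tile(X[:1], (5, 1))
probe[:, 0] = np.linspace(-2, 2, 5)
print(black_box.predict(probe))  # the output is predictable from the input...
# ...but nothing here tells us *why* the model responds this way:
# answering that "why" is the job of explainability.
```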

What are the main principles of explainable AI?


Explainable artificial intelligence methodology is based on four main principles, formulated by the US National Institute of Standards and Technology (NIST):

  • AI systems should provide explanations that are backed by evidence.
  • Explanations should be meaningful in a way that can be understood by users of the software system.
  • Explanations should be accurate in describing the artificial intelligence system’s process.
  • Machine learning and deep learning models should operate within the limits that they were designed for.

Let us talk about each of these in more detail.

1. Explanation

AI is expected to provide an explanation for its outputs, along with evidence that supports the explanation. The type of explanation may vary depending on the system, but at a minimum, one should be there.

NIST outlines several categories of explanation, for example, explanations that benefit users by making them more informed. This applies to patients in healthcare systems, loan applicants in banks, and so on. XAI provides reasons for why a particular diagnosis was given or why a loan was approved or denied.
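
As an illustration of the loan case, here is a minimal sketch of per-decision “reason codes”: with a linear model, each feature’s contribution to the score can be reported directly as evidence for the decision. The features, data, and model are toy examples we made up, not a production credit-scoring setup.

```python
# A minimal sketch of per-decision "reason codes" for a loan model.
# With a standardized logistic regression, coefficient * feature value
# gives each feature's contribution to this applicant's score.
# Features and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

features = ["income", "debt_ratio", "late_payments"]
X = np.array([[60_000, 0.2, 0], [25_000, 0.6, 3],
              [48_000, 0.4, 1], [90_000, 0.1, 0]])
y = np.array([1, 0, 0, 1])  # 1 = loan approved

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

applicant = scaler.transform([[30_000, 0.5, 2]])[0]
contributions = model.coef_[0] * applicant
for name, c in sorted(zip(features, contributions), key=lambda t: t[1]):
    print(f"{name}: {c:+.2f}")  # most negative = strongest reason for denial
```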

Another kind of explanation is meant to help systems gain trust and acceptance in society. For example, Facebook often faces criticism for not disclosing how its feed algorithm works. If it provided explanations that help users understand why a particular post appears in their feed, that could improve Facebook’s public image.

2. Meaningfulness

The principle of meaningfulness is important because the user must understand the explanations given by AI. This principle is hard to achieve because users can vary widely in their technical background and level of understanding. Ideally, the explanation should be accessible to anyone, regardless of their knowledge and skill.

A way to check whether an explanation given by artificial intelligence is truly meaningful is to see whether it gives the user enough information to complete a task.

3. Accuracy

The explanations AI gives for its outputs should match reality. It is the responsibility of software engineers to make sure that AI accurately describes how it arrived at its conclusions. Note that this principle applies to the explanations, not to the results: if explanations rest on erroneous premises, users can’t benefit from them or trust them.
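
One common way to quantify explanation accuracy is fidelity: if a black-box model is explained via an interpretable surrogate, you can measure how often the surrogate agrees with the black box. Here is a minimal sketch of that idea, on synthetic data we invented for illustration.

```python
# A minimal sketch of measuring explanation *fidelity*: how faithfully
# an interpretable surrogate (here, a shallow tree) reproduces the
# black box it is supposed to explain. Data is synthetic for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] ** 2 > 0.5).astype(int)

black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train the surrogate on the black box's *predictions*, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, black_box.predict(X))

fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.2%}")  # low fidelity = untrustworthy explanation
```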

4. Limits

Every system has knowledge limits: things it doesn’t know. AI should be aware of its own knowledge limits so that it doesn’t produce misleading results. To satisfy this principle, the software must identify its knowledge limits and declare them to the end user.
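
A simple way to implement this principle is to let the model abstain when it is outside its comfort zone, for example when its predicted confidence falls below a threshold. A minimal sketch, with a threshold and data we invented for illustration:

```python
# A minimal sketch of declaring knowledge limits: the model abstains
# instead of answering when its confidence falls below a threshold.
# The threshold and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = (X[:, 0] > 0).astype(int)
model = LogisticRegression().fit(X, y)

CONFIDENCE_LIMIT = 0.8  # hypothetical cut-off, tuned per application

def predict_or_abstain(x):
    proba = model.predict_proba([x])[0]
    if proba.max() < CONFIDENCE_LIMIT:
        return "I don't know: this input is outside my knowledge limits"
    return int(proba.argmax())

print(predict_or_abstain([2.0, 0.0]))   # confident region: returns a label
print(predict_or_abstain([0.01, 0.0]))  # near the boundary: likely abstains
```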

What is explainable AI used for?


Industry keeps putting more and more emphasis on explainable AI.

Domains with a lot of responsibility

ML modeling and other artificial intelligence methods are now often used in areas that were previously believed to be the prerogative of humans. AI helps judges decide on sentences and assists surgeons during operations. Here, a mistake can literally cost lives.

AI sometimes performs surprisingly better than any human can. But it also makes mistakes that no human would make, and these mistakes aren’t easy to identify if we rely blindly on our machines. That is why using XAI in areas like healthcare, justice, and automotive helps us prevent terrible consequences.

In any area, disputes are sometimes unavoidable. For example, a courier didn’t deliver your parcel on time, or your client delayed a payment. When that happens, you can contact the people responsible and find out the reasons behind the mistake.

As you might imagine, this is much harder to do with AI. If a robot courier didn’t deliver your parcel, the cause might be a bug in the system, a wrong delivery address, or a malicious attack from your ex, who hacked the system. We don’t know. But we need to know why, so that the same mistake doesn’t happen again.

Elimination of historic biases from AI systems

Even artificial intelligence systems built in accordance with equality and inclusion regulations can contain biases inherited from historical data.

The problem with artificial intelligence is that it needs a lot of data, especially in the case of the deep learning algorithms that most high-impact companies use today. Some of this data may have been around for a long time: archive documents, books, and movies, for example. After analyzing such data, AI might decide that serious professions are not for women, simply because there are many more books in which women are represented as wives and mothers rather than professionals.

Explainable AI helps combat this problem to some extent. If your model starts giving strange outputs, you can track the problem down, and if you have access to information about the provenance (origin) of the data, you can eliminate the problematic records from the dataset.
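
As a sketch of what “tracking the problem down” can look like in practice, here is a minimal group-wise audit: compare the model’s positive-prediction rate across a sensitive attribute such as gender. The column names, data, and disparity threshold are all invented for illustration.

```python
# A minimal sketch of auditing a model for inherited bias: compare the
# rate of positive predictions across groups. Column names, data, and
# the disparity threshold are invented for illustration.
import pandas as pd

predictions = pd.DataFrame({
    "gender":   ["f", "m", "f", "m", "f", "m", "f", "m"],
    "approved": [0,    1,   0,   1,   1,   1,   0,   1],
})

rates = predictions.groupby("gender")["approved"].mean()
print(rates)

disparity = rates.max() - rates.min()
if disparity > 0.2:  # hypothetical tolerance
    print(f"Warning: approval rates differ by {disparity:.0%}; "
          "check the provenance of the training data for this group.")
```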

Explainable artificial intelligence for critical industries

If your company uses AI for automated data-driven decision making, predictive analytics, or customer analysis, robustness and explainability should be among your core values.

To make your AI systems more compliant with regulations, you should make your AI inherently explainable with the toolkits provided by leading software vendors. However, that’s not always possible: sometimes you have to work with software written years ago. In that case, you can order a system audit from experts. We will analyze your business processes and advise you on the best ways to implement the principles of explainable AI.

Contact us to schedule a free consultation about how explainable AI could help your machine learning systems become more reliable and trustworthy.
