The Trust-in-AI Quadrant

If you follow the (AI) news, it is becoming pretty obvious that trust in AI is a big topic of discussion. For the last few years, the discussion was about what AI can do, why you need to learn Deep Learning if you still want to be ‘cool’, and so on. That discussion is still going on. But our society is waking up and starting to ask the really hard questions, like “How can we trust AI?” and “What about ethics in AI?”. A good example of the latter question is the NRC-Digital Catapult initiative “Ethics in AI”.

If you follow a3i, you know that we care a lot about trust in AI. For instance, we have developed the Trust-in-AI Framework, which is one of the key components of Fiducia, our Platform for Responsible AI.

In this blog, we discuss our Trust-in-AI Quadrant. We developed this quadrant to frame the discussion of how Responsible AI and trust in AI are related. First, let’s start with some of the core elements that, in our opinion, make up Responsible AI:

Responsible AI FESS.png

As you can see, there are four (main) elements.

  • Fair: the AI system should not discriminate based on, say, age, gender, culture, or social status.

  • Explainable: we understand how the AI system is built and how it derives its actions/results.

  • Secure: we should not be able to harm the AI system (i.e., the system has to be hacker-proof to an acceptable level).

  • Safe: the AI system should not harm us (e.g., self-driving cars running over pedestrians is not what we want).

There is much more (detail) that can be said about FESS and Responsible AI, but for now, this is our stake in the ground to make Responsible AI concrete.

The main reason to build Responsible AI is that it will lead to an increase in trust. So, trust in AI systems is a result of building Responsible AI systems from the get-go, taking the FESS elements into account. If we take those two dimensions together, we can define the Trust-in-AI Quadrant:

trust-in-ai quadrant.png

On the horizontal axis, we have “Not FESS” and “FESS”, and on the vertical axis, we have “Don’t trust” and “Trust”. We can classify the following four groups:

  • The lower right quadrant (“Dystopia/AI Luddites”) contains the people who will never trust AI. Period. Not even in the case of Responsible AI (FESS).

  • The lower left quadrant (“Concerned”) contains the people who are (starting to be) concerned about trust in AI, in particular because of the lack of FESS in AI systems. This group is growing fast in our society and is starting to become more vocal on the topic.

  • The upper left quadrant (“Can’t be bothered”) contains the people who, well, can’t be bothered with Responsible AI. They trust the AI system because (i) they don’t care; they just want to use it or make money out of it, or (ii) they may care, but don’t know how to build Responsible AI and, as such, ignore it for now.

  • The upper right quadrant (“Responsible AI”) contains the people who trust AI because it is built responsibly.

What we want to achieve is a “movement within the quadrant”:

FESS Move.png

As the quadrant shows, we want (i) more Responsible AI and (ii) to move mindsets. In particular, we want to move the people from “Can’t be bothered” to “Responsible AI” by showing them how Responsible AI can be built. The same applies to the people in “Concerned”: we want to move them from not trusting AI to trusting AI by showing them how Responsible AI can be built.

Fiducia, our Platform for Responsible AI, is going to enable this movement and provide organizations with the means to govern and control Responsible AI.