Our services

With our Trust-in-AI framework, we help your organization design safety, security, and explainability into your AI systems. We can act as an advisor to your Executive Team, your Chief Information Security Officer, or your Data Privacy/Compliance Officer. We can also be part of your AI (development) team, helping them with the safety, security, and explainability challenges they face in their design.

The Trust-in-AI Framework

Our Trust-in-AI framework is a holistic approach that takes the following core components into account:

  • Data, Infrastructure, Model (DIM)
  • Safe/Secure/Explainable
  • Context

a3i DIM.png

Just looking at one of the DIM components is not enough. Many conversations about the transparency and explainability of AI systems are restricted to just one component, most often the model and its algorithms. Although the model is a key part of an AI system, it is just one piece of the total puzzle, and the system as a whole is only as strong as its weakest link.

That's why we take a holistic approach towards DIM. We look at all components individually as well as at how they are integrated.

a3i SSE.png

We believe that to gain trust in an AI system, such a system must be:

  • Safe - the system does not cause harm to the world.
  • Secure - the world cannot cause harm to the system.
  • Explainable - we can explain why the system does what it does.

Although these are key requirements, reality shows that it is not always that black and white. We can have AI systems that may be safe and secure but cannot be explained to the full extent. For instance, we can evaluate the safety and security of the data and infrastructure used by a deep learning model. We can even evaluate the model itself: the weights and parameters used, the layers in the model, the regularization techniques applied, and how the model is coded in software. However, explaining how such a model has derived its optimal set of weights from its training set will, in most cases, still be a mystery to us.

On the other hand, a relatively simple model (e.g., logistic regression) can be fully explained, but may at the same time have an insecure data implementation, with possible non-compliance with privacy regulations.
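To illustrate why a model like logistic regression is considered fully explainable, here is a minimal sketch. The feature names and coefficient values are hypothetical (hand-set for illustration, not learned from real data), but the mechanism is the point: every feature's contribution to the score is simply its weight times its value.

```python
import math

# Hypothetical, hand-set coefficients for an imaginary credit-scoring model.
weights = {"income": 0.8, "debt": -1.2}
bias = -0.5

def predict(features):
    # Logistic regression: probability = sigmoid(bias + w . x)
    z = bias + sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def explain(features):
    # Each feature's contribution to the score is weight * value.
    # This direct, additive decomposition is why logistic regression
    # is regarded as fully explainable.
    return {k: weights[k] * v for k, v in features.items()}

applicant = {"income": 1.0, "debt": 0.5}
p = predict(applicant)          # probability of a positive outcome
contrib = explain(applicant)    # per-feature contribution, e.g. debt lowers the score
```

Contrast this with a deep network, where the prediction is also computable, but no comparably simple per-feature decomposition falls out of the weights themselves.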

a3i Context.png

Our holistic approach is not complete without incorporating the context within which an AI system is used. The context comprises any factors that (potentially) influence and determine (to a large degree) how you handle the S/S/E requirements around the DIM components. Examples of context are your business environment, privacy laws, legislation, ethics, industry regulations, and technical environment, to name a few.

For instance, regulations (regulatory context) might require that you can explain how your AI system derived its results or conclusions (e.g., GDPR's "right to explanation" clause). If you use a model that is more of a black box and, as such, difficult (if not impossible) to explain, you might be non-compliant. An example of technical context: when you use a cloud platform to store and process your data sets, you are (partially) responsible for the security of that data.