
Building trust in AI


Artificial intelligence (AI) is transforming lives and industries in unprecedented ways. From healthcare to finance, transportation to entertainment, AI is being used to enhance decision-making abilities and augment capabilities. However, this technology also poses some serious challenges, especially when it comes to building trust in AI.

One of the biggest challenges is that regulation struggles to keep pace with the rapid innovation of AI. In examining how AI can be regulated effectively, the importance of building trust in it must not be underestimated.

There is a natural progression when it comes to the innovation and regulation of technology. In the first phase, new technologies are introduced to the market, and responsible use is learnt through making mistakes. This is the current situation with AI. In the second phase, regulation comes into play, and responsible use is learnt through the interpretation of the law and the case law that arises from it. The third and final phase involves the development of technologies with responsible use built in. For example, smartphones now have tools in their operating systems that help users manage their screen time. Through these built-in best practices, we learn about responsible use; this is a continuous process that unfolds over time.

The rapid pace of AI innovation presents significant challenges to regulation. Regulations that are too specific quickly become outdated as technology advances, while high-level regulations are too vague to be effective. It’s important to strike the right balance between specificity and flexibility when regulating AI. Regulations need to be specific enough to provide guidance to developers, but also flexible enough to adapt to the ever-changing technology landscape.

One of the more concrete ways to build trust in AI is through transparency and explainability. Users need to be able to understand how AI systems are making decisions and how those decisions affect their lives. For example, if an AI-powered medical diagnosis system recommends a particular treatment, patients need to be able to understand how that recommendation was made and why it is the best option. By providing transparency and explainability, trust can be built in AI so that users feel confident in the decisions being made.
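As a rough illustration of what explainability can look like in practice, the sketch below trains a simple linear model on synthetic data and breaks a single prediction down into per-feature contributions. The feature names, data and model are invented assumptions for illustration; real diagnosis systems are far more complex, but the underlying idea of attributing a decision to its inputs is the same.

```python
# Illustrative sketch only: synthetic data and hypothetical feature names,
# not a real medical diagnosis system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical, standardised features
feature_names = ["age", "blood_pressure", "cholesterol"]
X = rng.normal(size=(500, 3))
# Synthetic outcome loosely driven by the first two features
y = (0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Explain one prediction: for a linear model, each feature's contribution
# to the decision score is simply coefficient * feature value.
patient = X[0]
contributions = model.coef_[0] * patient

print("Predicted class:", model.predict(patient.reshape(1, -1))[0])
for name, value, contrib in zip(feature_names, patient, contributions):
    print(f"{name:>15}: value={value:+.2f}, contribution to score={contrib:+.2f}")
```

Even this toy breakdown shows the kind of answer a patient or clinician could be given: which inputs pushed the recommendation in which direction, and by how much.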

Another important factor in building trust in AI is addressing bias. AI systems are only as good as the data they are trained on. If the data is biased, then the AI system will be biased too. This can lead to serious consequences, such as discriminatory decision-making. To address this issue, regulations will need to require developers to check models for bias and ensure that algorithms are transparent and explainable. Doing so will help to ensure that AI systems are as fair and unbiased as possible, and that they make decisions in the best interest of everyone.
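As a minimal sketch of what such a bias check might involve, the example below compares the rate of favourable decisions across a hypothetical protected group and computes a disparate-impact ratio. The data, group labels and 0.8 threshold (the common "four-fifths" rule of thumb) are illustrative assumptions, not a complete fairness audit.

```python
# Illustrative sketch only: synthetic predictions and a hypothetical
# protected attribute; a real bias audit would examine many more metrics.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model outputs (1 = favourable decision) and group membership
predictions = rng.integers(0, 2, size=1000)
group = rng.choice(["group_a", "group_b"], size=1000)

# Selection rate per group: proportion receiving the favourable decision
rates = {g: predictions[group == g].mean() for g in ("group_a", "group_b")}
print("Selection rates:", rates)

# Disparate-impact ratio: lowest rate divided by highest rate.
# A common rule of thumb flags ratios below 0.8 for closer review.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential bias flagged for review")
```

Checks like this are only a starting point, but they show that "check models for bias" can be made concrete and auditable rather than left as an abstract principle.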

Regulation is an essential part of building trust in AI. While some may argue that regulation stifles innovation, the reality is that regulation can drive it. By requiring developers to check their models for bias and ensure that their algorithms are transparent and explainable, new opportunities for innovation are created. For example, companies may develop new techniques for detecting and eliminating bias in AI models, or new algorithms that are more transparent and explainable.

Regulations need to focus on making sure AI remains fair, unbiased and transparent. While the rapid pace of AI innovation presents challenges to regulation, a balance needs to be struck between specificity and flexibility when developing regulations.

Ultimately, the goal is to create a world where AI is a force for good, one that enhances decision-making abilities and augments our capabilities while ensuring that it is safe, ethical and trustworthy.

Frank Buytendijk
Distinguished VP Analyst at Gartner
