How do we build artificial intelligence that we can trust?

Developments in artificial intelligence are well underway, but as more innovations go to market, the technology's trustworthiness becomes an ever more pressing concern

The world is eager to become more artificial intelligence (AI)-driven, with every industry hungry to innovate with technologies once confined to sci-fi movies. AI has made the impossible possible, and the possibilities seem endless.

With such an exciting and limitless technology in our arsenal, innovations are well underway. In particular, AI companies are working hard to bring new developments to market for customers and businesses alike to enjoy. However, there is such a thing as too much fun, and sometimes our AI ambitions have to be brought back down to earth.

Why? Because we can’t trust AI yet. This has long been a pressing subject, often eclipsed by the sheer excitement of AI innovation, but it’s a high priority for everybody in the equation. While Terminator-like events are (hopefully) unlikely, we don’t know for sure that they’ll never happen. It might sound dramatic, but even a slight chance that human lives could be at stake is a good indicator of the weight of the matter.

However, nobody wants to quash or slow down the rate of AI development. In light of this, the next logical step is to establish criteria for building AI so that it won’t go rogue.

Algorithms are the building blocks of AI decision-making. While there are many advantages to algorithm-driven decisions, there are also significant worst-case scenarios. For example, a bank might refuse someone a mortgage based on an algorithm’s output, with no explanation as to why. This is just one example; at scale, that lack of explainability, and the resulting inability to trust the system, could be catastrophic.

So how do we go about building trustworthy AI?

Technology that you can trust

At present, we don’t have the luxury of fully explainable AI. However, AI developers can work towards this by increasing the transparency of their algorithms. In particular, algorithms must be traceable and offer a degree of visibility. In doing so, companies can understand the journey of a decision, as well as redirect it if necessary.
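To make the idea of a traceable decision concrete, here is a minimal sketch in Python. The rules, thresholds, and function name are hypothetical, invented purely for illustration; the point is simply that the function returns not just a verdict but a human-readable record of how it got there, so the "journey of the decision" can be inspected and challenged.

```python
# A hypothetical, traceable loan-decision function. All rules and
# thresholds below are illustrative assumptions, not a real lending policy.

def assess_mortgage(income: float, debt: float, credit_score: int):
    """Return (approved, trace): a decision plus the rules that produced it."""
    trace = []  # the "journey" of the decision, step by step

    debt_ratio = debt / income if income > 0 else float("inf")
    trace.append(f"debt-to-income ratio = {debt_ratio:.2f}")

    if credit_score < 600:
        trace.append("REJECT: credit score below 600")
        return False, trace
    trace.append("credit score check passed")

    if debt_ratio > 0.4:
        trace.append("REJECT: debt-to-income ratio above 0.40")
        return False, trace
    trace.append("debt-to-income check passed")

    trace.append("APPROVE: all checks passed")
    return True, trace


approved, trace = assess_mortgage(income=50_000, debt=15_000, credit_score=720)
for step in trace:
    print(step)
```

With this structure, an applicant who is refused can be told exactly which rule triggered the refusal, rather than being left with an unexplained "no". Real AI models are far harder to trace than a handful of if-statements, but the design goal is the same: keep the decision path visible.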

As mentioned previously, a Terminator-esque cyborg is unlikely to sweep through the streets, but AI doesn’t have human compassion or emotions. As emotive as AI can be (children’s talking toys, chatbots), it is still a machine at the end of the day. However, this paves the way for a change in approach; AI developers should consider the possibility of building machines with human values.

A double-edged sword of an endeavour, this could be the make or break of trustworthy AI. On the one hand, encouraging AI to share our values could quell fears about its morality. On the other, who’s to decide the ethical standards? Worse still, with human input, developers must be extremely careful not to introduce bias.

This is not to say that developers should abandon instilling human values altogether. Rather, they must approach the task with the utmost caution. Furthermore, AI companies should ensure their teams include individuals from a diverse range of backgrounds and training, so that algorithms are designed from the richest diversity of perspectives. In doing so, they stand a better chance of eliminating bias and, in turn, enhancing accuracy.

Developers must go into the task with the mindset of creating AI to empower humans. While AI takes a load off everyone’s plates, it’s not a means to make humans lazy. Instead, it’s to encourage us to work better and smarter.

For more discussion on AI, check out our Ask the Expert with Pete Hirsch at BlackLine.