How can we control AI (instead of AI controlling us)?


Roel van Rijsewijk, Senior Fellow of Deloitte’s Centre for the Edge, shares his vision on the importance of AI for financial institutions and the potential risks AI brings along.

Why is it important for financial institutions to learn more about applying AI in risk?

The financial industry is a front runner when it comes to applying AI and smart algorithms. It is already there, and adoption is accelerating. Financial institutions have also experienced the downside of AI, when markets collapsed because of smart algorithms' unintended wrongdoing. Risk management needs to learn and evolve in how to manage these risks. The second question is how AI will be applied in risk management itself. AI creates new risks, but it can also be the solution for managing many of them. This goes on top of and beyond applying AI to make risk management processes less bureaucratic, more efficient and more effective.

Could you tell us a bit more about the shifting paradigm of risk management?

We live in a transformational time: the third revolution, the beginning of a new age, the information age. We are only at the start, but it is going faster than you think. Nobody knows what the future will look like, not even Google or Apple. In short, the world and the environment that financial institutions operate in are getting more complex and increasingly uncertain. We are increasingly confronted with unknown risks, so-called black swans. And many risks are evolving exponentially and thus becoming unpredictable. The old risk models, based on likelihood and the normal distribution, are becoming obsolete.

A paradigm shift is needed, in which risk management is no longer about calculating uncertainty and preventing predictable loss events. It is more about resilience to unforeseen events. We need to accept uncertainty and deal with it by using technology and advanced analytics to increase transparency: not to predict the future, but to detect events in real time and then respond adequately. Not only to contain the damage to an acceptable level, but also to recover to a higher level.

The new paradigm is that risk powers performance as a driver for innovation and empowerment of people in a learning governance environment.

Do you have a concrete example?

A concrete example is cyber risk management. Cyber risks are notoriously dynamic and unpredictable. You can't plug all the holes in an ever-evolving IT landscape, and cyber criminals are innovative; they could start using AI to their advantage.

Being 100% secure is not only impossible, it is also undesirable. You can't lock information up in a vault or limit the use of technology. You want a free flow of information and as much empowerment of people with technology as possible, which makes you more vulnerable to attacks.

Security has therefore shifted towards resilience: a shift from prevention to detection and response, using advanced analytics and response capabilities. This is cyber risk management that enables innovation instead of obstructing it.

The world is becoming tech-enabled and information-based and, in that sense, many operational risks are becoming cyber risks. Other risks are also starting to behave like cyber risks: they are dynamic, evolving and thus more unpredictable. So in the financial risk domain, too, there are lessons to be learned from cyber risk management.

AI also faces some potential risks, what are these for financial institutions?

There are a couple of risks involved that need to be managed:

  • AI built with the intention of wrongdoing
  • AI unintentionally being at fault
  • AI super-intelligence that becomes incomprehensible to humans

Questions that need to be addressed are:

  • Are there limits to what we want AI to do versus human ethical judgements, consensus and decisions?
  • How can we control AI (instead of AI controlling us)?
  • How can we keep AI and what it does transparent and auditable?
  • Who is responsible and who is accountable for actions and decisions made by AI?

These are fundamental questions that need to be addressed in dialogue with all stakeholders involved. That is why this conference, and many more like it, are so badly needed. I am not sure whether we will be able to answer these questions upfront. As with any innovation, we will have to learn by doing and hopefully keep our mistakes small, fast and cheap. Fear is a bad advisor, and we can't expect the regulator to answer these questions for us, because innovation will lead regulation, not the other way around.

Any technological development is both a miracle and a disaster: when you invent the airplane, you create the airplane crash; when you invent nuclear fission, you get the atomic bomb; and when you develop AI, you also develop new and unknown disasters. We have opened Pandora's box, and we will suffer the consequences. But I do not believe in a doomsday scenario. After all, according to the same Greek myth, what is left in the box after we opened it is hope.

And I am very hopeful that, in the end, this will not be a race of man versus machine in which machines take over. In the end, the AI that will successfully evolve and survive is the AI that makes us better humans. And the more we are confronted with AI, the more we will learn about what makes us human.

To make this journey safe and avoid an existential disaster, AI needs to be open technology developed through open innovation: not proprietary, secretive and owned by a few tech firms. And self-learning AI, especially, needs to be self-explanatory. We can't avoid mistakes, but the more openly and transparently we organize AI, the better we will be able to detect unwanted developments in a timely manner and learn faster.
