Perrin

ARTIFICIAL INTELLIGENCE IN THE BOARDROOM

The future of governance, risk management and compliance could be disrupted significantly by the use of AI. It's here now, but do we truly appreciate the levels of disruption and the potential unintended consequences?



Artificial Intelligence (AI) is neither artificial nor intelligent. Not yet, anyway.

It’s not artificial because all of it is currently man-made, and it’s not intelligent because the machine or computer is only performing a task: a task designed by humans, with human-imposed boundaries, axioms and rules.

We, as humans, plug in the objective and off the machine goes. It’s performing a task with predefined and finite outcomes.

However, we are not going to remain in this comfortable place for very much longer.

While preparing to present a forthcoming webinar for The Governance Institute (ICSA) on ‘AI and its use in Governance’, I listened to a recent podcast with Stuart Russell.

Russell is a professor of computer science at the University of California, Berkeley, and recently published his book Human Compatible. In it he answers the question,

“What would it mean for humanity if we were able to create a truly intelligent artificial being?” by making the significant statement…

“It would be the biggest event in human history and possibly the last event.”

He argues that superintelligent AI is coming, and we have to be ready.

We are almost all interfacing with AI on a daily basis.

Google search

Data analytics

Satellite navigation

Social media feeds, and

Facial recognition…

…are all currently using AI to a significant extent.

As businesses increasingly adopt AI and machine learning as tools to support their objectives, we as individuals, whether as directors, shareholders or employees, need to consider their implications and consequences carefully.

AI is heading to a boardroom near you, not just in the analysis of company financial performance, but in the assessment of marketing effectiveness, operational efficiency and, dare I say it, director performance. AI has the potential to become a non-executive or executive director in its own right, and could soon support decision-making in the boardroom directly by participating in strategic analysis, governance and risk management.

All AI is data-centric. Data is, of course, not information. For data to become information, we need to make it meaningful and ensure its validity and authenticity. If we want to make better decisions and improve our governance and risk management, we need to be sure of our data and of the information that emerges from those data.
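As a rough illustration of what “being sure of our data” can mean in practice, here is a minimal sketch of the kind of validity checks a board reporting pipeline might run before any analysis is trusted. The field names, records and rules are hypothetical, chosen only to show the idea of surfacing problems rather than silently analysing flawed data.

```python
# Minimal sketch of data-validity checks before analysis.
# Field names, figures and rules below are illustrative assumptions,
# not a prescribed standard.

from dataclasses import dataclass

@dataclass
class Record:
    entity: str        # which business unit the figure comes from
    revenue: float     # reported revenue for the period
    period: str        # e.g. "2023-Q4"

def validate(records):
    """Return (valid_records, issues) so problems are surfaced, not silently dropped."""
    valid, issues = [], []
    seen = set()
    for r in records:
        if not r.entity:
            issues.append(f"missing entity for period {r.period}")
        elif r.revenue < 0:
            issues.append(f"negative revenue for {r.entity} in {r.period}")
        elif (r.entity, r.period) in seen:
            issues.append(f"duplicate record for {r.entity} in {r.period}")
        else:
            seen.add((r.entity, r.period))
            valid.append(r)
    return valid, issues

records = [
    Record("UK", 1_200_000.0, "2023-Q4"),
    Record("UK", 1_200_000.0, "2023-Q4"),   # duplicate entry
    Record("", 900_000.0, "2023-Q4"),       # missing entity
]
valid, issues = validate(records)
print(f"{len(valid)} valid record(s); issues: {issues}")
```

Only once checks of this kind pass does the data start to become information a board can rely on.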

There are many benefits from AI; however, some key ethical and moral questions come into play once we move away from simple data analytics. One of these is the 'unintended consequence'.

The unintended consequence raises a fundamental question, perhaps best posed by the mathematician Norbert Wiener, who said,

“We had better be quite sure that the purpose put into the machine is the purpose which we really desire”.

From a governance perspective, if we put aside the validity issue for a moment, the key shift will come when AI takes on a weightier role in decision-making. Not only does this raise data protection issues related to automated data processing, it also draws attention to the need to ensure that we have asked the machine the right question, or at least asked it to solve the right problem.

As an example, if we ask for an analysis of the best strategy to meet the company's financial objectives, we need to be sure that any decision made following that analysis takes into account the wider responsibilities of the business, such as regulatory and social responsibility, and, most importantly, that no unintended consequences arise from it.
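To make the point concrete, here is a minimal sketch of how the same optimisation question can give very different answers depending on whether those wider responsibilities are encoded as constraints or left out. The candidate strategies, figures and flags are entirely hypothetical.

```python
# Minimal sketch: the objective we give the machine shapes the answer we get.
# Strategy names, profits and flags are illustrative assumptions only.

strategies = [
    # (name, projected profit in £m, breaches regulation?, harms social licence?)
    ("aggressive cost cutting", 12.0, False, True),
    ("mis-sell to vulnerable customers", 15.0, True, True),
    ("sustainable growth plan", 9.0, False, False),
]

def best_unconstrained(options):
    """Optimise profit alone: the 'purpose put into the machine' is too narrow."""
    return max(options, key=lambda s: s[1])

def best_constrained(options):
    """Optimise profit only among strategies that respect wider responsibilities."""
    acceptable = [s for s in options if not s[2] and not s[3]]
    return max(acceptable, key=lambda s: s[1]) if acceptable else None

print("Profit only:      ", best_unconstrained(strategies)[0])
print("With constraints: ", best_constrained(strategies)[0])
```

The unconstrained version happily recommends the most profitable but most damaging strategy; the constrained one does not. The difference lies entirely in how the purpose was specified.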

Take a more global issue, such as climate change. AI will soon have the capability to make a major contribution to the solution. However, if we have not carefully considered the parameters of its decision-making powers, and the calculated solution is to change the oxygen content of the atmosphere to levels that will not support large life forms, we will quickly find ourselves with a problem.

AI at the moment is based on specificity. It has a very specific, narrowly defined problem to solve.


In the near future, AI will be based on generality. This means that it will have the analytical capability to take on much larger problems, which will require a loosening of restrictions.

As organisations and as boards, we are right to be enthusiastic about the contribution that AI can make in supporting us. However, we also need to balance that enthusiasm with caution, through a deliberate, detailed understanding and robust challenge of the many implications and unintended consequences.

AI is here, its use is escalating exponentially, and we need to be ready for the future, now.
