6 AI governance principles to help enterprises cope with risk in a fast-moving world

As enterprises engage more with AI-powered solutions, new risks will emerge, threatening the trust factor. Here are six principles organisations need to consider to create a robust AI governance framework

Updated: Apr 7, 2023 06:40:57 PM UTC
Image: Shutterstock

Due to the rapid democratisation of artificial intelligence (AI) and an unprecedented pace of adoption, enterprises may not be able to tackle the unfamiliar risks that arise and the compliance and regulatory pressures that follow. Traditional approaches to IT governance, which have not kept pace with these innovations, fall short in the AI context.

AI implementations are often highly decentralised and bespoke across the enterprise, making it difficult to embed risk management mechanisms. AI is often built into third-party software, hardware, and services deployed to specific business units, potentially exposing the organisation to undiscovered legal, reputational, data privacy, and operational risks.

An AI governance framework should rest on the following principles, which need to be incorporated throughout the lifecycle and not just during model validation.

Bias mitigation

Humans often have cognitive biases and negative associations, which frequently and unintentionally creep into AI systems. In an enterprise, unfair biases may skew recruitment or produce different levels of service for certain customer demographics, such as loan approval rates that vary with gender or ethnicity. Proper pre- and post-processing techniques, such as corrective reweighting of training data and reject option-based classification, help reduce bias.
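
As an illustration of the corrective-weight idea, the sketch below applies the classic reweighing pre-processing step (after Kamiran and Calders): each combination of group and outcome is weighted so that the sensitive attribute and the label look statistically independent during training. The toy data and column names are assumptions for illustration, not a production recipe.

```python
# Minimal reweighing sketch: up-weight under-represented (group, label)
# combinations and down-weight over-represented ones. Toy data; the
# column names "group" and "approved" are assumptions.
import pandas as pd

df = pd.DataFrame({
    "group":    ["a", "a", "a", "b", "b", "b", "b", "b"],
    "approved": [1, 1, 0, 1, 0, 0, 0, 0],
})

p_group = df["group"].value_counts(normalize=True)
p_label = df["approved"].value_counts(normalize=True)
p_joint = df.groupby(["group", "approved"]).size() / len(df)

# weight(g, y) = P(g) * P(y) / P(g, y): equal to 1 only when the group
# and the outcome are already independent.
df["weight"] = df.apply(
    lambda r: p_group[r["group"]] * p_label[r["approved"]]
    / p_joint[(r["group"], r["approved"])],
    axis=1,
)
print(df)
```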

Adversarial debiasing is an in-processing technique in which a second, adversarial model is trained to predict the sensitive attribute from the first model's predictions; the first model is then penalised whenever the adversary succeeds, squeezing the bias-carrying signal out of its outputs. Other checks and balances exist to minimise bias, such as what-if analysis tools that provide interactive visual analysis for stress-testing models and finding their blind spots and constraints.
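
A minimal PyTorch sketch of this adversarial setup follows; the network sizes, the trade-off weight lam, and the synthetic data are all assumptions made for illustration.

```python
# Adversarial debiasing sketch: the predictor learns its task while an
# adversary tries to recover the sensitive attribute from its output.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: 1,000 records, 8 features, a binary label y, and a binary
# sensitive attribute s (e.g. a protected demographic group).
X = torch.randn(1000, 8)
y = torch.randint(0, 2, (1000, 1)).float()
s = torch.randint(0, 2, (1000, 1)).float()

predictor = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))

opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 0.5  # accuracy-vs-fairness trade-off (assumed value)

for epoch in range(100):
    # 1. Train the adversary to predict the sensitive attribute from
    #    the predictor's (detached) output.
    opt_a.zero_grad()
    adv_loss = bce(adversary(predictor(X).detach()), s)
    adv_loss.backward()
    opt_a.step()

    # 2. Train the predictor on its task while *maximising* the
    #    adversary's loss, removing the sensitive signal it exploits.
    opt_p.zero_grad()
    logits = predictor(X)
    loss = bce(logits, y) - lam * bce(adversary(logits), s)
    loss.backward()
    opt_p.step()
```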

Explainability

In pursuing the highest possible accuracy, developers often neglect to make their models more transparent. At the outset, developers approached AI as a "black box", interested only in the inputs and outputs, with little effort spent on making the inner workings visible. Regulations like the General Data Protection Regulation (GDPR) mandate a right to explanation to tackle the accountability challenges created by automated decision-making systems. To resolve disputes arising from AI-powered solutions, mediators must understand the root cause of a problem and attribute it to the right entity. Several techniques bolster explainability in AI systems. In proxy modelling, a simpler model, such as a decision tree, is used to approximate and comprehend a more complex AI model. Another approach, called "interpretability by design", builds the overall network from smaller, more explainable chunks.
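
A minimal scikit-learn sketch of proxy (surrogate) modelling is shown below: a shallow decision tree is fitted to the predictions of an opaque model, and its fidelity to that model is measured. The dataset and hyperparameters are illustrative assumptions.

```python
# Proxy modelling sketch: explain an opaque model with a shallow tree
# trained on the opaque model's own predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# The opaque model whose behaviour we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Fit the surrogate on the black box's predictions, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate))
```

The printed tree rules give reviewers a human-readable approximation of the black box, valid only to the extent the fidelity score is high.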

Reproducibility

Reproducibility in a machine learning workflow means that every phase of data processing, model training, and model deployment should produce identical results given the same input. This consistency is needed to build stakeholder trust in the ML components of a project. Proper model documentation improves reproducibility. To achieve the best possible outcomes, enterprises should adopt MLOps best practices centred on three focus areas: data, model, and the hardware and software environment.
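
As a minimal sketch of what this looks like in practice, the snippet below pins the common sources of randomness and writes an environment manifest next to the model artefact; the manifest file name and the set of recorded fields are assumptions.

```python
# Reproducibility hygiene sketch: fix random seeds and record the
# software environment so a training run can be reconstructed later.
import json
import platform
import random

import numpy as np

SEED = 42  # assumed project-wide seed

def set_seeds(seed: int = SEED) -> None:
    # Seed every random number generator the pipeline relies on.
    random.seed(seed)
    np.random.seed(seed)

def environment_manifest() -> dict:
    # Captured alongside the model artefact for later reconstruction.
    return {
        "python": platform.python_version(),
        "platform": platform.platform(),
        "numpy": np.__version__,
        "seed": SEED,
    }

set_seeds()
with open("run_manifest.json", "w") as f:
    json.dump(environment_manifest(), f, indent=2)
```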

Security

A multitude of threats exists to the safety and security of AI systems. A key issue is that it is hard to predict all possible downstream effects ahead of time, particularly in complex systems with multiple layers of automation. Attackers can target a variety of vulnerabilities, one of them being tampering with the data used to power AI systems, which can lead to disastrous outcomes. It is challenging to develop systems with the necessary security protocols that also remain flexible enough to adapt to changing trends and inputs. As with reproducibility, checks and balances around data and model management alleviate many of these concerns.
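
One simple check against silent data tampering, sketched below under assumed file names, is to record a cryptographic digest of the training data when it is approved and verify it before every training run.

```python
# Tamper-detection sketch: hash the dataset at approval time and
# verify the hash before training, so data poisoning is detectable.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    # Stream the file so large datasets need not fit in memory.
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_dataset(path: Path, expected_digest: str) -> None:
    actual = sha256_of(path)
    if actual != expected_digest:
        raise RuntimeError(
            f"{path}: digest mismatch (expected {expected_digest}, got "
            f"{actual}); refusing to train on possibly tampered data."
        )

# Usage: the expected digest is recorded when the dataset is approved,
# e.g. verify_dataset(Path("train.csv"), "<digest recorded at approval>")
```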

Ethics

A proper ethical AI framework is not just about adhering to legal and regulatory requirements; it rests on fundamental values grounded in individual rights, fairness, and privacy. Ethical AI guidelines and their enforcement help screen out unfair and illegitimate uses of AI. Enterprises should have clearly stated policies, easily accessible at all levels of the organisation, and structured review processes to ensure compliance. Frequent, targeted audits and appropriate internal feedback and contesting mechanisms can flag ethical concerns ahead of time. Here too, adversarial testing can expose the "edge cases" where a model behaves inappropriately and stress-test the system and its allied downstream processes.
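
One simple form of such adversarial testing, sketched below with a toy model and an assumed perturbation size, nudges each input feature and flags the features whose small changes flip many decisions; those are candidates for manual ethics review.

```python
# Perturbation stress-test sketch: find features where tiny input
# changes flip the model's decision, surfacing fragile "edge cases".
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = LogisticRegression().fit(X, y)

eps = 0.1  # perturbation size (assumed)
base = model.predict(X)
flips = []
for j in range(X.shape[1]):
    for delta in (+eps, -eps):
        Xp = X.copy()
        Xp[:, j] += delta
        flips.append((j, delta, (model.predict(Xp) != base).mean()))

# Report the most fragile feature perturbations.
for j, delta, rate in sorted(flips, key=lambda t: -t[2])[:3]:
    print(f"feature {j} shifted by {delta:+.1f}: {rate:.1%} of decisions flipped")
```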

Human-AI collaboration

"Human in the loop" is a mechanism that embeds human actors at critical points in the decision-making process of a complex automated workflow, identified after a thorough risk assessment. The major challenge is to ascertain how and when human intervention should come in. We should consider variables like the sensitivity of the decisions, potential outcomes, the greater purpose of the process, and risk factors in each step. There should not be a loss of information during the transition from machine to human and from human to machine in the workflows. One key aspect will be creating avenues for communication and flexibility to tackle dynamic requirements in these kinds of systems.

As enterprises engage more with AI-powered solutions, new risks will emerge, threatening the trust factor. Business leaders must ensure every AI deployment is governed in line with the framework suggested above, so that their AI runs both ethically and profitably.

The writer is the executive vice president—global head of AI, automation and ECS, Infosys.

The thoughts and opinions shared here are of the author.
