Saurabh Kumar (Deloitte) on mitigating AI risks and embedding ethics
With artificial intelligence (AI) increasingly being leveraged to gain a competitive advantage, leaders should at the same time beware of the risks involved. A discussion with Saurabh Kumar, a partner at Deloitte in India, on some of the key risks that should be considered and what can be done to curb these threats.
What are the biggest AI-related risks companies are facing today?
The most prominent risk related to AI is wrong decision-making. Unattended AI models and automated decisions can negatively impact the quality of the products and services an organization delivers, and can thus be detrimental to the organization’s reputation and public perception.
Some areas where AI-driven decisions could go wrong and negatively impact organizations and society as a whole include credit scoring, fraud detection, loan eligibility, insurance qualification, object identification, and access to government services. Examples of racial and gender bias in AI, also arising from poor decision-making, have been documented in several case studies over the last few years.
What are some of the ways companies can mitigate these risks?
It is key to embed human factors such as ethical behavior and empathy in artificial intelligence models. To do so, the following risk-mitigation activities should be considered during and after implementation:
Simulation testing
Put the AI models through simulation testing and have a human closely monitor how the model reacts in real-world situations. Provide feedback whenever the model makes, or is about to make, a mistake. Combine the training data with the human feedback data to identify situations where the model needs more learning data.
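As an illustration, the sketch below shows how such a feedback loop might be structured. The model, scenario, and reviewer interfaces are hypothetical stand-ins, not a prescribed implementation.

```python
from dataclasses import dataclass

@dataclass
class FeedbackRecord:
    """One human judgment on a model decision in a simulated scenario."""
    scenario_id: str
    model_decision: str
    human_verdict: str  # e.g. "correct", "incorrect", "near-miss"
    reviewer_note: str = ""

def run_simulation(model, scenarios, reviewer):
    """Replay scenarios through the model and collect human feedback.

    `model.decide()` and `reviewer.judge()` are assumed interfaces:
    the first returns the model's decision, the second a human verdict.
    """
    feedback = []
    for scenario in scenarios:
        decision = model.decide(scenario.inputs)
        verdict, note = reviewer.judge(scenario, decision)
        feedback.append(FeedbackRecord(scenario.id, decision, verdict, note))

    # Scenarios judged incorrect (or near-misses) point to situations
    # where the model needs more learning data.
    return [r for r in feedback if r.human_verdict != "correct"]
```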
Manual annotations
Manually label correct and incorrect decisions while running the model in real-life situations. The objective should be to compile a list of situations and, for each situation, multiple labels indicating whether the model’s actions were acceptable or unacceptable.
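One minimal way to structure and persist such labels is sketched below; the field names and CSV layout are assumptions for illustration.

```python
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class Annotation:
    """A single human label on a model action in a real-life situation."""
    situation_id: str  # identifies the situation the model faced
    model_action: str  # what the model decided or did
    label: str         # "acceptable" or "unacceptable"
    annotator_id: str  # lets multiple annotators cover each situation

def save_annotations(annotations, path):
    """Write labels to CSV so they can be joined back to training data."""
    columns = [f.name for f in fields(Annotation)]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=columns)
        writer.writeheader()
        for annotation in annotations:
            writer.writerow(asdict(annotation))
```

Keeping the annotator identity on each record also makes it possible to check agreement between annotators before the labels are trusted.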
Machine-human synergy
It is difficult to achieve 100% accuracy in AI-based automated decision-making. Therefore, identify and prioritize processes where the impact of a decision is most critical and where errors could be extremely dangerous. Consider human interventions to review AI decisions for such processes; an obvious example is self-driving cars, where pre-trained and self-learning AI models make driving decisions but the decisions are monitored (and can be overridden) by a human.
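One common way to implement this kind of intervention is to route high-impact or low-confidence decisions to a human reviewer. In the sketch below, the model and reviewer interfaces and the confidence threshold are illustrative assumptions.

```python
def decide_with_oversight(model, case, reviewer, critical, confidence_floor=0.9):
    """Route a decision to a human when the stakes or uncertainty are high.

    `model.predict_with_confidence()` and `reviewer.review()` are
    hypothetical interfaces; `critical` flags processes where errors
    would be most dangerous; `confidence_floor` is an assumed threshold.
    """
    decision, confidence = model.predict_with_confidence(case)

    if critical or confidence < confidence_floor:
        # The human reviews, and may override, the automated decision.
        return reviewer.review(case, proposed=decision)
    return decision
```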
Workforce training
Ensure workforce training and awareness are included in risk-mitigation efforts. Involved personnel should receive comprehensive communications about where AI is being used, and about what steps the organization is taking to ensure fair and accurate decisions and to protect customer data. Additionally, business sponsors, risk teams, and analytics employees should receive targeted training on their role in identifying and minimizing risks.
Further reading: An approach to assess and implement artificial intelligence.
What are the key strategic elements essential to establishing a 360-degree ethical AI risk framework?
Risk management frameworks allow risk-based decision-making and provide a streamlined process for evaluating opportunities for the organization. A risk management framework is largely aimed at enabling efficient financial reporting and regulatory compliance while preventing reputational risks and related consequences.
In order to supplement existing risk frameworks with AI-related risks, Deloitte has developed a six-part framework as an effective first step in diagnosing the ethical health of AI while maintaining customer privacy and abiding by relevant policy.
The key elements of the framework are:
- Fair / Impartial – AI applications should include internal and external checks to help ensure equitable application across all participants (a simple check of this kind is sketched after this list);
- Robust / Reliable – AI systems should have the ability to learn from humans and other systems and produce consistent and reliable outputs;
- Privacy – Consumer privacy should be respected, and customer data should not be used beyond its intended and stated use; consumers should be able to opt in / out of sharing their data;
- Safe / Secure – AI systems should be protected from risks (including Cyber) that may cause physical and/or digital harm;
- Responsible / Accountable – Policies should be in place to determine who is held responsible for the output of AI system decisions;
- Transparent / Explainable – All participants should be able to understand how their data is being used and how AI systems make decisions; algorithms, attributes, and correlations should be open to inspection.
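As a concrete example of an internal check for the Fair / Impartial element, the sketch below computes a demographic-parity gap, i.e. the largest difference in favorable-outcome rates between groups. The data layout and the suggested tolerance are assumptions for illustration, not part of the Deloitte framework.

```python
from collections import defaultdict

def approval_rates(decisions):
    """Share of favorable outcomes per group.

    `decisions` is assumed to be an iterable of (group, approved) pairs.
    """
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {group: approvals[group] / totals[group] for group in totals}

def parity_gap(decisions):
    """Largest difference in approval rates between any two groups.

    A gap above an agreed tolerance (say 0.05) would trigger a review.
    """
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Toy data: (group, approved) pairs.
sample = [("A", True), ("A", False), ("B", True), ("B", True)]
print(parity_gap(sample))  # 0.5: group B is approved twice as often
```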
What are the potential blind spots leaders should be aware of when dealing with AI-related risks?
Model performance in the real world is never identical to performance on a training set. The environment changes constantly, from shifts in the customer base and offerings to customers changing their behavior in response to algorithms. As a result, AI algorithms can produce unintended results, which can also arise from underlying data that carries unconscious biases or reflects prevailing unfair demographic distinctions.
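One common way to catch such environment shifts early is to compare the distribution of each input feature in production against the training set, for instance with a two-sample Kolmogorov-Smirnov test. The significance threshold below is an illustrative assumption, to be tuned per feature.

```python
import random
from scipy.stats import ks_2samp

def detect_drift(train_values, live_values, alpha=0.01):
    """Flag a feature whose live distribution has drifted from training.

    Uses a two-sample Kolmogorov-Smirnov test; `alpha` is an assumed
    significance threshold. Returns True when drift is likely.
    """
    _, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

# Illustrative usage with synthetic numbers whose mean has shifted.
train = [random.gauss(0.0, 1.0) for _ in range(1000)]
live = [random.gauss(0.5, 1.0) for _ in range(1000)]
print(detect_drift(train, live))  # very likely True
```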
Unintended results can also arise due to organizational shortcomings such as inadequate governance over AI applications, insufficient data protection mechanisms, lack of experienced AI talent, and lack of training for responsible parties.
A few examples of where blind spots could occur, viewed through the people, processes, and technology framework:
People – A lack of diversity among annotators is a blind spot that can result in skewed data when making manual annotations. Another people-related blind spot is the possibility that the humans involved in machine-human synergy are unable to act upon an incorrect automated decision.
Processes – Applying only standard software development and change management policies to AI can leave blind spots, such as gaps in model testing, non-comprehensive selection criteria for third-party suppliers, and delayed feedback mechanisms.
Technology – Blind spots linked to technology include the unavailability of (AI-automated) decisions due to factors like downtime, cyber vulnerabilities, and issues with the scalability of AI systems.
What should companies do to speed up their risk management efforts?
It is imperative for boards and senior management to develop a meaningful understanding of the technology, including its existing and potential uses within their organizations, and to take a firm grip on the implications of AI from a risk perspective. Companies do not require completely new processes for dealing with AI, but they will need to enhance existing ones to take AI into account and fill the necessary gaps.
Companies will need to review and update their risk practices to manage risks through the various stages in the risk management framework lifecycle (identify-assess-control-monitor). The continuously evolving nature of AI solutions will require some of these activities to happen at shorter and more frequent intervals.
Finally, existing risk appetite statements will also need to be reviewed, and a number of new components, such as a fairness policy, may need to be developed to inform the various phases of the risk management framework.
About Saurabh Kumar
Saurabh Kumar is a Partner with Deloitte India in the Consulting business. He currently leads the Analytics and Cognitive offering.