Watching Google Build a Risk and Governance Framework for Artificial Intelligence

How to control AI is arguably the biggest question in tech right now. So it is interesting that Google has taken two public steps that acknowledge the risks of AI and the need for AI governance. The first step was made in the Annual Report of Google’s parent Alphabet. For the first time, AI was included in the risk section and, also for the first time, machine learning was mentioned as part of Google’s advertising tools, which generate 85% of Google’s revenue. So here we have AI as a significant risk factor. The second step was the publication of a White Paper on AI Governance – the acknowledgement that something has to be done to address the risks of AI. If you have spent years at the intersection of Artificial Intelligence and scaling GRC (Governance, Risk & Compliance), you just have to watch as Google elevates AI to the top of its risk hierarchy and pushes for quite specific forms of AI governance.

For an Annual Report it is not surprising that the stated risks are bland and general. But if you build AI systems, the White Paper has more nutritional content. The title “Perspectives on Issues in AI Governance” is intentionally modest and, frankly, I was expecting the fluff that is common in high-level discussions of AI (“AI can equally be used for good and bad …”). It was a positive surprise that the White Paper picks five concrete governance areas and gets quite specific. The five areas are (1) explainability, (2) fairness, (3) safety, (4) the role, if any, of humans in AI decisions, and (5) liability. These topics are classics that every AI practitioner has discussed at length, but the White Paper gives us a clear direction.

Let’s take explainability as an example. Deep-learning models, the most successful form of AI today, are nearly impossible to explain efficiently. But as AI-based decisions carry increasingly serious consequences, explaining why they were made becomes critical. GDPR (the General Data Protection Regulation), which has been in force in the EU since last year, establishes what is widely read as a right to an explanation of automated decisions. An extreme (and unlikely) interpretation would make AI models illegal if they cannot provide reasonable explanations. The White Paper steers away from this heavy-handed approach and nudges governance towards alternative ways of providing accountability. It suggests options for individuals impacted by AI and duties for organizations using AI. Individuals could flag decisions for review or contest an outcome. Organizations would have to commit to internal audits or to prescribed testing standards. This is the general direction of the White Paper: add accountability but stay away from hard regulation.
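The White Paper does not prescribe any particular explanation technique, but to make the point concrete, here is a minimal sketch of one common post-hoc approach: a per-decision, perturbation-based attribution for an otherwise opaque model. The synthetic data, the gradient-boosted classifier standing in for a “black box”, and the mask-with-the-mean trick are my own illustrative assumptions, not anything Google proposes.

```python
# Minimal sketch: explain a single automated decision by measuring how much
# the model's score moves when each feature is replaced by its average value.
# Everything here (data, model, scoring) is an illustrative stand-in.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# An opaque "black box" decision model trained on synthetic data.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

def explain_decision(model, x, background, feature_names):
    """Attribute one prediction to its input features by masking each feature
    with the background mean and recording the change in the model's score."""
    base_score = model.predict_proba(x.reshape(1, -1))[0, 1]
    attributions = {}
    for i, name in enumerate(feature_names):
        x_masked = x.copy()
        x_masked[i] = background[:, i].mean()  # "remove" feature i
        masked_score = model.predict_proba(x_masked.reshape(1, -1))[0, 1]
        attributions[name] = base_score - masked_score
    return base_score, attributions

names = [f"feature_{i}" for i in range(X.shape[1])]
score, attributions = explain_decision(model, X[0], X, names)
print(f"decision score: {score:.3f}")
for name, delta in sorted(attributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {delta:+.3f}")
```

An attribution report of this kind is cheap enough to attach to every decision, which is exactly the sort of artifact an individual could point to when flagging or contesting an outcome, and an internal audit could sample.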

Commentators on the White Paper typically followed the pattern “A good start, but we need more of X”, where X stands for whatever the commentator is currently working on. Let me follow the same pattern. Yes, it is a good start. In my case X is software for risk management and governance at scale. So what I want to see is (1) quantification and prioritization of AI risks, (2) a definition of risk appetite, i.e. how much risk is acceptable given the benefit of a specific use of AI, (3) mitigation strategies for each concrete risk, and (4) governance rules that ensure the framework is deployed consistently. I bet that in next year’s Annual Report, Google will add another step or two in this direction. I’d say it will defend the use of AI against privacy concerns.
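To make those four items less abstract, here is a minimal sketch of the kind of risk register I have in mind. The risk entries, the 1–5 scales, the appetite rule, and the mitigations are hypothetical placeholders of my own, not anything taken from Google’s documents or the White Paper.

```python
# Minimal sketch of items (1)-(4): quantify and prioritize AI risks, define a
# risk appetite relative to benefit, attach mitigations, and escalate anything
# that falls outside the appetite. All entries and thresholds are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AIRisk:
    name: str
    likelihood: int                 # 1 (rare) .. 5 (almost certain)
    impact: int                     # 1 (negligible) .. 5 (severe)
    benefit: int                    # 1 (marginal) .. 5 (critical to the business)
    mitigations: list = field(default_factory=list)

    @property
    def score(self) -> int:
        # (1) quantification: a simple likelihood x impact score
        return self.likelihood * self.impact

    def within_appetite(self, appetite_per_benefit: int = 3) -> bool:
        # (2) risk appetite: tolerate more residual risk where the benefit is higher
        return self.score <= appetite_per_benefit * self.benefit

register = [
    AIRisk("Unexplainable ad-ranking decisions", likelihood=4, impact=3, benefit=5,
           mitigations=["per-decision attribution reports", "internal audit"]),
    AIRisk("Biased training data in a hiring tool", likelihood=3, impact=5, benefit=2,
           mitigations=["fairness testing against prescribed standards"]),
]

# (3) mitigations are tracked per risk; (4) governance: prioritize by score and
# escalate consistently whenever a risk sits outside the defined appetite.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    status = "ok" if risk.within_appetite() else "ESCALATE"
    print(f"{risk.score:>2}  {status:<9} {risk.name} -> {', '.join(risk.mitigations)}")
```

Even a toy register like this moves the discussion from “AI is risky” to “which risk, how big, what do we do about it, and who signs off”, which is what governance at scale ultimately comes down to.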
