Artificial Intelligence Regulation: Let’s not regulate mathematics!

On Wednesday, ahead of today’s White House Frontiers Conference, the White House Office of Science and Technology Policy (OSTP) released its report, Preparing for the Future of Artificial Intelligence. The report is optimistic, comprehensive and well-balanced. In summary: full speed ahead. But let’s be smart when it comes to Artificial Intelligence regulation.

The premise is that we are going toward an AI-based future, mostly for the common good. The progress of AI needs to be encouraged through investment, training and education. When AI is incorporated into existing applications, regulators should consider risks as well as benefits before intervening, and Artificial Intelligence regulation should not be used to arbitrarily burden or slow down the development of AI. That said, safety and ethics should be primary concerns as we move AI systems from the lab into the much more unpredictable real world. All sensible stuff.  

The difficulty of Artificial Intelligence regulation

Unsurprisingly, though, all is not completely straightforward. The question of Artificial Intelligence regulation poses considerable challenges. The OSTP report discusses the issues of fairness and transparency, and raises two distinct concerns:

  • The need to prevent automated systems from making decisions that discriminate against certain groups or individuals.
  • The need for transparency in AI systems, in the form of an explanation for any decision.

The exact quote:

Use of AI to make consequential decisions about people, often replacing decisions made by human-driven bureaucratic processes, leads to concerns about how to ensure justice, fairness, and accountability …

Transparency concerns focus not only on the data and algorithms involved, but also on the potential to have some form of explanation for any AI-based determination. Yet AI experts have cautioned that there are inherent challenges in trying to understand and predict the behavior of advanced AI systems.

The European Union has also been thinking along these same lines, and in April 2016 published the General Data Protection Regulation (GDPR), with the intention of putting its rules into place by 2018. The EU document, besides the usual points about what personal data can be collected, raises the same two concerns and proposes legal countermeasures.

The risk is that attempting to regulate for fairness could effectively outlaw any fully automated system from making a decision about a person. Equally, the proposed “right to an explanation of the decision reached after algorithmic assessment” could lead to unintended consequences of its own.

Fairness

Non-discrimination is a right. If discrimination is intentional, legal measures need to be taken. But it can also be accidental, as in the case of a fully automated system which is trained, rather than precisely encoded. Early image recognition systems got some bad press when dark-skinned people were labelled as gorillas. This was most likely due to an imbalance in the training set; Google apologized and fixed the problem. Now imagine if, instead of being a fun app to label someone’s photos, it had been a medical application, maybe a dermatology app or device trying to spot potentially cancerous moles. If most of the training data came from light-skinned people, this app could malfunction on samples from dark-skinned people. The same thing could happen on very wrinkled skin, or the skin of someone with a rare condition that affects the “normal appearance” (whatever that means) of skin. In all of these cases, it is important for the training data to contain examples of benign as well as malignant moles across the widest possible sample of skin types.

The best way to address the issue of fairness is to start from a more diverse, and therefore larger, dataset. Notice that the goal is to make sure that smaller groups are treated fairly. This is a statistical argument, not an attempt to deal with individual cases. Which leads to the second point.
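To ground this, here is a minimal sketch, in Python, of the kind of audit a team might run before shipping such a system: check how each group is represented in the training data, and how the model’s error rate varies across groups. The dataset fields (such as “skin_type”) and the model are hypothetical placeholders, not a reference to any real system.

```python
# Minimal sketch of a fairness audit; all field names are hypothetical.
from collections import Counter

def group_balance(records, group_key):
    """Share of the dataset contributed by each group."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

def per_group_error(records, group_key, predict):
    """Error rate of a classifier, broken out by group."""
    errors, totals = Counter(), Counter()
    for r in records:
        g = r[group_key]
        totals[g] += 1
        if predict(r) != r["label"]:
            errors[g] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical usage:
# records = load_dataset(...)   # list of dicts with a "label" field
# print(group_balance(records, "skin_type"))
# print(per_group_error(records, "skin_type", model.predict))
```

A large gap between groups in either number tells you that the dataset, not the algorithm, is the first thing to fix.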

Transparency

The right to an explanation is less of a right: it is impossible to achieve, so it should not be legislated. Two arguments can be made.

First, we don’t ask the same of human-based decisions. Ask your bank why a request for a loan was rejected, and you will be told that “your credit score was too low”, or that the loan officer made the decision. Not a very transparent decision! So either you are faced with a very simplistic model, based on a couple of variables like your income, credit score or age; it doesn’t capture your full financial picture, but it is simple enough to fit in our heads. Or you must deal with a very experienced loan officer who decided to reject your application. You are now faced with a human mind that is probably not fully aware of all the biases and misperceptions affecting it: sizing up an applicant by their look, dress, or accent. All these judgments get lumped under “experience”, but where is the true explanation? Even if you demand a more precise explanation, it will be an after-the-fact rationalization. We accept the human mind as an inscrutable black box, even though we are fully aware of its limitations. With AI we are struggling with a new type of black box. The same debate is happening around autonomous cars: we accept 1.3 million deaths in traffic accidents per year worldwide, most of them due to human error; but we instinctively demand perfection from the new black boxes.
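To make the contrast concrete, here is a minimal sketch of the two-variable model described above. The thresholds are invented for illustration and are not any real bank’s policy.

```python
# Illustration only: a loan rule simple enough to fit in our heads.
# Both thresholds are invented for the example.
def simple_loan_decision(income, credit_score):
    """Decide a loan application from just two variables."""
    if credit_score < 620:
        return "reject: your credit score was too low"
    if income < 30_000:
        return "reject: your income was too low"
    return "approve"

print(simple_loan_decision(income=45_000, credit_score=600))
# -> reject: your credit score was too low
```

The explanation is complete precisely because the model ignores almost everything about you.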

Second, attempting to extract an explanation from a modern Deep Learning model is bound to fail. Think for a second about the problem of deciding which advertisement to show, or which video to suggest, both problems best solved with AI. The companies that solve these problems have access to a very large amount of personal information about their users: browsing and search history; age, gender and education level; and many more personal attributes which they know or can easily infer. The new models can ingest and make effective use of thousands or millions of variables, and are vastly better than the simple graphs of yesterday, where a couple of lines delineated “good” from “bad”. The decision on what to suggest is now based on lessons learned from all the data of millions of other people: which advert was shown to whom, and who clicked on what. This is utterly impossible to explain in one sentence. Or a paragraph. Or a 1,000-page book. We cannot explain a really complex mathematical function learned from a mountain of data in a way that will satisfy a human. This is what we are facing. Legislating the need for an explanation will not make that contradiction disappear.
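A back-of-the-envelope calculation shows the scale of the problem. The layer sizes below are hypothetical, chosen only to be plausible for a model of this kind:

```python
# Rough arithmetic: counting the learned parameters of a hypothetical network.
layers = [100_000, 1024, 512, 256, 1]   # input features -> hidden layers -> output

params = sum(n_in * n_out + n_out        # weights plus biases for each layer
             for n_in, n_out in zip(layers, layers[1:]))
print(f"{params:,} learned parameters")  # about 103 million

# Reading one parameter per second, around the clock:
seconds_per_year = 60 * 60 * 24 * 365
print(f"~{params / seconds_per_year:.1f} years to read them all aloud")  # ~3.3 years
```

Even a faithful printout of those millions of numbers would explain nothing: the “reason” for any one suggestion is spread across all of them.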

Functional Regulation

The AI genie is out of the bottle for good. Artificial Intelligence is rapidly becoming one of the top competitive advantages for companies and countries. Once it makes its way deep enough into other industries, it might become the primary competitive advantage. Countries that feel compelled to implement Artificial Intelligence regulation that is too aggressive will fall behind, with serious economic and security consequences. So it is important that regulators be very clear on what we need to regulate: the inner workings of a Deep Learning model are a poor choice; such Artificial Intelligence regulation is tantamount to attempting to regulate mathematics. Instead, we should focus on specific applications of Artificial Intelligence and regulate based on the performance of the function, not how it is achieved. Autonomous cars should be regulated as cars: they should safely deliver users to their destinations in the real world, and overall reduce the number of accidents; how they achieve this is irrelevant.
