
November 18, 2020

AI for Governance Belongs in Mechanism, Not Policy


This post contains an address delivered at the IRGC conference on Governance Of and By Digital Technology.

The separation of mechanism and policy

Computer science has a well-established design principle known as separation of mechanism and policy. I think this principle may be equally applicable to human governance – especially in considering the question of where AI does and does not belong.

I believe that today’s powerful AI technologies based on machine learning, used cautiously, can justifiably play many useful roles in implementing the low-level mechanisms used in governance. But I hold that AI has no legitimate role to play in defining, implementing, or enforcing policy. Matters of policy in governing humans must remain a domain we reserve strictly for humans.

What differentiates mechanism from policy? Mechanism represents a toolbox of technologies and processes that are usable in many different ways, and that are generally oblivious to how they are used. Policy determines the right way to use the tools we have towards beneficial ends. Mechanism strives to be value-neutral, while acknowledging that no technology is ever fully neutral. Policy, in contrast, necessarily lives in the densest heart of the jungle of human values, ethics, and norms. Policy is not about being value-neutral but about identifying and embodying the right values in the right way.

As an example, AI may have many justifiable uses in electronic sensors to detect the presence of a car, how fast it’s going, or whether it stopped at an intersection. But AI does not belong anywhere near the policy decision of whether a car’s driver warrants suspicion and should be stopped by highway patrol.

Similarly, deciding what is a house, a tree, or a road in an aerial photo is mechanism, for which AI is probably justified and certainly useful. Deciding – or even just suggesting or recommending – which neighborhoods on a map should receive more police attention and surveillance is policy, a decision domain where I claim AI can play no legitimate role.
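To make this distinction concrete in code, here is a minimal sketch of the traffic example, in Python. Everything in it is a hypothetical illustration – the function names, the 80 km/h limit, the physics stub standing in for a trained model – but it shows the shape of the separation: the mechanism answers a narrow factual question and is oblivious to how the answer is used, while the policy rule is an explicit, human-written line that the governed can read and contest.

```python
# Minimal sketch of the mechanism/policy split (all names hypothetical).

def measure_speed_kmh(pos1_m: float, pos2_m: float, dt_s: float) -> float:
    """Mechanism: answer a narrow factual question -- how fast?

    In a real system a machine-learned model might sit here,
    estimating speed from camera frames; a simple physics stub
    stands in for it. Either way, the mechanism neither knows
    nor cares how its measurement will be used.
    """
    return (pos2_m - pos1_m) / dt_s * 3.6   # m/s -> km/h

SPEED_LIMIT_KMH = 80   # policy input: set by legislation, not learned from data

def violates_speed_limit(speed_kmh: float) -> bool:
    """Policy: an explicit, human-written rule that anyone governed
    by it can read, understand, and contest. No learned component
    belongs on this side of the line."""
    return speed_kmh > SPEED_LIMIT_KMH

if __name__ == "__main__":
    speed = measure_speed_kmh(pos1_m=0.0, pos2_m=50.0, dt_s=2.0)   # 90 km/h
    print(f"measured {speed:.0f} km/h, violation: {violates_speed_limit(speed)}")
```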

Four follies of using AI in policy

Misuse of AI in policy is toxic to self-governance in free societies for at least four reasons. First, by synthesizing an opaque decision-making algorithm from a training dataset, it represents not an improvement on, but a fundamental abdication of, the basic principle that policy should be based on transparent and clearly-elucidated laws and regulations that the governed can understand and contest if needed. Second, AI silently absorbs the biases, discriminatory behavior, and arbitrary expectations of what is “normal” from its training datasets. Third, by making decisions based on static datasets recording past human behavior, AI embodies an implicit – and incorrect – strictly-conservative assumption that our past provides the “right” guidance for what our future behavioral norms should be. Fourth, use of AI in policy amounts to delegating our self-governance to a guardian that is neither representative of nor accountable to the governed, ignoring lessons from millennia of human history that only the governed themselves are adequately-positioned to know or decide what is in their best interests. I will briefly expand on each of these points in turn.

1. Rule of law versus rule of opaque algorithm

The basic principle of rule of law holds that members of society be “equally subject to publicly disclosed legal codes and processes” (Oxford English Dictionary). Policy in the self-governance of humans must stem from grounds that humans have identified, considered carefully, and elucidated explicitly, transparently, and comprehensibly to those governed. Policy must embody clear and understandable reasons why some form of behavior is or isn’t allowed, or why it is or isn’t grounds for suspicion or investigation. Delegating policy to AI is a means to avoid doing our own thinking, to avoid elucidating transparent or justifiable principles for decisions, and to avoid accountability for how policy may be applied or misused. In short, delegating policy to AI is a means not to improve self-governance but rather to avoid it entirely, by embodying the expectations of the governed in opaque algorithms derived from unaccountable datasets.

2. Rule by principle versus rule by learned biases and discrimination

What exactly does today’s AI based on machine learning do? It ingests a dataset that we simply assume somehow magically represents what’s right, true, and proper, and produces an algorithm to regurgitate similar decisions in the future. But as we know, training datasets in the real world never are, and never will be, perfectly correct – let alone fair, unbiased, and non-discriminatory. Datasets can hide bias and discrimination in almost limitless forms, and can even be outright racist. The only dimensions in which we can even plausibly hope a training dataset – or a resulting algorithm – is unbiased are those dimensions we have explicitly checked for, and methodically corrected for, any detected bias. But these detected biases will always be an infinitesimal subset of the dimensions along which machine learning can, and in practice will, learn unfair and discriminatory behaviors, in a multitude of toxic flavors that we merely haven’t thought of or analyzed yet.
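As a toy illustration of this dynamic, consider the following sketch (synthetic data, hypothetical feature names, using NumPy and scikit-learn). Past stop decisions were driven partly by actual behavior and partly by an irrelevant proxy attribute; a classifier trained on those decisions dutifully learns to weight the proxy more heavily than the behavior it was nominally meant to judge – and unless we thought to check this particular dimension, we would never notice.

```python
# Toy demonstration: a model trained on biased decisions reproduces
# the bias. All data is synthetic and all feature names hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
speeding = rng.integers(0, 2, n)        # actual behavior (relevant)
neighborhood = rng.integers(0, 2, n)    # proxy attribute (irrelevant)

# Historical "stop" decisions: driven partly by behavior, but more
# strongly by where the driver lives -- the bias hiding in the data.
stopped = ((0.3 * speeding + 0.5 * neighborhood) > rng.random(n)).astype(int)

X = np.column_stack([speeding, neighborhood])
model = LogisticRegression().fit(X, stopped)

# The learned coefficients weight the irrelevant proxy more heavily
# than the behavior the policy was nominally about.
print(dict(zip(["speeding", "neighborhood"], model.coef_[0].round(2))))
```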

Use of AI in policy may be defended on the grounds that humans are imperfect and make biased and discriminatory decisions too. But when it is real humans, and only real humans, making these governance mistakes, we can at least retain a lingering hope that one of us mistake-making humans will eventually think outside the box conscientiously enough to recognize a repeated mistake, bias, or discriminatory precedent as such, and do something to call it to attention and correct it. But when we automate away our mistake-making into a machine-learned algorithm embodying innumerable subtly-broken notions of what’s true or right, we cement in that mistaken precedent and risk losing the opportunity for anyone ever to notice or correct it.

3. Rule by what’s right for the future versus rule by our past

More generally, because machine-learning algorithms necessarily learn from datasets representing past and only past experience, AI-driven policy is fundamentally constrained by the purely-conservative assumption that data recording our past represents the right, best, or only viable basis from which to make important policy decisions about the future. This backward-looking presumption flies in the face of the self-evident fact that all past and present societies are, to say the least, highly imperfect. To have any hope of genuinely improving our societies, governance must be visionary and forward-looking, using the past only as a reference to learn from and help us avoid making old mistakes yet again.

Even when past social or behavioral norms are not themselves flawed or harmful per se, employing AI in policy leads inevitably to the irresistible temptation of algorithmically discriminating between “normal” people or behaviors and ones that are unusual in any dimension, however innocuous. Discrimination between the normal and the unusual always tends to focus unwanted attention on the unusual – thereby punishing people who are extreme or exceptional even in positive ways, and incentivizing everyone to conform and keep their heads down in order to stay “under the radar” of norm-centric algorithms. Thus, using the recorded past to derive algorithmic norms for the future inevitably disempowers and dehumanizes people by imprisoning them in the straitjacket of the prevailing median.

4. Self-rule versus rule by unaccountable guardian

Finally, delegating policy decisions – any policy decisions – to AI is merely a new technocratic form of political guardianship. As powerfully developed in the work of political philosopher Robert Dahl, one of the basic lessons of human history is that only those governed are well-positioned to decide what is in their own interests.

Delegating the power to make or enforce policy to any unrepresentative or unaccountable guardian – however “wise” or “benevolent” such a dictator may seem – eventually leads to divergence between the guardian’s aims and the interests of the governed. This divergence can arise because the guardian’s incentives or optimization criteria never fully or accurately reflect the interests of the governed – as they never can, whether the guardian is a human dictator or an AI. Divergence can also result gradually from the fact that the governed themselves and their interests are necessarily fluid, changing and evolving in both reality and perception. An AI trained on a dataset encoding precedent by definition cannot remain responsive to gradual changes in the interests of the governed, no matter how ideal its initial state may be. Finally, an AI guardian can never be representative because it is not human.

Today’s enlightened dictator eventually becomes tomorrow’s oppressor. Supposing that an AI guardian will change this rule is nothing but technocratic hubris. To the contrary, an AI guardian is all the more dangerous for being immune to human frailties like guilt or death, of which the latter at least claims today’s human dictators eventually. Governance by AI can thus only offer yet another misguided and disastrous means to avoid actual self-governance by delegating policy to unrepresentative and unaccountable guardians. Only the fresh thoughts and sentiments of the governed themselves, expressed year to year and moment to moment in human self-governance, can legitimately track and reflect the interests of those governed. To remain free we must reject all guardians, human or AI, reserving the right to set or enforce policy strictly to ourselves, the humans governed.

The siren songs of seductive non-solutions

Two potential mitigations are often proposed: keeping a “human in the loop” or so-called “explainable AI.” We must not take either of these approaches as an excuse to allow AI in policy, however.

“Human in the loop” is not a solution because humans are already too willing to leave hard decisions to others. Having AI make merely a policy “recommendation”, while giving a human the final press of the button to accept the AI’s decision, invites and incentivizes the human to be lazy and simply accept the AI recommendation. Because why not? The AI “seems right” most of the time, anyway, and if it isn’t, we can blame it on the AI. “Human in the loop” simply reduces the human component to a cog in the machine.

What about “explainable AI”, where AI might, for example, be used to find and express a policy as an explicit decision rule book? Even if “explainable AI” truly materializes – which it hasn’t yet – it will not justify the use of AI in policy. There may be thousands, millions, or an infinite number of possible “explainable” rule books that a human or algorithm might identify, each satisfying a desired but under-constrained set of criteria. But of these solutions, the data-driven AI will dutifully find the one unique rule set that most religiously reflects the biases and mistakes of the past embodied in the training data. The noise of deliberation by a diverse, representative, and accountable group of humans reliably introduces randomness that often yields fresh mistakes, but also often works in our favor, identifying new and better principles and rules that break from our past mistakes. Even “explainable” AI-based policy, in contrast, will optimize any resulting rule book with pinpoint accuracy for slavish devotion to our own wrong-headed past.
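The same toy setup from earlier illustrates why legibility alone does not help. In the sketch below (again synthetic data and hypothetical feature names), we extract a small, perfectly human-readable decision tree from the biased stop decisions – and its very first rule splits on the irrelevant proxy rather than on the behavior supposedly being judged.

```python
# An "explainable" rule book learned from biased data is still biased;
# the bias is merely written out legibly. Synthetic data throughout.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 10_000
speeding = rng.integers(0, 2, n)        # relevant behavior
neighborhood = rng.integers(0, 2, n)    # irrelevant proxy
stopped = ((0.3 * speeding + 0.5 * neighborhood) > rng.random(n)).astype(int)

X = np.column_stack([speeding, neighborhood])
tree = DecisionTreeClassifier(max_depth=2).fit(X, stopped)

# Readable rules, faithfully optimized against the biased past:
# the root split is on "neighborhood", not "speeding".
print(export_text(tree, feature_names=["speeding", "neighborhood"]))
```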

Conclusion

Notice that none of my concerns above rely on the science-fiction scenario of AI “waking up” and intelligently deciding to take over. This is because self-aware AI is both a much more distant and much more obvious threat. Long before that happens, we will have far more numerous and pernicious opportunities to shoot ourselves in the societal foot through far more mundane abuses of today’s non-self-aware AI that merely learns from datasets. If anything, dependence on truly self-aware AI might be less dangerous, because we would have at least some hope of it developing a conscience.

In conclusion, we must never allow our governments or their employees to abdicate their responsibilities to govern as humans, no matter how attractive it may be to delegate decisions to – or even just obtain policy “recommendations” from – any seemingly-helpful guardian, human or AI. Influence by AI on policy is just as pernicious as foreign influence on a sovereign nation’s policy-making, and we should make it just as illegal. There are many powerful and justifiable uses of AI, but we must confine these uses to mechanism, not policy.



Topics: Democracy, Law, Surveillance, Transparency

Bryan Ford