How to Get Away With AI Discrimination and Insurance Fraud
It's easy to find loopholes in the law!
Disclaimer: All statements here are mine alone, and do not necessarily reflect the beliefs of my employer or any organization I work with. Nothing in this post should be construed as legal advice.
In recent years, dozens of state legislatures have passed bills outlawing discriminatory uses of artificial intelligence. These include (to name just a few of many examples) Illinois’s HB 3773, which forbids AI-driven employment discrimination; Maryland’s HB 820 and Nebraska’s LB 77, which both regulate AI in the healthcare industry; Colorado’s SB 21-169, which regulates AI in the insurance industry; and Texas’s HB 149 (TRAIGA), which regulates AI-based discrimination, among other provisions.
These laws (and note that I’m necessarily simplifying across many different acts) are attempting to achieve two primary goals:
Ensuring human-in-the-loop (HITL) decision-making for high-impact industries like healthcare, insurance, employment, and law enforcement; and
Applying pre-existing civil rights laws to the AI industry, in order to prevent “algorithmic discrimination”
What exactly “algorithmic discrimination” means varies from state to state and law to law, but I’ll use Colorado’s Bill 24-205 as a fairly representative example. That act defines:
“Algorithmic discrimination” means any condition in which the use of an artificial‑intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited English proficiency, national origin, race, religion, reproductive health, sex, veteran status, or any other class protected under state or Federal law.
But let’s say you’re a company that wants to algorithmically discriminate. Luckily for you, there’s an easy way to get around these pesky state laws. All you have to do is follow this plan:
Use an AI model to make an important decision (e.g., hire or fire somebody, set an employee’s salary, give or deny somebody a loan, etc.).
Feel free to feed your AI model pertinent information like the applicant’s race, ethnicity, gender, disability, or religious affiliation.
Feel free to train your model on biased or discriminatory training data.
Ask the AI model to generate benign, non-discriminatory reasons to “explain” why it made that decision. These reasons don’t actually have to be true; they just have to sound true enough to fool regulators.
Use the AI model’s black box properties to your advantage. If you don’t know how your own model works, then you have plausible deniability about whether or not it’s discriminating.
Most government administrators and judges are old people without much understanding of modern technology. As soon as you start using terms like “linear regression” or “model weights” or “generative pre-trained transformer”, their eyes will roll to the back of their heads. Use this information asymmetry to your advantage.
Another way to put this: let your AI model practice alignment faking on the U.S. legal system.
Pick one of your employees to rubber-stamp whatever the AI model says. This employee will be your “human in the loop”, and they will take the fall for you if anything goes wrong.
That’s it! Follow those simple steps, and you’ll be able to algorithmically discriminate as much as you want. Your AI model can deny insurance claims from elderly patients, or reject loans to minority applicants. Practically speaking, law enforcement agencies will be unable to do anything about it.
Well, sort of. There are ways for the government to get around these tricks. The government can subpoena records (including chat logs, system prompts, and model training data), subpoena witnesses, and require you to perform algorithmic impact assessments. And even if they can’t catch you on willful discrimination per se, they might still find you liable for deceptive business practices or reckless disregard of civil rights laws.
In practice, a law is only as powerful as the law enforcement behind it, and whether these civil rights laws end up being effective will depend on several factors. It will depend on how vigorously attorneys general choose to enforce the laws. (I suspect that Democratic AGs will be more willing to spend resources on civil rights litigation than Republican AGs.) And it will depend on how committed private organizations are to resisting those laws. (A law need not be foolproof to be effective; it merely has to make the cost of not complying higher than the cost of complying.) Larger companies will have to be more careful, since they will be juicier targets for lawsuits. But larger companies will also have more resources to handle those lawsuits.
Overall, I’d say the deck is stacked more in favor of the would-be discriminators than the would-be civil rights enforcers. Private organizations tend to be nimbler and more technically adept than slow-moving government bureaucracies. And when it comes to something as fast-moving and complicated as AI, that can make all the difference.
For the record, I’m not saying that algorithmic discrimination and insurance fraud are good things, and I’m not saying that companies should follow the advice that I’ve just given. I’m merely claiming that if a company wanted to algorithmically discriminate in a way that skirts around civil rights laws, it would be fairly easy to do so.
If you are a fan of civil rights laws, then it’s in your best interests to design those laws more robustly than states are doing today. Simply requiring human oversight over AI decisions is basically meaningless on its own. Any effective law would have to include not only transparency requirements in the AI models themselves, but also enforcement mechanisms that can prevent the AI models from internally scheming to produce discriminatory outcomes. Of course, this would require regulators to be able to identify when AI models are internally scheming, which is a thorny technical problem. Even leading AI companies — who have every interest in understanding and controlling their own models — cannot reliably prove whether their own models are scheming against them, so I find it unlikely that civil rights regulatory agencies will be able to do so. It seems like the only two options for preventing algorithmic discrimination are A. solve mechanistic interpretability (which, if you can do that, you deserve a Nobel Prize) or B. prohibit companies from using AI algorithms to make decisions altogether (which would be a very onerous requirement, and not a strategy that state lawmakers seem likely to pursue).
Well, there is a third option, and it’s called disparate impact. Disparate impact is the legal doctrine that says discrimination can be shown via unequal effects on different populations, even if discriminatory intent cannot be proven. For instance, let’s imagine an AI hiring service called EmployAI. EmployAI does not officially discriminate based on race, nor do any of its chain-of-thought logs explicitly mention race. Nonetheless, EmployAI only recommends white applicants for hiring. Under the disparate impact doctrine, EmployAI (and by extension, any firm that uses EmployAI to make hiring decisions) could be found liable for racial discrimination. In short, disparate impact allows the government to punish a company for civil rights violations, without having to undergo the difficult task of proving how that company’s decisions were made.
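To make the disparate impact test concrete, here’s a minimal sketch (my own illustration, not part of any of the laws discussed above) of the kind of screening an auditor or regulator might run on EmployAI’s hiring recommendations: compute each group’s selection rate and flag any group whose rate falls below four-fifths of the most-favored group’s rate, following the EEOC’s traditional rule of thumb. The group labels and outcomes below are made up.

```python
# Hypothetical disparate-impact screen using the "four-fifths rule":
# a group is flagged if its selection rate is below 80% of the
# highest group's selection rate. Data is illustrative only.
from collections import defaultdict

# (applicant_group, was_recommended_for_hire) -- fabricated example outcomes
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [recommended, total]
for group, recommended in outcomes:
    counts[group][1] += 1
    if recommended:
        counts[group][0] += 1

rates = {g: rec / total for g, (rec, total) in counts.items()}
best_rate = max(rates.values())

for group, rate in sorted(rates.items()):
    impact_ratio = rate / best_rate
    flag = "potential disparate impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} -> {flag}")
```

Note that this check needs only the outcomes broken down by group; it never has to look inside the model, which is exactly what makes the disparate impact doctrine attractive to enforcers here.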
This is all well and good for supporters of civil rights laws, but it raises the question: Why even make AI-specific regulations? If you plan to enforce civil rights via disparate impact, then the impact will be the same whether the decision is made by a human or an AI model. Any new, AI-specific regulations would be redundant at best, and unduly burdensome at worst.
In general, I’m very skeptical of any laws requiring humans-in-the-loop, for five main reasons:
It likely reduces efficiency. While there are certainly cases when having humans in the loop makes a system better, there are also cases when a purely AI-driven decision process is more efficient than involving any humans. (Indeed, this must be the case for HITL laws to make any sense. If HITL were always more efficient than pure-AI reasoning, then by the efficient-market hypothesis, private organizations would always employ HITL without any need for government coercion.) By forcing organizations to employ HITL in circumstances where it is not efficient, you are harming those organizations.
It is difficult, if not impossible, to implement. Having a human in the loop means nothing if that human acts as merely a rubber stamp for whatever decision an AI algorithm makes. If human involvement is supposed to achieve anything, humans must be meaningfully in the loop. But when AI systems work faster and smarter than any human, how can humans keep up?
Even if HITL is technically possible, enforcing it may not be. Most people are lazy, and it will be easy for employees to rubber-stamp AI decisions while pretending to give those decisions a thorough review. From the perspective of a government agency, how are you supposed to prove whether a human meaningfully reviewed an AI output or not?
Whatever bad behavior you’re trying to prevent can be done by a human just as easily as it can be done by AI. E.g., if your concern about AI is unfair discrimination, then you should remember that humans also unfairly discriminate. So it’s not immediately clear why having a human in the loop would be an improvement.
AI systems might be more free from bias and error than humans are. Humans make bad decisions all the time, whether it’s due to laziness, being tired, being in a rush, or something more sinister. AI systems (at least in theory) could be free from these limitations. So by forcing organizations to use human decision-makers, lawmakers might inadvertently make those organizations more discriminatory than they otherwise would be.
HITL laws remind me of the quote: “Markets do the good things that sound bad; governments do the bad things that sound good.” It sounds bad to say, “Let’s let an AI algorithm decide whose insurance claims get denied.” And it sounds good to say, in the words of American Medical Association President Bruce A. Scott, “Medical decisions must be made by physicians and their patients without interference from unregulated and unsupervised AI technology.” But if your primary concern is improving patient outcomes, then it’s not at all clear that HITL regulations would make things any better.

In short, I think that while the AI civil rights and insurance bills passing through state legislatures are mostly well-intentioned, they are not very well-designed. They will do little to actually promote the cause of civil rights, and in the process they are likely to place undue burdens on hospitals, insurers, and business owners. Lawmakers should do better.