A headline this morning on NBC read, “Arizona Moves to Ban AI Use in Reviewing Medical Claims.” This law is profoundly idiotic, and one of the most important bits of idiocy is obvious right in the body of the law. The law is a PDF, so I’ll paste the whole thing here. It’s not long:
H.B. 2175
A. Artificial intelligence may not be used to deny a claim or a prior authorization for medical necessity, experimental status or any other reason that involves the use of medical judgment.
B. A health care provider shall individually review each claim or prior authorization that involves medical necessity, experimental status or that requires the use of medical judgment before a health care insurer may deny a claim or a prior authorization.
C. A health care provider that denies a claim or a prior authorization without an individual review of the claim or prior authorization commits an act of unprofessional conduct.
D. For the purposes of this section, “health care provider” means a person who is certified or licensed pursuant to title 32.
Notice in section D they define “health care provider.” They chose not to define “artificial intelligence.”
In insurance, an actuarial table is a database that collects a massive pile of data and creates a statistical relationship between your current health and lifestyle statistics and the likelihood of your death, of future disability, or of any given treatment having a benefit that justifies the cost.
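At its core, an actuarial table is nothing more exotic than a lookup from risk factors to a probability. Here is a minimal sketch in Python; every number and category in it is invented purely for illustration and bears no relation to any real mortality data:

```python
# Toy actuarial table: (age_band, smoker) -> modeled one-year mortality
# probability. All figures below are made up for this sketch.
ANNUAL_MORTALITY = {
    ("30-39", False): 0.001,
    ("30-39", True):  0.003,
    ("60-69", False): 0.010,
    ("60-69", True):  0.028,
}

def mortality_risk(age_band: str, smoker: bool) -> float:
    """Look up the modeled probability of death within one year."""
    return ANNUAL_MORTALITY[(age_band, smoker)]

print(mortality_risk("60-69", True))  # 0.028
```

A real table has far more dimensions and is fitted from population data rather than typed in by hand, but the operation is the same: feed in facts about a person, get back a probability.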
Insurance companies will stop calling their AIs “AIs” and start calling them “Actuarial attention models,” since the “model” in “large language model” is just a massive pile of data about the statistical relationships between phrases to determine what phrase is likely to follow another in human speech. The “AI” models used by insurance companies use a similar algorithm (“these medical and lifestyle events in this order are likely to create this outcome…”) but respond with a spreadsheet, not a conversation.
This bill effectively bans actuarial tables, since both actuarial tables and machine learning models do the same thing: statistical analysis. LLMs are especially bad at it because they’re just probabilistic parrots without any actual human intent behind what they’re saying; all the intent went into choosing the training data, and the outcome is still broadly incomprehensible even to the best computer scientists. The appearance of intelligence is an illusion; behind the curtain, it’s just statistics about likely outcomes.
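The “statistics about likely outcomes” point can be made concrete with a toy next-word model. The sketch below builds a bigram counter over a deliberately tiny corpus; a real large language model uses trillions of words and a vastly more sophisticated architecture, but the underlying move is the same: count what tends to follow what, then emit the likeliest continuation.

```python
from collections import Counter, defaultdict

# A deliberately tiny "training corpus" invented for this sketch.
corpus = "the claim was denied the claim was approved the claim was denied".split()

# Bigram counts: for each word, tally the words observed immediately after it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the statistically most frequent successor of `word`."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("was"))  # "denied" (2 of the 3 observations)
```

No intent, no judgment, no understanding of what a claim denial means to a patient: just frequency counts dressed up as an answer.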
The problem here is not the use of statistics. The problem here is systems that require low-level workers to make judgments that “maximize shareholder value” at the expense of human lives, while at the same time shielding upper-level management from any criticism or penalty for expending human lives. “That’s just what the numbers say” is the whole of the reason, even if the one real number that matters to insurance executives is “If you save too many lives, my bonus goes down.”
Accountability drain, the ability to say “no one person is responsible for this outcome,” will persist until we as a civilization decide “for every decision, there must be someone who has the final say in what it is and how it can be changed, and that person is accountable for what follows.” Banning statistical analysis of any kind isn’t the change we need. It’s just window dressing over ongoing human misery.
The 1940 film adaptation of The Grapes of Wrath, directed by John Ford, nails this perfectly:
THE MAN: All I know is I got my orders. They told me to tell you you got to get off, and that’s what I’m telling you.
MULEY: You mean get off my own land?
THE MAN: Now don’t go blaming me. It ain’t my fault.
SON: Whose fault is it?
THE MAN: You know who owns the land — the Shawnee Land and Cattle Company.
MULEY: Who’s the Shawnee Land and Cattle Comp’ny?
THE MAN: It ain’t nobody. It’s a company.
SON: They got a pres’dent, ain’t they? They got somebody that knows what a shotgun’s for, ain’t they?
THE MAN: But it ain’t his fault, because the bank tells him what to do.
SON: All right. Where’s the bank?
THE MAN: Tulsa. But what’s the use of picking on him? He ain’t anything but the manager, and half crazy hisself, trying to keep up with his orders from the east!
MULEY: (bewildered) Then who do we shoot?
Arizona decided to shoot the computer, for all the good that’ll do.