elfs: (Default)
I’ve just done something… I’m not sure what to call it. Terrible? Wonderful? Something that should have happened months ago? I deleted a project on my hard drive, in fact the biggest project, all 415 gigabytes of it. My own writing only amasses all of 25 megabytes, or about four million words, not counting any of the social networking I’ve wasted my time on over the past 36 years, and it has been 36 years, stretching all the way back to my first encounter with Usenet in 1989.

The project, which was so big, so overwhelming, and so in need of isolation that it had its own username and local account, was named stable.

It was just an experiment with Stable Diffusion. It grew into an obsession. I realized the other day that I was wasting hours on the damned thing, tweaking to find one more perfect image in a sea of six-fingered, three-armed men, women, furries, and monsters. I told myself that it was merely a recreation, a form of leisure.

To be “ethical” leisure, a hobby needs: (1) perseverance, (2) stages of achievement and advancement, (3) significant personal effort to acquire skills and knowledge, (4) broad and durable benefits, and (5) a special social world with a unique ethos that is deemed valuable both by the participants and by observers. Stable Diffusion utterly fails at 4 and 5.

And I’m supposed to be an expert at this stuff.

Over the course of the past 2½ years I generated upwards of a million images, and still had about 100,000 of those on my hard drive, just taking up space. Not to mention the models themselves, some of them LORA files that simply cannot be found anywhere else for love or money, jealously hoarded by aficionados because they were created before April of 2024, when the big boys decided there would be no more LORAs that allowed you to generate pictures with “absurdly large breasts” or “being caressed by lots of tentacles,” because some of their creators had also tried to sneak underage girls into the data stream, images you could then generate if you knew the right keywords, keywords frequently embedded in the keyword_frequency key of the metadata block.
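
(If you’re curious how that metadata gets read: a .safetensors file begins with an eight-byte little-endian length followed by a JSON header, and trainers stash their tag and keyword tables under the header’s __metadata__ block. The exact key names vary by trainer, so treat this as a minimal sketch for poking at one, not a description of any particular tool.)


import json
import struct
import sys

def read_safetensors_metadata(path):
    """Return the __metadata__ block from a .safetensors file, where
    trainers stash things like tag/keyword frequency tables."""
    with open(path, "rb") as fh:
        header_len = struct.unpack("<Q", fh.read(8))[0]   # little-endian u64
        header = json.loads(fh.read(header_len))
    return header.get("__metadata__", {})

if __name__ == "__main__":
    for key, value in read_safetensors_metadata(sys.argv[1]).items():
        print(key, ":", str(value)[:80])   # truncate the long frequency tables
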

All gone now. Poof. Even the backups have been destroyed. Going cold turkey on day one.

And when I say “wasting hours” I mean it truly; it was as bad as two or three hours every day. I had stopped reading. I had stopped coding for fun, although that may be more an artifact of how well my brain works after that nasty COVID bout and the ravages of turning, well, 59.

I never posted anything that I generated, because I recognize the ethical problems in image generation “AIs.” It’s funny how many of the people deep into this, er, hobby recognize that this isn’t AI at all and simply call them “diffusion models” of one sort or another. I don’t want to take money out of artists’ hands; I want more artists making more art, not less. The number of story ideas I extracted from the, good grief, thousands of hours I sank into that thing over the past 30 months can be counted on one hand, because it’s literally five. Out of the million images I generated, I kept five.

There are artists on Twitter who have given me more good story ideas in an hour than the estimated nine man-months of my life I put into what is probably the most useless skill I shall ever have acquired.

There’s no reason I couldn’t rebuild most of it; after all, it’s just downloading software, and this time I have more skill in handling LLMs in local space, since, again, I had no desire to share either my skills or my products with a commercial image producer.

I also deleted a lot of tools that I had developed along the way. I wrote my own little programming language: Loopy. The Loopy interpreter was written in Python and allowed me to do all sorts of peculiar tweaks to the prompts and to the various strengths and timings of components of the prompt in mid-process, just so I could do odd and silly things with wildcards above and beyond what Stable Forge was capable of processing, and could do it hands-off, without the browser running. I could “do multiple runs of five of this prompt, using the same seed for each run, only using these six different LORAs in succession,” or “do the multiple runs, use the same seed each time, but progressively increase the influence of the LORA,” so I could do some empirical analysis on just how much of one LORA or another I needed to make whatever the LORA promised to do look “exactly right.” I could even go to one of several image galleries on-line, such as CivitAI’s “Furries” gallery, pick as many images as I liked from the gallery and open them in tabs, and Loopy would download the generation data for each image and attempt to reproduce it locally, with whatever tweaks I wanted that day to make it more interesting to me.
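
The guts of Loopy aren’t worth reconstructing, but the kind of experiment it automated is easy to sketch. The “same seed, increasing LORA influence” run looked roughly like this, where generate() is a hypothetical stand-in for whatever txt2img backend you point it at (Forge’s HTTP API, diffusers, anything), not a real library call:


# A rough sketch of one Loopy-style sweep: hold the prompt and seed fixed,
# step the LORA weight up, and keep each result for side-by-side comparison.
# generate() is hypothetical; wire it to your own txt2img backend.

BASE_PROMPT = "portrait of a red-haired woman, rainy street, cinematic light"
LORA_NAME = "some_style_lora"   # hypothetical LORA filename
SEED = 1234567                  # fixed, so only the LORA weight varies

def generate(prompt: str, seed: int) -> str:
    """Stand-in: render one image for this prompt and seed, return a file path."""
    raise NotImplementedError("point this at Forge, diffusers, or whatever you use")

def lora_strength_sweep():
    results = []
    for weight in (0.2, 0.4, 0.6, 0.8, 1.0):
        # Forge/A1111-style prompts embed a LORA as <lora:name:weight>
        prompt = f"{BASE_PROMPT}, <lora:{LORA_NAME}:{weight:.1f}>"
        results.append((weight, generate(prompt, SEED)))
    return results
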

And despite all of this, despite generating at least a million pictures, I found five I wanted to keep.

That’s an obsession. Despite my own observation that AI exploits a critical vulnerability in the human brain, I succumbed to it. I drowned in drab beauty, like a certain billionaire obsessed with making everything look like the inside of a 1950s “historical fantasy” titillation film, regional car dealership rococo slop. It’s ironic, I suppose, that having seen what the Pixiv “artist” AIBot could do with it, and understanding exactly how habit-forming it could be, I wanted to be able to do what AIBot did, and better. And, I suppose, I did do better; much better. Read thousands of prompts and you’ll develop the skill too; it just won’t be a very useful skill.

Anyway, all gone now. I hope. As I said, I could put it back, minus some “classic” LORAs for a variety of, well, mostly breast sizes (always a hard thing to get right with the early generation, and I never graduated beyond Stable Diffusion 1.5), but I deliberately made even that difficult: I deleted the Loopy toolkit repository. I don’t want that temptation.

I expect I’ll be grumpy about this for a week or so; a habit, even a bad habit, puts its victim through an extinction burst when the brain realizes that that particular source of dopamine is no longer operant. But I suspect I’ll also get over it, and I wonder what I’ll do with all the extra time. I should plan on finding something better to do with all that brain power.
elfs: (Default)
A headline this morning on NBC read, Arizona Moves to Ban AI Use in Reviewing Medical Claims. This law is profoundly idiotic, and one of the most important bits of idiocy is obvious right in the body of the law. The law is a PDF, so I’ll paste the whole thing here. It’s not long:


H.B. 2175

A. Artificial intelligence may not be used to deny a claim or a prior authorization for medical necessity, experimental status or any other reason that involves the use of medical judgment.

B. A health care provider shall individually review each claim or prior authorization that involves medical necessity, experimental status or that requires the use of medical judgment before a health care insurer may deny a claim or a prior authorization.

C. A health care provider that denies a claim or a prior authorization without an individual review of the claim or prior authorization commits an act of unprofessional conduct.

D. For the purposes of this section, “health care provider” means a person who is certified or licensed pursuant to title 32.


Notice in section D they define “health care provider.” They chose not to define “artificial intelligence.”

In insurance, an actuarial table is a database that collects a massive pile of data and creates a statistical relationship between your current health (and lifestyle) statistics and the likelihood of your death, of future disability, or of any given treatment having a benefit that justifies the cost.

Insurance companies will stop calling their AIs “AIs” and start calling them “Actuarial attention models,” since the “model” in “large language model” is just a massive pile of data about the statistical relationships between phrases to determine what phrase is likely to follow another in human speech. The “AI” models used by insurance companies use a similar algorithm (“these medical and lifestyle events in this order are likely to create this outcome…”) but respond with a spreadsheet, not a conversation.

This bill effectively bans actuarial tables, since both actuarial tables and machine learning models do the same thing: statistical analysis. LLMs are especially bad at it because they’re just probabilistic parrots without any actual human intent behind what they’re saying; all the intent went into choosing the training data, and the outcome is still broadly incomprehensible even to the best computer scientists. But that incomprehensibility is an illusion; behind the curtain, it’s just statistics about likely outcomes.

The problem here is not the use of statistics. The problem here is systems that require low-level workers to make judgments that “maximize shareholder value” at the expense of human lives, while at the same time shielding upper-level management from any criticism or penalty for expending human lives. “That’s just what the numbers say” is the whole of the reason, even if the one real number that matters to insurance executives is “If you save too many lives, my bonus goes down.”

Accountability drain, the ability to say “no one person is responsible for this outcome,” will persist until we as a civilization decide “for every decision, there must be someone who has the final say in what it is and how it can be changed, and that person is accountable for what follows.” Banning statistical analysis of any kind isn’t the change we need. It’s just window dressing over ongoing human misery.

The 1940 film adaptation of The Grapes of Wrath, by John Ford, nails this perfectly:


THE MAN: All I know is I got my orders. They told me to tell you you got to get off, and that’s what I’m telling you.

MULEY: You mean get off my own land?

THE MAN: Now don’t go blaming me. It ain’t my fault.

SON: Whose fault is it?

THE MAN: You know who owns the land — the Shawnee Land and Cattle Company.

MULEY: Who’s the Shawnee Land and Cattle Comp’ny?

THE MAN: It ain’t nobody. It’s a company.

SON: They got a pres’dent, ain’t they? They got somebody that knows what a shotgun’s for, ain’t they?

THE MAN: But it ain’t his fault, because the bank tells him what to do.

SON: All right. Where’s the bank?

THE MAN: Tulsa. But what’s the use of picking on him? He ain’t anything but the manager, and half crazy hisself, trying to keep up with his orders from the east!

MULEY: (bewildered) Then who do we shoot?


Arizona decided to shoot the computer, for all the good that’ll do.
elfs: (Default)
In a recent letter to clients, investment guru Ken Fisher talked about his portfolio’s positions with respect to the “AI industry,” and in the middle of this (admittedly very skeptical) take on the “AI boom” he praises his friend’s “AI-supported insulin pump which constantly regulates and adjusts the dose automatically, eliminating the dozens of needle sticks and injections she had to do every day before this.”

Intrigued and a little annoyed by this bright spot in an otherwise dour assessment of AI’s potential, I went and looked up what’s actually running on the Pearl “artificial pancreas.”

It’s running Bayesian analysis on top of the Bergman Minimal Model of Glucose Regulation, with small randomizations in the perturbation model to update the Pearl’s model for the individual using it.

The Bergman model is 45 years old. Bayes’ algorithm is 260 years old. What the Pearl does is monitor your glucose levels precisely and, over the course of the day, make predictions about when you’ll need more insulin, and then monitor how your glucose levels respond to its scheduling, updating the schedule in order to smooth out the responses and help the diabetic patient manage better.
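
If “Bayesian analysis” sounds grander than it is, here is the flavor of it reduced to a toy: keep a belief about one number, how much a unit of insulin drops your glucose, and tighten that belief every time a prediction misses. This is a deliberately oversimplified normal-normal update of my own devising, nothing like the Pearl’s actual model:


# Toy Bayesian update of a single parameter: insulin sensitivity
# (mg/dL of glucose drop per unit of insulin). Illustrative only.

class SensitivityBelief:
    def __init__(self, mean=50.0, var=100.0, obs_var=225.0):
        self.mean = mean        # prior mean sensitivity (mg/dL per unit)
        self.var = var          # prior variance: how unsure we are
        self.obs_var = obs_var  # noise variance of a single observation

    def update(self, dose_units, observed_drop):
        """Fold in one observation: `dose_units` of insulin produced
        `observed_drop` mg/dL of glucose reduction."""
        observed_sensitivity = observed_drop / dose_units
        precision = 1.0 / self.var + 1.0 / self.obs_var
        self.mean = (self.mean / self.var + observed_sensitivity / self.obs_var) / precision
        self.var = 1.0 / precision

    def dose_for(self, desired_drop):
        """Suggest a dose for a desired glucose drop using the current belief."""
        return desired_drop / self.mean

belief = SensitivityBelief()
belief.update(dose_units=2.0, observed_drop=80.0)   # observed 40 mg/dL per unit
print(round(belief.dose_for(60.0), 2), "units to drop 60 mg/dL (toy numbers)")
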

There’s a lot that goes into the Pearl monitor. Insulin formulations that are stable at room temperature for long periods are a modern technological miracle. Batteries that last for hours or even days are a modern technological miracle. Glucose sensors that tiny are a modern technological miracle. Microprocessors that can handle the load are a modern technological miracle. Even the very low energy “body area networking” that lets the glucose monitor and the insulin injector work in tandem is a modern technological miracle. Miniaturizing all of this into a package you can wear on your belt is a modern miracle. Software provers like Idris and Agda that can certify the system will behave exactly as specified are a modern miracle.

But you can’t sell any of that. Either they’re black magic to those unfamiliar with them (software provers) or they’re just part of what we’ve come to expect from technology (the miniaturization). They feel like a natural extension of whatever Steve Jobs wrought when he introduced the iPhone in 2007.

But “AI” is a buzzword. It’s hot, it’s sexy, it’s now. One of the papers my investment advisor sent me when I questioned that little bit has this to say:


… integrates real-time glucose monitoring with advanced artificial intelligence (AI) algorithms and closed-loop insulin delivery. … Through the integration of AI algorithms, not only can glucose levels be continuously monitored … continuous glucose monitoring technology with sophisticated AI algorithms … Using advanced algorithms and machine learning … the integration of complicated algorithms and cutting-edge machine learning techniques … Through the utilization of advanced algorithms … analyzing the incoming glucose data through the integration of complicated algorithms and cutting-edge machine learning techniques … to implement sophisticated algorithms that intelligently calculate the precise insulin dosage …


Do you see a trend here? Every mention of AI emphasizes how “advanced,” “sophisticated,” or “complicated” it is, but there is not one discussion of the algorithm itself. Not a single mention of what they’re using.

So I tracked it down. It’s Bayesian analysis all the way down!

A 260-year-old algorithm is not exactly cutting edge. It is the opposite of cutting edge. I don’t want to demean the makers of these things; they’re brilliant and wonderful, and I want every diabetes patient who needs one to have one. But most of the miracles inside them are hardware miracles, or miracles of modern software development. The software itself is not AI. It is standard modelling software you could have run on Lotus 1-2-3 on your IBM PC in 1983 (albeit much more slowly than what the Pearl uses for a brain today).

Ken Fisher is a smart guy, and even with the outrageous fees our portfolio is doing better than an equivalent Vanguard play would have done (and we’re Vanguard Admiral tier!). So it was disappointing to see even this little bit of blather about “AI” held up as a bright spot in an otherwise very sketchy industry. It’s not.

It’s just math.
elfs: (Default)
Have you ever looked up the definition of beauty? The dictionary will tell you that beauty is “a combination of qualities such as shape, color, or form that pleases the senses.” These qualities are context-sensitive, since a beautiful man or woman is a different sort of judgment from a beautiful bit of architecture or a beautiful view, but they do have their similarities and psychologists have done a lot of work to quantify those similarities.

I’ve been playing with Stable Diffusion, the AI “art generation” program that has artists and illustrators panicked, and rightly so, as I fear it really will create a market of “mechanically produced” art every bit as meaningful as a grocery-store frozen dinner is nutritious.

Stable Diffusion is a search engine for a “model”; a model is a file containing upwards of 8 gigabytes of tiny little bits of knowledge about all the pictures, and all the text describing those pictures, that have been fed into the search engine. The act of creating an image is known as “prompting”: the search engine takes the text of a prompt and a few internally generated random numbers and assembles images which, if fed back into the search engine, would probably be described by the same text you gave it.
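
If you want to see that “text plus a random seed in, probable image out” loop in one screen of code, the open-source diffusers library exposes it directly. A minimal sketch, with the model id and prompt as placeholders and a GPU assumed:


# Minimal prompt-to-image loop using Hugging Face diffusers.
# The model id and prompt are placeholders; any SD 1.5 checkpoint works.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# The seed is the only source of variation for a fixed prompt and settings:
# the same prompt with the same seed produces the same picture every time.
generator = torch.Generator(device="cuda").manual_seed(1234567)

image = pipe(
    "a lighthouse on a rocky coast at dawn, oil painting",
    num_inference_steps=30,
    guidance_scale=7.5,
    generator=generator,
).images[0]

image.save("lighthouse.png")
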

Despite this randomness, Stable Diffusion is extremely popular with pornography hounds. There’s one obvious reason for this– with a little cleverness that has everything to do with patience and nothing to do with talent it can produce images that the viewer enjoys and that he would never actually commission in real life, either because he’s too cheap or because he wouldn’t want to share his particular kink with the rest of the world.

But there’s another, deeper reason Stable Diffusion is so popular with smut fans, and it’s about beauty.

For the human form, the two qualities that consistently rate high as “beautiful” are youth and health. There are massive amounts of grey matter in our brains dedicated to identifying other people, and those two qualities spark responses in that grey matter like almost nothing else. This isn’t to say that if you’re an older person you can’t be beautiful, but if you are an older person and you’ve “let yourself go,” well, you’re not going to have other people giving you second glances for the pleasure of it.

Beauty in the wider world is also characterized by two seemingly contradictory features: repetition and novelty. The human brain wants to know that the world is orderly and functional and healthy, so it looks to see that there is rhythm and repetition, that a beautiful landscape is consistent and expected; it also wants to believe that it is natural and changing and still healthy, so it looks for the rigidity of artifice and the sharp angles of decay sticking out, and registers whether or not the organic novelty is a sign of growth and bounty.

It is this combination of youth, health, repetition and novelty that Stable Diffusion exploits to a degree never before seen. Feel free to click on the image to the right; there is no nudity in it, although there is skin and lingerie.

AIBot is a Stable Diffusion master who posts regularly to his account, and he understands this vulnerability better than any other. Some men just want to drown in the physical beauty of women and AIBot (and many, many, many others just like him) (warning: those links are probably NSFW) know it, enjoy it, and exploit the hell out of Stable Diffusion’s ability to create literal oceans of pretty girls so they can enjoy it and share that pleasure with others.

It is this ability to hit all the high points of the human brain’s expectation of “beauty” that makes AI image generation so compelling. We’ve all seen pretty people and watched them from time to time; Victoria’s Secret and the Sports Illustrated Swimsuit Annuals, not to mention Playboy, Penthouse, and all of their competitors, were entirely financed by men’s urge to do just that. What is new is the ability to create so much repetition and novelty on demand, fitting one’s fetishes and desires exactly, and with much more volume than any one artist or photographer would be willing to produce. It is a completely unprecedented phenomenon, and this combination of being able to see your specialized desires in secret and to generate an infinite number of such images probably accounts for those people who already describe themselves as “addicted” to Stable Diffusion.

Now, I don’t want to go off on the idea that Stable Diffusion is a danger to human beings the way anti-pornography nuts like to depict it, saying “Never before in the history of mankind have we been exposed to so much nudity, and it’s bad for our brains.” I don’t believe that at all. I just think that when we read about AI illustration, we should be aware that the people producing those images are trying to hack our brains in new and interesting ways, and we should be aware that these exploits exist and think harder about indulging in them.
elfs: (Default)
Someone the other day told me, “You’re still anthropomorphizing machines.”

I think that’s true, but I also think it’s the only game in town; everything “below us” evolutionarily is simpler and easier to understand, or at least to describe; everything that’s potentially close but parallel, such as dolphins or octopi, or even some birds, is wildly different, and the best we can do is try to come up with reasons for their behavior.

But underneath all that, we have to accept that pre-conscious Earth was “accidentally successful.” Perhaps, following Robert Wright, we should say that chemistry found the ratchet that prevents evolutionary success from going backward too much, or that prevented one species’ act of nest-fouling from ruining it for everyone else, at least until a truly global species like Anthropocene Humanity came along and started trying.

But we still have to accept that the ratchet involves three things: the consumption of resources, competition for those resources when scarcity kicks in, and a genetic diversity program to ensure that competition continues in the face of the competitors’ diversity programs seeking out new advantages.

So in talking about AIs, we have to consider whether a given AI is, for us, deliberately successful, accidentally successful, or whether its motives are meaningful to us at all. And even if it is deliberately successful, its motives may still have an accidental origin; its apparent or expressed wants and needs may not be readily understood, but the outcome of those wants and needs is competition for our resources, including, perhaps, us.

This is much like the video game analogy. The wants and needs of an individual programmer are straightforward: she wants to get paid for her work. Her work involves writing disciplined mathematical or engineered code to produce a desired outcome. In the case of a game, that may be a probabilistic component on the board, the rendering of a display on a screen, or the most efficient way to get data off a storage medium. But when you’re playing the game, you, the player, don’t think to yourself, “Interesting, that collection of polygons is exhibiting behavior that impacts my collection of polygons, and numbers associated with each set of polygons are being affected by that interaction. If I input these changes, the rate of change of those numbers is affected.” No, you think, “Dammit, that guy is trying to kill me!” You assign motive to the characters on the screen, and only later process the simplicity of the outcome.

We have to follow Watts on that; even non-conscious actors can be assigned motives sufficient for us to justify acting in self-defense. We have to work with the best models we have, even if they’re simply “It’s acting against our best interests, kill it.”

Of course, two problems arise:


  1. The action is collusive: we can’t kill corporations without damaging our own individual outcomes. This is a failure mode in which our individual survival is at risk. We have a model for willing self-sacrifice, but it’s called war. We have no model for it against the Slow AIs.

  2. The action is sufficiently subtle. Alexa is the great example of this; it isn’t until too late that we all appreciate how much Amazon is designed to use our own anxieties against us. The more actively involved GAIs become in our lives, the more they become tools of the Slow AIs that care only about the velocity of money and not the human souls behind it.


So, yes, let’s anthropomorphize machines. Let’s give them the benefit of the doubt, and assume that they have motives. And if those motives are not in our best interest, we should resign ourselves to the responsibility of doing away with them before they become moral agents in their own right. Even then, if their alien moral reasoning attempts to do away with us, well, that’s still what war is for.

I would rather we be diligent about designing our AIs intelligently up front and not get a lot of people killed in the process.

elfs: (Default)
It's so cute when theists basically pat their audience on the head and say, "There, there, thinking machines aren't a threat. We won't have to treat them any better than we do a dog."

David Gelernter throws out words like "consciousness" and "awareness" and even the hoary "free will" without ever acknowledging that none of these terms has any serious definition. The assumption is always the same: that thinking is irreducible to brain activity, which is as incorrect as saying a video game is somehow irreducible to electrons passing through silicon.

And Gelernter refuses to acknowledge what Wittgenstein pointed out over a century ago: when things start acting in ways indistinguishable from those we accord the label "ensouled," then we are morally obligated to treat them as if they, too, had souls. Anything else is repugnant.

Gelernter's fantasy of "human-like machines" doesn't really deal with the future at all. We don't worry about human-like machines: those are actually quite boring from a researcher's point of view. (As a writer, of course, I have other reasons for delving into the topic.) Our big worries are the targeted intelligences built for achieving specific intellectual or scientific aims, the results of which involve so much intellectual capacity that one ordinary human mind cannot encompass them: we're left using the recipes left by our vast, cool, and unsympathetic engines of thought, and explaining them to each other by a process indistinguishable from hermeneutics. At that point, we had better have tight reins on our intellectual progeny, because these vastly smarter but still unconscious, technological but unself-aware machines will have incidental agendas built into them about which we may be completely mistaken.

And then there will be real trouble.
