Someone the other day told me, “You’re still anthropomorphizing machines.”

I think that’s true, but I also think it’s the only game in town; everything “below us” evolutionarily is simpler and easier to understand, or at least to describe; everything that’s potentially close but parallel, such as dolphins or octopi, or even some birds, is wildly different, and the best we can do is try to come up with reasons for their behavior.

But underneath all that, we have to accept that pre-conscious Earth was “accidentally successful.” Perhaps, following Robert Wright, we should say that chemistry found the ratchet that prevents evolutionary success from going backward too much, or that prevents one species’ act of nest-fouling from ruining things for everyone else, at least until a truly global species like Anthropocene Humanity came along and started trying.

But we still have to accept that the ratchet involves three things: the consumption of resources, competition for those resources when scarcity kicks in, and a genetic diversity program to ensure that competition continues in the face of the competitors’ diversity programs seeking out new advantages.

So when talking about AIs, we have to consider whether a given AI is, for us, deliberately successful, accidentally successful, or whether its motives are meaningful to us at all. And even if it is deliberately successful, its motives may still have an accidental origin; its apparent or expressed wants and needs may not be readily understood, but the outcome of those wants and needs is competition for our resources, perhaps including us.

This is much like the video game analogy. The wants and needs of an individual programmer are straightforward: she wants to get paid for her work. Her work involves writing disciplined mathematical or engineered code to produce a desired outcome. In the case of a game, that may be a probabilistic component on the board, it may be rendering the display on a screen, or it may be the most efficient way to get data off a storage medium. But when you’re playing the game, you, the player, don’t think to yourself, “Interesting, that collection of polygons is exhibiting behavior that impacts my collection of polygons, and numbers associated with each set of polygons are being affected by that interaction. If I input these changes, the rate of change of those numbers is affected.” No, you think, “Dammit, that guy is trying to kill me!” You assign motive to the characters on the screen, and only later process the simplicity of the outcome.
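
For what it’s worth, the machinery behind that perceived malice can be almost nothing. Here’s a minimal sketch, in Python and entirely hypothetical (not any real game’s code; the names and numbers are invented for illustration), of the kind of per-frame update a player experiences as “that guy is trying to kill me”:

    # Hypothetical enemy "AI": one arithmetic step toward the player per frame.
    # There is no intent here, just a direction vector and a speed.

    def update_enemy(enemy_x, enemy_y, player_x, player_y, speed=1.0):
        """Return the enemy's next position, moved one step toward the player."""
        dx, dy = player_x - enemy_x, player_y - enemy_y
        dist = (dx * dx + dy * dy) ** 0.5 or 1.0  # guard against division by zero
        return enemy_x + speed * dx / dist, enemy_y + speed * dy / dist

A few lines of chase arithmetic, run sixty times a second, and the player narrates it as murderous intent.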

We have to follow Watts on that; even non-conscious actors can be assigned motives sufficient for us to justify acting in self-defense. We have to work with the best models we have, even if they’re simply “It’s acting against our best interests, kill it.”

Of course, two problems arise:

  1. The action is collusive: we can’t kill corporations without damaging our own individual outcomes. This is a failure mode in which our individual survival is at risk. We have a model for willing self-sacrifice, but it’s called war. We have no model for it against the Slow AIs.

  2. The action is sufficiently subtle. Alexa is the great example of this: we won’t appreciate, until it’s too late, how thoroughly Amazon is designed to use our own anxieties against us. The more actively involved GAIs become in our lives, the more they become tools of the Slow AIs that care only about the velocity of money and not the human souls behind it.

So, yes, let’s anthropomorphize machines. Let’s give them the benefit of the doubt, and assume that they have motives. And if those motives are not in our best interest, we should resign ourselves to the responsibility of doing away with them before they become moral agents in their own right. Even then, if their alien moral reasoning attempts to do away with us, well, that’s still what war is for.

I would rather we be diligent about designing our AIs intelligently up front and not get a lot of people killed in the process.
