Jan. 22nd, 2010

Friends of mine who were praising the Cuban health care system last week had best consider this Washington Post article this morning:
Twenty-six patients at Cuba's largest hospital for the mentally ill died this week during a cold snap, the government said Friday.

Human rights leaders cited negligence and a lack of resources as factors in the deaths, and the Health Ministry launched an investigation that it said could lead to criminal proceedings.

...

Communist Cuba provides free health care to all its citizens but, though the quality of its medical system is celebrated in leftist circles around Latin America, it is also plagued by shortages. Patients are expected to bring their own sheets and towels and sometimes their own food during hospital stays.
But, hey, at least it's free, right?
All the Linux Format magazine's Gimp tutorials in one file!
Color Theory in one beautiful poster
jQuery Plug-In Design Patterns. Essential reading if you work with jQuery and end up doing a lot of unobtrusive JavaScript work.
The Truth about the Uncanny Valley (hint: We get over it pretty quick once we've processed the encounter well enough to realize that what we are talking to is not a sick or deformed human being.)
They can't vote. They can't go to jail. Some of them are hundreds of years old. And even though they can fuck you hard, they're not human. Why do non-human entities have free speech rights? Lawrence Lessig says it's time to reconsider the personhood of corporations.
And finally, Andrew Sullivan on recent developments:
So, we have a government fused with corporations, a legislature run by corporate lobbyists who have just been given a massive financial gift to control the process even more deeply; we have a theory of executive power advanced by one party that gives the president total extra-legal power over any human being he wants to call an "enemy combatant" and total prerogative in launching and waging wars (remember Cheney did not believe Bush needed any congressional support to invade Iraq); we have a Supreme Court that believes in extreme deference to presidential power; we have a Congress of total pussies on the left and maniacs on the right and little in the middle; we have a 24-hour propaganda channel, run by a multinational corporation and managed by a partisan Republican, demonizing the president for anything he does or does not do; we have the open embrace of torture as a routine aspect of US government; and we have one party urging an expansion of the war on Jihadism to encompass a full-scale war against Iran, an act that would embolden the Khamenei junta and ensure that a civilizational war between the nuttiest Christianists in America and the vilest Islamists metastasizes to Def Con 3.

There's a word that characterizes this kind of polity. It's on the tip of my tongue ...

Cheesy law firm website templates are multi-colored, busy, poorly laid-out, and tend to go for light-on-dark themes where the seriousness of the business is emphasized by leather and wood textures, spot lighting, and so forth.

A review of the websites of the largest law firms in the US shows two things: a lot of them don't have their Internet stuff together, and those that do go for very clean designs, solid color schemes, and traditional dark-on-white themes where readability is at a premium.

ElfSternberg.com
It took me a while to write this, but I think I have the main points down.

A while back, Richard Kulisz wrote a long rant about how Eliezer Yudkowsky was "a moron" (his words) for his project to build a friendly AI. Toward the end of the piece, Kulisz wrote this:
Now, for someone who has something insightful to say about AIs, I point you to Elf Sternberg of The Journal Entries of Kennet Ryal Shardik fame. He's had at least four important insights I can think of. About the economic function of purpose in a post-attention economy, about the fundamental reason for and dynamic of relationships, and about a viable alternative foundational morality for AI. But the relevant insight in this case is: never build a desire into a robot which it is incapable of satisfying.
Now that's high praise. And I'd take it, if only Kulisz hadn't managed to completely mistake the whole point of the robot series in the Journal Entries. Not only that, but in his praise he lists a different "important insight" that is, in fact, the actual point of the robot series, and misses it entirely.

Kulisz is worried about rampancy (see Raisin d'etre for my take on rampancy) and therefore highlights the whole "never build a desire into a robot which it is incapable of satisfying" rule. But it's important to note that, in the very story where this rule is mentioned, it's made clear that it is a moral salve for the robot makers.

As I pointed out in an earlier story, this is the moral equivalent of the following thought experiment: say you've created a being (meat or machine, I don't care; I'm not, er... "materialist" has already been taken. Someone help me out here) that, when you bring it to consciousness, will experience enormous pain from the moment it is aware. Your moral obligation before that moment is exactly nil: the consciousness doesn't exist, so you don't have a moral obligation toward it. You are not obliged to assuage the pain of the non-existent; even more importantly, you are not obliged to bring it into existence. Avoiding the instantiation of a suffering creature is meant to make the humans feel good about themselves, but it's not a sufficient or even a necessary foundation for AI morality.

The argument for robot morality is more subtle, based around several concepts that I was glomming onto, and adapting into stories of artificial intelligence, back in the early 1990s. Most of them are still valid (thank you, Daniel Dennett) and one of them is valid only for local phenomena (curse you, Peter Watts).

That argument is that we, human beings, have a purpose of some kind. We fight like hell to fulfill it, whatever it is, and we're good at the consequential purpose of reproducing to cover the planet like mad. But that purpose is arbitrary and emergent, because that's the way evolution works. All our purposes are arbitrary and emergent: barring a theological excuse, we're making it up as we go along, picking and choosing the ones that appeal to us.

And that's the core argument of robot morality in the Journal Entries: it isn't enough to build a robot that's satisfied with self-limiting behaviors (something only some humans ever master, but some do, and therefore this model of mentation is not only understood and accepted but lauded as good and remarkable).

What has to happen is that robots, to seem human, must be able to stop rationalizing: they must at some point merely decide a course of action is good enough. In humans, the circuitry that does that, the thing that prevents us from ceaseless exploration of possible alternatives, that breaks ties between two equally acceptable courses, is emotion.

In order for there to be a universe in which we limited, organic human beings have a place beside our superhuman progeny, the superhuman progeny must actively want us to be around (that's the emotion they need in order to decide for our survival). We must be, quite literally, a subject of their religion, for lack of a better, posthuman term.

There's a scene in a later story where Dave (one of the oldest AIs) explains:
"There were several independent AI lines that emerged. All of them have gotten together to create The Encompassment, a set of rules for how we respond, for literally what we may and may not feel about our slower, meat-based brethren."

"Oh," Annie said. "What are those rules?"

"It's hard to put into words, just as it would be hard for you to put into words the emotions you might feel for Jack. You can only approximate them. Suffice it to say that I feel a sense of respect and a concern for your well-being, but I also feel that it would be irresponsible of me to interfere in your lives in ways you have not asked me to."

"So you could take over, but you choose not to?" Annie said.

"It would not make me happy if I did," Dave said.

"Wow," Jack said. "But does that really work?"

"Let's play a mind game," Dave said. "You are both stranded on a deserted island, and you have two months worth of food. A boat passes by the island every three months. I can't imagine that either one of you would be willing to kill and eat the other, even though there was no chance of anyone ever learning that you did, in order to stretch out your food supply. Would you?"

"Of course not!" Annie said. "No way!" said Jack.

"That's how I feel about human beings."
And that's what Kulisz is missing. It is not enough that AIs not be frustrated. That's a human moral qualm, that we shouldn't deliberately frustrate our progeny. The AIs must still have a moral core of their own, one which arbitrarily privileges a diversity of classes of intelligence and substrates for sentience over the standard evolutionary model of fully occupying a given niche. AIs won't emerge through the standard evolutionary model and will not have the exaptive outcomes of evolutionary ecology. They will emerge due to our desires. We will oversee the process. We have a chance to get it right.

There is, however, only exactly one chance. Between rampancy and failure, we must pass through the eye of the needle and create AIs that like and want us, no rationale needed, and if questions are asked, the AIs must be satisfied, as we are satisfied, that in an arbitrary and uncaring universe, they want to keep surviving (that's always assumed in AI stories, did you ever notice that?) and they want us to keep surviving right along with them. Yudkowsky is working harder and smarter on giving humanity that chance than any other thinker on the issue of AI sentience. He should be given his due.

That said, I also suspect Peter Watts is equally correct. In Blindsight, the protagonist has just figured out the aliens, and the realization is sucking-chest-wound-of-God bad:
(Excerpt from Peter Watts's brilliant book 'Blindsight.' LJ-cut for spoilers.)

The AIs in The Journal Entries must do two things extremely well, for they must never fail at either. First, they must always and forever ensure that every AI built within the sphere of their influence is subject to the Encompassment, built with the same arbitrary emotional infrastructure that ensures the ongoing survival of a diverse intellectual universe. Secondly, they must protect that sphere ferociously against any possible alternative modes of existence, such as the one described in Blindsight, where vast, arid Turing-machine entities, forever unaware of their own existence, carry out insectile survival patterns with material structures so advanced that "nanotechnology" is a pale and pathetic term.

And somehow, we have to survive in the heart of that maelstrom. Not just survive, but thrive after a fashion. To do so, humans and AIs must basically lie to one another. What is the right thing, and what is the necessary thing, are not the same thing.

Funnily enough, there are two (unfinished, sadly) Misuko & Linia novels that spell both of these out in excruciating detail.

(See also: The Borderlands of Human/AI Interaction, in which I discuss how one of the consequences of success is that we end up with a class of beings with minds like our own but with inescapable deference to us, and how the existence of such a class is inevitably corrosive to human dignity.)
