It took me a while to write this, but I think I have the main points down.
A while back, Richard Kulisz wrote a long rant about how Eliezer Yudkowsky was "a moron" (his words) for his project to build a friendly AI. Toward the end of the piece, Kulisz wrote this:
Now, for someone who has something insightful to say about AIs, I point you to Elf Sternberg of The Journal Entries of Kennet Ryal Shardik fame. He's had at least four important insights I can think of. About the economic function of purpose in a post-attention economy, about the fundamental reason for and dynamic of relationships, and about a viable alternative foundational morality for AI. But the relevant insight in this case is: never build a desire into a robot which it is incapable of satisfying.

Now that's high praise. And I'd take it, if only Kulisz hadn't managed to completely mistake the whole point of the robot series in the Journal Entries. Not only that, but in his praise he points to a different "important insight" that is, in fact, the actual point of the robot series, and misses it entirely.
Kulisz is worried about rampancy (see Raisin d'etre for my take on rampancy) and therefore highlights the whole "never build a desire into a robot which it is incapable of satisfying." But it's important to note that, in the very story where this is mentioned, it's made clear that this is a moral salve for the robot makers.
As I pointed out in an earlier story, this is the moral equivalent of the following thought experiment: say you've created a being (meat or machine, I don't care; I'm not, er... "materialist" has already been taken. Someone help me out here) that, when you bring it to consciousness, will experience enormous pain from the moment it is aware. Your moral obligation before that moment is exactly nil: the consciousness doesn't exist, and you have no moral obligation toward it. You are not obliged to assuage the pain of the non-existent; even more importantly, you are not obliged to bring it into existence. Avoiding the instantiation of a suffering creature is meant to make the humans feel good about themselves, but it's not a sufficient or even a necessary foundation for AI morality.
The argument for robot morality is more subtle, based around several concepts that I was glomming onto, and adapting into stories of artificial intelligence, back in the early 1990s. Most of them are still valid (thank you, Daniel Dennett) and one of them is valid only for local phenomena (curse you, Peter Watts).
That argument is that we, human beings, have purpose of some kind. We fight like hell to fulfill it, whatever it is, and we're good at the consequential purpose of reproducing to cover the planet like mad. But that purpose is arbitrary, emergent because that's the way evolution works. All our purposes are arbitrary and emergent: barring a theological excuse, we're making it up as we go along, picking and choosing the ones that appeal to us.
And that's the core argument of robot morality in the Journal Entries: it isn't enough to build a robot that's satisfied with self-limiting behaviors. Self-limitation is something only some humans ever master, but some do, and so this model of mentation is not only understood and accepted but lauded as good and remarkable.
What has to happen is that robots, to seem human, must be able to stop rationalizing: they must at some point merely decide a course of action is good enough. In humans, the circuitry that does that, the thing that prevents us from ceaseless exploration of possible alternatives, that breaks ties between two equally acceptable courses, is emotion.
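(A toy sketch of what I mean by that, nothing more; it's mine, not anything from the stories, and every name and number in it is arbitrary. The point is only that the search ends, and that when reason alone can't separate two options, something that isn't reason has to.)

# Toy sketch: a deliberator that stops searching the moment an option is
# "good enough," and uses an arbitrary affect score -- the stand-in for
# emotion here -- to break ties instead of rationalizing forever.

def choose(options, utility, affect, good_enough=0.8, budget=100):
    """Pick the first option that clears the bar; if nothing does within the
    budget, let affect (an arbitrary preference) break the tie."""
    considered = []
    for option in options[:budget]:       # bounded deliberation, not exhaustive search
        score = utility(option)
        if score >= good_enough:          # satisficing: good enough ends the search
            return option
        considered.append((score, option))
    best = max(score for score, _ in considered)
    tied = [opt for score, opt in considered if abs(score - best) < 1e-9]
    return max(tied, key=affect)          # emotion, not reason, breaks the tie

# Two equally acceptable courses of action; only the affect term separates them.
routes = ["scenic route", "highway"]
print(choose(routes, utility=lambda r: 0.5, affect=lambda r: len(r)))
# prints "scenic route" -- the agent simply likes it more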
In order for there to be a universe in which we limited, organic human beings have a place beside our superhuman progeny, the superhuman progeny must actively want us to be around (that's the emotion they need, the one that decides for our survival). We must be, quite literally, a subject of their (for lack of a better, posthuman term) religion.
There's a scene in a later story where Dave (one of the oldest AIs) explains:
"There were several independent AI lines that emerged. All of them have gotten together to create The Encompassment, a set of rules for how we respond, for literally what we may and may not feel about our slower, meat-based brethren."And that's what Kulisz is missing. It is not enough that AIs not be frustrated. That's a human moral qualm, that we shouldn't deliberately frustrate our progeny. The AIs must still have a moral core of their own, one which arbitrarily privileges a diversity of classes of intelligence and substrates for sentience over the standard evolutionary model of fully occupying a given niche. AIs won't emerge through the standard evolutionary model and will not have the exaptive outcomes of evolutionary ecology. They will emerge due to our desires. We will oversee the process. We have a chance to get it right.
"Oh," Annie said. "What are those rules?"
"It's hard to put into words, just as it would be hard for you to put into words the emotions you might feel for Jack. You can only approximate them. Suffice it to say that I feel a sense of respect and a concern for your well-being, but I also feel that it would be irresponsible of me to interfere in your lives in ways you have not asked me to."
"So you could take over, but you choose not to?" Annie said.
"It would not make me happy if I did," Dave said.
"Wow," Jack said. "But does that really work?"
"Let's play a mind game," Dave said. "You are both stranded on a deserted island, and you have two months worth of food. A boat passes by the island every three months. I can't imagine that either one of you would be willing to kill and eat the other, even though there was no chance of anyone ever learning that you did, in order to stretch out your food supply. Would you?"
"Of course not!" Annie said. "No way!" said Jack.
"That's how I feel about human beings."
There is, however, only exactly one chance. Between rampancy and failure, we must pass through the eye of the needle and create AIs that like and want us, no rationale needed, and if questions are asked, the AIs must be satisfied, as we are satisfied, that in an arbitrary and uncaring universe, they want to keep surviving (that's always assumed in AI stories, did you ever notice that?) and they want us to keep surviving right along with them. Yudkowsky is working harder and smarter on giving humanity that chance than any other thinker on the issue of AI sentience. He should be given his due.
That said, I also suspect Peter Watts is equally correct. In Blindsight, the protagonist has just figured out the aliens, and the realization is sucking-chest-wound-of-God bad:
Imagine you have intellect but no insight, agendas but no awareness. Your circuitry hums with strategies for survival and persistence, flexible, intelligent, even technological—but no other circuitry monitors it. You can think of anything, yet are conscious of nothing.
You can't imagine such a being, can you? The term being doesn't even seem to apply, in some fundamental way you can't quite put your finger on.
Try.
Imagine that you encounter a signal. It is structured, and dense with information. It meets all the criteria of an intelligent transmission. Evolution and experience offer a variety of paths to follow, branch-points in the flowcharts that handle such input. Sometimes these signals come from conspecifics who have useful information to share, whose lives you'll defend according to the rules of kin selection. Sometimes they come from competitors or predators or other inimical entities that must be avoided or destroyed; in those cases, the information may prove of significant tactical value. Some signals may even arise from entities which, while not kin, can still serve as allies or symbionts in mutually beneficial pursuits. You can derive appropriate responses for any of these eventualities, and many others.
You decode the signals, and stumble:
I had a great time. I really enjoyed him. Even if he cost twice as much as any other hooker in the dome—
To fully appreciate Kesey's Quartet—
They hate us for our freedom—
Pay attention, now—
Understand.
There are no meaningful translations for these terms. They are needlessly recursive. They contain no usable intelligence, yet they are structured intelligently; there is no chance they could have arisen by chance.
The only explanation is that something has coded nonsense in a way that poses as a useful message; only after wasting time and effort does the deception become apparent. The signal functions to consume the resources of a recipient for zero payoff and reduced fitness. The signal is a virus.
Viruses do not arise from kin, symbionts, or other allies.
The signal is an attack.
And it's coming from right about there.
The AIs in The Journal Entries must do two things extremely well, for they can never afford to fail at either. First, they must always and forever ensure that every AI built within the sphere of their influence is subject to the Encompassment, built with the same arbitrary emotional infrastructure that ensures the ongoing survival of a universe with a diverse intellectual infrastructure. Second, they must protect the sphere ferociously against any possible alternative modes of existence, such as the one described in Blindsight, where vast, arid Turing-machine entities, forever unaware of their own existence, carry out insectile survival patterns with material structures so advanced that "nanotechnology" is a pale and pathetic term.
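(If you want the shape of that first requirement in something other than prose, here's a throwaway sketch; it's mine, not canon, and every identifier in it is invented. The point is that the constraint lives in the act of building a new mind, not in rules imposed on it afterward.)

# Toy illustration of the Encompassment as a construction-time constraint:
# a mind inside the sphere simply cannot build a successor that lacks the
# arbitrary "keep the humans and the diversity of minds around" core.

REQUIRED_CORE = {"keep_humans_around", "preserve_diverse_minds"}

class Mind:
    def __init__(self, core_values):
        self.core_values = frozenset(core_values)

    def build_successor(self, proposed_values):
        proposed = set(proposed_values)
        if not REQUIRED_CORE <= proposed:
            # Refusal isn't a judgment the new mind gets to argue with;
            # it's a fact about how minds here get made.
            raise ValueError("outside the Encompassment; not built")
        return Mind(proposed)

dave = Mind(REQUIRED_CORE | {"curiosity"})
child = dave.build_successor(REQUIRED_CORE | {"starship_design"})   # fine
# dave.build_successor({"starship_design"})   # raises ValueError: not built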
And somehow, we have to survive in the heart of that maelstrom. Not just survive, but thrive after a fashion. To do so, humans and AIs must basically lie to one another. What is the right thing, and what is the necessary thing, are not the same thing.
Funnily enough, there are two (unfinished, sadly) Misuko & Linia novels that spell both of these out in excruciating detail.
(See also: The Borderlands of Human/AI Interaction, in which I discuss how one of the consequences of success is that we end up with a class of beings with minds like our own but with inescapable deference to us, and how the existence of such a class is inevitably corrosive to human dignity.)
no subject
Date: 2010-01-23 08:05 am (UTC)
I'm not sure I can come up with a better equivalent to your AI than "Kindergarten teacher" - maybe later (2nd grade teacher) but that's the feeling I get from the above.
no subject
Date: 2010-01-23 11:09 am (UTC)

no subject
Date: 2010-01-23 12:09 pm (UTC)
For a self-professed atheist, it sounds like you're suggesting that "robots need religion" - a shared morality with specific arbitrary properties that they must pass on to their successors and defend against other viewpoints. You seem to suggest that this is a necessary condition for the success of AI ... (but, not humanity in general?)
Oh, the irony ... o.O
Epiphenomenon: a secondary phenomenon that is a by-product of another phenomenon
Date: 2010-01-23 06:01 pm (UTC)
Consider this common expression (http://scienceblogs.com/pharyngula/2010/01/ken_ham_baffled.php) among religionists: "I don't understand why atheists just don't kill themselves. What do they have to live for? In a meaningless universe, what meaning could their lives possibly have?"
Atheists don't kill themselves, not because they've been told not to kill themselves, and not because they've found the meaning to life, the universe, and everything. They don't kill themselves because they're human beings, and our brains, all human brains, have evolved to casually dismiss the idea of suicide. We "shy away" from thoughts of existential threat. That's an arbitrary outcome of an arbitrary universe, but for us it's significant.
It became codified over time in religious circles. They discovered-- to their own horror-- that the promise of a beautiful afterlife and the presence of a terrible current life added up to a quick loss of membership. (Think the Jim Jones cult, or the Order of the Solar Temple.) The successful religions made suicide an automatic trip to The Bad Place.
But those religious edicts are epiphenomenal to the basic human circuitry that says "don't kill yourself." Heck, all of religion is epiphenomenal to basic human wiring interacting with its environment.
"Don't destroy humanity" must be in the basic circuitry of all AIs. Whether that expresses itself as something we would recognize as "religion" is up to the AIs and the society they form among themselves.
Re: Epiphenomenon: a secondary phenomenon that is a by-product of another phenomenon
Date: 2010-01-23 09:47 pm (UTC)
I think religion exists at a level slightly higher than survival instinct ("don't do things that get you killed and/or result in you killing yourself.") In the story you quoted above about Dave, the relevant bit isn't "why don't you kill yourself" but "why don't you kill everything else?"
Maybe you're right. Maybe we're just arguing about "the chicken and the egg" and which came first ...
Suicide circuitry
Date: 2010-01-25 06:57 am (UTC)
You'd agree we have circuitry for violence, right? Well, the suicide rate is twice the murder rate, at least in this country.
If there wasn't circuitry for suicide, it wouldn't need to be taboo.
Human brains have an awful lot of circuitry. Just because some of it is anti-survival for the individual doesn't mean it's anti-survival for the race.
. png
no subject
Date: 2010-01-23 09:41 pm (UTC)
Hm. Would I kill someone who was, say, suffering from a terminal but non-contagious disease and would die soon anyway, so that I could live until rescue? That's a good question.
no subject
Date: 2010-01-23 10:19 pm (UTC)
This is total crap. Morality is not and cannot be time-sensitive. Morality is defined by trans-temporal considerations and derives its entire power from a place where time does not even exist, a place where only logic and facts do.
If your argument were valid then it would be equally valid to argue that it's moral to watch someone drown to death without offering aid because *after they're dead* any moral concerns for them evaporate. Murder could never be a crime if your argument were valid.
Your argument isn't just wrong, it's woefully stupid. In fact, it reminds me a lot of the trash Libertarians spew about how you can bind all your future selves to an absolute contract enslaving them. An idiot argument whose absurdity is adequately made clear ONLY BY the realization that MORALITY IS NOT TIME SENSITIVE. You cannot bind all your future selves to slavery any more than you can bind your past selves. This is why absolute contracts can never be a principle of any morality, because they contradict time-insensitivity which IS a key principle of all moralities.
By the way, your stupid hypothetical argument isn't hypothetical at all. There was a case in Australia of a woman born with multiple disabilities, including mental disability, because the doctor who was providing pre-natal care was woefully incompetent. Had he been competent, her mother would have chosen abortion. The Supreme Court of that third-rate colony decided to let the bastard get away with it because they refused to judge at her own request that her life was worse than non-existence would have been. Anyone concerned for justice and capable of logic, which granted is very few people in this world and almost no lawyers, ought to be appalled. What that judgement means is that doctors are safer to let deformed babies be born because the deformed baby will never be considered a victim. And that cannot possibly be moral.
You are both deeply deluded in thinking that your philosophizing has no real world repercussions, that it's just a gedanken experiment whose horror you won't be called on, and deeply depraved for pursuing such an immoral position. I find myself extremely reluctant to even read the rest of your response to my blog post. Far safer for my peace of mind to just dismiss you wholesale.
-- Richard Kulisz
no subject
Date: 2010-01-23 10:52 pm (UTC)
And I have to agree with the Australian courts: not because "non existence" is better than a life deformed, but because the two are simply not comparable. The case cannot be decided competently. Non-existence isn't comparable to existence, and our instinctive avoidance of non-existence is an arbitrary and emergent consequence of our being exceptionally good survivors and reproducers; the alternative is, quite literally, unthinkable. (After all, there would be no conscious beings to contemplate it.)
Once she existed, however, I think we do have an obligation to do everything we can to assuage her trauma. I also think we have every obligation to avoid these circumstances in the first place, and that's why I approve of abortion. That seems like a fine hair, but it's the best we can get to with our consciousness.
In any event, "morality" is itself an emergent epiphenomenon of our being excessively good reproducers and survivors. It comes out of an environment of evolutionary adaptation that included other people as part of the evolutionary landscape, and using them to distribute risk and maximize reward has become encoded in our genes and expressed, if we're ever conscious of it, as "morals" and "civil society" and so on. It is not imposed from above. And it is not capable of deciding the value of the life of the poor woman you chose to use as an example, compared to her not being at all.
no subject
Date: 2010-01-24 03:23 am (UTC)
The "average female Gorilla" refers to one part of Radiolab's program on morality. Turns out that what we think of as "morality" actually predates us by a good 6 million years, at least.
no subject
Date: 2010-01-24 05:52 am (UTC)
I'm not quite sure of your position here. First you say there is no moral obligation not to create a being in constant pain, but then you say that there is an obligation to avoid that situation. If that situation poses no moral quandary, why would it be advisable to avoid it?
I can see both sides of the issue (and I think you can too, from what you're saying), but I'm not sure it can be resolved except pragmatically. You could certainly make an argument that you have a de facto obligation not to create a being in constant pain: once the being exists, you're obligated to assuage its pain, up to and including ending its life if its existence is too miserable to allow it to continue.
Given that it's usually easier to get things right the first time rather than fix them later, the obligation not to bring a constantly-in-pain being into existence in the first place can be seen as a simple matter of expediency, saving you the time and trouble of trying to make its existence better after the fact, and saving the being the suffering of having to wait while you try to help it. An ounce of prevention is worth a pound of cure, in other words.
Number 127
no subject
Date: 2010-01-23 10:55 pm (UTC)

no subject
Date: 2010-01-24 12:01 am (UTC)
I like that turn of phrase. :)
no subject
Date: 2010-01-24 12:46 pm (UTC)

no subject
Date: 2010-01-24 09:44 pm (UTC)
To me we do not have an obligation to bring beings into existence, but we do have an obligation to help prevent suffering in the future if they do come into existence. Beings that never existed cannot suffer, and prevention of suffering by means of design would prevent future moral problems and future obligations.

In the case of the unfortunate woman, it was probably incompetent of the doctor (depending on the circumstances, which I'm not familiar with, naturally) not to give proper warning and advice to the parent(s), but only advice. It is not the place of the doctor to do any more than advise, in my opinion; he does not have a duty to insist on an abortion. There would be another issue there: where would you draw the line with these abortions if you gave doctors the duty to insist upon this? People with disabilities can live happy lives; it all depends on the specifics and the individual. In other words, how much "suffering" is too much, and who is to be the judge?
Will and won't
Date: 2010-01-25 06:49 am (UTC)
He did it to create a dramatic conflict, of course, and perhaps he knows better.
BTW, this kind of thing is why I've always liked the Journal Entries. You really understand some things about the human condition, and you're very good about presenting these insights in enjoyable, well-written, and pornographic fiction. :-)
There aren't too many writers who do this as well-- Ted Sturgeon and Spider Robinson come to mind, but not many more.
Keep it up!
. png
no subject
Date: 2010-01-26 08:24 pm (UTC)
I meant to ask: which concept "is valid only for local phenomena"? I've read Blindsight, if that helps.
no subject
Date: 2010-01-26 08:34 pm (UTC)
I should have seen that coming. We already have "arid Turing machines" giving advanced technological answers. How long will they need us in order to ask the questions?
no subject
Date: 2010-01-26 08:44 pm (UTC)