
There are few things more humbling than watching someone else work his way through the issues I’ve been touching on in my stories, issues I was only vaguely aware I was handling badly, and seeing a dogged intellectual treatment come out of his effort.

Eliezer Yudkowsky once expressed his admiration that I’d managed to work out the problems with having superintelligences around and had come up with the concept of Friendly AI almost independently.  One of the things that has long bothered me about the Journal Entries, though, is that, since I write erotica in a setting of superintelligent AIs, there are only so many crises I can write about before I start to wonder if I’m just repeating myself.

Eliezer has supplied the answer: the Theory of Fun.  If you’re a science fiction writer working with transhumanist themes, the Theory of Fun is absolutely critical reading: “How much Fun is there in the universe?”  “Can we escape the human tendency to return to our baseline persona in an environment of accelerating abundance?”  “Fun involves engaging the senses, not abstracting them away.”  “If we’re free to optimize, do we provide an environment where one bad choice dooms you to an existence of misery?”

Probably the most telling chapter for me is Amputation of Destiny.  Without massive handwaving, the presence of superintelligence reduces the ordinary characters about whom I write to little more than sideline figures carried along by the flow of history.  Part of the reason I’ve had trouble writing this past year, and have considered falling back to fantasy and contemporary fiction, is that I haven’t figured out how to write convincingly about characters who can actually change their futures while living with AIs that pretty much see all and know all.

Eliezer has a heavy dislike of catboys, or any apparently conscious creature that has a more limited capacity for choice than human beings, calling it a “step in the wrong direction.”  That may be, given his goals, but I don’t think it addresses adequately the instantiation problem: what are our moral obligations to beings that don’t exist?  As long as they’re only a potential, even an immanent potential, we don’t have a moral obligation to them: they don’t have agency.  Instantiating a limited potential, even a conscious potential, with a more limited capacity than our own, is a morally neutral activity, so I don’t have a problem with the whole sex-robot (or even sex meat machine) aspect.

Other chapters worth reading include Eutopia is Scary, in which he challenges creative people to imagine not a world in which your ideas worked out and the future is like the present but more so, but a future that is actually, genuinely as different from today as today is from the past, and Higher Purpose, which reads a lot like a direct challenge both to the “Purpose culture” of the Journal Entries and to all posthuman writers: after having solved all of the major challenges facing humanity, what “purpose” would we adopt next?

I don’t recall whether Eliezer ever addresses the simple statement that our “mechanical purpose” is to survive and reproduce: without that, we cease to exist, and both controlling and supplanting that purpose are issues we need to deal with if we’re to avoid poisoning our own pond before we escape it.  The “Purpose culture” of the Journal Entries proposes that that “mechanical purpose” can be arbitrarily replaced with any other mechanical purpose without a change in one’s individual moral status (although it’s very possible to wind up trapped in a “devil’s offer”), and, again, that instantiating a being with a different purpose from the default is a morally neutral act (for the individual; it might be a very different story for those around the individual).  Eliezer’s take on this declaration is very different (see “Can’t Unbirth a Child”), but he has an equally arbitrary idea of “a life worth leading.”  (It reads remarkably like Eliezer’s own life.)

Still, this is an incredibly well-thought-out set of ideas about living next door to the godlike AIs of love, and it is well worth reading and mining for deep ideas about the future.

This entry was automatically cross-posted from Elf's writing journal, Pendorwright.com. Feel free to comment on either LiveJournal or Pendorwright.

Date: 2009-01-26 03:07 am (UTC)
From: [identity profile] mikstera.livejournal.com
"That may be, given his goals, but I don’t think it addresses adequately the instantiation problem: what are our moral obligations to beings that don’t exist? As long as they’re only a potential, even an immanent potential, we don’t have a moral obligation to them: they don’t have agency. Instantiating a limited potential, even a conscious potential, with a more limited capacity than our own, is a morally neutral activity, so I don’t have a problem with the whole sex-robot (or even sex meat machine) aspect."

Could you expand on this, please?

It seems to me that if you intentionally create a consciousness with limited capacity, that is not a morally neutral act, though I confess I have a hard time articulating exactly why I hold that position. I keep imagining the creation of humans who have pruning hooks instead of hands, so that you can use them in your fields to harvest fruit, but they won't be able to do much of anything else.

I feel a parallel between the two, but I don't actually see it, if you get my meaning.

Date: 2009-01-26 03:40 pm (UTC)
From: (Anonymous)
I struggle with the ethics of instantiating a potential as well. On the one hand, I can appreciate (and even share) Elf's viewpoint that we have no moral obligation to something that does not yet exist -- how could we? Do we then have a moral obligation to all things that do not yet exist? That way lies madness (e.g., we must lobby against birth control on behalf of all the people who will never exist unless it's banned!).

On the other hand, I don't think that necessarily means the act of instantiation is morally neutral. I would have a strong objection to, say, creating a being who lived in constant, incurable agony; that doesn't sound morally neutral to me. I prefer a more pragmatic, utilitarianesque approach of "how well do I understand the likely consequences of this act, and would they lead to a world which is preferable to the current one?" That removes questions of agency while still providing a framework in which such evaluations can be made.

The question then becomes: does instantiating a limited potential lead to a better world? That's a complicated question with a lot of side issues (e.g., would the resources devoted to its care be better used on something else?). It also requires us to consider whether such a being suffers because of its limitations. That, too, is a complicated issue, but I don't think the answer is an unequivocal "yes."

Anonymous Blog Reader #127

Date: 2009-01-26 05:06 pm (UTC)
From: (Anonymous)
One could argue (and I don't) that it would be OK to create a sentient being with pruning hooks for hands, if one also designed it such that pruning was all it ever wanted to do.

I guess I don't accept as moral or ethical the idea that one could create a sentient being with intentional limitations so as to serve the purposes of its creator. Maybe that's the key: If you create a sentient being, do you treat it as an equal? If not, then I suspect you are being unethical / immoral in instantiating that being in the first place.

Dammit, I thought I was already logged in...

Date: 2009-01-26 05:09 pm (UTC)
From: [identity profile] mikstera.livejournal.com
I'd like to hear more about Elf's concept of "Purpose", and something he's alluded to in the past, "The Purpose Wars".

Date: 2009-01-26 03:46 am (UTC)
From: [identity profile] resonant.livejournal.com
One thing to consider is that baseline humans have very limited sensory bandwidth. Charles Stross once estimated that the entire sensory input of Scotland could be carried by an optic fibre the diameter of your arm.
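
For what it's worth, a quick back-of-the-envelope sketch suggests that estimate is at least in the right ballpark. Every figure below is an assumption I've picked for illustration, not a number sourced from Stross:

```python
# Back-of-the-envelope check of the "Scotland's senses in one arm-thick fibre
# bundle" estimate. Every figure here is an assumption for illustration only.

population = 5.5e6        # approximate population of Scotland
per_person_bw = 1e8       # assume ~100 Mbit/s of total sensory input per person
scotland_sensory = population * per_person_bw        # bits per second

per_strand = 10e12        # assume ~10 Tbit/s per modern DWDM fibre strand
strands = 1000            # assume an arm-thick bundle holds ~1,000 strands
bundle_capacity = per_strand * strands                # bits per second

print(f"Scotland's sensory input:  {scotland_sensory:.1e} bit/s")
print(f"Arm-thick bundle capacity: {bundle_capacity:.1e} bit/s")
print("Estimate plausible:", bundle_capacity > scotland_sensory)
```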

There could be enough fun generated by a civilization to keep us regular humans from being bored, but uplifted folk and AIs could find it tedious. It'd be a useful plot device to explain why there are no transcendentally intelligent, all-knowing AIs in a story - existence is too painful once they start to process information too fast, and they end up killing themselves (often in dramatic and interesting ways).
