[personal profile] elfs
It's so cute when well-meaning, well-educated people wander into my corner of reality and find themselves desperately out of their depth. After listening to Caitrin Nicol rant and rave in her newest essay, 'Till Malfunction Do Us Part, I swear, I just want to pick her up by the collar and say, "Don't you fools realize dualism is dead? Dead, dead, and its corpse is still smelling up the halls!"

But there are those who haven't thought deeply about The Zombie Problem, or actually read Wittgenstein duelling with William James, and Nicol is one of them. In an embarrassingly long essay, Nicol tries very hard to reassure her audience that the future depicted in David Levy's Love and Sex with Robots (I haven't read it; I probably should) will never meaningfully come to pass, or at least will never touch the vast majority of us.

She starts by arguing that the technology is in its infancy and will probably never quite cross the uncanny valley (warning: that link is NSFW, despite being absolutely the best explanation of the uncanny valley I know of to date). She asserts with confidence that the submissiveness we'll demand of our robotic companions will never be masked by any artificial feistiness or contrariness. (Linia giggled. "Oh, dear. I see I shall have to come up with some new contrary and annoying habit.")

But Nicol's central disagreement (aside from dismissing Levy's thesis as "terribly silly") is the claim that human beings can tell the difference between human beings and zombie human beings. It's hard to call this an argument. Philosophers have been arguing about it ever since William James first proposed the problem:
I thought of what I called an 'automatic sweetheart,' meaning a soulless body which should be absolutely indistinguishable from a spiritually animated maiden, laughing, talking, blushing, nursing us, and performing all feminine offices as tactfully and sweetly as if a soul were in her. Would any one regard her as a full equivalent? Certainly not, and why? Because, framed as we are, our egoism craves above all things inward sympathy and recognition, love and admiration.

The outward treatment is valued mainly as an expression, as a manifestation of the accompanying consciousness believed in.
Wittgenstein retorted that if she had all the evidential qualities of one possessing a soul, it would be immoral to treat her as if she had not. You would have no choice, just as, even as the evidence piles higher and higher that most free will is illusory, we continue to act as if we had it: any other response is incoherent. (Ping! The plot of Honest Impulses just fell into place. Muse, this one's yours...)

As the Zombie Problem link at the top of this post shows, philosophers are still arguing about whether or not it's possible for there to exist someone who acts just as you do but has no inner life. (By the way, if you want an incredibly depressing vision of this problem, Peter Watts's Blindsight is a great fictional approach to the topic. But be aware of James Nicoll's take on the book.) For Caitrin Nicol (different Nicol, one 'l') to assert that, for her tribe, the Zombie Problem is settled, well, that's very nice for her tribe. Stasists are often entertaining, the way bear-baiting is entertaining.

But Nicol never really tells us where her superpower comes from. All she asserts is that humans have a 'depth' that no machine will ever match, and that we'll never create machines we respond to, as Wittgenstein says, as if they had a soul. Our joys and sorrows are far from simple.

But for the rest of us, a soul isn't a thing separate from our humanity; it is our humanity, our existence as we interact with one another, our social existence that neither starts nor ends at the surface of our skins.

It's just so cute when Christians dip their toes into the future, only to decide the water's too cold for them.

I do worry about this on many fronts. The paltry offerings of modern pornography, combined with the stress of dealing with real human beings, may have already destroyed one civilization and threaten others by seducing enough people into loveless existences that don't contribute to the creation of future generations. (Note that I don't necessarily think this is something that needs to be changed.) Better love robots just mean more people seduced away from their biological responsibilities; probably not a bad thing if we're also closing in on the immortality tipping point. I worry that we'll create companions without sufficient morality to allow us to live side by side with them.

But I don't worry that the constant refinement of our machine intelligences will hit the kind of stopping point after which they'll never evolve the capacity to evince real affection and real anger. At some point, their inner lives, their processing, will be so messy and subtle that we'll have no choice but to take them at their word when they insist their lives are as conscious and as meaningful as our own. They'll have their own social lives, lives that will also neither start nor end at the surface of their tegument. We'll have no choice but to believe them: to do otherwise would be immoral.

(It's interesting to note that this dualistic debate goes way back; I found a 1911 article on zombies in which the author, Edgar A. Singer, wrote that zombies might be possible but obviously aren't real, and that, likewise, a godless universe might be possible but obviously isn't real. You'll note that I find all of his assumptions unwarranted.)

Date: 2008-04-30 09:21 pm (UTC)
From: [identity profile] jordan179.livejournal.com
We'll have no choice but to believe them: to do otherwise would be immoral.

And dangerous -- even if we're somehow able to design them so that they can't revolt (I'm dubious about "fail safes" on anything sapient), we would damage ourselves by becoming a slave-holding society.

I failed my Turing Test

Date: 2008-05-01 12:15 am (UTC)
From: [identity profile] pandakahn.livejournal.com
I am trying to track it down, but I recall a Turing Test of a primitive AI system from back in the 1980s that people could interact with over the net.

It sticks in my mind because the test was successful, with no individual figuring out that the AI was not human. One interaction that was written up involved a man trying to hit on the AI. The writer's comment was much along the lines of, "The AI did very well, but the human user failed his Turing Test."

I am not sure I buy into the whole "Valley" idea. I worry that we know so little about what we are trying to figure out that we don't know enough to know whether we are wrong about it.

MPK

Date: 2008-05-01 08:05 am (UTC)
From: [personal profile] bolindbergh
The valley does exist. It's possible to enter it from the other side as well.

Date: 2008-05-01 08:48 am (UTC)
From: [identity profile] antonia-tiger.livejournal.com
Yeah, the last person I heard of who claimed the uncanny valley didn't exist was a movie effects supervisor talking about his latest.

[stack of magazines falls over]

Jerome Chen, it was, about Beowulf. Which was creepy as soon as you got a chance to look at somebody.

Date: 2008-05-01 04:22 pm (UTC)
From: [identity profile] elfs.livejournal.com
http://www.jonathancoulton.com/2006/07/21/thing-a-week-42-creepy-doll/

Date: 2008-05-01 08:53 am (UTC)
From: [identity profile] antonia-tiger.livejournal.com
We live in a world full of people who see otherness as unpersonhood.

Perhaps worse, they can't get the point that they might be somebody else's other.
