elfs: (Default)
[personal profile] elfs
It's so cute when theists basically pat their audience on the head and say, "There, there, thinking machines aren't a threat. We won't have to treat them any better than we do a dog."

David Gelernter throws out words like "consciousness" and "awareness" and even the hoary "free will" without ever acknowledging that none of these terms has a serious definition. The assumption is always the same: thinking is irreducible to brain activity, which is as incorrect as saying a video game is somehow irreducible to electrons passing through silicon.

And Gelernter refuses to acknowledge what Wittgenstein pointed out over half a century ago: when things start acting in ways indistinguishable from those we have accorded the label "ensouled," we are morally obligated to treat them as if they, too, had souls. Anything else is repugnant.

Gelernter's fantasy of "human-like machines" doesn't really deal with the future at all. We don't worry about human-like machines: those are actually quite boring from a researcher's point of view. (As a writer, of course, I have other reasons for delving into the topic.) Our big worries are targeted intelligences built for achieving specific intellectual or scientific aims, whose results involve so much intellectual capacity that no ordinary human mind can encompass them. We're left using the recipes handed down by our vast, cool, and unsympathetic engines of thought, and explaining them to each other by a process indistinguishable from hermeneutics. At that point, we had better have tight reins on our intellectual progeny, because these vastly smarter but still unconscious, technological but unself-aware machines will have incidental agendas built into them about which we may be completely mistaken.

And then there will be real trouble.

Date: 2010-08-03 10:10 am (UTC)
ext_58972: Mad! (Default)
From: [identity profile] autopope.livejournal.com
Ah ...

... Er ...

You haven't read my next novel yet, have you?

Date: 2010-08-03 11:33 am (UTC)
From: [identity profile] funos.livejournal.com
He does seem to say something similar to Wittgenstein, in that we would have a duty to treat seemingly intelligent machines with respect in order to avoid moral decay. That's all fine and good.

But then he asserts a slippery slope based on an unstated, unsupported claim that machine intelligence is impossible, drags in comparisons to PETA, and the whole thing falls apart before it even stands up the first time.

Date: 2010-08-03 03:50 pm (UTC)
From: [identity profile] doodlesthegreat.livejournal.com
I'm just bemused that the only concept for thinking machines is purely mechanistic.

Date: 2010-08-04 04:53 am (UTC)
From: [identity profile] gromm.livejournal.com
That's nothing. Even more amusing is watching theists wrestle with the present (http://action.afa.net/email/online.aspx?cid=1001&mid=20691860&tid=aa&utm_source=smAFA&utm_medium=email&utm_campaign=1001). The theists and social conservatives all seem to pine for a simpler time (http://www.google.ca/url?sa=t&source=web&cd=1&ved=0CBQQtwIwAA&url=http%3A%2F%2Fterrible.videosift.com%2Fvideo%2FGlen-Beck-Creies-For-A-Simpler-Time&rct=j&q=glenn%20beck%20coke%20commercial&ei=b_FYTJD5NY6isQPYyaCpDA&usg=AFQjCNHEPmvOi7upBEuRSZ8h0IMBehTcYA&cad=rja) that's largely fictitious and usually exists only in their childhood memories, not their grown-up ones.
