[personal profile] elfs
Mark Halpern criticizes those who defend the Chinese Room as a "system that understands Chinese even if no one component does," arguing that this reply fails to help AI enthusiasts because, as he puts it:
No one, after all, will be impressed by being assured that even if no part of an "intelligent machine" really understands what it is doing, the complete system, which includes every logician and mathematician as far back as the Babylonians, does understand.
Yet I fail to see why this is problematic. Why does Halpern believe this statement is compelling, when it can equally be argued:
No one, after all, will be impressed by being assured that even if no part of the human brain really understands what it is doing, the complete system, which includes every neuron, does understand.
Halpern is still wedded to a ghost-in-the-machine view of human intelligence when he writes, "[A computer's] apparent intelligent activity is simply an illusion suffered by those who do not fully appreciate the way in which algorithms capture and preserve not intelligence itself but the fruits of intelligence." But the same could be said of the human brain: seemingly intelligent activity by a human being is not evidence of intelligence, but evidence that evolutionary processes have captured some survival-worthy activity and encapsulated it as a collection of responses to stimuli. The difference between human intelligence and computer intelligence is simply one of subtlety, and we should not assume smugly from our armchairs that robots will never catch up.
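To make the "fruits of intelligence" point concrete, here is a minimal Python sketch of a Chinese-Room-style responder. The rulebook entries are invented purely for illustration; the point is that the table encodes its author's understanding while the code consulting it understands nothing:

# A toy Chinese Room: the rulebook captures its author's understanding
# of Chinese; the code that consults it understands none of it.
RULEBOOK = {
    "你好": "你好！",              # "hello" -> "hello!"
    "你会说中文吗？": "会一点。",  # "do you speak Chinese?" -> "a little."
}

def room(symbols: str) -> str:
    # Pure symbol-matching: no component here knows what any symbol means.
    return RULEBOOK.get(symbols, "请再说一遍。")  # "please say that again."

print(room("你好"))  # prints 你好！ without "understanding" it

Whether the whole system "understands" is exactly what the systems reply and Halpern are arguing over; the code itself is neutral on that question.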

Date: 2006-02-27 01:56 am (UTC)
From: [personal profile] auroramama
I don't see that as problematic either. People seem to keep using arguments against machine intelligence (or animal intelligence, for that matter) that work just as well against people-who-aren't-me intelligence. Some people, in fact, deal with the asymmetry of "I think, therefore I am, but everyone else just looks like they're thinking" by pronouncing consciousness an illusion: nobody really thinks, so nobody really is (a conscious self). This seems disappointing to me. Anyway, "emergent property" is not equivalent to "illusion".

Date: 2006-02-27 07:28 am (UTC)
From: [identity profile] antonia-tiger.livejournal.com
It seems to me that, while "emergent property" is to some extent a handwave, it is a necessary consequence of the limits of what we can explicitly model. Some of those limits are simply those of time and effort: how detailed a model of the weather can we make that will still give a result soon enough to be a forecast? Other limits come from such things as chaos, which can appear in the simplest of useful math.
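To make the point about chaos concrete: the logistic map, x_{n+1} = r·x_n·(1 − x_n), is about as simple as useful math gets, yet at r = 4 it is chaotic. A minimal Python sketch (the starting values are arbitrary):

# Logistic map: x_{n+1} = r * x * (1 - x). At r = 4 the dynamics are
# chaotic, so two trajectories that start a hair apart soon diverge.
def logistic_orbit(x, r=4.0, steps=50):
    orbit = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        orbit.append(x)
    return orbit

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-10)   # differs by one part in ten billion
for n in (10, 30, 50):
    print(n, abs(a[n - 1] - b[n - 1]))   # the gap grows toward order one

No amount of measurement precision buys more than a few dozen extra steps of prediction here, which is the weather-forecasting limit in miniature.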

So it's possible not only that we don't know how intelligence emerges from the complexity of our brains, but that, for reasons other than philosophical arguments about whether a system can ever understand itself, we cannot know.

But that's slightly different from the Chinese Room, which assumes we can look inside the Black Box and says something about what we find there. I think there's a real possibility that we can't do that. Though what we may have, and what we may be able to see, is a Chinese Room with a lot of parts we can understand, and a great big Black Box in the middle.

Which isn't really a Chinese Room any more.
