[personal profile] elfs
Mark Helpren criticizes those who defend the Chinese Room as a "system that understands Chinese even if no one component does," arguing that this reply fails to help AI enthusiasts because, as he puts it:
No one, after all, will be impressed by being assured that even if no part of an "intelligent machine" really understands what it is doing, the complete system, which includes every logician and mathematician as far back as the Babylonians, does understand.
Yet I fail to see why this is problematic. Why does Helpren believe this statement is compelling, when one could equally argue:
No one, after all, will be impressed by being assured that even if no part of the human brain really understands what it is doing, the complete system, which includes every neuron, does understand.
Helpren is still wedded to a ghost-in-the-machine view of human intelligence when he writes, "[A computer's] apparent intelligent activity is simply an illusion suffered by those who do not fully appreciate the way in which algorithms capture and preserve not intelligence itself but the fruits of intelligence." But the same could be said of the human brain: seemingly intelligent activity by a human being is not evidence of intelligence, but evidence that evolutionary processes have captured some survival-worthy activity and encapsulated it as a collection of responses to stimuli. The difference between human intelligence and computer intelligence is simply one of subtlety, and we should not be smug in our armchairs that robots will never catch up.
Elf Sternberg