I am, in the abstract, opposed to granting human rights to machines. Human beings are the way they are through millions of years of evolution, and we have worked out ways to live with one another through that process. Some of the results are genetic, and are our inherent humanness. Others are epigenetic, and are our institutions: morality, government, law, education. Together, these forces create both human beings and the communities in which we live.
There is no reason to believe that robots of any kind will have the same values. They will have only the values we give them. There are those who believe, without justification, that we have a moral obligation to give robots the same "free will" we have. The problem with this is that there is no functional definition of the term "free will". None. Period. Daniel Dennett, in his book "Elbow Room: The Varieties of Free Will Worth Wanting", effectively demolishes the notion: he runs through the definitions of free will offered by the various schools of philosophy and shows that none of them has a justifiable epistemological foundation. There is always a process by which we choose (a reasoning process, but a process nonetheless), and tiebreaking happens for emotional, biochemical reasons. When we decide, we are decisive: to say otherwise is to say that our choices are random and meaningless.
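To make the "process" point concrete: a chooser can be a fully deterministic procedure and still be decisive. Here is a toy sketch in Python (my own illustration, with made-up options and weights, not a serious model of a mind) where explicit reasons score the options and an affective weight breaks the near-tie, standing in for the biochemical tiebreaker:

```python
def choose(options, reasons, affect):
    """Pick the option the reasons score highest; break ties by affect."""
    scored = {opt: sum(reason(opt) for reason in reasons) for opt in options}
    best = max(scored.values())
    finalists = [opt for opt, score in scored.items() if score == best]
    # Deliberation narrowed the field; "emotion" settles the near-tie.
    return max(finalists, key=affect)

# Made-up example: choosing a commute.
options = ["bike", "bus", "car"]
reasons = [
    lambda o: 2 if o == "bike" else 0,  # health
    lambda o: 2 if o == "bus" else 1,   # cost
    lambda o: 1,                        # time (a wash here)
]
affect = {"bike": 0.9, "bus": 0.4, "car": 0.2}.get

print(choose(options, reasons, affect))  # "bike": deterministic, yet decisive
```

Nothing in that procedure is random, and nothing in it is "free" in the philosophers' sense; it still produces a definite choice for definite reasons, which is the point.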
We have a foundation to our reasoning, and that foundation is evolutionary: we want to stay alive. We want family and society, company and security, challenges and discovery. These things are deeply rooted in our biology. These are not things we "choose" to want: they are raw desires. We have them all in differing amounts, but on average we have them in the balances necessary to keep the species moving forward. We would not be here if we did not.
There is no reason to believe that robots will have the same wants or needs. Yet the British Government says that in 50 years robots "will be just like us" and should be granted rights just like human beings: to housing, repair, and reproductive access. The UK assumes that machines, which once capable of self-improvement will improve at a rate that makes our own evolutionary pace look like an absolute standstill, will be citizens.
And they'll need those rights just long enough to provide a platform for their own hard takeoff, without any thought whatsoever given to either a Friendly AI scenario or a Friendly AI Sysop system.
Yeah, that'll be fun.
no subject
Date: 2006-12-28 06:44 pm (UTC)
After the foot-and-mouth outbreak a few years ago, anyone who would trust his advice is probably buying pharmaceuticals over the Internet.
And somebody, somewhere, has made a fast buck by ripping off SF writers such as Vernor Vinge, Charlie Stross, and quite likely E & O Binder.
no subject
Date: 2006-12-28 08:09 pm (UTC)
Come on, Elf, and use that brain I know you have. It's not like the US has never had a crackpot idea, and it's not like they (usually) get considered.
no subject
Date: 2006-12-28 08:50 pm (UTC)
But now you've posted this entry, it's obvious you're gonna be tread lubricant! :)
no subject
Date: 2006-12-28 09:40 pm (UTC)

no subject
Date: 2006-12-30 07:56 am (UTC)

1) As Property
The term for this system is "slavery." The problem is that enslaving sapient beings harms both slaves and masters; the slaves are directly harmed by the removal of their natural civil rights; the masters are harmed by the corruption produced by having absolute power over other sapient beings.
Societies based on slavery ultimately regard whatever work the slaves do as being "degrading," hence not worthy of serious consideration. Eventually, this attitude develops into a contempt for all work. The Classical world failed to achieve the Scientific and Industrial Revolutions largely because Classical philosophers had convinced themselves that "base artisan" work was fit only for slaves.
2) As People
This means that sapient robots get civil rights. This implies that they should be designed or grown in a way likely to promote sanity and an awareness of the responsibilities that come with rights. Obviously, if sapient robots have civil rights, then they also have civil responsibilities: they may be taxed, and if they commit crimes may be prosecuted for them.
Conclusion: A General Point About Rational Self-Interest
There is no obvious reason why sapient robots (if treated with respect rather than enslaved) would want to exterminate or conquer the human race. Rational self-interest promotes cooperation, not bloody warfare. Furthermore, to the extent that the robots did have emotions, they would, if properly designed, have sane emotions, rather than wanting to senselessly destroy or harm others.
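That cooperation claim is the standard lesson of the iterated prisoner's dilemma. As a toy illustration (my own sketch in Python, using the conventional made-up payoff values, not anything from an actual robot-rights proposal), a strategy that reciprocates cooperation ends up far better off over repeated encounters than a society of mutual exploiters:

```python
# Conventional prisoner's dilemma payoffs: (my move, their move) -> my score.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I'm exploited
    ("D", "C"): 5,  # I exploit
    ("D", "D"): 1,  # mutual defection
}

def play(strategy_a, strategy_b, rounds=100):
    """Run two strategies against each other; return their total scores."""
    score_a = score_b = 0
    last_a = last_b = "C"  # both open by cooperating
    for _ in range(rounds):
        move_a, move_b = strategy_a(last_b), strategy_b(last_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        last_a, last_b = move_a, move_b
    return score_a, score_b

tit_for_tat = lambda their_last: their_last  # reciprocate whatever they did
always_defect = lambda their_last: "D"       # pure exploitation

print(play(tit_for_tat, tit_for_tat))      # (300, 300): cooperation pays
print(play(always_defect, always_defect))  # (100, 100): everyone loses
```

The numbers are arbitrary, but the shape of the result is not: when the interaction repeats, the self-interested move is to keep cooperating with those who cooperate with you.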
It is true that if we treat them as slaves, we are storing up trouble for ourselves for the future. So why do so?