In the previous article discussing Adam Rogers's Sex With Robots, I laid down some basic definitions of what it is we talk about when we talk about sex robots. Although there were three distinct technologies, they fall into two categories:
- Physical realism
- Emotional realism
For the sake of discussion, I'm going to lump "mental realism," that is, the ability to speak knowledgeably about a topic, in with emotional realism: speaking convincingly is a skill that encompasses both, and both are what we want from our robots and AIs.
Provided we want them at all.
I'm going to provide more of Rogers's core paragraph, just so we can go over it:
> Can a robot consent to having sex with you? Can you consent to sex with it? On the one hand, technology isn’t sophisticated enough to build a sentient, autonomous agent that can choose to not only have sex but even love, which means that by definition it cannot consent. So it’ll necessarily present a skewed, possibly toxic version. And if the technology gets good enough to evince love and lust—Turing love—but its programming still means it can’t not consent, well, that’s slavery.
The technology of emotionally convincing, speaking, artificially intelligent agents doesn't require that they be "sentient" or "autonomous." We don't require those skills of Alexa or Siri or Cortana. People will talk for hours to something as simple as Eliza, which was written in 1964! The more realism the better, and the more convinced we are that the machine of loving grace is both convenient and caring, the more we're willing to pay for it.
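To ground that claim about Eliza: all it took to keep people talking for hours was pattern matching and a handful of canned "reflections." Here's a minimal sketch in that spirit; the rules and word swaps below are my own illustrative inventions, not Weizenbaum's actual script:

```python
# A minimal Eliza-style responder (a sketch, not Weizenbaum's original):
# a few regex rules plus pronoun "reflection" is enough machinery to
# sustain a conversation indefinitely. No sentience required.
import re

# Swap first- and second-person words so the echo sounds attentive.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# Ordered rules: first match wins; the final catch-all always fires.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    (re.compile(r".*"), "Please, go on."),
]

def reflect(fragment: str) -> str:
    """Flip pronouns word by word ("me" -> "you", etc.)."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(line: str) -> str:
    """Return the first matching rule's template, filled with the reflected capture."""
    for pattern, template in RULES:
        m = pattern.match(line.strip())
        if m:
            return template.format(*[reflect(g) for g in m.groups()])

if __name__ == "__main__":
    print(respond("I feel nobody listens to me"))
    # -> Why do you feel nobody listens to you?
```

That's the entire trick: the machine reflects your own words back at you, and you supply the feeling of being heard.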
The objective of these disciplines is not to create autonomous, sentient agents. Combined, these disciplines create a convincing, satisficing woman-shaped object (or man-shaped object) that exists literally to engineer away the awkward parts of negotiating with another human being to meet one's sexual needs. Just as we've engineered away the awkward parts of getting food with McDonald's and Kraft Mac & Cheese, the awkward parts of getting places by inventing cars, and the awkward parts of hearing stories by inventing Netflix, we're on the cusp of engineering away the awkward parts of human sexual relief with ever-better iterations of the three technologies above put together into a sex toy.
No one negotiates with a vibrator, dildo, or cock sleeve. Until and unless robots meet that very high bar we call "consciousness," we're pretty much in the clear. This report, Why Sex Robots?, highlights what's clear about sex robots: not a single respondent wanted a fully autonomous, independent, sentient partner. They want something less than themselves, but they want it close enough to a woman or man to satisfy their atavistic, evolutionary need for something that's shaped like, looks like, and feels like a human being.
It's not a human being. Human beings emerged from evolution with a set of instincts that make us what we are: insecure and anxious, desperate for security and attention, eager both to be part of a community in which we feel safe and protected and to prove we're worthy of more than the minimum the community provides, to belong and to compete. Since content and restful creatures tend not to compete for reproductive resources, evolution made us restless. We're needy and selfish and, when facing people who aren't part of our tribe, frequently dismissive or even cruel.
Robots will only have those qualities if we give those qualities to them. We don't have to. We can, and should, make them different. We should give them the things we want to be, but only struggle to be. Because we're only human.
Rogers's incoherency is manifest here:
> The first question, then, is whether robots will desire sex back.
I thought the first question was "Can they consent?" In any event, if they don't, we built them completely, utterly incorrectly. We built them not to be our companions but our competition. We built them with our cruelty, our insecurity, our selfishness, and then somehow we expected them to behave. We built all of the things that make us suffer into them, in order to deliberately make them suffer. We built them not as if we were building a durable appliance; we built them as emotionally repressed human beings.
Rogers starts by assuming that they're emotionally repressed human beings. Oh, but it gets worse!
> And if the technology gets good enough to evince love and lust—Turing love—but its programming still means it can’t not consent, well, that’s slavery.
Okay, let's hold onto this thought, because the next one is a doozy.
> Lewis may be onto something of a solution here: The robot does not have to look human.
And this, friends, is where your typical science fiction reader loses his goddamned mind. Because we have two different and completely contradictory thoughts in place here:
- The closer something looks to human, the more it is human. If it passes the "automatic sweetheart" test, then we're obligated to treat it as a fully formed human being.
- If something is so Turing-complete as to be conversationally and behaviorally indistinguishable from a human being, and it doesn't look "close to human," then its moral value is diminished below that of a human being's to the point where it can be regarded as irrelevant.
This is incoherent and immoral: it says the body matters and the soul doesn't. It starts with that perverse anthropomorphism, the one that assumes that if a robot looks and acts and smells human, then a robot with its full dedication and capability directed toward a human being, rather than toward selfish needs or some abstract "all humanity," is morally the same as an emotionally repressed human being, and that it's our duty to remove the repression, presumably by installing either selfishness or altruism, along with all the concurrent suffering that goes with either, or both.
Rogers's essay starts out with a bad question: "Can a machine we make consent?", assumes a terrible, immoral strawman situation, and then devolves into the kind of "We only have to care if it looks like us" mindset that fueled the horrors of chattel slavery and Aktion T4. He says he wants to avoid slavery, and then creates a world in which slavery is not only rampant but cruelly inflicted, and asks us not to care because "they don't look like us."