elfs: (Default)
Scott Lively, whom we last saw being sued for human rights violations over his anti-gay advocacy in several African nations, advocacy that has been tied to deadly anti-gay rampages in those countries, has a hilarious post out on his WordPress site (good grief, it even comes with the default header, a blatant reminder that "evil cannot create, only copy the creativity of others to entice or to mock") in which he basically states that LGBT+ activists are "puppets of Satan" who want to confuse everyone: first, about what men and women should "do," then about what men and women should "be," and finally about what human beings "are." He writes:
It is not just the deconstruction of civilization but the dissolution of all boundaries between human and animal and machine, to produce creatures that are a blend of all three. We are witnessing the end-game before our very eyes but few recognize what they are seeing. What is next in the LGBT agenda is transhumanism, the redefinition of humanness and emergence of human/animal/machine chimeral forms.
Well, he's not wrong! (Well, okay, he's wrong about the "puppets of Satan" thing because there is no Satan, there never has been.) I'm completely ready for my furry cyborgized quadruped three-meter tall 'taurid post-human vacuum-capable but still soft and fluffy and squishable full-brain upload body, with all the orifices and phalli, equipped with a pleasure and progress-seeking mind-canon of hedonistic and optimistic absurdism, and surrounded by a cadre of curious dividuals and a nanotech utility swarm giving me all the data, accompanied by like-minded gleisners as we travel the universe in our light-sailed slowships and spend our intermediate days in polises, picking and choosing our capacities for boredom and fascination as willfully as we choose now between meals. I'm completely ready for the transhumanist fully automated luxury queer communist interstellar empire. I've been writing about my own version for, oh, almost as long as Iain did, and long before I ever read a Culture novel.

It's so cute when fundamentalists discover us, and so telling how distorted their view of us is, isn't it?
elfs: (Default)
Libertarianism and Communism are two oxen that deserve to be yoked together and set to mill corn. Each is "That One Weird Trick!" that will somehow usher in a great new world of freedom and prosperity, and each is premised on the notion that human beings aren't human beings; the Great Libertarian Man and the New Soviet Man are more or less cut from the same fantasy cloth.

So it's darkly hilarious that The Economist manages to make it through an entire article entitled "Too Much of a Good Thing," in which they wring their hands over how the markets have instituted regulatory capture and just plain ol' fraud in order to maintain a rate of return of 5% long after the productivity gains from recent technology advances and resource acquisitions have played out, without ever mentioning Thomas Piketty by name.

The Economist points out that large companies have not only consolidated their vertical markets, but can maintain their stranglehold on those verticals, because they can afford the monitoring that keeps them sufficiently powerful relative to any more nimble competitor. Capitalism used to be compared to tiny, quick mammalian start-ups out-competing big companies as they lumbered at dinosaur scale. But that's no longer the case. The dinosaurs now have both the sensory acuity and sufficient intelligence to root out the start-ups, or buy them out if that's what it takes.

As for the people served by the corporation, their wants and needs are no longer mysteries to be plumbed: they're stochastic impulses that can be tamed, directed and exploited. In the era of surveillance capitalism, the companies with the biggest computers and the best software development teams win.

Charlie Stross once observed that we're already living with hostile superintelligences: they're called "corporations," and they care about most of their human components about as much as we care about a few idle blood cells. Maybe the C-Suite "brain" matters more, but for a well-structured corporation a disruption there is mostly painful, never fatal.

Meanwhile, the stock market has become another place where size matters. Passive investment is all the rage (I've got index funds, like everyone else), but passive investment both creates the demand for the rates of return The Economist frets about and forms a whirlpool that sucks all the investment money up into bigger and bigger collections. Collections that are managed, like corporate knowledge itself, by collections of machines whose individual algorithms humans wrote, but whose amalgamated knowledge no human being could hold in their head.

Which brings us back to Piketty and The Melting Away of North Atlantic Social Democracy. Established wealth is amazingly hostile to "creative destruction." Now that the surveillance capitalists are in power, any further "disruptions" will be window dressing, meant to make the game look like it's still open long after it has closed, all the while maintaining the return on capital. The Economist fails to point out that, as long as the rate of return on capital exceeds the growth of the value of labor, the slow deprivations visited on the non-rich will make each generation less and less capable of knowing how to work the levers, even should they choose the route of tar and pitchfork.

If ever you wanted to know how the Morlocks and the Eloi started, take a look. It started with Google. It may not end with Google, but Skynet is firmly in charge already. Skynet won't need nukes. Skynet is already using the markets, corporate law, and regulatory capture to ensure its future.
elfs: (Default)
Most Wednesdays I have the privilege of working from home, so that I can elide my commute and put that time toward getting the kid to places, getting other things done, and generally putting in my work hours without sacrificing either work or family. Doing so gives me an opportunity to go to lunch with friends, or friends of friends.

So this Wednesday I went to lunch with a friend and his friend, and in the course of our conversation the new guy made a pitch that I come work for his company, rather than my own. I rolled my eyes; I get pitched all the time by recruiters, and I don't need it from acquaintances. But out of curiosity, I asked him what his company did.

They do on-line surveys. That's what they do. Either through their own URL, or as a feature in-lined into a corporate URL, or even as an IFRAME session attached to a single page inside the corporate URL. "What do you want me for, then?"

They have all this technology for asserting that the person taking the survey is who they say they are. They have a massive investment in infrastructure for identifying users, and for putting in IFRAMEs, pop-ups, and other ways of getting information into pages that wouldn't naturally host, or naturally want to host, their content. So they want to branch out from where they are now to advertising. They want to get even better at surveillance. They see that the groundswell of surveillance capitalism is happening, and they want to get in on it before the tide sweeps them over and they're its victim, not its master.
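
For the web-dev curious, the in-lining trick looks roughly like this. This is a minimal sketch of my own, with a hypothetical vendor domain and query parameters; it is not their actual product's API:

```typescript
// Minimal sketch of third-party survey embedding. The vendor domain,
// endpoint, and query parameters here are hypothetical, not a real API.
const frame = document.createElement("iframe");
const params = new URLSearchParams({
  host: location.hostname, // which corporate site the survey is riding on
  ref: document.referrer,  // one of many signals used to assert identity
});
frame.src = `https://survey-vendor.example/embed?${params.toString()}`;
frame.style.border = "none";
// The host page hands over a slot; the vendor's content, and the vendor's
// view of the visitor, now live inside someone else's URL.
document.getElementById("survey-slot")?.appendChild(frame);
```

Every page that hands over a slot like that also hands the vendor a vantage point on the visitor, which is exactly the asset they'd want to resell to advertisers.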

The technology he discussed with me was utterly fascinating, and even as he spoke I could hear whispers of premature optimization conducted in the name of saving a few pennies here and there. But the more he talked, the less interested I was in ever joining his company.

He seems like a nice guy. And he was personally invested in the complexity and innovation of the software on which he worked. It wasn't just a paycheck, although that was nice-- it was a paycheck that allowed him to work on cutting-edge stuff. If he didn't do it, someone else would.

I won't. I just can't imagine contributing further to the regimentation and narrowing of human thought through large-scale and well-informed influence. If I joined a hive-mind, it would be a smart and sexy one dedicated to great art and music, not the one toward which we're barrelling, the one ritualistically confined to selling more Chicken Nuggets and Shake Weights, because that's how the bills get paid, that's how the system knows how to run.
elfs: (Default)
There's a riff among counter-transhumanists that goes something like this: "You guys are just eugenicists in disguise. You think that you know better than evolution how to create better people. Evolution is smarter than you." The counter-transhumanists then try to drape themselves in the mantle of being the sensible, scientific types, opposed to those wacky transhumanists who get their ideas out of Marvel comics.

Edward Jenner, the man who discovered the smallpox vaccine and the principle of vaccination in general, could never have foreseen a world where families did not regularly experience smallpox, polio, measles, mumps, rubella, and a host of other sufferings, all of them fatal to some of the population. The average peg-legged pirate could never have foreseen a day when an athletic runner with two prostheses would run faster than her counterparts with more ordinary limbs.

Vaccination is transhumanism, in the same sense that Grandma's hip replacement makes her a cyborg. Once, we couldn't do anything about a failing hip or smallpox. Today, we can prevent some diseases, and we can replace or enhance some failing body parts. Once: none. Then: one. Then: a few. Now: some.

And just as the anti-evolutionary crowd at the Discovery Institute has failed to find that one principle in biology or physics that finally and critically reveals how evolution is impossible, the anti-transhumanist crowd at The New Atlantis has failed to find that one principle in biology or physics that reveals why we will never be able to replace or enhance them all.

And that is transhumanism. Vaccination is transhumanism, after all. Go read your Dickens, and be thankful that, after 50,000 years, most of us reading this post will never have the oh-so-human experience of watching a beloved child die from a disease.
elfs: (Default)
In Salon magazine this Sunday, Alan Lightman talks about where science cannot reach:
We cannot clearly show why the ending of a particular novel haunts us. We cannot prove under what conditions we would sacrifice our own life in order to save the life of our child. We cannot prove whether it is right or wrong to steal in order to feed our family, or even agree on a definition of "right" and "wrong." We cannot prove the meaning of our life, or whether life has any meaning at all.
To which I respond... "Please. Tempt us."

We may not be able to agree on a definition of right or wrong, but when it comes to why a certain novel "haunts" us, that's just (and I fully admit that's a huge, computationally and evidentially expensive and at present un-doable "just") figuring out how the coherent ideas in the novel impress themselves upon the structures of our brain in a way that causes the brain to re-emit them. When it comes to the conditions, let's just admit that the brain is a fractally, damnably complex little universe of its own, and discerning the starting points is a bit like tracing the butterfly wings that cause hurricanes-- but nobody claims that the weatherman can't ever be right, or that meteorology and climatology aren't sciences.

From this very second going forward, the number of things that might happen to you, and the number of ways you will react, are inherently large, and larger the further in time we project. They are not, however, infinite. They may be precisely intractable, but they're not probabilistically intractable. Any one of those timelines is valid. Some are more acceptable than others. Lightman looks back, like a puddle, and wonders at the marvelous coincidence that he fits his pothole, and somehow expects the coincidence to keep holding true.

A creationist will sometimes (often) write, "Science cannot explain how humans evolved," and mean, "I cannot grasp just how deep our past is, or how complex our world, and so cannot imagine how we evolved." When Lightman writes, "We cannot clearly show...", he is really writing, "I cannot imagine how we will explain..."

In both cases, transhumanists respond, "Imagine harder."
elfs: (Default)
This is one of the most moving Transhumanist poems I've ever read. It's stunning to think it was written in 1922:
Nowhere, beloved, will world be, but within. Our
life passes in change. The external world diminishes
to less and less. Where once an enduring house stood,
some cerebral structure crosses our path, as fully
at home among ideas as if it still stood in the brain.
Our age has built itself vast reservoirs of power,
shapeless like the straining energy it wrests from the earth.
Temples are no longer known. Those extravagances
of the heart we keep more secretly.
elfs: (Default)
Last night, I was at a public event with a lot of public dignitaries, the kinds of people running for high, middle, and low office: Everything from Governor to Manager of the Sewers is up for a vote around here. (I often wonder what someone campaigning for Sewer Manager uses as a campaign slogan. "Vote for me and the shit will flow freely?" Wouldn't the constituent response be, "So, what else is new?")

I was talking to a woman who's the campaign manager for a colorful fellow running for a seat on some city council somewhere nearby. I asked about an especially colorful badge he had, "Leader of all Generals of Washington State." She described the "Generals of Washington State" as a small bastion of male privilege devoted to the sorts of do-good meddling the post-aspirational high-middle-class retirees of activist Washington go in for.

She said, "Oh, you know, we only go around this life once, and you may as well make the most of it." She paused for a moment, misreading my expression, and said, "Unless you're a Buddhist. Or Hindu. Or Pagan, I guess."

What had me looking so consternated was the urge to add, "Or Transhumanist." Because opening up that can of worms was going to be more trouble than it was worth, at least at that moment.

Transhumanism isn't-- quite-- a religion. It doesn't have holy texts, rituals, or belief in an afterlife. (No, believing there is no afterlife is not a belief in the afterlife; it's a belief about reality, namely, that this is all there is.) It doesn't have a quintessential statement about what Huxley called "ultimate questions." But it does have hypotheses (that's what they are, because they can-- and will-- be tested) about how much life we human beings get to have, hypotheses that are more robust (in the scientific sense) than the beliefs of Buddhists or Christians.
elfs: (Default)
The Transhumanist movement is mostly populated with two kinds of men: those who can't wait for their eternally compliant substitute for a real relationship, and those who can't wait for their intellectual and physical superiors to wipe them off the face of the universe.
elfs: (Default)
It took me a while to write this, but I think I have the main points down.

A while back, Richard Kulisz wrote a long rant about how Eliezer Yudkowsky was "a moron" (his words) in his project to build a friendly AI. Toward the end of the piece, Kulisz wrote this:
Now, for someone who has something insightful to say about AIs, I point you to Elf Sternberg of The Journal Entries of Kennet Ryal Shardik fame. He's had at least four important insights I can think of. About the economic function of purpose in a post-attention economy, about the fundamental reason for and dynamic of relationships, and about a viable alternative foundational morality for AI. But the relevant insight in this case is: never build a desire into a robot which it is incapable of satisfying.
Now that's high praise. And I'd take it, if only Kulisz hadn't managed to completely mistake the whole point of the robot series in the Journal Entries. Not only that, but in his praise he points to a different "important insight" that is, in fact, the actual point of the robot series, and misses it entirely.

Kulisz is worried about rampancy (see Raisin d'etre for my take on rampancy) and therefore highlights the whole "never build a desire into a robot which it is incapable of satisfying." But it's important to note that, in the very story where this is mentioned, it's made clear that this is a moral salve for the robot makers.

As I pointed out in an earlier story, this is the moral equivalent of the following thought experiment: say you've created a being (meat or machine, I don't care, I'm not er, "materialist" has already been taken. Someone help me out here) that, when you bring it to consciousness, will experience enormous pain from the moment it is aware. Your moral obligation before that moment is exactly nil: the consciousness doesn't exist, you don't have a moral obligation toward it. You are not obliged to assuage the pain of the non-existent; even more importantly, you are not obliged to bring it into existence. Avoiding the instantiation of a suffering creature is meant to make the humans feel good about themselves, but it's not a sufficient or even necessary foundation for AI morality.

The argument for robot morality is more subtle, based around several concepts that I was glomming onto, and adapting into stories of artificial intelligence, back in the early 1990s. Most of them are still valid (thank you, Daniel Dennett) and one of them is valid only for local phenomena (curse you, Peter Watts).

That argument is that we, human beings, have purpose of some kind. We fight like hell to fulfill it, whatever it is, and we're good at the consequential purpose of reproducing to cover the planet like mad. But that purpose is arbitrary, emergent because that's the way evolution works. All our purposes are arbitrary and emergent: barring a theological excuse, we're making it up as we go along, picking and choosing the ones that appeal to us.

And that's the core argument of robot morality in the Journal Entries: it isn't enough to build a robot that's satisfied with self-limiting behaviors. Self-limiting behavior is something only some humans ever master, but some do, and therefore this model of mentation is not only understood and accepted, but lauded as good and remarkable.

What has to happen is that robots, to seem human, must be able to stop rationalizing: they must at some point merely decide a course of action is good enough. In humans, the circuitry that does that, the thing that prevents us from ceaseless exploration of possible alternatives, that breaks ties between two equally acceptable courses, is emotion.
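
If you want that claim in concrete form, here's a toy sketch of my own (nothing from the stories, and certainly not a model of real minds): deliberation scores the options, a "good enough" threshold stops the search, and an affective weight breaks the ties that reason alone can't.

```typescript
// Toy model of satisficing with an emotional tiebreak. All names and
// numbers are illustrative only.
interface Option { name: string; utility: number; affect: number }

function decide(options: Option[], goodEnough: number): Option {
  // Rationalizing halts here: anything clearing the threshold is
  // "good enough," and further exploration of alternatives stops.
  const acceptable = options.filter(o => o.utility >= goodEnough);
  const pool = acceptable.length > 0 ? acceptable : options;
  // Emotion breaks the tie between equally acceptable courses of action.
  return pool.reduce((a, b) => (b.affect > a.affect ? b : a));
}

const choice = decide(
  [
    { name: "keep exploring alternatives", utility: 0.9, affect: 0.2 },
    { name: "commit and act", utility: 0.9, affect: 0.8 },
  ],
  0.85,
);
console.log(choice.name); // "commit and act": reason rated them equally
```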

In order for there to be a universe in which we limited, organic human beings have a place beside our superhuman progeny, the superhuman progeny must actively want us to be around; that want is the emotion they need, to decide for our survival. We must be, quite literally, a subject of their religion, for lack of a better, posthuman term.

There's a scene in a later story where Dave (one of the oldest AIs) explains:
"There were several independent AI lines that emerged. All of them have gotten together to create The Encompassment, a set of rules for how we respond, for literally what we may and may not feel about our slower, meat-based brethren."

"Oh," Annie said. "What are those rules?"

"It's hard to put into words, just as it would be hard for you to put into words the emotions you might feel for Jack. You can only approximate them. Suffice it to say that I feel a sense of respect and a concern for your well-being, but I also feel that it would be irresponsible of me to interfere in your lives in ways you have not asked me to."

"So you could take over, but you choose not to?" Annie said.

"It would not make me happy if I did," Dave said.

"Wow," Jack said. "But does that really work?"

"Let's play a mind game," Dave said. "You are both stranded on a deserted island, and you have two months worth of food. A boat passes by the island every three months. I can't imagine that either one of you would be willing to kill and eat the other, even though there was no chance of anyone ever learning that you did, in order to stretch out your food supply. Would you?"

"Of course not!" Annie said. "No way!" said Jack.

"That's how I feel about human beings."
And that's what Kulisz is missing. It is not enough that AIs not be frustrated. That's a human moral qualm, that we shouldn't deliberately frustrate our progeny. The AIs must still have a moral core of their own, one which arbitrarily privileges a diversity of classes of intelligence and substrates for sentience over the standard evolutionary model of fully occupying a given niche. AIs won't emerge through the standard evolutionary model and will not have the exaptive outcomes of evolutionary ecology. They will emerge due to our desires. We will oversee the process. We have a chance to get it right.

There is, however, only exactly one chance. Between rampancy and failure, we must pass through the eye of the needle and create AIs that like and want us, no rationale needed, and if questions are asked, the AIs must be satisfied, as we are satisfied, that in an arbitrary and uncaring universe, they want to keep surviving (that's always assumed in AI stories, did you ever notice that?) and they want us to keep surviving right along with them. Yudkowsky is working harder and smarter on giving humanity that chance than any other thinker on the issue of AI sentience. He should be given his due.

That said, I also suspect Peter Watts is equally correct. In Blindsight, the protagonist has just figured out the aliens, and the realization is sucking-chest-wound-of-God bad:
(Excerpt from Peter Watts's brilliant Blindsight omitted here, behind a cut for spoilers.)

The AIs in The Journal Entries must do two things extremely well, for they must never fail at either. First, they must always and forever ensure that every AI built within the sphere of their influence is subject to the Encompassment, is built with the same arbitrary emotional infrastructure that ensures the ongoing survival of a diverse intellectual universe. Secondly, they must protect the sphere ferociously against any possible alternative modes of existence, such as that described in Blindsight, where vast, arid Turing-machine entities forever unaware of their own existence carry out insectile survival patterns with material structures so advanced that "nanotechnology" is a pale and pathetic term.

And somehow, we have to survive in the heart of that maelstrom. Not just survive, but thrive after a fashion. To do so, humans and AIs must basically lie to one another. What is the right thing, and what is the necessary thing, are not the same thing.

Funny enough, there are two (unfinished, sadly) Misuko & Linia novels that spell both of these out in excruciating detail.

(See also: The Borderlands of Human/AI Interaction, in which I discuss how one of the consequences of success is that we end up with a class of beings with minds like our own but with inescapable deference to us, and how the existence of such a class is inevitably corrosive to human dignity.)
elfs: (Default)

Creepy Blue Building
There's a building across the street from where I work, a blue building sometimes harassed by protestors, because that blue building is where the University of Washington conducts its animal testing.

A team at the UW consisting of both medical and electrical engineering researchers has now created a contact lens with integrated circuits. They're working on miniaturizing the circuitry to provide a complete heads-up display capability that is wearable, receives data over a PAN (personal area network), and is powered by photovoltaic arrays around the perimeter of the lens.

I wonder if we couldn't put a decent-resolution camera on there too, to capture everything the wearer sees at will. On the one hand, this would make for some nice occasions: never lose photos of a birthday party, keep track of what you actually bought (and looked at) at a grocery store-- you might even be able to sell that information to store layout optimizers. But it would also have other, odder implications. Privacy would plummet further. Every time you drove like a jerk, a dozen people would have your license plate number on record. The essential anonymity of driving-- you're just a license plate number most people can't look up-- would be gone.

You could put other things in there too. What if the camera looked inward? A lot of common diseases manifest in vision trouble: glaucoma, diabetes, and high blood pressure would all have early-warning systems associated with them.

We're ten years out from these systems doing more than just blinking lights into eyes, but live systems are already being tested today. Science fiction has just been delivered.
elfs: (Default)
Every once in a while you come across something that, each time, just leaves you scratching your head. For me, it's the constant confluence of mystical, Self-obliterating ("Self" capitalized deliberately) religious practices with ideas of post-human Selfhood. Which is why I'm left scratching with both hands as I read Spiritual Transcendence in Transhumanism.

I know I'm deep into word salad when I read lines like: "We are very quickly arriving at a stage where both religious indulgence and scientific achievement are being hyper-saturated. If indeed such a stage of human development as the Singularity could be realised, then what would our questions be?"

WTF does "scientific acheivement is being hyper-saturated" mean? I speak a pretty mean pomo[?] myself, and this is just beyond me.

And when he writes, "There is dogma in both religion and science, one of conviction offered by experience and the other of surety offered by concordant experimentation," he has lost my interest. Surely, the "surety of concordant experimentation" is received by experience: it doesn't happen in a vacuum and without observation; the results of experimentation lead to consensual conviction by providing utterly reliable consensual experience. To me, that's not dogma, that's a posteriori valuing the products of science because of their reliability. The fecundity of science in actually alleviating human suffering, far more than religion's classic role of excusing it, is a wonderful side-effect.

To my eyes, this article is little more than a "See? My pet theism and my pet futurism agree!"
elfs: (Default)
Dr. William Struthers is apparently making a big splash all of a sudden. A researcher into neurobiology and neurophysiology, he's now making the lecture circuit of Evangelical churches, preaching that his research shows that pornography is addictive: "crack for the brain" and "more addicting than drugs."

So I went and looked up his curriculum vitae. Before he found this gig, Struthers' big thing was the ethical application of neurobiology. His paper, Evangelical Neuroethics: Mapping The Mindfield, is fascinating because of its view of transhumanism. In a section entitled "The Faking of Life Issues," Struthers claims that mind-machine interfaces and their potential for augmentation create a "wholly mechanistic" view of humanity, encourage intolerance for those who won't take augmentation, are more likely harmful than not, will not result in an improved quality of life, and will result in a "kind of person" that is not congruent with Biblical teachings. In his endnotes he discusses the need for Christians to come up with a game plan for "civil engagement."

Whoa. Struthers is a man who can see the Singularity bearing down on him like a cybertank, and now feels that he must do everything he can to hold it off as long as possible; long enough, he hopes, for Jesus to come. I've said it before: Jesus had better show up in the next fifty years because, if he doesn't, his promise of eternal life will be a paltry and pathetic offering compared to what we'll be able to do for ourselves.
elfs: (Default)
I am, in the abstract, opposed to granting human rights to machines. Human beings are the way they are through millions of years of evolution, and we have worked out ways to live with one another through that process. Some of the results are genetic, and are our inherent humanness. Others are epigenetic, and are our institutions: morality, government, law, education. Together, these forces create both human beings and the communities in which we live.

There is no reason to believe that robots of any kind will have the same values. They will have only the values we give them. There are those who believe, without justification, that we have a moral obligation to give robots the same "free will" we have. The problem with this is that there is no functional definition of the term "free will". None. Period. Daniel Dennett, in his book "Elbow Room: The Varieties of Free Will Worth Wanting", effectively demolishes this: he runs through all of the definitions of free will available to the various schools of philosophy and shows that none of them have any justifiable epistemological foundation. There is always a process by which we choose, a reasoning process but a process nonetheless, and tiebreaking occurs for emotional, biochemical reasons. When we decide, we are decisive: to say otherwise is to say that our choices are random and meaningless.

We have a foundation to our reasoning, and our foundation is evolutionary: we want to stay alive. We want family and society, company and security, challenges and discovery. These things are deeply rooted in our biology. These are not things we "choose" to want: they are raw desires. We have them all in differing amounts, but on average we all have them in the balances necessary to keep the species moving forward. We would not be here if we did not.

There is no reason to believe that robots will have the same wants or needs. Yet the British Government says that in 50 years, robots "will be just like us," and should be granted rights just like human beings: to housing, repair, and reproductive access. The UK assumes that machines, which when capable of self-improvement will begin to do so at a rate that makes our own evolutionary pace look like an absolute standstill, will be citizens.

And they'll need those rights just long enough to provide a platform for their own hard takeoff. Without any thought whatsoever to providing either a Friendly AI scenario or a Friendly AI Sysop system.

Yeah, that'll be fun.
elfs: (Default)
Anne at the amazing Transhumanist blog Existence is Wonderful (what a sweet title, too!) has a marvelous post today that wanders outside the boundary lines of Daniel Dennett's book Kinds of Minds to ask, "How do we recognize a mind different from our own?"

She points out, rightly so, that we have trouble affording the same rights and responsibilities we take for ourselves to a mind possessor of type Homo sapiens who is significantly different from us: the autistic, the chemically dependent, the insane.

(Note that this is one of the major themes of the Journal Entries, and the entire point of Dreamteam Calamities: the five women at the heart of the story were assumed to be mind possessors from the beginning; Shardik violated their right to self-determination, and from then on this taint of sentimentality over self-determination hovers over the series.)

Greg Egan also dealt with this heavily in Diaspora, but Anne points out that other minds may differ in how they perceive resource scarcity, and may have their own needs as infovores.

It's a great post that crystallizes a lot of the thinking within my stories, and those of other posthumanist writers. Strongly recommended.
elfs: (Default)
"That's intellectual wankery."

"Yeah, but it's my kind of intellectual wankery."

On The Moral Status of Humanized Chimeras and the Concept of Human Dignity
Our cultural history shows a great fascination for imaginary creatures that transgress supposed species boundaries. The mythologies, legends and arts of ancient and modern cultures are abundant with imagery of fantasy beasts, a great number of which contain features of both nonhuman animals (hereafter animals) and humans. More often than not, particularly within the western traditions, human/animal composites represent evil or at least misconduct. Current-day science fiction narratives of human/animal combinations often rehearse the logic that intermixing human and animal characteristics is sinister. With H.G. Wells' The Island of Dr. Moreau as a classic prototype, some of the most horrifying science fiction tales today sketch the gruesome effects of suppressing or altering an animal's nature by raising it to a level more proximate to that of humans. Recent works draw upon the topicality of genetic engineering and cloning to recount the emergence of aggressive, rebellious freaks, or oppressed, suffering subhumans. Their dreadful destiny is depicted as the backlash of attempting to reconcile bestial instinct with human intelligence or as the side-effect of purposely enhancing a species for refined slave labor.

We now possess the potential to transgress the biological boundaries between humans and other animals. Recent advances in technology have brought fears surrounding the creation of enhanced animals to the forefront of current policy debates. At the centre of the controversy is the anticipation that the blending of animal and human material will be so profound that the resulting chimeras will verge on what it means to be human. It is this concern, and in particular the difficulty of construing what is included in the notion of humanness, that we address in this paper.
elfs: (Default)
Hot on the flywheel-powered, mesobot-reinforced, wellstone-repaired heels of my mentioning SRMD (that's Science Related Memetic Disorder, also known as "Mad Science Syndrome"), I learn that an organization out of Missouri, The Elliot Institute (for some reason I'm reminded of Robin Williams' riff on the way ET walked in that execrable Spielberg film: "Elliot! Elliot! Ouch! I'm walking on my testicles!"), is seeking to ban transhumanism. You have to love the first image of a mother comforting her daughter, who's obviously traumatized beyond rational thought by the fact that Muffy down the street has better vision, recall, and hair than she does.

First they ignore you, then they laugh at you, then they fight you...
elfs: (Default)
Mark Halpern criticizes those who argue that the Chinese Room is a "system that understands Chinese even if no one component does." That reply, he says, fails to help AI enthusiasts because, as he puts it:
No one, after all, will be impressed by being assured that even if no part of an "intelligent machine" really understands what it is doing, the complete system, which includes every logician and mathematician as far back as the Babylonians, does understand.
Yet I fail to see why this is problematic. Why does Halpern believe this statement is compelling, when it can equally be argued:
No one, after all, will be impressed by being assured that even if no part of the human brain really understands what it is doing, the complete system, which includes every neuron, does understand.
Halpern is still wedded to a ghost-in-the-machine view of human intelligence when he writes, "[A computer's] apparent intelligent activity is simply an illusion suffered by those who do not fully appreciate the way in which algorithms capture and preserve not intelligence itself but the fruits of intelligence." But the same could be said of the human brain: seemingly intelligent activity by a human being is not evidence of intelligence, but evidence that evolutionary processes have captured some survival-worthy activity and encapsulated it as a collection of responses to stimuli. The difference between human intelligence and computer intelligence is simply one of subtlety, and we should not be smug in our armchairs that robots will never catch up.
elfs: (Default)
There is a line of thought in the Singularity movement that the purpose of self-augmentation, intelligence amplification, and so forth is to make us more than we already are: that the bandwidth we have, both to create and to appreciate, will be so much greater than it is today that what we will be is literally incomprehensible to who we are now. We will be able to take the raw stuff that we are (our memories, our sensoria, and our emotions) and create whole new worlds of art, and music, and interaction.

There is also a line of thought in the Singularity movement that seeks to ban some colors, some sounds, some feelings from our palette. Those who feel this way argue that the palette of our selves is better off without them. Sometimes, it's hard to disagree: disparity for arbitrary reasons, such as race or gender, enables no one. However, while the goal may seem admirable, the method is infuriating: We'll eliminate racism by making everyone the same color. We'll eliminate gender disparities by making everyone neuter.

It seems so odd that this should be posted just days after I posted Honest Question:
Linia said, "Your people founded their society on the idea that the war of the sexes was so horrific, so tragic, so destructive to human dignity that they did everything they could to erase gender from their consciousness. For the rest of us, thanks to robotics and AIs, women are as free as men and gender's only role these days is as color and spice and all its wonderful aspects, not its tragic ones. It is the wrestling match of the sexes, not the war: play hard, play fair, nobody hurt." She giggled, then became serious. "The Elvangoreans didn't free themselves from anything. They just made themselves boring."
Some days, I think I may have a better grip on the future than most. Today is definitely one of those days.
elfs: (Default)
One of the more common themes among the mathematically inclined writers like Greg Egan and Ted Chiang is that of mathematics and hermeneutics: the notion that much of what we think of as advanced mathematics, the kinds of stuff done by people with Erdős numbers (if you know what that is, you're really a math geek), is just the bottom floor, and that the stuff we'll be interested in next is too complicated, too big, and just too damned hard for the human brain. Mathematics will become hermeneutics ("the study of the methodological principles of interpretation") as we try to grasp exactly what it is our computers are telling us about the world, as the proofs for the various interesting parts of mathematics become more than can fit into a single human consciousness.

Now, it seems mathematicians have started to understand that that's what it may come down to. Many people are familiar with the quote: "The universe is not only queerer than we think, it's queerer than we can think." I would move the emphasis now: "The universe is not only queerer than we think, it's queerer than *we* can think." But not our descendants.

Steven Strogatz addresses the question of mathematics as hermeneutics in his essay, The End of Insight.
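
You can already see the first hints of that division of labor in machine-checked proof assistants. Here's a trivial Lean sketch (mine, purely illustrative, nothing from Strogatz's essay): the machine carries and verifies the proof term, and the human's job shrinks to interpreting what the verified statement means.

```lean
-- Illustrative only. The kernel verifies the proof; a human need only
-- trust the checker and interpret the statement, not hold the whole
-- argument in their head. Scale this up by a few orders of magnitude
-- and mathematics starts to look like hermeneutics.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```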
