elfs: (Default)
I was listening to NPR yesterday and there was this guy debunking the whole UFO thing, and at one point he starts going into the nature of what interstellar travel entails. He says, “Look, we’ve been over this. There really is a speed limit to the universe. My degree was in physics and I worked on atom smashers, where we accelerate particles to very close to the speed of light. Our best day, we reached 99.999954% of the speed of light. And you could put twice as much energy into accelerating those particles and they would just go a tiny bit faster, but they wouldn’t go past the limit. So no matter what, it would take a minimum of four years to get from the nearest star to here, and there are all sorts of limits on accelerating to that speed and decelerating enough to visit our solar system in any meaningful way. So it would take a long time. Aliens who want to travel between stars would have to be very patient.”
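As an aside, the diminishing returns he describes drop straight out of special relativity: kinetic energy grows with the Lorentz factor γ = 1/√(1 − v²/c²), so doubling the energy barely nudges v toward c. A quick sketch (the 99.999954% figure is his; the helper names are mine):

```python
import math

def gamma_from_speed(frac: float) -> float:
    """Lorentz factor for a speed given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - frac ** 2)

def speed_fraction(gamma: float) -> float:
    """Invert: speed, as a fraction of c, for a given Lorentz factor."""
    return math.sqrt(1.0 - 1.0 / gamma ** 2)

# The accelerator's best day: 99.999954% of c.
g1 = gamma_from_speed(0.99999954)

# Relativistic kinetic energy is proportional to (gamma - 1),
# so doubling the energy doubles that excess:
g2 = 1.0 + 2.0 * (g1 - 1.0)

print(f"before doubling the energy: {0.99999954:.8f} c")
print(f"after doubling the energy:  {speed_fraction(g2):.8f} c")
```

Twice the energy buys a few more nines after the decimal point, and nothing more; the limit holds.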

It was the word ‘patient’ that piqued my interest because, here’s the thing: patience is an emotion.


Impatience is human restlessness. Regardless of your bent about evolutionary psychology, human beings in the aggregate have three fundamental drives: how do I get enough food, how do I form a community for mutual support in acquiring food and shelter and survival, and how do I find a mate to carry both of those into the future?

In the 1920s, economists named this “The Permanent Problem,” because the first of those drives had effectively been solved: there really were enough calories for every human being. They worried: if you take away Human Platform Problem One, what does human software do now?

That’s built in. There are many and wide deviations from those, which is how we get eating disorders and psychopaths and, yes, homosexuality. Some of those are morally neutral and require communal acceptance, some are personally harmful and require intervention, and some require stricter controls.

Evolution is constantly emitting new variants of the human platform, some of which are useful, and some are not. The “nots” get selected out. And along the way we’ve developed a far more vast and complicated collection of responses to the world around us, even creating a world inside ourselves where we think about how the world works and how other people might react to our ideas, and we call this world “consciousness.”

Patience is the ability to hold on and wait while all that restlessness is poking at us, because human beings are complicated creatures who can think into the future and realize that patience has a payoff.

There’s every reason to think that aliens, especially aliens capable of crossing the vast gulfs of space, with all the biological, physical, cybernetic and even cognitive hardening that might entail, would come with a different set of emotions, a different emotional framework.

In a lot of the science fiction I write, the good parts cribbed from Greg Egan and his early short stories like “Tap” and “Jewel,” human beings have developed the ability and knowledge to reach into their own minds and twiddle with some of the knobs. One of the most commonplace adaptations is called the Canon. The Canon is basically a nightly reset. It’s an emotional Groundhog Day. You wake up every morning with all the memories of the day before, but your emotional state can only be affected by it so far; there’s a range outside of which the Canon will not let your feelings go. The most common use for a Canon is between lovers who want limerence, the sensation of being madly in love, never to fade between the two of them.
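Mechanically, the Canon is just a clamp on emotional drift. As a purely illustrative sketch (the function name, the numeric “feeling” scale, and the band width are all my inventions, not anything from Egan), the nightly reset might look like:

```python
def canon_reset(feelings: dict, baseline: dict, band: float) -> dict:
    """Nightly reset: pull each feeling back inside a fixed band
    around its chosen baseline. Memories persist; drift does not."""
    return {
        name: min(max(value, baseline[name] - band), baseline[name] + band)
        for name, value in feelings.items()
    }

# Hypothetical lovers' settings: keep limerence high, restlessness low.
baseline = {"limerence": 0.9, "restlessness": 0.2}
after_bad_day = {"limerence": 0.4, "restlessness": 0.95}

# Each feeling is pulled back inside ±0.1 of its baseline.
print(canon_reset(after_bad_day, baseline, band=0.1))
```

However bad the day, the morning's emotional state lands back inside the band the owners chose.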

There’s no reason to think that a species capable of interstellar travel couldn’t have that same ability, and make “patience” a moot point in their emotional frameworks as they maintain their vessels and pilot between the stars.



Jeet Heer called that an "unpopular opinion." I think he's right.

I'm going to do some intellectual violence to Buddhism here, but in doing so I hope to open up a couple of pointers, and maybe a discussion, about a central tenet of Buddhism. Buddhism is basically an entire religion built around one essential insight: you are not your thoughts, and your thoughts arise for reasons over which you have very little control.

That's it. The rest of Buddhism is an attempt to make sense of this insight, and to use it fruitfully. The technology for doing so is meditation. ("technology: the means and knowledge used to provide for human sustenance and comfort" — oh, if only!) And there are only three subdisciplines of meditation that you have to master in order to achieve the Buddha's essential insight.

Concentration


The very first skill of Buddhism is being able to focus and control your thoughts on demand. This is the infamous Breath Meditation, the one that bores everyone and is the first major hurdle to overcome. This is the time when you spend first five, then ten, then twenty minutes, then longer, concentrating only on a single thing: your breath, a candle flame, a mantra, a thought, a feeling. That's it. It's a discipline.

And for someone like me, with mild ADHD, it's been incredibly useful.

Mindfulness


The second skill is mindfulness. You can't even begin to practice mindfulness until you've done concentration for a while. Mindfulness starts with being able to recognize when your mind has wandered off from concentration. Over the weeks and months of concentration practice, you develop a sense of mindfulness about your own mind. There are two subdisciplines of mindfulness: external and internal.

External is easier: you become mindful of what's going on around you. You pay attention to the world, to everything around you, labeling every stimulus accurately but not considering anything else about it: not its origin, not its disposition. You can do it with your eyes open, even.

Internal starts out simply enough: meditating on physical states, like what temperature is your big toe, how much pressure is being exerted by your knees, what angle are you carrying your head at. Eventually, though, mindfulness moves to emotions: what does it feel like to feel sad, or angry? Where in your body do you carry stress? Where in your body do you feel happiness?

Between these two, you develop a sense of the transience of all these feelings. Thoughts happen to distract you; you are not entirely, or even mostly, in control of them. The best you can do is keep them marshalled.

Insight


Insight is the hardest of all. It builds off mindfulness. Insight is the realization that those feelings you're having aren't you. You've already developed a sense that your feelings aren't under your control. The distraction to get up and get a drink, or turn away from whatever you're working on to watch YouTube or hit Facebook, is terrible, but that distraction either is you or isn't you, and there's not a whole lot of in-between.

Even more importantly, the border between "you" and the "world" gets a little fuzzy. Sure, it seems to be your skin, but the world comes in through eyes, ears, your nose and mouth. Your skin and the world are in a constant negotiation about the temperatures and pressures to which you're subject, its comfort and its texture: is your skin "you," or is it doing something without your "self" making decisions?

Now the point of insight is to chip away, mindful moment by moment, at the intuition that there is anything at all that is you. There's nothing you can point to that's "you" in a coherent sense. There's a version of you that's hungry, and cranky, and happy, and joyful, but none of those is in a real sense "you," an incontrovertible noun that represents you-ness.

The Buddhists claim that those who have had the full insight, the moment when all sense of yourself has been extinguished and you've fully embraced the idea that there's no coherent "you," become ineffably aware of the fragility of everyone else, and in doing so become more compassionate and wise: an arahat.

The science fictional view


"The Transporter Paradox," which asks who you are if you're disassembled in one place and reassembled in another, complete and accurate down to the last quark, is a classic of modern science fiction. I've played with it myself. My robots talk a lot (too much, maybe) about negotiating that barrier between themselves and the world, about the nature of thought, even about the way we come up with narratives to explain why we act in certain ways. My brain uploads find that giving up the body has its own suite of challenges, and many opt for simulated bodies to keep the level of stimuli familiar and comforting.

But the one thing that brain uploads also challenge is the idea of reifying time. In the current world, we have these lovely tools called "time-traveling debuggers," which record the state of the program as it's running and allow programmers to view the program's memory state as a graph of use-over-time, looking for spikes and strange behaviors and bugs. If we reified someone's brain state in the same way, would that be the "self" she claims as her own? It would be more concrete; it would wrap the Buddhist objections about "impermanence" in a malleable, permanent representation. It would, in fact, challenge Buddhism to treat time as a phenomenon that is part of, and not distinct from, the three-dimensional representation of the body.
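The recording trick at the heart of such debuggers is easy to sketch. This toy (the class name and the "mind" example are hypothetical, not any real debugger's API) snapshots mutable state at every step, so that any past moment can be revisited intact:

```python
import copy

class StateRecorder:
    """Toy sketch of a time-traveling debugger's core move:
    snapshot state at each step so any past moment can be
    inspected later, unchanged by what came after."""

    def __init__(self):
        self.history = []

    def record(self, state: dict) -> None:
        # Deep-copy so later mutations can't rewrite the past.
        self.history.append(copy.deepcopy(state))

    def at(self, step: int) -> dict:
        return self.history[step]

mind = {"mood": "calm", "focus": 0.9}
rec = StateRecorder()
rec.record(mind)

mind["mood"] = "restless"
rec.record(mind)

print(rec.at(0)["mood"])  # the earlier "self" is still there: calm
```

Every recorded instant persists side by side; "impermanence" becomes a limitation of the observer, not of the record.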

On the other hand, it would also definitely reify the way "you" and "your world" are inseparable; just as a time-traveling debugger makes no sense without both a program to run and a computer to run it on, a consciousness running on any substrate, be it meat or metal, requires a context in which to exist. So in one sense, we've found a thought experiment that solidifies one sense in which Buddhism's insights about human nature might not be true, and one in which they are even more true. "Impermanence" is itself an illusory effect of any one human being's inability to perceive more than a singular instant of time. And yet, "selfhood" itself becomes something without any independent existence at all; your "self" doesn't exist except as an illusion, like a seam of silver in the great mines of spacetime that can't be extracted without destroying both.

Re-reading Sam Brinson's Are We Destined To Fall In Love With Androids?, and my response to it, I noticed a pattern among the stories to which I linked, the ones in which I showed how much the "literature of the future" (which is, in fact, really about the present, and ways to address the present) has addressed the question of "human / cyborg relations" (to use fussy C-3PO's term). One of the overriding questions asked in these stories, one which was elided in 2001 and addressed directly if awkwardly in 2010, was this:

What is our moral obligation to the robots we create?

In a lot of ways, science fiction writers use this as a metaphor for the question of our moral obligation to our children and our progeny, but as experience with actual AI accumulates, we (science fiction writers) are already starting to ask questions about our moral obligations to our creations. This isn't a new problem. The very first "artificial life" story, Frankenstein, addresses the issue head-on in the last dialogues between Victor and the Monster, and later between Walton and the Monster.

If you, like me, believe that consciousness is the story we tell ourselves about ourselves, a way of maintaining a continuity of self in a world of endless stimuli and the epiphenomenal means by which we turn our actions into grist for the decisions we make in the future, then maybe there will never be conscious robots, only p-zombie machines indistinguishable from the real thing, William James' automatic sweetheart.

But if we want our robots to have the full range of human experiences, to be lovable on the inside as much as we are, then we're going to have to give them an analogous capacity to reason, to tell themselves stories that model what might happen, and what might result, and therefore we have to ask ourselves what moral obligations we have toward people who are not entirely like us, or whose desires are marshalled in a way that suits us entirely.

My own take has been rather blunt: we are obligated to treat actually existing conscious beings as moral creatures, and they have the rights and responsibilities of all moral creatures. At the same time, the ability to relieve them of the anxieties and neuroses of human beings, our own vague impulses shaped by evolutionary contingency that make us miserable (and they do: happy people lack ambition; they do not build empires), may make them more moral than we are. (Asimov addressed this a lot; in many ways he was far ahead of his time.)
So, it's December 2007. I have the 2001 British Telecom white paper "The Technology Timeline" here; let's see if their futurists got anything right at all. Here's the 2007 list:
  • Artificial Intelligence & Artificial Life
    • Domestic appliances with personality and talking head interface
    • Systems to understand text and drawings (e.g. patent information)
    • People have some virtual friends but don't know which ones
    • AI students
  • Business & education
    • Lifestyle brands dominate
    • Network based learning causes polarisation in classes - streaming is essential
    • Global classes used for multicultural immersion
  • Environment & countryside
    • Virtual farming co-operatives
  • Home & office
    • Emotional objects, switches etc around home
  • Life & leisure in a cyberspace world
    • People reduce tax liability by being partially paid in information products
    • On line voting in UK
  • Materials & electronic devices
    • Material with refractive index variable by 0.1 in electric or magnetic field
    • Self organising adaptive integrated circuits
  • Processing, memory and storage
    • Optical neuro-computers
    • Quantum computer
  • Robotics
    • Totally automated factories
  • Security, law, war
    • Data mining use in trials
    • First net war between cyber-communities
    • Remote override capability on planes
  • Shopping & money
    • Paper and coins largely replaced by electronic cash
    • Shops start being paid by manufacturers as try-on outlets
    • Electronic cash from internet migrates onto high street
  • Space
    • Next generation space telescope launch
  • Transport & travel
    • All new cars fitted with positioning systems as standard
    • Portable translation device for simple conversation
    • Kaleidoscopic clothes using materials with embedded pigment micro-capsules
I might have virtual friends, but I don't know of any AI students yet. Global classes are a joke; there aren't widespread "emotion-sensitive" houses; and both quantum computing and neurocomputing are still gleams in the eyes of researchers. Data mining in law cases has happened, not all cars have positioning black boxes, and portable translation boxes are still a few years out.

Elf Sternberg

Page generated Jul. 23rd, 2025 11:08 am
Powered by Dreamwidth Studios