Date: 2007-01-31 03:00 am (UTC)
Before he found this gig, Struthers' big thing was the 'ethical' application of neurobiology. [quotes mine] So, in other words: "Brainwashing-4-Jeezus."

In a section entitled "The Faking of Life Issues," Struthers claims that mind-machine interfaces and their potential for augmentation create a "wholly mechanistic" view of humanity. Ah, but we already have a "wholly mechanistic" view of humanity. To wit: almost everyone in all branches of science approaches human cognition as if it were a purely linear process.

Of course, it's not. Even small collections of neurons are highly nonlinear systems.

Having specialized in nonlinear dynamics for my doctorate, I can tell you a few things about such systems (there's a toy demonstration right after this list):
  1. They are inherently unpredictable.
  2. They aren't random. They're just not predictable. (The two aren't the same.)
  3. In a linear system, the whole is the sum of its parts. In a nonlinear system, the whole is almost never the sum of its parts.
  4. There are strong indications (but no conclusive data or models yet) that, sometimes, in a nonlinear system, the whole is greater than the sum of its parts …
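If points 1 and 2 sound abstract, here's a five-line toy of my own (nothing to do with Struthers or his book): the logistic map is about as simple a nonlinear system as you can write down. It's completely deterministic, no randomness anywhere, yet two starting points that differ in the ninth decimal place wander off into completely different futures within a few dozen steps.

```python
# Toy illustration (mine): the logistic map, x_{n+1} = r * x_n * (1 - x_n).
# Deterministic, yet a one-part-in-a-billion difference in the starting
# point gets amplified until the two trajectories have nothing in common.

def logistic(x, r=3.9, steps=50):
    trajectory = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        trajectory.append(x)
    return trajectory

a = logistic(0.200000000)
b = logistic(0.200000001)   # differs in the 9th decimal place

for n in (0, 10, 20, 30, 40, 50):
    print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}  (difference {abs(a[n] - b[n]):.2e})")
```

Unpredictable in practice, but not random: run it twice with the same starting value and you get exactly the same trajectory, every time.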


There's something else that I recall from my grad-school days, from a course in the emerging science of "Complexity". We were studying ANNs, artificial neural nets. As I recall, real neurons are kinda-sorta-vaguely like analog transistors. Only, not. Unlike a transistor, a biological neuron has an arbitrary number of inputs and outputs, and can sit anywhere between "on" and "off". (Think dimmer knob, not light switch.) Yet even the esteemed computer scientist Peter Naur, in a recent technical article, conflated neurons with the on-or-off, switchlike behavior of a transistor.
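To make the contrast concrete, here's a throwaway sketch of my own (not from the class notes, and still a gross caricature of a real neuron): a "switch" model is all-or-nothing, while even the crudest graded model takes as many weighted inputs as you care to give it and produces an output anywhere between "off" and "on".

```python
import math

def switch_neuron(inputs, threshold=1.0):
    # Transistor-style caricature: it fires (1) or it doesn't (0).
    return 1 if sum(inputs) >= threshold else 0

def graded_neuron(inputs, weights):
    # Arbitrary fan-in, continuous output via a sigmoid squashing function.
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-total))

signals = [0.2, 0.7, 0.1, 0.9]           # as many inputs as you like
weights = [0.5, -1.2, 2.0, 0.3]

print(switch_neuron(signals))             # 1 -- just "on"
print(graded_neuron(signals, weights))    # about 0.43 -- the dimmer knob, partway up
```

And even the graded version leaves out timing, chemistry, and everything else that makes a real neuron interesting.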

So, question: How do we develop humans who are "beyond-human" if we can't let go of our über-simplified analogies of the brain's "circuitry" long enough to start looking at the actual components and how they work?

But I have a second problem with the whole trans-human idea, again stemming from studying ANNs in that class. Seems that ANNs have a "capacity," though not like the RAM in your computer. No, this "capacity" is more of a soft limit on how much you can "store" in the artificial neural net. For example, if you teach 8 letters to an ANN that does character recognition, all well and good. Let's say this ANN's "capacity" is 10 characters. What happens if you try to teach it all 26 letters?

Oh, it appears to learn them. But a funny thing happens: it starts having trouble recognizing letters. It'll confuse, for example, "E" and "F". And as you try to store more letters in it, it'll start confusing "E" and "Z". In short: try to store too much in an artificial neural net, and it has trouble remembering.
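Here's a rough demonstration of that capacity effect. I'm using a Hopfield-style associative memory rather than whatever feed-forward net we trained in class, but the phenomenon is the same: a net of N binary units only holds on the order of 0.14 × N patterns before recall starts blending them together.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                    # number of "neurons"

def store(patterns):
    # Hebbian outer-product rule, zero diagonal.
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def recall(W, probe, steps=20):
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

def overlap(a, b):
    return float(a @ b) / len(a)           # 1.0 means perfect recall

for count in (5, 14, 40):                  # well under, near, and over capacity
    patterns = [rng.choice([-1, 1], N) for _ in range(count)]
    W = store(patterns)
    noisy = patterns[0].copy()
    noisy[rng.choice(N, 10, replace=False)] *= -1   # probe with a smudged copy
    print(count, "patterns stored, recall overlap:",
          round(overlap(recall(W, noisy), patterns[0]), 2))
```

Well under capacity, the smudged probe snaps back to the stored pattern almost perfectly; pile in far more patterns than the net can hold and the recalled pattern is a muddle of several stored ones.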

Sound familiar?

So, what if the implications of that analogy are correct? What if the flaky memory of old age isn't merely physiological, but is also a matter of our brains just getting full? What does that mean for a nigh-immortal trans-human?