no subject
Date: 2007-01-31 03:00 am (UTC)
Ah, but we already have a "wholly mechanistic" view of humanity. To wit: almost everyone in every branch of science approaches human cognition as if it were a purely linear process.
Of course, it's not. Even small collections of neurons are highly nonlinear systems.
Having specialized in nonlinear dynamics for my doctorate, I can tell you a few things about them.
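To make "highly nonlinear" concrete, here's a minimal sketch using the logistic map, the textbook one-line nonlinear system (this is an illustration of nonlinear dynamics generally, not a model of neurons; the parameter r = 4.0 and the starting points are just illustrative choices):

```python
def logistic_orbit(x0, r=4.0, steps=60):
    """Iterate the logistic map x -> r*x*(1-x) and return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two starting points that differ only in the tenth decimal place.
orbit_a = logistic_orbit(0.2)
orbit_b = logistic_orbit(0.2 + 1e-10)

# In the chaotic regime (r = 4), that tiny difference is amplified
# until the two trajectories bear no resemblance to each other.
max_gap = max(abs(a - b) for a, b in zip(orbit_a, orbit_b))
```

A linear system can never do this: small input differences stay small. That sensitivity is one reason treating cognition as a purely linear process throws away the interesting behavior.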
There's something else that I recall from my grad-school days, from a course in the emerging science of "Complexity". We were studying ANNs, artificial neural nets. As I recall, real neurons are kinda-sorta-vaguely like analog transistors. Only, not. Unlike a transistor, a biological neuron has an arbitrary number of inputs and outputs, and its output can be anything between "on" and "off". (Think dimmer knob, not light switch.) Yet even the esteemed computer scientist Peter Naur, in a recent technical article, conflated neurons with the on-or-off, switch-like behavior of a transistor.
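The dimmer-knob versus light-switch distinction can be sketched in a few lines. This is a caricature, not biophysics: the weights and inputs are made-up numbers, `step_neuron` is the transistor-like picture, and `graded_neuron` uses a logistic squashing function as a stand-in for graded output:

```python
import math

def step_neuron(inputs, weights, threshold=0.0):
    """Transistor-like caricature: the output is strictly 0 or 1."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s > threshold else 0

def graded_neuron(inputs, weights, bias=0.0):
    """Closer to the dimmer knob: a smooth output anywhere in (0, 1)."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))  # logistic squashing

x = [0.2, 0.7, 0.1]   # made-up input levels
w = [0.5, -0.3, 0.8]  # made-up synaptic weights
# step_neuron(x, w) can only answer "on" or "off";
# graded_neuron(x, w) reports a continuous activation level.
```

Both take any number of inputs, but only the graded version captures the "anything between on and off" behavior the comment describes.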
So, question: how do we develop humans who are "beyond human" if we can't let go of our über-simplified analogies of the brain's "circuitry" long enough to start looking at the actual components and how they work?
But I have a second problem with the whole trans-human idea, again stemming from studying ANNs in that class. It seems that ANNs have a "capacity," though not like the RAM in your computer. No, this "capacity" is more of a soft limit on how much you can "store" in the artificial neural net. For example, if you teach 8 letters to an ANN that does character recognition, all well and good. Let's say this ANN's "capacity" is 10 characters. What happens if you try to teach it all 26 letters?
Oh, it appears to learn them. But a funny thing happens: it starts having trouble recognizing letters. It'll confuse, for example, "E" and "F". And as you try to store more letters in it, it'll start confusing "E" and "Z". In short: try to store too much in an artificial neural net, and it has trouble remembering.
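The overloading effect is easy to reproduce with a toy associative memory. A minimal sketch, assuming a classic Hopfield network with Hebbian weights (not the character-recognition ANN from the course, whose details aren't given; the sizes and pattern counts here are illustrative — a 64-neuron Hopfield net can reliably hold only about 0.14 × 64 ≈ 9 random patterns):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_hopfield(patterns):
    """Hebbian weight matrix for a Hopfield net storing +/-1 patterns."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n  # sum of outer products
    np.fill_diagonal(w, 0.0)       # no self-connections
    return w

def recall(w, probe, steps=10):
    """Synchronous threshold updates from a probe state."""
    s = probe.copy()
    for _ in range(steps):
        s = np.where(w @ s >= 0, 1, -1)
    return s

def stored_fraction(w, patterns):
    """Fraction of trained patterns the net reproduces exactly."""
    hits = sum(np.array_equal(recall(w, p), p) for p in patterns)
    return hits / len(patterns)

n_neurons = 64
few  = rng.choice([-1, 1], size=(3, n_neurons))   # well under capacity
many = rng.choice([-1, 1], size=(40, n_neurons))  # far over capacity

frac_few  = stored_fraction(train_hopfield(few),  few)
frac_many = stored_fraction(train_hopfield(many), many)
```

Under capacity, the net reproduces its patterns cleanly; pile on too many and it misrecalls the very patterns it was trained on — the "E"-versus-"F" confusion in miniature.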
Sound familiar?
So, what if those implications are correct? What if the flaky memory of old age isn't merely physiological, but also due to our brains just getting full? What does that mean for a nigh-immortal trans-human?