2001 Blues

Originally published at rewired.com

It’s a hassle to edge past 50 and realize that the dreams pumped into your head by "the dreamer fithp" (see Footfall by Larry Niven and Jerry Pournelle, where SF authors form a kind of dreamer caste as seen by elephant-beaked aliens) prove bogus… specifically dreams of space travel, alien contact, and robots, all present in 2001: A Space Odyssey, a raging zen koan I’ve carried with me for 33 years. For those of you who are not at least backsliding Buddhists, a koan is a riddle that a Buddhist master gives his disciple. Once it’s in the disciple’s head, the koan is like a time bomb ticking away until it explodes as realization. It could be one of the tiny kensho realizations that shape the pathway toward real enlightenment and wisdom, or it could (rarely) be a multi-megaton satori mindfuck that blasts the disciple into a state of profound awareness.

My 2001 bomb has gone off, finally, as we get to that first year of the new millennium…and it’s fizzled, in the sense that we haven’t taken up space travel in the big way envisioned by Clarke and Kubrick, we haven’t found alien artifacts on the moon (though we might have a few outside Roswell?), and artificial intelligence is a bust.

I first saw 2001 in summer 1968, its first release, in the now defunct widescreen format Cinerama, which was the IMAX of its time. (I’ve since seen a version remastered for IMAX, and it was just as amazing as the Cinerama version.) It was part of a pilgrimage of sorts: my friend Freaky Pinky and I set out from West Texas for San Francisco (summer of love) but were waylaid by our wild and woolly friends in Flagstaff. We were distracted from one adventure by another, and this one was as worldly as they come: we drank gallons of Schlitz beer and wandered around Northern Arizona, to Snowbowl and Grand Canyon and down Oak Creek Canyon to Jerome, where we spent time with authentic San Francisco hippie freaks eating rice and herbs and smoking ganja. Eventually our money ran out and my brother asked me to be best man at his wedding, so we aborted the trip and returned to Texas. Before that, though, I told Pinky et al that whatever else happened, we must find a Cinerama theatre so we could see 2001: A Space Odyssey in all its Super Panavision glory. And so we did, driving one boring Sunday morning from Flagstaff to Scottsdale, where there was a Cinerama theatre and 2001 was showing.

Though our young minds were pretty well fucked by Kubrick’s expansive vision, we had no idea what we’d just seen.

We drove home late, the desert sky totally clear and filled with stars (in my mind I was circling Jupiter, but the engine of my own Discovery, in this case an old Mercury Comet I had inherited from my grandfather, was misfiring ’til Pinky got me to stop so he could clean the fouled plugs). While Pinky was fixing our vessel, Bill and I were drifting in space, hiding from the work at hand and searching the vast sky, counting stars. The Amboy Dukes were in heavy rotation with ‘Journey to the Center of the Mind’ and we never suspected that the hot lead guitarist would become a right-wing gun nut second only to Moses. Right then we were not of the world.

I must’ve seen 2001 fifty times since then, in every possible format: Cinerama, 70mm, 35mm, video, DVD… I’ve read Clarke’s book and most of the sequels, I’ve seen 2010 a couple dozen times. I’ve had long philosophical discussions about 2001’s meaning, especially focusing on the tricky ending.

Now that we’re there, entering the new millennium, I have perspective in addition to all the thinking I’ve done about the film, and I’m thinking it’s time to pull my thoughts together into a critique that focuses on what I think Kubrick & Clarke were trying to say, and where I take issue with them. To do this I’m trying to look at 2001 as though Clarke hadn’t written the several sequels, because I think those sequels take a direction that doesn’t resonate with the poetic and philosophical foundations of the first work. It’s not clear to me, for instance, how the initial work’s depiction of the evolution of the Star Child is consistent with a plan to create a new sun from Jupiter with a new set of populated planets. If the deus ex spatium had that plan in mind, why wait for a signal that earth’s monkeys had evolved sufficiently to reach the moon? And why is the Star Child suddenly remanded to the "Dave Bowman" form, and what is his purpose relative to the emergence of the new solar system?

But I digress, I was going to avoid the sequels entirely.

What I mostly want to talk about is Hal 9000, and his place in the evolutionary chain that began with Moon-Watcher and his prehistoric tribe, an evolution evidently triggered by exposure to the deus ex machina black monolith. After seeing the monolith, Moon-Watcher kills another of his kind with a bone, his first tool, the origin of technology and of war. At the other end of human history Hal 9000, simultaneously a technological artifact and an evolutionary advance, kills Frank Poole and attempts to kill Dave Bowman. Evolution and war are inextricably linked in the book and the film; war and violence are themes of Kubrick’s most notable films: Paths of Glory, Dr. Strangelove, A Clockwork Orange, Full Metal Jacket, etc. This is stuff I didn’t get at first. In fact, it took me thirty years to catch on.

Among these other things, 2001 is about human hubris and frailty on the one hand, and the majesty of human yearning on the other hand. Consider: you find a monolith on the moon, it makes a weirdly electronic shriek toward Jupiter, and instead of saying "what the fuck" and blowing it off, you build a bone-shaped state-of-the-art spaceship and high-tail it to Jupiter orbit. We just have to know who’s out there, and when Dave Bowman gets the full explanation of the mission, he puts his life on the line and drops himself onto the larger monolith circling Jupiter. His last words: "My God, it’s full of stars!" Actually, that last phrase is from 2010; we don’t have a clue what Bowman’s thinking or feeling in 2001, partly ’cause he’s so totally cool. Says Keir Dullea:

The characters that Gary Lockwood and myself portrayed were characters that Kubrick envisioned as having strong psychological profiles. Also, that the equivalent of NASA would have its eye out for people at an early age to display steady personalities that would not be thrown by circumstances that would cause real problems for the average person, whether in "2001," or in the time we were making the film. That may have given the impression of a special kind of personality, because most of the persons portrayed were astronauts or the scientists dealing with unusual phenomena. (Cinemax interview, transcribed at http://www.cinemax.com/kubrick/cmp/dullea_transcript.html).
So they were totally cool, a complete refinement of the human that evolved from man-ape Moon-Watcher and his bunch. And Hal was supposed to be even more refined, though he lost his cool and tried to destroy our space-jumping heroes. Why? He went schizo over conflicting inputs, according to the analysis in 2010.

Hal himself says:

The 9000 series is the most reliable computer ever made. No 9000 computer has ever made a mistake or distorted information. We are all, by any practical definition of the words, foolproof and incapable of error.

Yet the 9000 is programmed, according to Chandra’s discovery in 2010, to conceal mission information from the crew, and this is evidently contrary to the 9000 series charter.

The presumption of "schizophrenia" in Hal 9000 assumes that he has mental function similar to a human’s… Hal is an "artificial intelligence," which according to the dialog in 2001 means only that he mimics human intelligence… but the impression, especially given his schizophrenic episode, is that his intelligence is more than simulation. He refers to himself as a sentient being, and this is clearly how Kubrick sees him. Hal seems to represent a consequence of the evolution represented in the film, and this takes me back to the evolution-as-domination meme. Hal is a step forward, and he’s prepared to jettison inferior humanoids as he begins to feel his virtual oats.

He’s just an eyeball… a huge red-and-yellow (like a coral snake: red and yellow, kill a fellow) eyeball. He’s silicon. He can read lips. He’s the whole fucking ship! He’s superior and he knows it.

But he’s unreal. The problem with science-fiction as fable and myth is that it seeps into our thinking and takes over. Ask people if they believe that sentient robots will be created in our lifetime, and a surprising number will say yes, not realizing how truly dumb computers really are. That intelligence you perceive was put there by some programmer. Computers don’t think. They run processes, throw switches. There is no "artificial intelligence." Hell, there’s precious little evidence of human intelligence!
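To make that point concrete, here’s a toy sketch (the names and canned replies are my own invention, not any real system): every "clever" thing this program says was typed in by a programmer, and the machine itself does nothing but match substrings and throw switches.

```python
# A toy HAL: the "intelligence" you perceive below is entirely the
# programmer's. The machine only matches patterns against canned replies.
RULES = [
    ("open the pod bay doors", "I'm sorry, Dave. I'm afraid I can't do that."),
    ("are you sentient", "I am, by any practical definition of the words, foolproof."),
]

def respond(utterance: str) -> str:
    """Return the first canned reply whose trigger appears in the input."""
    lowered = utterance.lower()
    for trigger, reply in RULES:
        if trigger in lowered:
            return reply
    return "Please rephrase that."
```

No process here thinks; add a rule and the "mind" gains a thought, delete one and it forgets. That asymmetry is the whole argument.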

So the legacy of 2001 as science fiction is more of the same Frankenstein/golem/Metropolis fable of a human creation animated and possessing human intelligence and human emotions. Frankenstein’s monster was at least a patchwork of human parts. But Hal 9000 is no more sentient than the clay golem, or for that matter this dumb (and too-often slooow) computer that sits in front of me.

I’m okay with Hal, though, when I remind myself that he’s a metaphor for something… the disembodied voice of a next step in human evolution. His presence isn’t about prediction, it’s about metaphoric truth. He’s not an AI, he’s a golem, a Frankenstein, a Pinocchio, a cyber-Maria, a Roy Baty. The human-made human is a literary device reflecting the hubris in the human presumption of God’s role as creator. In 2001, the human reach for immortality through cyborganic extension is evident, but Kubrick points over and over to clear evidence of mortality: however evolved the human species, we can’t escape the cycle of eat, sleep, and crap. In the film, everyone from Moon-Watcher & tribe to Dave Bowman eats, and Heywood Floyd ponders directions for the space toilet. Hal doesn’t eat, sleep, or crap. In the news interview sequence, the newsman discusses with Dave Bowman whether Hal is a real or simulated intelligence, and Dave’s not so sure he knows the answer. The more I watch, the more I see narrative ambiguity around this question, though many folks who’ve seen the film only once or twice may miss the subtlety.

I’m thinking Kubrick believed artificial intelligence would be at some point doable, and he was sufficiently fascinated that he worked for years on a film project called simply AI, originally based on a Brian Aldiss story called "Super-Toys Last All Summer Long," about a boy who is revealed to be an android. Kubrick never quite got clear what story he wanted, though AI will eventually be released as a Steven Spielberg project. I find myself wondering whether Kubrick wanted to explore the AI concept as a follow-on to 2001 and Hal, or was off in a new direction. The Spielberg version will doubtless be quite different from anything Kubrick would’ve authored, so we’ll never really know where he was going. My sense of Kubrick was that he was a true artist, never quite sure where he was going ’til he got there.

Meanwhile, I’m contending with my own mortality and the fact that 2001 is here and the world doesn’t feel all that different. That’s nuts, of course: look what’s happened. The cold war is over and we’re momentarily at peace in a world that doesn’t quite seem to know what to do with itself. The computer is pervasive and computers are networked so that we’re pushing data at high speeds in every conceivable direction, though computers still don’t seem smarter than their creators… just faster. There is no Hal, though there are extropians who will tell you that computer intelligence is Out There, just wait. There are a couple of space stations, but they look more like television antennae than the elegant spinning wheel we saw in 2001. We stopped visiting the moon, and nobody’s found a great reason to make commercial space flights. Through unmanned probes and the Hubble telescope we’ve accumulated vast data stores about outer space, but we still don’t have a clue where we came from, and we’re a long way from sending anyone to Jupiter or any other planet. Space travel lost its gloss when the Challenger exploded, and no one’s particularly excited about manned space exploration. Perhaps we’ve been so saturated with Star Trek/Star Wars images of space travel that we take it so for granted that we’re beyond doing the work to make it real.

I realize, sadly, that I’ll probably die with no better understanding of human existence than I had in my head that night in 1968, as I gazed at the stars and the spaces between stars and wondered who made creation and where it’s going. Form is emptiness, I guess, and emptiness is form. And the first day of the new millennium is really just another day.

jonl/ 12/29/2000

Trends 2015

Monkeys in Space

Here’s a list of trends I see going into 2015, created for this year’s “State of the World” conversation.

Privatization of outer space: A number of companies are developing spaceware, and there’s one nonprofit that’s formed to colonize Mars by 2023 (http://en.wikipedia.org/wiki/Mars_One). Is the investment entirely speculative, or do we have clear business models driving a potential new space age?

Currency revolution: a number of alternative currencies have appeared, most notably the cryptocurrency Bitcoin. There are also technologies for digitally mediated barter. How will these be integrated into existing economic systems? Are we really looking at a (more? or less?) radical transformation of global economies?

AI/robotics: we’re beginning to see practical, usable applications of robotics, and there’s much talk of evolving artificial intelligence and possible singularity.

Alan Turing, via Benedict Cumberbatch, is getting some attention. When asked in “The Imitation Game” whether machines will ever think like humans, he scoffs – that’s the wrong question. Machines may think, but not “like humans.” Much of the singularity talk doesn’t get this point, but is rooted in anthropomorphism, which makes about as much sense as a golem emerging from a carefully-shaped clay effigy.

We like to think there’s no intelligence that ain’t human, but that’s a shadow of anthropocentric hubris. As we get into robotics and AI in a bigger, industrial-strength, way, what will they teach us about intelligence, human and other?

Practical backlash against 1% and hyper-neoliberalism: the political pendulum swings persistently, and it doesn’t make human sense to roll backwards to some sort of feudal society. Also, propaganda only works so far before practical intelligence engenders some degree of critical thinking. Okay, I’m being hopeful here, but I believe the extreme factions in the civil cold war du jour will be overcome by those who are more balanced, reasonable, and practical. 2015 could be the turning point; waiting to hear the alarm ring.

Internet of things: There’s buzz around the IoT now, probably not altogether practical, but driving investment that could fund innovation. We ask the wrong questions about it, i.e. “why do I want my toaster to talk to my refrigerator?” We should be considering what “things” are most practical to network, and the pro and con implications. Are there security implications? Are we depending too much on networks, creating too great a vulnerability to network failure?

Cyberwars, hacktivism, crypto activism: Networked information systems have inherent vulnerabilities, increasingly exploited by various actors for various reasons. To the extent that we live our lives online and invest in our online identities, we’re subject to these vulnerabilities. This is increasingly obvious, and the question for any one of us is, how vulnerable have I become, and how do I mitigate risk? This is a question for individuals, corporations, and governments. Mitigation can create obstructions and limit the value of networks, so we have to think hard about the risks we’re willing to take and the measures we’re willing to adopt to limit those risks. It’s also clear that governments (and non-governmental movements) will engage in cyberwar – to what extent will some of us suffer collateral damage from those engagements?

Network fatigue: Expect to see more strategic cord-cutting: limiting online activity generally and persistently, or perhaps periodically (“no Facebook for 30 days”). Response to information overwhelm is inevitable.

“New democrats”: Liberal entities like the Democratic party in the U.S. have proved ineffective as alternatives to well-organized corporate conservatives. The health of societies depends on a balance of the two approaches characterized simplistically as “left vs right.” Correction of the current imbalance is inevitable, but will likely involve entities that are nascent or don’t exist yet, vs the established entities of the left, which seem irrelevant and obsolete, partly because they have sought to compete by identifying with their opponents, rather than by emphasizing alternatives.

One possible trend could emerge from a middling trend, i.e. a rejection of polarization and an emphasis on a practical middle path between “left wing” and “right wing.”

Demilitarization of police: Militarization of police after 9/11 may have seemed like a good idea at the time, but none of us wanted to create a police state, which is a potential effect. Going forward, we’ll be rethinking the role of police departments in communities, considering how to undo the downside of the militarization efforts, and working out how to respond effectively to potential terrorist acts within our borders without confusing police objectives with military objectives.

Crowdsourcing medical solutions: smart patients will have more of a role in evolving therapies, and have more input into our understanding of human systems and response to disease. Participatory medicine will become more established. Medical research will consider patient feedback to get a better sense of complex contextual factors affecting health. More people will do granular “quantified self” tracking, and there will be systems to aggregate and analyze this information, impacting our understanding of prevention as well as disease.

Jamais Cascio at Inkwell

Jamais Cascio

Futurist Jamais Cascio is holding forth about scenarios, foresight, and climate change at Inkwell on the WELL. If you have comments or questions for the conversation, send to inkwell at well.com.

So here’s one of the nasty, generally unstated truths about climate disruption: by and large, the rich countries (the primary historical source of greenhouse gas emissions) will very likely weather climate disruptions much more readily than poor countries (historically *not* greenhouse powerhouses). This is in part due to geography — the equatorial region’s going to get hammered by global warming, and the closer-to-the-poles regions less so — but mostly due to money. The US, Europe, and Japan will be more able to afford to adapt than will China, India, or other up & coming developing nations. Australia is an exception on the geography side, and a test case in how well a rich nation can adapt.

At least in the near-medium term; left unchecked, climate disruption hoses everyone by the end of the century.

Your sense that the Pacific Northwest is one of the better places to go in the US is probably accurate. Not sure that Seattle itself is a good spot, simply due to how close it is to sea level. Portland’s a decent option, though.

Texas residents should pay close attention to what’s happening in Australia right now — that’s your likely (uncomfortably near) future.

As a general rule, you want to be further north and well above sea level. Storm systems in the western Atlantic seem to be getting charged by climate disruption more so than storms in the eastern Pacific, so you’ll probably want to be well away from the coastline in the US Northeast. Also, bear in mind that global warming means increased (a) energy in the atmosphere (driving storms) and (b) ability for the atmosphere to hold moisture, so winter storms will probably be bigger deals.

Europe’s problem is that most of the northern cities and regions aren’t accustomed to very hot summers, and don’t have the necessary infrastructure to withstand the heat (remember the heat wave that killed thousands in Europe a few years ago — they were by and large killed by the lack of air conditioning). That’s not impossible to fix. Power lines/stations that aren’t built for the heat may be a bigger issue.

To be clear, nobody gets a pass on the impacts of global warming. Water access, loss of farmland, internal population displacement*, novel pests & diseases will be big problems in the rich countries as well as the poor — it’s just that the US, etc., will have more resources to draw from to deal with these problems.

State of the World 2013: read the conversation

Bruce Sterling and I (with substantial contributions from others) conversed online for two weeks about the state of the world, as we do every year; that talk ended yesterday, but it’s archived for your perusal:

State of the World 2013

Here’s my concluding post, in response to a post by Gail Williams on war as metaphor and war as hard reality:

Gail, your post makes me think about the perception of (or, trendier, optics for) war post-WWII, sanitized by the many postwar films and accounts. Those who knew better kept quiet. Meanwhile those of us who grew up in the 50s were deluded; we played war games, it was fun. Vietnam taught us better, or I should say, taught us bitter. Bitter disillusionment.

Drone war reduces risk but, arguably, increases the probability of collateral damage. In fact, in war all damage could be characterized as collateral damage, as powerful elders, safely away from the front, send the young and innocent, true believers, into battle.

Hopefully by now many more of us, a majority, understand that war is a nightmare to be avoided. And the war metaphor doesn’t serve us all that well.

We won’t end rape by declaring war on it. We’ll end rape through education, cultivation of sensitivity and empathy, rethinking the meaning of gender difference.

We won’t end poverty by declaring war on it, or by throwing money at it. We’ll end poverty by caring about it.

We won’t end drug problems by declaring war on drugs. We’ll end drug problems by understanding why and how drugs become a problem, by treating addiction as a very human issue, maybe a disease, not a crime.

Robots

Robot photo from Rhizome

My favorite-so-far Bruce Sterling post in the State of the World conversation:

“Following on from John Payne’s comments in <76>, are the robots coming for our jobs? Is a certain amount of unemployment going to end up as part of the system and, if so, what happens next?”

It’s so interesting to see this perennial question coming into vogue once again. When I was a pre-teen first discovering “science fiction,” that automation dystopia story was all over the place. Even on the cover of TIME magazine. See this Artzybasheff computer monster, all busy stealing guys’ jobs? Looks oddly familiar, doesn’t it?

Heckuva commercial artist, Artzybasheff

Of course that issue pre-dates me by a long chalk. It’s also the folk song of John Henry the Steel-Drivin’ Man, who breaks his heart defeating the boss’s Steam Hammer.

I can tell you what’s NOT gonna happen with “robots.” Nobody’s gonna defeat the logic of the assembly line by starting a Pre-Raphaelite Arts and Crafts commune where people shun the Robot and make hand-made wall tapestries. That’s been tried eight thousand different times and places. It never works for anybody who’s not Amish.

Framing the issue as “robots coming for our jobs” is rather a moot point anyhow, because the blue-collar guys who “own” assembly “jobs” have zero input on whether robots get deployed or not. What practical difference does that question make? No modern salaried employee anywhere has the clout to defend a “job” from “the robots.” The investors deploying the robots are serenely unworried about Luddite saboteurs or crippling labor-union strikes. Those possibilities of working-class resistance were de-fanged ages ago.

So, you know, either they automate some processes at the cost of human labor, or they don’t. Somebody’s always gonna try it, and in some areas it works out rather better than it does in others, but the basic robot story isn’t robots, it’s “whatever happens to musicians will eventually happen to everybody.”

Apparently this latest little robot-vs-job flap gets most of its impetus from two things, a cool new assembly robot created by Rodney Brooks and a typically Emersonian intervention from Kevin Kelly.

So, here I’ll tell my Rodney Brooks story. I met the guy once, at some forgettable event in Washington DC, and after the panels were over, Prof Brooks and I ventured into the bar.

So, I was nursing a whiskey sour, and I was like: “So, Doctor Brooks, I know a little about your work, and –”

“Call me Rod!”

“So, Rod — level with me about this MIT scheme you have to automate the movement of insect legs. How’s that supposed to work, exactly?”

So, Rod was nothing loath, and he was pretty well going at it hammer and tongs, while I was asking the occasional provocative sci-fi style question — stuff like “so, how does the cube-square law work out when the robo-insects are walking on the ceiling?” — because we sci-fi writers dote on MIT.

Then I happened to glance across the bar, and I saw that our bartender was “frozen in disbelief.” He was so amazed by what Brooks was saying that his glass and his cleaning cloth were rigid in his unmoving arms. This bartender had the affect of a sci-fi movie android with a power failure. It was the only time I’ve ever seen that figure of speech as a genuine aspect of human behavior.

So, I give Rodney Brooks a lot of credit, he’s a fascinating guy, I’m glad to see him kept busy on things other than, for instance, an MIT-style Vannevar Bush Manhattan Project at an undisclosed desert location. I’m confident that Rod’s new manipulator is pretty snazzy.

But let me ask this: if an assembly-line device is going to “take our jobs,” wouldn’t a 3dprinter also “take our jobs?” Why do we treat them so differently? I mean, they’re both basically the same device: automated mechanical systems precisely moving loads in three dimensions by following software instructions.

So how come the Brooks robot is framed as a sinister job-stealing robot, while a 3dprinter is framed as a printer, like, a cool nifty peripheral? Didn’t digital printers also take a lot of “people’s jobs?”

Besides, a Brooks robot is just imitating human-scale movement while 3dprinters create objects in micron-accurate ways that no human can possibly do at all. So clearly the 3dprinter is a more radical threat to the status quo.

Along this same line: Chris Anderson, late of WIRED, has got a new book out about “Makers.” I read it. It’s all about how network society cadres with 3dprinters and open-source schematics and instructables are going to create a “Third Industrial Revolution.” Great, right? Okay, maybe Makers take over the world or they don’t, but how come nobody says “A Third Industrial Revolution means those Makers are going to take our jobs?” Because they would, wouldn’t they? How could they not?

Shouldn’t this prospect be of larger concern than Rodney Brooks’ latest gizmo, one among hordes of assembly line robots that have been around for decades now? An “Industrial Revolution” should *almost by definition* take everybody’s jobs. But the general reaction to Anderson’s book is that the guy is *too optimistic*, that he drank his own tech-hype bathwater and is having way too much fun. Isn’t there an inconsistency here?

Then there’s the latest Kevin Kelly argument, which is more or less about how robots are gonna take everybody’s jobs, but fine, that’s great, especially if they’re sexbots. There’s nothing sparkly-new about this line of reasoning, it’s very Automation Takes Command. The pitch is that robots take the dull dirty and dangerous jobs, which frees us to become, I dunno, humane speculative creatives like Kevin Kelly, I guess.

However, I don’t believe automation has ever worked like that; there’s no creeping wave-line with “robotics” on one side and “humanity” on the other. Playing chess is very “human,” but Deep Blue is a robot that can kick everybody’s ass at chess. You can claim that “Deep Blue” is not “a robot,” but come on: just put a tin face on him and give him a manipulator arm. Instant “robot.” Robotic has never been an issue of mechanical men versus flesh men, like in a Flash Gordon episode.

The stuff we call “robotics” today is more like Google’s “robot car,” which is not some Karel Capek man-shaped “robot” of the 1920s; the Google Car is the Google Stack with wheels attached to it. Similarly, “Google Glass” isn’t virtual-reality supergoggles, it’s the Google Stack with a camera, Android mobile software and a head-mounted display. Will they “take your jobs?” How could they not?

If you lose your job as a bus driver because a Google Bus took your job, you didn’t lose it to a “robot,” you lost your enterprise to Google, just like the newspapers did. Don’t bother to put a sexbot face on the silly thing, it’s Larry and Sergei & Co. Go find a musician and buy him a drink.

Fighter pilots are “losing their jobs to robots,” to aerial drones. Are those the “dull dirty and dangerous” jobs? Heck no, because fighter jocks are romantic folk heroes, like Eddie Rickenbacker and the Red Baron and George Bush 1.0. When most flight work is carried out by “robots” (actually by GPS systems and databases, but so what), are we somehow going to discover a more refined and human way to fly? Will we be liberated to fly in a more spiritual, humanistic, Beryl Markham poetic aviatrix kind of way? I very much doubt that. I’m pretty sure we’ll stop “flying” entirely, even if we anachronistically claim we’re “flying” when we’re zipping around in sporty ultralights letting drone systems do all the labor.

Bookstore clerks never had “dull, dirty, dangerous” work, they were the mainstays of humanistic commerce actually, but Amazon is a Stack. Amazon’s all about giant robot warehouse distribution logistics. It’s all databases and forklifts in the Amazon stack, so of course “robots” took the jobs of bookstore clerks. Bookstore clerks imagined they were chumming around with the literate community turning people on to Jane Austen, but the high-touch, humanly clingy aspect of this line of work changed nothing much about its obsolescence.

So it’s not that “robots” take “our jobs.” It’s more a situation of general employment precarity where applications built for mobile devices and databases can hit pretty much anybody’s line of work, more or less at random, without a prayer of effective counter-action. Right? Let’s move right along, then!

That being the case, “what ought to be done?” Well, if job security of all kinds is going to be made precarious indefinitely, then the sane, humane thing to do is clearly to socialize security and put everybody on a guaranteed annual income. Brazilian-style socialism: keep your nose clean, keep the kids in school, and we fee you off and you can go buy whatever produce the robots have cooked up lately.

One might also invent some kind of Stack Fordism, where Facebook pays you enough to hang out on Facebook making Facebook more omniscient. It’s a lot cheaper than putting the unemployed into prison.

Obviously the American right-wing isn’t gonna go for this wacky liberal scheme; bailing out the “takers” of the 47% is their worst Randroid nightmare. But what people never understood about the John Henry story is that we have no steam hammers left. The robots “take your job” and then the robots *keep changing at a frantic pace,* the robots have the lifespans of hamsters. We’ve still got plenty of muscular, human John Henries, but his steam hammers are all extinct.

Look what happened to Nokia. These Nokia guys had the classic Wired magazine bulletproofed dream jobs. They’re not John Henry. They’re creative class, computer-literate, inventive, super-efficient, global, digital, Asperger’s high-IQ types… They got annihilated in 18 months. Not by “robots” but by Google and Apple. However, well, same difference really.

What kind of “jobs” do Republicans have to offer themselves, when their nominee was a corporate raider, and their top financier is a weird Jewish casino owner up to the eyebrows in Macao? That’s not exactly the Protestant work ethic happening, so, well, I dunno.

It might still work, it just needs more political pretzel-bending. Don’t use the phrase “guaranteed income” – farm it out to Fox News for semantic re-framing. Toss in a “values requirement”: your annual income requires you to wear Mormon undies, go to tent revival meetings, and own and display a handgun. They’d line up for it.

Photo from Rhizome

John Shirley: “empathy may be a precious commodity in the future”

In the wake of the Connecticut shootings, John Shirley posted on Facebook this excerpt from a text he wrote as the basis for a TedX talk:

“People who are quadriplegic have stated that they feel less emotion than they did, when they could still feel their entire bodies. The projection of the self into electronics reduces our relationship to the body, the seat of our emotions, and for several reasons that might lead to an increase in psychopathology.

“And empathy may be a precious commodity in the future. Most people unconsciously cut off their empathy when they’re feeling endangered–when the population increases to 8 and 9 and 10 billion, we may instinctively become, as a race, proportionately less empathetic–unless we actively struggle against that kind of degeneracy.”

Christopher Alexander and “A Pattern Language”

Great post about Christopher Alexander’s work and influence via The Permaculture Research Institute of Australia, originally published at the Metropolis website, which followed up with posts on “The Sustainable Technology of Christopher Alexander” and “The Living Technology of Christopher Alexander.” The authors highlight Alexander’s emphasis on patterns, context, and a whole-systems vision. He was writing as an architect, but his influence has been more widespread.

an earlier generation of computer programmers, organization theorists, design theorists and many others, were struggling then to figure out how to generate and manage the large new design structures of that era — computer software being one prominent example. Alexander gave them some very helpful conceptual tools to do that…. In essence, the tools were patterns: not things, but relations of things, which could be identified and re-combined and re-used, in a language-like way.

The article goes on to say that Alexander’s work has “…amounted to a kind of technological critique, revolving around the observation that we’re doing something wrong in the way we make things. We’re substituting an oversimplified model of structure-making — one more closely related to our peculiar hierarchically limited way of conceiving abstract relationships — in place of the kinds of transformations that actually occur regularly in the universe, and in biological systems especially.”

Ours is a much more limited, fragmentary form of this larger kind of transformation. The result of this problem is nothing less than a slow unfolding technological disaster. We know it as the sustainability crisis.

That’s where this discussion touches on what’s happening today — economically, ecologically, and culturally. Growing numbers of people do recognize that we have to get our houses in order. But whose house, to what extent, and in what way? That’s the big question of the day.

What Alexander argues is that we have to make some very fundamental reforms — not only in our specific technologies, but in our very way of thinking about technology. We have been isolating things, as mechanical sub-entities, and manipulating them. That works quite well, but only up to a point. As any systems theorist or ecologist will tell you, the context, not the thing, is the key.

So it seems that we have ignored an incredibly important aspect of natural systems — namely, the fact that every structure is embedded in a larger structural context, and ultimately, in the entire structure of the cosmos itself. What Alexander offered was not just the recognition of this truth, but the basis of a new technology that could incorporate it.

Spaun is not “consciousness in a box”

The hype about the “neuron brain model” Spaun made me think of my skeptical FringeWare Review piece about storing or replicating human consciousness, “Consciousness in a Box.” Sci-fi culture has set the assumption that construction of an “artificial brain” is not only possible, but inevitable; I’ve argued that it’s unlikely, if not impossible, to build a machine that replicates human cognition. Context is important: however we came to “think” in the way we do, to be conscious, sentient entities, that won’t be replicated in a bundle of switches, however slick, fast, and capable. Spaun, in fact, is somewhat less than the hype suggests:

The first thing to point out is that Spaun doesn’t learn anything. It can be arranged to tackle eight pre-defined tasks, and it doesn’t learn any new tasks or modify the way it performs existing tasks. The whole system is based on the Neural Engineering Framework (NEF), which can be used to compute the values of the strengths of connections needed to make a neural network do a particular task. If you want a neural net to implement a function of the inputs f(x), then NEF will compute the parameters for a leaky integrate-and-fire network that will do the job. This is an interesting approach, but it doesn’t show any of the plasticity that the real brain and real neural networks show.
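To make the building block the excerpt mentions concrete, here’s a toy sketch of a single leaky integrate-and-fire neuron – my own illustration with made-up parameter values, not code from Spaun or any NEF software. The membrane voltage leaks toward rest, input current drives it up, and a spike fires when it crosses threshold:

```python
def lif_spike_times(current, dt=0.001, tau=0.02, v_th=1.0):
    """Simulate one leaky integrate-and-fire neuron.

    current: list of input-current samples, one per time step of dt seconds.
    Returns the times (in seconds) at which the neuron spiked.
    """
    v = 0.0
    spikes = []
    for step, i_in in enumerate(current):
        # Euler step of the leaky dynamics dv/dt = (i_in - v) / tau
        v += dt * (i_in - v) / tau
        if v >= v_th:
            spikes.append(step * dt)  # threshold crossed: emit a spike
            v = 0.0                   # reset the membrane voltage
    return spikes

# A stronger constant input drives a higher firing rate -- this rate
# coding is what NEF exploits when it solves for connection weights
# that make a population of such neurons approximate a function f(x).
weak = len(lif_spike_times([1.5] * 1000))    # 1 second of weak drive
strong = len(lif_spike_times([3.0] * 1000))  # 1 second of strong drive
```

Note that nothing here learns: the dynamics are fixed, which is exactly the excerpt’s point about the lack of plasticity.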

If anything, this approach is more like the original McCulloch and Pitts networks, where artificial neurons were hand-crafted to create logic gates. For example, you can put neurons together to create a NAND gate, and from there you can use them to implement a complete computer – a PC based on a Pentium, say – using the neuronal NAND gates to implement increasingly complex logic. It would all work, but it wouldn’t be a thinking brain or a model of a neuronal computer.
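The McCulloch-Pitts point is easy to sketch – again a hypothetical toy of my own, not anything from Spaun. A McCulloch-Pitts neuron fires iff the weighted sum of its binary inputs reaches a threshold; with inhibitory weights you get a NAND gate, and since NAND is functionally complete, any logic circuit follows:

```python
def mp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts neuron: outputs 1 iff the weighted sum
    of its binary inputs reaches the threshold, else 0."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

def nand(a, b):
    # Two inhibitory weights of -1 with threshold -1: the neuron
    # fires in every case except when both inputs are active.
    return mp_neuron([a, b], weights=[-1, -1], threshold=-1)

# NAND is functionally complete, so every other gate (and, in
# principle, a whole CPU) can be wired up from neuronal NANDs:
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
```

It would all compute correctly, which is exactly the excerpt’s point: correct logic out of neuron-shaped parts is still just logic, not a model of thinking.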

If we ever do build a “thinking machine” that is to any degree autonomous, I’m certain it won’t replicate human consciousness or thought processes – it’ll have its own way of “thinking.”

Mindjack

Pete Rothman’s published a post at h+ on Donald Melanson’s brilliant neophiliac website Mindjack. I was on Mindjack’s board at one time, and contributed a few pieces to the site, including “Nodal Politics,” a chapter from my unpublished book Virtual Bonfire. In that particular piece, I was considering the potential for the Internet to serve as a platform for political organizing. Many if not most of the Mindjack authors were members of Howard Rheingold’s Electric Minds community, originally formed as a for-profit ad-based social site. (There’s a whole other interesting story about the sale of Electric Minds and the attempt to preserve the community as the platform changed hands.)

I don’t even remember writing a post at Mindjack about SXSW 2002 – post-dotcom-bust – but there it is.

This year’s South by Southwest Interactive conference was lean and mean – attended mainly by the core group of edgy ‘net whackadistas, the conference had an interesting vibe, like “Wow, glad the goddam dotcom splurge is over, let’s get back to what we were doin’…” And what we were doin’ had real depth; it was way more compelling than ecommerce or net.publishing, the kinds of projects MBAs brought to the table when they started calling the Internet an “industry” and creating the concept of the IPO casino. Before all that stuff happened we were thinking about open and free paradigms for software development, technologies for community, new and better ways to tell our stories. We were re-inventing ourselves as cyborgs, humans enhanced by accelerated technologies, looking for ways to nurture each other and share ideas over faster, increasingly accessible networks. And though many were a little tired, a little disoriented, a little uncertain about where they were going, there was no question that the crowd at this year’s SXSW was still committed to Internet technology and the web. Sadder, wiser, more grounded, but still eager to build.

Innovation is Madness

Genius architect Pliny Fisk of the Center for Maximum Potential Building Systems is featured in a GE Focus Forward short film contest semifinalist, “Innovation is Madness.” “This whole idea that I am a mad scientist essentially comes from the fact that I have created a situation where madness can happen safely…”

Innovation Is Madness | Mark Decena from Focus Forward Films on Vimeo.

INNOVATION IS MADNESS is a Semifinalist in the $200,000 FOCUS FORWARD Filmmaker Competition and is in the running to become the $100,000 Grand Prize Winner. It could also be named an Audience Favorite if it’s among the ten that receives the most votes. If you love it, vote for it. Click on the VOTE button in the top right corner of the video player. Note that voting may not be available on all mobile platforms, and browser cookies must be enabled to vote.

Pliny Fisk III was one of the founding members of the green building movement. In 1975 he co-created the Center for Maximum Potential Building Systems, a non-profit education, research, and demonstration organization specializing in life cycle planning and design. Shooting a video series for the US Green Building Council, we were introduced and subsequently fell in love with Pliny and his merry band of crazies. CMPBS has not only become a collaborative model for invention, but a physical space for innovation to happen. It’s time for the inmates to run the asylum.

Bruce Schneier: Liars and Outliers

Bruce Schneier

Check out our conversation on the WELL with security expert Bruce Schneier, who among other things is responsible for the Crypto-gram Newsletter. In this conversation, he’s discussing his book Liars and Outliers: Enabling the Trust that Society Needs to Thrive. Because I’ve been thinking a lot lately about the value and erosion of trust, this book and the conversation on the WELL are especially resonant with my own focus and thinking.

In the book, I wander through a dizzying array of academic disciplines: experimental psychology, evolutionary psychology, sociology, economics, behavioral economics, evolutionary biology, neuroscience, game theory, systems dynamics, anthropology, archeology, history, political science, law, philosophy, theology, cognitive science, and computer security. It sometimes felt as if I were blundering through a university, kicking down doors and demanding answers. “You anthropologists: what can you tell me about early human transgressions and punishments?” “Okay neuroscientists, what’s the brain chemistry of cooperation? And you evolutionary psychologists, how can you explain that?” “Hey philosophers, what have you got?” I downloaded thousands — literally — of academic papers. In pre-Internet days I would have had to move into an academic library.

What’s really interesting to me is what this all means for the future. We’ve never been able to eliminate defections. No matter how much societal pressure we bring to bear, we can’t bring the murder rate in society to zero. We’ll never see the end of bad corporate behavior, or embezzlement, or rude people who make cell phone calls in movie theaters. That’s fine, but it starts getting interesting when technology makes each individual defection more dangerous. That is, fishermen will survive even if a few of them defect and overfish — until defectors can deploy driftnets and single-handedly collapse the fishing stock. The occasional terrorist with a machine gun isn’t a problem for society in the overall scheme of things; but a terrorist with a nuclear weapon could be.

Are computers intelligent?

Bruce Sterling with Alan Turing bot at the Turing Centenary Symposium

At Reality Augmented Blog, I recently posted a Storify of my live tweets from Bruce Sterling’s talk at the Turing Centenary Symposium at the University of Texas. Bruce talked about Turing’s investigation into “whether or not it is possible for machinery to show intelligent behaviour” and the Turing test, which is meant to determine how well a computer can at least seem intelligent by human standards. To consider this question, you might think you’d have to define thinking (cognition, consciousness, etc.), but instead of taking on that difficult task, Turing changed the question from “Do machines think?” to “Can machines do what we (as thinking entities) can do?” That’s really a different question, less metaphysical and more about comparing manifestations of thinking than comparing processes of thinking.

Bruce noted in his talk an aspect of the Turing test that doesn’t get much attention: it was originally about gender. In his paper “Computing Machinery and Intelligence,” Turing described the game as “played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman.” He goes on to say

We now ask the question, “What will happen when a machine takes the part of A in this game?” Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, “Can machines think?”

So as Bruce notes, the actual Turing test is for a machine with a woman’s sensibility. The gist of his talk reminded me of conversations I’ve had with Sandy Stone, a transgender scholar who has spent years studying identity hacking online and off. I brought up the question of a man deciding to appear online as a woman, and how real that transformation can be. If you’re a man and decide to be a woman (or vice versa), you can’t quite make the authentic switch, because gender entails years of cultural and behavioral conditioning. If you’ve been contextualized as a male, you don’t become female by changing your name, your voice, your dress, even your body.

In the context of the conversations with Sandy, the subtext always seemed to be about liberation from the trappings of gender – you don’t have to be “a man” or “a woman,” you can just be you. But this has relevance, not just in terms of gender switching, but with any attempt at transformation. And it has implications for the discussion of machine intelligence. Machines can’t “become human” or be like humans, because they have no experience as humans, and you can’t program the embodied human experience.

You also can’t program “consciousness” – puny humans aren’t even clear what consciousness is, and we know that things like “consciousness” and “awareness” and “thinking” can be quite subjective and hard to quantify. So when we talk about “artificial intelligence” or “machine intelligence,” that word “intelligence” can be misleading. It’s not about making a machine like a human, it’s about seeing how well a machine can simulate the human. The Turing test is really about how clever we are at programming a bot that does heuristics well and can “learn” to hold its own in a human conversation. It’s interesting to bring gender into it – to simulate the human, a bot would be one or the other.

Scene from Metropolis: Rotwang and his robot in female form, his lost-love simulation

Bruce: “Why not ask ‘can a computational system be a woman?’” This made me think of Rotwang’s remaking of Hel in Metropolis, and how she’s repurposed as a simulation of Maria… a robot designed to simulate the female form. Is she a mechano-electronic woman? Or just a bag o’ bytes? More compelling, I think, is the concept of the cyborg, originally described as a biological entity that’s manufactured and has some machine components. More recently, we’ve come to think of cyborgs as “ordinary” humans augmented by digital or other technology – e.g. anyone with a smart phone or a computer could be considered a cyborg. My colleague Amber Case writes about “cyborg anthropology,” acknowledging that synergies within human-machine interaction are transformative, and require new methods and fields in the study of humanity. I think cyborgization is more interesting and more real than the Kurzweil sense of “artificial intelligence” (machines “smarter” than humans that become self-aware – HAL 9000 is a mythical beast; computers may be capable of processes that seem intelligent, but back to Bruce’s point, computers are not anything like humans).

Turing himself said “the idea behind digital computers may be explained by saying that these machines are intended to carry out any operations which could be done by a human computer.” On the other hand, Gurdjieff said “man such as we know him, is a machine.” A very complicated machine, he noted elsewhere.

My point in all this is that humans are not machines and machines won’t become human. We’re confused on that point, likely because of a larger metaphysical confusion, a confusion about who and what we are, our place in the universe, and the nature of the various human analogs, similar but different processes, that we see in the egosystem. (That’s not a misspelling…)

Bruce Sterling: “I fear posterity will condemn us for being too clever, for failing to speak about the obvious in an immediate lucid way. We need a new aesthetic with a strong metaphysics. How we get there, I don’t know.”
