post

Trends 2015

Monkeys in Space

Here’s a list of trends I see going into 2015, created for this year’s “State of the World” conversation.

Privatization of outer space: A number of companies are developing spaceware, and there’s one nonprofit that’s formed to colonize Mars by 2023 (http://en.wikipedia.org/wiki/Mars_One). Is the investment entirely speculative, or do we have clear business models driving a potential new space age?

Currency revolution: a number of alternative currencies have appeared, most notably the cryptocurrency Bitcoin. There are also technologies for digitally mediated barter. How will these be integrated into existing economic systems? Are we really looking at a (more? or less?) radical transformation of global economies?

AI/robotics: we’re beginning to see practical, usable applications of robotics, and there’s much talk of evolving artificial intelligence and possible singularity.

Alan Turing, via Benedict Cumberbatch, is getting some attention. When asked in “The Imitation Game” whether machines will ever think like humans, he scoffs – that’s the wrong question. Machines may think, but not “like humans.” Much of the singularity talk doesn’t get this point, but is rooted in anthropomorphism, which makes about as much sense as a golem emerging from a carefully-shaped clay effigy.

We like to think there’s no intelligence that ain’t human, but that’s a shadow of anthropocentric hubris. As we get into robotics and AI in a bigger, industrial-strength, way, what will they teach us about intelligence, human and other?

Practical backlash against the 1% and hyper-neoliberalism: the political pendulum swings persistently, and it doesn’t make human sense to roll backwards to some sort of feudal society. Also, propaganda only works so far before practical intelligence engenders some degree of critical thinking. Okay, I’m being hopeful here, but I believe the extreme factions in the civil cold war du jour will be overcome by those who are more balanced, reasonable, and practical. 2015 could be the turning point; I’m waiting to hear the alarm ring.

Internet of things: There’s buzz around the IoT now, probably not altogether practical, but driving investment that could fund innovation. We ask the wrong questions about it, i.e. “why do I want my toaster to talk to my refrigerator?” We should be considering what “things” are most practical to network, and the pro and con implications. Are there security implications? Are we depending too much on networks, creating too great a vulnerability to network failure?

Cyberwars, hacktivism, crypto activism: Networked information systems have inherent vulnerabilities, increasingly exploited by various actors for various reasons. To the extent that we live our lives online and invest in our online identities, we’re subject to these vulnerabilities. This is increasingly obvious, and the question for any one of us is, how vulnerable have I become, and how do I mitigate risk? This is a question for individuals, corporations, and governments. Mitigation can create obstructions and limit the value of networks, so we have to think hard about the risks we’re willing to take and the measures we’re willing to adopt to limit those risks. It’s also clear that governments (and non-governmental movements) will engage in cyberwar – to what extent will some of us suffer collateral damage from those engagements?

Network fatigue: Expect to see more strategic cord-cutting: limiting online activity generally and persistently, or perhaps periodically (“no Facebook for 30 days”). Response to information overwhelm is inevitable.

“New democrats”: Liberal entities like the Democratic party in the U.S. have proved ineffective as alternatives to well-organized corporate conservatives. The health of societies depends on a balance of the two approaches characterized simplistically as “left vs right.” Correction of the current imbalance is inevitable, but will likely involve entities that are nascent or don’t exist yet, vs the established entities of the left, which seem irrelevant and obsolete, partly because they have sought to compete by identifying with their opponents, rather than by emphasizing alternatives.

One such trend could be a middling one: a rejection of polarization and an emphasis on a practical middle path between “left wing” and “right wing.”

Demilitarization of police: Militarization of police after 9/11 may have seemed like a good idea at the time, but none of us wanted to create a police state, which is a potential effect. Going forward, we’ll be rethinking the role of police departments in communities, undoing the downside of the militarization efforts, and considering how to respond effectively to potential terrorist acts within borders without confusing police objectives with military objectives.

Crowdsourcing medical solutions: smart patients will have more of a role in evolving therapies, and have more input into our understanding of human systems and response to disease. Participatory medicine will become more established. Medical research will consider patient feedback to get a better sense of complex contextual factors affecting health. More people will do granular “quantified self” tracking, and there will be systems to aggregate and analyze this information, impacting our understanding of prevention as well as disease.

post

Bruce Sterling at SXSW 2013

post

Jamais Cascio at Inkwell

Jamais Cascio

Futurist Jamais Cascio is holding forth about scenarios, foresight, and climate change at Inkwell on the WELL. If you have comments or questions for the conversation, send to inkwell at well.com.

So here’s one of the nasty, generally unstated truths about climate disruption: by and large, the rich countries (the primary historical source of greenhouse gas emissions) will very likely weather climate disruptions much more readily than poor countries (historically *not* greenhouse powerhouses). This is in part due to geography — the equatorial region’s going to get hammered by global warming, and the closer-to-the-poles regions less so — but mostly due to money. The US, Europe, and Japan will be more able to afford to adapt than will China, India, or other up & coming developing nations. Australia is an exception on the geography side, and a test case in how well a rich nation can adapt.

At least in the near-medium term; left unchecked, climate disruption hoses everyone by the end of the century.

Your sense that the Pacific Northwest is one of the better places to go in the US is probably accurate. Not sure that Seattle itself is a good spot, simply due to how close it is to sea level. Portland’s a decent option, though.

Texas residents should pay close attention to what’s happening in Australia right now — that’s your likely (uncomfortably near) future.

As a general rule, you want to be further north and well above sea level. Storm systems in the western Atlantic seem to be getting charged by climate disruption more so than storms in the eastern Pacific, so you’ll probably want to be well away from the coastline in the US Northeast. Also, bear in mind that global warming means increased (a) energy in the atmosphere (driving storms) and (b) ability for the atmosphere to hold moisture, so winter storms will probably be bigger deals.

Europe’s problem is that most of the northern cities and regions aren’t accustomed to very hot summers, and don’t have the necessary infrastructure to withstand the heat (remember the heat wave that killed thousands in Europe a few years ago — they were by and large killed by the lack of air conditioning). That’s not impossible to fix. Power lines/stations that aren’t built for the heat may be a bigger issue.

To be clear, nobody gets a pass on the impacts of global warming. Water access, loss of farmland, internal population displacement*, novel pests & diseases will be big problems in the rich countries as well as the poor — it’s just that the US, etc., will have more resources to draw from to deal with these problems.

post

State of the World 2013: read the conversation

Bruce Sterling and I (with substantial contributions from others) conversed online for two weeks about the state of the world, as we do every year; that talk ended yesterday, but is archived for your perusal:

State of the World 2013

Here’s my concluding post, in response to a post by Gail Williams on war as metaphor and war as hard reality:

Gail, your post makes me think about the perception of (or,
trendier, optics for) war post WWII, sanitized by the many postwar
films and accounts. Those who knew better kept quiet. Meanwhile those
of us who grew up in the 50s were deluded; we played war games, it was
fun. Vietnam taught us better, or I should say, taught us bitter.
Bitter disillusionment.

Drone war reduces risk but, arguably, increases the probability of
collateral damage. In fact, in war all damage could be characterized as
collateral damage, as powerful elders, safely away from the front,
send the young and innocent, true believers, into battle.

Hopefully by now many more of us, a majority, understand that war is a
nightmare to be avoided. And the war metaphor doesn’t serve us all
that well.

We won’t end rape by declaring war on it. We’ll end rape through
education, cultivation of sensitivity and empathy, rethinking the
meaning of gender difference.

We won’t end poverty by declaring war on it, or by throwing money at
it. We’ll end poverty by caring about it.

We won’t end drug problems by declaring war on drugs. We’ll end drug
problems by understanding why and how drugs become a problem, by
treating addiction as a very human issue, maybe a disease, not a crime.

post

Robots

Robot photo from Rhizome

My favorite-so-far Bruce Sterling post in the State of the World conversation:

“Following on from John Payne’s comments in <76>, are the robots
coming for our jobs? Is a certain amount of unemployment going to end
up as part of the system and, if so, what happens next?”

*It’s so interesting to see this perennial question coming into vogue
once again. When I was a pre-teen first discovering “science fiction,”
that automation dystopia story was all over the place. Even on the
cover of TIME magazine. See this Artzybasheff computer monster, all
busy stealing guys’ jobs? Looks oddly familiar, doesn’t it?

Heckuva commercial artist, Artzybasheff

Of course that issue pre-dates me by a long chalk. It’s also the folk
song of John Henry the Steel-Drivin’ Man, who breaks his heart
defeating the boss’s Steam Hammer.

I can tell you what’s NOT gonna happen with “robots.” Nobody’s gonna
defeat the logic of the assembly line by starting a Pre-Raphaelite Arts
and Crafts commune where people shun the Robot and make hand-made wall
tapestries. That’s been tried eight thousand different times and
places. It never works for anybody who’s not Amish.

Framing the issue as “robots coming for our jobs” is rather a moot
point anyhow, because the blue-collar guys who “own” assembly “jobs”
have zero input on whether robots get deployed or not. What practical
difference does that question make? No modern salaried employee
anywhere has the clout to defend a “job” from “the robots.” The
investors deploying the robots are serenely unworried about Luddite
saboteurs or crippling labor-union strikes. Those possibilities of
working-class resistance were de-fanged ages ago.

So, you know, either they automate some processes at the cost of human
labor, or they don’t. Somebody’s always gonna try it, and in some
areas it works out rather better than it does in others, but the basic
robot story isn’t robots, it’s “whatever happens to musicians will
eventually happen to everybody.”

Apparently this latest little robot-vs-job flap gets most of its
impetus from two things, a cool new assembly robot created by Rodney
Brooks and a typically Emersonian intervention from Kevin Kelly.

So, here I’ll tell my Rodney Brooks story. I met the guy once, at
some forgettable event in Washington DC, and after the panels were
over, Prof Brooks and I ventured into the bar.

So, I was nursing a whiskey sour, and I was like: “So, Doctor Brooks,
I know a little about your work, and –”

“Call me Rod!”

“So, Rod — level with me about this MIT scheme you have to automate
the movement of insect legs. How’s that supposed to work, exactly?”

So, Rod was nothing loath, and he was pretty well going at it hammer
and tongs, while I was asking the occasional provocative sci-fi style
question — stuff like “so, how does the cube-square law work out when
the robo-insects are walking on the ceiling?” — because we sci-fi
writers dote on MIT.

Then I happened to glance across the bar, and I saw that our bartender
was “frozen in disbelief.” He was so amazed by what Brooks was saying
that his glass and his cleaning cloth were rigid in his unmoving arms.
This bartender had the affect of a sci-fi movie android with a power
failure. It was the only time I’ve ever seen that figure of speech as a
genuine aspect of human behavior.

So, I give Rodney Brooks a lot of credit, he’s a fascinating guy, I’m
glad to see him kept busy on things other than, for instance, an
MIT-style Vannevar Bush Manhattan Project at an undisclosed desert
location. I’m confident that Rod’s new manipulator is pretty snazzy.

But let me ask this: if an assembly-line device is going to “take our
jobs,” wouldn’t a 3dprinter also “take our jobs?” Why do we treat them
so differently? I mean, they’re both basically the same device:
automated mechanical systems precisely moving loads in three dimensions
by following software instructions.

So how come the Brooks robot is framed as a sinister job-stealing
robot, while a 3dprinter is framed as a printer, like, a cool nifty
peripheral? Didn’t digital printers also take a lot of “people’s
jobs?”

Besides, a Brooks robot is just imitating human-scale movement while
3dprinters create objects in micron-accurate ways that no human can
possibly do at all. So clearly the 3dprinter is a more radical threat
to the status quo.

Along this same line: Chris Anderson, late of WIRED, has got a new
book out about “Makers.” I read it. It’s all about how network society
cadres with 3dprinters and open-source schematics and instructables
are going to create a “Third Industrial Revolution.” Great, right?
Okay, maybe Makers take over the world or they don’t, but how come
nobody says “A Third Industrial Revolution means those Makers are going
to take our jobs?” Because they would, wouldn’t they? How could they
not?

Shouldn’t this prospect be of larger concern than Rodney Brooks’
latest gizmo, one among hordes of assembly line robots that have been
around for decades now? An “Industrial Revolution” should *almost by
definition* take everybody’s jobs. But the general reaction to
Anderson’s book is that the guy is *too optimistic*, that he drank his
own tech-hype bathwater and is having way too much fun. Isn’t there an
inconsistency here?

Then there’s the latest Kevin Kelly argument, which is more or less
about how robots are gonna take everybody’s jobs, but fine, that’s
great, especially if they’re sexbots. There’s nothing sparkly-new
about this line of reasoning, it’s very Automation Takes Command. The
pitch is that robots take the dull dirty and dangerous jobs, which
frees us to become, I dunno, humane speculative creatives like Kevin
Kelly, I guess.

However, I don’t believe automation has ever worked like that; there’s
no creeping wave-line with “robotics” on one side and “humanity” on
the other. Playing chess is very “human,” but Deep Blue is a robot
that can kick everybody’s ass at chess. You can claim that “Deep Blue”
is not “a robot,” but come on: just put a tin face on him and give him
a manipulator arm. Instant “robot.” Robotics has never been an issue
of mechanical men versus flesh men, like in a Flash Gordon episode.

The stuff we call “robotics” today is more like Google’s “robot car,”
which is not some Karel Capek man-shaped “robot” of the 1920s; the
Google Car is the Google Stack with wheels attached to it. Similarly,
“Google Glass” isn’t virtual-reality supergoggles, it’s the Google
Stack with a camera, Android mobile software and a head-mounted
display. Will they “take your jobs?” How could they not?

If you lose your job as a bus driver because a Google Bus took your
job, you didn’t lose it to a “robot,” you lost your enterprise to
Google, just like the newspapers did. Don’t bother to put a sexbot
face on the silly thing, it’s Larry and Sergei & Co. Go find a
musician and buy him a drink.

Fighter pilots are “losing their jobs to robots,” to aerial drones.
Are those the “dull dirty and dangerous” jobs? Heck no, because
fighter jocks are romantic folk heroes, like Eddie Rickenbacker and the
Red Baron and George Bush 1.0. When most flight work is carried out
by “robots” (actually by GPS systems and databases, but so what), are
we somehow going to discover a more refined and human way to fly? Will
we be liberated to fly in a more spiritual, humanistic, Beryl Markham
poetic aviatrix kind of way? I very much doubt that. I’m pretty sure
we’ll stop “flying” entirely, even if we anachronistically claim we’re
“flying” when we’re zipping around in sporty ultralights letting drone
systems do all the labor.

Bookstore clerks never had “dull, dirty, dangerous” work, they were
the mainstays of humanistic commerce actually, but Amazon is a Stack.
Amazon’s all about giant robot warehouse distribution logistics. It’s
all databases and forklifts in the Amazon stack, so of course “robots”
took the jobs of bookstore clerks. Bookstore clerks imagined they were
chumming around with the literate community turning people on to Jane
Austen, but the high-touch, humanly clingy aspect of this line of work
changed nothing much about its obsolescence.

So it’s not that “robots” take “our jobs.” It’s more a situation of
general employment precarity where applications built for mobile
devices and databases can hit pretty much anybody’s line of work, more
or less at random, without a prayer of effective counter-action.
Right? Let’s move right along, then!

That being the case, “what ought to be done?” Well, if job security
of all kinds is going to be made precarious indefinitely, then the
sane, humane thing to do is clearly to socialize security and put
everybody on a guaranteed annual income. Brazilian-style socialism:
keep your nose clean, keep the kids in school, and we pay you off and
you can go buy whatever produce the robots have cooked up lately.

One might also invent some kind of Stack Fordism, where Facebook pays
you enough to hang out on Facebook making Facebook more omniscient.
It’s a lot cheaper than putting the unemployed into prison.

Obviously the American right-wing isn’t gonna go for this wacky
liberal scheme; bailing out the “takers” of the 47% is their worst
Randroid nightmare. But what people never understood about the John
Henry story is that we have no steam hammers left. The robots “take
your job” and then the robots *keep changing at a frantic pace,* the
robots have the lifespans of hamsters. We’ve still got plenty of
muscular, human John Henries, but their steam hammers are all extinct.

Look what happened to Nokia. These Nokia guys had the classic Wired
magazine bulletproofed dream jobs. They’re not John Henry. They’re
creative class, computer-literate, inventive, super-efficient, global,
digital, Asperger’s high-IQ types… They got annihilated in 18
months. Not by “robots” but by Google and Apple. However, well, same
difference really.

What kind of “jobs” do Republicans have to offer themselves, when
their nominee was a corporate raider, and their top financier is a
weird Jewish casino owner up to the eyebrows in Macao? That’s not
exactly the Protestant work ethic happening, so, well, I dunno.

It might still work, just needs more political pretzel-bending. Don’t
use the word “guaranteed income,” farm it out to Fox News for semantic
re-framing. Toss in the “values requirement” that your annual income
requires you to wear Mormon undies, go to tent revival meetings and
own and display a handgun. They’d line up for it.

Photo from Rhizome

post

John Shirley: “empathy may be a precious commodity in the future”

In the wake of the Connecticut shootings, John Shirley posted on Facebook this excerpt from a text he wrote as the basis for a TedX talk:

“People who are quadriplegic have stated that they feel less emotion than they did, when they could still feel their entire bodies. The projection of the self into electronics reduces our relationship to the body, the seat of our emotions, and for several reasons that might lead to an increase in psychopathology.

“And empathy may be a precious commodity in the future. Most people unconsciously cut off their empathy when they’re feeling endangered–when the population increases to 8 and 9 and 10 billion, we may instinctively become, as a race, proportionately less empathetic–unless we actively struggle against that kind of degeneracy.”

post

Christopher Alexander and “A Pattern Language”

Great post about Christopher Alexander’s work and influence via The Permaculture Research Institute of Australia, originally published at the Metropolis website, which followed up with posts on “The Sustainable Technology of Christopher Alexander” and “The Living Technology of Christopher Alexander.” The authors emphasize Alexander’s emphasis on patterns, context and a whole-systems vision. He was writing as an architect, but his influence has been more widespread.

an earlier generation of computer programmers, organization theorists, design theorists and many others, were struggling then to figure out how to generate and manage the large new design structures of that era — computer software being one prominent example. Alexander gave them some very helpful conceptual tools to do that…. In essence, the tools were patterns: not things, but relations of things, which could be identified and re-combined and re-used, in a language-like way.

The article goes on to say that Alexander’s work has “…amounted to a kind of technological critique, revolving around the observation that we’re doing something wrong in the way we make things. We’re substituting an oversimplified model of structure-making — one more closely related to our peculiar hierarchically limited way of conceiving abstract relationships — in place of the kinds of transformations that actually occur regularly in the universe, and in biological systems especially.”

Ours is a much more limited, fragmentary form of this larger kind of transformation. The result of this problem is nothing less than a slow unfolding technological disaster. We know it as the sustainability crisis.

That’s where this discussion touches on what’s happening today — economically, ecologically, and culturally. Growing numbers of people do recognize that we have to get our houses in order. But whose house, to what extent, and in what way? That’s the big question of the day.

What Alexander argues is that we have to make some very fundamental reforms — not only in our specific technologies, but in our very way of thinking about technology. We have been isolating things, as mechanical sub-entities, and manipulating them. That works quite well, but only up to a point. As any systems theorist or ecologist will tell you, the context, not the thing, is the key.

So it seems that we have ignored an incredibly important aspect of natural systems — namely, the fact that every structure is embedded in a larger structural context, and ultimately, in the entire structure of the cosmos itself. What Alexander offered was not just the recognition of this truth, but the basis of a new technology that could incorporate it.

post

Spaun is not “consciousness in a box”

The hype about the “neuron brain model” Spaun made me think of my skeptical FringeWare Review piece about storing or replicating human consciousness, “Consciousness in a Box.” Sci-fi culture has set the assumption that construction of an “artificial brain” is not only possible but inevitable, but I’ve argued that it’s unlikely, if not impossible, to build a machine that replicates human cognition. Context is important: however we came to “think” in the way we do, to be conscious, sentient entities, that won’t be replicated in a bundle of switches, however slick, fast, and capable. Spaun, in fact, is somewhat less than the hype suggests:

The first thing to point out is that Spaun doesn’t learn anything. It can be arranged to tackle eight pre-defined tasks and it doesn’t learn any new tasks or modify the way it performs existing tasks. The whole system is based on the Neural Engineering Framework (NEF), which can be used to compute the values of the strengths of connections needed to make a neural network do a particular task. If you want a neural net to implement a function of the inputs f(x), then NEF will compute the parameters for a leaky integrate-and-fire network that will do the job. This is an interesting approach, but it doesn’t show any of the plasticity that the real brain and real neural networks show.

If anything, this approach is more like the original McCulloch and Pitts networks where artificial neurons were hand-crafted to create logic gates. For example, you can put neurons together to create a NAND gate and from here you can use them to implement a complete computer – a PC based on a Pentium, say, using the neuronal NAND gates to implement increasingly complex logic. It would all work but it wouldn’t be a thinking brain or a model of a neuronal computer.
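The McCulloch-Pitts construction described in the quoted passage is easy to sketch: a binary threshold “neuron” fires when its weighted input sum crosses a threshold, and with the right (inhibitory) weights one neuron computes NAND, from which all other logic follows. The particular weights and threshold below are illustrative choices, not anything taken from Spaun or NEF:

```python
def mp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts neuron: fires (returns 1) when the
    weighted sum of binary inputs meets the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def nand(a, b):
    # Inhibitory (negative) weights make the neuron fire in every
    # case except when both inputs are active -- i.e., NAND.
    return mp_neuron([a, b], weights=[-1, -1], threshold=-1)

# NAND is functionally complete, so in principle any logic circuit --
# even a whole CPU -- can be composed from such neurons:
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b):
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

print([nand(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [1, 1, 1, 0]
```

As the article notes, the result is a fixed circuit: nothing in this construction learns or adapts, which is the point of the comparison.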

If we ever do build a “thinking machine” that is to any degree autonomous, I’m certain it won’t replicate human consciousness or thought processes – it’ll have its own way of “thinking.”
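For reference, the “leaky integrate-and-fire” neuron that the quoted NEF description mentions is itself a very simple dynamical model: the membrane potential leaks toward the input, and the cell emits a spike and resets when it crosses a threshold. This is a minimal sketch with illustrative parameter values; `tau`, `v_thresh`, and the rest are assumptions for demonstration, not Spaun’s actual parameters:

```python
def simulate_lif(input_current, dt=0.001, tau=0.02,
                 v_thresh=1.0, v_reset=0.0, t_end=0.2):
    """Euler-integrate dV/dt = (I - V)/tau; record a spike and
    reset the potential whenever V crosses the threshold."""
    v, spikes = 0.0, []
    for step in range(int(t_end / dt)):
        v += dt * (input_current - v) / tau  # leaky integration toward I
        if v >= v_thresh:
            spikes.append(step * dt)         # spike time in seconds
            v = v_reset                      # reset after firing
    return spikes

# A stronger constant input drives a higher firing rate:
low = simulate_lif(input_current=1.5)
high = simulate_lif(input_current=3.0)
print(len(low), len(high))  # high fires more often than low
```

NEF’s contribution, per the quote, is computing the connection strengths so that populations of such neurons implement a desired function f(x); the neuron model itself is this simple.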

post

Mindjack

Pete Rothman’s published a post at h+ on Donald Melanson’s brilliant neophiliac website Mindjack. I was on Mindjack’s board at one time, and contributed a few pieces to the site, including “Nodal Politics,” a chapter from my unpublished book Virtual Bonfire. In that particular piece, I was considering the potential for the Internet to serve as a platform for political organizing. Many if not most of the Mindjack authors were members of Howard Rheingold’s Electric Minds community, originally formed as a for-profit ad-based social site. (There’s a whole other interesting story about the sale of Electric Minds and the attempt to preserve the community as the platform changed hands.)

I don’t even remember writing a post at Mindjack about SXSW 2002 – post-dotcom-bust – but there it is.

This year’s South by Southwest Interactive conference was lean and mean – attended mainly by the core group of edgy ‘net whackadistas, the conference had an interesting vibe, like “Wow, glad the goddam dotcom splurge is over, let’s get back to what we were doin’…” And what we were doin’ had real depth, it was way more compelling than ecommerce or net.publishing, the kinds of projects MBAs brought to the table when they started calling the Internet an ‘industry’ and creating the concept of the IPO casino. Before all that stuff happened we were thinking about open and free paradigms for software development, technologies for community, new and better ways to tell our stories. We were re-inventing ourselves as cyborgs, humans enhanced by accelerated technologies, looking for ways to nurture each other and share ideas over faster, increasingly accessible networks. And though many were a little tired, a little disoriented, a little uncertain about where they were going, there was no question that the crowd at this year’s SXSW was still committed to Internet technology and the web. Sadder, wiser, more grounded, but still eager to build.

post

Innovation is Madness

Genius architect Pliny Fisk of the Center for Maximum Potential Building Systems is featured in a GE Focus Forward short film contest semifinalist, “Innovation is Madness.” “This whole idea that I am a mad scientist essentially comes from the fact that I have created a situation where madness can happen safely…”

Innovation Is Madness | Mark Decena from Focus Forward Films on Vimeo.

INNOVATION IS MADNESS is a Semifinalist in the $200,000 FOCUS FORWARD Filmmaker Competition and is in the running to become the $100,000 Grand Prize Winner. It could also be named an Audience Favorite if it’s among the ten that receives the most votes. If you love it, vote for it. Click on the VOTE button in the top right corner of the video player. Note that voting may not be available on all mobile platforms, and browser cookies must be enabled to vote.

Pliny Fisk III was one of the founding members of the green building movement. In 1975 he co-created the Center for Maximum Potential Building Systems, a non-profit education, research, and demonstration organization specializing in life cycle planning and design. Shooting a video series for the US Green Building Council, we were introduced and subsequently fell in love with Pliny and his merry band of crazies. CMPBS has not only become a collaborative model for invention, but a physical space for innovation to happen. It’s time for the inmates to run the asylum.

post

Bruce Schneier: Liars and Outliers

Bruce Schneier

Check out our conversation on the WELL with security expert Bruce Schneier, who among other things is responsible for the Crypto-gram Newsletter. In this conversation, he’s discussing his book Liars and Outliers: Enabling the Trust that Society Needs to Thrive. Because I’ve been thinking a lot lately about the value and erosion of trust, this book and the conversation on the WELL are especially resonant with my own focus and thinking.

In the book, I wander through a dizzying array of academic disciplines: experimental psychology, evolutionary psychology, sociology, economics, behavioral economics, evolutionary biology, neuroscience, game theory, systems dynamics, anthropology, archeology, history, political science, law, philosophy, theology, cognitive science, and computer security. It sometimes felt as if I were blundering through a university, kicking down doors and demanding answers. “You anthropologists: what can you tell me about early human transgressions and punishments?” “Okay neuroscientists, what’s the brain chemistry of cooperation? And you evolutionary psychologists, how can you explain that?” “Hey philosophers, what have you got?” I downloaded thousands — literally — of academic papers. In pre-Internet days I would have had to move into an academic library.

What’s really interesting to me is what this all means for the future. We’ve never been able to eliminate defections. No matter how much societal pressure we bring to bear, we can’t bring the murder rate in society to zero. We’ll never see the end of bad corporate behavior, or embezzlement, or rude people who make cell phone calls in movie theaters. That’s fine, but it starts getting interesting when technology makes each individual defection more dangerous. That is, fishermen will survive even if a few of them defect and overfish — until defectors can deploy driftnets and single-handedly collapse the fishing stock. The occasional terrorist with a machine gun isn’t a problem for society in the overall scheme of things; but a terrorist with a nuclear weapon could be.

post

Bucky Day

Pesco posted a reminder at bOING bOING that today’s Buckminster Fuller’s birthday – worthy of celebration and remembrance!

http://storify.com/jonl/bucky-day

post

Are computers intelligent?

Bruce Sterling with Alan Turing bot at the Turing Centenary Symposium

At Reality Augmented Blog, I recently posted a Storify of my live tweets from Bruce Sterling’s talk at the Turing Centenary Symposium at the University of Texas. Bruce talked about Turing’s investigation into “whether or not it is possible for machinery to show intelligent behaviour” and the Turing test, which is supposed to determine how well a computer at least seems to be intelligent by human standards. To consider this question, you might think you’d have to define thinking (cognition, consciousness, etc.), but instead of taking on that difficult task, Turing changes the question from “Do machines think?” to “Can machines do what we (as thinking entities) can do?” That’s really a different question, less metaphysical and more about comparing manifestations of thinking than comparing processes of thinking.

Bruce noted in his talk an aspect of the Turing test that doesn’t get much attention: it was originally about gender. In his paper “Computing Machinery and Intelligence,” Turing described the game as “played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman.” He goes on to say

We now ask the question, “What will happen when a machine takes the part of A in this game?” Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, “Can machines think?”

So as Bruce notes, the actual Turing test is for a machine with a woman’s sensibility. The gist of his talk reminded me of conversations I’ve had with transgendered Sandy Stone, who spent years studying identity hacking online and off. I brought up the question of a man deciding to appear online as a woman, and how real that transformation can be. If you’re a man and decide to be a woman (or vice versa), you can’t quite make the authentic switch, because gender entails years of cultural and behavioral conditioning. If you’ve been contextualized as a male, you don’t become female by changing your name, your voice, your dress, even your body.

In the context of the conversations with Sandy, the subtext always seemed to be about liberation from the trappings of gender – you don’t have to be “a man” or “a woman,” you can just be you. But this has relevance, not just in terms of gender switching, but with any attempt at transformation. And it has implications for the discussion of machine intelligence. Machines can’t “become human” or be like humans, because they have no experience as humans, and you can’t program the embodied human experience. You also can’t program “consciousness” – puny humans aren’t even clear what consciousness is, and we know that things like “consciousness” and “awareness” and “thinking” can be quite subjective and hard to quantify. So when we talk about “artificial intelligence” or “machine intelligence,” that word “intelligence” can be misleading. It’s not about making a machine like a human, it’s about seeing how well a machine can simulate the human. The Turing test is really about how clever we are at programming a bot that does heuristics well and can “learn” to hold its own in a human conversation. It’s interesting to bring gender into it – to simulate the human, a bot would be one or the other.
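The “bot that does heuristics well” point can be made concrete with a minimal ELIZA-style sketch (my own illustration, not anything from Bruce’s talk): a handful of regex rules plus pronoun reflection is enough to keep up one end of a conversation, with no understanding behind it at all.

```python
# Minimal ELIZA-style heuristics: pattern matching and pronoun
# reflection simulate conversation without any comprehension.
import re

# Map first-person words to second-person so the bot can mirror input.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

# (pattern, response template) pairs; {0} is the reflected capture.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i think (.*)", "What makes you think {0}?"),
    (r".*\b(mother|father)\b.*", "Tell me more about your {0}."),
    (r"(.*)", "Please, go on."),  # fallback keeps the conversation moving
]

def reflect(fragment: str) -> str:
    """Swap pronouns word by word ('i am sad' -> 'you are sad')."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement: str) -> str:
    """Return the response for the first rule whose pattern matches."""
    for pattern, template in RULES:
        match = re.match(pattern, statement.lower().strip())
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."
```

The trick, of course, is that the “intelligence” is entirely in the interrogator’s projection; the program is doing nothing but string surgery.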

Scene from Metropolis: Rotwang and his robot in female form – Rotwang and his lost-love simulation

Bruce: “Why not ask ‘can a computational system be a woman?'” This made me think of Rotwang’s remaking of Hel in Metropolis, and how she’s repurposed as a simulation of Maria… a robot designed to simulate the female form. Is she a mechano-electronic woman? Or just a bag o’ bytes? More compelling, I think, is the concept of the cyborg, originally described as a biological entity that’s manufactured and has some machine components. More recently, we’ve come to think of cyborgs as “ordinary” humans augmented by digital or other technology – e.g., anyone with a smartphone or a computer could be considered a cyborg. My colleague Amber Case writes about “cyborg anthropology,” acknowledging that synergies within human-machine interaction are transformative and require new methods and fields in the study of humanity. I think cyborgization is more interesting and more real than the Kurzweil sense of “artificial intelligence” (machines “smarter” than humans that become self-aware). HAL 9000 is a mythical beast; computers may be capable of processes that seem intelligent, but, back to Bruce’s point, they are nothing like humans.

Turing himself said “the idea behind digital computers may be explained by saying that these machines are intended to carry out any operations which could be done by a human computer.” On the other hand, Gurdjieff said “man such as we know him, is a machine.” A very complicated machine, he noted elsewhere.

My point in all this is that humans are not machines and machines won’t become human. We’re confused on that point, likely because of a larger metaphysical confusion, a confusion about who and what we are, our place in the universe, and the nature of the various human analogs, similar but different processes, that we see in the egosystem. (That’s not a misspelling…)

Bruce Sterling: “I fear posterity will condemn us for being too clever, for failing to speak about the obvious in an immediate lucid way. We need a new aesthetic with a strong metaphysics. How we get there, I don’t know.”

post

Mark Dery dances the apocalypso

Mark Dery

I’m leading a two-week asynchronous discussion with erudite author and culture critic Mark Dery, whose provocative essay collection I Must Not Think Bad Thoughts has been turning my head on its axis. At the moment, we’re discussing apocalypse:

I *do* think we live in times of chaos and complexity, when society is “far from equilibrium,” as scientists who study dynamical systems like to say. Steven Pinker’s claim in _The Better Angels of Our Nature_ that violence is on the decline notwithstanding, most people’s experience of the wider world—which is to say, as a funhouse-mirror reflection in the media—is of an increasingly out-of-control place. Ideological extremism and lockstep partisanship are monkeywrenching the American political system—an article by Ezra Klein in the March 19 _New Yorker_ notes that ideological “rigidity has made American democracy much more difficult to manage”; the culture wars are reaching a boiling point, ginned up by backroom dealmakers like the Koch brothers, whose real agenda is simply to create the most deregulated, tax-free landscape in which to Do the Lord’s Work; and Angry White Guys are stockpiling guns and training their crosshairs on scapegoats, post-traumatically stressed by a black man in the Oval Office, the demographic rise of the nonwhite population, the sea change in households where women earn more than men, and the econopocalypse.

But if you’re shopping for apocalypses, the rough beast right around the bend is Envirogeddon. Come the middle of the 21st century, we—at least, those of us who can’t afford a climate-controlled biosphere lush with hydroponic greenery and an artesian well guarded by a private army—are going to be living in one of Ballard’s disaster novels. Global Weirding, as climate scientists call it, is *the* pressing issue of the near future, and I have every confidence my friends on the right will bury their heads in the sand on that issue, until the sand superheats and turns to glass.

If you’re not a member of the WELL but want to submit a comment or question, just scroll to the bottom of the page and look for “Nonmember: Submit a comment or question.” That links to a form that, when submitted, will send your comment or question to someone at the WELL who can add it to this discussion.