State of the World 2014

Bruce Sterling and I are into our annual “state of the world” mischief. [Link]

Google was going wild in early 2013, they were like android demigods.
Now Google is, all of a sudden, presto, Russia. Google is a
surveillance secret-police empire with spy binoculars on their faces.
Sergey Brin’s pet Moonshots are just a lame prestige show.

It’s sad, really. Larry and Sergey used to be the Not-Evil Guys, they
empowered the users and won their instinctive trust. Now, if Snowden
entered the boardroom of Google, Larry and Sergey would shriek in
falsetto like the Wicked Witches of the West and melt into two puddles
of black wax.

That doesn’t make Microsoft, Apple, Facebook and Amazon any better
than Google — Facebook in particular, oh my God — but it’s the first
time that these new titans of American industry have really looked
genuinely ugly. Just, nasty. Because they’re rich and powerful, but
they’re also narcs. They’re creeps and snoops. They’re police
informants.

They were kinda tricked into it — but everybody knows it, and their
unwillingness to face up to the stark embarrassment is an act of tacit
consent. The Brazilians, Germans, French, Italians, Russians, the
Chinese ten times over, everybody, they all know. It takes a while for
that kind of damage to the reputation to sink in, but it will.

2013 Top Ten (Social/Political/Technical) Culture Blasts

These are things I thought were important in 2013.

NSA Leaks and surveillance society

I always figured the NSA was watching, but it was still a shock to find how extensive surveillance had become – and it was disturbing to see clearly how surveillance of this kind was somebody’s job, something they would inscribe in how-to PowerPoint presentations. This realization via the Snowden leaks brought the panopticon home in a big way: as we move so much of our lives into massive databases, we’re increasingly trackable, increasingly exposed to those who know how to capture and analyze the data, and especially vulnerable to government scrutiny. But the NSA and government are only part of the story. We’re seeing widespread surveillance by both public and private entities – marketing analytics engines could be or become as robust as NSA tools; meanwhile, none of us has ownership of, or control over, our personal data.

In 2013 our level of trust was low and declining. We especially don’t trust governments and corporations with our data because we’re so increasingly aware of the potential for, if not the fact of, abuse. To some extent concerns are legitimate, and to some extent they emerge from a culture of paranoia that has evolved in the wake of mass media and network technology, which have had several relevant effects: greater awareness of abuses when they happen, feeding into myriad fictional surveillance and pursuit fantasies, and more recently the emergence of a social media panopticon. But the Snowden revelations make paranoia feel pretty rational.

Andrew Leonard has a good Salon piece about surveillance/sousveillance: http://www.salon.com/2013/12/27/how_to_defeat_big_brother/

Death of the Internet /DIY/free culture etc.

As the Internet has become the pervasive platform for media and commerce, it has ceased to be the “network of networks” of the 90s. As so many of us predicted, the Internet has been transformed into something more like the cable networks. Content and technology are increasingly locked down behind paywalls and other barriers. Even social media have become more professional, less DIY. Anyone can still participate, but few will capture attention or persistent mindshare as the Internet version of mass media has emerged, more conversational and less top-down than the 20th century version, but nothing like the transitional blogosphere. As small publishers moved from desktop publishing to the web, 2013 saw bloggers moving onto managed platforms like Facebook and Tumblr. We now have a media environment that includes a relatively small number of high-profile content sources, and smaller clusters of online conversation and sharing. Shirkification proceeds (referring to Clay Shirky’s predictions that just such a thing would happen). Question is, how will cream rise to the top? How will new voices emerge and capture attention? Or will they be excluded by stricter gateways and media dominance by a limited few? The promise of the Internet was that it could bring a vibrant mix of new perspectives and a cheerfully unmanageable confluence of cultures, but we lose that if network culture is dominated by a top-down mass media paradigm.

Boston Marathon bombing

The Marathon bombing was similar to the 9/11/2001 attack on the World Trade Center, though smaller scale and evidently involving only two Chechen Muslim perpetrators who didn’t seem to be acting as part of a larger conspiracy or movement like Al Qaeda. This seemed to be more the case of “another nut with a gun” (and some bombs). However I find it just as troubling, maybe more so, to see the bombing as part of an epidemic of random acts of senseless violence. Note also that there were 359 mass shootings in the USA in 2013. (http://www.reddit.com/r/GunsAreCool/wiki/2013massshootings)

The Tea Party gets elected

Through a combination of hard work, effective propaganda, big money, and possibly a heavy thumb on the voting scales, a number of Tea Party politicians have been elected to public office, have been empowered by their supposed popularity, and have managed to keep Congress from producing any effective legislative solutions. 2013 has been the year of peak Tea Party ascendance, much to the dismay of Democrats and pragmatic Republicans whose business-as-usual has been derailed. The debate about the role and extent of government may ultimately be healthy, if it doesn’t kill us first.

Pope Francis

As religious figures go, this is a breath of fresh air. Pope Francis, the first Jesuit pope, and the first pope from the Americas, is known for his humility and openness, and the simplicity of his demeanor. These are welcome traits in the leader of the world’s largest, and arguably most influential, Christian religious organization. He hasn’t sold all the Catholic gold, but he’s wearing less of it.

Economy, what?

We keep hearing that the economy will tank any day now, and for anyone who’s on the exasperating downside, that doesn’t seem so speculative. Tech is booming (but it could be a bubble), and there are signs of life in the world of manufacturing. Innovation is everywhere. However the American middle class is on the ropes, and much of the world’s money is socked away in Swiss bank accounts, i.e. out of play. And while there are many experts in the infosphere, nobody seems to have a definitive clue. There’s a lot of “next economy” talk, and we may very well see a collapse of traditional means of exchange and the ascendance of new forms – worker cooperatives, alternative currencies and barter systems, resilient communities, etc. These are gathering steam (and may have to be steam-driven, as fossil fuels burn away).

“Obamacare”

The Affordable Care Act (aka Obamacare) depends on the commitment of citizens and corporations to make it work. However opponents who see in the potential for broad mandated insurance a kind of socialism, where the strong support the weak, have undermined that commitment. Those who are healthy/wealthy don’t want the sick and the poor in their insurance pools, just as they don’t want their tax dollars spent on benefits or “entitlements” for the lower and middle classes. The actual launch of Obamacare was the best the opponents could’ve hoped for: the web technology to support ACA exchanges and enrollments was poorly planned and executed, and this seemed to validate the opponents’ arguments that the ACA would be a disaster. But the botched website development doesn’t say anything about the viability of the ACA system itself. While the law’s not ideal, it’s a step toward universal healthcare and improvement of the wacky, dysfunctional American healthcare system. As of this writing, the website’s working better, so we may be past that particular glitch. Meanwhile ideological wrangling over the complex (ergo not well understood) legislation sucked much of the political energy out of 2013.

Chelyabinsk meteor

What happens when an asteroid strikes the earth? We’ve often wondered, and the answer depends on the size of the particular rock. Many think the Tunguska event in Russia was an asteroid or comet strike. The Chelyabinsk meteor, also in Russia, was also thought to have been an asteroid, and it was the first case in which a meteor blast caused documented widespread injuries. I’ve used the word “strike” here, but in the case of both Tunguska and Chelyabinsk, there wasn’t a direct hit. Both exploded above the earth; most of the damage was caused by shock waves.

How can we prevent larger asteroids from striking the earth? NASA’s currently planning an asteroid-tow-and-study mission that would be a step in the right direction: http://www.space.com/22764-nasa-asteroid-capture-mission-candidates.html

Miley Cyrus twerking

Miley’s unconventional, racy MTV Video Music Awards appearance shocked the Twitterverse and escalated her prominence as a pop culture icon, not so much because of the performance itself (which I saw as a clever, entertaining parody of pop culture excess) as her smart handling of the supposed controversy. Can’t say that there was any shift in mainstream commercial pop culture as a result of the furor, but hey, it was just a bit of fun.

Google Glass

I guiltily admit that I haven’t taken any opportunity, and there’ve been some, to give Google Glass a try. I’m skeptical whether I’ll be able to see much of the overlay, but it might be cool to shoot photos and videos on the fly, though a GoPro would be better for that. To me, the real significance is not so much of the specific product or platform but the boost for the wearable computing meme, which we’ve been talking about since the early 90s. However my pocket device is useful enough, I don’t have to “wear” it (though I’m jonesing for a wearable health data tracker like FitBit.)

The point of “wearable” is that computers are increasingly embedded in the fabric of everyday life, via devices like Glass, Nest, FitBit et al, and concepts like the Internet of Things. In 2013, two decades after the Internet’s mainstreaming began in 1993, these next-generation technologies arrived. Soon enough, they’ll be commonplace and boring.

What the Internet Is

This works as a manifesto. I didn’t write it — it’s by the brilliant technologist David P. Reed.

Occasionally, people ask my perspective on the Internet, since I often object to confusing it with things like the telephone or Cable TV. Recently I composed a response that captures my perspective, as one of the participants in its genesis, and as an advocate for sustaining its fundamental initial design principles. I hope these words clarify what I believe many of those who continue to create the Internet continue to do, even though most of them are not aware of it. I also hope many will see their interest in keeping the core principles of the Internet alive.

The Internet must be fit to be the best medium of discourse and intercourse [not just one of many media, and not just limited to democratic discourse among humans]. It must be fit to be the best medium for commercial intercourse as well, though that might be subsumed as a proper subset of discourse and intercourse.

Which implies interoperability and non-balkanization of the medium, of course. But it also implies flexibility and evolvability – which *must* be permissionless and as capable as possible of adapting to as-yet-unforeseen uses and incorporating as-yet-unforeseen technologies.

I’ve used the notion of a major language of inter-cultural interaction, like English, Chinese, or Arabic, as an explicit predecessor and model for the Internet’s elements – its protocols and subject matter, its mechanism of self-extension, and its role as a “universal solvent”.

We create English or Chinese or Arabic merely by using it well. We build laws in those frameworks, protocols of all sorts in those frameworks, etc.

But those frameworks are inadequate to include all subjects and practices of discourse and intercourse in our modern digital world. So we invented the Internet – a set of protocols that are extraordinarily simple and extraordinarily independent of medium, while extensible and infinitely complex. They are mature, but they have run into a limit: they cannot be a framework for all forms of digital information. One cannot encode a photograph for transmission in English, yet one can in the framework we have built beginning with the Internet’s IP datagrams, addressing scheme, and agreed-upon mechanics.

The Internet and its protocols are sufficient to support an evolving and ultimately ramifying set of protocols and intercourse forms – ones that have *real* impact beyond jurisdiction or “standards body”.

The key is that the Internet is created by its users, because its users are free to create it. There is no “governor” who has the power to say “no” – you cannot technically communicate that way or about that.

And the other key is that we (the ones who began it, and the ones who now add to it every day, making it better) have proven that we don’t need a system that draws boundaries, says no, and proscribes evolution in order to have a system that flourishes.

It just works.

This is a shock to those who seem to think that one needs to hand all the keys to a powerful company like the old AT&T or to a powerful central “coordinating body” like the ITU, in order for it not to fall apart.

The Internet has proven that the “Tower of Babel” is not inevitable (and it never was), because communications is an increasing returns system – you can’t opt out and hope to improve your lot. Also because “assembly” (that is, group-forming) is an increasing returns system. Whether economically or culturally, the joint creation of systems of discourse and intercourse *by the users* of those systems creates coherence while also supporting innovation.

The problem (if we have any) is those who are either blind to that, or willfully reject what has been shown now for at least 30 years – that the Internet works.

Also there is too much (mis)use of the Fallacy of Composition that has allowed the Internet to be represented as merely what happens when you have packets rather than circuits, or merely what happens when you choose to adopt certain formats and bit layouts. That’s what the “OSI model” is often taken to mean: a specific design document that sits sterile on a shelf, ignoring the dynamic and actual phenomenon of the Internet. A thing is not what it is, at the moment, made of. A river is not the water molecules that currently sit in the river. This is why neither the owners of the fibers and switches nor the IETF can make the Internet safe or secure – that idea is just another Fallacy of Composition. [footnote: many instances of the “end-to-end argument” are arguments based on a Fallacy of Composition].

The Internet is not the wires. It’s not the wires and the fibers. It’s never been the same thing as “Broadband”, though there has been an active effort to confuse the two. It’s not the packets. It’s not the W3C standards document or the IETF’s meetings. It’s NONE of these things – because those things are merely epiphenomena that enable the Internet itself.

The Internet is an abstract noun, not a physical thing. It is not a frequency band or a “service” that should be regulated by one of the service-specific offices of the FCC. It is not a “product” that is “provided” by a provider.

But the Internet is itself, and it includes and is defined by those who have used it, those who are using it and those who will use it.

At SXSW Interactive: Art, Activism, and Augmented Reality

Pool Hopping on the Island of the Bull

Artist/activist Patrick Lichty and I have a session at SXSW Interactive called “Art, Activism, and Augmented Reality.” Here’s the blurb from the SXSWi schedule:

One of the latest genres in New Media art is that of Augmented Reality, or the overlay of digital content onto physical reality through the use of smart phones and computer vision. Marshall McLuhan heralded artists as early adopters of new technology, and the emergence of AR as an art genre is no surprise. Numerous AR works have sought to explore the expressive and critical possibilities of the technology, and groups like Manifest.AR have used this medium as a means of creative dissent through their Occupy Wall Street AR intervention. With AR a burgeoning platform for New Media art investigation, we will discuss the potentials and limitations of the medium, the history and context of work being done today, and the tactical potentials of AR as political intervention.

Over the last two decades, thinking about technology and activism, I’ve followed the process of embedding the former in the latter, and the evolution of a straightforward kind of cyborg activism with standard functionality: using email and social media to rally the troops, using SMS for coordination on the ground, spreading grassroots memes through websites, etc. The impact of technology, and the automation of the activist, is clear enough; with lower costs of coordination, grassroots movements at least Have A Chance. However much of the deployment of technology has, as in other fields of endeavor, fallen into the funding groove – we’re using computer-mediated activist approaches to fill the coffers of various organizations, large and small, and truly disruptive uses are rare.

The Occupy movement brought a new crop of activists to the table with open minds and (often) open hearts, and a commitment to disrupt established political machinations that exploit rather than serve. Occupy worked, not as an activist project, but as a movement-building enterprise, and it worked partly by using art and design to burrow into the collective psyche. Some of the more fascinating approaches that emerged within Occupy leveraged augmented reality applications to make points that are better driven by art than by polemics. See the example on the right.

So Patrick, one of my colleagues at Reality Augmented Blog, and I will be talking about how AR, activism and art can support social and political movements. If you’re at SXSW Interactive this year, try to drop by.

Image: “Pool Hopping” at the Island of the Bull, Mark Skwarek.

Robots

Robot photo from Rhizome

My favorite-so-far Bruce Sterling post in the State of the World conversation:

“Following on from John Payne’s comments in <76>, are the robots
coming for our jobs? Is a certain amount of unemployment going to end
up as part of the system and, if so, what happens next?”

*It’s so interesting to see this perennial question coming into vogue
once again. When I was a pre-teen first discovering “science fiction,”
that automation dystopia story was all over the place. Even on the
cover of TIME magazine. See this Artzybasheff computer monster, all
busy stealing guys’ jobs? Looks oddly familiar, doesn’t it?

Heckuva commercial artist, Artzybasheff

Of course that issue pre-dates me by a long chalk. It’s also the folk
song of John Henry the Steel-Drivin’ Man, who breaks his heart
defeating the boss’s Steam Hammer.

I can tell you what’s NOT gonna happen with “robots.” Nobody’s gonna
defeat the logic of the assembly line by starting a Pre-Raphaelite Arts
and Crafts commune where people shun the Robot and make hand-made wall
tapestries. That’s been tried eight thousand different times and
places. It never works for anybody who’s not Amish.

Framing the issue as “robots coming for our jobs” is rather a moot
point anyhow, because the blue-collar guys who “own” assembly “jobs”
have zero input on whether robots get deployed or not. What practical
difference does that question make? No modern salaried employee
anywhere has the clout to defend a “job” from “the robots.” The
investors deploying the robots are serenely unworried about Luddite
saboteurs or crippling labor-union strikes. Those possibilities of
working-class resistance were de-fanged ages ago.

So, you know, either they automate some processes at the cost of human
labor, or they don’t. Somebody’s always gonna try it, and in some
areas it works out rather better than it does in others, but the basic
robot story isn’t robots, it’s “whatever happens to musicians will
eventually happen to everybody.”

Apparently this latest little robot-vs-job flap gets most of its
impetus from two things, a cool new assembly robot created by Rodney
Brooks and a typically Emersonian intervention from Kevin Kelly.

So, here I’ll tell my Rodney Brooks story. I met the guy once, at
some forgettable event in Washington DC, and after the panels were
over, Prof Brooks and I ventured into the bar.

So, I was nursing a whiskey sour, and I was like: “So, Doctor Brooks,
I know a little about your work, and –”

“Call me Rod!”

“So, Rod — level with me about this MIT scheme you have to automate
the movement of insect legs. How’s that supposed to work, exactly?”

So, Rod was nothing loath, and he was pretty well going at it hammer
and tongs, while I was asking the occasional provocative sci-fi style
question — stuff like “so, how does the cube-square law work out when
the robo-insects are walking on the ceiling?” — because we sci-fi
writers dote on MIT.

Then I happened to glance across the bar, and I saw that our bartender
was “frozen in disbelief.” He was so amazed by what Brooks was saying
that his glass and his cleaning cloth were rigid in his unmoving arms.
This bartender had the affect of a sci-fi movie android with a power
failure. It was the only time I’ve ever seen that figure of speech as a
genuine aspect of human behavior.

So, I give Rodney Brooks a lot of credit, he’s a fascinating guy, I’m
glad to see him kept busy on things other than, for instance, an
MIT-style Vannevar Bush Manhattan Project at an undisclosed desert
location. I’m confident that Rod’s new manipulator is pretty snazzy.

But let me ask this: if an assembly-line device is going to “take our
jobs,” wouldn’t a 3dprinter also “take our jobs?” Why do we treat them
so differently? I mean, they’re both basically the same device:
automated mechanical systems precisely moving loads in three dimensions
by following software instructions.

So how come the Brooks robot is framed as a sinister job-stealing
robot, while a 3dprinter is framed as a printer, like, a cool nifty
peripheral? Didn’t digital printers also take a lot of “people’s
jobs?”

Besides, a Brooks robot is just imitating human-scale movement while
3dprinters create objects in micron-accurate ways that no human can
possibly do at all. So clearly the 3dprinter is a more radical threat
to the status quo.

Along this same line: Chris Anderson, late of WIRED, has got a new
book out about “Makers.” I read it. It’s all about how network society
cadres with 3dprinters and open-source schematics and instructables
are going to create a “Third Industrial Revolution.” Great, right?
Okay, maybe Makers take over the world or they don’t, but how come
nobody says “A Third Industrial Revolution means those Makers are going
to take our jobs?” Because they would, wouldn’t they? How could they
not?

Shouldn’t this prospect be of larger concern than Rodney Brooks’
latest gizmo, one among hordes of assembly line robots that have been
around for decades now? An “Industrial Revolution” should *almost by
definition* take everybody’s jobs. But the general reaction to
Anderson’s book is that the guy is *too optimistic*, that he drank his
own tech-hype bathwater and is having way too much fun. Isn’t there an
inconsistency here?

Then there’s the latest Kevin Kelly argument, which is more or less
about how robots are gonna take everybody’s jobs, but fine, that’s
great, especially if they’re sexbots. There’s nothing sparkly-new
about this line of reasoning, it’s very Automation Takes Command. The
pitch is that robots take the dull dirty and dangerous jobs, which
frees us to become, I dunno, humane speculative creatives like Kevin
Kelly, I guess.

However, I don’t believe automation has ever worked like that; there’s
no creeping wave-line with “robotics” on one side and “humanity” on
the other. Playing chess is very “human,” but Deep Blue is a robot
that can kick everybody’s ass at chess. You can claim that “Deep Blue”
is not “a robot,” but come on: just put a tin face on him and give him
a manipulator arm. Instant “robot.” Robotics has never been an issue
of mechanical men versus flesh men, like in a Flash Gordon episode.

The stuff we call “robotics” today is more like Google’s “robot car,”
which is not some Karel Capek man-shaped “robot” of the 1920s; the
Google Car is the Google Stack with wheels attached to it. Similarly,
“Google Glass” isn’t virtual-reality supergoggles, it’s the Google
Stack with a camera, Android mobile software and a head-mounted
display. Will they “take your jobs?” How could they not?

If you lose your job as a bus driver because a Google Bus took your
job, you didn’t lose it to a “robot,” you lost your enterprise to
Google, just like the newspapers did. Don’t bother to put a sexbot
face on the silly thing, it’s Larry and Sergey & Co. Go find a
musician and buy him a drink.

Fighter pilots are “losing their jobs to robots,” to aerial drones.
Are those the “dull dirty and dangerous” jobs? Heck no, because
fighter jocks are romantic folk heroes, like Eddie Rickenbacker and the
Red Baron and George Bush 1.0. When most flight work is carried out
by “robots” (actually by GPS systems and databases, but so what), are
we somehow going to discover a more refined and human way to fly? Will
we be liberated to fly in a more spiritual, humanistic, Beryl Markham
poetic aviatrix kind of way? I very much doubt that. I’m pretty sure
we’ll stop “flying” entirely, even if we anachronistically claim we’re
“flying” when we’re zipping around in sporty ultralights letting drone
systems do all the labor.

Bookstore clerks never had “dull, dirty, dangerous” work, they were
the mainstays of humanistic commerce actually, but Amazon is a Stack.
Amazon’s all about giant robot warehouse distribution logistics. It’s
all databases and forklifts in the Amazon stack, so of course “robots”
took the jobs of bookstore clerks. Bookstore clerks imagined they were
chumming around with the literate community turning people on to Jane
Austen, but the high-touch, humanly clingy aspect of this line of work
changed nothing much about its obsolescence.

So it’s not that “robots” take “our jobs.” It’s more a situation of
general employment precarity where applications built for mobile
devices and databases can hit pretty much anybody’s line of work, more
or less at random, without a prayer of effective counter-action.
Right? Let’s move right along, then!

That being the case, “what ought to be done?” Well, if job security
of all kinds is going to be made precarious indefinitely, then the
sane, humane thing to do is clearly to socialize security and put
everybody on a guaranteed annual income. Brazilian-style socialism:
keep your nose clean, keep the kids in school, and we pay you off and
you can go buy whatever produce the robots have cooked up lately.

One might also invent some kind of Stack Fordism, where Facebook pays
you enough to hang out on Facebook making Facebook more omniscient.
It’s a lot cheaper than putting the unemployed into prison.

Obviously the American right-wing isn’t gonna go for this wacky
liberal scheme; bailing out the “takers” of the 47% is their worst
Randroid nightmare. But what people never understood about the John
Henry story is that we have no steam hammers left. The robots “take
your job” and then the robots *keep changing at a frantic pace,* the
robots have the lifespans of hamsters. We’ve still got plenty of
muscular, human John Henries, but their steam hammers are all extinct.

Look what happened to Nokia. These Nokia guys had the classic Wired
magazine bulletproofed dream jobs. They’re not John Henry. They’re
creative class, computer-literate, inventive, super-efficient, global,
digital, Asperger’s high-IQ types… They got annihilated in 18
months. Not by “robots” but by Google and Apple. However, well, same
difference really.

What kind of “jobs” do Republicans have to offer themselves, when
their nominee was a corporate raider, and their top financier is a
weird Jewish casino owner up to the eyebrows in Macao? That’s not
exactly the Protestant work ethic happening, so, well, I dunno.

It might still work, just needs more political pretzel-bending. Don’t
use the word “guaranteed income,” farm it out to Fox News for semantic
re-framing. Toss in the “values requirement” that your annual income
requires you to wear Mormon undies, go to tent revival meetings and
own and display a handgun. They’d line up for it.


“We’re headed for a disaster of biblical proportions!”

Jeremy Grantham has been doing the math, and is convinced that world resources are way insufficient to support the current population.

Grantham believes that the planet can only sustainably support about 1.5 billion humans, versus the 7 billion on Earth right now (heading to 10-12 billion). For all of history except the last 200 years, the human population has been controlled via the limits of the food supply. Grantham thinks that, eventually, the same force will come into play again.

This is where we should be innovating – how do we match the level of resources to the (growing) need? Space travel is the old school sci-fi remedy: let’s go to Mars!

Google’s insanely great data kingdom

Photo of part of the physical infrastructure for Google's data system.
Photo: Google/Connie Zhou

Steven Levy wrote the book on Google (In The Plex: How Google Thinks, Works, and Shapes Our Lives); now Google’s let him into its formerly top secret data center in Lenoir, North Carolina. The massive data infrastructure is a wonder to behold. [Link]

This is what makes Google Google: its physical network, its thousands of fiber miles, and those many thousands of servers that, in aggregate, add up to the mother of all clouds. This multibillion-dollar infrastructure allows the company to index 20 billion web pages a day. To handle more than 3 billion daily search queries. To conduct millions of ad auctions in real time. To offer free email storage to 425 million Gmail users. To zip millions of YouTube videos to users every day. To deliver search results before the user has finished typing the query. In the near future, when Google releases the wearable computing platform called Glass, this infrastructure will power its visual search results.

Cliff Figallo on the acquisition of the WELL

I recently posted about the acquisition of the seminal online community, the WELL, by some of its members. At Social Media Today, my friend and former WELL director Cliff Figallo has an informative and insightful post that gives some context. “The people who log in and participate can be numbered in the hundreds,” he says, “but thousands of people have been active members at one time or another and many of them still think of the community as just that – a true online community that they consider to be their first home in Cyberspace.” He notes its history and influence:

In many ways, The WELL called attention to the social imperative in the early days of the Internet and the Web. It was one of the very first businesses to get an Internet domain name in 1992 – well.sf.ca.us. It inspired early Web developers to design platforms that would support social interaction. In 1996, Wired Magazine put The WELL on its cover, calling it “The World’s Most Influential Online Community,” and documenting some of the melodrama and technical “exploration” that had made it something more than an online forum.

When Katie Hafner was writing that piece for Wired, which later became a book, she interviewed me, and in her office she had a diagram that showed how the WELL derived influence from the communal movement in the sixties, and how it conveyed that influence to the larger Internet and the World Wide Web. Along with the BBS world, Usenet, and email lists, as Cliff says, the WELL inspired the social web – but not just developers, also users who, like me, were discovering that computers are platforms for communication and social connection.

Many of us who are still members of the WELL, and dissatisfied with a lack of depth in drive-by interactions on social media platforms, are hoping to see new growth within the community following its acquisition.

Mondo 2000 in Late 20th Century Cyberculture

Mondo #8

Former Mondo 2000 editor RU Sirius has been working many moons on a history of the magazine and its predecessors (High Frontiers, Reality Hackers). I was privileged to help a bit with infrastructure for gathering stories as well as contributions on the Texas and WELL perspectives on Mondo.

RU has published the preface at Acceler8or…

Called MONDO 2000 — the magazine took the just-then-emerging future of digital culture, dangerous hacking and new medias; tossed them in the blender along with overdoses of hallucinogenic drugs, hypersex and the more outrageous edges of rock and roll; added irreverent attitudes stolen from 20th Century countercultures from the beats to the punks, the literary and art avant gardes, anarchism, surrealism, and the new electronic dance culture— and then, it deceptively spilled that crazy Frappe all out across really slick, vaguely commercial looking multicolored printed pages with content that was Gonzo meets Glam meets Cyberpunk meets something else that has never been seen before or since… but which those of us who were there simply called MONDO — as in, “Yes, the article you submitted is definitely MONDO.” Or, “No. This isn’t MONDO. Why don’t you try Atlantic Monthly?”

We called it “a beribboned letterbomb to the core address of consensus reality.” Briefly, and, in retrospect, unbelievably, it became the flagship of the new culture; the new world that was being created by the onrush of the new technologies.

Should Apple use Google Maps?

Word on the street is that the new iOS 6 Maps app is a mess, but Philip Elmer-DeWitt argues in Fortune that launching its own mapping system is something Apple should have done long ago. That argument, however, focuses more on what’s good for Apple than what’s good for the consumer. He says “the company found itself in the position of feeding its customers’ priceless location information into the mapping database of its mortal enemy. That couldn’t go on forever.” Google’s maps have matured over the years, growing ever more accurate and robust as Google has leveraged “billions upon billions of data points supplied by hundreds of millions of users” — which makes “Google Maps seem so smart and iOS 6’s new Maps app seem so laughably stupid.”

I can think of a competing argument here: letting Google continue to do what it’s done so well, and continuing to leverage it, could make sense in the way it would support the user’s experience.

On the other hand, having the two companies compete, each trying to outdo the other with its offering, could lead to a better user experience in the longer term.

It’ll be interesting to see how this plays out. Here’s Mashable’s more detailed critique of the new Maps app, with comments and a slideshow, noting that “the problems are so numerous, Maps may never have a chance to prove itself before Google comes back strong. The search giant will soon release its iOS version, probably after just enough time has passed for us all to try Apple’s Maps and pronounce it DOA.”

Listening to blogs

Detailed Architecture of BlogSum

Kurzweil posts about a system developed for “mining the blogosphere”: BlogSum, a sophisticated natural language processing system, developed at Concordia University, that evaluates, indexes, and summarizes blog content. “The system is capable of gauging things like consumer preferences and voter intentions by sorting through websites, examining real-life self-expression and conversation, and producing summaries that focus exclusively on the original question.” This is a technical concept that David DeMaris and I had discussed some years ago, thinking of potential activist/political applications. It’ll be interesting to see how this technology is deployed.
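To give a rough sense of the idea, here’s a minimal sketch of query-focused extractive summarization — my own illustration of the general technique, not BlogSum’s actual algorithm, which is far more sophisticated (the `posts` and question below are invented examples):

```python
# Sketch: rank sentences from blog posts by word overlap with a question,
# then return the top-scoring sentences as a crude "summary."
# This illustrates the general idea of query-focused extractive
# summarization; it is NOT BlogSum's actual method.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "are", "was", "of", "to", "and", "in", "it", "i"}

def tokenize(text):
    """Lowercase, extract word tokens, drop stopwords."""
    return [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]

def summarize(posts, question, n=2):
    """Return the n sentences most relevant to the question."""
    q_words = set(tokenize(question))
    sentences = []
    for post in posts:
        # Naive sentence split on terminal punctuation.
        sentences.extend(s.strip() for s in re.split(r"(?<=[.!?])\s+", post) if s.strip())
    def score(sent):
        words = Counter(tokenize(sent))
        return sum(words[w] for w in q_words)
    return sorted(sentences, key=score, reverse=True)[:n]

posts = [
    "The new phone's battery life is terrible. I love the camera though.",
    "Battery drains fast on this phone. Screen is gorgeous.",
]
print(summarize(posts, "How is the battery life on the phone?"))
```

A real system would add sentiment analysis, discourse modeling, and redundancy filtering on top of relevance ranking, but the core move — scoring self-expression against the original question — is the same.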

You are not safe online!

Compliance and Safety, a company that makes safety training videos, has a good infographic that resonates with parts of the Doc Searls discussion: who controls your data? How safe are you online? Excerpt below, full graphic at the Compliance and Safety website.

Online Privacy infographic, excerpt

Doc Searls discussion: The Intention Economy

Doc Searls

I’m leading a discussion on the WELL with Doc Searls about his new book, The Intention Economy: When Customers Take Charge, which explores new thinking about the power relationship between customers/consumers and vendors. Doc has been rethinking those relationships through Project VRM (via his fellowship at the Berkman Center at Harvard), which has recently led to the creation of a “customer commons.”

It’s an old saw to say that listening to customers is a way to improve and gain new market advantages. But the difference with VRM will be adapting to standards and practices set on the customers’ side — ones that work the same for all companies. There will be less and less leverage in communicating only within a company’s own communication silo. IMHO, “social” services like Twitter and Facebook are not going to provide those standard ways, because they too are privately owned silos.

Scale will only happen when everybody uses the same stuff in the same way. The Internet and its core protocols scaled because they were essentially NEA: Nobody owned them, Everybody used them, and Anybody could improve them. (Yes, some were owned in a legal sense, but in a practical sense they were ownerless. This is why, for example, Ethernet beat Token Ring. Intel, Xerox and Digital essentially released Ethernet into the public domain, while IBM wanted to keep Token Ring fully private and charge for it. This bitter lesson had leverage later when IBM embraced Linux.) Email as we know it won because it scaled in exactly that way.