True cyborgs: “disabled patients mind-meld with robots”

Disabled patients are learning to use robot extensions directed by brain activity – currently in a limited way, but tests are promising. One hope is that “locked-in” patients, those unable to communicate with the outside world, can use robots to communicate and interact.

Researchers set up a modified Robotino robot with an interface that translates EEG signals into real-time navigation instructions. Initial tests were with healthy subjects, then with disabled subjects who had been confined to bed for 6-7 years.

Researcher José del R. Millán, a biomedical engineer at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, says he wasn’t terribly surprised that disabled people could control the robot; what surprised him was how fast they learned. He is now hoping to involve more bed-bound patients, including locked-in patients, in the study. He also sees future applications for the shared-control brain-machine interface, such as modifying it to let a user control a prosthetic limb or a wheelchair. And the researchers may eventually add an arm to the current telepresence robot to allow it to grasp objects.
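The “shared control” idea deserves a word of explanation: the robot blends the user’s decoded intent with its own onboard obstacle avoidance, so a noisy EEG classification doesn’t steer the robot into a wall. Here’s a minimal sketch of that blending logic in Python – the names, weights, and sensor model are my invention for illustration, not the EPFL team’s actual code:

```python
from dataclasses import dataclass

SAFE_DISTANCE = 0.5  # meters; hypothetical safety threshold

@dataclass
class Reading:
    distance: float      # meters to a detected obstacle
    bearing_sign: float  # -1.0 = obstacle to the left, +1.0 = to the right

def clamp(x, lo=-1.0, hi=1.0):
    return max(lo, min(hi, x))

def obstacle_avoidance(readings):
    """A (turn, brake) correction that pushes away from the nearest obstacle."""
    nearest = min(readings, key=lambda r: r.distance)
    if nearest.distance > SAFE_DISTANCE:
        return 0.0, 0.0
    urgency = 1.0 - nearest.distance / SAFE_DISTANCE
    return -nearest.bearing_sign * urgency, -urgency

def shared_control(user_turn, user_speed, readings, user_weight=0.7):
    """Blend the EEG-decoded intent with the robot's safety correction."""
    avoid_turn, brake = obstacle_avoidance(readings)
    turn = user_weight * user_turn + (1.0 - user_weight) * avoid_turn
    speed = user_speed + brake  # avoidance can only slow the robot down
    return clamp(turn), clamp(speed, lo=0.0)

# The user "thinks right" while a wall looms on the right: the blended
# command softens the turn and slows down instead of obeying blindly.
print(shared_control(1.0, 1.0, [Reading(0.3, +1.0)]))
```

The point of the blend is that the user supplies high-level intent while the machine handles the low-level corrections – which is presumably part of why subjects could drive the robot with relatively crude EEG signals.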

[Link]

How should the Internet be governed?

This piece hints at the politicization of the Internet and the complexity of its future. The Internet Corporation for Assigned Names and Numbers (ICANN) is the closest thing we have to “Internet governance.” It’s the organization that coordinates the standards and processes associated with Internet addresses – the assigned names and numbers referenced in the organization’s name. In “ICANN’s ‘Unelected’ Crisis,” Michael Roberts writes about the controversy over ICANN’s unelected leadership and multistakeholder model: “If ICANN is to maintain its quasi-independence, a hard boiled, Kissinger-like brand of pragmatic statesmanship will be necessary.” [Link]

Bruce Sterling: Augmented Reality and “Dead Drops”

Bruce Sterling’s been “Visionary in Residence” again this summer at the Pasadena Art Center, where he’s been in cyborg mode, focusing on augmented reality, or reality augmented and mediated by computer-generated sensory input. Bruce has developed an application that runs on the Layar platform, called Dead Drops, inspired by the work of German media artist Aram Bartholl, which per Sterling is “all about hidden data revealed in real-world, three-dimensional spaces.” A Dead Drop is

an anonymous, offline, peer to peer file-sharing network in public space. USB flash drives are embedded into walls, buildings and curbs accessible to anybody in public space. Everyone is invited to drop or find files on a dead drop. Plug your laptop to a wall, house or pole to share your favorite files and data. Each dead drop is installed empty except for a readme.txt file explaining the project.

It’s sorta like geocaching, where the cache is digital, and anybody who finds the drop can add to it. The application Bruce has developed is for finding and mapping the drops.
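I don’t know what Bruce’s Layar layer looks like under the hood, but the core of any drop-finder is simple: a registry of geotagged drops and a nearest-neighbor lookup. A toy sketch, with invented drop data and the standard haversine distance formula:

```python
from math import radians, sin, cos, asin, sqrt

# Invented sample data; a real app would pull these from a shared registry.
DROPS = {
    "brick wall, 5th & Lamar": (30.2672, -97.7431),
    "curb, Art Center campus": (34.1478, -118.1445),
}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def nearest_drop(lat, lon):
    """Return the (name, coords) of the closest known dead drop."""
    return min(DROPS.items(), key=lambda kv: haversine_km(lat, lon, *kv[1]))

print(nearest_drop(30.25, -97.75))  # somewhere in Austin
```

An AR layer like Layar adds the presentation – overlaying nearby drops on the camera view – but the underlying lookup can be this simple.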

Post-Internet Google+ Beta Madness

I’ve been researching, thinking about, and presenting on the future of the Internet, and this week I’m preparing to propose a SXSW panel and getting ready for a presentation next week at Bootstrap Interactive in Austin. At the moment I’m thinking we’re in a “post-Internet” era. The collaborative, peer-to-peer network of networks has been co-opted and overlaid by a very few large corporations, and, as was the case with earlier information technologies (film, radio, television), monopolies (or duopolies) are forming for network access, hardware, and information services. The advertising model originated by David Sarnoff et al. for radio is pervasive on an Internet thick with ads – increasingly, sites you visit throw an obnoxious full-screen ad in your face as you land. I’m hearing more and more conversations about building a new alternative Internet (and, for that matter, alternative economies and forms of governance).

As I was thinking hard about this, and digging deeper, Google+ launched, and the geekiest cohort among my friends started showing up for the limited beta. Plus is YAAS (“yet another activity stream”), probably better engineered and more social than Facebook’s. No real marketing vibe so far, just a lot of people hanging out (often literally, using G+’s “Hangout” feature, a high-quality form of videoconferencing that’s very cool but crashy).

Google+ is the Next Big Rockit. People who are (or wanna be) paid to think about social media are filling many buckets with bits of speculative and often redundant information about the system, which doesn’t strike me as particularly new and innovative in the patterns it has aggregated. But it is a welcome change from the other high-adoption social environments du jour, namely Facebook and Twitter. Unlike Twitter, it allows longer-form posts and inline media-sharing. Unlike Facebook, it has functional management of relationships (via Circles) and better handling of both transparency and privacy…

And did I mention Diaspora? Their launch has been so constrained as to be a mere whisper next to the great swooshing sound of the Google+ launch.

I saw Robert Scoble post that he likes it because he can share videos and articles with everybody, and I assume that his emphasis was not on the ability to share (we’ve been sharing on Facebook, Twitter, FriendFeed, et al.), but on the idea of sharing “with everybody.” Google+ is structured so that you can see and reach more people, and when you’re selective about what you see, it’s your choice, not a selection by algorithm as in Facebook’s “Top Stories.” At Google+ you can drop people into “circles” according to whatever categorization scheme fits your DNA, and that’s really the only operational filter at this point.
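That difference – an explicit, user-defined filter instead of an opaque ranking algorithm – is easy to express in code. A toy model of circles-as-sets (the structure is my guess for illustration, not Google’s actual implementation):

```python
# Toy model: the only filtering is set membership the user defined.
# All names and posts are invented.
circles = {
    "geeks": {"alice", "bob"},
    "family": {"carol"},
}

posts = [
    {"author": "alice", "text": "EEG-controlled robots!"},
    {"author": "carol", "text": "Sunday dinner at six."},
    {"author": "mallory", "text": "Buy my product!"},
]

def stream(circle_names):
    """Everything posted by members of the chosen circles; no ranking."""
    members = set().union(*(circles[c] for c in circle_names))
    return [p for p in posts if p["author"] in members]

print(stream(["geeks"]))            # alice and bob's posts only
print(stream(["geeks", "family"]))  # the union; mallory never appears
```

Nothing is hidden by a relevance score; what you see follows directly from the sets you built.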

But, back to my point about the post-Internet world: what’s been cool about Google+ so far has been the absence of that overlay of commercial messaging that has fogged other sites. It’s been a relatively spam-free zone, reminding me of what fired me up about online social spaces from the ’80s onward. How long the beta period will last I don’t know, but it’s been a nice reminder of what we could potentially have if we could turn down the volume of the advertising and marketing blasts that seem so much more pervasive online lately than on television or radio.

Back to thinking hard about the future of the Internet.

More on bandwidth: light and darkness

My friend Robert Steele emailed me in response to my last post, saying there’s more to consider, and I agree. He mentions Open Spectrum.

I’m feeling cynical. Here’s how I responded:

I’m aware of open spectrum… I’m in other conversations with various wonks & engineers who’re discussing bandwidth, spectrum, etc. Of course we could have a much different scene if we weren’t constrained by markets and politics. People who can see one sense of the obvious often miss another, which is that the world we’re in is not an ideal world, and the ideals we can conceive are not necessarily easy or even possible to implement. I pay less attention to the “next net” list we’re both on because so much of it is fantasy and masturbation.

I own a nice home in rural Texas, but I can’t live there because I can’t even get 500 Kbps. I thought it was amusing that Vint is arguing for gigabit bandwidth when most of the U.S. is dark and there’s too little monetary incentive to bring light to the darkness. Of course I think we need a public initiative to make it happen, but in this era “public” is a dirty word. I halfway expect to see all roads become toll roads – a world where only the elite can travel, and only the elite will have broadband access. Though aging, I’m struggling to remain part of the elite… *8^)

Increase bandwidth exponentially

Internet prime mover Vint Cerf echoes what I’ve been hearing from other architects of the TCP/IP network: we should get away from enforced/legacy scarcity and build much fatter pipes – gigabit broadband networks. Nothing here about the cost of providing gigabit access, and nothing about the fact that much of the (rural) U.S. has no broadband access at any speed. What policies do we need to make gigabit broadband pervasive, urban and rural, in the U.S.? Who will pay for the buildout? [Link]
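The gap is easier to feel with a little arithmetic. A quick back-of-the-envelope calculation of transfer times at rural-DSL, cable, and gigabit speeds:

```python
def transfer_hours(size_gb, rate_mbps):
    """Hours to move size_gb gigabytes at rate_mbps megabits per second."""
    bits = size_gb * 8e9
    return bits / (rate_mbps * 1e6) / 3600

movie_gb = 1.5  # roughly a standard-definition movie
for label, mbps in [("rural (0.5 Mbps)", 0.5),
                    ("cable (10 Mbps)", 10),
                    ("gigabit (1000 Mbps)", 1000)]:
    print(f"{label}: {transfer_hours(movie_gb, mbps):.4f} hours")
# rural: ~6.7 hours; cable: ~20 minutes; gigabit: ~12 seconds
```

The same file that’s a coffee break on gigabit fiber is an overnight job at the speeds available outside the cities – which is exactly the policy gap the gigabit argument skips over.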

Technology: tipping the balance

Roger Cohen in the New York Times:

Something immense is happening as the world transitions to a hyperconnected state where, for many, the distinction between the real and virtual worlds has ceased to exist. All the trailing paraphernalia of states and borders and government-to-government palavers, not to mention privacy laws, look so 20th century.

The more I speak and write about “the future of the Internet,” the more I realize that I’m talking about the future of the human world. Cohen goes on to say “that technology and international relations are becoming interchangeable topics. There are many more networks in our future than treaties.”

Clueful, yes. Also interesting is the article’s mention of Google Ideas and Jared Cohen’s thinking

that technology is agnostic: It can be used in the cause of freedom — and has been to great effect from Tunis to Cairo — just as it can be used in the cause of repression. So how do you “tip the balance in favor of the net positive?”

There are seven billion people in the world, and the population’s growing every day. We’ve been organized as nations, and more recently corporations have been taking power and authority for action (though they still work through legacy forms, i.e. legislatures that are influenced by various means, including contributions of money and personal persuasion). We see a tendency for people to want to have something we call “freedom,” though the meaning of that label, and its limits, are not always clear. Traditionally, effective action has been associated with authority and leadership, and the nature and meaning of leadership in a democratized world is unclear. (Also unclear: the pervasive influence of corruption, and how it will play out if systems of authority are diminished as we gain more “freedom.”)

We live in exciting and “interesting” times, but we should be skeptical – and I appreciate Jared Cohen’s point about the uncertain potential in social technology. We should be exploring how to tip the balance.

Consciousness in a Box

Stumbled onto this piece I wrote in 1994 for FringeWare Review, triggered by a meeting with Hans Moravec, as I recall.

Robotics has two sides — real-world practical application and development, and scifi mythopoetic phantasy construction — and like most real/surreal dichotomies of the Information Age, these two sides are blurred and indistinct within human consciousness, whatever that might be….

A good question in this context: What is consciousness? This is hard to answer because of the obvious blind spot inherent in self-definition (conscious process defining consciousness) — you can’t see the forest for the trees, or the neurons for the nerves, as the case may be. Because the “conscious” part of me is as deep as I usually go, or as I need to go in order to play the various survival games, I tend to confuse consciousness, an interface between the internal me and the external “thou,” with the totality of my being, as a real thing rather than a conveniently real-seeming process. (Then again, if consciousness defines reality, what’s real is what consciousness says is real, but that’s a digression….)

The sages tell me I’m delusional (attached to the delusion of samsara, of the world, in the Buddhist view), but I can’t quite figure out what this means. That’s because “I” am as much the noun, delusion, as the adjective, delusional. So much of what I am is filtered out, inaccessible to the ego-interface.

But wait. The delusional “I am” is a convenience that facilitates individual survival-stuff, so I’m not dissin’ it. The purpose of this rant is to make a point, not about ego or delusion (I’ll let the sages stew in those juices), but about robotics and AI research and the belief, often expressed in both scifi and real-world contexts, that you, or more precisely “your consciousness,” can be stored digitally. In most scifi depictions of “consciousness in a box,” the object is immortality: you store what’s essentially you, and it “lives” forever, or until the plug’s pulled, whichever comes first (I know where I’m putting my money). In scifi, this is just another device for exploring the question of immortality, which has fascinated scifi authors and the mythmakers that preceded them as a way to come to terms with the death thing. Trying to rationalize the inescapable. But you find other optimistic folks (Hans Moravec, the Extropians) who are quite serious about the potential for immortality and who consider the consciousness-in-a-box scenario a viable means to that end.

I have a couple of problems with the scenario, myself, the first being that, even if you digitized your consciousness and stored it in a psychoelectronic device of some kind, it would not be you. Your awareness would still fold when you discorporate; the thing that’s stored might emulate your thinking or even your behavior, but it would be a simulacrum, like you but not you.

The other problem I have is best expressed in the form of a question: What are we storing? There seems to be a confusion between process and object. If consciousness is indeed only a shallow process handling the various negotiations between what we call the subconscious and external reality, what is the character of the data you’re uploading and defining as you? Rules, implementations, stored memories — consciousness is really a hash consisting of no single, storable entity. It’s like trying to package a tornado — what do you put in the package? Do you include all the chaotic elements of weather formation and all the applied physical rules that are manifest in the tornado’s brief life span as a process event?

The bottom line here is that you can’t really isolate a single entity, “consciousness,” and divorce it from its generative context.

Can you even simulate consciousness? Or intelligence, which probably has a clearer rule base than the vaguer concept of consciousness, but is still elusive? An “artificial” intelligence with sufficient density and complexity to mimic human consciousness is the very real goal of a particular thread of applied research, but so far no digital simulacrum has been constructed that “thinks” as we know thinking. The problem here resonates with the earlier argument about stored consciousness: we don’t have clarity about the definition and composition of human consciousness, so how can we copy it? It’s hard enough to copy something we know.

The mythic representations of scifi robots like Robbie or Gort or HAL 9000 are like consciousness in a black box, deus-ex-machina stuff that might serve to carry a plot forward but, to those who punch code into dumb processors day after day, doesn’t ring any more true than a fairy tale or myth — which is to say that it’s more about wishes and fears than about any current or projected reality. It’s one thing to load a few rules, even with algorithms to simulate heuristic process, into the CPUs of this world, but it’s a real stretch to conceptualize silicon-based thinking or awareness.

Human and animal consciousness are products of code generations and modifications that reach ’way back, perhaps to the inception of the universe, and are driven by an unfathomable creative force compared to which our efforts to construct artificial minds seem short-sighted and pitiful. Then again, I suppose in our efforts to mimic “the gods” we’re channeling that creative force, whatever its true origins, because it must be inherent in the code structure of the human genome. And if that’s so, perhaps we’re destined to coevolve with our own creations, which have themselves evolved from basic practical and conceptual tools to today’s ubiquitous computing systems. This coevolution may produce cyborganic life forms which, though not created entirely by our hands, may be seen as products of an obsessive desire to be as we imagine gods to be, creatively self-perpetuating and therefore, as a race if not individually, immortal.

EFF-Austin revives meeting series

EFF-Austin has been quietly working along, presenting occasional events (like the recent Texas Government 2.0 Camp), operating its email list, and publishing occasional blog posts, but we’ve had a growing sense of a need to ramp up our activities. We can see major threats to the distributed, decentralized Internet and the expectation of a “freedom to connect,” and there are all sorts of interesting conversations and movements emerging that are relevant to EFF-Austin’s stated mission: “EFF-Austin advocates establishment and protection of digital rights and defense of the wealth of digital information, innovation, and technology. We promote the right of all citizens to communicate and share information without unreasonable constraint. We also advocate the fundamental right to explore, tinker, create, and innovate along the frontier of emerging technologies.”

A first important step in our ramping up: we’re restarting our monthly meeting series, coordinated by our new board member, Anna Kozminski. The first of the new series of meetings is June 1 at the Flying Saucer at the Triangle – information and (free) registration here. Open Source software developer Tom Brown, who among other things maintains oscurrency for the Austin Time Exchange and founded Superborrownet, Inc., will talk about his experience attending Internet Identity Workshop 12, and about the Identity Commons movement in general. Come to the meeting, express your support for EFF-Austin’s mission, and volunteer to be part of our future.

(Note that EFF-Austin is an independent nonprofit formed originally as a potential chapter of the national Electronic Frontier Foundation (EFF). When EFF decided not to have chapters, we went our own way as an influential organization focused on cyber liberties and more, such as digital culture and arts.)

Filter bubbles

This talk by Eli Pariser reminds me of discussions with David Weinberger about online echo chambers. I recall that this came up as social technology became part of the political process in ~2004. I’ve been concerned that the polarization we’re seeing in the U.S. and elsewhere is exacerbated if not caused by our tendency to pay all of our attention where we agree, and none of it where we’re challenged by opposing or new ideas.
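The mechanics are simple enough to sketch: if a feed ranks items by predicted agreement with you, you see a narrow slice of a diverse pool. A deliberately crude toy model, with opinions as points on a one-dimensional axis:

```python
import random

random.seed(1)
user_view = 0.3  # the user's position on a -1..1 opinion axis

def rank_by_agreement(items, view):
    """An engagement-style ranker: items closest to the user's view first."""
    return sorted(items, key=lambda x: abs(x - view))

pool = [random.uniform(-1, 1) for _ in range(50)]  # a diverse opinion pool
shown = rank_by_agreement(pool, user_view)[:5]     # the feed's top picks

print(f"pool spans {min(pool):+.2f} .. {max(pool):+.2f}")
print(f"user sees  {min(shown):+.2f} .. {max(shown):+.2f}")
# the pool is diverse; the slice the user actually sees is not
```

Add a feedback loop – the user’s modeled position drifts toward whatever they click on – and the slice narrows further over time. That’s the bubble.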

Technology, politics, and balance

When Mitch Ratcliffe and I published Extreme Democracy in 2005, the question came up whether the discussion of politics and social technology was technoutopian. Without getting into the specifics of the book (which included diverse articles, some more positive than others about the potential role of what we now call social media in our political life), I can say that I rejected the “technoutopian” label as a rather shallow dismissal of a complex question: does a technology that gives everyone the potential to have more of a voice bring us closer to a democratic ideal? Or does it turn up the noise and overwhelm the signal? Or could it do both?

I thought about this after reading Cory Doctorow’s piece about the general question of technology optimism vs. pessimism. Cory says that he’s a techno-optimist, but note that his position embodies both optimism and pessimism: “the concern that technology could be used to make the world worse, the hope that it can be steered to make the world better.” He has a great example:

To understand techno-optimism, it’s useful to look at the free software movement, whose ideology and activism gave rise to the GNU/Linux operating system, the Android mobile operating system, the Firefox and Chrome browsers, the BSD Unix that lives underneath Mac OS X, the Apache web-server and many other web- and e-mail-servers and innumerable other technologies. Free software is technology that is intended to be understood, modified, improved, and distributed by its users. There are many motivations for contributing to free/open software, but the movement’s roots are in this two-sided optimism/pessimism: pessimistic enough to believe that closed, proprietary technology will win the approval of users who don’t appreciate the dangers down the line (such as lock-in, loss of privacy, and losing work when proprietary technologies are orphaned); optimistic enough to believe that a core of programmers and users can both create polished alternatives and win over support for them by demonstrating their superiority and by helping people understand the risks of closed systems.

On the question of democracy, I feel an optimism that we can have better transparency and more participation through the Internet-based technologies we’ve created and are still evolving. Certainly more people are engaged in conversations about politics and the ideas that inform political action. However, I have concerns. One is that too many voices in the mix, or too much commitment to consensus, can stall or prevent effective action in governance, and we have too many critical problems to be stalled.

I’m also concerned that “the will of the majority” is not necessarily guided by intelligence, and that it can be manipulated by effective propaganda, such as self-serving, well-orchestrated astroturf email and blog campaigns founded on a memetics of fear, uncertainty, and doubt – the “birther” phenomenon is an example of this.

In my last post I mentioned the possibility of an Open Source party that leverages the kind of thinking about organization and action that has emerged from projects based on what Benkler calls commons-based peer production, “in which the creative energy of large numbers of people is coordinated (usually with the aid of the Internet) into large, meaningful projects mostly without traditional hierarchical organization (and often, but not always, without, or with decentralized, financial compensation).” I’m feeling cautiously optimistic about this idea; will be exploring it further… I’m clearly pessimistic that the current direction of politics is sustainable, so it’s time for new ideas and experimentation.

Open Source Politics and Religion

I just sent the following to an email list I’m on, and thought it would be worth sharing here:

I’ve been involved with R.U. Sirius in instigating an International Open Source Party (version 2.0 – we tried it before but it didn’t quite launch). He wrote about it here: http://hplusmagazine.com/2011/02/24/open-source-party-2-0-liberty-democracy-transparency/. That article includes the principles I came up with for Open Source politics, which I include below. Open Source is not a religion, i.e. not based on faith in something that can’t be observed or experienced. It’s about transparency: when we apply the term Open Source, we’re talking about following methods and processes in production and distribution such that whatever we define as “source code” can be observed and experienced – so to me it’s the opposite of religion. Eric Hughes once explained to me, when I was new to Open Source thinking, that a particular encryption tool should be Open Source so that its source could be examined and its effectiveness and integrity verified. Politics should be like this, and if we all insisted on this approach, religion would be transformed into practice (a la Buddhism and 4th Way) rather than dogma (a la much of Christianity).

Principles of Open Source Politics:

Openness

Many of us who are tech-focused have come to understand the power of open approaches and open architectures. Even technologies that aren’t strictly “Open Source” benefit from Open APIs and exposure of operating code (kind of inherent with scripting languages like Perl and PHP). When we know how something works, we know how to work with it. And we know how to transform it to meet our needs.

Government should be as open and transparent as possible. There may be some rationales for closed doors, but few — for the most part, citizens should be able to clearly see how decisions are made. That’s a key component of our political platform: we want to see the actual “source code” for the decisions that affect our lives.
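Eric’s encryption example from the note above has a concrete everyday analog: with open code you can audit the source itself, and you can also verify that the copy you received is the copy that was published. A minimal sketch of the second half – checking a file’s SHA-256 digest against a published value (the filename and digest here are hypothetical):

```python
import hashlib

def sha256_of(path):
    """SHA-256 digest of a file, read in chunks so large files are fine."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# The published digest would come from the project's release page.
PUBLISHED = "hypothetical-digest-from-the-release-page"

if sha256_of("crypto-tool-1.0.tar.gz") == PUBLISHED:
    print("digest matches: this is what the project published")
else:
    print("digest mismatch: don't trust this copy")
```

Transparency in governance would mean the same kind of checkability: published “source,” plus a way for any citizen to confirm that what was published is what’s actually running.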

Collaboration

Open Source projects are often highly collaborative and can involve many stakeholders, not just managers and coders. The Open Source Party sees this as a great way to do government. (I’m partial to charrette methodology, personally.)

Emergent Leadership

Effective action and decision-making require leadership. In an Open Source form of politics, leaders emerge through merit – by providing real leadership and direction, not by appointment, assignment, or election. Nobody made Linus Torvalds the lead for Linux, or Matt Mullenweg the lead for WordPress. They saw a need, created a project, and found an effective following who acknowledged their vision, expertise, and ability to manage and lead. Emergent leaders aren’t handed authority. They earn it, and if they cease to be engaged or effective, they pass the baton to other leaders who emerge from within the group.

Extensible and Adaptable

Open Source projects and structures are agile and malleable. They can be adapted and extended as requirements change. Governance should have this kind of flexibility, and our system of governance in the U.S. was actually built that way. We should ensure that bureaucracies and obsolete rule sets don’t undermine that flexibility.

Connectivism

Have you ever thought about how completely irrelevant structured learning is? Indeed. “The illiterate of the 21st century will not be those who cannot read and write, but those who cannot learn, unlearn, and relearn.” – Alvin Toffler. The video below advocates a change in how we learn – network-centric, personal, based on your context, not on some institution’s agenda. (Thanks to Judi Clark for sending me the link to this video.)

Transitional Internet

I continue to be focused on the future of the Internet and aware of divergent paths. In the late 2000s, following a period of digital and media convergence, and given broad adoption of evolving high-speed (broadband) network connectivity, the Internet has become an environment for mixed media and marketing. The Internet is increasingly centralized as a platform that serves a global business engine. It’s a mashup of business-to-business services and business-to-consumer connections, a mashup of all the forms of audio, text, and video communication and media in a new, more social/participatory context: the faceless consumer now has an avatar, an email address, and a feedback loop.

The sense of the Internet as a decentralized, free, and open space has changed, but there are still many advocates and strong arguments for approaches that are bottom-up, network-centric, free as in freedom (and sometimes as in beer), open, collaborative, decentralized. It’s tempting to see top-down corporate approaches and bottom-up “free culture” approaches as mutually exclusive, but I think they can and will coexist. Rather than make value judgments about the different approaches, I want to support education and thinking about ethics, something I’ll discuss later.

Right now I want to point to a collaboration forming around the work of Venessa Miemis, who’s been curating trends, models, and projects associated with the decentralized Internet model. Venessa and her colleagues (including myself) have been discussing how to build a decentralized network that is broadly and cheaply accessible and that is more of a cooperative, serving the public interest rather than a narrower set of economic interests.

I’ll be focusing on these sorts of projects here and in my talks on the future of the Internet. Meanwhile, here are pointers to a couple of Venessa’s posts that are good overviews for what I’m talking about. I appreciate her clarity and focus.

There’s also the work of Michel Bauwens and the P2P Foundation, which I’ve followed for several years. The P2P Wiki has relevant pages: