Consciousness in a Box

Stumbled onto this piece I wrote in 1994 for FringeWare Review, triggered by a meeting with Hans Moravec, as I recall.

Robotics has two sides — real-world practical application and
development, and scifi mythopoetic phantasy construction — and like
most real/surreal dichotomies of the Information Age, these two sides
are blurred and indistinct within human consciousness, whatever that
might be….

A good question in this context: What is consciousness? This is hard
to answer because of the obvious blind spot inherent in self-definition
(a conscious process defining consciousness): you can’t see the forest for
the trees or the neurons for the nerves, as the case may be. Because
the “conscious” part of me is as deep as I usually go, or as I need to go
in order to play the various survival games, I tend to confuse
consciousness, an interface between the internal me and the external
“thou,” as the totality of my being, as a real thing rather than a
conveniently real-seeming process. (Then again, if consciousness
defines reality, what’s real is what consciousness says is real, but that’s
a digression….)

The sages tell me I’m delusional (attached to the delusion of samsara,
of the world, in the Buddhist view), but I can’t quite figure out what
this means. That’s because “I” am as much the noun, delusion, as the
adjective, delusional. So much of what I am is filtered out,
inaccessible to the ego-interface.

But wait. The delusional “I am” is a convenience that facilitates
individual survival-stuff, so I’m not dissin’ it. The purpose of this rant is
to make a point, not about ego or delusion (I’ll let the sages stew in
those juices), but about robotics and AI research and the belief, often
expressed in both scifi and real-world contexts, that you, or more
precisely “your consciousness,” can be stored digitally. In most scifi
depictions of “consciousness in a box,” the object is immortality: you
store what’s essentially you, and it “lives” forever, or until the plug’s
pulled, whichever comes first (I know where I’m putting my money).
In scifi, this is just another device for exploring the question of
immortality, which has fascinated scifi authors and the mythmakers that
preceded them as a way to come to terms with the death thing. Trying
to rationalize the inescapable. But you find other optimistic folks
(Hans Moravec, the Extropians) who are quite serious about the
potential for immortality and who consider the consciousness-in-a-box
scenario a viable means to that end.

I have a couple of problems with the scenario, myself, the first being
that, even if you digitized your consciousness and stored it in a
psychoelectronic device of some kind, it would not be you. Your
awareness would still fold when you discorporate; the thing that’s
stored might emulate your thinking or even your behavior, but it would
be a simulacrum, like you but not you.

The other problem I have is best expressed in the form of a question:
What are we storing? There seems to be a confusion between process
and object. If consciousness is indeed only a shallow process handling
the various negotiations between what we call subconscious and
external reality, what is the character of the data you’re uploading and
defining as you? Rules, implementations, stored memories —
consciousness is really a hash consisting of no single, store-able entity.
It’s like trying to package a tornado — what do you put in the package?
Do you include all the chaotic elements of weather formation and all
the applied physical rules that are manifest in the tornado’s brief life
span as a process event?

The bottom line here is that you can’t really isolate a single entity
“consciousness” and divorce it from its generative context.

Can you even simulate consciousness? Or intelligence, which
probably has a clearer rule base than the vaguer concept of
consciousness, but is still elusive. An “artificial” intelligence with
sufficient density and complexity to mimic human consciousness is the
very real goal of a particular thread of applied research, but so far no
digital simulacrum has been constructed that “thinks” as we know
thinking. The problem here resonates with the earlier argument about
stored consciousness: we don’t have clarity about the definition and
composition of human consciousness, so how can we copy it? It’s
hard enough to copy something we know.

The mythic representations of scifi robots like Robbie or Gort or
HAL 9000 are like consciousness in a black box, deus-ex-machina stuff
that might serve to carry a plot forward but, to those who punch code
into dumb processors day after day, doesn’t ring any more true than a
fairy tale or myth, which is to say that it’s more about wishes and fears
than about any current or projected reality. It’s one thing to load a few
rules, even with algorithms to simulate heuristic process, into the CPUs
of this world, but it’s a real stretch to conceptualize silicon-based
thinking or awareness.

Human and animal consciousness are products of code generations
and modifications that reach way back, perhaps to the inception of the
universe, and are driven by an unfathomable creative force compared
to which our efforts to construct artificial minds seem short-sighted
and pitiful. Then again, I suppose in our efforts to mimic
“the gods” we’re channeling that creative force, whatever its true
origins, because it must be inherent in the code structure of the human
genome. And if that’s so, perhaps we’re destined to coevolve with our
own creations, which have themselves evolved from basic practical
and conceptual tools to today’s ubiquitous computing systems. This
coevolution may produce cyborganic life forms which, though not
created entirely by our hands, may be seen as products of an obsessive
desire to be as we imagine gods to be, creatively self-perpetuating and
therefore, as a race if not individually, immortal.

Emerging thoughts

I’ve been in conversation with a diverse group of people who are interested in creating a next version of the Internet that’s more peer to peer, more open source/open architecture, less vulnerable to government or corporate restriction. Some aspects of the various threads of conversation are idealistic – not wholly unrealistic, but so far a bit fuzzy and not fully baked. However there’s substantive, useful, and promising discussion in the air, and I’m hopeful that something viable and helpful will emerge.

Coincidentally, the concept of emergence came up, via this article by Margaret Wheatley, who calls emergence “the fundamental scientific explanation for how local changes can materialize as global systems of influence” as networks evolve into communities of practice, and then systems of influence begin to emerge. This she calls the life cycle of emergence.

This resonates with the Emergent Democracy discussion and paper that Joi Ito, Ross Mayfield, and I (along with several others) worked on in the early 2000s. But what’s missing in this talk about emergence and changing the world is the role of intention. Who sets the goals for changing the world? Who catalyzes networks and drives them in a particular direction? No person or group decides to make something emerge or to make specific changes – emergence is about force and evolution, not human intention. And when you talk about changing the world, the questions of by whom, for whom, and with what force become relevant.

The Tea Party and the Koch Brothers want to change the world, too. Is their vision less valid than mine or yours?

But there are forces that transcend Internet theorists and instigators, Tea Parties, partisan movements, idealistic next-net theorizers, rebels in the street, corporations, governments, etc. – forces that emerge out of control; evolution that occurs, not created or driven by some interest group, but driven by complex physical, psychic, and social factors that have unpredictable effects.

We’re just another set of smart people who think we know how the world should work, and we probably need more humility. How can we be effective in a context where there are forces that are truly beyond our control? What intentions should we support and honor?

EFF-Austin revives meeting series

EFF-Austin has been quietly working along, presenting occasional events (like the recent Texas Government 2.0 Camp), operating its email list, and publishing occasional blog posts, but we’ve had a growing sense of a need to ramp up our activities. We can see major threats to the distributed, decentralized Internet and the expectation of a “freedom to connect,” and there are all sorts of interesting conversations and movements emerging that are relevant to EFF-Austin’s stated mission: “EFF-Austin advocates establishment and protection of digital rights and defense of the wealth of digital information, innovation, and technology. We promote the right of all citizens to communicate and share information without unreasonable constraint. We also advocate the fundamental right to explore, tinker, create, and innovate along the frontier of emerging technologies.”

A first important step in our ramping up: we’re restarting our monthly meeting series, coordinated by our new board member, Anna Kozminski. The first of the new series of meetings is June 1 at the Flying Saucer at the Triangle – information and (free) registration here. Open Source software developer Tom Brown, who among other things maintains oscurrency for the Austin Time Exchange and founded Superborrownet, Inc., will talk about his experience attending Internet Identity Workshop 12, and about the Identity Commons movement in general. Come to the meeting, express your support for EFF-Austin’s mission, and volunteer to be part of our future.

(Note that EFF-Austin is an independent nonprofit formed originally as a potential chapter of the national Electronic Frontier Foundation (EFF). When EFF decided not to have chapters, we went our own way as an influential organization focused on cyber liberties and more, such as digital culture and arts.)

bin Laden and the horserace

Osama bin Laden’s death is a complex event with many implications and potential repercussions, yet it’s been trivialized by media analysis (professional and social) that avoids going deep and focuses only on its meaning in the context of the 2012 campaign, or as Adam Hochberg notes, “just another lap in the political horserace.” Another Hochberg point that bears repeating: “…the Internet has removed the traditional filters and allowed the public to immediately see and participate in Washington’s constant political posturing.”

Amazon’s broken user experience

Doc Searls has posted a slideshow explaining how Amazon’s user experience is broken, in the context of a discussion about vendor relationship management (VRM), which is about evolving a world where customers have at least symmetry in the power relationship of customer and vendor. The slides are old (January 2010) and things might have changed, but I don’t think they’ve changed as much as they should’ve, because I still experience similar frustrations when I visit Amazon.

Filter bubbles

This talk by Eli Pariser reminds me of discussions with David Weinberger about online echo chambers. I recall that this came up as social technology became part of the political process in ~2004. I’ve been concerned that the polarization we’re seeing in the U.S. and elsewhere is exacerbated if not caused by our tendency to pay all of our attention where we agree, and none of it where we’re challenged by opposing or new ideas.

Technology, politics, and balance

When Mitch Ratcliffe and I published Extreme Democracy in 2005, the question came up whether the discussion of politics and social technology was technoutopian. Without getting into the specifics of the book (which included diverse articles, some more positive than others about the potential role of what we now call social media in our political life), I can say that I rejected the “technoutopian” label as a rather shallow dismissal of a complex question: does a technology that gives everyone the potential to have more of a voice bring us closer to a democratic ideal? Or does it turn up the noise and overwhelm the signal? Or could it do both?

I thought about this after reading Cory Doctorow’s piece about the general question of technology optimism vs pessimism. Cory says that he’s a techno-optimist, but note that his position embodies both optimism and pessimism: “the concern that technology could be used to make the world worse, the hope that it can be steered to make the world better.” He has a great example:

To understand techno-optimism, it’s useful to look at the free software movement, whose ideology and activism gave rise to the GNU/Linux operating system, the Android mobile operating system, the Firefox and Chrome browsers, the BSD Unix that lives underneath Mac OS X, the Apache web-server and many other web- and e-mail-servers and innumerable other technologies. Free software is technology that is intended to be understood, modified, improved, and distributed by its users. There are many motivations for contributing to free/open software, but the movement’s roots are in this two-sided optimism/pessimism: pessimistic enough to believe that closed, proprietary technology will win the approval of users who don’t appreciate the dangers down the line (such as lock-in, loss of privacy, and losing work when proprietary technologies are orphaned); optimistic enough to believe that a core of programmers and users can both create polished alternatives and win over support for them by demonstrating their superiority and by helping people understand the risks of closed systems.

On the question of democracy, I feel an optimism that we can have better transparency and more participation through the Internet-based technologies we’ve created, and are still evolving. Certainly more people are engaged in conversations about politics and the ideas that inform political action. However I have worrisome concerns. One is that too many voices in the mix, too much commitment to consensus, can stall or prevent effective action in governance, and we have too many critical problems to be stalled.

I’m also concerned that “the will of the majority” is not necessarily guided by intelligence, and that it can be manipulated by effective propaganda, such as self-serving well-orchestrated astroturf email and blog campaigns founded on a memetics of fear, uncertainty, and doubt – the “birther” phenomenon is an example of this.

In my last post I mentioned the possibility of an Open Source party that leverages the kind of thinking about organization and action that has emerged from projects based on what Benkler calls commons-based peer production, “in which the creative energy of large numbers of people is coordinated (usually with the aid of the Internet) into large, meaningful projects mostly without traditional hierarchical organization (and often, but not always, without, or with decentralized, financial compensation).” I’m feeling cautiously optimistic about this idea; will be exploring it further… I’m clearly pessimistic that the current direction of politics is sustainable, so it’s time for new ideas and experimentation.

Open Source Politics and Religion

I just sent the following to an email list I’m on, and thought it would be worth sharing here:

I’ve been involved with R.U. Sirius in instigating an International Open Source Party (version 2.0 – we tried it before but it didn’t quite launch). He wrote about it here: http://hplusmagazine.com/2011/02/24/open-source-party-2-0-liberty-democracy-transparency/ This article includes the principles I came up with for Open Source politics, which I include below. Open Source is not a religion, i.e. not based on faith in something that can’t be observed or experienced. It’s about transparency: when we apply the term Open Source we’re talking about following methods and processes in production and distribution such that whatever we define as “source code” can be observed and experienced, so to me it’s the opposite of religion. Eric Hughes once explained to me, when I was new to Open Source thinking, that a particular encryption tool should be Open Source so that its source could be examined and its effectiveness and integrity verified. Politics should be like this, and if we all insisted on this approach, religion would be transformed into practice (a la Buddhism and 4th Way) rather than dogma (a la much of Christianity).

Principles of Open Source Politics:

Openness

Many of us who are tech-focused have come to understand the power of open approaches and open architectures. Even technologies that aren’t strictly “Open Source” benefit from Open APIs and exposure of operating code (kind of inherent with scripting languages like Perl and PHP). When we know how something works, we know how to work with it. And we know how to transform it to meet our needs.

Government should be as open and transparent as possible. There may be some rationales for closed doors, but few — for the most part, citizens should be able to clearly see how decisions are made. That’s a key component of our political platform: we want to see the actual “source code” for the decisions that affect our lives.

Collaboration

Open Source projects are often highly collaborative and can involve many stakeholders, not just managers and coders. The Open Source Party sees this as a great way to do government. (I’m partial to charrette methodology, personally.)

Emergent Leadership

Effective action and decision-making require leadership. In an Open Source form of politics, leaders emerge through merit — by providing real leadership and direction, not by appointment, assignment, or election. Nobody made Linus Torvalds the lead for Linux, or Matt Mullenweg the lead for WordPress. They saw a need, created a project, and found an effective following who acknowledged their vision, expertise, and ability to manage and lead. Emergent leaders aren’t handed authority. They earn it, and if they cease to be engaged or effective, they pass the baton to other leaders who emerge from within the group.

Extensible and Adaptable

Open Source projects and structures are agile and malleable. They can be adapted and extended as requirements change. Governance should have this kind of flexibility, and our system of governance in the U.S. was actually built that way. We should ensure that bureaucracies and obsolete rule sets don’t undermine that flexibility.

Jasmina Tesanovic: Big Day for Italy

Read Jasmina’s “Big Day for Italy”: Living in Torino, Jasmina is not far from the beatification ceremonies for Pope John Paul II – a Serbian journalist writing about the Italian response to a Polish pope, now sainted, just a couple of days after the UK pompfest royal wedding. A global culture celebrates traditions that won’t quite go away. Meanwhile for May Day the “politically excluded” took to the Italian streets “with banners of feminists, pacifist, trade unions, unemployed, refugees, minorities etc…clearly stated their distance from empty Unity of Italy celebrations, not to mention the deceased Pope.”

Three different Italies, today in Italian city squares: the Nation, the Church and the Populace, all protesting, stating, showing, claiming and counterclaiming. Like 150 years ago when this young nation was united under one flag and a royal anthem, drenched in blood amid many uncertainties, today too, the classic Italian scenario repeats itself, as a farce of course.

More fun with the DMCA

Facebook took down Ars Technica’s page on the site because of allegedly infringing content. Read about it here. The page is back after much wrangling. The problem for Ars Technica (and potentially for anyone else against whom there’s an infringement complaint) was that Facebook didn’t tell them what content was allegedly infringing or offer them an appeals process or an option to remove the content. Evidently Facebook didn’t think through a process for handling these complaints, which can easily be bogus.

I’m not an attorney (so don’t take my word for this), but I can suggest a process: the DMCA says Facebook should promptly block access or remove the material, so they really have to do something. My process would be to block the page temporarily, notify the page owner of the complaint and specify the content, and give them an opportunity to take the content down or make the case that it’s not infringing. The Online Copyright Infringement Liability Limitation Act says that the counter-notification from the page owner saying that the material is not, in fact, infringing would be enough for safe harbor from liability.
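The process I’m suggesting can be sketched as a simple workflow. This is my own illustration, not anything Facebook actually implements or the statute prescribes; the names (`Page`, `handle_dmca_notice`, `handle_owner_response`) are hypothetical:

```python
from dataclasses import dataclass, field


@dataclass
class Page:
    owner: str
    blocked: bool = False
    notices: list = field(default_factory=list)


def handle_dmca_notice(page: Page, claimed_content: str) -> str:
    """Step 1: promptly block access, record the complaint, and tell
    the page owner exactly which content was claimed to infringe."""
    page.blocked = True
    page.notices.append(claimed_content)
    return f"Notified {page.owner}: complaint against {claimed_content!r}"


def handle_owner_response(page: Page, removes_content: bool = False,
                          counter_notice: bool = False) -> bool:
    """Step 2: the owner either removes the material or files a
    counter-notification that it's not infringing; either path
    restores the page. Returns the page's blocked status."""
    if removes_content or counter_notice:
        page.blocked = False
    return page.blocked
```

The point of the sketch is that the owner is notified with specifics and given two exits from the blocked state, which is exactly what Facebook reportedly didn’t offer Ars Technica.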

Attorneys, please comment if I’ve got this wrong.

Time and the brain

Burkhard Bilger in The New Yorker profiles David Eagleman, a brilliant researcher who’s studying the brain, consciousness, and the perception of time. At a personal level I’ve spent a lot of time in recent years studying and trying to comprehend my own degrees and levels of consciousness and perception. We think of our “conscious experience” as a constant, and our unconscious as inaccessible… but through attention we learn that there are gradations in the range of conscious to “un-” or “sub-” conscious experience; that perceptions can vary with context; that memory is selective and undependable; that our perception of the world is generally incomplete though we do a good job of filling the gaps. When David Eagleman was a child he fell from a roof and realized that his perception of time had changed as he was falling. Now he’s doing evidence-based research to determine how people experience the world, what the variations are, and how the brain and mind work. Read about it here. If you know about similar studies and writings, please post in comments.

Connectivism

Have you ever thought about how completely irrelevant structured learning is? Indeed. “The illiterate of the 21st century will not be those who cannot read or write, but those who cannot unlearn and relearn.” – Alvin Toffler. The video below advocates a change in how we learn – network-centric, personal, based on your context, not based on some institution’s agenda. (Thanks to Judi Clark for sending me the link to this video.)

Transitional Internet

I continue to be focused on the future of the Internet and aware of divergent paths. In the later 2000s, following a period of digital and media convergence and given broad adoption of evolving high speed (broadband) network connectivity, the Internet has become an environment for mixed media and marketing. The Internet is increasingly centralized as a platform that serves a global business engine. It’s a mashup of business to business services and business to consumer connections, a mashup of all the forms of audio, text, and video communication and media in a new, more social/participatory context: the faceless consumer now has an avatar, an email address, and a feedback loop.

The sense of the Internet as a decentralized free and open space has changed, but there are still many advocates and strong arguments for approaches that are bottom-up, network-centric, free as in freedom (and sometimes as in beer), open, collaborative, decentralized. It’s tempting to see top-down corporate approaches vs bottom-up “free culture” approaches as mutually exclusive, but I think they can and will coexist. Rather than make value judgements about the different approaches, I want to support education and thinking about ethics, something I should discuss later.

Right now I want to point to a collaboration forming around the work of Venessa Miemis, who’s been curating trends, models, and projects associated with the decentralized Internet model. Venessa and her colleagues (including myself) have been discussing how to build a decentralized network that is broadly and cheaply accessible and that is more of a cooperative, serving the public interest rather than a narrower set of economic interests.

I’ll be focusing on these sorts of projects here and in my talks on the future of the Internet. Meanwhile, here are pointers to a couple of Venessa’s posts that are good overviews for what I’m talking about. I appreciate her clarity and focus.

There’s also the work of Michel Bauwens and the P2P Foundation, which I’ve followed for several years. The P2P Wiki has relevant pages: