Stumbled onto this piece I wrote in 1994 for FringeWare Review, triggered by a meeting with Hans Moravec, as I recall.
Robotics has two sides — real-world practical application and
development, and scifi mythopoetic phantasy construction — and like
most real/surreal dichotomies of the Information Age, these two sides
are blurred and indistinct within human consciousness, whatever that
might be….
A good question in this context: What is consciousness? This is hard
to answer because of the obvious blind spot inherent in self-definition
(conscious process defining consciousness): you can’t see the forest for
the trees or the neurons for the nerves, as the case may be. Because
the “conscious” part of me is as deep as I usually go, or as I need to go
in order to play the various survival games, I tend to mistake
consciousness, an interface between the internal me and the external
“thou,” for the totality of my being, for a real thing rather than a
conveniently real-seeming process. (Then again, if consciousness
defines reality, what’s real is what consciousness says is real, but that’s
a digression….)
The sages tell me I’m delusional (attached to the delusion of samsara,
of the world, in the Buddhist view), but I can’t quite figure out what
this means. That’s because “I” am as much the noun, delusion, as the
adjective, delusional. So much of what I am is filtered out,
inaccessible to the ego-interface.
But wait. The delusional “I am” is a convenience that facilitates
individual survival-stuff, so I’m not dissin’ it. The purpose of this rant is
to make a point, not about ego or delusion (I’ll let the sages stew in
those juices), but about robotics and AI research and the belief, often
expressed in both scifi and real-world contexts, that you, or more
precisely “your consciousness,” can be stored digitally. In most scifi
depictions of “consciousness in a box,” the object is immortality: you
store what’s essentially you, and it “lives” forever, or until the plug’s
pulled, whichever comes first (I know where I’m putting my money).
In scifi, this is just another device for exploring the question of
immortality, which has fascinated scifi authors and the mythmakers who
preceded them as a way to come to terms with the death thing. Trying
to rationalize the inescapable. But you find other optimistic folks
(Hans Moravec, the Extropians) who are quite serious about the
potential for immortality and who consider the consciousness-in-a-box
scenario a viable means to that end.
I have a couple of problems with the scenario, myself, the first being
that, even if you digitized your consciousness and stored it in a
psychoelectronic device of some kind, it would not be you. Your
awareness would still fold when you discorporate; the thing that’s
stored might emulate your thinking or even your behavior, but it would
be a simulacrum, like you but not you.
The other problem I have is best expressed in the form of a question:
What are we storing? There seems to be a confusion between process
and object. If consciousness is indeed only a shallow process handling
the various negotiations between what we call subconscious and
external reality, what is the character of the data you’re uploading and
defining as you? Rules, implementations, stored memories —
consciousness is really a hash of all of these, no single, storable entity.
It’s like trying to package a tornado — what do you put in the package?
Do you include all the chaotic elements of weather formation and all
the applied physical rules that are manifest in the tornado’s brief life
span as a process event?
The bottom line here is that you can’t really isolate a single entity
“consciousness” and divorce it from its generative context.
Can you even simulate consciousness? Or intelligence, which
probably has a clearer rule base than the vaguer concept of
consciousness but is still elusive? An “artificial” intelligence with
sufficient density and complexity to mimic human consciousness is the
very real goal of a particular thread of applied research, but so far no
digital simulacrum has been constructed that “thinks” as we know
thinking. The problem here resonates with the earlier argument about
stored consciousness: we don’t have clarity about the definition and
composition of human consciousness, so how can we copy it? It’s
hard enough to copy something we know.
The mythic representations of scifi robots like Robby or Gort or
HAL 9000 are like consciousness in a black box, deus-ex-machina stuff
that might serve to carry a plot forward but, to those who punch code
into dumb processors day after day, doesn’t ring any more true than a
fairy tale or myth, which is to say that it’s more about wishes and fears
than about any current or projected reality. It’s one thing to load a few
rules, even with algorithms to simulate heuristic process, into the CPUs
of this world, but it’s a real stretch to conceptualize silicon-based
thinking or awareness.
Human and animal consciousness are products of code generations
and modifications that reach ’way back, perhaps to the inception of the
universe, and are driven by an unfathomable creative force, compared
to which our efforts to construct artificial minds seem short-sighted
and pitiful. Then again, I suppose in our efforts to mimic
“the gods” we’re channeling that creative force, whatever its true
origins, because it must be inherent in the code structure of the human
genome. And if that’s so, perhaps we’re destined to coevolve with our
own creations, which have themselves evolved from basic practical
and conceptual tools to today’s ubiquitous computing systems. This
coevolution may produce cyborganic life forms which, though not
created entirely by our hands, may be seen as products of an obsessive
desire to be as we imagine gods to be, creatively self-perpetuating and
therefore, as a race if not individually, immortal.