Kurzweil posts about a system for “mining the blogosphere”: BlogSum, a sophisticated natural language processing system for evaluating and indexing blog content, developed at Concordia University. “The system is capable of gauging things like consumer preferences and voter intentions by sorting through websites, examining real-life self-expression and conversation, and producing summaries that focus exclusively on the original question.” This is a technical concept that David DeMaris and I had discussed some years ago, thinking of potential activist/political applications. It’ll be interesting to see how this technology is deployed.
You are not safe online!
Compliance and Safety, a company that makes safety training videos, has a good infographic that resonates with parts of the Doc Searls discussion: who controls your data? How safe are you online? Excerpt below, full graphic at the Compliance and Safety website.
Doc Searls discussion: The Intention Economy
I’m leading a discussion on the WELL with Doc Searls about his new book, The Intention Economy: When Customers Take Charge, which explores new thinking about the power relationship between customers/consumers and vendors. Doc has been rethinking those relationships through Project VRM (via his fellowship at the Berkman Center at Harvard), which has recently led to the creation of a “customer commons.”
It’s an old saw to say that listening to customers is a way to improve and gain new market advantages. But the difference with VRM will be adapting to standards and practices set on the customers’ side – ones that work the same for all companies. There will be less and less leverage in communicating only within a company’s own communication silo. IMHO, “social” services like Twitter and Facebook are not going to provide those standard ways, because they too are privately owned silos.
Scale will only happen when everybody uses the same stuff in the same way. The Internet and its core protocols scaled because they were essentially NEA: Nobody owned them, Everybody used them and Anybody could improve them. (Yes, some were owned in a legal sense, but in a practical sense they were ownerless. This is why, for example, Ethernet beat Token Ring. Intel, Xerox and Digital essentially released Ethernet into the public domain, while IBM wanted to keep Token Ring fully private and charge for it. This bitter lesson had leverage later when IBM embraced Linux.) Email as we know it won because it scaled in exactly that way.
Are computers intelligent?
At Reality Augmented Blog, I recently posted a Storify of my live tweets from Bruce Sterling’s talk at the Turing Centenary Symposium at the University of Texas. Bruce talked about Turing’s investigation into “whether or not it is possible for machinery to show intelligent behaviour” and the Turing test, which is meant to determine how well a computer can at least seem intelligent by human standards. To consider this question, you might think you’d have to define thinking (cognition, consciousness, etc.), but instead of taking on that difficult task, Turing changes the question from “Do machines think?” to “Can machines do what we (as thinking entities) can do?” That’s really a different question – less metaphysical, more about comparing manifestations of thinking than comparing processes of thinking.
Bruce noted in his talk an aspect of the Turing test that doesn’t get much attention: it was originally about gender. In his paper “Computing Machinery and Intelligence,” Turing described the game as “played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman.” He goes on to say:
We now ask the question, “What will happen when a machine takes the part of A in this game?” Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, “Can machines think?”
So as Bruce notes, the actual Turing test is for a machine with a woman’s sensibility. The gist of his talk reminded me of conversations I’ve had with Sandy Stone, a transgender scholar who spent years studying identity hacking online and off. I brought up the question of a man deciding to appear online as a woman, and how real that transformation can be. If you’re a man and decide to be a woman (or vice versa), you can’t quite make the authentic switch, because gender entails years of cultural and behavioral conditioning. If you’ve been contextualized as a male, you don’t become female by changing your name, your voice, your dress, even your body.
In the context of the conversations with Sandy, the subtext always seemed to be about liberation from the trappings of gender – you don’t have to be “a man” or “a woman,” you can just be you. But this has relevance, not just in terms of gender switching, but with any attempt at transformation. And it has implications for the discussion of machine intelligence. Machines can’t “become human” or be like humans, because they have no experience as humans, and you can’t program the embodied human experience. You also can’t program “consciousness” – puny humans aren’t even clear what consciousness is, and we know that things like “consciousness” and “awareness” and “thinking” can be quite subjective and hard to quantify. So when we talk about “artificial intelligence” or “machine intelligence,” that word “intelligence” can be misleading. It’s not about making a machine like a human, it’s about seeing how well a machine can simulate the human. The Turing test is really about how clever we are at programming a bot that does heuristics well and can “learn” to hold its own in a human conversation. It’s interesting to bring gender into it – to simulate the human, a bot would be one or the other.
[Image caption: Rotwang and his lost-love simulation]
Bruce: “Why not ask ‘can a computational system be a woman?’” This made me think of Rotwang’s remaking of Hel in Metropolis, and how she’s repurposed as a simulation of Maria… a robot designed to simulate the female form. Is she a mechano-electronic woman? Or just a bag o’ bytes? More compelling, I think, is the concept of the cyborg, originally described as a biological entity that’s manufactured and has some machine components. More recently, we’ve come to think of cyborgs as “ordinary” humans augmented by digital or other technology – e.g., anyone with a smart phone or a computer could be considered a cyborg. My colleague Amber Case writes about “cyborg anthropology,” acknowledging that synergies within human-machine interaction are transformative and require new methods and fields in the study of humanity. I think cyborgization is more interesting and more real than the Kurzweil sense of “artificial intelligence” (machines “smarter” than humans that become self-aware). HAL 9000 is a mythical beast; computers may be capable of processes that seem intelligent, but back to Bruce’s point, computers are not anything like humans.
Turing himself said “the idea behind digital computers may be explained by saying that these machines are intended to carry out any operations which could be done by a human computer.” On the other hand, Gurdjieff said “man such as we know him, is a machine.” A very complicated machine, he noted elsewhere.
My point in all this is that humans are not machines and machines won’t become human. We’re confused on that point, likely because of a larger metaphysical confusion, a confusion about who and what we are, our place in the universe, and the nature of the various human analogs, similar but different processes, that we see in the egosystem. (That’s not a misspelling…)
Bruce Sterling: “I fear posterity will condemn us for being too clever, for failing to speak about the obvious in an immediate lucid way. We need a new aesthetic with a strong metaphysics. How we get there, I don’t know.”
Augmented cyborgs at SXSW
Another SXSW coming up; it’ll be good to see old friends and make new connections. The Austin Chronicle asked me to write something for their SXSW Interactive issue; that led to an interesting interview with cyborg anthropologist Amber Case, a longer version of which I might post here later. When bOING bOING was a magazine, I was an associate editor listed as “cyborganic jivemeister,” and the magazine I published, FringeWare Review, focused quite a bit on “cyborging.” Originally a science fiction term – a mashup of “cybernetic organism” – “cyborg” represents a potentially huge field of study: how humans interact with, and how human experience is enhanced by, digital technology. If you’ll be at SXSW Interactive, don’t miss Amber’s keynote Sunday, March 11, 2pm at the Austin Convention Center, Exhibit Hall 5 (#SXAmberCase). Meanwhile, after the interview was done, she and I kept talking, and we’ll be working on a project together: a blog on the subject of augmented reality.
Code Across America ATX: A Civic Innovation Hackathon
Google-funded Code for America was in Austin Saturday for a codeathon using data accessible via the city’s data portal. I dropped by the geek chic coworking facility Conjunctured, where the codeathon was happening, and hung out long enough to get a sense of the projects the ~40 coders were tackling. Those included a Bike Accident and Route Safety app, an app for finding miscellaneous stuff around town, and a “garden dating” app (to help people who want a community garden find a space). What was missing? For at least one project (Find It), there were fewer sources of data than the developers would’ve liked. I realized that it’s not enough to bring coders together to create apps – we should also be cultivating data sources. A project to build databases and facilitate citizen input would be a logical complement to the various codeathons.
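Worth noting: most of these civic apps begin with a simple query against the city’s data portal, which speaks a Socrata-style open data API. Here’s a minimal Python sketch of that first step; the dataset ID below is hypothetical, so substitute a real one from the portal’s catalog:

```python
# Minimal sketch: fetch rows from a Socrata-style open data portal
# such as Austin's (data.austintexas.gov). Requires: pip install requests
import requests

PORTAL = "https://data.austintexas.gov/resource"
DATASET_ID = "abcd-1234"  # hypothetical ID; look up real ones in the catalog

def fetch_records(dataset_id, limit=100):
    """Return up to `limit` rows of a dataset as a list of dicts."""
    url = f"{PORTAL}/{dataset_id}.json"
    resp = requests.get(url, params={"$limit": limit})  # SODA paging parameter
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for row in fetch_records(DATASET_ID, limit=5):
        print(row)
```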
Howard Rheingold: Net Smart
In 2009, Howard Rheingold created an excellent mini-course in network literacy, a substantial resource for those who want to learn more about the Internet. Here’s the introductory video:
Howard’s written a book on network and digital literacy called Net Smart: How to Thrive Online.
Bots can shape social interaction
Scientists experimenting with Twitter bots found that the bots could “shape” activity on Twitter to some extent. They’re continuing their studies to get a better understanding of what they’re seeing. [Link]
The origin of the study was explained by Tim Hwang, one of the authors of a research paper describing the socialbot experiments. “A lot of people you can hire now say they are really good at community engagement. Can we measure those claims?”
From the paper linked above:
… although each socialbot was able to connect only a relatively small portion of users from its target group, the findings of this study are extremely significant. These findings indicate the first successful attempts at automatically and programmatically shaping the topology of online communities. Further, while the scale of this study was relatively small, socialbots are designed to be light, efficient, and entirely automatic – and thus, easily deployable in large swarms. We believe this study marks the first step towards demonstrating the ability of such technologies to shape online communities at a large scale.
Wonder if this means we’ll have swarms of marketing bots flooding Twitter and other social systems?
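To make the “shaping the topology” idea concrete, here’s a toy simulation in Python. It doesn’t touch any real Twitter API, and every probability is made up for illustration: bots follow users, some users follow back, and users who followed the same bot become slightly more likely to connect with each other.

```python
# Toy model of socialbots inducing new user-to-user ties; all numbers
# are illustrative assumptions, not values from the paper.
import random

def simulate(num_users=200, num_bots=5, follows_per_bot=40,
             follow_back_p=0.2, intro_p=0.05, seed=42):
    rng = random.Random(seed)
    edges = set()  # directed (follower, followed) pairs among users
    for _bot in range(num_bots):
        targets = rng.sample(range(num_users), follows_per_bot)
        followers = [u for u in targets if rng.random() < follow_back_p]
        # Users who both followed the same bot back may discover each other.
        for a in followers:
            for b in followers:
                if a != b and rng.random() < intro_p:
                    edges.add((a, b))
    return edges

print(f"user-to-user edges induced by bots: {len(simulate())}")
```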
Internet Code Ring! (Interview with Phil Zimmermann, circa 1993)
Discovered that this interview is no longer findable online, so I’m republishing it here. A version of this was published in bOING bOING (the ‘zine) in 1993 or 1994.
We were sitting in a circle on the floor at the Computers, Freedom, and Privacy conference, March ’93 in San Francisco, St. Jude and I with Tom Jennings, Fen La Balme, et al., discussing encryption and other neophiliac rants, when a dapper fellow wandered by with a beard on his face and a tie hanging from his neck. He picked up Jude’s copy of bOING-bOING number 10 and glanced through it, clearly interested. I later learned that this was Phil Zimmermann, creator of PGP (“Pretty Good Privacy”), so I tracked him down and we talked for the record.
Jon: I’m fairly nontechnical, and I’m also new to encryption. I spent some time recently on the cypherpunks’ list, and I have a pretty good sense of what’s going on, but maybe you can tell me in your own words how you came to write PGP, and what your philosophy is, especially with distribution.
Phil: Well, okay. PGP, which means “Pretty Good Privacy,” is a public key encryption program; it uses a public key encryption algorithm, which means that you can encrypt messages and send them to people that you’ve never met, that you’ve never had a chance to exchange keys with over a secure channel. With regular encryption, the kind that everybody has heard about, you encrypt a message, it scrambles it up, renders it unintelligible, and then you send it to someone else, and they can descramble it, decrypting it. They have to use the same key to decrypt it as you used to encrypt it. Well, this is a problem, this is inconvenient, because how are you going to tell them what that key is? What’re you going to do, tell them over the telephone? If someone can intercept the message, they can intercept the key. So this has been the central problem in cryptography for the past couple of millennia. There have been lots of different ways of encrypting information, but they all have this problem.
If you had a secure channel for exchanging keys, why would you need any cryptography at all? So, in the late 1970s, somebody came up with an idea for encrypting information with two keys. The two keys are mathematically related. You use one of the keys to encrypt the message, and use the other key to decrypt the message. As a matter of fact, the keys have a kind of yin-yang relationship, so that either one of them can decrypt what the other one can encrypt. So everybody randomly generates a pair of these keys, the keys are mathematically related, and they can be split apart like cracking a coin in half, so the jagged edges stick together just right. They can publish one of the keys, and keep the other one secret. Now, unlike cracking the coin in half, you can’t look at the jagged edge and figure out what the other jagged edge is going to look like. In fact, you can’t look at the published key and figure out what the secret key is without spending centuries of supercomputer time to do it. This means that any time anybody wants to send you a message, they can encrypt that message with your public key, and then you can decrypt the message with your secret key. If you want to send them a message, then you can encrypt the message with their public key, and then they can decrypt it with their secret key. Everybody who wants to participate in this system can generate a pair of these keys, publish one of them, and keep the other one secret. Everybody’s published key can end up in a big public key directory, like a phone book, or an electronic bulletin board, or something like that. You can look up somebody’s public key, encrypt a message to them, and send it to them. They’re the only ones that can read it, because they’re the only ones that have the corresponding secret key.
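[A minimal modern sketch of the asymmetry Phil describes, in Python with the cryptography library. Real PGP actually uses RSA only to encrypt a random session key and encrypts the message itself with a fast symmetric cipher, but the core idea is the same:]

```python
# Hedged sketch: textbook public key usage. Requires: pip install cryptography
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Each participant generates a mathematically related key pair,
# publishes the public half, and keeps the private half secret.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Anyone who finds your key in the "phone book" can encrypt to it...
ciphertext = public_key.encrypt(b"meet me at CFP", oaep)

# ...but only the holder of the matching secret key can read it.
assert private_key.decrypt(ciphertext, oaep) == b"meet me at CFP"
```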
J: Are there any such directories now?
P: Well, actually, there are starting to be directories like that. For PGP, there are some public key directories on Internet. You can just send an electronic inquiry saying “Give me the key for [somebody],” and it’ll send you their key back, their public key.
J: The convention I’ve seen has been the inclusion of the public key in an email message posted to a mailing list.
P: You can do that, you can include your own public key when you send a message to someone, so that when they send you a reply, they’ll know what public key to use to send the reply. But the problem… there is an Achilles’ heel with public key cryptography, and I’ll get to that in a minute. But first, let me explain authentication. If I want to send you a message and prove that it came from me, I can do that by encrypting it with my own secret key, and then I can send you the message, and you can decrypt it with my public key. Remember I said that the keys are in this yin-yang relationship, so that either one can decrypt what the other one encrypts. If I don’t care about secrecy, if I only care about authentication, if I only want to prove to you that the message came from me, I can encrypt the message with my own secret key and send it to you, and you can decrypt it with my public key. Well, anyone else can decrypt it too, because everyone has my public key. If I want to combine the features of secrecy and authentication, I can do both steps: I can encrypt the message first with my own secret key, thereby creating a signature, and then encrypt it again with your public key. I then send you the message. You reverse those steps: first you decrypt it with your own secret key, and then you decrypt that with my public key. That’s a message that only you can read and only I could have sent. We have secrecy and authentication. So you get authentication by using your own secret key to decrypt a message, thereby signing the message. You can also convince third parties like a judge that the message came from me. That means that I could send you a financial instrument, a legal contract, or some kind of binding agreement. The judge will believe that the message did come from me, because I am the only person with the secret key that could have created that message.
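[Again, a hedged modern sketch of those “both steps,” not PGP’s actual wire format; PGP signs a hash of the message and bundles signature and ciphertext together, while here the signature simply travels alongside:]

```python
# Hedged sketch: sign with the sender's secret key, encrypt with the
# recipient's public key, then reverse the steps on the other end.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

sender = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

message = b"this binding agreement came from me"
signature = sender.sign(message, pss, hashes.SHA256())      # authentication
ciphertext = recipient.public_key().encrypt(message, oaep)  # secrecy

# Recipient: decrypt with their own secret key, then verify with the
# sender's public key (verify() raises InvalidSignature on tampering).
plaintext = recipient.decrypt(ciphertext, oaep)
sender.public_key().verify(signature, plaintext, pss, hashes.SHA256())
print("secrecy and authentication:", plaintext.decode())
```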
Now, public key cryptography has an Achilles’ heel, and that Achilles’ heel is this: suppose you want to send a message to someone, and you look up their public key, on a bulletin board, for example. You take their public key, you encrypt the message, and then you send it to them, and presumably only they can read it. Well, what if Ollie North broke into that BBS system? And he substituted his own public key for the public key of your friend, and left your friend’s name on it, so that it would look like it belonged to your friend. But it really wasn’t your friend’s public key, it was Ollie’s public key that he had created just for this purpose. You send a message, you get the bulletin board to tell you your friend’s public key, but it isn’t your friend’s public key, it’s Ollie’s public key. You encrypt a message with that. You send it, possibly through the same bulletin board, to your friend. Ollie intercepts it, and he can read it because he knows the secret key that goes with it. If he were particularly clever, which Ollie North isn’t, because we all know that he forgot to get those White House backup tapes deleted… but suppose he were clever, he would then re-encrypt the decrypted message, using the stolen key of your friend, and send it to your friend so that he wouldn’t suspect that anything was amiss. This is the Achilles’ heel of public key cryptography, and all public key encryption packages that are worth anything invest a tremendous amount of effort in solving this one problem. Probably half the lines of code in the program are dedicated to solving this one problem. PGP solves this problem by allowing third parties, mutually trusted friends, to sign keys. That proves that they came from who they said they came from. Suppose you wanted to send me a message, and you didn’t know my public key, but you knew George’s public key over here, because George gave you his public key on a floppy disk. I publish my public key on a bulletin board, but before I do, I have George sign it, just like he signs any other message. I have him sign my public key, and I put that on a bulletin board. If you download my key, and it has George’s signature on it, that constitutes a promise by George that that key really belongs to me. He says that my name and my key go together. He signs the whole shootin’ match. If you get that, you can check his signature, because you have his public key to check. If you trust him not to lie, you can believe that really is my public key, and if Ollie North breaks into the bulletin board, he can’t make it look like his key is my key, because he doesn’t know how to forge a signature from George. This is how public key encryption solves the problem, and in particular, PGP solves it by allowing you to designate anyone as a trusted introducer. In this case, this third party is a trusted introducer; you trust him to introduce my key to you.
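[A hedged sketch of that trusted-introducer idea, with illustrative names: George signs the binding between a name and a key, and you check his signature against the copy of George’s public key you already trust:]

```python
# Hedged sketch: a "key signature" as a signed name-plus-key binding.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

def key_bytes(public_key):
    """Serialize a public key so it can be signed like any other message."""
    return public_key.public_bytes(Encoding.PEM, PublicFormat.SubjectPublicKeyInfo)

george = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # introducer
phil = rsa.generate_private_key(public_exponent=65537, key_size=2048)    # key from a BBS

# George signs the whole shootin' match: name plus key.
binding = b"Phil Zimmermann:" + key_bytes(phil.public_key())
key_signature = george.sign(binding, pss, hashes.SHA256())

# You already trust George's public key (he handed it over on a floppy),
# so verifying his signature tells you the downloaded key wasn't swapped.
george.public_key().verify(key_signature, binding, pss, hashes.SHA256())
print("George vouches: this key belongs to Phil Zimmermann")
```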
There are public key encryption packages currently being promoted by the U.S. Government based on a standard called Privacy Enhanced Mail, or PEM. PEM’s architecture has a central certification authority that signs everybody’s public key. If everyone trusts the central authority to sign everyone’s key, and not to lie, then everyone can trust that the key they have is a good key – the key actually belongs to the name that’s attached to it. But a lot of people, especially people who are libertarian-minded, would not feel comfortable with an approach that requires them to trust a central authority. PGP allows grassroots distributed trust, where you get to choose who you trust. It more closely follows the social structures that people are used to. You tend to believe your friends.
J: Did you make a conscious decision up front, before you started programming PGP, that you were going to create something that would be distributed in this grassroots way, free through the Internet?
P: Well, there were some software parts of PGP that I developed some years ago, as far back as 1986, with the intention of developing commercial products with them someday. Over the years that followed, I developed a few more pieces that I hoped someday to turn into a commercial product. But when it finally came down to it, I realized that it would be more politically effective to distribute PGP this way. Besides that, there is a patent on the RSA public key encryption algorithm that PGP is based on. I wrote all of the software from scratch. I didn’t steal any software from the RSA patent holders. But patent law is different from copyright law. While I didn’t steal any software from them, I did use the algorithm, the mathematical formulas that were published in academic journals, describing how to do public key cryptography. I turned those mathematical formulas into lines of computer code, and developed it independently.
J: Did you originally intend to license that?
P: When I first wrote the parts of it back in 1986, I did. But I began in earnest on PGP in December of 1990. At that time, I had decided that I was going to go ahead and publish it for free. I thought that it was politically a useful thing to do, considering the war on drugs and the government’s attitude toward privacy. Shortly after I started on the development, I learned of Senate Bill 266, which was the Omnibus Anticrime Bill. It had a provision tucked away in it, a sense of Congress provision, that would, if it had become real hard law, have required manufacturers of secure communications gear, and presumably cryptographic software, to put back doors in their products to allow the government to obtain the plain text contents of the traffic. I felt that it would be a good idea to try to get PGP out before this became law. As it turned out, it never did pass. It was defeated after a lot of protest from civil liberties groups and industry groups.
J: But if they could get away with passing it, they would still take the initiative and try.
P: Well, yeah, actually… it started out as a sense of Congress bill, which means that it wasn’t binding law. But those things are usually there to lay the political groundwork to make it possible later to make it into hard law. Within a week or so after publishing PGP, Senate Bill 266 went down in defeat – at least that provision was taken out, and that was entirely due to the efforts of others; I had nothing to do with that. PGP didn’t have any impact, it turned out, at all. So that’s why I published PGP.
J: Several of my friends are involved in cypherpunks, and I’ve been on their mailing list… are you affiliated in any way with cypherpunks? Are you getting their mailing list?
P: I was on their mailing list for a couple of days, but I found that the density of traffic was high enough that I couldn’t get any work done, so I had them take me off the list.
J: The reason I bring cypherpunks up is that they seem to have almost a religious fervor about encryption… I’m wondering if you share that.
P: I don’t think of my own interest in cryptography as a religious fervor. I did miss some mortgage payments while I was working on PGP. In fact, I missed five mortgage payments during the development of PGP, so I came pretty close to losing my house. So I must have enough fervor to stay with the project long enough to miss five mortgage payments… but I wouldn’t call it religious fervor.
J: I’m impressed with the way encryption in general and PGP in particular have caught on with the press, how visible it’s become within the last year.
P: Well, PGP 1.0 was released in June of ’91. It only ran on MS-DOS, and it didn’t have a lot of the features necessary to do really good key certification, which is that Achilles’ heel that I told you about. Theoretically, you could use it in a manual mode to do that, but it wasn’t automatic like it is in PGP 2.0 and above. The current release of PGP is 2.2. It’s a lot smoother and more polished than 2.0 was. 2.0 was tremendously different from 1.0, and the reason the popularity has taken off so much since September, when it was released, is because it ran on a lot of UNIX platforms, beginning with 2.0. Since the main vehicle for Internet nodes is UNIX platforms, that made it more popular in the UNIX/Internet world. Since Internet seems to be the fertile soil of discourse on cryptography, the fact that PGP 2.0 began running on UNIX platforms has a lot to do with its popularity since that version was released… That was in September of ’92.
J: The easiest way to get PGP is through FTP from various sites?
P: Yeah. Most of them European sites. PGP 2.0 and above was released in Europe. The people that were working on it were out of reach of U.S. patent law… and not only are they out of reach of patent law, but it also defuses the export control issues, because we’re importing it into the U.S., instead of exporting it. Also, PGP 1.0 was exported, presumably by somebody; any one of thousands of people could have done it… but it was published in the public domain. It’s hard to see how something like that could be published, and thousands of people could have it, and it could not leak overseas. It’s like saying that the New York Times shouldn’t be exported; how can you prevent that when a million people have a copy? It’s blowing in the wind, you can’t embargo the wind.
J: And by beginning in Europe, you sort of fanned the flame that much better.
P: Yeah.
J: It seems to have spread globally, and I’m sure that you’re hearing a lot about it, getting a lot of response.
P: Particularly at this conference (CFP93), yes.
J: Do you plan to do more development of PGP, or are you satisfied with where it is….
P: PGP will be developed further. My personal involvement is more in providing design direction and making sure that the architecture stays sound. The actual coding is taking place overseas, or at least most of it is. We do get patches sent in by people in the U.S. who find bugs, and who say, “I found this bug, here’s a patch to fix it.” But the bulk of the work is taking place outside the U.S. borders.
J: Is there a Mac version as well as a DOS version now?
P: Yeah, there is a Mac version… there was a Mac version released shortly after PGP 2.0 came out. Somebody did that independently, and I only found out about it after it was released. People have written me about it, and it did seem to have some problems. The same guy who did that version is doing a much improved version, Mac PGP version 2.2, which I believe should be out in a few days… that was the last I heard before I came to the conference. The second Mac development group, that’s working on a very “Mac”-ish GUI, is being managed by a guy named Blair Weiss. That takes longer, it’s difficult to write a good Mac application, so it’s probably going to be a couple of months before that hits the streets.
J: Were you involved in the UNIX version, too?
P: I did the first MS-DOS version entirely by myself, but it’s not that big a distance between MS-DOS and UNIX, so most of it was the same. The UNIX port took place soon after PGP 1.0 was released. After that, many other enhancements were added, and major architectural changes took place to the code, and that’s what finally made its way out as version 2.0.
J: You’re doing consulting now?
P: That’s how I make my living, by consulting. I don’t make anything from PGP.
J: Do you think you’ll just let PGP take a life of its own, let other people work on it from here out?
P: Other people are contributing their code, and other people are adding enhancements, with my design direction. Perhaps someday I’ll find a way to make money from PGP, but if I do, it will be done in such a way that there will always be a free version of PGP available.
J: I was thinking of the UNIX thing, where everybody’s modified their versions of the UNIX operating system so that some [customized versions] weren’t even interoperable. I was wondering if there was a chance that PGP would mutate, whether you’re going to keep some sort of control over it, or whether people will start doing their own versions of it….
P: Well, I don’t know, that could happen. There are so many people interested in the product now, it’s hard to keep track of everybody’s changes. When they send in suggested changes, we have to look at them carefully to see that the changes are good changes.
J: But you don’t have some sort of structure in place where you do some kind of approval if somebody wants to make some kind of mutant version of PGP….
P: There is a kind of de facto influence that I have over the product, because it’s still my product, in a kind of psychological sense. In the user population, they associate my name with the product in such a way that, if I say that this product is good, that I have looked at this and I believe the changes made since the last version are good changes, people will believe that. So I can determine the direction, not by some iron law, not by having people work for me that I can hire and fire, but more by my opinion guiding the product. It would not be easy for a person to make a different version of PGP that went in a different direction than how I wanted it to go, because everybody still uses the version that I approved, so to be compatible… this has a kind of inertia to it, a de facto standard. PGP currently, I believe, is the world’s most popular public key encryption program, so it has the potential to become a de facto standard. I don’t know what that means in comparison to the PEM standard. PEM is for a different environment than PGP, perhaps, although the PGP method of certifying keys can be collapsed into a special case that mimics in many respects the PEM model for certifying keys.
The “Internet of Things” gets real
The Internet of Things, predicted by Bruce Sterling around 2006, is happening. Steve Lohr in the NY Times explores the mainstreaming of the idea: “… the protean Internet technologies of computing and communications are rapidly spreading beyond the lucrative consumer bailiwick. Low-cost sensors, clever software and advancing computer firepower are opening the door to new uses in energy conservation, transportation, health care and food distribution. The consumer Internet can be seen as the warm-up act for these technologies.”
Across many industries, products and practices are being transformed by communicating sensors and computing intelligence. The smart industrial gear includes jet engines, bridges and oil rigs that alert their human minders when they need repairs, before equipment failures occur. Computers track sensor data on operating performance of a jet engine, or slight structural changes in an oil rig, looking for telltale patterns that signal coming trouble.
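The pattern-watching described here can be as simple as flagging readings that drift far from a recent baseline. A toy sketch in Python; the window size and threshold are illustrative assumptions, not values from the article:

```python
# Minimal anomaly watcher: flag readings more than `threshold` standard
# deviations from the mean of the previous `window` readings.
from collections import deque
from statistics import mean, stdev

def monitor(readings, window=20, threshold=3.0):
    """Yield (index, value) for readings that look like trouble."""
    recent = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(recent) == window:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                yield i, value
        recent.append(value)

# e.g. vibration readings from a jet engine sensor, with one spike at the end
readings = [1.0 + 0.01 * (i % 5) for i in range(50)] + [4.2]
for i, v in monitor(readings):
    print(f"possible trouble at reading {i}: {v}")
```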
Here’s Bruce at the O’Reilly Emerging Technology Conference in 2006:
DrupalCamp Austin 2011, Day 2
Contact Summit: “It’s time to take back the net”
This week, on October 20, a diverse assortment of forward-thinking, Internet-savvy, solutions-oriented people gathered in New York City for Contact Summit, a project-focused event organized by Doug Rushkoff and Venessa Miemis. I was originally planning to attend, and was plugged into the small team of organizers. I couldn’t make the event, but have been available as a resource for organizers of related global Meetups, and will help sustain the conversation following the event.
Doug had created a prologue video for the remote Meetups scheduled to occur synchronously with the main event. Here’s a summary of his comments in that video – it gives a good idea of what the gathering was about:
It’s time to take back the net. Currently the Internet is much too concerned with marketing, IPOs, and the next killer app, and too little concerned with helping human beings get where we need to go. We want to use the Internet effectively to promote better ways of living, doing commerce, educating, making art, doing spirituality. To collaborate on ideas about how to use the net well. There are a lot of projects that need our assistance. From Arab Spring to Occupy Wall Street, people are rising up. We need solutions. Contact is about finding the others, and working and playing with them to find solutions to age-old problems. In New York on October 20th, we’re having unconference-style meetings plus a two hour bazaar where people will demo their projects. We’ll select projects that most need help, help them get funding and move forward. What it’s really about is planting a flag in the sand, saying the Internet is really about us, not about aiding the bottom line of a few corporations. This goes as deep and as far as we want to take it. The Summit is just a trigger point. It’s time to fold the fringes of the Internet back into the middle and re-ignite the passion and practicality of the Internet. If there were another name for Contact, I would call it “Occupy the Net.” We will collaborate to bring disparate projects with similar goals into harmony, so that anything we can dream will emerge.
Here’s a list of the winning projects from the Bazaar:
- Freedom Tower, Free Network Foundation
- Freedom Box
- 3D Printing: Community Collaboration Catalyst at the Fayetteville Free Library
Here’s a list of winning sessions (selected by attendees):
Upgrading Democracy: Representation is a fundamental concept of our governance, but is encoded in the technology of the 18th century. The modern networked world enables a truer form of representation known variously under the names Dynamic Democracy, Liquid Democracy, and Delegable Proxy voting (see the sketch after this list).
Local Foodsharing platform: I don’t have details on this yet
Kick-Stopper – Crowdsourced Unfunding: This group is dedicated to creating online organizing tools to organize large scale divestment and debt strike campaigns. Join here: http://groups.google.com/group/debt-strike-kick-stopper
Online General Assembly: This group folded itself into the Upgrade Democracy group, but has its own mandate: to create an online version of the General Assembly technique (as practiced by Occupy Wall Street) for consensus building.
Collaboration Matchmaking Application: The idea is to create an application that helps creators, particularly artists, find collaborators on projects. During the final session on this concept, participants decided that this project should grow at its own pace and with a relatively smaller circle.
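Since delegable proxy voting is easy to misread, here’s a toy sketch of how a liquid-democracy tally might resolve delegation chains. The names and the cycle-handling rule are illustrative assumptions, not a spec from the session:

```python
# Toy liquid-democracy tally: each voter either votes directly or
# delegates to someone they trust, and delegations chain.
def tally(direct_votes, delegations):
    """direct_votes: {voter: choice}; delegations: {voter: proxy}."""
    results = {}
    for voter in set(direct_votes) | set(delegations):
        seen, current = set(), voter
        # Follow the delegation chain, guarding against cycles.
        while current in delegations and current not in direct_votes:
            if current in seen:
                current = None  # cycle with no direct vote: ballot is lost
                break
            seen.add(current)
            current = delegations[current]
        choice = direct_votes.get(current)
        if choice is not None:
            results[choice] = results.get(choice, 0) + 1
    return results

votes = {"alice": "yes", "bob": "no"}
proxies = {"carol": "alice", "dave": "carol"}  # dave -> carol -> alice
print(tally(votes, proxies))  # {'yes': 3, 'no': 1}
```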
DJ Lanphier shot video at the event, and has gradually been uploading those to http://www.youtube.com/contactsummit. Here’s an example, a video of Michel Bauwens of the P2P Foundation: “We are discovering together how we should be working.”
RIP Insanely Great Steve Jobs
When Steve Jobs left Apple recently, what seemed like premature obituaries started appearing, so he had the unusual opportunity to see the kind of appreciation usually published postmortem. It’s too bad he’s not around to see the best tribute, boingboing’s retro Apple interface redesign (above).
The phrase often associated with Apple and Jobs was “insanely great” (also the title of a book by Steve Levy). Gary Wolf interviewed Jobs for Wired about “The Next Insanely Great Thing”:
Having children really changes your view on these things. We’re born, we live for a brief instant, and we die. It’s been happening for a long time. Technology is not changing it much – if at all.
These technologies can make life easier, can let us touch people we might not otherwise. You may have a child with a birth defect and be able to get in touch with other parents and support groups, get medical information, the latest experimental drugs. These things can profoundly influence life. I’m not downplaying that. But it’s a disservice to constantly put things in this radical new light – that it’s going to change everything. Things don’t have to change the world to be important.
The Web is going to be very important. Is it going to be a life-changing event for millions of people? No. I mean, maybe. But it’s not an assured Yes at this point. And it’ll probably creep up on people.
It’s certainly not going to be like the first time somebody saw a television. It’s certainly not going to be as profound as when someone in Nebraska first heard a radio broadcast. It’s not going to be that profound.
Gamification of HIV Research
Online gamers playing a game called Foldit “cracked a key protein structure problem that has had scientists scratching their heads for years…in three weeks.”
Foldit invites players to predict protein structures. The game was developed by researchers at the University of Washington as a deliberate way to get gamers to compete at solving scientific problems; it requires them to use spatial and critical thinking skills to build 3D models of protein molecules. In this case, they were invited to build models of the M-PMV protease, an enzyme that plays a key role in how a virus similar to HIV replicates in cells. Few of the players had any background in biochemistry.
By solving the mystery of the 3D structure of the protein, the gamers have helped scientists move a step forward in developing a drug that could stop viruses like HIV from spreading.