What the Internet Is

This works as a manifesto. I didn’t write it — it’s by the brilliant technologist David P. Reed.

Occasionally, people ask my perspective on the Internet, since I often object to confusing it with things like the telephone or Cable TV. Recently I composed a response that captures my perspective, as one of the participants in its genesis, and as an advocate for sustaining its fundamental initial design principles. I hope these words clarify what I believe many of those who continue to create the Internet continue to do, even though most of them are not aware of it. I also hope many will see their interest in keeping the core principles of the Internet alive.

The Internet must be fit to be the best medium of discourse and intercourse [not just one of many media, and not just limited to democratic discourse among humans]. It must be fit to be the best medium for commercial intercourse as well, though that might be subsumed as a proper subset of discourse and intercourse.

Which implies interoperability and non-balkanization of the medium, of course. But it also implies flexibility and evolvability – which *must* be permissionless and as capable as possible of adapting to as-yet-unforeseen uses and incorporating as-yet-unforeseen technologies.

I’ve used the notion of a major language of inter-cultural interaction, like English, Chinese, or Arabic, as an explicit predecessor and model for the Internet’s elements – its protocols and subject matter, its mechanism of self-extension, and its role as a “universal solvent”.

We create English or Chinese or Arabic merely by using it well. We build laws in those frameworks, protocols of all sorts in those frameworks, etc.

But those frameworks are inadequate to include all subjects and practices of discourse and intercourse in our modern digital world. So we invented the Internet – a set of protocols that are extraordinarily simple and extraordinarily independent of medium, while extensible and infinitely complex. The older language frameworks are mature, but they have run into a limit: they cannot be a framework for all forms of digital information. One cannot encode a photograph for transmission in English, yet one can in the framework we have built beginning with the Internet’s IP datagrams, addressing scheme, and agreed-upon mechanics.

The Internet and its protocols are sufficient to support an evolving and ultimately ramifying set of protocols and intercourse forms – ones that have *real* impact beyond jurisdiction or “standards body”.

The key is that the Internet is created by its users, because its users are free to create it. There is no “governor” who has the power to say “no” – you cannot technically communicate that way or about that.

And the other key is that we (the ones who began it, and the ones who now add to it every day, making it better) have proven that we don’t need a system that draws boundaries, says no, and proscribes evolution in order to have a system that flourishes.

It just works.

This is a shock to those who seem to think that one needs to hand all the keys to a powerful company like the old AT&T or to a powerful central “coordinating body” like the ITU, in order for it not to fall apart.

The Internet has proven that the “Tower of Babel” is not inevitable (and it never was), because communications is an increasing returns system – you can’t opt out and hope to improve your lot. Also because “assembly” (that is, group-forming) is an increasing returns system. Whether economically or culturally, the joint creation of systems of discourse and intercourse *by the users* of those systems creates coherence while also supporting innovation.

The problem (if we have any) is those who are either blind to that, or willfully reject what has been shown now for at least 30 years – that the Internet works.

Also there is too much (mis)use of the Fallacy of Composition that has allowed the Internet to be represented as merely what happens when you have packets rather than circuits, or merely what happens when you choose to adopt certain formats and bit layouts. That’s what the “OSI model” is often taken to mean: a specific design document that sits sterile on a shelf, ignoring the dynamic and actual phenomenon of the Internet. A thing is not what it is, at the moment, made of. A river is not the water molecules that currently sit in the river. This is why neither the owners of the fibers and switches nor the IETF can make the Internet safe or secure – that idea is just another Fallacy of Composition. [footnote: many instances of the “end-to-end argument” are arguments based on a Fallacy of Composition].

The Internet is not the wires. It’s not the wires and the fibers. It’s never been the same thing as “Broadband”, though there has been an active effort to confuse the two. It’s not the packets. It’s not the W3C standards document or the IETF’s meetings. It’s NONE of these things – because those things are merely epiphenomena that enable the Internet itself.

The Internet is an abstract noun, not a physical thing. It is not a frequency band or a “service” that should be regulated by one of the service-specific offices of the FCC. It is not a “product” that is “provided” by a provider.

But the Internet is itself, and it includes and is defined by those who have used it, those who are using it and those who will use it.

Infinite spectrum vs scarcity hype

David Isenberg explains that spectrum for various forms of wireless transmission and communication is treated as scarce, similar to real estate, because a scarcity model works for “cellcos” (cellular communication companies, former telcos). In fact, spectrum is infinite. [Link]

The core of the story is whether or not spectrum is a rival good. A rival good is something that, when it’s used by one party, can’t be used by another. The cellcos say it is. Current FCC regulation does too. But David Reed has repeatedly pointed out that physics — our understanding of physical reality — says otherwise. The article paraphrases him: electromagnetic spectrum is not finite. Not finite. In other words, infinite.

Bruce Sterling talk at ATX Hackerspace

I shot this video of Bruce at an EFF-Austin-sponsored event on February 25 at ATX Hackerspace. We were rallying the troops. “You will not have the Internet that you had 20 years ago, that’s not possible. But you don’t have to roll over at the sight of bluster from the Internet’s increasingly desperate enemies…”

Internet Code Ring! (Interview with Phil Zimmermann, circa 1993)

Discovered that this interview is no longer findable online, so I’m republishing it here. A version of this was published in bOING bOING (the ‘zine) in 1993 or 1994.

We were sitting in a circle on the floor at the Computers, Freedom,
and Privacy conference, March ’93 in San Francisco, St. Jude and I
with Tom Jennings, Fen La Balme, et al, discussing encryption and
other neophiliac rants when a dapper fellow wandered by with a
beard on his face and a tie hanging from his neck. He picked up
Jude’s copy of bOING-bOING number 10 and glanced through it,
clearly interested. I later learned that this was Phil Zimmermann,
creator of PGP (“Pretty Good Privacy”), so I tracked him down and
we talked for the record.

Jon: I’m fairly nontechnical, and I’m also new to encryption. I spent
some time recently on the cypherpunks’ list, and I have a pretty
good sense of what’s going on, but maybe you can tell me in your
own words how you came to write PGP, and what your philosophy
is, especially with distribution.

Phil: Well, okay. PGP, which means “Pretty Good Privacy” is a
public key encryption program, it uses a public key encryption
algorithm, which means that you can encrypt messages and you can
send them to people that you’ve never met, that you’ve never had a
chance to exchange keys with over a secure channel. With regular
encryption, the kind that everybody has heard about, you encrypt a
message, it scrambles it up, renders it unintelligible, and then you
send it to someone else, and they can descramble it, decrypting it.
They have to use the same key to decrypt it as you used to encrypt
it. Well, this is a problem, this is inconvenient, because how are you
going to tell them what that key is, what’re you going to do, tell
them over the telephone? If someone can intercept the message, they
can intercept the key. So this has been the central problem in
cryptography for the past couple of millennia. There have been lots of
different ways of encrypting information, but they all have this
problem.

If you had a secure channel for exchanging keys, why do you
need any cryptography at all? So, in the late 1970s, somebody came
up with an idea for encrypting information with two keys. The two
keys are mathematically related. You use one of the keys to encrypt
the message, and use the other key to decrypt the message. As a
matter of fact, the keys have a kind of yin-yang relationship, so that
either one of them can decrypt what the other one can encrypt. So
everybody randomly generates a pair of these keys, the keys are
mathematically related, and they can be split apart like cracking a
coin in half, and the jagged edges stick together just right. They can
publish one of the keys, and keep the other one secret. Now, unlike
cracking the coin in half, you can’t look at the jagged edge, and
figure out what the other jagged edge is going to look like. In fact,
you can’t look at the published key and figure out what the secret
key is without spending centuries of supercomputer time to do it.
This means that any time anybody wants to send you a message,
they can encrypt that message with your public key, and then you
can decrypt the message with your secret key. If you want to send
them a message, then you can encrypt the message with their public
key, and then they can decrypt it with their secret key. Everybody
who wants to participate in this system can generate a pair of these
keys, publish one of them, and keep the other one secret.
Everybody’s published key can end up in a big public key directory,
like a phone book, or an electronic bulletin board, or something like
that. You can look up somebody’s public key, encrypt a message to
them, and send it to them. They’re the only ones that can read it,
because they’re the only ones that have the corresponding secret
key.
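
[Editor’s note: to make Phil’s description concrete, here is a
minimal sketch of the key-pair idea using RSA primitives from
Python’s “cryptography” package. This is my own illustration for
this reprint, not PGP’s actual code; the message text and variable
names are made up.]

```python
# Illustrative sketch of public key encryption (not PGP itself),
# using the Python "cryptography" package.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Everybody generates a mathematically related pair of keys.
secret_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = secret_key.public_key()  # this half can be published anywhere

# Anyone who looks up the public key can encrypt a message to its owner...
ciphertext = public_key.encrypt(
    b"meet me at CFP, March '93",
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# ...but only the holder of the matching secret key can read it.
plaintext = secret_key.decrypt(
    ciphertext,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
assert plaintext == b"meet me at CFP, March '93"
```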

J: Are there any such directories now?

P: Well, actually, there are starting to be directories like that. For
PGP, there are some public key directories on Internet. You can just
send an electronic inquiry saying “Give me the key for
[somebody],” and it’ll send you their key back, their public key.

J: The convention I’ve seen has been the inclusion of the public key
in an email message posted to a mailing list.

P: You can do that, you can include your own public key when you
send a message to someone, so that when they send you a reply,
they’ll know what public key to use to send the reply. But the
problem…there is an achilles heel with public key cryptography, and
I’ll get to that in a minute. But first, let me explain authentication. If
I want to send you a message, and prove that it came from me, I can
do that by encrypting it with my own secret key, and then I can
send you the message, and you can decrypt it with my public key.
Remember I said that the keys are in this yin-yang relationship, so
that either one can decrypt what the other one encrypts. If I don’t
care about secrecy, if I only cared about authentication, if I only
wanted to prove to you that the message came from me, I could
encrypt the message with my own secret key and send it to you, and
you could decrypt it with my public key. Well, anyone else could
decrypt it too, because everyone has my public key. If I want to
combine the features of secrecy and authentication, I can do both
steps: I can encrypt the message first with my own secret key,
thereby creating a signature, and then encrypt it again with your
public key. I then send you the message. You reverse those steps:
first you decrypt it with your own secret key, and then you decrypt
that with my public key. That’s a message that only you can read
and only I could have sent. We have secrecy and authentication. So
you get authentication by using your own secret key to decrypt a
message, thereby signing the message. You can also convince third
parties like a judge that the message came from me. That means that
I could send you a financial instrument, a legal contract or some
kind of binding agreement. The judge will believe that the message
did come from me, because I am the only person with the secret key,
that could have created that message.
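
[Editor’s note: the secrecy-plus-authentication step Phil describes
is what we would now call sign-then-encrypt. Here is a hedged
sketch, again using Python’s “cryptography” package rather than
PGP’s own code; in modern libraries, “encrypting with my own
secret key” is an explicit signature operation.]

```python
# Sketch of secrecy plus authentication: sign with my secret key,
# then encrypt to your public key; you reverse the two steps.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

my_secret = rsa.generate_private_key(public_exponent=65537, key_size=2048)
your_secret = rsa.generate_private_key(public_exponent=65537, key_size=2048)

message = b"I owe you $100 -- Phil"

# "Encrypting with my own secret key" becomes, in modern terms, a signature.
signature = my_secret.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# The message is then encrypted to your public key (a real system would
# bundle the message and the signature together before encrypting).
ciphertext = your_secret.public_key().encrypt(
    message,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# You decrypt with your secret key, then check the signature with my
# public key: only you can read it, and only I could have sent it.
recovered = your_secret.decrypt(
    ciphertext,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
my_secret.public_key().verify(
    signature, recovered,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)  # raises InvalidSignature if the message did not come from me
```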

Now, public key cryptography has an achilles heel, and that
achilles heel is that, suppose you want to send a message to someone,
and you look up their public key, on a bulletin board, for example.
You take their public key and you encrypt the message and then
send it to them, and presumably only they can read it. Well, what if
Ollie North broke into that BBS system? And he substituted his own
public key for the public key of your friend. And left your friend’s
name on it, so that it would look like it belonged to your friend. But
it really wasn’t your friend’s public key, it was Ollie’s public key that
he had created just for this purpose. You send a message, you get the
bulletin board to tell you your friend’s public key, but it isn’t your
friend’s public key, it’s Ollie’s public key. You encrypt a message
with that. You send it, possibly through the same bulletin board, to
your friend. Ollie intercepts it, and he can read it because he knows
the secret key that goes with it. If you were particularly clever,
which Ollie North isn’t because we all know that he forgot to get
those White House backup tapes deleted…but suppose he were
clever, he would then re-encrypt the decrypted message, using the
stolen key of your friend, and send it to your friend so that he
wouldn’t suspect that anything was amiss. This is the achilles’ heel of
public key cryptography, and all public key encryption packages
that are worth anything invest a tremendous amount of effort in
solving this one problem. Probably half the lines of code in the
program are dedicated to solving this one problem. PGP solves this
problem by allowing third parties, mutually trusted friends, to sign
keys. That proves that they came from who they said they came
from. Suppose you wanted to send me a message, and you didn’t
know my public key, but you know George’s public key over here,
because George gave you his public key on a floppy disk. I publish
my public key on a bulletin board, but before I do, I have George
sign it, just like he signs any other message. I have him sign my
public key, and I put that on a bulletin board. If you download my
key, and it has George’s signature on it, that constitutes a promise
by George that that key really belongs to me. He says that my name
and my key go together. He signs the whole shootin’ match. If you
get that, you can check his signature, because you have his public
key to check. If you trust him not to lie, you can believe that really is
my public key, and if Ollie North breaks into the bulletin board, he
can’t make it look like his key is my key, because he doesn’t know
how to forge a signature from George. This is how public key
encryption solves the problem, and in particular, PGP solves it by
allowing you to designate anyone as a trusted introducer. In this
case, this third party is a trusted introducer, you trust him to
introduce my key to you.
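
[Editor’s note: the “trusted introducer” idea is what later became
known as the web of trust. A toy sketch of the logic follows; the
names, the helper function, and the encoding are my own hypothetical
illustration, not how PGP actually formats key certifications.]

```python
# Toy web-of-trust check: George signs the binding between Phil's name
# and Phil's public key. Anyone who already trusts George's key can
# verify that binding before trusting a key pulled off a bulletin board.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes, serialization

george = rsa.generate_private_key(public_exponent=65537, key_size=2048)
phil = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def key_binding(name: str, public_key) -> bytes:
    """Encode 'this name goes with this key' as bytes to be signed."""
    return name.encode() + public_key.public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )

# George, the trusted introducer, signs Phil's name-plus-key.
certification = george.sign(
    key_binding("Phil Zimmermann", phil.public_key()),
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# You downloaded a key claiming to be Phil's. If verify() succeeds,
# George vouches for it; if Ollie swapped in his own key, it fails.
george.public_key().verify(
    certification,
    key_binding("Phil Zimmermann", phil.public_key()),
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
```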

There are public key encryption packages currently being
promoted by the U.S. Government based on a standard called
Privacy Enhanced Mail, or PEM. PEM’s architecture has a central
certification authority that signs everybody’s public key. If everyone
trusts the central authority to sign everyone’s key, and not to lie,
then everyone can trust that the key they have is a good key. The
key actually belongs to the name that’s attached to it. But a lot of
people, especially people who are libertarian-minded, would not feel
comfortable with an approach that requires them to trust a central
authority. PGP allows grassroots distributed trust, where you get to
choose who you trust. It more closely follows the social structures
that people are used to. You tend to believe your friends.

J: Did you make a conscious decision up front, before you started
programming PGP, that you were going to create something that
would be distributed in this grassroots way, free through the
Internet?

P: Well, there were some software parts of PGP that I developed
some years ago, as far back as 1986, that I developed with the
intention of developing commercial products with it someday. Over
the years that followed, I developed a few more pieces that I hoped
someday to turn into a commercial product. But, when it finally
came down to it, I realized that it would be more politically effective
to distribute PGP this way. Besides that, there is a patent on the
RSA public key encryption algorithm that PGP is based on. I wrote
all of the software from scratch. I didn’t steal any software from the
RSA patent holders. But patent law is different from copyright law.
While I didn’t steal any software from them, I did use the algorithm,
the mathematical formulas that were published in academic journals,
describing how to do public key cryptography. I turned those
mathematical formulas into lines of computer code, and developed it
independently.

J: Did you originally intend to license that?

P: When I first wrote the parts of it back in 1986, I did. But I began
in earnest on PGP in December of 1990. At that time, I had decided
that I was going to go ahead and publish it for free. I thought that it
was politically a useful thing to do, considering the war on drugs
and the government’s attitude toward privacy. Shortly after I started
on the development, I learned of Senate Bill 266, which was the
Omnibus Anticrime Bill. It had a provision tucked away in it, a sense
of Congress provision, that would, if it had become real hard law,
have required manufacturers of secure communications gear, and
presumably cryptographic software, to put back doors in their
products to allow the government to obtain the plain text contents
of the traffic. I felt that it would be a good idea to try to get PGP out
before this became law. As it turned out, it never did pass. It was
defeated after a lot of protest from civil liberties groups and industry
groups.

J: But if they could get away with passing it, they would still take the
initiative and try.

P: Well, yeah, actually…it started out as a sense of Congress bill,
which means that it wasn’t binding law. But those things are usually
meant to lay the political groundwork to make it possible later to
make it into hard law. Within a week or so after publishing PGP,
Senate Bill 266 went down in defeat, at least that provision was
taken out, and that was entirely due to the efforts of others, I had
nothing to do with that. PGP didn’t have any impact, it turned out,
at all. So that’s why I published PGP.

J: Several of my friends are involved in cypherpunks, and I’ve been
on their mailing list…are you affiliated in any way with
cypherpunks? Are you getting their mailing list?

P: I was on their mailing list for a couple of days, but I found that
the density of traffic was high enough that I couldn’t get any work
done, so I had them take me off the list.

J: The reason I bring cypherpunks up is that they seem to have
almost a religious fervor about encryption. I was
wondering if you share that.

P: I don’t think of my own interest in cryptography as a religious
fervor. I did miss some mortgage payments while I was working on
PGP. In fact, I missed five mortgage payments during the
development of PGP, so I came pretty close to losing my house. So I
must have enough fervor to stay with the project long enough to
miss five mortgage payments. But I don’t think it’s a
religious fervor.

J: I’m impressed with the way encryption in general and PGP in
particular have caught on with the press, how prominent it’s become
within the last year.

P: Well, PGP 1.0 was released in June of ’91. It only ran on MS
DOS, and it didn’t have a lot of the features necessary to do really
good key certification, which is that achilles’ heel that I told you
about. Theoretically, you could use it in a manual mode to do that,
but it wasn’t automatic like it is in PGP 2.0 and above. The current
release of PGP is 2.2. It’s a lot smoother and more polished than 2.0
was. 2.0 was tremendously different than 1.0, and the reason the
popularity has taken off so much since September, when it was
released, is because it ran on a lot of UNIX platforms, beginning
with 2.0. Since the main vehicle for Internet nodes is UNIX
platforms, that made it more popular in the UNIX/Internet world.
Since Internet seems to be the fertile soil of discourse on
cryptography, the fact that PGP 2.0 began running on UNIX
platforms has a lot to do with its popularity since that version was
released… That was in September of ’92.

J: The easiest way to get PGP is through FTP from various sites?

P: Yeah. Most of them European sites. PGP 2.0 and above was
released in Europe. The people that were working on it were out of
reach of U.S. patent law…and not only are they out of reach of patent
law, but it also defuses the export control issues, because we’re
importing it into the U.S., instead of exporting it. Also PGP 1.0 was
exported, presumably by somebody, any one of thousands of people
could have done it…but it was published in the public domain. It’s
hard to see how something like that could be published, and
thousands of people could have it, and it could not leak overseas. It’s
like saying that the New York Times shouldn’t be exported, how can
you prevent that when a million people have a copy? It’s blowing in
the wind, you can’t embargo the wind.

J: And by beginning in Europe, you sort of fanned the flame that
much better.

P: Yeah.

J: It seems to have spread globally, and I’m sure that you’re hearing a
lot about it, getting a lot of response.

P: Particularly at this conference (CFP93), yes.

J: Do you plan to do more development of PGP, or are you satisfied
with where it is….

P: PGP will be developed further. My personal involvement is more
in providing design direction and making sure that the architecture
stays sound. The actual coding is taking place overseas, or at least
most of it is. We do get patches sent in by people in the U.S. who
find bugs, and who say, “I found this bug, here’s a patch to fix it.”
But the bulk of the work is taking place outside the U.S. borders.

J: Is there a Mac version as well as a DOS version now?

P: Yeah, there is a Mac version…there was a Mac version released
shortly after PGP 2.0 came out. Somebody did that independently,
and I only found out about it after it was released. People have
written me about it, and it did seem to have some problems. The
same guy who did that version is doing a much improved version,
Mac PGP version 2.2, which I believe should be out in a few
days…that was the last I heard before I came to the conference. The
second Mac development group, that’s working on a very “Mac”-ish
GUI, is being managed by a guy named Blair Weiss. That takes
longer, it’s difficult to write a good Mac application, so it’s probably
going to be a couple of months before that hits the streets.

J: Were you involved in the UNIX version, too?

P: I did the first MS-DOS version entirely by myself, but it’s not
that big a distance between MS-DOS and UNIX, so most of it was
the same. The UNIX port took place soon after PGP 1.0 was
released. After that, many other enhancements were added, and
major architectural changes took place to the code, and that’s what
finally made its way out as version 2.0.

J: You’re doing consulting now?

P: That’s how I make my living, by consulting. I don’t make
anything from PGP.

J: Do you think you’ll just let PGP take a life of its own, let other
people work on it from here out?

P: Other people are contributing their code, and other people are
adding enhancements, with my design direction. Perhaps someday
I’ll find a way to make money from PGP, but if I do, it will be done
in such a way that there will always be a free version of PGP
available.

J: I was thinking of the UNIX thing, where everybody’s modified
their versions of the UNIX Operating System so that some
[customized versions] weren’t even interoperable. I was wondering
if there was a chance that PGP would mutate, whether you’re going
to keep some sort of control over it, or whether people will start
doing their own versions of it….

P: Well, I don’t know, that could happen. There are so many people
interested in the product now, it’s hard to keep track of everybody’s
changes. When they send in suggested changes, we have to look at them
carefully to see that the changes are good changes.

J: But you don’t have some sort of structure in place where you do
some kind of approval if somebody wants to make some kind of
mutant version of PGP….

P: There is a kind of de facto influence that I have over the product,
because it’s still my product, in a kind of psychological sense. In the
user population, they associate my name with the product in such a
way that, if I say that this product is good, that I have looked at this
and that I believe the changes made since the last version are good
changes, that people will believe that. So I can determine the
direction, not by some iron law, not by having people work for me
that I can hire and fire, but more by my opinion guiding the product.
It would not be easy for a person to make a different version of PGP
that went in a different direction than how I wanted it to go, because
everybody still uses the version that I approved, so to be
compatible…this has a kind of inertia to it, a de facto standard. PGP
currently, I believe, is the world’s most popular public key
encryption program, so that has potential to become a de facto
standard. I don’t know what that means in comparison to the PEM
standard. PEM is for a different environment than PGP, perhaps,
although the PGP method of certifying keys can be collapsed into a
special case that mimics in many respects the PEM model for
certifying keys.

How should the Internet be governed?

This piece hints at the politicization of the Internet and the complexity of its future. The Internet Corporation for Assigned Names and Numbers (ICANN) is the closest thing we have to “Internet governance.” It’s the organization that coordinates the standards and processes associated with Internet addresses – the assigned names and numbers referenced in the organization’s name. In “ICANN’s ‘Unelected’ Crisis,” Michael Roberts writes about the controversy over ICANN’s unelected leadership and multistakeholder model. “If ICANN is to maintain its quasi-independence, a hard boiled, Kissinger-like brand of pragmatic statesmanship will be necessary.” [Link]

Increase bandwidth exponentially

Internet prime mover Vint Cerf echoes what I’ve been hearing from other architects of the TCP/IP network: we should focus on building much fatter pipes, getting away from enforced/legacy scarcity and building gigabit broadband networks. Nothing here about the cost of providing gigabit access, nothing here about the fact that much of the (rural) U.S. has no access to broadband at any speed. What policies do we need to have pervasive gigabit broadband, urban and rural, in the U.S.? Who will pay for the buildout? [Link]

EFF-Austin revives meeting series

EFF-Austin has been quietly working along, presenting occasional events (like the recent Texas Government 2.0 Camp), operating its email list, and publishing occasional blog posts, but we’ve had a growing sense of a need to ramp up our activities. We can see major threats to the distributed, decentralized Internet and the expectation of a “freedom to connect,” and there are all sorts of interesting conversations and movements emerging that are relevant to EFF-Austin’s stated mission: “EFF-Austin advocates establishment and protection of digital rights and defense of the wealth of digital information, innovation, and technology. We promote the right of all citizens to communicate and share information without unreasonable constraint. We also advocate the fundamental right to explore, tinker, create, and innovate along the frontier of emerging technologies.”

A first important step in our ramping up: we’re restarting our monthly meeting series, coordinated by our new board member, Anna Kozminski. The first of the new series of meetings is June 1 at the Flying Saucer at the Triangle – information and (free) registration here. Open Source software developer Tom Brown, who among other things maintains oscurrency for the Austin Time Exchange and founded Superborrownet, Inc., will talk about his experience attending Internet Identity Workshop 12, and about the Identity Commons movement in general. Come to the meeting, express your support for EFF-Austin’s mission, volunteer to be part of our future going forward.

(Note that EFF-Austin is an independent nonprofit formed originally as a potential chapter of the national Electronic Frontier Foundation (EFF). When EFF decided not to have chapters, we went our own way as an influential organization focused on cyber liberties and more, such as digital culture and arts.)

Transitional Internet

I continue to be focused on the future of the Internet and aware of divergent paths. In the later 2000s, following a period of digital and media convergence and given broad adoption of evolving high speed (broadband) network connectivity, the Internet has become an environment for mixed media and marketing. The Internet is increasingly centralized as a platform that serves a global business engine. It’s a mashup of business to business services and business to consumer connections, a mashup of all the forms of audio, text, and video communication and media in a new, more social/participatory context: the faceless consumer now has an avatar, an email address, and a feedback loop.

The sense of the Internet as a decentralized free and open space has changed, but there are still many advocates and strong arguments for approaches that are bottom-up, network-centric, free as in freedom (and sometimes as in beer), open, collaborative, decentralized. It’s tempting to see top-down corporate approaches vs bottom-up “free culture” approaches as mutually exclusive, but I think they can and will coexist. Rather than make value judgements about the different approaches, I want to support education and thinking about ethics, something I should discuss later.

Right now I want to point to a collaboration forming around the work of Venessa Miemis, who’s been curating trends, models, and projects associated with the decentralized Internet model. Venessa and her colleagues (including myself) have been discussing how to build a decentralized network that is broadly and cheaply accessible and that is more of a cooperative, serving the public interest rather than a narrower set of economic interests.

I’ll be focusing on these sorts of projects here and in my talks on the future of the Internet. Meanwhile, here are pointers to a couple of Venessa’s posts that are good overviews for what I’m talking about. I appreciate her clarity and focus.

There’s also the work of Michel Bauwens and the P2P Foundation, which I’ve followed for several years. The P2P Wiki has relevant pages:

Routing around suppression

The Net interprets censorship as damage and routes around it. – John Gilmore, 1993

I thought about John’s quote yesterday when I heard about attempts to block access to the Internet in Egypt. It ain’t working, per a couple of links Robert Steele sent me.

From The Atlantic’s website:

And now many Egyptians are finding ways around the cuts and getting back on the Internet, allowing them to more easily communicate with the outside world and spread information from the inside. One popular method is to use the local phone lines, which remain intact. The trick is to bypass local Egyptian ISPs (Internet Service Providers) by connecting to remote ones hosted in outside countries — many are hosted here in the United States; Los Angeles seems, for whatever reason, to be a popular site. This is easy enough for the most computer-illiterate among us to do using basic settings and a built-in ‘Help’ function, but Egyptians have a second hurdle as most homes in the country are unable to call internationally. One way that many are getting around this is by linking through a mobile phone network by establishing a connection between a cell with built-in bluetooth compatibility and a laptop with similar functionality or a computer with a bluetooth dongle.

Tim Wu and the future of the Internet

Tim Wu explains the rise and fall of information monopolies in a conversation with New York Times blogger Nick Bilton. Author of The Master Switch: The Rise and Fall of Information Empires (Borzoi Books), Wu is known for the concept of “net neutrality.” He’s been thinking about this stuff for several years, and has as much clarity as anyone (which is still not much) about the future of the Internet.

I think the natural tendency would be for the system to move toward a monopoly control, but everything that’s natural isn’t necessarily inevitable. For years everyone thought that every republic would eventually turn into a dictatorship. So I think if people want to, we can maintain a greater openness, but it’s unclear if Americans really want that…. The question is whether there is something about the Internet that is fundamentally different, or about these times that is intrinsically more dynamic, that we don’t repeat the past. I know the Internet was designed to resist integration, designed to resist centralized control, and that design defeated firms like AOL and Time Warner. But firms today, like Apple, make it unclear if the Internet is something lasting or just another cycle.

Advocating for the Open Internet

“Net neutrality” and “freedom to connect” might be loaded or vague terminologies; the label “Open Internet” is clearer, more effective, and in no way misleading. A group of Internet experts and pioneers submitted a paper to the FCC that defines the Open Internet and explains how it differs from networks that are dedicated to specialized services, and why that distinction is important. The Internet is a general purpose network for all, and it can’t be appreciated (or properly regulated) unless this point and its implications are well understood. I signed on (late) to the paper, which is freely available at Scribd, and which is worth reading and disseminating even among people who don’t completely get it. I think the meaning and relevance of the distinction will sink in, even with those who don’t have deep knowledge of the Internet and, more generally, computer networking. The key point is that “the Internet should be delineated from specialized services specifically based on whether network providers treat the transmission of packets in special ways according to the applications those packets support. Transmitting packets without regard for application, in a best efforts manner, is at the very core of how the Internet provides a general purpose platform that is open and conducive to innovation by all end users.”

Cory Doctorow in Austin

Cory Doctorow was in Austin yesterday on a book tour for his new young adult novel, For the Win. Cory says he likes writing young adult fiction because it’s for people who use it, not just for entertainment, but to figure out the world. Cory introduced me to another science fiction writer, Steven Brust, now living in Austin. I love the information-dense, visionary, ironic and funny conversations science fiction geeks have, just casually over dinner or drinks. Cory’s not just a science fiction geek, though – he’s also an Internet maven and activist, especially focused on issues like copyfight and freedom to connect. Cory is a former EFF online activist and board member of EFF-Austin, which threw an after party, called “Whuffiefest,” following his book signing. Produced by Plutopia Productions, the event had a large and enthusiastic turnout.

Fiber Fete: Google’s fiber testbed

Minnie Ingersoll of Google at Fiber Fete talking about what Google is doing with its fiber testbed project.

What they want to do:
1) Next-generation applications.
2) Experiment with new and innovative ways to build out fiber networks.
3) Work with “open access” networks.

Not becoming a national ISP or cable TV provider. Google had suggested the FCC needed to create this kind of testbed, but realized the Commission had other priorities. Google realized this would be within their purview based on their mission statement.

Application review process for proposals from cities wanting the testbed project has begun. Over 1,100 communities applied. Evaluating based on speed and efficiency of deployment. Understanding how the community will benefit. Much will depend on the conversation they have with the communities as they learn more about their needs.

Working now on developing the offering. Openness – is this a white label or wholesale service? What products and service partnerships are possible? Google will also develop its own high-bandwidth offerings.

May choose more than one community with very different characteristics.

Applications are full of civic pride. You learn what makes the various locations/communities unique.

Will announce services as soon as possible.

Leverage the enthusiasm – Google to create a web site to help communities connect with other resources. Don’t want to have cities feel excluded from getting higher-end broadband services.

What policies need to be in place to support broadband now?

Brough Turner asks about middle mile networks. Something Google looks at – where do they already have fiber? Sometimes communities farthest from the infrastructure, though, are the ones that would benefit most.

Bice Wilson: enthusiastic about the “leverage the enthusiasm” concept. All the people in the room represent communities that are inventing this new cultural process. Google is helping drive the process. Are you planning to make this useful in that way (as a model)?

Google is looking for specific ways to keep the applicant communities talking to each other. Is it an email list? A forum? A wiki? Definitely looking to Open Source, create white papers and best practices from what they do so that others can benefit.

David Olsen from Portland: what type of testbed environment? Also thanks for what Google has done to raise consciousness of cities about significance of broadband.

Urban vs rural: not sure whether it will be 1, 2, 3, 4 communities. Might be in different communities, or neighborhoods or subsets of a community. Will probably be looking for more than one community, with differences. Probably a mix.

David Weinberger wonders how raw the data Google outputs about the project will be, and how immediate. Google hopes to satisfy people with both the amount of data and its immediacy. Google will be responsive to feedback, so people can let them know whether they’re providing enough info.

Marc Canter brings up political issues around municipalities providing pipes. Have they heard from AT&T and Comcast, etc.?

Google is definitely inviting the other providers to use their pipes. There’s plenty of room in the broadband space, and no one company has a monopoly on innovation. Discussions are ongoing about partnerships.

How open is open? What rules will there be?

Google will advocate policies around net neutrality, e.g. no content discrimination.

Garrett Conrad asks about leveraging Google’s apps vs apps the community might come up with?

The community aspect will be key, crucial. It would be wrong for Google to tell the community what they need… will be listening, but will also be prepared to offer guidance and applications.

Leslie Nulty asks what is the business structure concept that lies behind this? It’s not completely clear. Appears that Google intends to build, own, and operate these networks – become a monopoly provider. What are the checks and balances? Will Google become an unregulated monopoly?

Some of the checks will be the published openness principles Google expects to be held to. Not a monopoly. Will offer reselling opportunities.

Canter: if you’re open, it’s not a monopoly.

The openness is of the service offering on top of our pipe. We’re not trying to force people into using Google apps.

Google does plan to build, own, and operate these networks in trial communities.

Nulty: price is the question for any community that might want to partner with Google.

Services will be competitively priced. Will negotiate with the municipalities on a contract that both think is fair. Google will be as transparent as they can be, and if there’s something they’re missing, let them know.

State regulations preventing broadband a barrier? Google wants to learn more about regulations and policies. Ask communities to explain regulatory barriers for their specific communities as part of RFI response.

Chris: Communities United for Broadband on Facebook.

Nancy Guerrera (sp?). Wants to know what it’s like working with local communities. Refers to previous project in San Francisco to set up muni wifi. Ended up building in Mountain View after discussions with SF didn’t work out. Google learned from this, though each community is different.

Will Google’s transparency extend to documenting issues/discussions with policy organizations?

Yes, if the press doesn’t document for us, we’ll do our best to document legal and regulatory barriers we encounter.