From left, Jennifer
Rexford ’91, Larry Peterson, and Robert Kahn *64 — considered
a father of the Internet — in September at a Princeton conference
on the Internet’s past and future.
(Frank Wojciechowski)
Rethinking the Internet
Some computer scientists think it’s time for an overhaul
By Kenneth Chang ’87
Reinvent the Internet.
That’s what some computer-networking researchers would like to
do — and they’re hoping that the National Science Foundation
will invest several hundred million dollars to build a testing ground
where they can try out their ideas.
The Internet, which perhaps has transformed society more than any other
invention of the late 20th century, is far from crippled. Many millions
of people tap in every day. It has created gargantuan and growing swaths
of the economy that did not exist a couple of decades ago. So why reinvent
something that’s not broken?
One answer: What is good can always be better. Innovations that have
so changed modern life — like Skype, Napster, and even Google, for
example — reflect applications on the edges of the Internet. Larry
L. Peterson, the chairman of Princeton’s computer science department,
and other researchers believe that experimenting on the Internet’s
core could lead to a faster, more secure, more robust, and more flexible
network, giving rise to new uses that the current Internet cannot reliably
deliver. Now, data are encoded as light pulses that travel through glass
fibers rather than as electrical signals in old-fashioned copper wires
— but one can imagine that the network could be fine-tuned to take
advantage of the unique properties of light. And perhaps, some computer
scientists suggest, a more advanced Internet could better serve less-developed
parts of the world — where network connections are intermittent
— with improved handling of data.
But how to get there? The most innovative experiments cannot be done
on the existing Internet, because industry — which has played a
dominant role in its growth and direction — relies on it to operate
smoothly, all the time. “Often people kill off interesting lines
of inquiry because they aren’t compatible with the Internet as it
exists today,” says Jennifer Rexford ’91, a Princeton professor
of computer science.
Instead of merely jury-rigging fixes into the existing Internet, Peterson
and Rexford believe much can be learned about possible improvements by
designing a new network from the ground up — one that parallels
the existing Internet, on which researchers can run their most innovative
experiments. Both are key members of the planning group working to create
such a network, a project known as the Global Environment for Network
Innovations — or GENI, for short. The National Science Foundation
already has established a GENI project office, run by BBN Technologies
of Cambridge, Mass., the same firm — then known as Bolt Beranek
and Newman Inc. — that the Defense Department tapped in 1969 to
design the forerunner of the Internet, called the ARPANET. (ARPA stood
for Advanced Research Projects Agency, the branch of the Department of
Defense that funds research aimed at revolutionary breakthroughs useful
for the military. Today, it’s known as DARPA, with “Defense”
added to the name.)
Over the next few years, BBN will flesh out a proposal for what GENI
would consist of and the kinds of research it would be used for. Then
the National Science Foundation will decide whether to actually build
it, with new computers, fibers, and switching equipment — at a price
tag estimated to be between $300 million and $500 million. GENI, if it
becomes a reality, probably would take three years to build, and could
open as a research facility around 2013 for computer scientists to begin
running their experiments.
Peterson calls the project a “moon shot” for a field where
innovation often has come by modest means, from one or two people working
in a basement or garage, not by giant, expensive collaborations. (Think
Steve Jobs and Steve Wozniak putting together their first computer in
the Jobs family garage, or Marc Andreessen writing the first graphical
Web browser while an undergraduate at the University of Illinois.) The
project would mark computer science’s entry into the arena of Big
Science, like the giant particle colliders of physicists and the genome
projects of biologists.
“Computer science has never done this,” says Peterson, who
already has initiated two smaller efforts at Princeton designed to jump-start
networking research. “If you look at the other sciences, they build
large scientific instruments all the time. There’s a pipeline of
projects to study one scientific question or another.”
Before explaining how GENI looks to reinvent the Internet, it is perhaps
necessary to explain what the Internet is. For most people, the Internet
is a magical conduit that fetches information from Somewhere Out There
and plops it on the computer screen. The Internet is routinely conflated
with the World Wide Web, the most familiar application that runs on the
Internet. That’s like confusing cars with the road beneath them.
The Internet does not work the way the other familiar communications
network — the telephone system — traditionally has worked.
When you place a phone call, there is a circuit connecting your phone
with the phone of the person to whom you’re talking. The telephone
system has an elaborate switching system for opening circuits for each
phone call.
But the Internet is, in a rough analogy, more like the U.S. mail. Write
a letter — the old-fashioned kind, using ink on paper. Then cut
it up into pieces. Place each piece of paper in an envelope, and address
the envelopes to the same recipient. Drop the envelopes into a mailbox.
A postal carrier picks up the envelopes and takes them to the post office.
Based on the postal address, the envelopes will be routed to the appropriate
truck headed to another post office. After a few more jumps, another postal
carrier will deliver the envelopes to their destination, where the recipient
can open them up and piece together your original letter.
That, in oversimplified essence, is how the Internet works. In Internet
parlance, a “packet” is the equivalent of the piece of paper
in the envelope, and the “header” of the packet provides the
address of the computer to which it’s headed. Instead of post offices,
devices called routers send the packets merrily along their way.
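The mail analogy above can be sketched in a few lines of Python. This is a toy illustration only: the field names (“dst”, “seq”) and the fixed chunk size are invented for clarity, not the real layout of an IP header.

```python
import random

def packetize(message, dst, size=8):
    """Cut a message into fixed-size pieces, each with a small 'envelope' header."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [{"dst": dst, "seq": n, "data": c} for n, c in enumerate(chunks)]

def reassemble(packets):
    """The recipient pieces the 'letter' back together using the sequence numbers."""
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = packetize("Reinvent the Internet.", dst="10.0.0.7")
random.shuffle(packets)          # packets may arrive in any order...
print(reassemble(packets))       # ...but the headers let the recipient restore it
```

The point of the sketch is the same as the point of the envelopes: each packet carries enough addressing and ordering information to travel independently.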
The Internet owes its genesis in large part to Robert E. Kahn *64.
After receiving his master’s and doctoral degrees in electrical
engineering from Princeton, he worked at Bell Laboratories, then became
an assistant professor of electrical engineering at MIT. He took a leave
of absence from his faculty post for his first foray into designing computer
networks, working for Bolt Beranek and Newman Inc. on a proposal to build
the ARPANET. He never returned to MIT, staying at BBN when it won the
contract.
In 1972, Kahn left BBN to head the Information Processing Techniques
Office at ARPA, located in Virginia, just outside Washington, D.C., and
there he started the Internet. By then, the first segments of the ARPANET,
one of the first networks to connect computers at widely separated locations,
were up and running. Plans had been drawn up to extend the network to
Europe via satellite. Kahn started another project called Packet Radio
— a network of computers communicating with each other via radio
signals. For the military, that could enable computer links to Navy ships
and Army infantrymen carrying computers in their backpacks.
Kahn had a big idea: He imagined linking everything together, so that
the backpack computers and shipboard computers also could talk to the
military mainframes on the ARPANET. He enlisted the help of Vinton G.
Cerf, a computer scientist at Stanford who also had worked on ARPANET.
In 1974, they published a paper that described the underlying protocols
for the Internet, which are now known as TCP/IP. (TCP is “Transmission
Control Protocol,” which specifies the set of rules for two computers
to talk to each other; IP is “Internet Protocol,” which defines
an address for each computer.)
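That division of labor can be sketched in Python. The address comes from the standard library’s ipaddress module; the retransmission loop is a deliberately simplified stand-in for TCP’s reliability rules, and the “dropped copy” bookkeeping is made up for illustration.

```python
import ipaddress

# IP's job: name the destination computer (192.0.2.x is a documentation range).
dst = ipaddress.ip_address("192.0.2.10")

def send_reliably(packets, dropped_copies):
    """TCP-like rule of thumb: retransmit each packet until a copy gets through."""
    delivered = []
    for seq, data in enumerate(packets):
        attempt = 1
        while (seq, attempt) in dropped_copies:  # no acknowledgment: try again
            attempt += 1
        delivered.append(data)
    return delivered

# Pretend the network loses the first copy of packet 1; TCP's rules hide that.
got = send_reliably(["He", "ll", "o"], dropped_copies={(1, 1)})
print(dst, "".join(got))  # 192.0.2.10 Hello
```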
Today, the fundamental idea of the Internet — everything connected
to everything — seems quaint and obvious. But it was not self-evident
then. In 1974, the information superhighway was still just a few dirt
roads. The king of computing was the mainframe computer, the minicomputer
had just been invented, and the personal computer would not be invented
for a few more years. ARPANET was the only computer network, and computers
on the ARPANET numbered in the tens.
Just as IBM mainframes could not run software written for competing
mainframes from Burroughs, NCR, and Honeywell, IBM was not interested
in developing technologies that would make it easier for its computers
to talk to other computers. Perhaps in a different world, if computer
companies and not DARPA had laid the groundwork, there might be several
overlapping and incompatible versions of the Internet, each connecting
a different brand of computer.
In any case, when Kahn and Cerf got to work, the idea of a wider Internet
was not one that intrigued many. That made the task of inventing the Internet
easier. “We were very fortunate to be at a place and time when the
technology allowed this to happen, we had the resources, and nobody cared,”
Kahn said during a September panel discussion in Princeton. The event,
called “Re-imagining the Internet,” also featured Peterson
and Rexford, and marked Peterson’s ascension to a new endowed chair
named in honor of Kahn.
The brilliant simplicity of Kahn and Cerf’s idea can be seen in
its name: Internet. Inter Net. Interconnection between networks —
a notion that’s essential because different types of networks can
operate quite differently. How data fly across a cell-phone data network
is quite different from how Ethernet connects computers on a small computer
network. In essence, one is speaking Chinese and the other is speaking
Swahili; they cannot directly talk to each other. What Kahn and Cerf did
essentially was to create an Esperanto that all computer networks, despite
their different underlying protocols, could understand. That allowed the
ARPANET, and many other computer networks, to join the Internet later
and become an interconnected larger network.
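The “Esperanto” idea can also be sketched: each network carries the same common packet inside its own local wrapping, and a gateway between two networks simply unwraps one framing and applies the other. Both frame formats here are invented for illustration.

```python
# One common packet format that every network agrees to carry.
common_packet = {"src": "net-A:3", "dst": "net-B:9", "data": "hi"}

def frame_wired(pkt):
    """How a wired network (speaking 'Chinese') might wrap the packet."""
    return {"wire_header": "local framing", "payload": pkt}

def frame_radio(pkt):
    """How a packet-radio network (speaking 'Swahili') might wrap it."""
    return {"radio_preamble": "local framing", "payload": pkt}

def gateway(wired_frame):
    """Unwrap one network's framing, re-wrap in the other's; the packet survives."""
    return frame_radio(wired_frame["payload"])

out = gateway(frame_wired(common_packet))
print(out["payload"]["data"])  # hi
```

The two networks never need to understand each other’s framing; they only need to agree on the common packet in the middle.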
Looking back, Kahn says that if he knew then what the Internet would
become, he would have made some changes. He would have provided for a
larger, more flexible system for assigning network addresses, because
the numbering system for computers on the Internet will run out of numbers
— in much the same way that a shortage of telephone numbers required
rule changes and additional area codes. He also would have worked on establishing
a check that the sender of data was indeed the source whom it claimed
to be, which would have made many of the hacker attacks today much more
difficult.
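The address shortage Kahn describes is concrete. An IPv4 address is a 32-bit number, so there are only about 4.3 billion of them; IPv6, the later remedy, widened the field to 128 bits. The standard library’s ipaddress module makes the comparison directly:

```python
import ipaddress

v4 = ipaddress.ip_network("0.0.0.0/0")  # the entire IPv4 address space
v6 = ipaddress.ip_network("::/0")       # the entire IPv6 address space

print(f"IPv4 addresses: {v4.num_addresses:,}")    # 4,294,967,296
print(f"IPv6 addresses: {v6.num_addresses:.3e}")  # 3.403e+38
```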
The fact that Kahn and Cerf’s Internet protocols made so few assumptions
about the underlying hardware has allowed the physical networks to change
entirely — from the copper wires of the 1970s to the optical fibers
and Wi-Fi of today — while the protocols, with some updating, still
work. It’s almost as if the street signs and traffic rules designed
before the invention of the automobile still worked effectively with today’s
Interstate highways. The flexibility has enabled programmers to invent
the World Wide Web, video conferencing, music sharing, and other programs
without altering the network itself.
Still, over the years, as the Internet outgrew some of its original
design, a committee of experts would gather to fix emerging problems.
It was not easy: Efforts to implement even small adjustments often ran
into opposition from one or more of the many parties that now have a stake
in the Internet — from Cisco, the maker of network-routing equipment,
to the commercial companies that now run most of the Internet, to other
nations. If Cisco declined to implement a change that could speed the
Internet, for example, then the good idea likely would fall by the wayside.
Other impediments have been geographical, with some countries complaining
that the United States still exerts too much control over the now-international
Internet. Some researchers — including Peterson, who served on one
of the task forces — wondered if this growing patchwork of small
fixes was the best approach or whether it was time for the Internet to
undergo a gut renovation.
Peterson recalls the frustration he felt as the task-force discussions
progressed. “These are the best and the brightest, who have been
thinking about the Internet over the past 20-some years,” he says.
“And we’re spending the entire meeting trying to figure out
how we can convince Cisco to reinterpret one bit in the IP header. That’s
what we’ve been reduced to, and that’s when I realized there’s
got to be a better way. My conclusion was, OK, the Internet’s done.
Now what can we do?”
So in an effort to start from scratch, about four years ago Peterson
co-founded PlanetLab, a Princeton-led project that now allows researchers
around the world to test new networking applications. There is a catch.
PlanetLab is what is known as an “overlay” network: The new
protocols are translated into the old Internet ones, and the data are
sent across the same Internet connections. If the Internet clogs up, so
does PlanetLab.
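The overlay arrangement amounts to packet-in-packet nesting: an experimental packet rides as the payload of an ordinary Internet packet, so the new protocol never has to touch the routers in between. The field names and address below are made up for illustration, not PlanetLab’s actual format.

```python
def tunnel(experimental_pkt, next_overlay_node_ip):
    """Wrap a new-protocol packet inside a plain, old-style Internet packet."""
    return {"ip_dst": next_overlay_node_ip, "payload": experimental_pkt}

def untunnel(ip_pkt):
    """The overlay node unwraps it and hands it to the experimental protocol."""
    return ip_pkt["payload"]

exp = {"new_proto_version": 1, "data": "experiment"}
carried = tunnel(exp, "198.51.100.5")   # travels over the ordinary Internet...
assert untunnel(carried) == exp         # ...so if the Internet clogs, so does this
```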
Peterson also is participating in a follow-up project called VINI, short
for VIrtual Network Infrastructure. VINI allows researchers to tap into
routers and start inserting their own programs — a sort of mini-GENI.
But that depends on the kindness of the routers’ owners. “Doing
it on a shoestring is encumbering what we can accomplish,” he says.
GENI, Peterson says, would allow innovation not just at the edges of
the network, but also in the guts. “You could ... do experiments
directly on top of the circuits instead of directly on the Internet”
— a big advantage, he says.
Kahn is not against change or improvements. A year after leaving DARPA
in 1985, he founded the nonprofit Corporation for National Research Initiatives,
and he is still chairman, chief executive, and president of the organization.
At first glance, CNRI’s endeavors sound similar to GENI’s,
and in the 1990s the company worked on building a prototype high-speed
computer network. But its portfolio is wider, and it also has consulted
about nanotechnology, sponsored development of a programming language
called Python, looked at how best to store information on the Internet,
and financed development of software for visualizing scientific data.
Kahn supports the goals of GENI, but is not yet convinced that it is
worth its hefty price tag. He says he has not seen the research proposals
that would require GENI to be carried out, though he acknowledges that
“that doesn’t mean they’re not out there.” Beyond
the cost of building the GENI test bed, it might cost $30 million to $75
million a year to run, given the equipment and people needed to maintain
it. That is money that otherwise would go to research grants. “It’s
a big tax on the research community going forward,” says Kahn, who
wonders if much of the research can be accomplished on PlanetLab-like
overlays at much lower costs.
Even if some groundbreaking invention comes out of GENI, will governments
and networking companies be convinced to spend the money to upgrade their
equipment in a way to adopt the new idea? Kahn isn’t sure. “I
really liken it to changing the wings and the engines on a flying aircraft
without being able to ever land it,” he says.
Peterson does not prognosticate about what breakthrough application
might arise from the research that could proceed if GENI gets off the
ground. “It’s not so much solution X or challenge Y,”
he says. “It’s how do we accelerate and broaden the ability
to innovate on the Internet.” He does allow that the project could
enable experimentation to improve network security, recently telling a
writer for Princeton’s School of Engineering and Applied Science
that “if industry continues to chart the course of the Internet,
we won’t ever be able to have a national debate about privacy and
security.”
Some computer scientists say the experimentation possible with GENI
might lead to vast arrays of sensors continuously monitoring the environment.
Rexford, a member of one of GENI’s working groups, envisions the
possibility of “wreckless driving” — fleets of robotically
driven cars all in constant, instantaneous communication with each other
so that when one brakes, others behind it immediately will slow down,
too.
One developing problem, she says, is how to handle computers in motion.
The Internet was designed for stationary computers, sitting in the same
place all of the time. But in a few years, mobile devices — think
iPhones, BlackBerries, and the slew of “smart phones” —
may outnumber the stationary computers. The current solution is not an
efficient one. To return to the postal analogy, it’s a simple forwarding
address — as if all of your mail is first delivered to your home
and is then forwarded to your vacation location, which adds a delay. That
roundabout routing is one reason why it is slower for a Web page to appear
on a cell phone than on a desktop computer. And as hand-held computers
proliferate, that problem will worsen.
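The cost of that detour is easy to put in miniature. In the sketch below, the hop counts are invented, but the comparison is the one the analogy makes: traffic for a traveling device goes to its “home” first and is then forwarded, instead of taking the shorter direct path.

```python
# Made-up hop counts between a sender, a phone's home network, and the
# phone's current location, purely to illustrate the forwarding detour.
hops = {
    ("sender", "home"): 5,
    ("home", "phone"): 7,
    ("sender", "phone"): 3,
}

via_home = hops[("sender", "home")] + hops[("home", "phone")]  # forwarded path
direct = hops[("sender", "phone")]                             # ideal path

print(f"forwarded via home: {via_home} hops; direct: {direct} hops")
```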
Rexford, who was a network researcher at AT&T before joining Princeton’s
faculty, instead would like to explore ways to inform other Internet routers
of your changing location so that the data can be routed to you more directly.
She can test her ideas via simulation and perhaps run small-scale experiments,
but the true efficiency of a new protocol would not be clear until actual
people with mobile devices started using it on a wide scale. “Real
users, and real network conditions, always stress your ideas in surprising
ways you can’t imagine in advance,” she says. GENI would allow
her to test these ideas — and others — in ways that the current
Internet or even PlanetLab and VINI cannot. That, scientists believe,
would turn ideas into innovations more quickly.
The crucial thing now, Rexford and Peterson say, is to put the foundation
down right. Others will decide what to build on top.
Kenneth Chang ’87 reports on science for The New York Times.