MIT Breakthrough Could Make Internet 1,000 Times Faster

Tech could also save power -- a virtual data center dream -- and open the door to incredibly fast file transfers

The ever-increasing demand for data has scientists at top research centers like CERN and MIT racing to develop better technologies. A team led by Vincent Chan, an electrical engineering and computer science professor at MIT, just made a breakthrough that could eliminate the slowest component in the current internet infrastructure, bumping speeds by as much as 100- to 1,000-fold.

The majority of high-bandwidth, high-speed traffic is delivered along bundles of optical cables. These signals can go a long way, but periodically they come to an intersection and have to be redirected. It's hard to reroute light, so currently these junctions require converting the signal back to an electrical signal, rerouting it, and finally converting it back to an optical signal. All of this requires extra power and significantly slows the internet down.

What Chan devised is called "flow switching," and it sounds like common sense, but surprisingly it hasn't been widely suggested or pursued before. The idea is to establish dedicated one-way connections between heavy-traffic zones. For example, major cities like Chicago, Miami, New York City, and Detroit might have a straight path to California's Silicon Valley, and Silicon Valley might have a straight path back to them. Without the need for major rerouting, the internet would become dramatically faster and more energy efficient.
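In routing terms, the idea amounts to checking for a dedicated lightpath before falling back to conventional hop-by-hop forwarding. Here's a minimal sketch of that decision, assuming an illustrative table of city pairs (none of these names or functions come from Chan's work):

```python
# Hypothetical sketch of flow-switched path selection. The city pairs
# and names are illustrative only, not from Chan's paper.

# Dedicated one-way lightpaths between heavy-traffic sites.
# Note each direction is a separate entry: the pipes are one-way.
DEDICATED_LIGHTPATHS = {
    ("Chicago", "Silicon Valley"),
    ("New York City", "Silicon Valley"),
    ("Silicon Valley", "Chicago"),
}

def select_path(src: str, dst: str) -> str:
    """Prefer an all-optical dedicated pipe; otherwise route normally."""
    if (src, dst) in DEDICATED_LIGHTPATHS:
        # Stays optical end to end: no optical -> electrical -> optical
        # conversion at intermediate routers.
        return "all-optical flow-switched path"
    # No dedicated pipe: conventional hop-by-hop routing, converting
    # the signal at each intersection as the article describes.
    return "hop-by-hop electronic routing"

print(select_path("Chicago", "Silicon Valley"))  # all-optical flow-switched path
print(select_path("Chicago", "Miami"))           # hop-by-hop electronic routing
```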

Dan Olds, an analyst at Gabriel Consulting Group Inc., states, "If this can truly jack up Internet data speeds by 100 times, that would have a huge impact on the usability of the Net. We'd see the era of 3D computing and fully immersive Internet experiences come much sooner.... If this turns out to be practical, it could be a very big step forward."

Chan comments, "With bigger applications and more bottlenecks, you could buy extra bandwidth if you pay through the nose, but that's not something every user could do. Sure, you can increase the data rate, but it's expensive. With this new architecture, we can speed up the Internet but make high-speed access cheaper."

He is confident that the technology is ready to be rolled out commercially. He's establishing a startup that will facilitate the creation of these direct pipes. He states, "I think we have enough tests to know that the transport is ready and the architecture would work."

With the triumph also comes controversy. The massive speed increase could allow for much faster BitTorrent and P2P connections, offering the opportunity to fileshare more than ever before. Media watchdogs have long voiced concerns about the potential effects of faster internet.

In related news, Finland recently passed a law declaring internet access a "fundamental" right. As of now, 96 percent of the Finnish population is already online, with just 4,000 homes left to be connected. The new law would offer a free 1-Mbps internet connection to all citizens who wanted it.

Finland is adopting a less severe stance on piracy than the U.S. It will send filesharers letters asking them to stop (though it's unclear whether there will be any sort of consequences). However, it has said that it will not take down or ban sites that have a few illegal files on them or link to them.

Finland's EU neighbor Britain is also looking to guarantee citizens internet access. While its measure has no force of law, it claims it will deliver 2-Mbps connections to all citizens by 2012. However, it's also considering much harsher provisions for filesharers -- severing their internet privileges after three strikes.

News Source: http://www.dailytech.com/MIT+Breakthrough+Could+Make+Internet+1000+Times+Faster/article18913.htm

Really cool. I wonder how long it will take to filter through to the masses.
 
Only problem is, the concept of dedicated lines isn't what any sane person would call a breakthrough (it pre-dates the internet itself by like a century actually... Ever heard of the telegraph? :LOL:)

This is one of the worst articles I've ever seen on dailytech, and it gets suitably panned in the comments too I might add. Btw, whoever at MIT decided to trumpet this "breakthrough" of theirs should be fired.
 
Say what you will - If my bandwidth increases by a thousand fold because of this, it's a flippin' breakthrough.
 
I don't understand this article at all. It says converting from optical to electrical is a huge loss in efficiency, but I'm not sure how this avoids that. You have dedicated unidirectional pipes going from one city to another. You'd still have to route all of the data into those pipes. It doesn't make sense.

And the term "flow routing" is already taken. The concept is essentially identifying data as streams: routing the first packet in the stream, storing that routing information in a lookup table, and performing lookups on the subsequent packets. The efficiency is gained by avoiding the processing-intensive routing of each packet. You can still perform QoS by monitoring the data streams without doing deep packet inspection: algorithms determine traffic types by analyzing the data stream rather than the contents of the packets. Anagran is the company that's pushing this concept. It was founded by one of the internet pioneers, Dr. Lawrence G. Roberts. He says the reason they couldn't do this type of routing when routing and switching were being invented was the price of memory. Now memory is dirt cheap, so creating huge lookup tables for packet routing is feasible. http://www.anagran.com/
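A rough sketch of that slow-path/fast-path split (the key and function names below are my own toy illustration, not Anagran's actual implementation):

```python
# Illustrative flow-routing sketch: route the first packet of a stream
# the expensive way, cache the result, and fast-path every subsequent
# packet in the same stream.

flow_table: dict[tuple, str] = {}  # 5-tuple -> cached next hop

def flow_key(pkt: dict) -> tuple:
    # The classic 5-tuple identifies a stream.
    return (pkt["src_ip"], pkt["dst_ip"],
            pkt["src_port"], pkt["dst_port"], pkt["protocol"])

def full_route_lookup(pkt: dict) -> str:
    # Stand-in for the processing-intensive per-packet routing decision.
    return "next-hop-for-" + pkt["dst_ip"]

def forward(pkt: dict) -> str:
    key = flow_key(pkt)
    if key not in flow_table:                     # first packet in stream
        flow_table[key] = full_route_lookup(pkt)  # slow path, done once
    return flow_table[key]                        # fast path thereafter
```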

This is much more interesting to me than the MIT "breakthrough" and should allow for huge efficiency gains in infrastructure, if it actually works.
 
(sorry bit hazy on the details)
There was an NZ schoolkid on the radio a couple of months ago; he'd just gotten back from an international physics competition where he came second, and he'd also proved some theory that had eluded scientists for decades. i.e. he's a genius.
Anyways, he was talking about the possibility of sending much more data through optical cables by using the angle of the light and not just on/off pulses.
 
The news article is not written well. A paper by the MIT professor about optical flow switching can be found here:

http://www.mit.edu/~medard/papersnew/WOBS Final.pdf

About the optical cable using "angles," I don't think that's going to work very well over long distances... That's why people pay a lot of money for single-mode optical fibers.
 
I don't think bandwidth is the problem. DWDM has been in use for many years because everyone tries to get the most out of those precious long haul fibres. The thing is, if you give your customers 100 times the b/w they have now you have to make a huge investment in growing your infrastructure.

Even today all ISPs operate on heavily oversubscribed networks, hoping/assuming that not everyone will use the full b/w at the same time.

I don't see it as a technical problem but a $$$ one.
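A back-of-the-envelope illustration of the oversubscription point (all numbers invented for the example):

```python
# Illustrative oversubscription arithmetic; the numbers are made up.
subscribers = 10_000
sold_rate_mbps = 100             # rate each customer is sold
uplink_capacity_mbps = 40_000    # what the ISP actually provisions

ratio = subscribers * sold_rate_mbps / uplink_capacity_mbps
print(f"oversubscription today: {ratio:.0f}:1")         # 25:1

# Give everyone 100x the b/w without growing the uplink and the
# assumption that "not everyone transmits at once" has to carry
# 100x more weight -- hence the $$$ problem.
print(f"after a 100x speed bump: {ratio * 100:.0f}:1")  # 2500:1
```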
 

I'd assume it's more for companies than the average consumer, at least for a long time.
 
As someone who works all day every day with IP and has a network with a significant amount of dark fibre, I'm left thinking WTF mate?

Sydney, AUS is around 12,065,000 metres from LA.

The SRTT (smoothed round-trip time) is about 180-200ms.

At 0.65c, 12,065,000 metres will take around 61ms each way, so ~120ms RTT.

Current technologies are 100% fine in terms of latency for both video and audio with an RTT of 200ms (depending on the codec used). Reducing this would still be nice, but based on the description it doesn't sound like the trade-offs would be worth it.
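For reference, the arithmetic behind those numbers:

```python
# Propagation delay Sydney -> LA, taking signal speed as ~0.65c
# (roughly the speed of light in optical fibre).
C = 299_792_458          # speed of light in vacuum, m/s
distance_m = 12_065_000  # Sydney to LA, roughly

one_way = distance_m / (0.65 * C)
print(f"one way: {one_way * 1e3:.0f} ms")  # ~62 ms
print(f"RTT:     {one_way * 2e3:.0f} ms")  # ~124 ms
```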

Current-gen routers (Cisco CRS/12000, Juniper M/T) are already very fast, doing "wire speed" switching, and are very flexible (they can do lots of stuff like policy-based routing at wire speed based off different IP/TCP/UDP header fields).

This sounds like creating, in effect, GRE tunnels (not layer-3 tunnels, but at the physical layer) and building a mesh or partial mesh, kind of like how ATM/Frame Relay operates. Well, guess what: that was a pain in the arse to maintain, didn't scale well, and everyone jumped ship the second MPLS and M-BGP came along because of scalability and flexibility. I don't see telcos giving that up for the sake of <60ms over 12,000km.
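The scaling problem is easy to quantify: a full mesh of dedicated one-way pipes between n sites needs n(n-1) links, one per ordered pair. A quick illustration:

```python
# Why full meshes of dedicated pipes don't scale: one-way links
# between n sites grow quadratically.
def one_way_links(n: int) -> int:
    return n * (n - 1)  # one dedicated pipe per ordered pair of sites

for n in (5, 20, 100):
    print(f"{n:>3} sites -> {one_way_links(n):>5} one-way links")
# 5 -> 20, 20 -> 380, 100 -> 9900
```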

That 12,000km includes 8 layer-3 hops, and god knows how many other MPLS routers are in the path; a normal MPLS layer-2 tag switch is under 1ms.

As said before, with CWDM being cheap as chips for <70km runs and DWDM getting cheaper, bandwidth isn't really an issue for anyone with their own dark fibre.

Edit: I should add that MPLS TE is so powerful and allows for all sorts of unequal-cost link balancing (the holy grail for expensive redundant links) that it would be very hard for something like the above to even have a shot at being financially viable.

Assuming I have interpreted that article correctly.
 
Wasn't it DARPA's express intent to have multiple/redundant nodes so that the internet would be robust enough to withstand nuclear attack? I guess this is okay as an additional layer, but it has proven handy to have multiple paths to the same places. This doesn't seem so much a breakthrough as a reduction, and a conceptual leap backwards.

It also concerns me that if you create expressways like this it seems likely that tollbooths will spring up along the way.
 