A friend of mine, who's not terribly tech-savvy, and I got into a discussion the other day about the Internet. Basically, her assertion was that most, if not all, of the trans-Atlantic and trans-Pacific Internet traffic was sent via satellite, not cable. She had in her lifetime seen dozens of shuttle launches and other rockets going into the sky. On the other hand, it seemed utterly inconceivable to her that we had a substantial world network of underwater cables: the distance from North America to Japan, for instance, is a seven-hour flight. And she'd never heard of a significant cable-laying operation.
In some respects, her impressions aren't wrong: the water across the Pacific basin is crushingly deep, the distance is long, and the temperature and chemical composition of the deep ocean corrodes and degrades machinery. Shuttle launches are big, pretty, and expensive. In contrast, ships leave harbor every day.
But just as there's only so much bandwidth available in the US for radio, television, emergency communications, cell phones and WiFi, there's only so much bandwidth between one country and another across a satellite. The exception would be precision lasers, which would let us treat satellites like fiber-optic cables in the sky; but space-to-ground retargeting has never been reliably tested for commercial use, and in any case no quasi-optic communication satellites are currently flying.
The military had a quasi-optic satellite constellation on the drawing board: the "Transformational Satellite Constellation," originally planned in 2003. Launch of the first of five satellites was scheduled for the second quarter of 2013, and the entire constellation was supposed to have a throughput of 40GB/sec, which would have handled approximately 1% of the current output of a medium-sized US city, at a budget of $14 billion.
In contrast, the cable management company Global Marine Systems this week began work to lay a new fiber-optic cable between the US and the UK. The estimated cost of the project is $300 million, a bargain compared to the satellites, and at 10TB/s, 250 times the constellation's planned throughput, it's also far more cash-efficient.
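To put those two price tags on the same footing, here's a quick back-of-the-envelope sketch in Python, taking the throughput and budget figures above at face value and ignoring lifespan, latency, and maintenance:

```python
# Cost per unit of bandwidth: planned military satellite constellation
# vs. the new transatlantic fiber-optic cable (figures from the post).
SAT_COST_USD = 14e9        # Transformational Satellite Constellation budget
SAT_GBPS = 40              # planned constellation throughput, GB/sec
CABLE_COST_USD = 300e6     # estimated cost of the new cable
CABLE_GBPS = 10_000        # 10 TB/s, in the same GB/sec units

sat_cost_per_gbps = SAT_COST_USD / SAT_GBPS        # dollars per GB/sec
cable_cost_per_gbps = CABLE_COST_USD / CABLE_GBPS  # dollars per GB/sec

print(f"Satellite: ${sat_cost_per_gbps:,.0f} per GB/sec")
print(f"Cable:     ${cable_cost_per_gbps:,.0f} per GB/sec")
print(f"Cable is roughly {sat_cost_per_gbps / cable_cost_per_gbps:,.0f}x "
      f"cheaper per unit of bandwidth")
```

By this crude measure the cable comes out four orders of magnitude cheaper per unit of bandwidth, which is the whole argument in one number.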
This particular cable won't be available to most of us, though; it's strictly for financial operations between Wall Street and London's financial district, to give high-speed traders the edge they need to beat those who don't have access to the cable. The company funding the effort says the new cable will reduce transoceanic transaction times from 65 down to 59 milliseconds. That's a difference of just 6 thousandths of a second, and computational traders are already signing contracts at prices 50 times those of traditional communications links for access to it.
To see just how much of the world depends on these cables, check out Greg's Cable Map, a clearinghouse website for information about submarine cables. Just clicking on a cable will show you when the cable was built, and its peak bandwidth. Most of the cables are around 4TB/s, although some nearly twenty years old measure throughput in GB/s, and one from Palermo, Italy to West Palm Beach (WTF?) carries only 560MB/s.
The Internet is really a set of underwater tubes.
And of course, Neal Stephenson wrote the primer.
Date: 2011-09-29 05:16 pm (UTC)
That article was so good, it just about put me off fiction. Details outdated, writing timeless.
Date: 2011-09-30 06:02 am (UTC)
Anyone want to talk to the (US-financed) Israeli navy about attacking boats in int'l water?
no subject
Date: 2011-10-02 03:34 am (UTC)
Sure, the "public Internet" could be built up so that it had the bandwidth to carry financial data with 10 millisecond latency (which is the going-lower-limit these days). But consider just how much financial data travels around, just in the U.S., just during market hours (9am-4pm).
20 Gbps lines were commonplace in financial data centers 8 years ago. Just one 20 Gbps line can move ~9,000 Gigabytes/hour, or about 63 Terabytes during market hours. And that's for just one line. Data centers now probably have multiple 100 Gbps lines.
In short, even if we made the "public Internet" beefy enough to handle that kind of load, financial traffic would swamp everything else!
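That bandwidth arithmetic is easy to check with a couple of lines of Python (20 gigabits per second, divided by 8 bits per byte, over the 9am-4pm trading window):

```python
# How much data one 20 Gbps line can move during US market hours.
LINE_GBPS = 20      # line rate, gigabits per second
MARKET_HOURS = 7    # 9am to 4pm

gb_per_hour = LINE_GBPS / 8 * 3600              # gigabytes per hour
tb_market_day = gb_per_hour * MARKET_HOURS / 1000  # terabytes per trading day

print(f"{gb_per_hour:,.0f} GB/hour")        # 9,000 GB/hour
print(f"{tb_market_day:.1f} TB per market day")  # 63.0 TB
```

And that's a single line running flat out; multiply by the number of lines per data center and the number of data centers, and the commenter's point stands.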
There's also a second, possibly more important, reason why the financial markets have dedicated fiber networks.
Security.
Keep dedicated lines just for institution-to-institution communication, and the only security issues you have to worry about are physical access at the network provider and at those institutions.