
O Broadband, Broadband, Wherefore Art Thou Broadband?

The FCC Chairman wants faster broadband. Perhaps as much as 100Mbps to 100 million households (out of about 115 million total households).

Google wants to see what happens if we have Gigabit to the home. They could ask university students; gigabit to the dorm room isn’t unusual. Instead, they’ll wire a community or two and try to figure it out themselves. (What they’ll find is that when you have gigabit to your residence, you plug in a wireless access point, step it down to 50Mbps and share it with your friends.)

Broadband deployment is rising, but only two-thirds of households have it.

Some people don’t want broadband. Others want it but can’t afford it.

Some people can’t have it. I’ve taught network management courses at a nearby community college for the last couple of years, and each semester I have at least one student who can’t get terrestrial service at better-than-dialup speeds at any price. The students live within an easy commute of a metro area with 2.5 million people. Something’s wrong there.

I have a relative who lives 2.5 miles from the city limits of a community with a significantly higher-than-average income, brand new police cars and fire trucks, and a community theater, whose only non-dialup connectivity is 3G from Verizon. There is no DSL, and the cable company wants a couple grand to extend its infrastructure.

I’m not really sure what broadband is, other than that it’s faster than dialup. I’ve heard that some people think broadband is 768Kbps; I think that’s a bit on the slow side. On the other hand, having daily access to network speeds ranging from 200Kbps (EDGE) to gigabit, I don’t see a use case for general browsing at speeds much greater than 4Mbps or so.

I’ll argue that a fast browser with a smart Javascript interpreter, combined with NoScript and AdBlock Plus, makes browsing at any speed above 768Kbps or so as good as any other speed. I’ll argue as well that my significant other and I can watch two different ordinary media streams at reasonable quality at the same time on 6Mbps, so that speed or something similar should be a floor (not a ceiling). High def is nice, but even Cisco’s TelePresence at 1080p is only a 15Mbps stream.
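The arithmetic behind that claim can be sanity-checked with a quick sketch. The per-stream bitrate below (~2.5Mbps for an "ordinary" stream) is an illustrative assumption, not a measured value:

```python
# Back-of-the-envelope check: do N concurrent streams fit on a link?
# The 2.5Mbps "ordinary stream" figure is an assumption for illustration.

def fits(link_mbps, stream_mbps, count):
    """True if `count` streams at `stream_mbps` each fit within the link."""
    return stream_mbps * count <= link_mbps

# Two ~2.5Mbps ordinary streams on a 6Mbps link: fits, with 1Mbps headroom.
print(fits(6, 2.5, 2))   # True

# One 15Mbps TelePresence-class 1080p stream on the same 6Mbps link: does not fit.
print(fits(6, 15, 1))    # False
```

The point of the sketch is that a 6Mbps floor comfortably covers two simultaneous standard-quality streams, while full 1080p telepresence needs more than double that link.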

I’ll also argue that the Internet is an essential form of communications that will replace all other forms of electronic communications and most mail/paper-based communications, and therefore must be ubiquitous. Network access today is comparable to rail access in the 19th century, to electricity in the early 20th century, and to interstate highways in the mid 20th century. If you are bypassed, your community will die. If you do not have access, you cannot compete.

Assume that a society is willing to spend resources on universal network connectivity. Where should the resources be focused?

  1. Medium speed (4Mbps) to all of the population (think electricity)?
  2. High speed (100Mbps) to 85% of the population?
  3. Gigabit to 0.1% of the population?

I think that:

  • Network access should be ubiquitous.
  • Moderate speeds with ubiquitous coverage are more important than high speeds with 85% coverage.
  • Low access costs are essential: under $40/month, for example.
  • Broadband should be national policy, supported by something similar to the US’s 1930s Rural Electrification Act.
  • There will have to be REA-like government ‘participation’.
  • There have to be reasonable quotas. Comcast’s 250GB/month quota is quite reasonable; others are not.
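To put a 250GB/month quota in perspective, here is a rough calculation of how much full-rate use it allows on a 4Mbps link (a sketch using decimal units, 1GB = 10^9 bytes, as ISPs typically meter):

```python
# How far does a 250GB/month quota go at a sustained 4Mbps?
# Decimal units (1GB = 1,000,000,000 bytes), as ISPs typically meter.

QUOTA_GB = 250
LINK_MBPS = 4

bytes_per_sec = LINK_MBPS * 1_000_000 / 8          # 500,000 bytes/sec
quota_bytes = QUOTA_GB * 1_000_000_000
hours_at_full_rate = quota_bytes / bytes_per_sec / 3600

print(round(hours_at_full_rate, 1))       # ~138.9 hours/month at full rate
print(round(hours_at_full_rate / 30, 1))  # ~4.6 hours/day at full rate
```

Roughly 4.6 hours of saturated 4Mbps use per day before hitting the cap, which supports the argument that a 250GB quota is not a practical constraint on ordinary use at moderate speeds.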

In other words, the focus should be on coverage and cost, not bandwidth.

High-definition streaming television is a luxury. Basic 4Mbps Internet access is as much a necessity today as electricity was in the 1940s.

Let’s stay focused on necessities.
