A lot of people are concerned that the FCC will “destroy the Internet” (or something similarly hyperbolic) by loosening so-called net neutrality rules. But the fact is that today’s system is a lot less stable than you realize, and legislating net neutrality would make it even more precarious.
Net neutrality does not guarantee that traffic moves across the Internet in a “fair” way. “Fairness” is an extremely fuzzy concept, as any economist will tell you: there are more bits that want to move through the Internet than there are pipes to carry them. So how does the system work at all right now? Not because ISPs are being nice to each other. It works mostly because of carefully designed protocols and the complex algorithms built into them. Most important are the Transmission Control Protocol’s congestion avoidance algorithms — that’s the TCP in TCP/IP, which carries almost all content on the Internet today. These algorithms try hard to maximize the bandwidth used to deliver your content without overwhelming the network, and to share capacity fairly. Researchers have spent decades tuning them for fairness. That research, much more than agreements between ISPs, is what makes the Internet work well today.
How does TCP congestion avoidance work? As an analogy, consider a crowded party with lots of conversations going on. This is the Internet. The people at the party are computers: laptops and phones and servers. The conversations are content moving between them. TCP includes an agreement that you always start a conversation at a moderate volume, and if you’re being heard perfectly, you can talk louder. But as soon as your conversation gets mixed up with somebody else’s (packet loss), you quiet down. It’s a feedback mechanism that keeps conversations quiet when the room is full, but when it’s empty you can yell and get your point across really fast — it’s fair and efficient. The analogy is far from perfect, but it works for this discussion.
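The feedback loop in the party analogy is roughly TCP’s additive-increase/multiplicative-decrease (AIMD) behavior. Here’s a toy Python sketch of that idea. It’s a deliberate oversimplification (real stacks use far more elaborate algorithms like Reno, CUBIC, or BBR), and the `aimd_step` helper and all the numbers are invented for illustration:

```python
# Toy sketch of TCP-style AIMD: additive increase, multiplicative decrease.
# Not a real TCP implementation; just the feedback loop from the analogy.

def aimd_step(cwnd, packet_lost, increase=1.0, decrease=0.5):
    """One round-trip of congestion-window adjustment."""
    if packet_lost:
        return max(1.0, cwnd * decrease)  # back off sharply ("quiet down")
    return cwnd + increase                # probe for more bandwidth ("talk louder")

# Simulate one sender on a link that signals loss once the window
# exceeds the link's capacity.
capacity = 20.0
cwnd = 1.0
history = []
for _ in range(50):
    lost = cwnd > capacity
    cwnd = aimd_step(cwnd, lost)
    history.append(cwnd)
```

The window climbs toward link capacity, halves when loss signals congestion, and climbs again, producing the classic TCP “sawtooth.” That constant backing off is exactly the politeness the rest of this argument hinges on.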
Now consider a little thought experiment. What if a hypothetical content provider created a new protocol instead of TCP — let’s call it GooseCast — that shoved content down the pipes as fast as possible without any regard for fairness? By analogy, GooseCast servers would just yell at the top of their lungs regardless of how full the room was. In practice they would drown out the polite TCP servers. GooseCast content would get delivered faster than TCP content, because the TCP servers would keep getting quieter to avoid stepping on other people’s conversations, while GooseCast just kept yelling. Clearly this would be a problem.
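The thought experiment can be sketched in a few lines of Python. Everything here is hypothetical: GooseCast is the made-up protocol from the scenario above, and the loss model (every sender sees congestion whenever total demand exceeds capacity) is a crude simplification chosen only to show the dynamic:

```python
# Toy simulation: polite AIMD senders sharing a link with one "GooseCast"
# sender that never backs off. All names and numbers are made up.

def congested(rates, capacity):
    """Everyone sees loss in a round where total demand exceeds capacity."""
    return sum(rates) > capacity

capacity = 30.0
tcp_rates = [1.0, 1.0, 1.0]  # three polite senders
goose_rate = 1.0             # the aggressive sender

for _ in range(100):
    loss = congested(tcp_rates + [goose_rate], capacity)
    # Polite senders halve on congestion, otherwise probe upward.
    tcp_rates = [max(1.0, r * 0.5) if loss else r + 1.0 for r in tcp_rates]
    # GooseCast ignores congestion entirely and just keeps ramping up.
    goose_rate += 1.0
```

After a while the GooseCast sender claims essentially the whole link, while the polite senders are pinned at their minimum rate — the “yelling over everyone” outcome, in numbers.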
Since GooseCast content would reach customers faster, with higher bandwidth and quality than TCP, many content providers would switch to it. In fact, we’d probably see an arms race develop: GooseCast2 would be even more aggressive than GooseCast, then FatGooseCast, and so on. Good old-fashioned web browsing using classic HTTP over TCP would lose out, as would anyone slow to change and anybody trying to be polite or fair. Some content providers would probably stand on principle and stick with polite protocols, but many wouldn’t.
Non-evil ISPs (color me naive, but yes, I think they exist) would naturally try to protect their users, who want equal access to all content. They might prioritize network traffic using good old-fashioned TCP to ensure that content was delivered fairly. This would mean explicitly deprioritizing content that used aggressive protocols like GooseCast. But what if net neutrality rules were in full effect? Then ISPs would have their hands tied, legally unable to prioritize the polite content over the impolite content. This sounds like exactly the world that net neutrality advocates want to avoid. Well, guess what: it’s the world we already live in today. We are protected from this bad outcome only by convention, not legislation.
If you think this possibility is remote and implausible, you need look no further than a sensationalistic article in MIT Technology Review from 2012. Those folks made an “amazing” new protocol that boosted wireless download speeds in a big city from 0.5 Mbps to 16 Mbps. Wow! Amazing breakthrough! How? It’s pretty much exactly GooseCast: they just drown out everybody else. There is no magic “algebra” that can extract 30x more bandwidth from existing cellular networks with a new protocol. But you can hog the network and frustrate everybody else connected to the same cell tower as you. Fortunately for all of us, the company promoting this technology appears to be going nowhere — I’m hoping because the scientists who worked on it are more realistic and honest about what they’ve developed than the reporter who wrote about it. But I hope you appreciate that the GooseCast scenario I describe here is not just hypothetical.
So what is keeping Internet content delivery fair? Mostly protocols, and to a lesser extent agreements. If we legislate agreements, then we are truly at the mercy of protocols. And I think we can all agree that we don’t want the government telling tech companies which protocols they can and can’t use to move data around on the Internet.
Fairness on the Internet is not nearly as simple as you’d like it to be. Legislation would be easily defeated by innovation, and between the two, innovation moves much faster.