Down with P2P, Part 2
Posted: November 23, 2007
Mr. Cuban has another post up about P2P. I want to refine his model a bit and propose one that I think would work far better. First, there’s something fundamental that most people don’t get about the Internet business. While the Internet was designed to be a P2P medium, with end-to-end connectivity between all nodes, it has largely become a publish-and-subscribe medium, more like television and less like the phone system. Since people primarily want the content that’s available “out there” and aren’t so interested in sending things “out there,” the technology and the service offerings have been designed to offer bandwidth to the home user asymmetrically. This means that instead of something like a T1, which offers 1.544 megabits per second symmetrically (meaning you can send and receive at the full rate, all the time), home Internet access is sold asymmetrically (for example, I have 8 megabits downstream and 2 megabits upstream). At the provider level, however, bandwidth is sold symmetrically. Providers are buying large pipes (OC-48 at 2.4 gigabits, OC-192 at 10 gigabits, etc.), which provide as much upstream as they do downstream, but since their customers buy asymmetrically, providers generally have large amounts of upstream capacity to spare.
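To make the asymmetry concrete, here’s a quick back-of-the-envelope sketch. The pipe size and plan speeds come from the numbers above; the 20:1 contention ratio is a hypothetical assumption, not a real ISP figure.

```python
# Back-of-the-envelope math for asymmetric plans sold against a symmetric pipe.
# The contention ratio is an illustrative assumption, not real ISP data.

OC48_MBPS = 2488.32       # OC-48 carries ~2.488 Gbps in each direction
SUB_DOWN_MBPS = 8.0       # the 8 Mbps down / 2 Mbps up plan mentioned above
SUB_UP_MBPS = 2.0
CONTENTION = 20           # hypothetical 20:1 oversubscription ratio

# How many subscribers the downstream side supports at that contention:
subscribers = int(OC48_MBPS * CONTENTION / SUB_DOWN_MBPS)

# Expected upstream load if upstream sees the same contention:
upstream_demand_mbps = subscribers * SUB_UP_MBPS / CONTENTION
upstream_utilization = upstream_demand_mbps / OC48_MBPS

print(subscribers)                     # thousands of subscribers per pipe
print(round(upstream_utilization, 2))  # ~0.25: most of the upstream sits idle
```

Because the plans sell four times as much down as up while the pipe is symmetric, roughly three quarters of the upstream capacity goes unused under these assumptions.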
The problem with the unlimited model is that people will use more on an unlimited plan than they normally would. Think about the people who feel the need to gorge themselves at a buffet “to get their money’s worth.” This isn’t necessarily a problem. The company I work for sells unlimited wireless. We can do this because there is a significant amount of cost that can be removed from the wireless business, as well as a significant amount of embedded profit, that we eschew in favor of serving an underserved customer base. It’s working well for us now. However, we don’t work in a business where any given customer can use 100 or 1,000 times more of what we’re selling than another. That makes for an incredibly difficult problem for ISPs to manage.
Mr. Cuban posits that it would be best to start charging for upstream bandwidth, which would limit the amount of seeding done by P2P users. However, it’s not the seeding that’s slowing down the network, it’s the downstream. Most protocols are set up to allow you more transfer the more you seed. So while his model would work, I think there’s a far simpler model that would work for everyone, although it would surely piss off the net neutrality folks. Basically, the idea would be to create two tiers of service. The first would be a metered model, which is what the providers would primarily be selling. The metered model would offer something like 100 to 200 gigabytes of transfer per month, which is far more than the average customer uses. It’s enough to do some P2P transfers without blowing through your bucket, but it keeps the network abusers (the ones downloading terabytes a month) from falling into this plan. This would be the premier plan: in exchange for giving up unlimited transfer, you would be placed into a QoS bucket with a lower drop priority (that is, a higher class of service) than unlimited customers. The second tier is the existing unlimited plan. It could be priced above the metered plan or the same (either way has pluses and minuses), and it would offer truly unlimited service: no letters from the ISP about abuse, etc. The customer is made aware that they are being offered the same maximum downstream and upstream rates, but that they are receiving a lower class of service; they will be placed into the lowest QoS bucket. Absent congestion, no one notices any difference. During peak times, when the unlimited users are filling up the pipes, the metered users are still receiving high-quality, always-on Internet access, and the unlimited users still get to download to their heart’s content.
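The two-bucket idea can be sketched as a toy priority scheduler. This is not a real shaper; the tier names, packet counts, and link capacity are all hypothetical, and a production device would do this per-flow on real hardware. It just shows the key property: under congestion, only the unlimited tier takes drops.

```python
# Toy sketch of the two-bucket drop-priority scheme (hypothetical numbers).
# The metered tier is served first; whatever doesn't fit is dropped.

def service(arrivals, capacity):
    """Serve tiers in priority order; return packets sent and dropped per tier."""
    sent, dropped = {}, {}
    for tier in ("metered", "unlimited"):   # metered = higher class of service
        n = min(arrivals[tier], capacity)
        sent[tier] = n
        dropped[tier] = arrivals[tier] - n
        capacity -= n
    return sent, dropped

# Quiet period: 4 + 4 packets arrive, the link fits 10 -- nobody is dropped.
quiet_sent, quiet_dropped = service({"metered": 4, "unlimited": 4}, 10)

# Congestion: 8 + 8 packets arrive, the link still fits only 10 -- the
# metered tier is untouched and the unlimited tier absorbs all the drops.
busy_sent, busy_dropped = service({"metered": 8, "unlimited": 8}, 10)
print(busy_sent, busy_dropped)
```

When the link isn’t full, both tiers get everything through, which matches the claim above that nobody notices a difference outside of peak times.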
This will require the same shaping devices the ISPs are already using to control inbound bandwidth, but rather than shaping at the protocol level, they will shape at the subscriber level. The technology to do this is already in place (we have a couple of devices from Cisco that will do exactly that). For most ISPs, the primary obstacle to implementing this strategy will be on the billing and provisioning side, but the software to handle that is readily available.
The freeloaders will still get pissed off. They think that if they’re paying for 10 megabits of downstream bandwidth, they should get it, all the time. They don’t understand the technical problems with actually filling a pipe (TCP wasn’t designed for fat-pipe, high-latency networks), and they don’t understand the business problem of trying to provide high-bandwidth connections when there’s no economically feasible way of selling the service such that everyone can light up at once and have it work. Hell, not even the telephone network can accommodate that, which is why during emergencies people are asked to minimize their phone usage: the phone system can run into capacity issues. The average consumer might be upset as well, thinking they’re getting less for their money than they used to (“I used to have unlimited, now I’m metered”), but I think this can be solved by education and marketing (“For the same price you’ve always paid, you will now be a premium customer and always have access to all the bandwidth you want, so long as you’re willing to limit your monthly transfers.”). Either way, both groups are offered the alternative of choosing the other plan should they decide that the downsides of the plan they’ve chosen outweigh its benefits. Everyone has options.
This will piss off the net neutrality folks who think the network should always be best effort, but I think it’s a pretty defensible position. ISPs have a right to structure their service offerings however they choose, and this approach does not affect how services on the Internet are delivered on a per-site basis, merely on a per-subscriber, per-plan basis. It’s a legitimate business case that does not affect customers’ equal access to Internet resources.
In the end, I think it’s a compromise everyone can live with. The technology is already in place, and I think the missing pieces would be relatively inexpensive to implement given the upsides to the business. What do you think, Mark?