Down with P2P, Part 2

Mr. Cuban has another post up about P2P.  I want to refine his model a bit and propose one that I think would work far better.  First, there’s something fundamental that most people don’t get about the Internet business.  While the Internet was designed to be a P2P medium, with end-to-end connectivity between all nodes, it has largely become a publish-and-subscribe medium, more like television and less like the phone system.  Since people primarily want the content that’s available “out there” and aren’t so interested in sending things “out there,” the technology and the service offerings have been designed to offer bandwidth to the home user asymmetrically.  This means that instead of something like a T1, which offers 1.544 megabits per second symmetrically (meaning you can send and receive at the full rate, all the time), home Internet access is sold asymmetrically (for example, I have 8 megabits downstream and 2 megabits upstream).  At the provider level, however, bandwidth is sold symmetrically.  Providers buy large pipes (OC48 at 2.4 gigabits, OC192 at 10 gigabits, etc.) that provide as much upstream as they do downstream, but since their customers buy asymmetrically, they generally have large amounts of upstream capacity sitting idle.
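To put rough numbers on that asymmetry (the figures below are illustrative, not anyone’s actual provisioning data), here’s a quick back-of-envelope in Python:

    # Why providers end up with spare upstream capacity.
    # All numbers are illustrative, not from any real provider.
    DOWN_MBPS, UP_MBPS = 8, 2  # a typical asymmetric retail plan
    # Wholesale pipes (OC48, OC192, ...) are symmetric: 1:1 up/down.
    # So even when retail downstream demand saturates the pipe,
    # upstream demand is only UP/DOWN of the pipe's upstream side.
    idle_upstream = 1 - (UP_MBPS / DOWN_MBPS)
    print(f"~{idle_upstream:.0%} of upstream capacity idle at downstream saturation")

With a 4:1 retail ratio riding on 1:1 wholesale pipes, roughly three quarters of the upstream side sits idle, whatever oversubscription ratio the ISP runs.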

The problem with the unlimited model is that people will use more on an unlimited plan than they normally would.  Think about the people who feel the need to gorge themselves at a buffet “to get their money’s worth.”  This isn’t necessarily a problem.  The company I work for sells unlimited wireless.  We can do this because there is a significant amount of cost that can be removed, as well as a significant amount of profit embedded in the wireless business that we eschew in favor of serving an underserved customer base.  It’s working well for us now.  However, we don’t work in a business where any given customer can use 100 or 1,000 times more of what we’re selling than another.  That makes for an incredibly difficult problem for ISPs to manage.

Mr. Cuban posits that it would be best to start charging for upstream bandwidth, which would limit the amount of seeding done by P2P users.  However, it’s not the seeding that’s slowing down the network; it’s the downstream.  Most P2P protocols are set up to reward seeding: the more you upload, the more you’re allowed to download.  So while his model would work, I think there’s a far simpler model that would work for everyone, although it would surely piss off the net neutrality folks.  Basically, the idea would be to create two tiers of service.  One would be a metered model, which is what the providers would primarily be selling.  The metered model would offer something like 100 to 200 gigabytes of transfer per month, which is far more than the average customer uses.  It’s enough to do some P2P transfers without blowing through your bucket, but it keeps the network abusers (the ones downloading terabytes a month) out of this plan.  This would be the premier plan: in exchange for giving up unlimited transfer, you’re placed into a QoS bucket with a lower drop priority than unlimited customers, so your packets are the last to be discarded under congestion.  The second plan is the existing unlimited plan.  It could cost more than the metered plan or the same (either way has pluses and minuses), and it would offer truly unlimited service: no letters from the ISP about abuse, etc.  The customer is made aware that they’re being offered the same maximum downstream and upstream rates, but that they’re receiving a lower class of service; they’re placed into the lowest QoS bucket.  Absent congestion, no one notices any difference.  During peak times, when the unlimited users are filling up the pipes, the metered users still receive high-quality, always-on Internet access, and the unlimited users still get to download to their heart’s content.
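For the skeptics, here’s a toy sketch in Python of how the two buckets would behave under congestion.  The tier names, link capacity, and packet counts are mine, for illustration; real gear implements this with weighted queueing and drop precedence, not a Python loop:

    import random

    # Toy model of the two-tier idea: under congestion, metered ("premium")
    # traffic is served first and unlimited traffic absorbs the drops.
    LINK_CAPACITY = 100  # packets the link can forward this interval

    def forward(packets):
        """packets: list of (tier, seq). Returns the packets that get through."""
        if len(packets) <= LINK_CAPACITY:
            return packets  # no congestion: the tiers are indistinguishable
        metered = [p for p in packets if p[0] == "metered"]
        unlimited = [p for p in packets if p[0] == "unlimited"]
        return (metered + unlimited)[:LINK_CAPACITY]  # drop unlimited first

    # Peak hour: unlimited users are filling the pipe.
    offered = [("metered", i) for i in range(40)] + [("unlimited", i) for i in range(120)]
    random.shuffle(offered)
    sent = forward(offered)
    for tier, total in (("metered", 40), ("unlimited", 120)):
        got = sum(1 for p in sent if p[0] == tier)
        print(f"{tier}: {got}/{total} packets forwarded")

Off-peak, both tiers see identical service; only when the pipe fills does the unlimited bucket start eating the loss.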

This will require the same shaping devices the ISPs are already using to control inbound bandwidth, but rather than shaping at the protocol level, they will shape at the subscriber level.  The technology to do this is already in place (we have a couple of devices from Cisco that will do exactly that).  For most ISPs, the primary obstacle to implementing this strategy will be on the billing and provisioning side, but the software to do that is readily available.
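On the billing and provisioning side, the bookkeeping amounts to little more than a per-subscriber meter and a plan-to-QoS-class mapping.  A minimal sketch, where the plan names, bucket size, and class labels are my assumptions rather than any vendor’s schema:

    from collections import defaultdict

    # Minimal billing-side bookkeeping: meter each subscriber's monthly
    # transfer and map their plan to a QoS class.
    GB = 1024 ** 3
    PLANS = {
        "metered":   {"bucket_bytes": 200 * GB, "qos_class": "premium"},
        "unlimited": {"bucket_bytes": None,     "qos_class": "lowest"},
    }

    usage = defaultdict(int)  # subscriber_id -> bytes transferred this month

    def record_transfer(sub_id, nbytes):
        usage[sub_id] += nbytes

    def over_bucket(sub_id, plan):
        bucket = PLANS[plan]["bucket_bytes"]
        return bucket is not None and usage[sub_id] > bucket

    record_transfer("alice", 250 * GB)
    print(PLANS["metered"]["qos_class"], over_bucket("alice", "metered"))  # premium True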

The freeloaders will still get pissed off.  They think that if they’re paying for 10 megabits of downstream bandwidth, they should get it, all the time.  They don’t understand the technical problems with actually filling a pipe (TCP wasn’t designed for fat-pipe, high-latency networks), and they don’t understand the business problem of providing high-bandwidth connections when there’s no economically feasible way to sell the service such that everyone can light up at once and have it work.  Hell, not even the telephone network can accommodate that, which is why during emergencies people are asked to minimize their phone usage: the phone system can run into capacity issues too.  The average consumer might be upset as well, thinking they’re getting less for their money than they used to (“I used to have unlimited, now I’m metered”), but I think this can be solved by education and marketing (“For the same price you’ve always paid, you will now be a premium customer and always have access to all the bandwidth you want, so long as you’re willing to limit your monthly transfers.”)  Either way, both groups are offered the alternative plan should they decide the downsides of the one they’ve chosen outweigh the benefits of the other.  Everyone has options.
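To put a number on the TCP point: a single flow’s steady-state throughput is capped at roughly window size divided by round-trip time, so on a high-latency path even a 10-megabit pipe can’t be filled by one connection.  A quick worked example (the RTT is an assumed long-haul figure):

    # One TCP flow's throughput is capped at roughly window / RTT.
    # 64 KB is the classic maximum receive window without window scaling.
    window_bytes = 64 * 1024
    rtt = 0.100  # 100 ms round trip, e.g. a long-haul path
    throughput_mbps = window_bytes * 8 / rtt / 1e6
    print(f"{throughput_mbps:.1f} Mbit/s")  # ~5.2 Mbit/s: half of a 10 Mbit pipe unused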

This will piss off the net neutrality folks who think the network should always be best effort, but it’s a pretty defensible position.  ISPs have a right to frame their service to their customers however they choose, and this doesn’t affect how services on the Internet are delivered on a per-site basis, merely on a per-subscriber, per-plan basis.  It’s a legitimate business case that does not affect customers’ equal access to Internet resources.

In the end, I think it’s a compromise everyone can live with.  The technology is already in place, and I think the missing pieces would be relatively inexpensive to implement given the upsides to the business.  What do you think, Mark?


Down with P2P

Strangely, I find myself agreeing with Mark Cuban.  I’ve spent some time thinking back over what I’ve downloaded via P2P applications.  I’ve used BitTorrent and earlier P2P technologies to download many things over the years, but I can think of only one legitimate application, and that’s Blizzard using BitTorrent to distribute the WoW client.  The potential here is immense; however, the only legitimate reason for Blizzard to use BitTorrent for distribution is to save on bandwidth costs on their end.  A company like Akamai could easily provide a similar or superior experience for most users, but it would cost Blizzard significantly more than their current distribution model.  I find it ironic that one of the most successful legitimate users of P2P is primarily using it to offload costs onto the ISPs, when they are probably one of the most successful pay services on the Internet.  The only thing I’d miss about losing the various P2P applications is the ability to download television seasons during the summer for viewing, mainly because there isn’t a suitable for-pay alternative.
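For a sense of the money involved (every number below is invented; I have no idea what Blizzard or Akamai actually pay or charge), the arithmetic looks something like this:

    # Rough cost intuition with entirely made-up numbers: what a publisher
    # avoids by seeding via BitTorrent instead of paying a CDN per byte.
    patch_gb = 2.0
    players = 1_000_000
    cdn_price_per_gb = 0.10        # hypothetical CDN rate, USD

    cdn_cost = patch_gb * players * cdn_price_per_gb
    publisher_share = 0.05         # assume the publisher seeds ~5% of the bytes
    p2p_cost = cdn_cost * publisher_share
    print(f"CDN: ${cdn_cost:,.0f} per patch vs. seeding: ${p2p_cost:,.0f}")

The other ~95% of the bytes don’t disappear; they move onto the peers’ upstream and their ISPs’ networks.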

Honestly, the striking fact is that 60% of Internet traffic is P2P, and that figure is from a report last year.  It’s certainly not going down; if anything, it’s increasing.  That means every bit of traffic normal users generate (web browsing, email, etc.) is fighting for bandwidth on networks that are chronically congested, simply because as soon as the ISPs provision more bandwidth, the P2P users fill up the pipes.  We can get into the oversubscription arguments, but frankly, oversubscription is the only way the business model works.  If ISPs had to provision enough bandwidth for everyone to fully light up their last-mile pipe to the home, they’d go out of business.  What this means, and what I’ve specifically been noticing more in the past few weeks as I’ve traveled, is that my service is starting to suffer.  Every time I get to a hotel, the damn pipe is filled and I can barely VPN into work to get email.  Even back in Arkansas, I’m noticing that my mother-in-law’s Cox connection out in Greenwood seems slow.  Without access to the network management systems of the various places I’ve been, it’s almost impossible to fairly diagnose exactly why they’re slower than I expect, but a safe bet would certainly be a lack of bandwidth at the upstream (especially during peak hours) due to P2P users.
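Here’s what “provisioning for everyone to light up” would actually mean, with an assumed subscriber count and an assumed (and ordinary) contention ratio:

    # What 1:1 provisioning would cost versus oversubscription.
    # Subscriber count and the 20:1 contention ratio are assumed.
    SUBSCRIBERS = 10_000
    LAST_MILE_MBPS = 8

    full = SUBSCRIBERS * LAST_MILE_MBPS  # everyone lit up at once
    typical = full / 20                  # a 20:1 oversubscription ratio
    print(f"1:1 provisioning: {full / 1000:.0f} Gbit/s of transit")
    print(f"20:1 oversubscription: {typical / 1000:.0f} Gbit/s of transit")
    # A P2P box running flat-out around the clock behaves like a 1:1
    # customer inside a network priced for 20:1.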

If I’m starting to feel like my service is suffering, then shape all the damn P2P traffic down to zero.  Honestly, if I get better service, I probably won’t lament the loss of my ability to open 250 TCP connections at once to pull down files in little increments at 10 KB/sec per connection.  Maybe, without the free alternative, consumers will finally start demanding acceptable for-pay alternatives to the things they’re currently getting illegally.  I just don’t see anything getting much better in the current stalemate without some sort of drastic measures.  I just never thought I’d be siding with the providers on this particular issue.