P2P – Boon, Boondoggle, or Bandwidth Hog? – Is Metering the Answer?
Reader Aswath posted a comment on Thursday’s post suggesting that charging users explicitly for both uploads and downloads is “an equitable solution” to the congestion problem ISPs claim is caused when peer-to-peer (P2P) services use some of each user’s “unlimited” Internet capacity to serve other users, substituting for the service provider having to buy more Internet bandwidth itself. Aswath’s a smart guy and knows his telecom; he’s right that this is a solution to the problem; but I think it’s the wrong solution.
P2P does involve shifting bandwidth usage (and usage of other resources) from the service provider to the users. If users choose to let Skype or BBC or Napster use some of their CPU power or disk storage to serve other users (as they actually do when they sign up for Skype calling, BBC iPlayer, or Napster sharing), that’s a personal choice (as long as they’re informed that’s what they’re doing).
You could say the same thing about “their” access bandwidth; they are paying for it. They should be allowed to use it as they please. But ISPs do rely on the fact that most access bandwidth isn’t being used most of the time. If they assumed it would all be used all the time, backbone networks would have to be many times larger than they are today and Internet access would cost us all many times what it costs today (especially in markets which are more competitive than the US).
If, as Aswath suggests, we all paid for what we actually used, we’d be very careful about what was done with “our” access bandwidth and not place more load on the backbone than we’re willing to pay for. We’d make very sure that the applications we choose to run, especially P2P applications, don’t run up huge bills behind our backs.
“Granted,” Aswath writes, “users need to be retrained from the flat-rate pricing, but it eliminates ISPs' cries of wolf.” Seems to me that usage pricing for Internet use is too draconian a cure for the problem, particularly if the problem IS just ISPs crying wolf.
I think unmetered use constrained only by the size of the access pipe you choose to buy (and the actual capacity you can obtain over that pipe in real life) has helped give the Internet the enormous utility it has. Several reasons for this:
- usage billing systems are themselves expensive and drive up the cost of service. Lots of records have to be kept of lots of little things. Detailed bills need to be produced; expensive conversations have to be had to resolve billing disputes. What if I download 100meg of a 101meg file and my Internet connection blinks (or I think it did) and I have to start all over? Am I entitled to a credit? Do I have to pay for spam?
- people are afraid of accidentally running up a large bill so they refrain from many activities. How do you know how many bytes a VoIP call is going to use? Are you gonna tell your kids not to send you big pictures? Videos?
- people hate complexity. As long as you’re going to be billed at all, there’s nothing simpler than having your bill be the same amount each month. You don’t have to reconcile it; you don’t have to worry about being overcharged. There are no unknowns in your budget. My experience is that people will pay a premium for simplicity (see this old post).
Moreover, simply charging per byte transferred doesn’t reflect the load each user places on the network as a whole. In general, data transferred during non-peak periods causes no incremental cost to the ISP nor does it cause congestion for other users. It’s only when PEAK loads grow that congestion is experienced and ISPs need to buy more backbone. Those users who transfer data to other users off-peak are actually increasing the productive use of the network at no cost to the ISP and making the network more useful to other users. Should they be penalized or rewarded for providing this service using their own machines and bandwidth?
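To make the point concrete, here’s a toy sketch in Python with entirely made-up numbers and hypothetical user profiles (an off-peak “seeder” and a peak-hour “streamer”); it isn’t how any ISP actually bills, just an illustration of how a per-gigabyte meter can diverge from the peak load an ISP has to provision for.

```python
# Toy model (hypothetical numbers) comparing metered billing with the
# peak load that actually drives an ISP's provisioning costs.

HOURS = 24
PEAK_HOURS = range(18, 23)  # assume an evening peak, roughly 6-11pm

# Hourly transfer in gigabytes for two illustrative users.
seeder = [4 if h not in PEAK_HOURS else 0 for h in range(HOURS)]    # heavy off-peak P2P seeding
streamer = [0 if h not in PEAK_HOURS else 3 for h in range(HOURS)]  # video only at peak

def metered_bill(hourly_gb, cents_per_gb=10):
    """Usage billing: pay for every gigabyte, whenever it moves."""
    return sum(hourly_gb) * cents_per_gb

def peak_contribution(hourly_gb):
    """The user's share of what the ISP must provision for:
    their maximum transfer during the peak hours."""
    return max(hourly_gb[h] for h in PEAK_HOURS)

for name, profile in [("off-peak seeder", seeder), ("peak-hour streamer", streamer)]:
    print(f"{name}: bill = {metered_bill(profile)} cents, "
          f"peak-hour load = {peak_contribution(profile)} GB/hour")

# The seeder moves far more bytes and gets the bigger bill, yet adds
# nothing to the peak load that forces backbone upgrades; the streamer
# pays less but is the one driving the cost.
```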
Even during peak periods, it’s not clear that users running P2P applications actually add to total network load. Take the example of BBC iPlayer, whose use involves copies of TV shows being legally transferred from one user to another. If this service really is popular, many of the transfers will traverse only a short distance on a single network. If the transfers were all being done from central BBC servers, all of those people who do not use the same ISP as the BBC would cause long cross-network transfers. ISPs would argue, however, that the BBC wouldn’t accept the cost of running the servers and buying the access necessary for all of these transfers directly, so they simply wouldn’t happen.
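Here’s a similarly rough sketch of the locality argument, again with invented numbers: four hypothetical ISPs, ten thousand viewers, and the optimistic assumption that a P2P client can always find a peer on its own ISP once anyone there has the show.

```python
# Rough sketch (made-up numbers) comparing cross-network transfers under
# central servers versus localized P2P distribution.

from collections import Counter
import random

random.seed(1)
ISPS = ["isp_a", "isp_b", "isp_c", "isp_d"]
viewers = [random.choice(ISPS) for _ in range(10_000)]  # which ISP each viewer is on
BBC_HOST_ISP = "isp_a"  # assume the central servers sit on one network

# Central servers: every viewer not on the host ISP pulls the show across networks.
central_cross = sum(1 for isp in viewers if isp != BBC_HOST_ISP)

# Localized P2P: assume a viewer finds a peer on the same ISP whenever that
# ISP already has viewers, so only one "seed" copy crosses networks per ISP.
per_isp = Counter(viewers)
p2p_cross = sum(1 for isp in per_isp if isp != BBC_HOST_ISP)

print(f"central servers: {central_cross} cross-network transfers")
print(f"localized P2P:   {p2p_cross} cross-network transfers")
```

Real P2P clients aren’t this tidy about staying on-net, but the direction of the comparison is the point: the more popular the show, the more of the traffic can stay local.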
Net: as always, I appreciate Aswath’s thinking. I don’t agree with the conclusion this time and will try to present a solution to the possibility of P2P arbitrage in a future post.
The first post of this P2P series is here.