BitTorrent – Anti-Social Networking?

Today The Register carried a story on Virgin’s decision to target BitTorrent users in the wake of its announcement of the new 50Mbit/s service it is offering:

The move will represent a major policy shift for the cable monopoly and is likely to anger advocates of “net neutrality”, who say all internet traffic should be treated equally. Virgin Media currently temporarily throttles the bandwidth of its heaviest downloaders across all applications at peak times, rather than targeting and “shaping” specific types of traffic.

The firm argues that its current “traffic management” policy allows it to ensure service quality at peak times for 95 per cent of customers while still allowing peer-to-peer filesharers to download large amounts of data.

The details and timing of the new application-based restrictions are still being developed, Virgin Media’s Kiwi CEO Neil Berkett said in an interview on Monday following the launch of his firm’s new 50Mbit/s service. They will come into force around the middle of next year, he added.

This isn’t very surprising and is something I’ve been expecting for some time; as I’ve mentioned before, there is an imbalance in who is paying for what, and when.

What’s more interesting is why this needs to happen.

P2P is a good thing, isn’t it?

Well… yes… and no. The key thing you’re trying to do is get the same content to lots of people at once, “cheaply” (more on cost later). In principle, the more people in a localised area who have a file, the easier it is to distribute the content: asking a centralised server for the file will tell you that your “neighbour” Joe has it, so you get it from him. Of course this assumes Joe lets you do that, and doesn’t mind you (or anyone else) chewing up his upstream bandwidth.
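To make that concrete, here’s a toy sketch in Python of the “ask the server, get pointed at a neighbour” idea. It is nothing like a real BitTorrent tracker – the peer addresses and the same-subnet notion of “local” are invented purely for illustration:

    from ipaddress import ip_address, ip_network

    # Peers our toy "tracker" believes hold the file (made-up addresses).
    PEERS_WITH_FILE = ["82.10.5.23", "82.10.5.99", "141.0.8.7"]

    def pick_peers(requester_ip, limit=2):
        """Return up to `limit` peers, preferring those on the requester's own /24."""
        segment = ip_network(requester_ip + "/24", strict=False)
        local = [p for p in PEERS_WITH_FILE if ip_address(p) in segment]
        remote = [p for p in PEERS_WITH_FILE if ip_address(p) not in segment]
        return (local + remote)[:limit]

    # You are 82.10.5.42; "Joe" at 82.10.5.23 is on your street, so he comes first.
    print(pick_peers("82.10.5.42"))  # ['82.10.5.23', '82.10.5.99']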

This all assumes a number of factors:

  1. A load of people in your “local” area want the same content
  2. A load of people feed back into the process (i.e. keep uploading once they have the content)

There are other issues too: mass synchronised events aren’t a good fit for this. Imagine everyone watching TV over P2P – everyone would make the initial request at the same time, so the central server would have to provide the content to everyone… which defeats any of the benefits of P2P.

So What’s Wrong with It?

OK, so looking at it from a network viewpoint, a cable operator’s network looks something like this:

[Diagram: a Service Provider and the public Internet connecting, via the cable operator’s backbone and CMTS, to the local network of households A–D]

While the diagram is a little simplified it has the basic parts that we care about:

  1. A Service Provider – this could be Google, the BBC, MSN, etc.; for our purposes this is where we get the content from.
  2. The Internet – yay – wires and stuff, connecting us to everyone and so on. In this cloud lie all the peering arrangements between all the ISPs and corporations that actually own the cables.
  3. The Cable Operator – the scope of their operation includes a backbone (often an optic-fibre ring). Coming off the backbone, Cable Modem Termination System (CMTS) boxes act as gateways to the copper wire that comes down the street to your house. I’ll talk about fibre to the home later.

It’s the CMTS that’s important here: what it does is convert IP onto a particular transport for cable modems, namely DOCSIS or EuroDOCSIS depending on where you are. DOCSIS is a frequency-based protocol implementing Ethernet; however, the connection over the top of it is point-to-point and encrypted. This means that every device in the diagram (households A–D) has its own connection to the CMTS if it wants to send data.

Hang on! What about TV?! OK, TV does not use DOCSIS; it is sent at a different frequency over the copper. The cable scheme is not so different from ADSL, where the voice-carrying signals are at a different frequency from the data (the filters you plug into your BT socket – in the UK – split these frequencies). The BIG difference between ADSL and cable is that with ADSL you get your own pair of wires to the exchange, while with cable you share (as in the diagram) the cable with all the people on a particular linecard in the CMTS. The linecard drives the frequencies on a particular set of wires.

Depending on your make of CMTS, there can be up to 16 linecards, and each linecard can drive up to 900 devices. Now, assuming your cable operator offers interactive TV, voice (telephone) and data (Internet), you have three different kinds of devices needing different levels of quality of service and bandwidth. The total amount of bandwidth available to the devices is ultimately limited by the frequency bandwidth available, typically around 6 x 55Mbit/s streams. The implication is that the most you can put through the CMTS is (in this case) 330Mbit/s.
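A quick back-of-envelope check on those numbers, in Python – treating the figures above as assumptions, since real capacities vary by CMTS model and channel plan:

    STREAMS = 6             # downstream streams in the example above
    MBIT_PER_STREAM = 55    # Mbit/s per stream
    DEVICES = 900           # devices a fully loaded linecard can drive

    total_mbit = STREAMS * MBIT_PER_STREAM   # 330 Mbit/s, as in the text
    per_device = total_mbit / DEVICES        # if every device talked at once

    print(total_mbit)            # 330
    print(round(per_device, 2))  # 0.37 Mbit/s each, worst case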

Coming back to our idea of using P2P, imagine someone in household D is downloading a piece of content that someone in household C wants. If C starts downloading from D, the traffic from D has to go all the way back to the backbone before it can be routed to C. This means you are now using two “streams” for the content – and that doesn’t count D’s own download, which is still running!

So why is this an issue? Well, apart from the traffic duplication, you’re taking a lot of bandwidth that is also needed by other devices. Put it this way: you’d be very unhappy if the phone didn’t work because your PC was hogging all the bandwidth, much in the same way that an anti-social download manager can clobber a Skype call on your PC. Now suppose it wasn’t your PC? This is bandwidth contention on a single cable, just as in the old 10BASE2 days: essentially the bandwidth available per user is the maximum divided by the total number of people on the line, and the more traffic that is repeated, the greater the inefficiency. On a cable network, P2P has no advantage over direct download for the end consumer; the only saving is on the operator’s gateway.
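A crude way to see the duplication is to count how many times one copy of the content crosses the shared coax. The 700MB file size and the simple two-trip model are assumptions for the sketch:

    FILE_MB = 700  # assumed size of the piece of content

    def coax_traffic_mb(via_neighbour):
        """MB crossing the shared cable segment to deliver one copy to C."""
        if via_neighbour:
            # D uploads it to the head-end, which sends it back down to C:
            # two trips over the same shared medium.
            return FILE_MB * 2
        # Direct download: one downstream pass.
        return FILE_MB

    print(coax_traffic_mb(False))  # 700  – direct download
    print(coax_traffic_mb(True))   # 1400 – P2P from the neighbour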

If the linecard is fully populated then this problem becomes a lot bigger when spread across 900 devices (maybe 300 households with TV, Internet and Voice).

The Cost of the Issue

Now here is where we visit the cost aspect. Suppose I build a network that gives every device a constant bitrate at the maximum possible – say 50Mbit/s – then I’d need to be able to cater for 50Mbit/s x the total number of customers, all of the time. Clearly that’s unrealistic, in the same way that if every single person in the UK tried to make a call at the same time, the phone network would not cope. So what operators do is build a network that has sufficient capacity to cope with a working “maximum”, and that drops to a particular utilisation level in the “off-peak” periods. This level is determined by cost/operational aspects.
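As a sketch of that provisioning sum – the customer count and the 50:1 contention ratio below are illustrative assumptions, not any operator’s real figures:

    CUSTOMERS = 100_000
    HEADLINE_MBIT = 50
    CONTENTION = 50          # assume 1 in 50 customers busy at peak

    guaranteed = CUSTOMERS * HEADLINE_MBIT   # every bit, all the time
    dimensioned = guaranteed / CONTENTION    # the working "maximum"

    print(guaranteed / 1000)    # 5000.0 Gbit/s – nobody builds this
    print(dimensioned / 1000)   # 100.0 Gbit/s – what gets built instead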

This means that during the “quiet” periods, typically during the day, the network is not as heavily used, and the traffic is maybe 40-50% of the total capacity. As any TV marketer knows, when people come home and settle down there is a “primetime”, and this is also true of domestic Internet use. The total amount of traffic at this point will likely exceed the network capacity, which means some traffic shaping will take place as part of the system/equipment tolerances.

What about ADSL?

ADSL has a different contention pattern that, ironically, could favour P2P, as it doesn’t share cables between subscribers. The contention point is actually the back-haul link from the exchange to the ISP: for example, 50 connections using the back-haul will only have a share of 55Mbit/s between them if they have to use the BT Openreach infrastructure (UK based), maybe more if the exchange is unbundled. This is the chief value-add of using an unbundled exchange, though that is analysed in more depth elsewhere. ADSL linespeed, however, is determined by the physical properties of the line (length, cable quality, local electrical noise), which cable is less prone to, being (in most cases) a limited run of copper running underground from the green box in the street outside your house 🙂
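The back-haul arithmetic in that example is just a fair-share division – and, as with cable, the share improves when fewer subscribers are active at once:

    BACKHAUL_MBIT = 55   # the shared back-haul link from the example
    CONNECTIONS = 50     # subscribers behind it

    # Fair share if every connection pulls data at the same time...
    print(BACKHAUL_MBIT / CONNECTIONS)  # 1.1 Mbit/s each

    # ...but off-peak, with only a handful active, each does far better.
    for active in (5, 10, 25):
        print(active, round(BACKHAUL_MBIT / active, 1), "Mbit/s")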

Potential Solutions

The first is obvious: buy more equipment! The only drawback is the corner that the all-you-can-eat tariffs have put the ISPs in. The traffic has gone up, but the income hasn’t, so what do you use to pay for the new equipment? This is part of the argument that Service Providers are passing transport costs onto the ISPs, and thus consumers. After all, Google only needs to pay for its gateways…?

Or you can target the most intensive users that inhibit your core services (telephony, for example) with traffic policies that restrict particular services or protocols. This option is going to be preferable if you can’t afford to upgrade your equipment, or the overheads don’t make financial sense. Existing equipment will have enough capability to enforce these policies without much change, and in the main any change will cost less than building new infrastructure.

You could provide fibre to the home. Personally I regard this as a white elephant: it gets you a faster connection to the same backbone. So if you and all the people on your street (let’s say 50 of you) have 100Mbit/s, but the backbone is only 1000Mbit/s, the most you could each get is around 20Mbit/s if you all used it at the same time. In reality it would appear faster based on usage patterns (i.e. actual use at an instant in time), but it has the same issues as before, especially if the cost of the package is fixed. For the operator it does have the benefit of removing most of the bandwidth issues that copper has, as well as ensuring you have a new cable.

The actual cost of the data packages is – if we are to examine all angles – too low. Why aren’t we using Pay As You Go? Pay a base rate which includes our basic level of activity, and then pay for additional bandwidth when it is needed or used. We’ve come to expect really low prices because of the increased competition, but unlike most commodities a data connection isn’t a one-off cost: if you want to use it more, you need to pay more. We have that with cars (petrol/diesel), so why not with our data? It might make people consider what is important out there, which in a wider context can only be good in terms of judging “good” Internet services. Would you pay to get to Amazon? I mean, you do now, but do you value that?
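A metered tariff of that shape might look like the sketch below – every number in it is invented purely to illustrate the idea:

    BASE_FEE = 15.00     # £/month, includes the first 20 GB
    INCLUDED_GB = 20
    PER_EXTRA_GB = 0.50  # £ per GB beyond the allowance

    def monthly_bill(used_gb):
        """Base rate plus pay-as-you-go for usage over the allowance."""
        extra = max(0.0, used_gb - INCLUDED_GB)
        return BASE_FEE + extra * PER_EXTRA_GB

    print(monthly_bill(10))   # 15.0  – a light user pays the base rate
    print(monthly_bill(200))  # 105.0 – the heavy downloader pays their way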

Why are we trying to get around the ISP? Why not work with them? After all, a cable network has, as part of its core function, a very well made broadcast network which you already use for TV – suppose we used that to distribute the content, which is after all what P2P is trying to do? This would mean savings for the operator, who could use the “off-peak” periods efficiently and cache requests, which means better use of the network and lower gateway/peering costs.
