Category Archives: broadcast

TV Personalisation

I see this has cropped up again on the various roadmaps out there, and it's high time I put in my two pennies' worth.

This is probably the 4th or 5th time in two (or three) companies that this has come up as a user feature for a Digital TV (DTV) service… and nobody has built it yet. Well, they have, but I'll come onto that.

The chief issue that I have with this idea is that no-one is entirely sure how it’s actually going to work!

OK, hold up: from a technical viewpoint it's entirely doable. With the emergence on the DTV platform of technology that has its roots in the Internet world, it's perfectly possible to start having some form of user session to target content (and ads!) at you.

If it's that easy, you ask, why haven't we done it? Well, it comes down to something a bit more fundamental: how do people use a TV?

The TV is not the Computer

That's an obvious statement, isn't it? When you use the computer, or more accurately sit down in front of the keyboard, mouse and monitor, there is only one person using the computer. You. The websites you log into are all making the (logical) assumption that, as you are the only person at the keyboard and screen, you are indeed the person you say you are. I'm not planning on going into a treatise on security here, but it is a basic tenet of personalisation that you might have some token representing "you" that is then used to tie your content to "you". When you log off and leave the keyboard, screen and wired rodent, the next person logs into "their" websites/accounts/etc. and the user has changed to them.

When it comes to TV we behave differently. For a start, it's an implied group activity, even if on any given evening it might not be. The interaction from the user is less, much less: the TV delivers things to you, you don't look for content. The searching you do is done in the EPG, with a view to finding something that you then watch for a period of time. The input is different too: you don't have a keyboard (with apologies to the 3 Sky subscribers who do… 🙂 )

This is the basis of the "lean back/lean forward" model (the words appear interchangeable: "sit back/sit forward" is another name for it). It is broadly based on our interactions with the system in front of us and how they shape our experience and expectations.

But people have done personalisation on TV, haven't they?

They have, and two examples spring to mind, HomeChoice and BARB. Used for different purposes but with equal issues.

OK, HomeChoice (which is now Tiscali) was a fairly novel idea: it was the first large commercial attempt in the UK to deliver TV and VoD over ADSL, i.e. not on a traditional broadcast network. When you got the service, you logged in using your account details and then set up a profile, repeating this for every person in your house. In theory this means it can then "learn" about what you want and tailor its content (much like TiVo). However, there was a subtle flaw…

…if the users are anything like the people I know who had it, one person logged their profile in and never logged out.

So we’re back to square one, a TV with n people in front of it and no idea who they are.

Why BARB? Well, they work on a similar principle, although with a slightly less sophisticated (or more, depending on your viewpoint) model, where each person in the household has to log their id into a box to indicate who is watching at any time. This is then fed into a system to interpolate the viewing figures.

The issues with this are fairly obvious and can be the subject of many posts, but they do indicate a need for some physical interaction between the user and the TV to say who is in the room.

This is not part of the current TV experience!

So how do you get personalisation to work ?

The first point is to recognise that this is not entirely a technical issue. First and foremost it’s a User Experience or Human Interaction challenge.

The solution to this will come when someone brings the kind of thinking that produced the iPhone to the TV. It will need a new way of interacting that builds on familiar behaviour, but lets you get at the new functionality.

The second point is that adding user ids to broadcast systems that are broadly ignorant of them is HARD. This is not something that can be bolted on, it will be a long and difficult project which will need some very clear direction on what personalisation means. Thankfully we can look to our brethren in the Enterprise sector who have had to do similar exercises before now.

The third is that this will open up a can of worms. Will you use user tracking? Do you have sticky profiles? How much data have you captured? What are the regulations around it? What do you do about adult content? Targeted ads? Who controls this? Will you have targeted content with DRM? How will you interact with third parties?

The last point is that the UI challenge is also there for convergent platforms, where the ideas of Live, Time-Shifted, On-Demand and Shared/User Generated content have come together, and any solution would also have to cope with these functions.

To take this forward, it's time to see what people can do with the UI. It will need to be ambitiously creative and be a ubiquitous part of the TV, and I'd really like to see people push this. Imagine your STB was more like the Xbox or PS3 as a functional device (albeit without their quirks!): imagine what we might see.

So, if you think you’re ‘ard enough, come make it happen!

Currently Listening to: Oakenfold – Ready, Steady, Go


When is a TV program the same?

We've recently hit a problem based on a lack of clarity about the "equivalence" of assets. For people in the broadcast industry, this is an old chestnut, but I think it's worth exploring here.

Let's say you have a program, for the sake of argument: "Vertical City" (1st episode). Let us also assume it has a total of 6 episodes. (For those with déjà vu, it is a real program on Channel 4, but I'm just using this as an example.)

Let us also suppose (this is not the case as far as I know) ZDF buys this series, as does NED1, and then RTE.

Let's introduce a viewer, Bob, who has decided that he wants to watch the entire series. He gets all these broadcasters/channels: RTE, NED1, ZDF and C4, but he's come late to the party and C4 is at episode 4 (I'm ignoring the iPlayer/4oD part here :)). Now if you had a TiVo, and told it to get the whole series, it would make this assumption:

A program is the same regardless of channel

So it could reconstruct the series as follows:

  • Ep1 – from RTE
  • Ep2 – from RTE
  • Ep3 – from ZDF
  • Ep4 – from NED1
  • Ep5 – from C4
  • Ep6 – from C4

And that's the same as the original series on C4, isn't it?


If you did this, you’d probably find that NED1 burns subtitles in (it’s a Dutch channel), ZDF might dub the program so the sound track is now German, and RTE might burn a graphic into the program.

The point is that the original asset produced by the program maker (in this case Electric Sky) is the only part that you can make any assertion of equivalence on. This is how a broadcaster like the BBC can say that an asset they buy to put out on BBC1 is the same as the instance they put out on BBC3 a week later/earlier. Once it is on the channel and consumed, it is a different program, even if the viewer might say it's equivalent. BBC3 has the channel logo, for instance, and the time of showing might be different.

This difference is even more marked in a multi-language environment because of the differing audio/video tracks.

From the point of view of a Broadcast Network owner, such as UPC or Sky, this is frustrating, because you cannot assume equivalence, so each program is different unless the channel operator tells you explicitly that they are equivalent. This affects your approach to PVRs, network PVR, and meta-data. Essentially only the people who fed the tape into the ingest at the transmission stage can tell you if the program is the same.

From an architectural standpoint, this is something to watch, where a development view will tell you that there are multiple copies of the same data in the system, even though the "copies" are actually different based on the business rules.

Just because you can run a string ‘==’ on the data with a result of ‘true’, doesn’t make it the same.
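To make the point concrete, here is a small sketch. The asset model and identifiers are hypothetical (not from any real broadcaster's system); it just shows that two transmissions can share the same master while still not being equivalent programmes:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BroadcastAsset:
    """A hypothetical model of one transmitted programme instance."""
    source_asset_id: str     # identifier of the programme maker's original master
    channel: str
    audio_language: str
    burned_in_subtitles: bool

# Episode 3 as transmitted by two different channels.
c4 = BroadcastAsset("ES-VC-EP3", "C4", "en", False)
ned1 = BroadcastAsset("ES-VC-EP3", "NED1", "en", True)  # Dutch subs burned in

# The string '==' on the master IDs says "same"...
assert c4.source_asset_id == ned1.source_asset_id
# ...but the transmitted programmes compare as different.
assert c4 != ned1
```

The business rule lives in which fields you compare, not in the comparison operator itself.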

Broadcast Content References

I've been doing a whole bunch of work on referencing in the TV/iTV area, so I thought I'd share, plus it means I can look this back up later 🙂

Over the past 10 or so years there have been various activities to create a URI scheme for referencing and locating content on a broadcast network. This has become particularly useful in the Digital TV area. Digital TV (DTV) has also brought interaction layers (Interactive TV or iTV) together with the broadcast content. This has meant the emergence of applications that add functionality to the DTV world. From an application developer's point of view it is useful to be able to refer to a particular programme or "event" on a broadcast channel.

Various groups have worked on this over those years, and there have been a number of proposals. The common element linking those groups however is that the work has come to an end and most have moved on or closed down.  The frustrating thing is picking up the paper trails…

Anyway, the locator is required by meta-data schemes such as TV Anytime, which need a physical location of the media to be indicated (other schemes may be used), and is also required by Tapeless Production systems, DVRs, Media Centers and the like.

The two main schemes that I can find specifications for are the W3C's 'tv:' scheme and the DVB locator, both covered below.

The main reason I've been looking at this is the requirement of TV Anytime to have some way of resolving content references down to a location. Most references in TV Anytime (now ETSI standards TS 102 822-1…TS 102 822-7) actually use the Content Reference Identifier (CRID) scheme 'crid://', which is now specified by RFC4078 (Informational – NOT a standard). These references are high-level and do not take into account any physical asset location. Eventually you have to determine if your content is on a broadcast stream or on a disk somewhere.
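As a sketch of that resolution step (the CRID, the table contents and the locators here are all made up for illustration; a real resolver would be a network service, not an in-memory dict):

```python
# A toy CRID resolver: a crid:// reference resolves down to zero or more
# physical locators, which might be broadcast streams or files on disk.
RESOLUTION_TABLE = {
    "crid://example.broadcaster.tv/vertical-city/ep1": [
        "dvb://233a.4000.4700;b0cb",               # an instance on a broadcast stream
        "file:///var/media/vertical-city-ep1.ts",  # a copy already on disk
    ],
}

def resolve(crid: str) -> list[str]:
    """Return the physical locations a CRID resolves to (empty if unknown)."""
    return RESOLUTION_TABLE.get(crid, [])

locators = resolve("crid://example.broadcaster.tv/vertical-city/ep1")
```

The point of the indirection is exactly the one above: the CRID stays stable while the physical locations come and go.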

The TVWeb idea

Personally, I quite like this idea: it's obvious ('tv:') and simple, albeit with some issues around name collisions. It was possible to specify "tv:4" for instance, which was the "4th" channel; this doesn't make a lot of sense in the current world. The history merges with that of SMIL, and in fact the Television and the Web group has since closed and been folded into the Device Independence Activity, which in turn has moved on to be part of the Ubiquitous Web Applications group. So apart from RFC2838 for this scheme, there isn't much going for it, as it seems that SMIL is the plan for the future. Which, as a function of the W3C, does make sense, as they are after all looking at the web. The only cross-over here is the predominance of XML and thus URI structures to reference content, which brings us to…

The DVB locator

This was created by DAVIC, and proposed in their 1.3.1 standard, which is now an ISO standard (ISO/IEC 16500). Slightly worryingly, people have felt the need to change and extend the original, so there was a 1.5 release from DAVIC which covered the extensions, building on those introduced in 1.4.1.

So what are you getting in the ISO document? Unfortunately I won't know, as I'm not currently furnished with spare Swiss francs to buy it, but if we take the 1.5 version then a DVB locator is of the following form:

   dvb://<original_network_id>.[<transport_stream_id>][.<service_id>[.<component_tag>]][;<event_id>][@<start_time>D<duration>][/<...>]


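To see how this grammar behaves in practice, here is a rough validator. The regex is my own reading of the form above (field names are from the spec, the pattern is not); in particular the duration separator and timestamp shape are assumptions based on the examples discussed below:

```python
import re

# A sketch of a DVB locator matcher, following the DAVIC 1.5 form quoted above.
DVB_LOCATOR = re.compile(
    r"^dvb://"
    r"(?P<original_network_id>[0-9a-fA-F]+)"
    r"(?:\.(?P<transport_stream_id>[0-9a-fA-F]*)"       # may be empty
    r"(?:\.(?P<service_id>[0-9a-fA-F]+)"
    r"(?:\.(?P<component_tag>[0-9a-fA-F]+))?)?)?"
    r"(?:;(?P<event_id>[0-9a-fA-F]+))?"
    r"(?:@(?P<start_time>\d{8}T\d{6}Z)"                 # basic ISO 8601, no separators
    r"(?:D(?P<duration>PT\d{2}H\d{2}M))?)?"
    r"(?:/.*)?$"
)

def parse_dvb(locator: str):
    """Return the locator's named parts, or None if it doesn't match."""
    m = DVB_LOCATOR.match(locator)
    return m.groupdict() if m else None

parts = parse_dvb("dvb://233a.4000.4700;b0cb@20070725T050000ZDPT03H00M")
```

Feeding it the '~'-separated examples that float around (see below) returns None, which matches the finding that they don't fit the specification.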

Implementing this

Now for the more interesting aspects of this specification, which appear when we try to implement it:

  1. There are a lot of examples where people try to use a dvb locator that looks like this: dvb://123.5ac.3be;3e45~20011207T120000Z–PT02H10M. Technically this is a valid URI, but it does not match the DAVIC (ETSI) specification. It also causes the .Net validation to break, as it thinks '~' is invalid. This example is prevalent in the TV Anytime documents, which is a pity as the TV Anytime group came from DAVIC.
  2. The other set of examples are of this form: dvb://233a.4000.4700;b0cb@2007-07-25T05:00:00ZPT03H00M. This is NOT a valid URI according to RFC3986 (the colon fails on the 'port' rule, and the address is not IPv6, so this is not open to alternatives). The solution is to change the timestamp to use the basic form of ISO 8601, which is just the timestamp with no separators.
  3. Validating the TV Anytime schema under .Net threw an error for dvb locators of the kind in point 1… which is not correct (bugs to be filed before people look). To figure this out I had to build my own URI grammar to determine if the dvb:// locator was invalid or the schema was invalid. It turns out that xs:anyURI does not actually have to be validated by any processor implementing Schema Validation – for sensible reasons. So quite what .Net is complaining about, we're not sure. Validating in oXygen simply checks that it's a sensible string. Hmm; anyway, examples such as those in point 1 aren't correct, move on 🙂
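The fix in point 2 is mechanical: strip the separators from the extended-format timestamp. A sketch (the function name is mine):

```python
from datetime import datetime

def to_basic_form(ts: str) -> str:
    """Convert an extended ISO 8601 UTC timestamp, whose colons trip the
    'port' rule of RFC 3986, into the separator-free basic form."""
    dt = datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ")
    return dt.strftime("%Y%m%dT%H%M%SZ")

fixed = to_basic_form("2007-07-25T05:00:00Z")  # -> "20070725T050000Z"
```

Round-tripping through datetime rather than just deleting '-' and ':' also catches malformed dates for free.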

I also found out the wonderful things you can do to host names, most of which should result in capital punishment.


A couple of things:

  1. I'd have preferred to see the timestamp before the location, so a format of: dvb://20081223T140000:DT01H05M@5f.8ec.6df1;345 – which to me reads better from a semantic point of view, i.e. "On 23/12/2008, at 14:00, for 1 hour and 5 minutes, on/at (@) this location; with this event".
  2. dvb is kind of specific to a particular broadcast system, thankfully the prevalent one.
  3. It seems to have taken a while for people to pick up on some of this work, and it feels like I needed to do work similar to a patent lawyer's to figure out where it all is and what state it's in. There must be a better way.

BitTorrent – Anti-Social Networking ?

Today The Register carried a story on Virgin's decision to target BitTorrent users in the wake of its announcement of the new 50Mb/s service they are offering:

The move will represent a major policy shift for the cable monopoly and is likely to anger advocates of “net neutrality”, who say all internet traffic should be treated equally. Virgin Media currently temporarily throttles the bandwidth of its heaviest downloaders across all applications at peak times, rather than targeting and “shaping” specific types of traffic.

The firm argues that its current “traffic management” policy allows it to ensure service quality at peak times for 95 per cent of customers while still allowing peer-to-peer filesharers to download large amounts of data.

The details and timing of the new application-based restrictions are still being developed, Virgin Media’s Kiwi CEO Neil Berkett said in an interview on Monday following the launch of his firm’s new 50Mbit/s service. They will come into force around the middle of next year, he added.

This isn't very surprising and is something I've been expecting for some time; as I've mentioned before, there is an imbalance in who is paying for what and when.

What’s more interesting is why this needs to happen.

P2P is a good thing, isn't it?

Well… yes… and no. The key thing that you're trying to do is get the same content to lots of people at once, "cheaply" (more on cost later). In principle the logic is that the more people in a localised area have a file, the easier it is to distribute the content, i.e. asking a centralised server for a file will tell you that your "neighbour" Joe has it, so get it from him. Of course this assumes Joe lets you do that, and doesn't mind you (or anyone else) chewing up his upstream bandwidth.

This all assumes a number of factors:

  1. A load of people in your “local” area want the same content
  2. A load of people feed back into the process

There are other issues too, like the fact that mass synchronised events aren't good for this – imagine everyone watching TV over P2P: everyone would make the initial request at the same time and the central server would have to provide the content to everyone… which defeats any of the benefits of P2P.

So What’s Wrong with it ?

Ok, so looking at it from a network viewpoint, a Cable operator’s Network looks something like this:

[Diagram: the public Internet, the cable operator's local network, and households A-D on the shared cable segment]

While the diagram is a little simplified it has the basic parts that we care about:

  1. A Service Provider – This could be google, BBC, MSN, etc… for our purposes this is where we get the content from.
  2. The Internet – yay – wires and stuff, connect us to everyone and so on. In this cloud lies all the peering arrangements between all the ISPs and Corporations that actually own the cables.
  3. The Cable Operator – the scope of their operation includes a backbone (often an Optic Ring). Coming off the optic fibre backbone, Cable Modem Termination System (CMTS) boxes act as gateways to the copper wire that comes down the street to your house. I’ll talk about fibre to the home later.

It's the CMTS that's important here. What it does is convert IP into a particular transport for cable modems, which is DOCSIS or EuroDOCSIS depending on where you are. DOCSIS is a frequency-based protocol implementing Ethernet; however, the connection over the top of this is point-to-point and encrypted. This means that every device in the diagram (households A-D) has its own connection to the CMTS if it wants to send data.

Hang on! What about TV?! OK, TV does not use DOCSIS; it is sent at a different frequency over the copper. The cable scheme is not so very different to ADSL, where the voice-carrying signals are at a different frequency to the data (the filters you plug into your BT socket – in the UK – split these frequencies). The BIG difference between ADSL and cable is that in ADSL you get your own pair of wires to the exchange, whereas in cable you share (as in the diagram) the cable with all the people on a particular linecard in the CMTS. The linecard drives the frequencies on a particular set of wires.

Depending on your make of CMTS, there can be up to 16 linecards, and each linecard can drive up to 900 devices. Now assuming your cable operator offers interactive TV, voice (telephone) and data (Internet), you have 3 different kinds of devices needing different levels of quality of service and bandwidth. The total amount of bandwidth available to the devices is ultimately limited by the frequency bandwidth available, typically around 6 x 55Mbit/s streams. The implication is that the most you can put through a CMTS is (in this case) 330 Mbit/s.
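As a back-of-envelope check on those numbers (the figures are the ones quoted above, not from any vendor's datasheet):

```python
# Total capacity through the CMTS in this example.
streams = 6
stream_rate_mbps = 55
cmts_total_mbps = streams * stream_rate_mbps  # 330 Mbit/s

# If a fully populated linecard's 900 devices all transmitted at once,
# the even share per device would be well under 1 Mbit/s.
devices_per_linecard = 900
fair_share_mbps = cmts_total_mbps / devices_per_linecard

print(cmts_total_mbps)             # 330
print(round(fair_share_mbps, 2))   # 0.37
```

This is the shared-medium arithmetic that makes repeated P2P traffic on the segment so expensive.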

Coming back to our idea of using P2P, imagine someone in Household D was downloading a piece of content that someone in Household C wanted. If C started downloading from D, the traffic from D has to go all the way back to the backbone before it can be routed to C. This means you are now using two “streams” for the content, and this doesn’t count D still downloading!

So why is this an issue? Well, apart from the traffic duplication, you're taking a lot of bandwidth that is also needed by other devices. Put it this way: you'd be very unhappy if the phone didn't work because your PC was hogging all the bandwidth, much in the same way as an anti-social download manager can clobber your Skype call on your PC. Now suppose it wasn't your PC? This is bandwidth contention on a single cable, as it used to be in the old 10base-2 days. Essentially the bandwidth available is the maximum divided by the total number of people on the line, and the more traffic that is repeated, the greater the inefficiency. On a cable network, P2P has no advantage over direct download for the end consumer; the only saving is on the operator's gateway.

If the linecard is fully populated then this problem becomes a lot bigger when spread across 900 devices (maybe 300 households with TV, Internet and Voice).

The Cost of the Issue

Now here is where we visit the cost aspect. Suppose I create a network that allows you to have every device have constant bitrate at the max possible – 50Mbit/s say – I’d need to be able to cater for 50Mbit/s x Total number of Customers all the time. Clearly that’s unrealistic. In the same way as if every single person in the UK tried to call at the same time the network would not cope. So, what operators do is build a network that has sufficient capacity to cope with a working “maximum” and that drops to a particular utilisation level in the “off-peak” periods. This level is determined by cost/operational aspects.

This means that during the “quiet” periods, typically during the day, the network is not as heavily used, and the traffic is maybe 40-50% of the total capacity. Now as any TV marketer knows when people come home, and settle down, there is a “primetime”, and this is also true for domestic Internet use. The total amount of traffic at this point will likely exceed the network capacity which means some traffic shaping will take place as part of the system/equipment tolerances.

What about ADSL ?

ADSL has a different contention pattern that ironically could favour P2P, as it doesn't share cables between subscribers. The contention point is actually the back-haul links from the exchange to the ISP. So, for example, 50 connections using the back-haul will only have a share of 55Mbit/s if they have to use the BT Openreach infrastructure (UK based), maybe more if the exchange is unbundled; this is the chief value-add of using an unbundled exchange, though it is analysed in more depth elsewhere. ADSL line speed is, however, determined by the physical properties of the line (length, cable quality, local electrical noise), which cable is less prone to, being (in most cases) a limited run of copper, running underground from the green box in the street outside your house 🙂

Potential Solutions

The first is obvious: buy more equipment! The only drawback is the corner that the all-you-can-eat tariffs have put the ISPs in. The traffic has gone up, but the income hasn't, so what do you use to pay for the new equipment? This is part of the argument that Service Providers are passing transport costs onto the ISPs and thus consumers. After all, Google only needs to pay for its gateways… ?

Or you can target the most intensive users that inhibit your core services (telephony, for example) with traffic policies that restrict particular services or protocols. This option is going to be preferable if you can't afford to upgrade your equipment or the overheads don't make financial sense. Existing equipment will have enough capability to enforce these policies without much change, and any change will, in the main, be less than building new infrastructure.

You could provide fibre to the home. Personally I regard this as a white elephant: it gets you a faster connection to the same backbone. So if you and all the people on your street (let's say 50 of you) have 100Mbit/s, but the backbone is only 1000Mbit/s, the most you could each get is around 20Mbit/s if you all used it at the same time. In reality it would appear faster based on usage patterns (i.e. actual use at an instant in time), but it has the same issues as before, especially if the cost of the package is fixed. For the operator it does have the benefit of removing most of the bandwidth issues that copper has, as well as ensuring you have a new cable.
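Spelling out the arithmetic from the paragraph above (numbers are the ones in the text):

```python
access_mbps = 100     # per-home fibre access speed
backbone_mbps = 1000  # shared backbone capacity
households = 50       # homes on the street

# If everyone is active at once, the backbone, not the access link,
# sets the ceiling per household.
ceiling_mbps = min(access_mbps, backbone_mbps / households)
print(ceiling_mbps)  # 20.0
```

The access upgrade only shows its headline speed while your neighbours are idle.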

The actual cost of the data packages is – if we are to examine all angles – too low. Why aren't we using Pay As You Go? Pay a base rate which includes our basic level of activity and then pay for additional bandwidth when needed, or used. We've come to expect really low prices because of the increased competition, but unlike most commodities, a data connection isn't a one-off cost: if you want to use it more, you need to pay more. We have that with cars (petrol/diesel), so why not our data? It might make people consider what is important out there, which in a wider context can only be good in terms of judging "good" Internet services. Would you pay to get to Amazon? I mean, you do now, but do you value it?

Why are we trying to get around the ISP? Why not work with them? After all, a cable network has as part of its core function a very well made broadcast network which you use for TV; suppose we used that to distribute the content? That, after all, is what P2P is trying to do. It would mean savings for the operator by being able to use "off-peak" capacity efficiently and also cache requests, which means better use of the network and lower gateway/peering costs.

Interesting things

Just had a chat with George about his new project. Very interesting… will write more when I’ve given it a go.

The cost of IPTV

I’ve just been over to the BBC Internet blog to have a catchup and saw this from Ashley Highfield. In summary iPlayer has been quite a success to date, but there is still some debate about the cost to the network infrastructure.

On this point I'm glad, as this debate needs to be had, both from the commercial point of view and also to inform the users of the Internet within the UK, so that there is some transparency about this. Others are also looking at this and have put out some figures based on their measurements. I'm not going to reproduce the figures here, so I'd advise having a look over them directly, but they do illuminate one thing: the cost of carrying all this content is going up, dramatically.

This is not a surprise really; it's been pointed out before that the cost of providing the service is not fully transparent and that the consumer is not necessarily keeping up with the cost. The breakdown of the data is very illuminating, especially the illustration of where the ISPs get their service from, which is essentially BT's IPStream product. The bit I find fairly horrific is the scaling, which is done in steps of 155Mbit/s (implying an ATM network). This has highlighted a detail in the network topology that may in fact be the weaker link.

In essence, the fact that you or I might download Gigabytes of data over our ADSL has a corresponding effect on the ISP’s connection to whichever network(s) it is peered with. If you are peered directly to the content provider, such as the BBC for example, then the amount that can be transferred between your ISP and the BBC is largely dependent on the links between them and the BBC, the cost also being negligible. So far, so good.

Now, for ISPs, their connectivity to you comes in via IPStream, which essentially gives them a link to the outside world and the exchanges, and for a lot of places is possibly the only option for connectivity. IPStream has a fixed amount of bandwidth and will connect you to the backbone of BT's network, which is being upgraded to 21CN in anticipation of increased IP network use (!!!).

So here is the crunch: how do people pay for this? Well, largely it appears that the "Pay as You Go" model is de rigueur: use more, pay more. Easy enough. However, there is the issue of scaling; looking at the figures, it is fairly likely that a sustained increase would out-pace new development of the network, which would impact all its users. This is not just the people at home, but those at work, as many businesses use the network as well. All this would appear to open the door to traffic shaping, diverting bandwidth to premium or localised services, which would have the effect of cutting the Internet up into so many fiefdoms. Anyone not already on a premium connection might not get the same full experience as everyone else. But then, do they now?

One question that strikes me is how on earth are we stuck with this idea of 155Mbit/s increments? Cable networks in Europe are looking at installing (D)WDM networks in order to cope with increased content distribution caused by iPlayer-like offerings via the STB; in a lot of cases they already exist. Is this the real issue behind the scenes: lack of a decent network backbone with increased competition? But then, do we want to repeat the experiences of NTL, Telewest, Cable and Wireless, and Colt as they struggled to get their networks into the ground?

Certainly the cable networks will cope better by offloading these services to their VoD systems, but as yet I haven't seen a network that can cope with every user hooked up to the VoD offering at the same time. Still, these are all services that are offered via your TV, which is still a better viewing medium for most people.

There are more questions than answers, but one thing is for sure: the Internet is probably remarkably good value for the consumer at the moment. Better get it while they're still working out the bill!

Dodgy networks

I have to say this is a nice little toy although I haven’t got my current VM Image of choice working fully yet.

The reason for using this at the minute is that we can use it to run a VM image of Ubuntu with NistNet running on it. More when I figure out its exact intricacies. For now we are trying to simulate what happens if you are trying to use a server interactively across a dodgy Cable HFC connection, and NistNet seems to be the ticket.

You can model the bandwidth, packet loss, duplication and delay within the network. It does this by using two network connections and using the simulator as a bridge between the two.
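As a toy illustration of the kind of impairment such a bridge applies (this little Python model is mine, with made-up parameters; it is not NistNet's implementation, just the shape of the idea):

```python
import random

def impair(packets, loss_rate=0.02, base_delay_ms=80, jitter_ms=20, seed=42):
    """Drop a fraction of packets and add base delay plus random jitter.
    Returns (packet, arrival_delay_ms) pairs for the survivors."""
    rng = random.Random(seed)  # fixed seed so runs are repeatable
    delivered = []
    for p in packets:
        if rng.random() < loss_rate:
            continue  # packet lost on the "dodgy" link
        delay = base_delay_ms + rng.uniform(0, jitter_ms)
        delivered.append((p, delay))
    return delivered

out = impair(range(1000))
```

Pointing an interactive client at a link shaped like this quickly shows which parts of a protocol cope badly with loss and jitter.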

It’s a nice use of a VM, but it has highlighted the lack of space on my laptop when I had to find stuff to clear out to fit the 5Gb Image on the laptop. 🙂