Category Archives: broadcast

Painful memories

I came across this post whilst catching up on UA. Reading it brought back some of the more painful and frustrating memories of my time working at the BBC.

I used to work over in BBC Interactive TV, which is part of the same Division as the Online folks. I remember being incredulous that, when I started, they were having to use Perl 5.003 with no Java, and, as is well described in the blog post above, having to perform sleight of hand to get things done.

Now I believe they have at least moved on a version, but they are still hamstrung by legacy systems.

Part of the reason for this was a policy of “not using the bleeding edge”, which is perfectly sensible, along with security concerns, also sensible. But I feel it was all taken to extremes. This was then coupled with no ability to handle migration to, and integration with, new software versions, and I include Operating Systems in that. When I left I believe there were still some old Red Hat 7.0 systems in place as production platforms within iTV (these have since been decommissioned).

As a result it is very difficult to move forward onto newer technologies or newer systems. Each decision seems to be made about the present situation without considering the longer term implications. The total future view of the infrastructure seems to be about 3 months, if that. At this rate the technical debt is building so high that it will be very hard to bring things back up to date when a critical point is eventually reached. Couple that with a bureaucratic system, more glacial than molasses, that permeates every orifice of the organisation to govern its changes, and it is all going to grind to a halt. People will leave; talent and knowledge will be lost.

iTV was lucky: we were (fairly) divorced from the Web Infrastructure and could control our own destiny to an extent, but we still hit brick walls fairly regularly. Chief among them was the lack of integration or infrastructure support to allow us to integrate and test new platforms. Decision making about the use of technologies was another issue.

There is a great need for a shake-up, a big shake-up, and maybe the move to Salford is a blessing in disguise, but it will be too late for many. Meanwhile this situation is causing a lot of grief to the people still there, and whichever division you are in, you will see it. To those who remain: good luck, and I hope you make it.


The Internet isn’t going to break… yet.

So this morning I went through my usual list of suspects to find out what’s going on, and I came across this.

Some of the comments from the various industry players are what I expected; after all, it is in no-one’s interest to say the Internet is going to break. Nor is it true. It may get a lot slower, but it will still work.

As is pointed out:

I thought that most ISPs were firmly set against charging content owners to stop their pipes filling up. They were using the convincing logic that their subscribers would feel that the ISP has already charged once for this content to be delivered via the monthly broadband subscription fee.

Which is true: no-one charges you more if you want to watch TV over your ADSL, even though you will undoubtedly use more bandwidth, the cost of which is potentially passed on to the ISP as part of the agreement with their supplier.
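To put “more bandwidth” into rough perspective, here is a back-of-envelope sketch in Python. The stream bitrate and viewing hours are illustrative assumptions of mine, not figures from any ISP:

```python
# Rough extra monthly traffic from watching TV over an ADSL line.
# All figures below are illustrative assumptions.
stream_mbps = 2.0      # assumed bitrate of an SD video stream, in Mbit/s
hours_per_day = 2      # assumed daily viewing time
days = 30              # billing month

# Mbit/s -> MB/s (divide by 8), then scale up to a month, then to GB.
gb_per_month = stream_mbps / 8 * 3600 * hours_per_day * days / 1024
print(f"~{gb_per_month:.0f} GB/month of extra traffic")
```

Even with these modest assumptions the figure dwarfs what casual browsing uses, which is exactly the traffic the ISP has already sold once via the flat subscription.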

This makes me wonder: what is going to motivate the ISPs to upgrade their lines when they can’t tap into the content revenue stream? After all, why upgrade your wires at cost when you aren’t getting anything for it? Is this all dependent on the idea that equipment gets cheaper and therefore the costs stay the same? That assumes installation and operation are zero-cost, which is not true.

It’s hard to see a parallel with other business models. Take, for example, the PC you are reading this on (Macs are a similar story, but bear with me here). The main drivers for upgrade are in the software, whether that is the operating system, games, or development tools. However, the costs are passed on to the consumer, who buys new hardware and software on a fairly regular basis to keep current. This is helped by limiting the amount of hardware available at any one time, and also by a fairly short hardware lifetime.

However, for the lifetime of your PC, while the bandwidth requirements have gone up, your Internet connection has stayed largely the same. The equipment on either side of the wires, your PC and the service providers you use, such as Google, has been upgraded and improved. The wires have not. Why? Because so far there are three sets of people in this equation: the consumer, the service provider, and the ISP. Out of all this, the ISP gets “nothing”, as the service provider is taking sales from the customer. Unless this changes there will always be arguments for traffic shaping.

Traffic shaping isn’t new, it is one of the component parts of IMS, which has been designed by the 3GPP. This is a system devised primarily by the Telcos in order to allow them to define the boundaries of their networks for service provision and billing purposes amongst other things.
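At its simplest, traffic shaping is often built on a token-bucket style rate limiter. As a minimal illustrative sketch (not IMS, and not any Telco’s actual implementation), it could look something like this in Python:

```python
import time

class TokenBucket:
    """Minimal token-bucket shaper: permits bursts up to `capacity`
    while limiting the long-run rate to `rate` tokens per second."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum bucket size (burst allowance)
        self.tokens = capacity    # start with a full bucket
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        """Spend `cost` tokens for one packet; False means 'shape it'."""
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)  # 5 packets/s, burst of 10
sent = sum(bucket.allow() for _ in range(20))
print(f"{sent} of 20 packets passed immediately")
```

A burst of 20 back-to-back packets drains the bucket after roughly the first 10; the remainder would be queued or dropped, which is exactly the lever an ISM-style policy engine pulls per subscriber or per service.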

Of course there are technologies that can be deployed right now without needing “fibre to the home”. See if you can persuade your local cable provider to give you a DOCSIS 3.0 based link… for a 100 Mbit/s connection 🙂