The IT industry is massive compared with the broadcast industry – broadcasters would be foolish not to take advantage of the pace of developments in IT, says Paul Wallis
We are seeing unprecedented change in the world of broadcasting and electronic media at present. This change is genuinely disruptive, as it offers the potential for a radical shift in the way we work, bringing new efficiencies, new productivity and new ways of serving our audiences.
As always happens when shifts in technology occur, there is a clamour to make dramatic changes as quickly as possible. Conference sessions are dedicated to the new possibilities. Vendors are keen to sell you new solutions.
My first piece of advice to all those being bombarded with information about change, though, is to very carefully evaluate what your future plans are. Define your goals and the timeframe in which you hope to achieve them. Then you can develop the detailed plans to make it happen.
Those plans will be unique to you. The speed of transition and the level of business continuity and security during the transition, as well as the ultimate end state, will be different for every enterprise going through today’s transitions.
Make sure your objectives are clear: that will define your transition planning.
Let us consider just a couple of the significant changes facing the industry today, and how we might plan to address them.
The most obvious shift, and the foundation for the change in workflows, is the move to a radically new technology platform. Up until now, broadcasting has depended upon bespoke hardware and connectivity, because that was the only way to achieve the performance we need for broadcasting and production.
All the devices we need – encoders, switchers, graphics engines and so on – used specially designed hardware. There was no other way to create, manipulate and deliver 25 new pictures every second. We connected these black boxes through SDI, a serial digital interface created for broadcast because there was nothing else suitable.
That has all changed. Modern CPUs and GPUs are now fast enough to handle real-time video. Gigabit Ethernet is giving way to 10Gb and 40Gb Ethernet switches, with 100Gb capabilities coming out of the lab and into commercial products, and 400Gb Ethernet on the near horizon.
In short, we can do much of what we need using clever software running on standard hardware, and connected over standard networks. The IT industry is massive compared with the broadcast industry – Apple revenues in a single quarter are three times the whole of the broadcast product market in a year. We would be foolish not to take advantage of the pace of developments in IT.
Using COTS – commercial off-the-shelf IT hardware – gives us obvious cost savings and simplicity in purchasing. A best-of-breed system might have software from multiple vendors, but it would all run on common hardware, like HP servers.
But it does bring a new challenge. Implementing a major broadcast project in the past meant procuring a set of hardware, integrating it, installing it and then effectively setting it in concrete for seven to 10 years, the lifetime of the system. Then you replaced it with a new system.
The IT industry does not work like that. Try to recall the capabilities of a computer of 10 years ago! We are expected to replace core IT hardware every couple of years or so, along with the operating systems which run on it.
For broadcasters, it means we have to get used to a much more fluid infrastructure. Replacement cycles will be different for hardware, for operating software and for applications software.
Procurement cycles will change, too. For a major broadcast installation, a year from initial request for information to contract signing was not uncommon, with another year for integration, commissioning, testing and training to follow. In two years, the IT landscape will have changed – and now, so too will the capabilities of broadcast technology.
The move to a software-defined architecture for content creation, management and delivery is a major challenge, but so too is the decision on what form that content will take.
Many are still working through the move from standard definition to HD; others are considering what is to follow. 4K channels are being announced, but there is growing concern that this is just one of many ways to improve the quality the viewer perceives, and it may not be the most effective or cost-effective route.
For action channels, many consider high frame rates to be the best way forward. But, like 4K, this implies a lot more data. Moving from HD to UHD quadruples the amount of raw data to be processed and encoded: so too does moving from 25 frames a second to 100 frames a second. But HFR is even harder to compress than UHD.
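The multipliers above can be checked with a little arithmetic. A minimal sketch, assuming 10-bit 4:2:2 sampling (a common production format) purely for illustration:

```python
def raw_bitrate(width, height, fps, bits_per_sample=10, samples_per_pixel=2):
    """Raw video data rate in bits per second (4:2:2 = 2 samples per pixel)."""
    return width * height * fps * bits_per_sample * samples_per_pixel

hd  = raw_bitrate(1920, 1080, 25)   # HD at 25 frames a second
uhd = raw_bitrate(3840, 2160, 25)   # UHD at 25 frames a second
hfr = raw_bitrate(1920, 1080, 100)  # HD at 100 frames a second (HFR)

print(uhd / hd)  # 4.0 -- twice the width, twice the height
print(hfr / hd)  # 4.0 -- the same multiplier, from frame rate alone
```

Either route, more pixels or more frames, carries the same fourfold penalty in raw data; they differ only in how well the result compresses.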
2015 has seen a rapid rise in interest in high dynamic range (HDR) for television – better pixels, not more pixels.
The theory is that if you can more closely define colours, you can create perceived sharpness without actually increasing the screen resolution. A move from eight bits per colour to 12 bits has a stunning effect: more detail in the shadows, brighter highlights and an almost 3D-like clarity. And all for only a 50% rise in the bit budget.
How will the industry move forward? It is too hard to tell at the moment. What is clear is that your new architecture is going to have to be flexible enough to cope with 4K/UHD (or 8K, or more), HFR and HDR, even though you may never need all of those capabilities.
This brings us back to how best to take advantage of software-defined architectures.
In the past, for instance, we have always handled signals uncompressed at their native resolution, over SDI, because that was the only option. But IP connectivity and Ethernet switches are measured in bandwidth, not in ports, so perhaps there is value in mixing compressed and uncompressed content. We could determine, on a signal-by-signal basis, how to balance image quality, latency and bandwidth. A set of business rules could automate this decision-making in an instant, giving priority where needed and sacrificing quality where it is not going to be noticed.
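To make the idea concrete, such a rule set might look like the following sketch. The `Signal` fields, the 10:1 compression ratio and the priority scheme are illustrative assumptions, not any vendor's real API:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    name: str
    priority: int                  # 1 = on-air critical, 3 = monitoring only
    bandwidth_uncompressed: float  # Gb/s at native resolution

def plan_transport(signals, link_capacity_gbps, compression_ratio=10):
    """Decide, signal by signal, whether to send uncompressed (lowest latency)
    or compressed (lowest bandwidth), most critical signals first."""
    plan, used = {}, 0.0
    for s in sorted(signals, key=lambda s: s.priority):
        cost = s.bandwidth_uncompressed
        if s.priority == 1 and used + cost <= link_capacity_gbps:
            plan[s.name] = "uncompressed"   # priority where needed
        else:
            cost /= compression_ratio       # sacrifice quality where unnoticed
            plan[s.name] = "compressed"
        used += cost
    return plan
```

Against a hypothetical 10Gb/s link, a critical programme feed would travel uncompressed while an ISO record and a monitoring feed would be compressed to protect the link budget.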
Processes too are likely to be virtualised in a data centre, and maybe the cloud, rather than running on application-specific devices. Why would you allocate a standalone computer to graphics generation which is only needed for an hour a day, when the graphics software could grab as many processor cores as needed, when needed, from a pool in the data centre?
If you know you have a peak in encoding traffic, for example – for the main evening news programme, perhaps – then ship that out to the cloud to create capacity. Or maybe put other encoding work into the cloud to make space for this key task in the data centre.
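A placement policy of that kind can be sketched in a few lines. The job names, costs and capacity figure below are invented for illustration; a real scheduler would draw them from live telemetry:

```python
def place_encodes(jobs, onprem_capacity):
    """jobs: list of (name, priority, cost) tuples, lower priority = more critical.
    Keep the most critical work in the data centre; burst the rest to the cloud."""
    placement, used = {}, 0
    for name, priority, cost in sorted(jobs, key=lambda j: j[1]):
        if used + cost <= onprem_capacity:
            placement[name] = "data-centre"
            used += cost
        else:
            placement[name] = "cloud"
    return placement

jobs = [("evening-news", 1, 8), ("catch-up", 2, 4), ("archive", 3, 4)]
print(place_encodes(jobs, onprem_capacity=10))
```

Here the evening news bulletin keeps the data centre to itself, and the less urgent catch-up and archive encodes overflow to the cloud.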
These are just some of the ways in which the management of broadcast and media technology will look very different in the future. As I said at the start of this article, the really key thing is to determine where your business and operational requirements lie, then create an architecture that will support them.
You are unlikely to be able to implement this new architecture all in one go: most of us have existing infrastructures which must be amortised over time. So your planning process must also determine how you will run a hybrid infrastructure and the consequent hybrid workflows, all the while moving towards your final goals.
This is a tremendous opportunity to create an architecture fit for the future. Ensure that your goals can be achieved at your pace.
Paul Wallis is Sales Director Middle East & Africa at Imagine Communications.