
The move to file-based architectures allows broadcasters to completely change both the way they build and the way they use technical facilities for content creation and delivery, says James Gilbert
We talk a lot about disruptive change, but actually there has been relatively little in our industry. We started out shooting on film, then on analogue camcorders and now digital, but the process of acquisition has hardly changed. When Avid demonstrated its first non-linear editor (in a whisper suite at NAB in 1988), it showed the prospect of greater simplicity and productivity, but essentially editing remained a solitary, standalone process: we even still talked about managing the content in bins.
Today, though, we are on the verge of a genuine disruptive change if we choose to accept the opportunity. The move to file-based architectures allows us to completely change both the way we build technical facilities for content creation and delivery and the way we use those architectures.
The underlying enabler for this opportunity is the continuing growth in processing power. Writing in an anniversary edition of Electronics magazine in 1965, Gordon Moore (then R&D director at Fairchild Semiconductor, later one of the founders of Intel) observed that the number of components on an integrated circuit was doubling each year, and suggested that this rate could be expected to continue.
This idea of continual improvement became known as Moore's Law, although it was Intel colleague David House who popularised the best-known version: that processors would double in performance every 18 months.
Moore's Law means that today we have standard, off-the-shelf and affordable computers capable of processing broadcast-quality video in real time and performing extremely sophisticated transformations. As the IT industry is a couple of orders of magnitude larger than the broadcast industry, we are now in the comfortable position of being able to take advantage of someone else's investment in hardware development, reducing that part of the cost of the products we need to make television.
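As a rough illustration of what that doubling compounds to over time (a sketch only, assuming House's clean 18-month doubling and ignoring every real-world constraint):

```python
# Rough compounding implied by the popularised 18-month doubling.
# Illustrative only: real gains depend on architecture, power and cost,
# not just a fixed doubling period.

def performance_multiple(years: float, doubling_period_years: float = 1.5) -> float:
    """Factor by which performance grows after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

for years in (3, 9, 15):
    print(f"After {years} years: roughly {performance_multiple(years):,.0f}x")
```

Three years gives roughly a four-fold gain; fifteen years gives roughly a thousand-fold gain, which is why yesterday's specialist broadcast hardware keeps turning into today's commodity computing.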
File-based workflows
Computers deal in files, and if you tell them what to do, they will be as happy rendering 3D graphics as calculating Excel spreadsheets. So the first part of the revolution is that we now have to handle our content as files, not as the real-time video and audio streams that we have used up until now.
There are some challenges here: we need to learn some new networking skills, and we are a bit short of standards for exchanging files at the moment. But these are solvable problems. The advantages far outweigh the short-term challenges, not least that we can move content through inexpensive Ethernet cabling and switches, and look to a future without cumbersome co-axial cables and expensive specialist routers.
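A back-of-envelope comparison makes the point about moving content as files rather than streams; the bitrate, link speed and overhead figures below are assumptions for illustration, not measurements:

```python
# Assumed figures: a one-hour programme encoded at 50 Mbit/s, moved over a
# 10 Gigabit Ethernet link with a rough 30% allowance for protocol and
# storage overheads, compared with playing it out in real time.
programme_seconds = 60 * 60
programme_bitrate = 50e6      # bits per second (assumed)
link_bitrate      = 10e9      # bits per second (assumed)
link_efficiency   = 0.7       # assumed effective throughput fraction

file_bits     = programme_seconds * programme_bitrate
transfer_secs = file_bits / (link_bitrate * link_efficiency)

print(f"File size:          {file_bits / 8 / 1e9:.1f} GB")
print(f"File transfer time: about {transfer_secs:.0f} seconds")
print(f"Real-time playout:  {programme_seconds} seconds")
```

On those assumptions, an hour of finished programme moves across the network in well under a minute rather than tying up a real-time circuit for the full hour.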
So the next generation of broadcast products could be clever software for editing, encoding, graphics and so on, running on standard computers and linked over Ethernet. But that risks simply replacing like for like, the 2016 equivalent of moving from Betacam to DigiBeta: we would still have a set of discrete boxes performing individual tasks.
I argue that we should be thinking bigger than that. We should be seizing the file-based revolution as the chance to do something really disruptive.
Virtualisation
If broadcast products are to be software applications which run on standard computers, why does it have to be one process = one piece of software + one piece of hardware?
In the wider IT industry, this would be seen as hopelessly inefficient and inflexible. Unless that process needs to run 24 hours a day, you are not making the best use of the hardware: it is standing idle for significant lengths of time.
Best practice is for the software applications to be capable of being virtualised. That means they can run on virtual machines in a data centre: when they need hardware resources they take them to create a virtual computer, complete the task, then release the resources again for other tasks.
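A minimal sketch of that acquire, work, release cycle, using a semaphore to stand in for a shared pool of data-centre cores (the task names, core counts and timings here are hypothetical):

```python
import threading
import time

CORES = threading.Semaphore(8)    # hypothetical pool of 8 shared cores

def run_task(name: str, cores_needed: int, seconds: float) -> None:
    """Claim cores from the shared pool, do the work, then hand them back."""
    # A real scheduler would allocate the cores atomically; claiming them one
    # at a time is good enough for this sketch.
    for _ in range(cores_needed):
        CORES.acquire()
    try:
        print(f"{name}: running on {cores_needed} cores")
        time.sleep(seconds)       # stand-in for rendering, encoding and so on
    finally:
        for _ in range(cores_needed):
            CORES.release()       # resources freed for other tasks
        print(f"{name}: finished, cores returned to the pool")

tasks = [
    threading.Thread(target=run_task, args=("graphics-render", 4, 0.5)),
    threading.Thread(target=run_task, args=("edit-conform", 4, 0.5)),
]
for t in tasks:
    t.start()
for t in tasks:
    t.join()
```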
That data centre can also be host to other processes, from other vendors.
There might be a transcoder operation, for instance, to take broadcast content and prepare it for online and mobile delivery.
In a virtualised world, these very different tasks co-exist, sharing resources and getting the job done without any manual intervention. But there will be times when one or other task needs more resources than usual.
By mutual agreement, the process under pressure takes more hardware resources, with the other slowing or even stopping until the busy period passes.
This saves capital cost, energy and cooling because there are no processors sitting idle on standby. More importantly, though, it creates huge flexibility to meet peak demands.
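A simplified sketch of that give-and-take, in the same vein as the sketch above: a low-priority transcode backs off while a live task signals peak demand (the tasks, timings and priorities are hypothetical):

```python
import threading
import time

peak_demand = threading.Event()   # set while the urgent task needs the capacity

def live_playout_burst() -> None:
    """Urgent task: claims the shared capacity for a busy period, then releases it."""
    peak_demand.set()
    print("playout: taking extra capacity for the live peak")
    time.sleep(1.0)               # stand-in for the busy period
    peak_demand.clear()
    print("playout: peak over, capacity released")

def background_transcode(clips: int) -> None:
    """Low-priority task: pauses whenever peak demand is signalled."""
    for clip in range(1, clips + 1):
        while peak_demand.is_set():
            time.sleep(0.1)       # slow or stop until the busy period passes
        time.sleep(0.2)           # stand-in for transcoding one clip
        print(f"transcode: finished clip {clip}")

worker = threading.Thread(target=background_transcode, args=(8,))
worker.start()
time.sleep(0.5)                   # let a few clips through, then simulate a peak
live_playout_burst()
worker.join()
```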
Software-defined architecture
This is the essence of the software-defined architecture. Processes run concurrently, taking the resources they need at the time. When the data centre runs close to capacity, you simply buy more standardised components, which can take on any task at any time.
Workflows are no longer defined by physical architectures. You do not move projects from device to device, getting delayed whenever a particular black box is busy. The content and its metadata sit in one place, the data centre, and the processes in the workflow morph around them. There is a logical architecture, but it is defined by the business rules and technical requirements you dictate.
Those rules and requirements will grow over time. Broadcasters today may be looking at Ultra HD, for example, but be unclear about whether that means 4K or higher resolutions, extended colour gamut and high dynamic range, higher frame rates or some combination of them all. In a software-defined architecture, you can add new functionality by changing the business rules and updating some parameter tables.
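As a sketch of what changing the business rules and updating some parameter tables could look like in practice (the format names, fields and channels below are hypothetical):

```python
# Hypothetical parameter table: each delivery format is just data, so adding an
# Ultra HD variant is a table entry, not a new box in the machine room.
DELIVERY_FORMATS = {
    "hd-broadcast":  {"resolution": (1920, 1080), "frame_rate": 25, "hdr": False},
    "uhd-broadcast": {"resolution": (3840, 2160), "frame_rate": 50, "hdr": True},
    "mobile":        {"resolution": (1280, 720),  "frame_rate": 25, "hdr": False},
}

# Hypothetical business rules: which formats each output must be prepared in.
CHANNEL_RULES = {
    "main-channel": ["hd-broadcast", "uhd-broadcast"],
    "catch-up":     ["mobile"],
}

def jobs_for(channel: str) -> list:
    """Expand a channel's business rules into the processing jobs to be run."""
    return [{"channel": channel, "format": name, **DELIVERY_FORMATS[name]}
            for name in CHANNEL_RULES[channel]]

for job in jobs_for("main-channel"):
    print(job)
```

Adding a higher frame rate or an HDR variant then becomes an edit to the tables above, picked up by the same software running in the same data centre.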
This idealised view of software-defined architecture depends on each individual software product having flexibility built in. Each package needs to be scalable, extensible and sustainable.
Given all that, our technical capabilities will not be baked in, and the way we work will not be dictated by the necessity to pass content from one device to another. Instead, we can define the architecture we need, and if demands change we can redefine it.
In turn, that means that the current round of technology development and deployment could be the last we will ever need to undertake. Processors will get more powerful, but we can simply pull one set of cards out of the data centre and put new ones in to gain the benefit. The demands on processing, and possibly the leading broadcast vendors, will inevitably change, but all we need do is load new software into our standardised environment to maintain best-of-breed performance.
For efficiency, power, flexibility and control, I believe we should accept the disruptive challenge of the software-defined architecture.
James Gilbert is CEO of Pixel Power.