Terms like ‘cloud-native’ and ‘cloud-enabled’ are being used increasingly within broadcast, often interchangeably by people who assume they mean the same thing. Glodina Lostanlen explains the difference while offering greater clarity on the recent developments in cloud infrastructure
The last few years have seen constant change in the philosophy of broadcast and media architectures. Where once we relied on specialist hardware and SDI for connectivity – because it was the only way to deliver the performance we needed – media companies now recognise commercial off-the-shelf (COTS) IT hardware and IP connectivity as their future destinations, and many are already working on a graceful transition of their operations to next-generation architectures.
In large part, we can do this because we benefit from the much larger IT industry and its colossal investment in R&D. That investment has kept the industry broadly in step with Moore’s Law, the observation that transistor counts, and with them processing power, roughly double every two years. Couple that with vastly improved data connectivity, and it is clear that we are ready to move to new architectures.
What is important is that this be a managed transition, not a headlong rush. There is no need to throw out traditional broadcast hardware if it is still doing the job and it is not life-expired. But equally, if you are looking for future investment, then it is important to understand not just the shift to software-defined products and IP connectivity, but also the new options in how your infrastructure is hosted, and indeed financed.
The traditional architecture of broadcasting had to be on-site, so every broadcast facility had a machine room or a central apparatus room. There was no alternative.
The new paradigm sees most broadcast functionality implemented in software running on standard computing architectures. It is perfectly possible to host this software-defined functionality as a single-purpose appliance – a box, like traditional hardware – and put it in a rack on your premises. During the transition, a lot of systems will certainly grow like this. If the device is providing a function which is in constant use, then this may be the future solution, too.
But increasingly, we are identifying functionality that is important to the capabilities of the media enterprise, or that extends those capabilities, but which is only needed some of the time. Such functionality is ideal for virtualisation: it can be run as required, as a virtual machine, on shared hardware in a data centre.
On a simplistic level, virtualisation is seen as the route to the cloud, and indeed it is a critical design requirement. But the cloud, with its effectively limitless resources, offers further capabilities than simple virtualisation in your data centre.
You hear people talking about ‘cloud-native’ and ‘cloud-enabled’, and you could be fooled into thinking they are the same, simply other ways of saying ‘virtualised’. But they are very different propositions, and you need to be clear about their precise meanings.
A cloud-enabled application is a software-in-a-box appliance whose software has been extracted and adapted to run on a virtual machine rather than a physical one. Such applications can of course run in the cloud, but they struggle to benefit from the near-limitless scalability and other inherent capabilities of a geo-dispersed environment. Cloud-native applications, on the other hand, are written with the cloud in mind, ready to seize all its opportunities.
And while software-in-a-box applications may not get the best out of the cloud, the converse is not the case. Good cloud-native software performs extremely well in local virtualisation or in dedicated devices.
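The practical difference comes down to where state lives. The sketch below is purely illustrative (the class and job names are invented for this article, not any vendor’s API): a cloud-enabled port keeps job state inside one instance, so requests are tied to that instance, while a cloud-native design externalises state, so any instance can serve any request, which is what lets the cloud scale instances up and down freely.

```python
class CloudEnabledTranscoder:
    """Appliance software moved onto a VM: job state lives in this
    instance's own memory, so the instance cannot be replaced or
    scaled out mid-job."""
    def __init__(self):
        self.jobs = {}  # state trapped inside this one instance

    def submit(self, job_id, source):
        self.jobs[job_id] = {"source": source, "status": "queued"}

    def status(self, job_id):
        return self.jobs[job_id]["status"]


class CloudNativeTranscoder:
    """Cloud-native style: all state lives in a shared store (a plain
    dict here, standing in for a managed database), so instances are
    stateless and interchangeable."""
    def __init__(self, shared_store):
        self.store = shared_store  # external state, shared by all instances

    def submit(self, job_id, source):
        self.store[job_id] = {"source": source, "status": "queued"}

    def status(self, job_id):
        return self.store[job_id]["status"]


store = {}                        # stands in for a shared, durable store
a = CloudNativeTranscoder(store)
b = CloudNativeTranscoder(store)  # a second, interchangeable instance
a.submit("job-1", "s3://bucket/master.mxf")
print(b.status("job-1"))          # any instance can answer: queued
```

Because the cloud-native instances hold no state of their own, the platform can add or retire them at will; the cloud-enabled design forces every request for a job back to the instance that accepted it.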
When designing a broadcast infrastructure, engineers have always adopted a best-of-breed approach, or at least a best fit to requirements and budget. A recent IABM survey of broadcasters found that 80% still regard best-of-breed as important in their decision-making.
The nature of the cloud allows product designers to move away from monolithic devices and break their offering down into much smaller applications. These applications can be called as necessary, released when finished. Because they each provide a small component of a workflow, rather than a major block, they are known as microservices.
The elasticity of the cloud makes it ideal for microservices. You define in a workflow precisely what you need to achieve; the appropriate microservices are called; the necessary processing power is reserved. The user, whether designing or implementing a workflow, does not need to know what resources are required, just to have the confidence that the cloud will scale to provide them.
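That separation of concerns can be sketched in a few lines. This is a minimal illustration, with invented service names standing in for real media microservices: the workflow is expressed purely as a sequence of named steps, and each step is resolved to a service at run time. In a real cloud deployment the platform, not the user, decides how many instances of each service to provision.

```python
# Invented stand-ins for real media microservices: each takes a media
# reference and returns a transformed one.
SERVICES = {
    "ingest":    lambda media: f"ingested({media})",
    "transcode": lambda media: f"transcoded({media})",
    "package":   lambda media: f"packaged({media})",
}

def run_workflow(steps, media):
    """Call each named microservice in turn, passing the output of one
    step as the input of the next. The workflow author names *what* must
    happen; resourcing is left to the platform."""
    for step in steps:
        media = SERVICES[step](media)
    return media

result = run_workflow(["ingest", "transcode", "package"], "master.mxf")
print(result)  # packaged(transcoded(ingested(master.mxf)))
```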
An important question to ask when you are evaluating whether an offering from a vendor is genuinely cloud-native is whether its internal design is based on microservices. If it is, then you can have a degree of confidence that it will not only perform well in the cloud, but that it will perform efficiently.
And to bring us back to the best-of-breed point, in a well-designed architecture the microservices need not come from a single vendor. An open source solution even encourages engineers to develop their own microservices, if the specific functionality they need is not readily available.
The microservices approach to software development increases the efficiency as well as the power and flexibility of an application. Each part of the process is now effectively a separate unit, which can be upgraded or replaced if your requirements change.
If, say, you need to move from H.264 to HEVC encoding, in the traditional approach you unbolted a big box labelled ‘encoder’ from the rack and – having justified the capital expenditure – you bolted a new one in. In a cloud-native microservices architecture, you simply update the encoder application with the new codec.
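The encoder swap described above rests on a simple design principle: the rest of the workflow talks to the encoder service, not to a specific codec. A hedged sketch, with hypothetical names and codec functions that merely tag their output rather than perform real encoding:

```python
# Hypothetical codec implementations: stand-ins that label the output
# rather than actually compressing video.
def h264_encode(frame):
    return f"h264({frame})"

def hevc_encode(frame):
    return f"hevc({frame})"

class EncoderService:
    """The workflow depends only on this service's interface, so the
    codec behind it can be replaced without touching the rest of the
    chain -- the microservices equivalent of swapping the box."""
    def __init__(self, codec):
        self.codec = codec

    def encode(self, frames):
        return [self.codec(f) for f in frames]

encoder = EncoderService(h264_encode)
print(encoder.encode(["f1"]))   # ['h264(f1)']

encoder.codec = hevc_encode     # the 'update': swap the codec in place
print(encoder.encode(["f1"]))   # ['hevc(f1)']
```

The update touches one component; everything upstream and downstream of the encoder is unchanged, which is why the same move needed a capital project in the hardware world.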
This also plays into another of the key shifts from traditional to software architectures. Using broadcast hardware, systems were defined, implemented, installed and then largely left untouched for seven or ten years. In software, we expect continuous improvement and regular updates, adding new functionality and boosting performance. Again, by compartmentalising the architecture into microservices, such continuous improvement can happen at the modular level, minimising risk and making improvements immediately available.
One of the biggest benefits put forward for cloud infrastructures is the ability to try new things – to deliver to new devices, to launch pop-up channels, to experiment with ultra HD (UHD). Being able to instantly start services in the cloud also makes it ideal for disaster recovery; media companies on multiple continents are working with Imagine Communications to build a business continuity playout centre in the cloud.
What makes these applications practical is the ability to spin up the right microservices to achieve your goal, as you need them. You only pay for the processor cycles you use in the cloud, so the solution is both highly cost-efficient and directly tied to the usage of the functionality. In addition, and despite some assertions to the contrary, microservices-based solutions are able to deliver deterministic service.
Microservices design is now an essential discipline for software developers and engineers, and understanding is growing among systems designers and integrators. The other group which needs to understand the concept, if not the detail, is senior management. Without microservices, the true efficiency and flexibility of the cloud can never be achieved.
In the Focus Forward 2017 Media & Entertainment Industry Survey, sponsored by Imagine Communications, improved agility was identified as the top benefit of cloud-based services by more than 60% of respondents. The IABM survey of broadcasters found that 85% of those responding thought they would be using the cloud in the next two to three years, with 28% already doing it.
But the cloud is not a magic cure-all. Only with the right software products, cloud-native and using microservices, will the benefits of flexibility, scalability and efficiency be fully realised.
Glodina Lostanlen is CMO at Imagine Communications.