The key to designing a facility that can successfully implement an IP-based system lies in fully understanding the technology, says Chuck Meyer
Video signals need bridges to, well, get to the other side. However, the “other side” is less clearly defined than ever. Any signal traffic must be able to enter, or exit, the bridge at any point. Moreover, existing traffic lanes traversing those bridges are already heavily congested, with the near-term prospect of huge additional volumes, each carrying heavy loads, poised to descend upon those same creaking infrastructures.
It’s a lot to get your head around.
The situation has given rise to a great deal of debate over how best to move large video signals – we’re talking current HD standards as well as the advent of 4K UHDTV and beyond – across existing and planned infrastructures, and how to expedite that movement, unencumbered, in any direction.
An increasing number of voices are postulating that IP technology is the way forward (as well as up, down, across and around) but others are more circumspect, not least because they don’t quite know where to find the onramp. That’s, in part, because the IP onramp can be anywhere you want it to be, which is actually one of its numerous advantages. It is also one of the reasons it should be approached with caution – and an experienced guide – if you’ve never been there before.
Let me explain the IP technology transition path, the onramp if you will, for broadcast television production.
We all know that Ethernet and IP underpin the internet and, let’s be honest, form one of the most disruptive technologies in the history of mankind. Insatiable consumer demand for content continues to fuel the development of ever faster networks. As it stands today, the bandwidth required for data networking equals – and in many cases, exceeds – the requirements for full bandwidth, real-time video. It’s like funneling rush hour traffic onto a suburban street.
It can be argued that SMPTE SDI signals are antiquated, inflexible and difficult to repurpose, and for live video production, the aggregate bandwidth required to move those signals far exceeds that which can be affordably managed with Ethernet and IP. However, assuming that Moore’s Law holds up in broadcast applications, Ethernet and IP, or variants thereof on the horizon, should remove those limitations. It will happen soon but, unfortunately, not overnight.
In the meantime, the key to designing a facility that can successfully implement an IP-based, packet-video approach lies in fully understanding IP technology, which is directly related to the desired workflows and ultimate purpose of the facility. (Although the two terms are used interchangeably, “packet video” is used here in preference to “IP” to avoid confusion between networks and protocols.)
Another essential consideration for most businesses is capital equipment preservation. New facilities invariably incorporate legacy equipment, or more accurately, new facilities tend to be installed on top of an existing infrastructure. When making a phased transition to IP, decisions about how best to merge SDI with IP, particularly for production applications, require a thorough understanding of the technology behind IP, not least because there are differing standards and models based on what the end user of each facility wants to achieve.
Once you have done your homework and fully understand not only what IP can do in general but what it can do in highly specific terms for your application, you are ready to start the IP transition.
Entering the Onramp
Data network technology has advanced to the point where packet-video facility infrastructure can now be realistically considered based on channel bandwidth and workflow. SMPTE 2022-6, the standard for transporting uncompressed video encapsulation over IP/Ethernet, has been successfully demonstrated and, by doing so, strengthens the case for making the transition from SDI to packets. While encapsulation alone does not address all networking issues, it does provide levels of flexibility, extensibility and interoperability not otherwise available with existing SDI baseband video standards.
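To make that encapsulation overhead concrete, here is a minimal sketch of the arithmetic. The 1376-byte media payload per datagram and the individual header sizes are assumptions drawn from typical SMPTE 2022-6 / RTP / UDP / IPv4 / Ethernet framing, not figures from the standard text itself:

```python
# Rough estimate of the Ethernet line rate needed to carry one
# uncompressed HD-SDI stream encapsulated per SMPTE 2022-6.
# Payload and header sizes below are assumptions based on typical
# 2022-6 / RTP / UDP / IPv4 / Ethernet framing.

HD_SDI_BPS = 1.485e9          # nominal HD-SDI serial rate, bits/s
MEDIA_PAYLOAD = 1376          # media bytes per 2022-6 datagram (assumed)

HEADERS = {
    "HBRMT payload header": 8,        # 2022-6 media header (assumed)
    "RTP": 12,
    "UDP": 8,
    "IPv4": 20,
    "Ethernet header + FCS": 18,
    "preamble + interframe gap": 20,
}

def line_rate(media_bps: float, payload: int, header_bytes: int) -> float:
    """Scale the media rate by the per-packet framing overhead."""
    packets_per_second = media_bps / (payload * 8)
    return packets_per_second * (payload + header_bytes) * 8

overhead = sum(HEADERS.values())
rate = line_rate(HD_SDI_BPS, MEDIA_PAYLOAD, overhead)
print(f"framing overhead per packet: {overhead} bytes")
print(f"Ethernet line rate per HD stream: {rate / 1e9:.3f} Gbps")
print(f"overhead vs raw SDI: {100 * (rate / HD_SDI_BPS - 1):.2f}%")
```

Under these assumptions a 1.485 Gbps HD-SDI signal becomes roughly a 1.58 Gbps Ethernet flow, an overhead in the region of six per cent – modest per stream, but it compounds quickly at high signal counts.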
Based on video signal count and workflow, overhead costs associated with encapsulation can currently be prohibitive for HD-SDI, and 4K UHDTV data rates could potentially push costs even higher. However, those overhead costs are falling and will continue to do so, which will steadily remove the cost barrier.
All of this means that there are now demonstrably effective options for successfully implementing packet video technology within certain aspects of a facility to make a phased transition to packet video. I hasten to add that this is a transition, not an overnight sensation.
There are still some limitations that must be factored into the transition decision-making, but a phased approach is a very logical place to start.
That transition begins with an understanding of current and projected bridge support – the Ethernet, and the traffic lanes it provides.
Ethernet ports are composed of multiple lanes, each operating at a common data rate. Ten lanes at 10 Gbps each, providing a 100 Gbps Ethernet port, is the norm today. Those lanes can be carried over copper cable, although they are increasingly carried over more efficient fibre-optic strands.
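The lane arithmetic, and what it implies for video capacity, can be sketched as follows. The stream rates are nominal SMPTE serial rates, and the 1.0625 factor is an assumed allowance for IP/Ethernet encapsulation overhead:

```python
# Sketch: Ethernet port capacity as lanes x per-lane rate, and how many
# uncompressed video streams fit in one port. Stream rates are nominal
# SMPTE serial rates; the 1.0625 factor is an assumed encapsulation
# overhead, not a standardised figure.

def port_gbps(lanes: int, lane_gbps: float) -> float:
    """Aggregate port rate from identical parallel lanes."""
    return lanes * lane_gbps

STREAM_GBPS = {
    "HD-SDI": 1.485,
    "3G-SDI": 2.970,
    "quad-link 4K UHD": 4 * 2.970,
}

OVERHEAD = 1.0625  # assumed IP/Ethernet encapsulation factor

port = port_gbps(10, 10.0)  # today's norm: 10 lanes at 10 Gbps = 100 GbE
for name, gbps in STREAM_GBPS.items():
    streams = int(port // (gbps * OVERHEAD))
    print(f"{name}: {streams} streams per 100 GbE port")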
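```

On these assumptions a single 100 GbE port carries on the order of sixty encapsulated HD streams, but only a handful of quad-link 4K signals, which is why the roadmap toward faster ports matters so much for UHDTV.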
The Ethernet Alliance and the IEEE 802 task force currently project bandwidth requirements for core networks and network access to reach 1,000 Gbps and 40 Gbps respectively by the year 2015. Based on current technology, however, delivering services at those bandwidths by 2015 is unlikely.
Instead, 400 Gbps for core networks and 40 Gbps for network access are the current targets for industry standardisation activity, and both are almost certainly achievable by 2017.
Both advances will open up a vast array of video transport possibilities, which is why it is essential to start planning for them now.
The Challenge is Control
Audio, video and data traffic at high volumes require an extraordinary amount of control. Standardisation helps, but control is also a function of equipment design. Most current switches have limitations and cannot readily expand to accommodate more traffic, which means they can frequently become congested and, ultimately, blocked.
What solves the problem, as higher-volume packet-video traffic arrives, is optimised timing of the switching – “traffic control”, if you will – within a router. Major advances in switch timing optimisation within a router are expected to be announced soon.
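The congestion problem itself is simple arithmetic: when the offered load from all ingress ports exceeds what the switch fabric can carry, packets queue and are eventually dropped. A minimal sketch, using purely illustrative figures rather than any vendor’s specification:

```python
# Sketch: a simple oversubscription check for a video-over-IP switch.
# If aggregate ingress traffic exceeds fabric capacity, packets queue
# and are eventually dropped -- the "blocking" described above.
# All figures are illustrative assumptions.

def oversubscription(ingress_gbps: list, fabric_gbps: float) -> float:
    """Ratio of offered load to fabric capacity; > 1.0 means congestion."""
    return sum(ingress_gbps) / fabric_gbps

# 40 encapsulated HD streams at roughly 1.58 Gbps each, offered to a
# fabric that can switch 60 Gbps in aggregate (hypothetical numbers).
feeds = [1.58] * 40
ratio = oversubscription(feeds, fabric_gbps=60.0)
print(f"load/capacity = {ratio:.2f}")  # a ratio above 1.0 means blocking
```

Optimised switch timing cannot raise that ceiling, but it determines how gracefully the fabric behaves as the ratio approaches it.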
Packet video is already here and, as the technology advances, will eventually leave SDI parked in the garage. The emergence of reliable national and international IP networks will create a cost-effective, open-architecture option for transporting real-time, uncompressed video over long distances. But the transition to packet video is currently hindered by capital equipment budget cycles and the limited availability of fully standardised formats, hardware and software. Although some of the standards I’ve mentioned are new or in the late stages of ratification, some of the key technologies needed to implement them are not yet commercially available, or cost effective. By planning ahead for the transition, however, and using equipment designed to bridge the gap between currently installed capital assets and the packet-video future, facilities can adapt their business models over time to get, and stay, in the fast lane.
This is why it’s important for facilities to start putting the right technology in place now, so they’ll be able to cross the packet-video bridge when they come to it.
Chuck Meyer is Chief Technology Officer, Core Products at Miranda Technologies.