Scalability and efficiency have become defining themes as cloud transforms the way media companies run operations, delivering speed, maintaining robust protection in the face of unforeseen challenges and doing away with heavy upfront investment. Amid all of that, AI leads a disruptive charge, springboarding ideas, trends and patterns that augment this dynamic space. Vijaya Cherian and Kalyani Gopinath bring you discussions from a roundtable BroadcastPro ME organised with the Alibaba Cloud team.
How do businesses safeguard against disruption? Are cloud strategies sustainable? Can AI-driven innovations guarantee business continuity? And what are disaster recovery (DR) best practices?
Aus Alzubaidi, CISO, Director of IT and Media, MBC Group; Eyad AlDwaik, Director of Engineering Operations, Intigral; Dr Naser Refat, former CTO, Rotana Media Group; Manish Kapoor, EVP – IT and Broadcast, Zee Entertainment Middle East; Melvin Saldanha, VP Technology and Products, OSN; Prasanjit Singh, Principal Architect – Enterprise Architecture and AI Practice, StarzPlay; and Sherif Zaidan, CTO, BusiNext, exchanged views at a cloud discussion moderated by James Wang, Country Manager of United Arab Emirates, Alibaba Cloud Intelligence.
In an industry vulnerable to change, media companies are recalibrating their cloud migration pathways, moving organisational IT infrastructure to the cloud in part or in whole.
As one of the early adopters of cloud, OSN found that moving petabytes of content wasn’t easy, began Melvin Saldanha, its VP of Technology and Products. The company realised early on that a hybrid model works.
“The base principle was to keep it in two different places – on-prem and in the cloud. It is debatable how much you want to do in the cloud and how much you want to do on-prem; we have a mix, and it works for us. We modernised as we moved, and currently our workflows are running in a microservice architecture using Docker, containerisation and Kubernetes, so we can move between clouds if need be.”
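That portability claim rests on declaring a workload once and applying the same deployment to a Kubernetes cluster on any cloud simply by pointing at a different cluster context. A minimal sketch using the official Kubernetes Python client; the context name, namespace, image and resource figures are illustrative assumptions, not OSN’s actual configuration.

```python
from kubernetes import client, config

# Any cluster context works here -- on-prem, AWS, Alibaba Cloud, etc.
config.load_kube_config(context="cloud-a-prod")
apps = client.AppsV1Api()

# A hypothetical transcoding microservice declared once, deployable anywhere.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="transcode-worker"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "transcode-worker"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "transcode-worker"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="transcoder",
                    image="registry.example.com/transcoder:1.4.2",
                    resources=client.V1ResourceRequirements(
                        requests={"cpu": "2", "memory": "4Gi"}),
                )
            ]),
        ),
    ),
)

# The identical call against a different context moves the workload to another cloud.
apps.create_namespaced_deployment(namespace="media", body=deployment)
```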
Set against the advantages of moving off legacy IT, including cost savings and scalability, the complexity of a cloud migration spans availability, multi-regional challenges, media services that lack ready-made options and must be built and maintained as bespoke solutions, and security. When an outage can impact the globe, security is paramount.
Cloud computing can often be viewed as utilising someone else’s data centre. Typically, system outages are not due to the infrastructure or architecture itself, but rather because the services operate on a public cloud model. This model introduces shared resources and multi-tenancy, which can lead to vulnerabilities and dependencies that may not be present in private or dedicated environments.
“Soon after the CrowdStrike outage, another incident hit the globe. It was thought to be a DDoS attack; however, the provider’s data centre had suffered a cooling failure. They shut down a lot of systems and routed traffic elsewhere, but bringing it back took a couple of hours. So, in spite of a top cloud infrastructure, we were at the mercy of a cloud provider,” stated Saldanha.
Multi-CDN vendors are an option, though they can be complicated, especially when the CDN is the primary source supplier, as switchovers can have significant impacts. Content distributors aim to be a single point in the cloud from which they can distribute content across multiple locations. Multi-CDN solutions offer resilience in terms of risk management, help mitigate vendor lock-in, and provide control over pricing.
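The switchover logic behind a multi-CDN set-up is typically a weighted, health-checked choice made when the playback manifest is requested. A minimal sketch of that idea in Python; the CDN names, URLs, weights and health-check paths are hypothetical, and real deployments add per-region rules, session stickiness and commercial traffic commitments.

```python
import random
import urllib.request

# Hypothetical CDN endpoints and weights -- purely illustrative.
CDNS = [
    {"name": "primary-cdn", "base_url": "https://cdn1.example.com", "weight": 0.7},
    {"name": "backup-cdn",  "base_url": "https://cdn2.example.com", "weight": 0.3},
]

def healthy(cdn: dict, timeout: float = 2.0) -> bool:
    """Probe a lightweight health object on the CDN edge."""
    try:
        with urllib.request.urlopen(f"{cdn['base_url']}/health.txt", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def pick_cdn() -> dict:
    """Weighted pick among healthy CDNs; fall back to the primary if none respond."""
    candidates = [c for c in CDNS if healthy(c)]
    if not candidates:
        return CDNS[0]
    r = random.uniform(0, sum(c["weight"] for c in candidates))
    for c in candidates:
        r -= c["weight"]
        if r <= 0:
            return c
    return candidates[-1]

def manifest_url(asset_path: str) -> str:
    """Rewrite a playback manifest URL onto the chosen CDN."""
    return f"{pick_cdn()['base_url']}{asset_path}"

print(manifest_url("/vod/show-101/master.m3u8"))
```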
“Hosting data on a local data centre reduces egress costs. Aside from scalability and the ability to implement AI and ML features dynamically, the primary driver for us was decentralising operations. This new architecture enabled us to launch new services within days or hours instead of months,” said Aus Alzubaidi, CISO, Director of IT and Media Management, MBC Group.
MBC Group operates a complex hybrid cloud environment with workloads spread across multiple cloud hyperscalers, ensuring both efficiency and a fit-for-purpose set-up.
“Relying on a single cloud provider creates a complete monopoly, which limits flexibility and control. The future of infrastructure cannot be solely on-premise or entirely cloud-native; it must embrace hybrid multi-cloud environments to enable full flexibility, innovation agility and cost optimisation. While outages and downtimes are inevitable, having the right controls in place can reduce their impact to a bare minimum. For instance, most of our TV channels are currently on-premise, with the exception of FAST.”
Factoring in profitability is key to a company’s cloud journey. Comparing cloud with on-prem is not about a server on the rack versus a virtual machine in the cloud. Racks running server zones require a lot of power and cooling, which means holding the temperature inside at 17 degrees when it is 50 degrees outside. Before cloud, each company had to build its own tier-four data centre, adding to total cost of ownership and translating to more people, resources, maintenance and systems. “The moment you are in public cloud, you secure yourself in terms of DR, given cloud offers multi-region availability,” said Saldanha.
“We can’t fully rely on being cloud-native,” Alzubaidi explained. “For 24/7 news operations and sports, the combination of technology, total cost of ownership and last-mile connectivity isn’t quite there yet. We’ve tested it with our own POCs and POVs, and while it works for FAST, managing complex transmissions and multiple contributions makes it challenging to ensure consistent reliability and avoid latency.”
Manish Kapoor, Zee Entertainment Middle East EVP – IT and Broadcast, agreed. Receiving about 300 hours of content daily from a large number of studios for different channels, Zee has built all its systems on-premises, given the nature of its hi-res editing.
“Our data centres are outsourced; we use cloud for OTT content storage, archiving and DR,” he said. “An important part of broadcast is syndicating content, which is roughly about two to three thousand hours of transcoding a day. For that kind of distribution from your content library to third parties and customers, the cloud is not ready. When you’re editing, you need to be as real as possible, and with cloud edit there’s a mix of latency, codec, multitrack audio and discrete tracks being done in real time.”
Dr Naser Refat predicted a gradual shift towards cloud-based solutions, based on feasibility. “As content creators on FAST, security is our primary concern with RAW 4K or 8K film material. We currently utilise on-premise solutions to maintain control over security and distribution.”
Having multiple clouds is a good thing, but it is important to strike the right balance between various cloud environments and mechanisms, to establish an architecture that supports a company’s workflows.
“From the outset, StarzPlay adopted a strategy of not putting all our eggs in one basket when it came to cloud infrastructure,” said Prasanjit Singh, Principal Architect – Enterprise Architecture and AI Practice, StarzPlay. “This approach led us to adopt multiple clouds as well as custom-built platforms that we developed in-house. We never used on-prem infrastructure, except for our edge servers and custom-built CDNs in countries where we lacked cloud service provider coverage or where network connectivity was subpar. This allowed us to maintain flexibility, resilience and optimal performance across regions with varying infrastructure capabilities.
“Born in the cloud, our engineering strategically explored multiple cloud providers to optimise both costs and efficiency. We segmented our environments by leveraging provider-specific strengths such as pricing discounts, data analytics, AI capabilities or regional coverage. Although this introduces certain challenges, particularly around complexity, it has proven invaluable for scalability, especially in the context of live streaming. Each time we integrate a new cloud provider, our platform teams are tasked with mastering the provider’s distinct services and terminologies, ensuring seamless deployment and management across the multi-cloud ecosystem.
“While most core functions are consistent across cloud providers, the key differentiator lies in the specific terminologies and nuances of their services. To avoid vendor lock-in, we strategically steer clear of proprietary solutions in favour of more portable, open technologies, from container orchestrators like Kubernetes to AI frameworks like TensorFlow. This ensures our applications remain consistent, irrespective of the underlying cloud infrastructure. We also prioritise hosting infrastructure and pipelines as code, which not only simplifies maintenance but also enhances the portability and scalability of our deployments.
“Our core engineering team primarily operates within one cloud environment for day-to-day operations, while a secondary cloud provider acts as a hot standby. This allows us to rapidly restore infrastructure from a frozen state using pre-built scripts in the event of a failure. Although we’re still refining this process, the end goal is a robust failover architecture that seamlessly shifts traffic across clouds during critical outages.”
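The hot-standby pattern Singh describes boils down to a watchdog that counts consecutive health-check failures on the primary cloud and then runs pre-built infrastructure scripts against the secondary before shifting traffic. A hedged sketch of that control loop; the health URL, script names and thresholds are assumptions for illustration, not StarzPlay’s actual failover tooling.

```python
import subprocess
import time
import urllib.request

# Illustrative endpoint and scripts -- not a real company's tooling.
PRIMARY_HEALTH_URL = "https://api.primary-cloud.example.com/healthz"
FAILOVER_STEPS = [
    ["./scripts/thaw_standby_infra.sh"],    # pre-built IaC to bring the frozen standby up
    ["./scripts/switch_dns_to_standby.sh"], # shift traffic once the standby is serving
]
FAILURES_BEFORE_FAILOVER = 3

def primary_healthy(timeout: float = 3.0) -> bool:
    """Return True if the primary cloud's health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(PRIMARY_HEALTH_URL, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def main() -> None:
    consecutive_failures = 0
    while True:
        if primary_healthy():
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures >= FAILURES_BEFORE_FAILOVER:
                for cmd in FAILOVER_STEPS:
                    subprocess.run(cmd, check=True)  # abort the sequence if any step fails
                break
        time.sleep(30)  # poll every 30 seconds

if __name__ == "__main__":
    main()
```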
Eyad AlDwaik, Director of Engineering Operations, Intigral, said, “Going to cloud with a huge static workload is expensive when talking about 200-plus channels. When you have an event-based requirement, it doesn’t make sense to build it in-house if it’s available in cloud. With on-premises CDN, we utilise both our edges within the STC network and third-party providers. This ensures lower cost for content delivery in our primary market with the high availability and wider reach of third-party CDNs, noting that successful execution depends on a well-planned architecture and having the right resources.”
Singh added: “As an OTT company, our priority is delivering top-tier streaming services, not just building cloud platforms. We prefer leveraging available technology that can reliably support our operations without downtime, allowing us to focus on enhancing the viewer experience.”
While transitioning to the cloud offers clear advantages, latency continues to be a major challenge. “Given that we cover multiple regions and continents, if we have a reporter in one location and the cloud resource is hosted in another distant region, there’s no way to completely avoid latency unless there’s a cloud region within that same country,” Alzubaidi explained.
From an implementation perspective, the future is heading towards complete cloud adoption, except for tasks like studio work that may pose challenges. However, what promises to be truly disruptive is generalised AI, said Sherif Zaidan, CTO of BusiNext. “We’re developing generalised AI from the ground up, not just as a code generation tool but to enhance customer experiences with highly personalised and interactive capabilities.”
“It’s shaping our future, impacting personalisation, life and events,” agreed Refat. “But technology is merely an enabler. Without naming any specific cloud provider, I can say that no single provider has all the solutions.”
Subtitling and dubbing are among the basic AI innovations that media houses employ. Today, companies use AI to moderate broadcast, OTT and user-generated content, from compliance through to ad sales, opening multiple streams of opportunity. It helps improve recommendation and personalisation, but harvesting all of that requires significant cloud capacity and processing.
“The problem with using AI for content compliance is you cannot sue it for an error,” said AlDwaik. “We looked into a custom-built application for censorship where the AI learned from our censorship library and censorship history, but eventually somebody had to review it before it went on-air, so we dropped the idea.”
Cloud providers work with companies to identify objectionable or non-compliant content and, with periodic censorship reviews and regular feedback, optimise a model that gradually reduces the manpower required by nearly 95%.
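In practice, that feedback loop is usually a confidence-threshold router: the model clears or rejects the clear-cut cases automatically, only the uncertain middle band goes to human censors, and their verdicts become training labels for the next cycle. A minimal sketch of the routing step, with hypothetical thresholds and asset IDs.

```python
from dataclasses import dataclass

# Thresholds are illustrative; real values come from periodic review of model accuracy.
AUTO_APPROVE_BELOW = 0.10   # flag score below which content is cleared automatically
AUTO_REJECT_ABOVE = 0.95    # flag score above which content is pulled automatically

@dataclass
class ModerationResult:
    asset_id: str
    flag_score: float        # model's confidence that the asset breaches compliance rules
    decision: str

def route(asset_id: str, flag_score: float) -> ModerationResult:
    """Send only the uncertain middle band to human censors."""
    if flag_score < AUTO_APPROVE_BELOW:
        decision = "auto-approve"
    elif flag_score > AUTO_REJECT_ABOVE:
        decision = "auto-reject"
    else:
        decision = "human-review"
    return ModerationResult(asset_id, flag_score, decision)

# Reviewer verdicts on the middle band are fed back as labels for the next training
# cycle, which is how the share of assets needing manual review shrinks over time.
print(route("EP-2041", 0.03))   # cleared without a human
print(route("EP-2042", 0.42))   # goes to the censorship team
```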
“There are cases where AI is not yet ready, but we use a term called ‘commercial-ready’ which indicates the accuracy has reached a level acceptable for commercial use,” said Alzubaidi. “AI algorithms are designed to drive attention and engagement with content, which can lead to polarisation. Large media houses have a responsibility to regulate how AI is used and the impact of the content it promotes.”
AlDwaik added: “When AI begins to generate most of the content, human input used for training diminishes and AI may start learning from synthetic data rather than real-world data. Over time this can lead to decreased accuracy and bias, known as model collapse. This synthetic data can make the resulting insights less representative of reality.”
Which leads to the philosophical question: what is reality?
“We define reality based on what is human. But when everything is generated by AI, maybe that’s reality,” countered Zaidan.
AI is only going to get better, the group concurred, agreeing that there are two ways of viewing it – from a limitation standpoint and from a potential one: finding opportunities and evolution, and saving time by helping optimise workflows. “Apple and Google’s machine learning algorithms collect data from each of the apps you use to give better recommendations, effectively bringing the app world into one super app to cater to a person’s hyper-personalised needs,” said Saldanha.
AI helps with the big data, added Kapoor. “Earlier, when we were displaying channels, the data was not coming back. Now, thanks to AI, it comes back fast, and we at Zee are learning to build that database – learning when we have maximum peaks, when people are watching particular content and so on. It’s like talking to your viewers.”
“The next few years will witness an unprecedented surge in AI automation,” predicted Singh. “Reflecting on recent advancements, it’s astonishing how rapidly AI agents have evolved. These breakthroughs have not only met but surpassed expectations, transforming industries and reshaping the future. And the trend will prevail!”
As more companies adopt cloud technologies and AI develops, prospects lie in collaboration, Kapoor pointed out, so that everyone can co-create capabilities that will boost the industry in the region and beyond. “AI is the talk of the town; it will help us optimise approaches and give audiences a better experience,” agreed Refat.
AI has been hugely groundbreaking, stated Alzubaidi, even as industry insiders continue to experiment, fail, learn and move on with it. “We are not there yet, but with time things will improve.”
“From a potential framework perspective, it is about building on use cases that will be future-proof. And when it comes to use cases, we’re ready to implement them and not remain limited to what we have built,” said Zaidan. “And this limitation perspective is a moving target, because every day there are innovations.”
As broadcasters continue to be pushed on cost, vendors must reevaluate cloud provider relationships, pave the way forward with effective solutions, and forge partnerships where the broadcaster and vendor train the machine to craft an improved version that benefits everyone. While the possibilities of this super algorithm are endless, its resource consumption and carbon footprint cannot be overlooked, and players must endeavour to integrate models that do less damage to the planet.