An AI discussion at the BroadcastPro Summit KSA looked at AI-driven video production automation, virtual production, piracy protection, metadata tagging, watermarking and real-time content takedown solutions. We bring you a summary of what some of the experts had to say.
A 2025 report by Canadian video streaming tech company Haivision identifies AI and machine learning as the top emerging trends in the broadcast industry, with 64% of respondents naming them as the most transformative technologies for broadcast production. The report also notes a sharp rise in the adoption of AI in broadcast workflows, with 25% of participants using it, up from last year’s 9%. Further, 41% of respondents anticipate using AI within the next two years.
This growing reliance on AI was the focal point of a panel discussion at the BroadcastPro Summit in Saudi Arabia, moderated by Jody Neckles, Principal Solutions Architect at Backlight and founder of Technative. Titled ‘Unlocking AI in Broadcast and Streaming: Innovations and Challenges’, the session featured Eyad Al Dwaik, Director of Engineering at stc/Intigral; Kathey Battrick, Head of Media Management & AI Taskforce Leader at Asharq Network; and Alaa Ali, Director of Anti-Piracy and Technology at RightsHero.
Jody Neckles opened the discussion by emphasising how AI is transforming broadcast operations from content production to distribution, driving innovation and efficiency while raising important questions about fair use and ethics.

Kathey Battrick shared how Asharq uses AI to manage its vast and ever-growing content libraries: “Our main challenge is to ensure that the content is easily and quickly discoverable by all the production teams, regardless of the platform they use to produce it. Along with this, we must also quicken the process of understanding content received in foreign languages. The transcription and translation must be done very quickly. Currently we are using AI to create metadata for nearly 1,600 hours of incoming content each month. This includes mostly raw video content, along with the live channel output that is recorded for Asharq News.”
“Initially we used AI for more traditional functions such as face detection, object recognition, transcription and translation,” she continued. “Recently we’ve added a generative multimodal AI solution to create human-like scene descriptions, making it easier to highlight key moments in content. Since we have both Arabic and English content, we are also creating multilingual metadata to meet the search preferences of our users.”
For Asharq, AI’s facial recognition capabilities have been particularly valuable. “When we launched, there were no existing models capable of identifying Arab business leaders and public figures, who are essential to our content. Over the years, we have built our own dataset to address this gap. Every individual we interview is documented with their name, job title and aliases in both English and Arabic. This has been particularly useful for standardising name spellings across platforms, as Arabic names often have multiple English transliterations,” explained Battrick.
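To illustrate the kind of alias dataset Battrick describes, here is a minimal Python sketch. The entry, field names and normalisation rule are hypothetical and the person is fictional; the point is simply how multiple English transliterations can be resolved to a single canonical record.

```python
# Hypothetical alias dataset for standardising name spellings across
# transliterations; the entry below is fictional, not from Asharq's data.
PEOPLE = [
    {
        "canonical_en": "Ahmed Al-Rashid",
        "canonical_ar": "أحمد الراشد",
        "job_title": "Chief Executive, Example Capital",
        "aliases": ["Ahmad Al Rashid", "Ahmed Alrashid", "Ahmad al-Rashed"],
    },
]

def normalise(text: str) -> str:
    """Lower-case and drop punctuation/spaces so spelling variants collide."""
    return "".join(ch for ch in text.lower() if ch.isalnum())

# Look-up table from every known spelling to its canonical record.
ALIAS_INDEX = {
    normalise(spelling): person
    for person in PEOPLE
    for spelling in [person["canonical_en"], *person["aliases"]]
}

def resolve(name: str):
    """Map any transliteration back to one canonical entry, or None."""
    return ALIAS_INDEX.get(normalise(name))

print(resolve("Ahmad Al Rashid")["canonical_en"])  # -> Ahmed Al-Rashid
```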

Elaborating on the workflow, she noted: “Once the content arrives, it’s sent for AI-powered metadata processing. The processed metadata is then distributed across all our production and asset management platforms. Our archivists review and refine the data before it is reintegrated, improving accuracy and boosting user confidence in AI-generated metadata.”
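A simplified sketch of that human-in-the-loop flow might look like the following. The Asset structure and function names are placeholders rather than Asharq's actual systems; the point is that AI-generated metadata is distributed first, then corrected by archivists and reintegrated.

```python
# Hedged sketch of the review loop described above. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Asset:
    asset_id: str
    metadata: dict = field(default_factory=dict)
    reviewed: bool = False

def ai_enrich(asset: Asset) -> Asset:
    # Stand-in for face detection, transcription, translation, scene description.
    asset.metadata.update({"faces": [], "transcript": "", "scene_descriptions": []})
    return asset

def distribute(asset: Asset) -> None:
    # Push metadata to production and asset-management platforms (placeholder).
    print(f"distributed {asset.asset_id} to production and MAM tools")

def archivist_review(asset: Asset) -> Asset:
    # Human correction pass: fix misidentified faces, spellings, biases.
    asset.reviewed = True
    return asset

def ingest(asset: Asset) -> Asset:
    asset = ai_enrich(asset)
    distribute(asset)                 # available to producers immediately
    asset = archivist_review(asset)   # refined afterwards
    distribute(asset)                 # corrected metadata reintegrated
    return asset
```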
This content indexing process is applied to both raw footage and live video streams relayed to Asharq, in addition to the 1,600 hours of incoming content mentioned earlier.
Developments such as these are useful for producers across both broadcast and digital media. The processed data also has the potential to filter through to owned and operated platforms, making content more discoverable on them, noted Neckles.
Intigral, a part of Saudi Telecom Group (stc), uses AI for content analysis and delivery. “We are working on a project using AI to analyse our content library to create metadata around genre, mood and setting. This data will feed into our recommendation system, providing viewers with more relevant content suggestions,” remarked Eyad Al Dwaik.
Over the past couple of years, Intigral has also been working with one of its partners on an AI model to enhance its encoding. “After deploying the new model, we saw a 67% reduction in our video encoding bitrates. It significantly improved efficiency while maintaining quality. For ultra-HD, achieving high-quality video at around 3-5Mbps is remarkable. Our customer satisfaction went up, as did the KPIs for the engineering department. It was a win-win situation,” he added.
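As a rough sanity check of those figures, a 67% reduction on a conventional UHD bitrate brings the result into the range Al Dwaik mentions. The 15Mbps baseline below is an assumption for illustration; only the 67% figure comes from the panel.

```python
# Quick arithmetic on the quoted reduction. The baseline is an assumed figure.
baseline_uhd_mbps = 15.0        # assumed conventional UHD encoding bitrate
reduction = 0.67                # figure quoted on the panel
optimised_mbps = baseline_uhd_mbps * (1 - reduction)
print(f"{optimised_mbps:.1f} Mbps")  # ~5 Mbps, in line with the 3-5Mbps cited
```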

AI also serves as an assistant to Intigral’s engineering team, enabling it to code faster and more accurately. “Some scripts and code used to take me a day to write. With AI, I can do it in less than an hour. If I’m using a new language and I don’t know the syntax, I just have to input what I want and AI generates the code for me. If there are errors, I provide feedback and the AI corrects itself,” he said.
At RightsHero, AI is being built into crawling mechanisms to improve the detection and monitoring of pirated content across various platforms, including social media, search engines and website aggregators. “We’ve integrated AI into the enforcement process to detect and remove content more accurately. Our goal is to implement AI across the entire workflow, from searching and detecting to taking down content, ensuring a more efficient and streamlined process,” said Alaa Ali.
Additionally, RightsHero leverages metadata gathered from aggregator sites to identify pirated content faster, he explained. “We’ve also trained the AI to decide the best takedown path – whether to contact the platform or alert the hosting provider. So, depending on the data available, the AI chooses the fastest route to disable that content.”
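In pseudocode, that routing decision might be sketched as follows. The fields and the order of preference are illustrative assumptions, not RightsHero's actual logic.

```python
# Illustrative takedown routing: pick the fastest removal path from known data.
def choose_takedown_route(infringement: dict) -> str:
    """Return which party to contact first for the quickest removal."""
    if infringement.get("platform_has_content_id"):
        return "platform_rights_portal"      # use the platform's own copyright tool
    if infringement.get("platform_abuse_contact"):
        return "platform_abuse_contact"      # send a notice directly to the platform
    if infringement.get("hosting_provider"):
        return "hosting_provider_notice"     # escalate to the host
    return "registrar_or_cdn_escalation"     # last resort

route = choose_takedown_route({
    "url": "https://example-aggregator.test/stream/123",   # hypothetical URL
    "hosting_provider": "ExampleHost",
})
print(route)  # hosting_provider_notice
```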
Social media platforms such as YouTube and Facebook have their own anti-piracy measures. “Most social media platforms have their own content ID systems, which automatically scan uploaded videos against a database of copyrighted material. However, content pirates are now armed with tactics that allow them to evade detection. To counter this, we use automated software that can identify content that the content ID system may have missed. This software detects unauthorised content, reports it to the social media platform and ensures it is taken down immediately. Third-party companies such as RightsHero are not involved in this process. The platforms develop their own tools and use AI and machine learning to improve the process.”
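The matching step behind such tools can be illustrated with a toy fingerprint comparison: uploaded frames are hashed and compared against reference hashes using Hamming distance, so near-duplicates survive re-encoding or cropping. Real content ID systems are far more sophisticated; the hashes and threshold below are invented for illustration only.

```python
# Toy fingerprint matching in the spirit of a content ID system. The 64-bit
# hashes are assumed to come from an upstream perceptual-hashing step.
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

REFERENCE_HASHES = {
    0xF0F0AA5512345678: "Match of the Week - Episode 12",   # hypothetical title
}

def find_matches(upload_hashes, max_distance: int = 6):
    """Flag uploaded frames whose hash is close to any protected reference."""
    hits = []
    for h in upload_hashes:
        for ref, title in REFERENCE_HASHES.items():
            if hamming(h, ref) <= max_distance:
                hits.append((hex(h), title))
    return hits

# A near-duplicate frame (a few bits flipped by re-encoding or cropping) still matches.
print(find_matches([0xF0F0AA5512345679]))
```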

Social media platforms have also enhanced their detection process with the use of AI. “Five years ago, the detection was much slower than it is now. Now, once the reference file is uploaded, the detection and takedown is immediate,” added Ali.
Elaborating on content protection and fighting piracy, Al Dwaik said: “Piracy is becoming easier by the day. Earlier, it was harder to get pirated content. Now, for as little as $50, viewers can stream content from premium platforms like NBC, Shahid and stc. Fighting piracy is a never-ending battle. Here, AI can help with anomaly detection. For instance, pirates’ behaviour differs from that of normal users when it comes to content consumption. If a user consumes a week’s worth of content in 24 hours, that signals suspicious activity. AI can find such anomalies with the right training. Without AI, identifying piracy sources takes a couple of days; with AI, it just takes a few minutes.”
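A bare-bones version of that anomaly rule could look like this. The 20-hour daily cap is an assumption chosen for illustration, not Intigral's actual threshold.

```python
# Minimal check for accounts "watching" more in 24 hours than a person plausibly could.
from collections import defaultdict

def flag_suspicious(view_events, max_hours_per_day: float = 20.0):
    """view_events: iterable of (account_id, hours_watched) within one 24h window."""
    totals = defaultdict(float)
    for account_id, hours in view_events:
        totals[account_id] += hours
    return [acc for acc, hours in totals.items() if hours > max_hours_per_day]

events = [("acct-1", 2.5), ("acct-2", 40.0), ("acct-2", 55.0)]  # acct-2 ~ a week of viewing
print(flag_suspicious(events))  # ['acct-2']
```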
Intigral currently relies on a manual watermarking process, though AI could streamline it in the future.
Interestingly, pirates are also leveraging technology to infringe copyright. While AI has not yet permeated piracy rings, automation is widely used. “Previously, piracy involved manually capturing and uploading content. If the content was taken down, they had to start from scratch. Now, automation allows them to re-upload the content within seconds. This makes automation and AI doubly important – we now have to adapt and match the pirates’ sophisticated set-up, else we lose the piracy battle,” said Ali candidly.
Ali suggested integrating AI with blockchain technology for better piracy control: “Blockchain can serve as a decentralised identifier for copyright holders, preventing content duplication and ensuring uniqueness.”
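Conceptually, such a registry could work along these lines, with a plain dictionary standing in for the decentralised ledger. This is a sketch of the idea only; a production system would anchor the records on-chain and handle near-duplicate fingerprints, which a plain hash cannot.

```python
# Conceptual sketch: register a content fingerprint against a rights holder so
# duplicates can be detected. A dict stands in for the decentralised ledger.
import hashlib

LEDGER: dict[str, str] = {}   # content fingerprint -> rights holder

def register(content: bytes, rights_holder: str) -> str:
    fingerprint = hashlib.sha256(content).hexdigest()
    if fingerprint in LEDGER:
        raise ValueError(f"already registered to {LEDGER[fingerprint]}")
    LEDGER[fingerprint] = rights_holder
    return fingerprint

register(b"master-file-bytes", "Example Rights Holder")   # hypothetical content
print(len(LEDGER))  # 1; re-registering the same bytes would raise an error
```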
Despite the rapid advances in AI, all the panellists agreed that human oversight remains essential. “The human factor can never be eliminated. Its role, however, has changed. AI will never be 100% accurate and human intervention will always be needed to train, supervise and correct AI. For example, sometimes AI is unable to identify copyright infringement, so human intervention is needed,” said Ali.
Battrick echoed this sentiment, highlighting that Asharq still employs archivists despite AI integration: “We don’t feel that AI will ever replace our archivists. In fact, they have become more important than before. Their role has shifted from metadata creation to supervising AI-generated metadata. AI, especially in facial recognition, is not 100% accurate. When a face is detected and indexed by the system, a human reviewer is still needed to verify and correct any errors or biases. For instance, Boris Johnson and his brother look quite similar, and the AI repeatedly misidentified the brother as Boris. So we need to remain vigilant in such cases.”
Al Dwaik revealed how Intigral’s attempt to use AI for censorship was unsuccessful: “Our goal was to develop a system where AI could handle censorship by learning from past catalogues and censorship decisions. However, it didn’t work as expected – we realised that AI cannot take responsibility. If AI makes an error and inappropriate content is aired, we can’t simply blame the technology. A human reviewer is essential to ensure accuracy. Since we couldn’t fully trust AI for this task, we decided not to deploy the system.”
Al Dwaik shared his vision for the future of AI in data centres: “The future of AI is to give engineers an AI assistant they can ask to test changes in the staging environment, run the tests and report the results, and then help deploy those changes to production.”
As the discussion concluded, the panellists agreed that while AI is revolutionising broadcast workflows, it cannot function independently of human expertise.