
What does the rise in online video mean for brand safety?

Learn how to ensure brand safety in this conversation exploring the evolution of digital advertising and its implications for brand safety. Tim Finn, Head of Partnerships at Unitary, shares 20 years of adtech experience on content moderation and contextual ad targeting.

A conversation with a brand safety specialist

With over 20 years of experience in the adtech world, Tim Finn is now Head of Partnerships at Unitary. He shares with us his knowledge of contextual advertising and online brand safety. Our conversation explores the evolution of internet advertising from a text- and image-based open web to a rapidly changing video world, and what this means for brand safety.

Video is evolving and increasingly popular. How has this affected advertising and the way we think about brand safety?

Social media platforms have managed to bottle the ability to keep us deeply engaged with their content. It’s not just that we love consuming video; platforms have actually found a way to make us want to engage and consume much, much more. This consumer obsession presents a real opportunity for advertisers, but it also comes with complexity, especially when thinking about brand safety.

Individuals highly engaged with social media content. Photo by Unitary.

It’s important to recognise that the increased popularity of video changes both the type of content we consume and the way we consume it. These changes require new methodologies for managing brand safety. Conventionally, the open web was somewhat easier to navigate because advertisers could build a sort of universal brand safety model applicable to any website. Now, however, with these individual platforms, you’ve got to nuance your brand safety strategy around each platform’s specific content types and way of operating, and that’s different.

By looking at each platform’s brand safety policies, you begin to realise how different and tailored they all are, and side-by-side comparisons of those policies are good at highlighting these differences.

What challenges do brands face when thinking about video advertising across multiple and differently structured platforms?

Video, and in particular user generated video, presents totally new brand safety challenges. The first thing to be aware of is the plurality in video distribution. If a particular video gets picked up and shared, it can get huge distribution extremely quickly. This can be problematic for advertisers, as they lack the opportunity to truly understand exactly what it is they’re being associated with.

Another issue is adjacency. The question is no longer just ‘Is the content safe?’ or ‘How safe is it?’. With adjacency, you also consider where the ad appears in relation to the content, and this might be different on every platform.
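To make the adjacency point concrete, here is a minimal sketch, assuming a video timeline annotated with illustrative risk labels, of how a system might decide whether a proposed ad slot sits too close to risky content. The segment structure, labels and window size are hypothetical, not any platform’s actual logic.

```python
from dataclasses import dataclass

# Hypothetical risk labels for consecutive segments of a video timeline.
@dataclass
class Segment:
    start_s: float   # segment start time, in seconds
    end_s: float     # segment end time, in seconds
    risk: str        # illustrative label: "low", "medium" or "high"

def ad_slot_is_brand_safe(segments, ad_time_s, window_s=10.0, blocked=("high",)):
    """Return True if no blocked-risk segment falls within window_s seconds
    of the proposed ad insertion point."""
    for seg in segments:
        near_ad = (seg.start_s - window_s) <= ad_time_s <= (seg.end_s + window_s)
        if near_ad and seg.risk in blocked:
            return False
    return True

# Example: a mid-roll ad at 42s sits right next to a high-risk segment (35s-45s),
# while an ad at 70s does not.
timeline = [Segment(0, 35, "low"), Segment(35, 45, "high"), Segment(45, 90, "low")]
print(ad_slot_is_brand_safe(timeline, ad_time_s=42))  # False
print(ad_slot_is_brand_safe(timeline, ad_time_s=70))  # True
```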

Lastly, video does change things for brand safety because it is computationally far more complex to process and understand than text.

So is it because of this complexity that legacy brand safety tools are not adequate for today’s video content?

Legacy brand safety tools are predicated on a text-and-image, open web environment. Historically speaking, most brand safety tools and software are keyword- and category-led. This is impractical when dealing with a 22-second TikTok video, which likely has no text associated with it.
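As a simple illustration of that limitation, here is a minimal keyword-led check of the kind legacy tools rely on; the blocklist and example inputs are invented. It works when a page supplies text to match against, and returns nothing useful for a video that carries no caption.

```python
# Illustrative keyword-led brand safety check of the kind legacy tools use.
# The blocklist and example inputs are invented for demonstration.
BLOCKLIST = {"violence", "weapons", "crash"}

def keyword_flag(text: str) -> bool:
    """Flag content if any blocklisted keyword appears in its text."""
    tokens = set(text.lower().split())
    return bool(tokens & BLOCKLIST)

article = "Eyewitness footage of the crash has been shared widely online."
print(keyword_flag(article))        # True: a text article gives the tool something to match

video_caption = ""                  # a short user-generated video often has no usable text
print(keyword_flag(video_caption))  # False: nothing to match, and the video itself is never analysed
```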

These legacy systems also imply human moderation, which was manageable 20 years ago. At the time, you had teams of human moderators who could review content, flag it and block it from having ads served against it. Considering the scale of current video-based platforms, using human moderation is just not possible. Ultimately, the speed and scale of video uploads onto current platforms calls for more technology-led solutions.

Amount of digital data created each day.

It seems that with the increase in online content also comes an increase in the diversity of content, which is being produced faster than old methods can control.

Yeah, it’s interesting because in the early days of the web, if you wanted to publish on the internet you had no choice but to learn HTML, which was very time consuming. And so more accessible publishing tools were developed, capable of supporting different formats and easier to use. It was only in the 2000s that publishing really opened up beyond a technical audience. Yet even at that point, content was still published by just a small number of individuals and 50 or so media organisations. Fast forward 20 years, and we’ve got 20, 30, 40 million individuals publishing swathes of content in all sorts of formats.

Individuals posting content. Photo by Unitary.

Why is it beneficial for brands to adopt tools tailored to their needs, allowing them to apply their own policies at scale?

Each brand has a unique understanding of brand safety, and there is no one-size-fits-all approach. This is why, now more than ever, we need a common industry language through which we can communicate our differences. And the Global Alliance for Responsible Media (GARM) is a first step in this direction.

However, to truly benefit from that common language we need tools and software that recognise and support these different risk approaches. Work needs to be done on creating tools that incorporate standards around adjacency and account for complexity, because ultimately every brand has its own definition of brand safety. We must accept that software has to solve this problem, because the volume of content and the speed at which it spreads is not a human-solvable problem right now.
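To give a sense of what applying your own policy at scale could look like in practice, here is a minimal sketch of a brand-specific policy expressed against GARM-style content categories and risk tiers. The category names, tier values and thresholds are illustrative assumptions, not an official GARM schema or any vendor’s implementation.

```python
# Higher number = riskier treatment of a topic; "floor" content is never monetised.
RISK_LEVEL = {"low": 1, "medium": 2, "high": 3, "floor": 4}

# A hypothetical brand's tolerance, expressed per content category.
brand_policy = {
    "Arms & Ammunition": "low",    # only the mildest treatments, e.g. factual news reporting
    "Crime & Violence": "medium",
    "Adult Content": "low",
}

def allowed(category: str, content_risk: str, policy: dict, default_max: str = "low") -> bool:
    """Allow serving the ad if the content's labelled risk does not exceed
    the brand's tolerance for that category."""
    if content_risk == "floor":          # floor content is blocked for everyone
        return False
    max_risk = policy.get(category, default_max)
    return RISK_LEVEL[content_risk] <= RISK_LEVEL[max_risk]

print(allowed("Crime & Violence", "medium", brand_policy))  # True
print(allowed("Crime & Violence", "high", brand_policy))    # False
```

The value of a shared vocabulary is that the same labelled content can be evaluated against many different policies like this one, each reflecting a different brand’s risk appetite.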

And in relation to AI brand safety tools, what features do you think these tools should have? What should they give advertisers?

Understanding 20 seconds of video is far more computationally complex than understanding 500 words of text. With video, you must understand the frames, the audio, the associated text metadata, and the on-screen graphical and textual content. Often, this means using a combination of computer vision and NLP methods to understand meaning and context.
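As a rough sketch of what such a multimodal pipeline might look like, the snippet below samples frames for a vision model, scores the transcript and caption with a text model, and combines the signals. The model calls are placeholders (only the frame sampling uses a real library, OpenCV); a production system would plug trained computer-vision, speech and NLP models into each step.

```python
# Minimal sketch of multimodal video analysis: sample frames for vision models,
# take the transcript and metadata for NLP, then combine the signals.
import cv2  # pip install opencv-python

def sample_frames(video_path: str, every_n: int = 30):
    """Yield every Nth frame of the video as a numpy array."""
    capture = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n == 0:
            yield frame
        index += 1
    capture.release()

def classify_frame(frame) -> float:
    """Placeholder: a computer-vision model would return a visual risk score here."""
    return 0.0

def classify_text(text: str) -> float:
    """Placeholder: an NLP model would score the transcript, caption or on-screen text."""
    return 0.0

def video_risk(video_path: str, transcript: str, caption: str) -> float:
    """Combine per-frame visual risk with textual risk into one conservative score."""
    frame_risk = max((classify_frame(f) for f in sample_frames(video_path)), default=0.0)
    text_risk = max(classify_text(transcript), classify_text(caption))
    return max(frame_risk, text_risk)   # take the worst signal across modalities
```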

Consequently, a whole new set of technology must be developed to understand video content, and crucially, video content in context. We are starting to see some sophisticated AI foundation models being developed, and my guess is that products and services for brand safety built on some of these big foundation models will soon emerge. These models are trained on vast quantities of data, which really improves our ability not just to understand the video content itself, but also to understand the video’s context in a way that was not previously possible.

Is there something preventing the industry from adopting these tools?

Not per se, but I do think it’s about having more companies specialise in building brand safety models using AI and machine learning. Dedicated companies are not strictly ‘required’, but going forward they are much needed.

As people spend more time online and consume more video content, what are your thoughts on the future of brand safety?

Brand safety will become increasingly important for advertisers because, when it comes to content creation, the genie is out of the bottle. I believe we’re going to see a bigger requirement for tools that can protect a brand’s reputation. We’re only just beginning to tap into some of the live streaming areas, and that’s going to present a massive advertising opportunity. However, live streaming also means that response speed and latency become critical for both moderation and brand safety tools. Again, we need new AI-driven solutions, because not only live streaming but also gaming, virtual reality and the metaverse are gaining traction, and their brand safety needs cannot be effectively addressed through human moderation.

This scenario presents a big opportunity for AI to tackle the necessary scale and latency required for a brand safe environment. In the future, I believe that by leveraging commonly available AI solutions, advertisers will increasingly have the ability to guarantee brand safety according to their own guidelines.

For more posts like these, follow us on our social channels. Stay tuned for more interviews and discussions with people working across the Trust & Safety and Brand Safety space.

At Unitary we build technology that enables safe and positive online experiences. Our goal is to understand the visual internet and create a transparent digital space.

For more information on what we do you can check out our website or email us at contact@unitary.ai.
