How MFA and AI-generated content are reshaping brand safety
This article was authored by Paul Bannister, Chief Strategy Officer at Raptive, and was originally published on the ANA (Association of National Advertisers) website.
The internet is awash with Made-for-Advertising (MFA) sites and AI-generated content, creating a crisis of authenticity with practical implications for brand safety. The ANA’s bombshell Programmatic Media Supply Chain Transparency Report and Adalytics’ scathing research on the ineffectiveness of traditional brand safety tools have brought about a paradigm shift.
Today, ensuring safety is no longer just about avoiding inappropriate content; it’s about taking active measures to ensure that ads run alongside trustworthy, authentic material that fosters real engagement. Historically, brand safety has been about excluding pages and properties using keyword blocklists or avoiding entire content categories. But in a landscape increasingly flooded with low-quality content manufactured to get through the filter, this exclusion-based approach is no longer enough. The focus has shifted, and brands are beginning to prioritize where their ads should appear, not just where they shouldn’t.
The question then becomes: How do you create an effective inclusion list?
The Core of an Effective Inclusion List
The decision of what to include brings other issues into consideration, such as how to define quality content and brand safety in the first place. There’s no single answer, but the exercise should help an advertiser devise a plan that focuses on true performance over high-scale and low CPMs. At Raptive, we use a three-pillar framework — focused on identity, content, and traffic — that can serve as a solid guide:
- Identity: But not the one you usually think about in digital advertising. Understanding who is behind a site is critical. Credibility starts with knowing the publisher or content creator, particularly in sectors that require expertise like finance, health, or news. Verifying identity ensures that ads are placed alongside reputable, reliable voices, reducing risks and elevating trust with audiences. If you can’t do this directly, find a partner who can do it on your behalf.
- Content: The quality of content is just as important as its safety. Brands should ensure that sites produce original, meaningful, and valuable material. While AI and automated tools can flag basic risks, human oversight is needed to assess whether the content adds real value and supports the brand’s standards.
- Traffic: The origin and quality of a site’s traffic matter. AI tools can detect fraudulent traffic patterns, but human judgment is needed to discern whether a site’s audience is genuinely engaged. Organic, engaged audiences offer far better environments for ads than click-farms or artificially inflated traffic numbers.
As AI-generated content continues to grow, rejection rates for inclusion lists should naturally rise, and acceptance rates should fall in turn. This is a natural consequence of maintaining a consistent standard in a world where authentic content represents a smaller percentage of overall inventory.
The Right Mix of Automation and Human Oversight
Implementing the three pillars — identity, content, and traffic — requires a careful balance of automation and human oversight. Automated systems are invaluable as a first line of defense, filtering vast amounts of content, flagging risks, and highlighting suspicious traffic. However, human judgment is crucial for making final decisions. Evaluating the credibility of site owners, ensuring content quality, and assessing true audience engagement are tasks that require the nuanced understanding only people can provide.
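To make the division of labor concrete, here is a minimal sketch of what an automated first pass over the three pillars might look like. This is an illustration, not Raptive's actual system: the signal names (`owner_verified`, `originality_score`, `invalid_traffic_rate`) and the thresholds are hypothetical, and the key point is that automation only makes the clear-cut calls while everything borderline is routed to a human reviewer.

```python
from dataclasses import dataclass

@dataclass
class SiteSignals:
    """Hypothetical per-site signals, one per pillar."""
    domain: str
    owner_verified: bool         # identity: is the publisher's identity confirmed?
    originality_score: float     # content: 0.0-1.0 estimate from automated checks
    invalid_traffic_rate: float  # traffic: fraction of visits flagged as invalid

def triage(site: SiteSignals) -> str:
    """Automated first pass; returns 'include', 'reject', or 'human_review'.

    Thresholds here are illustrative placeholders, not recommended values.
    """
    # Hard rejections the automation can make on its own:
    # mostly-invalid traffic or near-zero originality.
    if site.invalid_traffic_rate > 0.5 or site.originality_score < 0.2:
        return "reject"
    # Clear passes require all three pillars to look strong at once.
    if (site.owner_verified
            and site.originality_score > 0.8
            and site.invalid_traffic_rate < 0.05):
        return "include"
    # Everything in between needs human judgment.
    return "human_review"
```

In this framing the automated layer is deliberately conservative: it never approves a site with an unverified owner, and any ambiguous combination of signals lands in the human-review queue rather than on the inclusion list.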
Given the sheer scale of online content, and the increasing prevalence of AI-generated material, this process likely exceeds what any single advertiser can reasonably handle. This is where trusted partners come in. Sellers specializing in curating authentic, high-quality content effectively represent turnkey inclusion lists. These partners handle the heavy lifting, ensuring that even lesser-known or unrecognized sites have undergone thorough vetting. This allows brands to scale their efforts while maintaining a high standard of brand safety.
This may seem similar to what brands already think they are doing, but clearly, the level of scrutiny, quality standards, and human oversight needed to truly focus on quality content has been lacking. Prioritizing quality doesn’t have to mean a smaller inclusion list that sacrifices reach. Overly narrow inclusion lists limit inventory, drive up costs, and exclude high-quality but lesser-known publishers and creators. That’s why it’s critical to work with sellers or networks that can serve as turnkey inclusion lists.
Brands that choose to support real, human-driven content help sustain a diverse and vibrant content ecosystem. But brands don’t need to be altruistic or mission-driven to focus on authenticity, because authenticity is now a critical dimension of brand safety.
With the rise of low-quality, AI-generated content, advertisers risk having their brand associated with “slop,” which has much the same effect as being associated with controversial headlines or toxic UGC. Consumers are smart; they can quickly sense when they’re reading low-quality AI-generated content and will certainly not react well to brands that support that type of content.
The future of brand safety isn’t just about where ads don’t go — it’s about where they belong.