DigitalAdvertising · AdTech · VideoUnderstanding · ContextualTargeting · BrandSafety

Brand Safety Summit New York Recap: From Brand Safety to Brand & AI Safety

November 17, 2025

The AI Paradox We Now Face

PYLER has consistently emphasized the mission to overcome the AI Paradox—the inherent duality where AI delivers overwhelming productivity while simultaneously becoming the systemic source of threats like deepfakes and misinformation—and the broader Crisis of Trust created by the explosive rise of AI-driven content. Our goal has always been to help build the emerging Infrastructure of Trust. The recent Brand Safety Summit New York confirmed that this philosophy is no longer unique to a few early voices—it is rapidly becoming the shared vision of the global industry.

The Summit was a cooperative forum that moved beyond competition, bringing together leaders from platforms such as TikTok, Meta, and Snap; the world’s largest advertising companies—WPP, Publicis, IPG, and Omnicom—and key verification partners including IAS (Integral Ad Science) and Channel Factory. PYLER holds deep respect for the organizers' efforts in convening such a high-caliber group to share insights and build community around an increasingly complex challenge. The event served as a powerful catalyst for awareness and alignment, reinforcing our conviction that the industry’s overall standard for safety is set to rise significantly.


From Brand Safety to Brand and AI Safety: AI Changes the Scale of Safety

The core theme of this year’s Summit was the revolutionary expansion of the safety discussion. Where traditional Brand Safety centered on the limited context of “next-to” adjacency—whether an ad appears beside safe content—we have now entered an era where safety must account for the entire system in which AI participates in content creation, distribution, and recommendation.

Industry leaders made it clear that AI is fundamentally transforming the Brand Safety paradigm. And this shift is not merely a broader scope; AI is reshaping the very form, logic, and meaning of digital content itself. As a result, the question of safety and trust has moved far beyond placement management and into a deeper need to ensure the transparency and authenticity of content and the algorithms that govern it.

Brand Safety used to be about adjacency. Now, it’s about provenance, authenticity, and algorithmic integrity.


Key Insights from the Summit: Translating Safety into Tangible Performance

The sessions at the Summit made one point unmistakably clear: Brand Safety is no longer a cost or a regulatory burden — it is a strategic investment that directly drives performance.

  • The End of the Trade-Off and Contextual Intelligence:

    The era when advertisers had to choose between “safety” and “performance” is over. Speakers emphasized moving beyond the over-blocking caused by keyword-based exclusion and instead selecting brand-positive content through advanced Contextual Intelligence. When combined with audience data targeting, this approach significantly reduces wasted impressions and delivers measurable improvements in marketing performance. Safety and performance converge when context — not keywords — drives decisions.


  • Closed-Loop Measurement and the Growing Role of RMNs:

    Closed-Loop Measurement demonstrated that advertising in verified, trusted environments leads not only to higher engagement but to actual transactions. The advancement of consumer ID tracking, especially through RMNs (Retail Media Networks), marks a shift toward a world where trust directly converts into revenue. In trusted environments, impressions become outcomes.


  • Agentic AI and the Irreplaceable Role of Human Judgment:

    As we enter the era of Agentic AI — systems capable of autonomously handling large parts of media buying and optimization — the Summit underscored a crucial point: human oversight remains essential. Contextual understanding and brand values still require human interpretation to ensure AI-driven decisions are aligned with brand identity. AI can optimize the path — but only humans can define the destination.


  • Auditable AI, Transparency, and Creative Leeway:

    Auditable AI — systems whose decisions can be tracked, reviewed, and verified — was highlighted as a core component of ethical governance. Speakers also emphasized that intentionally curating and managing a brand’s “Aura” while giving creators sufficient Creative Leeway strengthens both performance and authenticity. Accountability builds trust; authenticity sustains it.



PYLER's Observation: The Industrial Context Behind the Safety Discussion

While the Summit discussions focused on the ongoing challenges of the AI era, PYLER noted that these issues closely mirror past industry experiences. The UGC (User Generated Content) era of the 2010s—driven by smartphones and YouTube—democratized content creation but also led to the proliferation of low-quality content and misinformation. Brands being placed next to unsafe material and the inherent limits of human review were early warning signs that were already visible at the time.

The crisis of trust defining the AIGC (AI-generated content) era today can be seen as a reproduction of those same challenges—only now multiplied by the explosive complexity and accelerated speed introduced by AI. This historical parallel provides an essential framework for understanding the current AI safety problem and why the stakes are higher than ever.

The crisis hasn’t changed — only the acceleration has.


What PYLER Saw on the Ground: The Transparency Gap in Video Safety Technology

The overall message of the Summit indicated that the industry is well aware of the trust challenges posed by the AI era and is actively discussing potential paths toward a solution. However, one area conspicuously lacked clear answers: Video Safety.

While text- and image-based verification technologies are already being used at scale, there was a noticeable lack of practical examples—or even introductions—of the AI technologies and transparency frameworks required to meaningfully address safety for video content, which remains the center of digital media and the most powerful driver of emotional impact.

This gap is closely aligned with what PYLER observed at TrustCon 2025. Although participants universally acknowledged the need for AI-driven approaches to video content moderation, the absence of concrete sessions, case studies, or references to the underlying technology highlighted a major, unresolved challenge for the industry.

The industry acknowledges the problem — but video remains the unaddressed frontier.


Ultimately, the Core Question Is How to Build the Infrastructure of Trust

Observing the entirety of the Summit’s discussions, PYLER solidified one conviction: the AI era requires more than regulations or platform-level self-governance. The real imperative is to build a technological foundation capable of sustaining trust — the Infrastructure of Trust.

The accelerating speed of deepfakes and the growing sophistication of manipulated video content clearly expose the limits of existing systems. This is why the need for Multi-Modal Video Understanding AI felt even more urgent throughout the Summit.

This technology can simultaneously interpret and evaluate all elements of video — visual signals, audio cues, speech, text, behavior patterns, and narrative progression — as a unified context. In the video era, trust must begin with AI that understands video.

Trust is no longer a policy choice — it is an engineering challenge.


PYLER's Post-Summit Direction Is Clearer

The Brand Safety Summit New York was not a venue for providing all the answers. However, it served as a crucial milestone — one that revealed what the industry is truly grappling with, where the pain points lie, and what critical tasks remain unresolved.

What became unequivocally clear is that AI must ultimately solve the challenges created by AI, and that Video Understanding AI holds the final key to trust. Based on the problem awareness and technological gaps underscored throughout the Summit, PYLER will focus even more intently on addressing the industry’s most critical and unaddressed challenge — Video Safety — and in doing so, help build the next stage of the Infrastructure of Trust.

The next era of trust will belong to the companies that can understand video, not just detect it.


The Message from Rob Rasko, President of the Brand Safety Summit Series

Rob Rasko, President of the Brand Safety Summit Series, emphasized that given the current challenges surrounding Brand Safety, it is time to reframe and expand the conversation into the broader discourse of “Brand and AI Safety.” This more comprehensive approach reflects the major shift in digital advertising driven by the rapid introduction of AI.

While traditional Brand Safety focused on the constraints of ad placement, adjacency, and keyword-based evaluation, Brand and AI Safety extends the discussion to all elements of media: campaign placement, suitability, target audiences, and performance. Importantly, this reframing is not an abandonment of Brand Safety but an evolution toward a more unified and effective framework—one that aligns with today’s data-driven, AI-enhanced advertising environment.

AI presents both risks and opportunities as it reshapes media placement and content creation, and the Summit’s mission is to help the industry navigate beyond the risks toward the opportunities.

His message was clear: Brand Safety isn’t being replaced — it’s being redefined for the AI era.

© 2025 PYLER. All rights reserved.

pylerbiz@pyler.tech
