Blog

Read more about our approach to building safe intelligence for video.

Brand Safety Summit New York Recap: From Brand Safety to Brand & AI Safety

The AI Paradox We Now Face

PYLER has consistently emphasized its mission to overcome the AI Paradox (the duality in which AI delivers overwhelming productivity while simultaneously becoming a systemic source of threats such as deepfakes and misinformation) and the broader Crisis of Trust created by the explosive rise of AI-driven content. Our goal has always been to help build the emerging Infrastructure of Trust. The recent Brand Safety Summit New York confirmed that this philosophy is no longer unique to a few early voices; it is rapidly becoming the shared vision of the global industry.

The Summit was a cooperative forum that moved beyond competition, bringing together leaders from platforms such as TikTok, Meta, and Snap; the world’s largest advertising companies, including WPP, Publicis, IPG, and Omnicom; and key verification partners such as IAS (Integral Ad Science) and Channel Factory. PYLER holds deep respect for the organizers’ efforts in convening such a high-caliber group to share insights and build community around an increasingly complex challenge. The event served as a powerful catalyst for awareness and alignment, reinforcing our conviction that the industry’s overall standard for safety is set to rise significantly.


From Brand Safety to Brand and AI Safety: AI Changes the Scale of Safety

The core theme of this year’s Summit was the revolutionary expansion of the safety discussion. Where traditional Brand Safety centered on the limited context of “next-to” adjacency—whether an ad appears beside safe content—we have now entered an era where safety must account for the entire system in which AI participates in content creation, distribution, and recommendation.

Industry leaders made it clear that AI is fundamentally transforming the Brand Safety paradigm. And this shift is not merely a broader scope; AI is reshaping the very form, logic, and meaning of digital content itself. As a result, the question of safety and trust has moved far beyond placement management and into a deeper need to ensure the transparency and authenticity of content and the algorithms that govern it.

Brand Safety used to be about adjacency. Now, it’s about provenance, authenticity, and algorithmic integrity.


Key Insights from the Summit: Translating Safety into Tangible Performance

The sessions at the Summit made one point unmistakably clear: Brand Safety is no longer a cost or a regulatory burden — it is a strategic investment that directly drives performance.

  • The End of the Trade-Off and Contextual Intelligence:

    The era when advertisers had to choose between “safety” and “performance” is over. Speakers emphasized moving beyond the over-blocking caused by keyword-based exclusion and instead selecting brand-positive content through advanced Contextual Intelligence. When combined with audience data targeting, this approach significantly reduces wasted impressions and delivers measurable improvements in marketing performance. Safety and performance converge when context — not keywords — drives decisions.


  • Closed-Loop Measurement and the Growing Role of RMNs:

    Closed-Loop Measurement demonstrated that advertising in verified, trusted environments leads not only to higher engagement but to actual transactions. The advancement of consumer ID tracking, especially through RMNs (Retail Media Networks), marks a shift toward a world where trust directly converts into revenue. In trusted environments, impressions become outcomes.


  • Agentic AI and the Irreplaceable Role of Human Judgment:

    As we enter the era of Agentic AI — systems capable of autonomously handling large parts of media buying and optimization — the Summit underscored a crucial point: human oversight remains essential. Contextual understanding and brand values still require human interpretation to ensure AI-driven decisions are aligned with brand identity. AI can optimize the path — but only humans can define the destination.


  • Auditable AI, Transparency, and Creative Leeway:

    Auditable AI — systems whose decisions can be tracked, reviewed, and verified — was highlighted as a core component of ethical governance. Speakers also emphasized that intentionally curating and managing a brand’s “Aura” while giving creators sufficient Creative Leeway strengthens both performance and authenticity. Accountability builds trust; authenticity sustains it.



PYLER's Observation: The Industrial Context Behind the Safety Discussion

While the Summit discussions focused on the ongoing challenges of the AI era, PYLER noted that these issues closely mirror past industry experiences. The UGC (User Generated Content) era of the 2010s—driven by smartphones and YouTube—democratized content creation but also led to the proliferation of low-quality content and misinformation. Brands being placed next to unsafe material and the inherent limits of human review were early warning signs that were already visible at the time.

The crisis of trust defining the AIGC (AI-generated content) era today can be seen as a recurrence of those same challenges, only now multiplied by the explosive complexity and accelerated speed introduced by AI. This historical parallel provides an essential framework for understanding the current AI safety problem and why the stakes are higher than ever.

The crisis hasn’t changed — only the acceleration has.


What PYLER Saw on the Ground: The Transparency Gap in Video Safety Technology

The overall message of the Summit indicated that the industry is well aware of the trust challenges posed by the AI era and is actively discussing potential paths toward a solution. However, one area remained conspicuously absent from clear answers: Video Safety.

While text- and image-based verification technologies are already being used at scale, there was a noticeable lack of practical examples—or even introductions—of the AI technologies and transparency frameworks required to meaningfully address safety for video content, which remains the center of digital media and the most powerful driver of emotional impact.

This gap is closely aligned with what PYLER observed at TrustCon 2025. Although participants universally acknowledged the need for AI-driven approaches to video content moderation, the absence of concrete sessions, case studies, or references to the underlying technology highlighted a major, unresolved challenge for the industry.

The industry acknowledges the problem — but video remains the unaddressed frontier.


Ultimately, the Core is 'How to Build the Infrastructure of Trust'

Observing the entirety of the Summit’s discussions, PYLER solidified one conviction: the AI era requires more than regulations or platform-level self-governance. The real imperative is to build a technological foundation capable of sustaining trust — the Infrastructure of Trust.

The accelerating speed of deepfakes and the growing sophistication of manipulated video content clearly expose the limits of existing systems. This is why the need for Multi-Modal Video Understanding AI felt even more urgent throughout the Summit.

This technology can simultaneously interpret and evaluate all elements of video — visual signals, audio cues, speech, text, behavior patterns, and narrative progression — as a unified context. In the video era, trust must begin with AI that understands video.

Trust is no longer a policy choice — it is an engineering challenge.


PYLER's Direction Post-Summit is Clearer

The Brand Safety Summit New York was not a venue for providing all the answers. However, it served as a crucial milestone — one that revealed what the industry is truly grappling with, where the pain points lie, and what critical tasks remain unresolved.

What became unequivocally clear is that AI must ultimately solve the challenges created by AI, and that Video Understanding AI holds the final key to trust. Based on the problem awareness and technological gaps underscored throughout the Summit, PYLER will focus even more intently on addressing the industry’s most critical and unaddressed challenge — Video Safety — and in doing so, help build the next stage of the Infrastructure of Trust.

The next era of trust will belong to the companies that can understand video, not just detect it.


The Message from Rob Rasko, President of the Brand Safety Summit Series

Rob Rasko, President of the Brand Safety Summit Series, emphasized that given the current challenges surrounding Brand Safety, it is time to reframe and expand the conversation into the broader discourse of “Brand and AI Safety.” This more comprehensive approach reflects the major shift in digital advertising driven by the rapid introduction of AI.

While traditional Brand Safety focused on the constraints of ad placement, adjacency, and keyword-based evaluation, Brand and AI Safety extends the discussion to all elements of media: campaign placement, suitability, target audiences, and performance. Importantly, this reframing is not an abandonment of Brand Safety but an evolution toward a more unified and effective framework—one that aligns with today’s data-driven, AI-enhanced advertising environment.

AI presents both risks and opportunities as it reshapes media placement and content creation, and the Summit’s mission is to help the industry navigate beyond the risks toward the opportunities.

His message was clear: Brand Safety isn’t being replaced — it’s being redefined for the AI era.

AdTech

BrandSafety

ContextualTargeting

DigitalAdvertising

VideoUnderstanding

2025. 11. 17.

Why Real-Time Context is a Critical Bottleneck for AI – PYLER's Take from SingleStore Now 2025

Attending SingleStore Now 2025 felt less like visiting an event and more like a conversation about the future we’re actively building. This event matters to us because its central theme—that AI's success hinges on a highly responsive, real-time context layer—is not just a future trend, but the core technical problem PYLER has been solving at petabyte scale.

The industry is now waking up to a problem we've long understood: AI models are "context-blind." Statistics from the opening keynote by SingleStore CEO Raj Verma (for example, that fewer than 26% of AI projects achieve ROI) underscore that legacy architectures are reaching their limits. They cannot deliver the ultra-low latency, high concurrency, and complex query support needed to provide real-time context.

This challenge is exponentially harder for unstructured video data—the most complex, data-heavy, and high-velocity medium. The keynote discussion resonated deeply with our obsessive focus on solving this high-latency data problem. Real-time context is the difference between preventing brand risk and merely reporting it after the damage is done.


Key Innovations Intersecting with Our Architectural Approach

PYLER team members join SingleStore CEO Raj Verma (front, center) and C-level executives to celebrate the SingleStore Nasdaq closing ceremony in New York City.


Each new capability introduced at the event reflected the same principles we’ve built our platform on — a shared belief in real-time, context-rich data as the foundation of AI. This alignment was clear across three key announcements, which together form a powerful toolkit for the next generation of AI applications.

1. AI/ML Functions: Bringing AI to the Data

SingleStore is embedding AI capabilities (such as AI_SENTIMENT(), AI_SUMMARIZE(), and AI_CLASSIFY()) directly into the database, accessible via simple SQL. This eliminates the slow, costly ETL pipelines traditionally required to move data to an external AI model and back. For PYLER, this directly aligns with our philosophy of 'bringing compute to the data.' Moving petabytes of video data out for external inference is architecturally unfeasible and slow; in-database functions allow us to enrich our contextual analysis in place, drastically reducing latency. It is a move from batch pipelines to instant, on-the-fly intelligence: a core principle of our 'Pride-worthy Engineering.'
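
As a rough illustration, the snippet below is a minimal sketch assuming the singlestoredb Python client, a hypothetical video_captions table, and the AI functions named above; exact argument formats vary by release, so treat it as illustration rather than documentation.

```python
# Minimal sketch, assuming the singlestoredb Python client and a hypothetical
# video_captions table; the exact AI function arguments may differ by release.
import singlestoredb as s2

conn = s2.connect("app_user:secret@svc-example.singlestore.com:3306/analytics")  # placeholder DSN

cur = conn.cursor()
# Classify and score freshly ingested captions in place -- no ETL hop out to an
# external model endpoint and back.
cur.execute(
    """
    SELECT video_id,
           AI_CLASSIFY(caption_text, '["brand_safe", "unsafe", "needs_review"]') AS label,
           AI_SENTIMENT(caption_text) AS sentiment
    FROM video_captions
    WHERE ingested_at > NOW() - INTERVAL 5 MINUTE
    """
)
for video_id, label, sentiment in cur.fetchall():
    print(video_id, label, sentiment)

conn.close()
```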

2. Zero Copy Attach: Agile Development on Live Data

SingleStore's new “Zero Copy Attach” feature allows developers to instantly attach smaller, isolated compute clusters to a production database without copying any data, providing complete workload isolation and independent scaling. For PYLER's R&D, this is a massive enabler because it allows our engineers to experiment fearlessly. We can now test new validation models or AI agents on live, production-scale data without putting production performance at risk or incurring massive data duplication costs. This shortens our innovation cycle from weeks to days and lets us trial new trust models without compromising our core service.

3. Aura Analyst: The Natural Language Data Agent

Speed is only half the battle; the other half is accessibility. Aura Analyst is a new "context-aware" natural language agent system that allows business users to query data in plain English, with every query auditable and governable. While we may not embed this in customer-facing dashboards immediately, we see immense value in Aura Analyst for our internal operations, because it aligns with our core philosophy of data democratization: access to data and the opportunity to analyze it should be guaranteed to everyone, not just developers. This isn't just an ideal; it's a core value for our high-speed growth. Tools like Aura Analyst can dramatically boost operational efficiency by empowering our non-developer teams to get insights without a SQL bottleneck.


Our Perspective: How We Architected for Trust

This vision of a real-time, context-aware AI future is not just theoretical for us at PYLER—it is the challenge we solve daily. We provide real-time brand safety for global brands like Samsung Electronics and LVMH.

Our previous architecture, built on PostgreSQL, faced the exact problems the industry is now discussing. Our service workload is inherently complex, mixing high-throughput transactional (OLTP) queries from real-time ad metrics with heavy analytical (OLAP) queries against our massive video analysis database. The PostgreSQL-based system could not guarantee latency for this mixed workload, which directly impacted user experience. The engineering challenge was that traditional OLAP-optimized systems handle high-concurrency ingestion poorly, while OLTP-optimized systems are not designed for complex analytical joins. We were forced into a brittle system of materialized views and nightly batch jobs, which is the very definition of 'stale context.'

This is how we solved the problem: we made the engineering decision to move to an HTAP (hybrid transactional/analytical processing) architecture. By migrating to SingleStore, we eliminated the core bottleneck: the join latency between live ad-metrics data and petabyte-scale video analytics. The new design allows us to serve both query patterns concurrently with radically improved performance, which has been a critical factor in dramatically improving our user experience. A sketch of the query shape this unlocks follows below.
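
The sketch below uses a hypothetical schema, not our production tables; it only shows the shape of the hybrid query an HTAP store can answer in one pass, joining a high-concurrency ad-metrics ingest table with large-scale video analysis results instead of relying on materialized views and nightly batches.

```python
# Illustrative only: hypothetical schema showing the mixed OLTP/OLAP query shape
# an HTAP store can serve directly. ad_metrics_live is a high-concurrency ingest
# table; video_analysis holds large-scale video understanding results.
HYBRID_QUERY = """
SELECT   m.campaign_id,
         SUM(m.impressions) AS impressions,
         AVG(v.safety_score) AS avg_safety_score,
         SUM(CASE WHEN v.safety_score < 0.5 THEN m.spend ELSE 0 END) AS at_risk_spend
FROM     ad_metrics_live AS m
JOIN     video_analysis  AS v ON v.video_id = m.video_id
WHERE    m.event_time > NOW() - INTERVAL 15 MINUTE
GROUP BY m.campaign_id
"""

def campaign_risk_snapshot(conn):
    """Run the hybrid query on a live DB-API connection and return one row per campaign."""
    cur = conn.cursor()
    cur.execute(HYBRID_QUERY)
    return cur.fetchall()
```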

In our internal benchmarks, SingleStore delivered query speeds up to 100x faster for complex OLAP-style analytical queries compared to our previous partitioned PostgreSQL setup. That leap is not a vanity metric. For PYLER, it translates directly into two critical outcomes: a vastly improved, real-time experience for our user-facing analytical dashboards, and the engineering delta that separates 'brand safety' from 'brand risk.' It's the difference between catching harmful video content before an ad impression and reporting it after the damage is done. This focus on how we architect for trust is what defines us.


Beyond the Sessions: A Meeting of Minds

Leaders from PYLER and partner companies exchange insights during a networking cruise at SingleStore Now 2025. From left to right: Jaeho Lee, Posco, Jaenyun Shin, A Platform, Hyeongjun Park, PYLER, DongChan Park, PYLER, Rahul Rastogi, SingleStore, Ranjit Panigrahi, Apple, Hannim Kim, APlatform, and Jaeun Kim, PYLER.


Perhaps just as valuable as the technical sessions was the opportunity to connect with leaders from companies like Apple, Posco, Goldman Sachs, K-Bank, IBM, Kakao, and Adobe. This wasn't just networking; it was an opportunity to exchange ideas with leaders from other data-critical industries—from manufacturing and finance to platform and SaaS.

We found that when we described our unique challenge of processing petabytes of unstructured video data in real time to ensure trust and safety, it resonated deeply with the industry leaders, partners, and data executives at the conference. Whether in finance, logistics, or tech, everyone is facing their own version of the 'real-time context' problem. The conversations confirmed that PYLER is not just solving a niche problem; we are solving a universal, cutting-edge data problem at extreme scale.


The Future: Trust Through Understanding

PYLER engineers Hyeongjun Park and DongChan Park (center and right) connect with an infrastructure engineer from a quantitative firm (left) at SingleStore Now 2025.


We left SingleStore Now 2025 with a reinforced conviction. The industry is waking up to a problem we've been solving for years: that AI without real-time context is a liability, not an asset.

The event confirmed that our architectural foundation is sound, but our mission is what truly sets us apart. The era of AI agents is here, but it requires a new standard of validation and trust. We don't just build AI; we engineer the understanding that makes it trustworthy. We’re inspired to see partners like SingleStore building the infrastructure that helps make that possible. The future of AI will not be defined by the models themselves, but by our collective ability to safely and verifiably connect them to the real world.


This piece was written by Hyeongjun Park, Backend Engineer and X-Ops Squad Lead at PYLER.

Tech

2025. 11. 14.

End of the Foundation Model Competition: Artificial World Needs a Validation Layer

For half a decade, the world of AI revolved around a single race: who could build the biggest, smartest, and most general foundation model. It was a battle fought with billions of dollars, GPU clusters the size of small nations, and an obsession with benchmark dominance. But as the dust settles, it is becoming clear that this competition is ending not because anyone has truly won, but because the game itself has changed.

1. The Age of the Model Arms Race

In the early 2020s, AI progress was defined by scaling. The mantra was simple: more data, more parameters, more GPUs. OpenAI, Anthropic, Google, and a few others poured vast sums into foundation models that grew from billions to trillions of parameters. The field of competition was limited to companies that could spend over a billion dollars on computing resources, data, and talent. While the public narrative focused on rivalry, in practice, these models began to converge in architecture, language, and behavior.

2. The Rise of Vertical Intelligence

The next wave of value creation will come from specialized and deeply vertical intelligence, AI that understands a domain with surgical precision rather than encyclopedic breadth. Whether it is medical diagnostics, financial risk, or brand safety in video content, the competitive frontier has shifted from foundation to application infrastructure.

3. The Post Model Era

In this new phase, the differentiator is not who trains the biggest model but who can orchestrate intelligence. Companies that combine reasoning, retrieval, and domain-specific data pipelines will outperform those that merely scale compute. The foundation layer will become infrastructure, much like electricity. No startup can compete by building a power plant anymore.

4. What Choices Do We Have

Since 2021, PYLER has been building a purpose-built multimodal video understanding model for UGC (User Generated Content) validation and retrieval for brands. To transform from a surviving company into a thriving one, we faced two choices:

  1. Create more B2B AI Agents, which could generate explosive short term revenue.

  2. Build a validation layer for Gen AI applications.


We chose the second path. We believe it is the only way to solve a foundational problem that the world will soon suffer from. The first path, creating more B2B AI Agents for quick revenue, will inevitably be dominated, and eventually replaced, by companies like OpenAI, Google, and Anthropic.
Consider the OS and Cloud eras. Operating systems had built-in defenses and cloud companies had security protocols, yet Okta, Wiz, and Palo Alto Networks built billion-dollar businesses by providing third-party validation for enterprises and nations.
Problems caused by humans can be solved by human capabilities. But the problems that will be caused by AI are an entirely different matter.

5. The Validation Layer Imperative

Just as in the OS and platform eras, third-party validation will be essential. The core value of Gen AI applications is generating outputs that meet a user's intention and provide satisfaction. However, if these applications validate themselves too aggressively, it can weaken their core generative competence.
At the same time, enterprises and nations cannot validate all AI outputs using only their internal capabilities. There is a critical lack of infrastructure-level, third-party validators to secure their interests and safety. This is the modern equivalent of the seat belt in the automotive era.


In closing…

The end of the foundation model competition does not mean the entire competition is over. It simply means the participants in that specific race have been decided. The new winners of the next AI era will not be those who built the model, but those who understand how to build the validation layers that protect human profit, rights, and safety.


As steam once symbolized the dawn of the Industrial Revolution, code and data now mark the rise of the AI Revolution. But every revolution needs its validation layer — the mechanism that keeps progress aligned with trust and safety. (Painting – The Gare Saint-Lazare, Claude Monet (1877))

Jaeho Oh is the co-founder and CEO of PYLER.

Jaeho’s Lens

2025. 11. 7.

[Pre-TrustCon 2025 Release] Video, AI, and the Evolving Landscape of Trust & Safety

As AI-generated video content grows at an explosive rate, the challenges of ensuring digital trust and safety (T&S) have become more urgent and complex than ever. TrustCon 2025 will gather global T&S experts to tackle critical issues such as deepfakes, disinformation, and content validation in the age of AI. Pyler will participate to showcase how our Video Understanding AI addresses these challenges with scalable, multimodal solutions. Through this effort, we aim to strengthen platform integrity, improve ad trust, and help build a safer digital future.


TrustCon 2025: Why the Urgency Now?

Beyond its convenience and connectivity, our digital world casts a shadow of its own. Harmful content—hate speech, misinformation, inappropriate images and videos—erodes the integrity of online spaces and threatens the very trust we place in platforms and brands. Safeguarding users and upholding the 'trust' and 'safety' of this digital ecosystem, a field known as Trust & Safety (T&S), has become more critical than ever. T&S extends beyond simply removing harmful content, known as Content Moderation, to encompass the proactive assurance of content suitability and reliability on platforms, which we term Content Validation.

Link : TrustCon2025

TrustCon 2025 convenes leading T&S professionals from around the globe to diagnose the complex challenges emerging in our rapidly changing digital environment and to forge innovative solutions. TrustCon is not merely a conference; it’s a unique global forum dedicated to sharing real-world cases, learning from past failures, and engaging in deep discussions on T&S's most pressing issues. These include AI ethics, deepfakes and synthetic media manipulation, child safety, disinformation campaigns, and navigating evolving regulatory compliance. It is a critical arena for redefining digital platform responsibilities and collectively shaping a safer online world.


The Age of Video and AI: Unveiling T&S’s New Pandora’s Box

A particularly alarming trend impacting the T&S landscape is the explosive growth of video content coupled with the rapid advancement of Artificial Intelligence (AI). Cisco predicts that by 2027, an astounding 89% of global IP traffic will be video content. This sheer volume intensifies the burden of content moderation and exponentially escalates the complexity of T&S management.

The Invisible Threat: How AI-Generated Harmful Content Erodes Trust and Drains Ad Budgets

Furthermore, the proliferation of AI, especially generative AI, presents a double-edged sword. Sophisticated deepfake videos designed to spread misinformation and new forms of harmful content, such as AI-generated images, pose unprecedented challenges for existing T&S systems to detect and block. Traditional content moderation methods, relying solely on keywords or metadata, prove utterly inadequate in discerning the nuanced context or hidden intent within video content. This limitation directly impedes effective Content Validation, ultimately leading to severe repercussions. When brands' advertisements appear alongside inappropriate content, it erodes brand trust. Indeed, 64% of consumers report losing trust in a brand after encountering its ads next to unsuitable content. As traditional methods fail to cope with an exponentially increasing volume of AI content, this is no longer a challenge that can be overlooked.


Pyler Leadership’s Vision: Unlocking New Possibilities in the Complex World of Video Content Management

Pyler's leadership team, including our CEO, will be actively participating in TrustCon 2025. Our goal is to gain a deeper understanding of the profound challenges faced by content creators and platforms regarding video T&S, and to explore how Pyler can contribute to resolving these critical issues. Through direct engagement with T&S experts at TrustCon, we aim to reaffirm our conviction that AI can tackle the complexities of video content management far more effectively than traditional methods. Moreover, we seek to validate that Pyler possesses the unique technological capabilities to effectively address these formidable challenges today.

Pyler is laser-focused on developing and advancing our proprietary Video Understanding AI model, designed to comprehend video content much like a human would. This sophisticated model leverages a multimodal approach, holistically analyzing video, text, and audio to discern not just superficial elements, but also the nuanced visual, auditory, and contextual meanings within content.

Our commitment to scale is evident: Pyler is already processing over 300 million videos, and our daily processing volume is rapidly increasing. This extensive experience with large-scale data fuels the continuous evolution of Pyler's AI model, making it faster, more accurate, and more scalable in analyzing and understanding video content. As Pyler’s Video Understanding AI model advances, it will revolutionize the efficiency of vast-scale Content Moderation and dramatically enhance Content Validation capabilities—enabling precise judgment of content suitability even within complex contexts. We firmly believe this will not only improve ad efficiency but also fundamentally contribute to resolving T&S challenges across the board, ultimately fostering a safer and healthier digital ecosystem for society.


Beyond TrustCon 2025: Pyler's Commitment to a Trusted Digital Future

TrustCon 2025 serves as a crucial platform for Pyler to deepen our understanding of the T&S landscape and explore collaborative opportunities with global experts. Through this event, we will present concrete solutions to the urgent challenges in video content T&S, demonstrating how our unique AI technology contributes to building a more reliable digital ecosystem. 

Pyler is dedicated to leveraging our Video Understanding AI technology to create social value by ensuring the safety and trustworthiness of online content. We invite you to join us on this new journey, starting at TrustCon 2025, as we strive to build a more secure digital future.

PYLER

TrustAndSafety

2025. 7. 22.

The Invisible Threat: How AI-Generated Harmful Content Erodes Trust and Drains Ad Budgets

AI-generated fake videos are spreading rapidly, misleading vulnerable viewers like the elderly. These videos evade traditional filters, causing ads to appear on harmful content and wasting ad spend. A healthier ad ecosystem requires better content detection and clearer insight into where ads are placed.


In today's digital landscape, video content has become an indispensable part of our daily lives, projected to make up 80-90% of global internet traffic by 2027. Yet, lurking in the shadows of this vast digital space is a new and unsettling threat: harmful or deceptive content generated by artificial intelligence (AI) technology. Recent reports of "AI fake documentaries" circulating on online platforms, with outlandish titles like "Man Who Impregnated 32 Women" or "50-Something Korean Man Impregnated Three Mongolian Beauties," reveal a shocking reality.


Source: YouTube

These videos, crafted by combining AI image generators, synthesized narration, and provocative captions and thumbnails, are racking up millions of views despite their preposterous narratives. The alarming issue is that some elderly viewers are mistaking these fabricated videos for real events. Comments such as "Lucky man, I'm envious" and "Solution to population decline" underscore the severity of this misunderstanding. This phenomenon cunningly exploits low digital literacy among the elderly and their vulnerability to misinformation. Furthermore, the societal impact of AI-generated content cannot be ignored, as these videos are even consumed in public spaces, causing discomfort to those nearby. The spectrum of harm these videos inflict is vast: they present themselves as "life wisdom" or "life lessons" while incorporating racist tropes, excessive sexual objectification, and distorted fantasies about age gaps.


The Proliferation of AI-Generated Content and the Advertiser's Dilemma

The unchecked proliferation of such AI content presents advertisers with a significant, often unseen dilemma. In the digital advertising market, the "context" in which an ad appears remains a challenging issue for many advertisers. Traditional brand safety measures have primarily relied on keywords or metadata, but AI-generated content cleverly sidesteps these filters: even sensational content can be disguised with ordinary titles and descriptions, or with specific words subtly rephrased to bypass filtering mechanisms. Moreover, certain AI-narrated "audio novel" videos, which have minimal visual harmfulness, are difficult to detect with existing visual-centric filtering systems.
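
The blind spot is easy to reproduce. The toy example below uses a hypothetical keyword list and titles, not an actual moderation system; it simply shows how a metadata-only filter passes the same video once its title is blandly rephrased.

```python
# Toy illustration: a metadata-only keyword filter versus a rephrased title.
# The keyword list and titles are hypothetical; real systems are far more
# elaborate, but the blind spot -- judging surface text, never the video -- is the same.
BLOCKED_KEYWORDS = {"impregnated", "explicit", "gore"}

def keyword_filter_blocks(title: str) -> bool:
    """Return True if a naive keyword check would block this title."""
    lowered = title.lower()
    return any(keyword in lowered for keyword in BLOCKED_KEYWORDS)

print(keyword_filter_blocks("Man Who Impregnated 32 Women"))             # True: keyword match
print(keyword_filter_blocks("A touching story about one man's family"))  # False: same video, bland title
```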

This opaque and imprecise video inventory environment directly leads to financial losses for advertisers. According to an analysis by Pyler utilizing a Video Understanding AI model on approximately 10,000 campaigns and over 100 million ad placements, it's estimated that a typical brand spends 20% to 30% of its ad budget on video ads placed within unsafe or unsuitable content. This exposure to harmful videos directly translates into wasted ad spending and brand safety concerns for advertisers. Beyond mere budgetary waste, sponsoring creators of harmful videos can draw criticism regarding a company's corporate social responsibility (CSR). Ultimately, this can damage brand value and negatively impact long-term profitability.

The advancement of AI technology is a double-edged sword. While it offers groundbreaking opportunities, it also serves as a tool for mass-producing harmful or deceptive content. Now, more than ever, we must clearly recognize what to trust, how to protect ourselves in the "fake world" created by AI, and precisely where our valuable advertising budgets are being spent. Acknowledging and understanding these complex, multi-layered issues is the crucial first step toward building a healthy and responsible digital advertising environment.

BrandSafety

TrustAndSafety

2025. 7. 17.

Accelerating Video Understanding for ad generation with Visual AI Agents

PYLER supercharges video ad safety with NVIDIA's DGX B200 and AI Blueprint for smarter, safer placements.
Discover how PYLER's real-time video understanding helps brands like Samsung, BYD, and Hana Financial thrive.


At PYLER, we leverage our video understanding AI platform, which deploys visual AI agents to ensure brand safety: it verifies that ads are never displayed alongside inappropriate videos and places them in more contextually relevant settings, protecting brand value and enhancing campaign effectiveness.

With the explosive growth of generative AI, the sheer volume of content uploaded to video platforms has skyrocketed. In such an environment, displaying ads next to content that is not brand-aligned not only increases the marketing cost of reaching the right audiences but also creates a risk of negative brand impressions and potential reputational damage.

Video Understanding AI is crucial for effectively managing and validating content in an era of massive, high-speed uploads. When properly implemented, it becomes a key asset in safeguarding and enhancing a brand’s overall value.

Video content processing is significantly more complex than image or text processing, demanding substantial computational resources from preprocessing to inference. To handle this real-time influx of video content effectively, a high-performance pipeline utilizing optimized computing resources is essential.

In this post, we’ll discuss how PYLER addresses these challenges by adopting the NVIDIA AI Blueprint for video search and summarization (VSS) in conjunction with NVIDIA DGX B200 systems. We’ll explore how we pair NVIDIA software and accelerated computing with PYLER’s proprietary Video Understanding AI to help ensure brand safety at scale.


PYLER Boosts Content Safety with NVIDIA AI Blueprint for VSS and NVIDIA AI Enterprise

PYLER automatically filters, classifies, and analyzes large-scale video data to help advertisers run their campaigns safely and efficiently. To accomplish this, PYLER selected NVIDIA AI Blueprint for VSS, built on the NVIDIA AI Enterprise software platform, offering enterprise-grade support, advanced security measures, and continuous, reliable updates.

NVIDIA AI Enterprise delivers innovative blueprints and commercial stability, enabling PYLER to deploy secure, scalable, and continuously maintained AI workflows.

Key advantages provided by VSS Blueprint include:

  1. High-Quality Video Embeddings

    • State-of-the-art embedding models are essential for capturing rich contextual and semantic information from videos. PYLER integrates the NVIDIA NeMo Retriever nv-embed-qa-1b-v2 model to minimize information loss during the embedding and captioning process.

  2. Scalable Infrastructure for training custom VLMs and Large-Scale Video Processing

    • NVIDIA DGX B200 accelerates custom VLM training and delivers up to 30x faster performance and best-in-class energy efficiency compared to its predecessor. 

    • With the immense volume of video content uploaded daily, GPU-based parallel processing is critical for scalability. PYLER utilizes VSS blueprint’s chunk-based parallel processing and GPU node expansion to dynamically distribute workloads and process video data in real-time. We look forward to enabling this on NVIDIA DGX B200 for inference.

  3. Flexible Verification and Search via RAG

    • PYLER rapidly searches and verifies vast amounts of video embeddings, leveraging Context-Aware Retrieval-Augmented Generation (RAG). This supports automated summarization, indexing, Q&A, and content validation tasks, significantly improving inference speed, accuracy, and context-awareness.
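
A rough sketch of how these pieces fit together is shown below. The names embed_text, vector_index, and llm_judge are hypothetical placeholders for the embedding model, vector store, and judge model in the real pipeline, not the VSS Blueprint API; the sketch only illustrates the chunk, embed, retrieve, and validate flow.

```python
# Hedged sketch of the chunk -> embed -> retrieve -> validate flow described above.
# embed_text, vector_index, and llm_judge are hypothetical placeholders, not
# PYLER or NVIDIA APIs.
from dataclasses import dataclass
from typing import Callable, List, Sequence

@dataclass
class VideoChunk:
    video_id: str
    start_s: float
    end_s: float
    caption: str  # caption/transcript produced upstream for this chunk

def validate_video(
    chunks: Sequence[VideoChunk],
    embed_text: Callable[[str], List[float]],
    vector_index,                               # exposes search(embedding, top_k)
    llm_judge: Callable[[str, list, str], str],
    policy: str,
) -> List[str]:
    """Return a per-chunk suitability verdict using context-aware retrieval."""
    verdicts = []
    for chunk in chunks:
        embedding = embed_text(chunk.caption)              # dense chunk embedding
        context = vector_index.search(embedding, top_k=5)  # nearest labeled examples
        verdicts.append(llm_judge(chunk.caption, context, policy))
    return verdicts
```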


Expected Outcomes for PYLER

With these integrations, PYLER’s Video Understanding AI software can provide:

  1. Maximized Brand Safety

    • Precisely filter, classify, and recommend safe ad placements, ensuring ads never appear next to inappropriate or harmful content.

  2. Enhanced Brand Trust

    • Provide advertisers with robust and reliable AI models, fostering stronger viewer trust. As brands gain consumer confidence, PYLER’s credibility as an industry leader grows in parallel.

  3. Improved Data Pipeline Efficiency

    • Deploy automated, GPU-accelerated workflows optimized at a scale far beyond traditional MLOps pipelines, significantly increasing operational efficiency and throughput.

  4. Faster Service Launch

    • Leverage NVIDIA’s cutting-edge technology stack, including the VSS Blueprint, NVIDIA Metropolis, NVIDIA Dynamo-Triton (formerly Triton Inference Server), NVIDIA NeMo Retriever, and DGX B200 systems, enabling innovative enterprises to quickly launch and scale AI-powered video services with fewer resource constraints.


Customer Success Stories

1. Samsung Electronics: PYLER AiD Redefines Brand Safety Standards for Product Branding Campaigns

  • As a leading global brand, Samsung Electronics faced challenges protecting its brand from appearing alongside misaligned content on digital platforms like YouTube. By integrating PYLER AiD’s real-time AI video analysis into its ad campaigns, Samsung Electronics automatically blocked ads from content categories that threaten brand values, such as violence, explicit content, hate speech, and fake news, in line with global standards. The AiD dashboard ensured brand safety and trust with transparent real-time monitoring and performance tracking.

  • Results:

    • Significant Reduction in Harmful Content Exposure: PYLER AiD reduced ad exposure to sensitive or inappropriate content that could threaten brand value by 77%, setting a new standard for brand protection.

    • Optimized Ad Budget Spend: Preventing ads from appearing on unsuitable content allowed Samsung to focus its marketing budget on safe, high-value, brand aligned placements, greatly increasing advertising effectiveness.

    • Enhanced Brand Trust and Value: Proactively eliminating potential brand crises and consistently delivering a safe and trustworthy brand image to consumers successfully defended and strengthened brand reputation and customer trust.

2. BYD: PYLER is Ensuring Successful Korean Market Entry Through Brand Protection and High Intent Moment Targeting.

  • As a global leader in electric vehicles, BYD faced the critical task of successfully launching its first model in Korea, the 'Atto 3'. It was essential to create a positive first impression while proactively managing the risk of ads appearing alongside potentially negative content (e.g., critical reviews, defect discussions) that could harm the new brand's image upon market entry.

  • PYLER's AI-Powered Contextual Targeting and Brand Suitability Management technology precisely targeted BYD ads to positive or neutral YouTube content about EVs and small SUVs, such as vehicle reviews, test drives, and competitor comparisons. Simultaneously, the AI excluded negative videos by analyzing content tone and context. This ensured BYD ads appeared only in positive, relevant, high-intent moments aligned with potential customer interest, achieving contextual targeting and brand suitability at once.

  • Results:

    • Maximized Exposure in Brand-Friendly Contexts: Ad placements within safe, positive, and relevant EV/SUV video contexts increased exponentially (more than 100x) compared to previous methods, delivering high relevance and brand suitability to potential customers.

    • Significantly Boosted Potential Customer Interest: By effectively filtering out negative noise and concentrating ads on highly relevant content, BYD captured genuine potential customer interest, driving click-through rates (CTR) nearly 4x higher than baseline campaigns.

    • Successful Market Entry & Enhanced Brand Image: PYLER's integrated AI solution effectively managed the risks associated with new market entry, built a positive brand image, and played a key role in BYD's successful initial launch in the Korean market, receiving over 1,000 pre-order applications in the first week.

3. Hana Financial Group: PYLER AiM Optimizes Performance Across Diverse Campaigns with Automatic Video Placement Recommendation and Targeting

Hana Financial Group, one of South Korea’s largest financial holding companies, manages a diverse portfolio including banking, credit cards, securities, and asset management. Facing the challenge of running multiple YouTube campaigns with distinct goals, such as attracting public pension accounts, providing retirement pension information, and promoting a public service announcement against illegal gambling, the group needed to reach targeted audiences within relevant content contexts while achieving varied KPIs like conversion rates, ad efficiency, and user engagement.

To address this, Hana Financial Group implemented PYLER AiM, an AI-powered contextual targeting solution. PYLER’s AI video analysis enabled precise targeting of specific YouTube content moments aligned with each campaign’s objective. For example, the Public Pension campaign focused on retirement planning and celebrity ambassador content, while the Retirement Pension campaign targeted videos about post-retirement finance. The Anti-Gambling PSA featured esports star Faker and targeted gaming and esports content popular with younger viewers, maximizing engagement and campaign effectiveness.

  • Results:

    • Optimized Performance for Each Campaign Goal:

      • (Public Pension) Achieved high average conversion rates exceeding 1.5%, demonstrating effective customer acquisition, with specific target groups surpassing an outstanding 3.6% conversion rate.

      • (Retirement Pension) Showcased dramatic improvements in ad efficiency compared to general targeting, with click-through rates (CTR) increasing more than eightfold and cost per click (CPC) reduced by over 70%.

      • (Public Service Ad) Successfully connected the featured celebrity 'Faker' with fan interests, delivering not only a noticeable uplift in CTR but also high user immersion: average view duration per impression and 100% video completion rates doubled compared to general campaigns. It also reached more unique users efficiently, with average frequency halved.

    • Accurate Target Channel Reach: Across all campaigns, PYLER's contextual targeting concentrated ads within highly relevant channels, demonstrating significantly better reach within key target channels compared to general targeting. This confirmed the effective allocation of the ad budget to the most suitable potential customers.


Conclusion and Future Outlook

By adopting agentic AI through NVIDIA AI Blueprint for video search and summarization and DGX B200 systems, PYLER aims to strike a balance between protecting brand value and maximizing advertising efficiency in a market flooded with video content.

With large-scale GPU parallel processing, efficient embeddings, and Context-Aware RAG for search and verification, we will be able to detect negative content almost in real time and reliably adhere to brand guidelines.

Going forward, PYLER plans to continue actively harnessing NVIDIA’s latest AI technology to deliver fast, high-quality services that connect video content safely and appropriately across various domains.

AdTech

ContextualTargeting

PYLER

VideoUnderstanding

2025. 6. 17.

PYLER CEO Oh Jaeho Named to Forbes Asia’s '30 Under 30' List

PYLER CEO Oh Jaeho named to Forbes Asia’s ‘30 Under 30’ for reshaping video advertising with AI.
Recognized for leading PYLER’s rise in brand safety innovation across global markets.


Recognized as a Young Leader Shaping the Future of AdTech

In May 2025, Oh Jaeho, Co-Founder and CEO of PYLER, was named to the Forbes Asia “30 Under 30” list in the Marketing & Advertising category. The list celebrates young leaders across Asia who are transforming their industries, and Oh’s selection marks a significant milestone for Korea’s emerging AdTech ecosystem.


Revolutionizing Ad Placement in Video: PYLER’s AiD

Founded in 2021, PYLER set out to solve a long-standing issue in the video advertising industry: brand safety. The company’s flagship solution, AiD, is an AI-powered platform that ensures ads appear in brand-safe video environments—particularly on platforms like YouTube.

AiD analyzes the content of videos in real time, evaluating tone, context, and messaging to determine whether an ad should be placed. By automating what previously required manual review by agencies and marketers, AiD not only protects brand image but also enhances ad effectiveness at scale.

Today, PYLER works with global brands like Samsung, Bulgari, and BYD, as well as leading advertising agencies including Cheil Worldwide and Innocean.


Proven Growth and Technology with $24M in Total Funding

To date, PYLER has raised a cumulative 34 billion KRW (~$24 million) in funding. In July 2024, the company secured a 22 billion KRW Series investment from top-tier investors such as KDB Bank, KT Investment, Stonebridge Ventures, and SV Investment—validating its technical excellence and market potential.


Beyond the Forbes List: A Signal of What’s to Come

Being named to the Forbes “30 Under 30” list is more than an accolade—it's a recognition of the transformative work being done in a rapidly evolving digital ad landscape. PYLER continues to lead this transformation by building AI solutions that make advertising smarter and safer.

Oh Jaeho’s inclusion is a strong signal of the global impact PYLER aims to achieve in the years ahead. As the company moves forward, it remains committed to setting new standards in video-based advertising and delivering AI solutions that marketers and brands can trust.


You can view the full list of honorees in the Marketing & Advertising category of Forbes 30 Under 30 – Asia 2025 at the link below:
Forbes 30 Under 30 – Asia 2025

To learn more about one of this year’s honorees, PYLER CEO Jaeho Oh, visit his Forbes profile here:
Jaeho Oh on Forbes

AdTech

PYLER

2025. 5. 15.

PYLER Becomes First in Korea to Deploy NVIDIA DGX B200

PYLER becomes the first in Korea to adopt NVIDIA’s DGX B200, redefining AI infrastructure for AdTech.
With 30x faster performance, PYLER accelerates brand safety, contextual targeting, and global AI leadership.


Pioneering AI Infrastructure with Next-Gen NVIDIA Hardware

On February 27, 2025, PYLER became the first company in Korea to deploy NVIDIA’s latest AI accelerator, the DGX B200, marking a major leap in the company’s AI infrastructure. As NVIDIA’s newest generation AI system, the DGX B200 has drawn significant global attention since its release. PYLER's adoption—preceding even major institutions both locally and abroad—highlights the company's position at the forefront of AI innovation.

A ceremony was held to commemorate the milestone, symbolizing PYLER’s renewed commitment to redefining advertising technology through AI.


DGX B200: A New Benchmark in AI Performance

Equipped with NVIDIA’s next-generation Blackwell GPU architecture, the DGX B200 offers up to 30x improved computational performance compared to its predecessor. It delivers industry-leading energy efficiency and is purpose-built for training and inference of large-scale, video-centric AI models.

For PYLER—whose core business involves real-time video understanding across vast volumes of online content—the DGX B200 is a game-changing addition that will significantly boost its technical capabilities.


Powering the Next Generation of AI AdTech

The DGX B200 will supercharge PYLER’s core AI solutions across the board, especially in three key areas:

  • Brand Safety: Faster and more accurate detection of harmful content and ad placement control

  • High-intent Moment Targeting: Enhanced precision in real-time targeting based on video context

  • Contextual Ad Optimization: Smarter prediction of user responses and ad relevance

PYLER’s flagship solution, AiD, has already expanded its footprint both domestically and globally, collaborating with major advertisers like Samsung Electronics, Nongshim and KT Corporation. This infrastructure upgrade is expected to greatly enhance customer experience and maximize advertising performance.


Building World-Class Video Understanding for Advertising

Jihun Kim, CTO of PYLER, stated:
“With the DGX B200—the first of its kind in Korea—we’ve laid the foundation for building world-class video understanding capabilities tailored to the advertising domain. We will continue to push the boundaries of AI innovation to ensure that brand messages are delivered in the safest and most contextually relevant environments possible.”

Under its strategic partnership with NVIDIA, PYLER is focused on leading the future of AI-powered advertising—from content moderation to contextual AdTech. The adoption of DGX B200 is more than just a hardware upgrade—it marks a pivotal step in realizing PYLER’s vision of raising both the quality and trust of digital advertising through cutting-edge AI.

ContextualTargeting

PYLER

VideoUnderstanding

2025. 2. 27.

PYLER Secures $16.9M in Series Funding to Advance AI Brand Safety Solutions, bringing Total Investment to $26.2M

PYLER raises $16.9M to scale its AI-powered brand safety solution, AiD.
Trusted by top brands, PYLER aims to lead globally in video understanding and ad transparency.


Recognized for Its Cutting-Edge AI Brand Safety Technology

PYLER, a leading provider of AI-powered brand safety solutions, has successfully secured $16.9M in its latest series funding round. Key investors include Stonebridge Ventures, Korea Development Bank, SV Investment, and KT Investment—underscoring strong confidence in PYLER’s technology and growth potential.


‘AiD’: Helping Brands Regain Control Over Ad Placement

PYLER’s flagship solution, AiD, leverages AI to analyze the context of YouTube videos where brand ads are placed, blocking exposure to harmful or inappropriate content. The solution protects brands from being associated with adult content, hate speech, fake news, and fringe religious material that could damage brand reputation.

With millions of new videos uploaded to YouTube every day, manual review is no longer feasible. In this context, investors recognized AiD’s value in empowering advertisers with greater control and visibility over where their ads appear.


Brand Safety Directly Impacts Consumer Trust and Behavior

According to a joint report published in January by PYLER and the Korea Broadcast Advertising Corporation (KOBACO), 89.5% of consumers said brand safety is important, while 96% stated they would not purchase from advertisers who neglect it. These figures clearly show that brand safety plays a critical role in shaping both consumer trust and purchasing behavior.


On Track to Become a Global Leader in Video Understanding

“Our solution has already been tested and trusted by major Korean brands like Samsung, Hyundai Motor Company, Cheil Worldwide, and Innocean,” said Jaeho Oh, CEO of PYLER. “With this new funding, we aim to establish ourselves as one of the most competitive players in video understanding on a global scale.”

Powered by advanced AI and real-time content analysis, PYLER remains committed to building a trustworthy advertising environment—for both brands and consumers.

AdTech

PYLER

2024. 7. 29.

Brand Safety in Korea: How It Falls Behind Global Standards

Korea still lags behind global standards in brand safety — but change is on the horizon.
Explore how international frameworks and AI solutions like AiD can help close the gap.


A Stark Contrast in Brand Safety Standards

In Korea, discussions around brand safety are still in their early stages. Even before tackling brand protection, the country lacks a fundamental legal framework to systematically develop the advertising industry.

Recently, there has been renewed interest in the Advertising Industry Promotion Act, reintroduced in Korea’s 22nd National Assembly. We hope this will become a stepping stone toward a more structured and responsible advertising ecosystem.

In contrast, many global markets have long recognized the importance of brand safety — not only to protect advertiser reputations, but also to reduce the monetization of harmful or inappropriate content. Let’s take a look at how leading countries are addressing brand safety.


Key International Organizations Leading Brand Safety Efforts

  • IAB (Interactive Advertising Bureau)
    Establishes industry standards for digital advertising in the U.S., including terminology, ad formats, pricing metrics, and implementation guidelines.

  • ARF (Advertising Research Foundation)
    Works with ESOMAR (European Society for Opinion and Marketing Research) to standardize ad effectiveness measurement globally.

  • MRC (Media Rating Council)
    Accredits media rating services and ensures quality and transparency in audience measurement.

  • GARM (Global Alliance for Responsible Media)
    A cross-industry initiative led by the World Federation of Advertisers (WFA). Includes major global advertisers, agencies, media companies, and ad tech providers working together to improve brand safety in digital media.

  • TAG (Trustworthy Accountability Group)
    Develops guidelines and certifications to combat ad fraud and brand risk, while collaborating with governments to localize global standards.

  • BSI (Brand Safety Institute)
    Provides education and certification for professionals focused on brand safety and digital advertising ethics.

  • APB (Advertiser Protection Bureau)
    A U.S. initiative led by the 4As (American Association of Advertising Agencies), focused on empowering ad professionals to assess their understanding of brand safety through tools like the Brand Safety Self-Assessment.


Why Brand Safety Is More Than Just a Brand Issue

Brand safety isn’t just about reputation management — it’s a critical part of maintaining a healthy digital ecosystem. By cutting off ad revenue from harmful or misleading content, the industry can help prevent the commercialization of toxicity and crime.

We believe Korea’s advertising market has the potential to mature into one that values both brand integrity and content responsibility. This shift will require not only updated regulations, but also industry-wide awareness and technical investment.

At PYLER, we’re committed to using our AI-powered video understanding technology to contribute to a cleaner, safer digital environment. We aim to challenge the uncomfortable realities of today’s content economy — and build solutions that move the industry forward.

BrandSafety

DigitalAdvertising

2024. 7. 3.

AiD: Solution That Protects Brand Trust

AiD by PYLER is a real-time brand safety solution that uses multimodal AI to automatically detect and block harmful content, improving both brand trust and ad performance. Built to meet global standards, it ensures brand messages are delivered in safe and effective environments.


PYLER’s New Standard for Brand Safety in Digital Advertising

Even as digital advertising becomes more sophisticated, many brands still face a fundamental challenge—the risk of ads appearing next to inappropriate content.

Violent, sexually explicit, politically or religiously biased, and hateful content can seriously damage brand perception. According to a global consumer survey, 64% of respondents said they lost trust in a brand after seeing its ad next to inappropriate content.

To solve this problem at the technological level, PYLER developed AiD, a real-time brand safety solution that automatically detects and blocks sensitive content in video-based ad environments—ensuring that brand messages are delivered in safe and effective contexts.


Fighting Fake News and Content Farming

One of AiD’s core priorities has been identifying and blocking fake news and manipulative content formats, such as clickbait or "storytime"-style videos that distort facts or spread false narratives.

On YouTube, this issue extends to “content farming” or “cyber re-uploaders”—channels that repurpose sensational material to exploit algorithms for visibility and revenue. When ads appear on these types of videos, brands can unintentionally become associated with misinformation or social division.

To prevent this, AiD excludes related content under its “Controversial Issues” category. Its AI models are continuously updated to respond to evolving content types and social issues, ensuring scalable and responsive brand protection.


Real-Time Filtering with Multimodal AI

AiD goes beyond basic keyword filtering. It uses multimodal AI, which analyzes the visual and textual elements of videos simultaneously. Through PYLER’s proprietary Vision Analyzer and Text Analyzer models, AiD classifies and blocks content such as sexual content, hate speech, political or religious bias, and controversial or sensationalized “storytime” narratives.

AiD also supports custom filter criteria, allowing campaign-specific brand safety rules. As a result, advertisers can instantly assess content risk levels and ensure that their ads are placed only in safe, brand-aligned environments.
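
To make this concrete, here is a minimal, hypothetical sketch of how per-modality risk scores might be fused and checked against campaign-specific rules. The category names, score values, thresholds, and max-fusion rule are illustrative assumptions for this post, not AiD’s actual models or logic.

# Hypothetical sketch: fusing per-modality risk scores into a block/allow decision.
# Category names, score values, and thresholds are illustrative only.
from dataclasses import dataclass


@dataclass
class ModalityScores:
    """Risk scores (0.0 to 1.0) per category from one analyzer (vision or text)."""
    scores: dict[str, float]


def fuse_scores(vision: ModalityScores, text: ModalityScores) -> dict[str, float]:
    """Take the highest risk seen by either modality so neither signal is diluted."""
    categories = set(vision.scores) | set(text.scores)
    return {c: max(vision.scores.get(c, 0.0), text.scores.get(c, 0.0)) for c in categories}


def should_block(fused: dict[str, float], campaign_rules: dict[str, float]) -> bool:
    """Block the placement if any category exceeds its campaign-specific threshold."""
    return any(fused.get(cat, 0.0) >= thr for cat, thr in campaign_rules.items())


if __name__ == "__main__":
    vision = ModalityScores({"sexual": 0.05, "hate": 0.10, "controversial": 0.72})
    text = ModalityScores({"sexual": 0.02, "hate": 0.15, "controversial": 0.40})

    # Campaign-specific rules can tighten or relax categories independently
    # (the "custom filter criteria" mentioned above).
    campaign_rules = {"sexual": 0.5, "hate": 0.5, "controversial": 0.6}

    fused = fuse_scores(vision, text)
    print(fused, "-> block" if should_block(fused, campaign_rules) else "-> allow")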


Proven Performance in the Field

AiD has already delivered tangible results in live campaigns:

  • 97% reduction in ad exposure on sensitive content

  • 15% improvement in ad performance (e.g., CVR)

  • Wasted ad spend on high-risk content reduced more than sixfold (639%)

In one case, a brand reduced sensitive-content ad spend from 16.1% to just 2.1%, dramatically improving both efficiency and return on investment. AiD delivers the rare combination of trust protection and performance enhancement.


Built for Global Brand Safety Standards

AiD isn’t just functional—it’s also compliant with global brand safety guidelines.

PYLER is the first Korean company officially registered with the IAB Tech Lab, and AiD evaluates and filters content based on IAB’s globally recognized standards. These guidelines are used by the world’s top advertisers, making AiD a technically verified and transparent solution.


Globally Recognized Technology

AiD is powered by PYLER’s proprietary video AI, which has been internationally acclaimed:

  • CVPR 2022: 2nd place globally in Video AI (competing with Intel, Tencent, ByteDance, etc.)

  • CVPR & ICCV 2023: Research accepted by the world’s top AI conferences

  • 2024: Selected for both NVIDIA and AWS Global Startup Programs

PYLER continues to refine its technology in collaboration with global tech leaders.


Redefining the Basics of Advertising

In today’s digital landscape, advertising is no longer just about what you say—it’s about where you say it.

The context in which ads appear now directly impacts brand trust and consumer decision-making. AiD eliminates the need for manual review or guesswork. It enables advertisers to automatically avoid risky placements and run campaigns with both safety and efficiency in mind.

Now is the time for brands to make a strategic, technology-driven choice—to speak only in spaces that match their values. At PYLER, we are committed to creating a safe space for brands to communicate confidently, backed by AI and built for the future.



Image Source: AiD Dashboard

BrandSafety

TrustAndSafety

2024. 4. 4.

EU’s Digital Services Act: Heavy Fines for Failing to Moderate Harmful Content

The EU’s Digital Services Act is reshaping online accountability — with massive fines for non-compliance.
As platforms scramble to moderate harmful content, brands must rethink where their ads truly belong.


What Is the Digital Services Act (DSA)?

The Digital Services Act (DSA) is a comprehensive EU regulation designed to hold online platforms accountable for the spread of illegal and harmful content — such as fake news, hate speech, and child exploitation. Platforms must remove such content swiftly and objectively, label AI-generated content, and ban targeted advertising based on sensitive data such as religion, sexual orientation, or content aimed at children and minors.

Failure to comply can result in fines of up to 6% of global annual revenue.

Source: Naver Encyclopedia
Companies affected: Google, Bing, YouTube, Facebook, X (formerly Twitter), Instagram, TikTok, Wikipedia, Apple, AliExpress, LinkedIn, and more


Regulating Big Tech Responsibility

DSA specifically targets Very Large Online Platforms (VLOPs) — those with over 45 million monthly active users in the EU. So far, 17 platforms and 2 search engines have been officially designated, including Google, Meta, X, and TikTok.

According to EU Internal Market Commissioner Thierry Breton,

“Compliance with the DSA will not only prevent penalties but also strengthen brand value and trust for these companies.”

European Commission President Ursula von der Leyen echoed this, saying:

“The DSA aims to protect children, society, and democracy through strict transparency and accountability rules.”


Enforcement Begins: DSA in Action

When misinformation and violent content spread rapidly across platforms following the Israel-Hamas conflict, the EU launched an official DSA investigation into X, questioning its ability to manage illegal content.

X responded that it had removed or labeled tens of thousands of posts and taken down hundreds of Hamas-linked accounts. Meta also reported deleting 800,000+ pieces of war-related content and establishing a special operations center for rapid content review.

Major platforms are now:

  • Removing recommendation algorithms based on sensitive user data

  • Adding public reporting channels for flagging illegal content

  • Filtering extremist or graphic content more aggressively

These actions are motivated by more than goodwill — DSA violations can trigger massive fines or even temporary bans from operating in the EU.


A Broader Vision: EU’s Digital Rulebook

The DSA is part of the EU’s digital governance trifecta, which also includes:

  • DMA (Digital Markets Act): Prevents anti-competitive practices by “gatekeeper” firms like Alphabet, Amazon, Apple, Meta, ByteDance, and Microsoft

  • DNA (Digital Networks Act): Aims to foster a unified digital market and promote investment and innovation in infrastructure and emerging players

Together, these laws enforce transparency, user protection, and fair competition in the EU digital ecosystem.


What About Korea?

While the EU pushes ahead with strong tech regulation, South Korea has yet to enact a comparable law to hold Big Tech accountable for algorithm transparency or content responsibility.

Civil society groups argue that Korea should move toward a comprehensive legislative framework, especially as:

  • Big Tech dominance threatens media diversity

  • Small businesses and content creators are increasingly dependent on platform decisions

  • Algorithmic news feeds raise concerns about information control

According to Oh Byung-il, head of Korea’s Progressive Network Center:
“Korea has long prioritized nurturing its domestic tech industry while overlooking critical issues like privacy and fair trade. The EU’s example shows it’s time for Korea to start serious discussions.”


Final Thoughts

From fake news to hate speech, the DSA reflects a growing global demand for platform responsibility. With major players like X, Meta, and TikTok scrambling to comply, it’s clear that user safety and algorithmic transparency are no longer optional.

In Korea and beyond, it’s time for governments and platforms alike to acknowledge their role in protecting the digital public — and for brands to ask hard questions about where their ads appear and what values they may be unintentionally endorsing.

AdTech

BrandSafety

DigitalAdvertising

TrustAndSafety

2023. 11. 1.

When Advertising Backfires: How Brands Lose Trust by Paying for the Wrong Placements

One poorly placed ad can damage years of brand trust — just ask Applebee’s.
Discover why brand safety matters more than ever in today’s volatile media landscape.


Are Video Ads Building Brand Value — or Destroying It?

Most of us are familiar with the Russia-Ukraine war, which began on February 24, 2022, when the Russian president announced a "special military operation" and invaded Ukraine — an event that continues to unfold today.

But what does war have to do with advertising?

Source: Wikipedia

In the early hours of the invasion, Applebee’s, a casual American dining chain, found itself at the center of a global controversy. During a CNN broadcast covering air raid sirens over Kyiv, Applebee’s cheerful cowboy-themed commercial aired simultaneously in a split-screen format known as squeezeback, where ads are shown alongside live news to increase viewability.

The result? One half of the screen displayed a warzone, the other half showcased a light-hearted ad.
Viewers were shocked.

Source: CNN/Screenshot

The ad immediately went viral on social media, with many accusing Applebee’s of insensitivity or even indirectly sponsoring war. Although the company promptly issued a statement expressing concern about the war and disappointment in the broadcaster, the damage was done.
A significant investment in premium ad space ended up deeply harming the brand.


Chapter 1:
What Is Your Brand Really Associated With?

Think of a brand — now think of the model or celebrity associated with it.
For many in Korea, the pairing of Gong Yoo and KANU coffee is an iconic example. Since their campaign began in 2011, the brand and spokesperson have grown together in public perception.

That’s why brands carefully select models who align with their values and are unlikely to stir controversy.

But how much effort goes into choosing where the ad will appear?

If Applebee’s had used a high-profile celebrity in that cowboy ad, it’s likely that individual would also have faced backlash. Yet few brands spend even one-tenth as much time choosing ad placements as they do vetting their spokespeople.

Source: CHEQ, Magna & IPG Media Lab (2018)


The Data Is Clear

According to research by CHEQ, Magna, and IPG Media Lab (2018):

  • Purchase intent drops by half when ads appear near harmful content

  • Brand recommendation likelihood drops 50%

Inappropriate ad placements severely damage trust, especially when they’re linked to national tragedies, hate content, or political extremism. For public companies, such reputational risks can even impact quarterly earnings and stock prices.

As more shocking and controversial content floods platforms to chase views and revenue, the risks for brands grow — and fast.


Chapter 2:
Rethinking Priorities in Advertising

What KPIs do marketers track for video campaigns?

CPV? CPM? VTR? Conversion rate?
All are valid metrics — but here's a more important question:

If you walked into your CEO’s office and asked,
“Which matters more — protecting the brand’s value and stock price, or improving media performance KPIs?”

What do you think the answer would be?

Most CEOs would agree: brand integrity always comes first.

Advertising exists to strengthen brand image and ultimately drive growth. But ironically, it can undermine brand equity when not managed with care. Marketers who focus solely on performance numbers may unknowingly put the brand at risk.


The Call to Action:
Make Brand Safety Everyone’s Business

It's time for brands to:

  • Establish robust processes for ad placement review

  • Monitor the content context of their media buys

  • Treat brand safety as a company-wide priority, not just a marketing concern

Digital content is becoming increasingly toxic, extreme, and monetized, and platforms aren’t always transparent. Marketers must now proactively assess the environments where their ads appear — and recognize that it’s no longer just about ROI.

Brand safety isn’t optional. It’s a strategic imperative.
And protecting your brand starts with knowing where your message lives.

BrandSafety

2023. 1. 30.

PYLER: Solving Challenges in the Digital Advertising Market with AI

PYLER's AiD safeguards brands in the risky world of video advertising with AI-powered contextual targeting.
Ranked 2nd globally at CVPR 2022, PYLER proves its cutting-edge capabilities in video understanding AI.


Launching AiD – Protecting Brand Safety in the Era of Video Advertising

PYLER addresses the growing brand safety concerns in the digital video advertising space with world-class Video Understanding AI, offering contextual video marketing solutions tailored for a new era.

In 2022, PYLER proved its technological prowess at CVPR, the world’s top AI and computer vision conference, by achieving outstanding results alongside global tech giants such as Intel, Tencent, and ByteDance. Through proprietary computer vision technologies and a brand safety algorithm built on IAB (Interactive Advertising Bureau) standards, PYLER has emerged as a highly competitive player.

Recognizing the seriousness of brand safety issues in video advertising, PYLER has launched AiD, an AI-powered solution to tackle these challenges head-on. As a leader in brand safety and contextual targeting in Korea, PYLER aims to raise industry awareness and restore an advertising environment where great ads are matched with trustworthy content.


Chapter 1:
Ranked 2nd Globally at CVPR 2022

PYLER was invited as a finalist to CVPR 2022, the world's largest conference on computer vision and pattern recognition, and proudly secured 2nd place in Track 3 of the LOVEU (Long-form Video Understanding) Workshop.

This challenge tested how effectively AI models could learn from instructional videos and scripts, and guide users through step-by-step tasks. In simple terms, the AI was required to interpret tutorial videos and then provide helpful answers when a user asked questions — similar to testing how well the AI "understands" the manual.

PYLER’s multimodal model—which integrates video, image, and text data—was praised for its ability to align complex questions with context and deliver user-relevant responses in sequential steps, a particularly challenging task.

Dongchan Park (CTO) and Sangwook Park (ML Lead) of PYLER’s AI Context Lab shared:

“We competed against leading global corporations and research institutions with significantly fewer resources. Achieving 2nd place was meaningful—especially since we ranked 1st in Recall@3, one of the key evaluation metrics.”


Chapter 2:
AiD – Rebuilding Trust in Video Advertising

Brand safety concerns in platforms like YouTube have become severe across global markets, including South Korea. According to Pixability, one-third of total ad spend in 2021 was unintentionally spent on harmful or inappropriate content.

In one notable incident, KOBACO (Korea's only public ad agency) placed public service announcements on adult-themed YouTube channels, sparking controversy. Even major Korean corporations are often unaware of the exact content their ads are appearing alongside.

PYLER identified this market inefficiency—where 16% to 37% of ad spend is wasted on brand-damaging content—and created AiD: a 24/7 AI-powered guardian that continuously analyzes video content, flags risky categories, and blocks harmful ad placements.

Source: 2021 Pixability

While global advertisers now prioritize brand safety even above performance, many Korean advertisers remain uncertain about how to address the issue. PYLER hopes AiD will help normalize the ad ecosystem by giving advertisers peace of mind.

In the long run, AiD also aligns with privacy-first trends. It leverages contextual targeting without relying on third-party cookies, delivering efficient and effective ad performance without compromising user privacy.

BrandSafety

ContextualTargeting

VideoUnderstanding

2023. 1. 30.

All

TrustAndSafety

DigitalAdvertising

PYLER

AdTech

VideoUnderstanding

ContextualTargeting

BrandSafety

Jaeho’s Lens

Tech

Brand Safety Summit New York Recap: From Brand Safety to Brand & AI Safety

The AI Paradox We Now Face

PYLER has consistently emphasized the mission to overcome the AI Paradox—the inherent duality where AI delivers overwhelming productivity while simultaneously becoming the systemic source of threats like deepfakes and misinformation—and the broader Crisis of Trust created by the explosive rise of AI-driven content. Our goal has always been to help build the emerging Infrastructure of Trust. The recent Brand Safety Summit New York confirmed that this philosophy is no longer unique to a few early voices—it is rapidly becoming the shared vision of the global industry.

The Summit was a cooperative forum that moved beyond competition, bringing together leaders from platforms such as TikTok, Meta, and Snap; the world’s largest advertising companies—WPP, Publicis, IPG, and Omnicom—and key verification partners including IAS (Integral Ad Science) and Channel Factory. PYLER holds deep respect for the organizers' efforts in convening such a high-caliber group to share insights and build community around an increasingly complex challenge. The event served as a powerful catalyst for awareness and alignment, reinforcing our conviction that the industry’s overall standard for safety is set to rise significantly.


From Brand Safety to Brand and AI Safety: AI Changes the Scale of Safety

The core theme of this year’s Summit was the revolutionary expansion of the safety discussion. Where traditional Brand Safety centered on the limited context of “next-to” adjacency—whether an ad appears beside safe content—we have now entered an era where safety must account for the entire system in which AI participates in content creation, distribution, and recommendation.

Industry leaders made it clear that AI is fundamentally transforming the Brand Safety paradigm. And this shift is not merely a broader scope; AI is reshaping the very form, logic, and meaning of digital content itself. As a result, the question of safety and trust has moved far beyond placement management and into a deeper need to ensure the transparency and authenticity of content and the algorithms that govern it.

Brand Safety used to be about adjacency. Now, it’s about provenance, authenticity, and algorithmic integrity.


Key Insights from the Summit: Translating Safety into Tangible Performance

The sessions at the Summit made one point unmistakably clear: Brand Safety is no longer a cost or a regulatory burden — it is a strategic investment that directly drives performance.

  • The End of the Trade-Off and Contextual Intelligence:

    The era when advertisers had to choose between “safety” and “performance” is over. Speakers emphasized moving beyond the over-blocking caused by keyword-based exclusion and instead selecting brand-positive content through advanced Contextual Intelligence. When combined with audience data targeting, this approach significantly reduces wasted impressions and delivers measurable improvements in marketing performance. Safety and performance converge when context — not keywords — drives decisions.


  • Closed-Loop Measurement and the Growing Role of RMNs:

    Closed-Loop Measurement demonstrated that advertising in verified, trusted environments leads not only to higher engagement but to actual transactions. The advancement of consumer ID tracking, especially through RMNs (Retail Media Networks), marks a shift toward a world where trust directly converts into revenue. In trusted environments, impressions become outcomes.


  • Agentic AI and Irreplaceable Role of Human Judgment:

    As we enter the era of Agentic AI — systems capable of autonomously handling large parts of media buying and optimization — the Summit underscored a crucial point: human oversight remains essential. Contextual understanding and brand values still require human interpretation to ensure AI-driven decisions are aligned with brand identity. AI can optimize the path — but only humans can define the destination.


  • Auditable AI, Transparency, and Creative Leeway:

    Auditable AI — systems whose decisions can be tracked, reviewed, and verified — was highlighted as a core component of ethical governance. Speakers also emphasized that intentionally curating and managing a brand’s “Aura” while giving creators sufficient Creative Leeway strengthens both performance and authenticity. Accountability builds trust; authenticity sustains it.



PYLER's Observation: The Industrial Context Behind the Safety Discussion

While the Summit discussions focused on the ongoing challenges of the AI era, PYLER noted that these issues closely mirror past industry experiences. The UGC (User Generated Content) era of the 2010s—driven by smartphones and YouTube—democratized content creation but also led to the proliferation of low-quality content and misinformation. Brands being placed next to unsafe material and the inherent limits of human review were early warning signs that were already visible at the time.

The crisis of trust defining the AIGC (AI-generated content) era today can be seen as a reproduction of those same challenges—only now multiplied by the explosive complexity and accelerated speed introduced by AI. This historical parallel provides an essential framework for understanding the current AI safety problem and why the stakes are higher than ever.

The crisis hasn’t changed — only the acceleration has.


What PYLER Saw on the Ground: The Transparency Gap in Video Safety Technology

The overall message of the Summit indicated that the industry is well aware of the trust challenges posed by the AI era and is actively discussing potential paths toward a solution. However, one area remained conspicuously absent from clear answers: Video Safety.

While text- and image-based verification technologies are already being used at scale, there was a noticeable lack of practical examples—or even introductions—of the AI technologies and transparency frameworks required to meaningfully address safety for video content, which remains the center of digital media and the most powerful driver of emotional impact.

This gap is closely aligned with what PYLER observed at TrustCon 2025. Although participants universally acknowledged the need for AI-driven approaches to video content moderation, the absence of concrete sessions, case studies, or references to the underlying technology highlighted a major, unresolved challenge for the industry.

The industry acknowledges the problem — but video remains the unaddressed frontier.


Ultimately, the Core is 'How to Build the Infrastructure of Trust'

Observing the entirety of the Summit’s discussions, PYLER solidified one conviction: the AI era requires more than regulations or platform-level self-governance. The real imperative is to build a technological foundation capable of sustaining trust — the Infrastructure of Trust.

The accelerating speed of deepfakes and the growing sophistication of manipulated video content clearly expose the limits of existing systems. This is why the need for Multi-Modal Video Understanding AI felt even more urgent throughout the Summit.

This technology can simultaneously interpret and evaluate all elements of video — visual signals, audio cues, speech, text, behavior patterns, and narrative progression — as a unified context. In the video era, trust must begin with AI that understands video.

Trust is no longer a policy choice — it is an engineering challenge.


PYLER's Direction Post-Summit is Clearer

The Brand Safety Summit New York was not a venue for providing all the answers. However, it served as a crucial milestone — one that revealed what the industry is truly grappling with, where the pain points lie, and what critical tasks remain unresolved.

What became unequivocally clear is that AI must ultimately solve the challenges created by AI, and that Video Understanding AI holds the final key to trust. Based on the problem awareness and technological gaps underscored throughout the Summit, PYLER will focus even more intently on addressing the industry’s most critical and unaddressed challenge — Video Safety — and in doing so, help build the next stage of the Infrastructure of Trust.

The next era of trust will belong to the companies that can understand video, not just detect it.


The Message from Rob Rasko, President of the Brand Safety Summit Series

Rob Rasko, President of the Brand Safety Summit Series, emphasized that given the current challenges surrounding Brand Safety, it is time to reframe and expand the conversation into the broader discourse of “Brand and AI Safety.” This more comprehensive approach reflects the major shift in digital advertising driven by the rapid introduction of AI.

While traditional Brand Safety focused on the constraints of ad placement, adjacency, and keyword-based evaluation, Brand and AI Safety extends the discussion to all elements of media: campaign placement, suitability, target audiences, and performance. Importantly, this reframing is not an abandonment of Brand Safety but an evolution toward a more unified and effective framework—one that aligns with today’s data-driven, AI-enhanced advertising environment.

AI presents both risks and opportunities as it reshapes media placement and content creation, and the Summit’s mission is to help the industry navigate beyond the risks toward the opportunities.

His message was clear: Brand Safety isn’t being replaced — it’s being redefined for the AI era.

AdTech

BrandSafety

ContextualTargeting

DigitalAdvertising

VideoUnderstanding

2025. 11. 17.

Why Real-Time Context is a Critical Bottleneck for AI – PYLER's Take from SingleStore Now 2025

Attending SingleStore Now 2025 felt less like visiting an event and more like a conversation about the future we’re actively building. This event matters to us because its central theme—that AI's success hinges on a highly responsive, real-time context layer—is not just a future trend, but the core technical problem PYLER has been solving at petabyte scale.

The industry is now waking up to a problem we've long understood: AI models are "context-blind." Statistics from the opening keynote by SingleStore CEO Raj Verma (like the fact that fewer than 26% of AI projects achieve ROI) prove that legacy architectures are reaching their limits. They cannot deliver the ultra-low latency, high concurrency, and complex query support needed to provide real-time context.

This challenge is exponentially harder for unstructured video data—the most complex, data-heavy, and high-velocity medium. The keynote discussion resonated deeply with our obsessive focus on solving this high-latency data problem. Real-time context is the difference between preventing brand risk and merely reporting it after the damage is done.


Key Innovations Intersecting with Our Architectural Approach

PYLER team members join SingleStore CEO Raj Verma (front, center) and C-level executives to celebrate the SingleStore Nasdaq closing ceremony in New York City.


Each new capability introduced at the event reflected the same principles we’ve built our platform on — a shared belief in real-time, context-rich data as the foundation of AI. This alignment was clear across three key announcements, which together form a powerful toolkit for the next generation of AI applications.

1. AI/ML Functions: Bringing AI to the Data

SingleStore is embedding AI capabilities (like AI_SENTIMENT(), AI_SUMMARIZE(), AI_CLASSIFY()) directly into the database, accessible via simple SQL. This eliminates the slow, costly ETL pipelines traditionally required to move data to an external AI model and back. For PYLER, this directly aligns with our philosophy of 'bringing compute to the data.' Moving petabytes of video data for external inference is architecturally unfeasible and slow. In-database functions allow us to enrich our contextual analysis in place, drastically reducing latency—a move from batch pipelines to instant, on-the-fly intelligence, and a core principle of our 'Pride-worthy Engineering.'
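
As a rough illustration of what 'bringing compute to the data' looks like in practice, the sketch below runs a plain SQL query that calls in-database AI functions over recently ingested captions. The connection details, table and column names, and the exact signatures of AI_SENTIMENT() and AI_CLASSIFY() are assumptions made for illustration; consult SingleStore's documentation for the real interface.

# Conceptual sketch only: host, credentials, schema, and function signatures are
# assumptions. SingleStore speaks the MySQL wire protocol, so a standard client works.
import os

import pymysql

conn = pymysql.connect(
    host="singlestore.example.internal",  # hypothetical endpoint
    user="analyst",
    password=os.environ.get("SINGLESTORE_PASSWORD", ""),
    database="video_context",
)

query = """
    SELECT
        video_id,
        AI_SENTIMENT(caption_text)                        AS caption_sentiment,
        AI_CLASSIFY(caption_text, 'safe,sensitive,risky') AS risk_label
    FROM video_captions
    WHERE ingested_at >= NOW() - INTERVAL 1 HOUR
"""

with conn.cursor() as cur:
    # The model runs where the data lives, so no caption text leaves the database.
    cur.execute(query)
    for video_id, sentiment, risk_label in cur.fetchall():
        print(video_id, sentiment, risk_label)

conn.close()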

2. Zero Copy Attach: Agile Development on Live Data

SingleStore's new “Zero Copy Attach” feature allows developers to instantly attach smaller, isolated compute clusters to a production database without copying any data. This provides complete workload isolation and independent scaling. For PYLER's R&D, this is a massive enabler, as it allows our engineers to experiment fearlessly. We can now test new validation models or AI agents on live, production-scale data without jeopardizing production performance or incurring massive data duplication costs. This accelerates our innovation cycle from weeks to days, allowing us to test new trust models without compromising our core service.

3. Aura Analyst: The Natural Language Data Agent

The speed is only half the battle. The other half is accessibility. Aura Analyst is a new "context-aware" natural language agent system that allows business users to query data using plain English, with all queries being auditable and governable. While we may not embed this in customer-facing dashboards immediately, we see immense value in Aura Analyst for our internal operations, precisely because it aligns with our core philosophy of data democratization. We believe that access to data and the opportunity for analysis must be democratically guaranteed to everyone, not just developers. This isn't just an ideal; it's a core value for our high-speed growth. Tools like Aura Analyst can dramatically boost operational efficiency by empowering our non-developer teams to get insights without a SQL bottleneck.


Our Perspective: How We Architected for Trust

This vision of a real-time, context-aware AI future is not just theoretical for us at PYLER—it is the challenge we solve daily. We provide real-time brand safety for global brands like Samsung Electronics and LVMH.

Our previous architecture (on PostgreSQL) faced the exact problems the industry is now discussing. Our service architecture is inherently complex, mixing high-throughput transactional (OLTP) queries from real-time ad metrics with heavy analytical (OLAP) queries from our massive video analysis database. Our previous PostgreSQL-based system could not guarantee latency for this mixed workload, which directly impacted user experience. Our engineering challenge was that traditional OLAP-optimized systems are less efficient when handling high-concurrency ingestion, and OLTP-optimized systems are not designed for complex analytical joins. We were forced into a brittle system of materialized views and nightly batch jobs, which is the very definition of 'stale context.'

This is how we solved the problem: Our engineering decision was to move to an HTAP architecture. By migrating to SingleStore, we eliminated the core bottleneck: the join latency between live ad-metrics data and petabyte-scale video analytics. This new design allows us to serve both query patterns concurrently with radically improved performance, which has been a critical factor in dramatically improving our user experience.

This isn't just a theoretical number. In our internal benchmarks, SingleStore delivered query speeds up to 100x faster for complex OLAP-style analytical queries compared to our previous partitioned PostgreSQL setup. For PYLER, this translates directly into two critical outcomes: a vastly improved, real-time experience for our user-facing analytical dashboards, and the engineering delta that separates 'brand safety' from 'brand risk.' It’s the difference between catching harmful video content before an ad impression and catching it only after the damage is done. This focus on how we architect for trust is what defines us.
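
For readers who want a feel for the workload shape, here is a minimal, hypothetical sketch of the two query patterns an HTAP store has to serve side by side: a low-latency transactional write for a live ad impression and a heavy analytical join against video-analysis results. The table names, columns, and risk threshold are illustrative assumptions, not our production schema.

# Illustrative sketch of the mixed workload described above, with hypothetical table
# and column names. The point is that one store serves both query shapes: a
# low-latency transactional write and a heavy analytical join, with no nightly batch.
import os

import pymysql

conn = pymysql.connect(
    host="singlestore.example.internal",  # hypothetical endpoint
    user="service",
    password=os.environ.get("SINGLESTORE_PASSWORD", ""),
    database="ad_trust",
    autocommit=True,
)

with conn.cursor() as cur:
    # OLTP-style path: record a live ad impression as it happens.
    cur.execute(
        "INSERT INTO ad_metrics (campaign_id, video_id, impressions) "
        "VALUES (%s, %s, 1) ON DUPLICATE KEY UPDATE impressions = impressions + 1",
        ("campaign_42", "video_abc123"),
    )

    # OLAP-style path: join live metrics against the video-analysis table to surface
    # impressions that landed on content flagged as risky, in near real time.
    cur.execute(
        """
        SELECT m.campaign_id,
               SUM(m.impressions) AS risky_impressions
        FROM ad_metrics AS m
        JOIN video_analysis AS v ON v.video_id = m.video_id
        WHERE v.risk_score >= 0.8
        GROUP BY m.campaign_id
        ORDER BY risky_impressions DESC
        LIMIT 20
        """
    )
    for campaign_id, risky_impressions in cur.fetchall():
        print(campaign_id, risky_impressions)

conn.close()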


Beyond the Sessions: A Meeting of Minds

Leaders from PYLER and partner companies exchange insights during a networking cruise at SingleStore Now 2025. From left to right: Jaeho Lee (Posco), Jaenyun Shin (A Platform), Hyeongjun Park (PYLER), DongChan Park (PYLER), Rahul Rastogi (SingleStore), Ranjit Panigrahi (Apple), Hannim Kim (APlatform), and Jaeun Kim (PYLER).


Perhaps just as valuable as the technical sessions was the opportunity to connect with leaders from companies like Apple, Posco, Goldman Sachs, K-Bank, IBM, Kakao, and Adobe. This wasn't just networking; it was an opportunity to exchange ideas with leaders from other data-critical industries—from manufacturing and finance to platform and SaaS.

We found that when we described our unique challenge—processing petabytes of unstructured video data in real-time to ensure trust and safety—it resonated deeply with other industry leaders, partners, and data executives at the conference. Whether in finance, logistics, or tech, everyone is facing their own version of the 'real-time context' problem. The conversations confirmed that PYLER is not just solving a niche problem; we are solving a universal, cutting-edge data problem at an extreme scale.


The Future: Trust Through Understanding

PYLER engineers Hyeongjun Park and DongChan Park (center and right) connect with an infrastructure engineer at SingleStore Now 2025. The engineer on the left, an attendee from a quantitative firm, discussed their work in infrastructure engineering.


We left SingleStore Now 2025 with a reinforced conviction. The industry is waking up to a problem we've been solving for years: that AI without real-time context is a liability, not an asset.

The event confirmed that our architectural foundation is sound, but our mission is what truly sets us apart. The era of AI agents is here, but it requires a new standard of validation and trust. We don't just build AI; we engineer the understanding that makes it trustworthy. We’re inspired to see partners like SingleStore building the infrastructure that helps make that possible. The future of AI will not be defined by the models themselves, but by our collective ability to safely and verifiably connect them to the real world.


This piece was written by Hyeongjun Park, Backend Engineer and X-Ops Squad Lead at PYLER.

Tech

2025. 11. 14.

End of the Foundation Model Competition: Artificial World Needs a Validation Layer

For half a decade, the world of AI revolved around a single race: who could build the biggest, smartest, and most general foundation model. It was a battle fought with billions of dollars, GPU clusters the size of small nations, and an obsession with benchmark dominance. But as the dust settles, it is becoming clear that this competition is ending not because anyone has truly won, but because the game itself has changed.

1. The Age of the Model Arms Race

In the early 2020s, AI progress was defined by scaling. The mantra was simple: more data, more parameters, more GPUs. OpenAI, Anthropic, Google, and a few others poured vast sums into foundation models that grew from billions to trillions of parameters. The field of competition was limited to companies that could spend over a billion dollars on computing resources, data, and talent. While the public narrative focused on rivalry, in practice, these models began to converge in architecture, language, and behavior.

2. The Rise of Vertical Intelligence

The next wave of value creation will come from specialized and deeply vertical intelligence, AI that understands a domain with surgical precision rather than encyclopedic breadth. Whether it is medical diagnostics, financial risk, or brand safety in video content, the competitive frontier has shifted from foundation to application infrastructure.

3. The Post Model Era

In this new phase, the differentiator is not who trains the biggest model but who can orchestrate intelligence. Companies that combine reasoning, retrieval, and domain specific data pipelines will outperform those who merely scale compute. The foundation layer will become infrastructure, much like electricity. No startup can compete by building a power plant anymore.

4. What Choices Do We Have

Since 2021, PYLER has built a purpose-built multimodal video understanding model for UGC (User Generated Content) validation and retrieval for brands. To transform from a surviving company into a thriving one, we faced two choices:

  1. Create more B2B AI Agents, which could generate explosive short term revenue.

  2. Build a validation layer for Gen AI applications.


We chose the second path. We believe it is the only way to solve a foundational problem that the world will soon suffer from. The first path, creating more B2B AI Agents for quick revenue, will inevitably be dominated and replaced by companies like OpenAI, Google, and Anthropic.
Consider the OS and Cloud eras. Operating systems had built-in defenses and cloud providers had security protocols, yet Okta, Wiz, and Palo Alto Networks built billion-dollar businesses by providing third-party validation for enterprises and nations.
Problems caused by humans can be solved by human capabilities. But the problems that will be caused by AI are an entirely different agenda.

5. The Validation Layer Imperative

Just like the OS and Platform eras, third party validation will be essential. The core value of Gen AI applications is generating outputs that meet a user's intention and provide satisfaction. However, if these applications validate themselves too aggressively, it can weaken their core generative competence.
At the same time, enterprises and nations cannot validate all AI outputs using only their internal capabilities. There is a critical lack of infrastructure level third party validators to secure their interests and safety. This is the modern equivalent of the seat belt in the automotive era.


In closing…

The end of the foundation model competition does not mean the entire competition is over. It simply means the participants in that specific race have been decided. The new winners of the next AI era will not be those who built the model, but those who understand how to build the validation layers that protect human profit, rights, and safety.


As steam once symbolized the dawn of the Industrial Revolution, code and data now mark the rise of the AI Revolution. But every revolution needs its validation layer — the mechanism that keeps progress aligned with trust and safety. (Painting – The Gare Saint-Lazare, Claude Monet (1877))

Jaeho Oh is the co-founder and CEO of PYLER.

Jaeho’s Lens

2025. 11. 7.

[Pre-TrustCon 2025 Release] Video, AI, and the Evolving Landscape of Trust & Safety

As AI-generated video content grows at an explosive rate, the challenges of ensuring digital trust and safety (T&S) have become more urgent and complex than ever. TrustCon 2025 will gather global T&S experts to tackle critical issues such as deepfakes, disinformation, and content validation in the age of AI. Pyler will participate to showcase how our Video Understanding AI addresses these challenges with scalable, multimodal solutions. Through this effort, we aim to strengthen platform integrity, improve ad trust, and help build a safer digital future.


TrustCon 2025: Why the Urgency Now?

Beyond its convenience and connectivity, our digital world casts a shadow of its own. Harmful content—hate speech, misinformation, inappropriate images and videos—erodes the integrity of online spaces and threatens the very trust we place in platforms and brands. Safeguarding users and upholding the 'trust' and 'safety' of this digital ecosystem, a field known as Trust & Safety (T&S), has become more critical than ever. T&S extends beyond simply removing harmful content, known as Content Moderation, to encompass the proactive assurance of content suitability and reliability on platforms, which we term Content Validation.

Link: TrustCon 2025

TrustCon 2025 convenes leading T&S professionals from around the globe to diagnose the complex challenges emerging in our rapidly changing digital environment and to forge innovative solutions. TrustCon is not merely a conference; it’s a unique global forum dedicated to sharing real-world cases, learning from past failures, and engaging in deep discussions on T&S's most pressing issues. These include AI ethics, deepfakes and synthetic media manipulation, child safety, disinformation campaigns, and navigating evolving regulatory compliance. It is a critical arena for redefining digital platform responsibilities and collectively shaping a safer online world.


The Age of Video and AI: Unveiling T&S’s New Pandora’s Box

A particularly alarming trend impacting the T&S landscape is the explosive growth of video content coupled with the rapid advancement of Artificial Intelligence (AI). Cisco predicts that by 2027, an astounding 89% of global IP traffic will be video content. This sheer volume intensifies the burden of content moderation and exponentially escalates the complexity of T&S management.

The Invisible Threat: How AI-Generated Harmful Content Erodes Trust and Drains Ad Budgets

Furthermore, the proliferation of AI, especially generative AI, presents a double-edged sword. Sophisticated deepfake videos designed to spread misinformation and new forms of harmful content, such as AI-generated images, pose unprecedented challenges for existing T&S systems to detect and block. Traditional content moderation methods, relying solely on keywords or metadata, prove utterly inadequate in discerning the nuanced context or hidden intent within video content. This limitation directly impedes effective Content Validation, ultimately leading to severe repercussions. When brands' advertisements appear alongside inappropriate content, it erodes brand trust. Indeed, 64% of consumers report losing trust in a brand after encountering its ads next to unsuitable content. As traditional methods fail to cope with an exponentially increasing volume of AI content, this is no longer a challenge that can be overlooked.


Pyler Leadership’s Vision: Unlocking New Possibilities in the Complex World of Video Content Management

Pyler's leadership team, including our CEO, will be actively participating in TrustCon 2025. Our goal is to gain a deeper understanding of the profound challenges faced by content creators and platforms regarding video T&S, and to explore how Pyler can contribute to resolving these critical issues. Through direct engagement with T&S experts at TrustCon, we aim to reaffirm our conviction that AI can tackle the complexities of video content management far more effectively than traditional methods. Moreover, we seek to validate that Pyler possesses the unique technological capabilities to effectively address these formidable challenges today.

Pyler is laser-focused on developing and advancing our proprietary Video Understanding AI model, designed to comprehend video content much like a human would. This sophisticated model leverages a multimodal approach, holistically analyzing video, text, and audio to discern not just superficial elements, but also the nuanced visual, auditory, and contextual meanings within content.

Our commitment to scale is evident: Pyler is already processing over 300 million videos, and our daily processing volume is rapidly increasing. This extensive experience with large-scale data fuels the continuous evolution of Pyler's AI model, making it faster, more accurate, and more scalable in analyzing and understanding video content. As Pyler’s Video Understanding AI model advances, it will revolutionize the efficiency of vast-scale Content Moderation and dramatically enhance Content Validation capabilities—enabling precise judgment of content suitability even within complex contexts. We firmly believe this will not only improve ad efficiency but also fundamentally contribute to resolving T&S challenges across the board, ultimately fostering a safer and healthier digital ecosystem for society.


Beyond TrustCon 2025: Pyler's Commitment to a Trusted Digital Future

TrustCon 2025 serves as a crucial platform for Pyler to deepen our understanding of the T&S landscape and explore collaborative opportunities with global experts. Through this event, we will present concrete solutions to the urgent challenges in video content T&S, demonstrating how our unique AI technology contributes to building a more reliable digital ecosystem. 

Pyler is dedicated to leveraging our Video Understanding AI technology to create social value by ensuring the safety and trustworthiness of online content. We invite you to join us on this new journey, starting at TrustCon 2025, as we strive to build a more secure digital future.

PYLER

TrustAndSafety

2025. 7. 22.

The Invisible Threat: How AI-Generated Harmful Content Erodes Trust and Drains Ad Budgets

AI-generated fake videos are spreading rapidly, misleading vulnerable viewers like the elderly. These videos evade traditional filters, causing ads to appear on harmful content and wasting ad spend. A healthier ad ecosystem requires better content detection and clearer insight into where ads are placed.


In today's digital landscape, video content has become an indispensable part of our daily lives, projected to make up 80-90% of global internet traffic by 2027. Yet, lurking in the shadows of this vast digital space is a new and unsettling threat: harmful or deceptive content generated by artificial intelligence (AI) technology. Recent reports of "AI fake documentaries" circulating on online platforms, with outlandish titles like "Man Who Impregnated 32 Women" or "50-Something Korean Man Impregnated Three Mongolian Beauties," reveal a shocking reality.


Source: YouTube

These videos, crafted by combining AI image generators, synthesized narration, and provocative captions and thumbnails, are racking up millions of views despite their preposterous narratives. The alarming issue is that some elderly viewers are mistaking these fabricated videos for real events. Comments such as "Lucky man, I'm envious" and "Solution to population decline" underscore the severity of this misunderstanding. This phenomenon cunningly exploits the low digital literacy among the elderly and their vulnerability to information. Furthermore, the societal impact of AI-generated content cannot be ignored, as these videos are even consumed in public spaces, causing discomfort to those nearby. The spectrum of harm these videos inflict is vast, presenting themselves as "life wisdom" or "life lessons," while also incorporating racist tropes, excessive sexual objectification, and distorted fantasies about age gaps.


The Proliferation of AI-Generated Content and the Advertiser's Dilemma

The unchecked proliferation of such AI content presents advertisers with a significant, often unseen dilemma. In the digital advertising market, the "context" in which an ad appears remains a challenging issue for many advertisers. Traditional brand safety measures have primarily relied on keywords or metadata, but AI-generated content cleverly sidesteps these filters. This is because even sensational content can be disguised with ordinary titles and descriptions, or by subtly rephrasing specific words to bypass filtering mechanisms. Moreover, certain AI audio novels, which have minimal visual harmfulness, are difficult to detect with existing visual-centric filtering systems. 

This opaque and imprecise video inventory environment directly leads to financial losses for advertisers. According to an analysis by Pyler utilizing a Video Understanding AI model on approximately 10,000 campaigns and over 100 million ad placements, it's estimated that a typical brand spends 20% to 30% of its ad budget on video ads placed within unsafe or unsuitable content. This exposure to harmful videos directly translates into wasted ad spending and brand safety concerns for advertisers. Beyond mere budgetary waste, sponsoring creators of harmful videos can draw criticism regarding a company's corporate social responsibility (CSR). Ultimately, this can damage brand value and negatively impact long-term profitability.

The advancement of AI technology is a double-edged sword. While it offers groundbreaking opportunities, it also serves as a tool for mass-producing harmful or deceptive content. Now, more than ever, we must clearly recognize what to trust, how to protect ourselves in the "fake world" created by AI, and precisely where our valuable advertising budgets are being spent. Acknowledging and understanding these complex, multi-layered issues is the crucial first step toward building a healthy and responsible digital advertising environment.

BrandSafety

TrustAndSafety

2025. 7. 17.

Accelerating Video Understanding for ad generation with Visual AI Agents

PYLER supercharges video ad safety with NVIDIA's DGX B200 and AI Blueprint for smarter, safer placements.
Discover how PYLER's real-time video understanding helps brands like Samsung, BYD, and Hana Financial thrive.


At PYLER, our video understanding AI platform deploys visual AI agents to ensure brand safety: it verifies that ads are never displayed alongside inappropriate videos and places them in more contextually relevant settings, protecting brand value and enhancing campaign effectiveness.

With the explosive growth of generative AI, the sheer volume of content uploaded to video platforms has skyrocketed. In such an environment, displaying ads next to content that’s not brand aligned not only increases the marketing cost of reaching the correct audiences, but also presents a risk of negative brand impressions and potential reputational damage.

Video Understanding AI is crucial for effectively managing and validating content in an era of massive, high-speed uploads. When properly implemented, it becomes a key asset in safeguarding and enhancing a brand’s overall value.

Video content processing is significantly more complex than image or text processing, demanding substantial computational resources from preprocessing to inference. To handle this real-time influx of video content effectively, a high-performance pipeline utilizing optimized computing resources is essential.

In this post, we’ll discuss how PYLER addresses these challenges by adopting NVIDIA AI Blueprint for video search and summarization (VSS) in conjunction with NVIDIA DGX B200 systems. We’ll explore how we pair NVIDIA software and accelerated computing with PYLER’s proprietary Video Understanding AI to help ensure brand safety at scale.


PYLER Boosts Content Safety with NVIDIA AI Blueprint for VSS and NVIDIA AI Enterprise

PYLER automatically filters, classifies, and analyzes large-scale video data to help advertisers run their campaigns safely and efficiently. To accomplish this, PYLER selected NVIDIA AI Blueprint for VSS, built on the NVIDIA AI Enterprise software platform, offering enterprise-grade support, advanced security measures, and continuous, reliable updates.

NVIDIA AI Enterprise delivers innovative blueprints and commercial stability, enabling PYLER to deploy secure, scalable, and continuously maintained AI workflows.

Key advantages provided by VSS Blueprint include:

  1. High-Quality Video Embeddings

    • State-of-the-art embedding models are essential for capturing rich contextual and semantic information from videos. PYLER integrates the NVIDIA NeMo Retriever nv-embed-qa-1b-v2 model for minimal information loss during the embedding and captioning process.

  2. Scalable Infrastructure for training custom VLMs and Large-Scale Video Processing

    • NVIDIA DGX B200 accelerates custom VLM training and delivers up to 30x faster performance and best-in-class energy efficiency compared to its predecessor. 

    • With the immense volume of video content uploaded daily, GPU-based parallel processing is critical for scalability. PYLER utilizes VSS blueprint’s chunk-based parallel processing and GPU node expansion to dynamically distribute workloads and process video data in real-time. We look forward to enabling this on NVIDIA DGX B200 for inference.

  3. Flexible Verification and Search via RAG

    • PYLER rapidly searches and verifies vast amounts of video embeddings, leveraging Context-Aware Retrieval-Augmented Generation (RAG). This supports automated summarization, indexing, Q&A, and content validation tasks, significantly improving inference speed, accuracy, and context-awareness.
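
To make the retrieval step concrete, the sketch below ranks pre-computed video-chunk embeddings against an embedded verification query using plain cosine similarity. It deliberately uses random vectors and NumPy as stand-ins; the production pipeline relies on the VSS blueprint and NeMo Retriever embeddings rather than this toy index.

# Conceptual sketch of RAG-style retrieval over video-chunk embeddings.
# Vectors are random stand-ins; in practice the embeddings come from the VSS pipeline.
import numpy as np


def cosine_top_k(query_vec: np.ndarray, chunk_vecs: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k chunk embeddings most similar to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    c = chunk_vecs / np.linalg.norm(chunk_vecs, axis=1, keepdims=True)
    return np.argsort(c @ q)[::-1][:k]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    chunk_embeddings = rng.normal(size=(1000, 768))  # one row per indexed video chunk
    query_embedding = rng.normal(size=768)           # an embedded verification question

    top = cosine_top_k(query_embedding, chunk_embeddings, k=3)
    # The retrieved chunks (plus their captions) would then be handed to an LLM for
    # summarization, Q&A, or a suitability verdict.
    print("most relevant chunk indices:", top)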


Expected Outcomes for PYLER

With these integrations, PYLER’s Video Understanding AI software can provide:

  1. Maximized Brand Safety

    • Precisely filter, classify, and recommend safe ad placements, ensuring ads never appear next to inappropriate or harmful content.

  2. Enhanced Brand Trust

    • Provide advertisers with robust and reliable AI models, fostering stronger viewer trust. As brands gain consumer confidence, PYLER’s credibility as an industry leader grows in parallel.

  3. Improved Data Pipeline Efficiency

    • Deploy automated, GPU-accelerated workflows optimized at a scale far beyond traditional MLOps pipelines, significantly increasing operational efficiency and throughput.

  4. Faster Service Launch

    • Leverage NVIDIA’s cutting-edge technology stack – including the VSS blueprint, NVIDIA Metropolis, NVIDIA Dynamo-Triton (formerly Triton Inference Server), NVIDIA NeMo Retriever, and DGX B200 systems – enabling innovative enterprises to quickly launch and scale AI-powered video services with fewer resource constraints.


Customer Success Stories

1. Samsung Electronics: PYLER AiD Redefines Brand Safety Standards for Product Branding Campaigns

  • As a leading global brand, Samsung Electronics faced challenges protecting its brand from appearing alongside misaligned content on digital platforms like YouTube.
    By integrating PYLER AiD’s real-time AI video analysis into its ad campaigns, Samsung Electronics automatically blocked ads from content categories that threaten brand values under global standards, such as violence, explicit content, hate speech, and fake news. The AiD dashboard ensured brand safety and trust with transparent real-time monitoring and performance tracking.

  • Results:

    • Significant Reduction in Harmful Content Exposure: PYLER AiD reduced ad exposure to sensitive or inappropriate content that could threaten brand value by 77%, setting a new standard for brand protection.

    • Optimized Ad Budget Spend: Preventing ads from appearing on unsuitable content allowed Samsung to focus its marketing budget on safe, high-value, brand aligned placements, greatly increasing advertising effectiveness.

    • Enhanced Brand Trust and Value: Proactively eliminating potential brand crises and consistently delivering a safe and trustworthy brand image to consumers successfully defended and strengthened brand reputation and customer trust.

2. BYD: PYLER is Ensuring Successful Korean Market Entry Through Brand Protection and High Intent Moment Targeting.

  • As a global leader in electric vehicles, BYD faced the critical task of successfully launching its first model in Korea, the 'Atto 3'. It was essential to create a positive first impression while proactively managing the risk of ads appearing alongside potentially negative content (e.g., critical reviews, defect discussions) that could harm the new brand's image upon market entry.

  • PYLER's AI-Powered Contextual Targeting and Brand Suitability Management technology precisely placed BYD ads against positive or neutral YouTube content about EVs and small SUVs, such as vehicle reviews, test drives, and competitor comparisons. At the same time, the AI excluded negative videos by analyzing content tone and context. This ensured BYD ads appeared only in positive, relevant, high-intent moments aligned with potential customer interest, effectively achieving both contextual targeting and brand suitability.

  • Results:

    • Maximized Exposure in Brand-Friendly Contexts: Ad placements within safe, positive, and relevant EV/SUV video contexts increased more than 100x compared to previous methods, delivering high relevance and brand suitability to potential customers.

    • Significantly Boosted Potential Customer Interest: By effectively filtering out negative noise and concentrating ads on highly relevant content, BYD captured genuine potential customer interest, driving click-through rates (CTR) nearly 4x higher than baseline campaigns.

    • Successful Market Entry & Enhanced Brand Image: PYLER's integrated AI solution effectively managed the risks associated with new market entry, built a positive brand image, and played a key role in BYD's successful initial launch in the Korean market, receiving over 1,000 pre-order applications in the first week.

3. Hana Financial Group: PYLER AiM Optimizes Performance Across Diverse Campaigns with Automatic Video Placement Recommendation and Targeting

Hana Financial Group, one of South Korea’s largest financial holding companies, manages a diverse portfolio including banking, credit cards, securities, and asset management. Facing the challenge of running multiple YouTube campaigns with distinct goals (such as attracting public pension accounts, providing retirement pension information, and promoting a public service announcement against illegal gambling), the group needed to reach targeted audiences within relevant content contexts while achieving varied KPIs like conversion rates, ad efficiency, and user engagement.

To address this, Hana Financial Group implemented PYLER AiM, an AI-powered contextual targeting solution. PYLER’s AI video analysis enabled precise targeting of specific YouTube content moments aligned with each campaign’s objective. For example, the Public Pension campaign focused on retirement planning and celebrity ambassador content, while the Retirement Pension campaign targeted videos about post-retirement finance. The Anti-Gambling PSA featured esports star Faker and targeted gaming and esports content popular with younger viewers, maximizing engagement and campaign effectiveness.

  • Results:

    • Optimized Performance for Each Campaign Goal:

      • (Public Pension) Achieved high average conversion rates exceeding 1.5%, demonstrating effective customer acquisition, with specific target groups surpassing an outstanding 3.6% conversion rate.

      • (Retirement Pension) Showcased dramatic improvements in ad efficiency compared to general targeting, with click-through rates (CTR) increasing more than eightfold and cost per click (CPC) reduced by over 70%.

      • (Public Service Ad) Successfully connected the featured celebrity 'Faker' with fan interests, producing not only a noticeable uplift in CTR but also clear evidence of high user immersion: both average view duration per impression and the 100% video-completion rate doubled compared to general campaigns. The campaign also reached more unique users efficiently, with average frequency halved.

    • Accurate Target Channel Reach: Across all campaigns, PYLER's contextual targeting concentrated ads within highly relevant channels, demonstrating significantly better reach within key target channels compared to general targeting. This confirmed the effective allocation of the ad budget to the most suitable potential customers.


Conclusion and Future Outlook

By adopting agentic AI through NVIDIA AI Blueprint for video search and summarization and DGX B200 systems, PYLER aims to strike a balance between protecting brand value and maximizing advertising efficiency in a market flooded with video content.

With large-scale GPU parallel processing, efficient embeddings, and Context-Aware RAG for search and verification, we will be able to detect negative content in near real time and reliably adhere to brand guidelines.
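To make that pipeline concrete, here is a minimal, hypothetical sketch of the embedding-and-matching step: summaries produced by the video search and summarization stage are embedded and compared against brand-guideline exclusion categories, and a segment is flagged when similarity crosses a threshold. The encoder, category wording, and threshold below are illustrative assumptions, not PYLER's production stack.

```python
from sentence_transformers import SentenceTransformer  # illustrative stand-in encoder

# Stand-in for a GPU-served multimodal encoder; any embedding model could sit here.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Brand-guideline exclusion categories (wording is illustrative).
EXCLUSIONS = ["graphic violence", "hate speech", "sexually explicit content", "fake news"]
exclusion_vecs = model.encode(EXCLUSIONS, normalize_embeddings=True)

def flag_segment(segment_summary: str, threshold: float = 0.35) -> list[str]:
    """Return the exclusion categories a video-segment summary sits too close to."""
    seg_vec = model.encode([segment_summary], normalize_embeddings=True)[0]
    sims = exclusion_vecs @ seg_vec  # cosine similarity; vectors are unit-normalized
    return [cat for cat, sim in zip(EXCLUSIONS, sims) if sim >= threshold]

# Example: a summary emitted by the video search & summarization stage.
print(flag_segment("street fight compilation showing injuries in graphic detail"))
```

In practice, retrieval over stored segment embeddings would replace the single-call comparison shown here, and matched categories would be checked against each advertiser's guidelines before an ad is allowed to serve.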

Going forward, PYLER plans to continue actively harnessing NVIDIA’s latest AI technology to deliver fast, high-quality services that connect video content safely and appropriately across various domains.

AdTech

ContextualTargeting

PYLER

VideoUnderstanding

2025. 6. 17.

PYLER CEO Oh Jaeho Named to Forbes Asia’s '30 Under 30' List

PYLER CEO Oh Jaeho named to Forbes Asia’s ‘30 Under 30’ for reshaping video advertising with AI.
Recognized for leading PYLER’s rise in brand safety innovation across global markets.


Recognized as a Young Leader Shaping the Future of AdTech

In May 2025, Oh Jaeho, Co-Founder and CEO of PYLER, was named to the Forbes Asia “30 Under 30” list in the Marketing & Advertising category. The list celebrates young leaders across Asia who are transforming their industries, and Oh’s selection marks a significant milestone for Korea’s emerging AdTech ecosystem.


Revolutionizing Ad Placement in Video: PYLER’s AiD

Founded in 2021, PYLER set out to solve a long-standing issue in the video advertising industry: brand safety. The company’s flagship solution, AiD, is an AI-powered platform that ensures ads appear in brand-safe video environments—particularly on platforms like YouTube.

AiD analyzes the content of videos in real time, evaluating tone, context, and messaging to determine whether an ad should be placed. By automating what previously required manual review by agencies and marketers, AiD not only protects brand image but also enhances ad effectiveness at scale.

Today, PYLER works with global brands like Samsung, Bulgari, and BYD, as well as leading advertising agencies including Cheil Worldwide and Innocean.


Proven Growth and Technology with $24M in Total Funding

To date, PYLER has raised a cumulative 34 billion KRW (~$24 million) in funding. In July 2024, the company secured a 22 billion KRW Series investment from top-tier investors such as KDB Bank, KT Investment, Stonebridge Ventures, and SV Investment—validating its technical excellence and market potential.


Beyond the Forbes List: A Signal of What’s to Come

Being named to the Forbes “30 Under 30” list is more than an accolade—it's a recognition of the transformative work being done in a rapidly evolving digital ad landscape. PYLER continues to lead this transformation by building AI solutions that make advertising smarter and safer.

Oh Jaeho’s inclusion is a strong signal of the global impact PYLER aims to achieve in the years ahead. As the company moves forward, it remains committed to setting new standards in video-based advertising and delivering AI solutions that marketers and brands can trust.


You can view the full list of honorees in the Marketing & Advertising category of Forbes 30 Under 30 – Asia 2025 at the link below:
Forbes 30 Under 30 – Asia 2025

To learn more about one of this year’s honorees, PYLER CEO Jaeho Oh, visit his Forbes profile here:
Jaeho Oh on Forbes

AdTech

PYLER

2025. 5. 15.

PYLER Becomes First in Korea to Deploy NVIDIA DGX B200

PYLER becomes the first in Korea to adopt NVIDIA’s DGX B200, redefining AI infrastructure for AdTech.
With 30x faster performance, PYLER accelerates brand safety, contextual targeting, and global AI leadership.


Pioneering AI Infrastructure with Next-Gen NVIDIA Hardware

On February 27, 2025, PYLER became the first company in Korea to deploy NVIDIA’s latest AI accelerator, the DGX B200, marking a major leap in the company’s AI infrastructure. As NVIDIA’s newest generation AI system, the DGX B200 has drawn significant global attention since its release. PYLER's adoption—preceding even major institutions both locally and abroad—highlights the company's position at the forefront of AI innovation.

A ceremony was held to commemorate the milestone, symbolizing PYLER’s renewed commitment to redefining advertising technology through AI.


DGX B200: A New Benchmark in AI Performance

Equipped with NVIDIA’s next-generation Blackwell GPU architecture, the DGX B200 offers up to 30x improved computational performance compared to its predecessor. It delivers industry-leading energy efficiency and is purpose-built for training and inference of large-scale, video-centric AI models.

For PYLER—whose core business involves real-time video understanding across vast volumes of online content—the DGX B200 is a game-changing addition that will significantly boost its technical capabilities.


Powering the Next Generation of AI AdTech

The DGX B200 will supercharge PYLER’s core AI solutions across the board, especially in three key areas:

  • Brand Safety: Faster and more accurate detection of harmful content and ad placement control

  • High-intent Moment Targeting: Enhanced precision in real-time targeting based on video context

  • Contextual Ad Optimization: Smarter prediction of user responses and ad relevance

PYLER’s flagship solution, AiD, has already expanded its footprint both domestically and globally, collaborating with major advertisers like Samsung Electronics, Nongshim and KT Corporation. This infrastructure upgrade is expected to greatly enhance customer experience and maximize advertising performance.


Building World-Class Video Understanding for Advertising

Jihun Kim, CTO of PYLER, stated:
“With the DGX B200—the first of its kind in Korea—we’ve laid the foundation for building world-class video understanding capabilities tailored to the advertising domain. We will continue to push the boundaries of AI innovation to ensure that brand messages are delivered in the safest and most contextually relevant environments possible.”

Under its strategic partnership with NVIDIA, PYLER is focused on leading the future of AI-powered advertising—from content moderation to contextual AdTech. The adoption of DGX B200 is more than just a hardware upgrade—it marks a pivotal step in realizing PYLER’s vision of raising both the quality and trust of digital advertising through cutting-edge AI.

ContextualTargeting

PYLER

VideoUnderstanding

2025. 2. 27.

PYLER Secures $16.9M in Series Funding to Advance AI Brand Safety Solutions, bringing Total Investment to $26.2M

PYLER raises $16.9M to scale its AI-powered brand safety solution, AiD.
Trusted by top brands, PYLER aims to lead globally in video understanding and ad transparency.


Recognized for Its Cutting-Edge AI Brand Safety Technology

PYLER, a leading provider of AI-powered brand safety solutions, has successfully secured $16.9M in its latest series funding round. Key investors include Stonebridge Ventures, Korea Development Bank, SV Investment, and KT Investment—underscoring strong confidence in PYLER’s technology and growth potential.


‘AiD’: Helping Brands Regain Control Over Ad Placement

PYLER’s flagship solution, AiD, leverages AI to analyze the context of YouTube videos where brand ads are placed, blocking exposure to harmful or inappropriate content. The solution protects brands from being associated with adult content, hate speech, fake news, and fringe religious material that could damage brand reputation.

With millions of new videos uploaded to YouTube every day, manual review is no longer feasible. In this context, investors recognized AiD’s value in empowering advertisers with greater control and visibility over where their ads appear.


Brand Safety Directly Impacts Consumer Trust and Behavior

According to a joint report published in January by PYLER and the Korea Broadcast Advertising Corporation (KOBACO), 89.5% of consumers said brand safety is important, while 96% stated they would not purchase from advertisers who neglect it. These figures clearly show that brand safety plays a critical role in shaping both consumer trust and purchasing behavior.


On Track to Become a Global Leader in Video Understanding

“Our solution has already been tested and trusted by major Korean brands like Samsung, Hyundai Motor Company, Cheil Worldwide, and Innocean,” said Jaeho Oh, CEO of PYLER. “With this new funding, we aim to establish ourselves as one of the most competitive players in video understanding on a global scale.”

Powered by advanced AI and real-time content analysis, PYLER remains committed to building a trustworthy advertising environment—for both brands and consumers.

AdTech

PYLER

2024. 7. 29.

Brand Safety in Korea: How It Falls Behind Global Standards

Korea still lags behind global standards in brand safety — but change is on the horizon.
Explore how international frameworks and AI solutions like AiD can help close the gap.


A Stark Contrast in Brand Safety Standards

In Korea, discussions around brand safety are still in their early stages. Even before tackling brand protection, the country lacks a fundamental legal framework to systematically develop the advertising industry.

Recently, there has been renewed interest in the Advertising Industry Promotion Act, reintroduced in Korea’s 22nd National Assembly. We hope this will become a stepping stone toward a more structured and responsible advertising ecosystem.

In contrast, many global markets have long recognized the importance of brand safety — not only to protect advertiser reputations, but also to reduce the monetization of harmful or inappropriate content. Let’s take a look at how leading countries are addressing brand safety.


Key International Organizations Leading Brand Safety Efforts

  • IAB (Interactive Advertising Bureau)
    Establishes industry standards for digital advertising in the U.S., including terminology, ad formats, pricing metrics, and implementation guidelines.

  • ARF (Advertising Research Foundation)
    Works with ESOMAR (European Society for Opinion and Marketing Research) to standardize ad effectiveness measurement globally.

  • MRC (Media Rating Council)
    Accredits media rating services and ensures quality and transparency in audience measurement.

  • GARM (Global Alliance for Responsible Media)
    A cross-industry initiative led by the World Federation of Advertisers (WFA). Includes major global advertisers, agencies, media companies, and ad tech providers working together to improve brand safety in digital media.

  • TAG (Trustworthy Accountability Group)
    Develops guidelines and certifications to combat ad fraud and brand risk, while collaborating with governments to localize global standards.

  • BSI (Brand Safety Institute)
    Provides education and certification for professionals focused on brand safety and digital advertising ethics.

  • APB (Advertiser Protection Bureau)
    A U.S. initiative led by the 4As (American Association of Advertising Agencies), focused on empowering ad professionals to assess their understanding of brand safety through tools like the Brand Safety Self-Assessment.


Why Brand Safety Is More Than Just a Brand Issue

Brand safety isn’t just about reputation management — it’s a critical part of maintaining a healthy digital ecosystem. By cutting off ad revenue from harmful or misleading content, the industry can help prevent the commercialization of toxicity and crime.

We believe Korea’s advertising market has the potential to mature into one that values both brand integrity and content responsibility. This shift will require not only updated regulations, but also industry-wide awareness and technical investment.

At PYLER, we’re committed to using our AI-powered video understanding technology to contribute to a cleaner, safer digital environment. We aim to challenge the uncomfortable realities of today’s content economy — and build solutions that move the industry forward.

BrandSafety

DigitalAdvertising

2024. 7. 3.

AiD: Solution That Protects Brand Trust

AiD by PYLER is a real-time brand safety solution that uses multimodal AI to automatically detect and block harmful content, improving both brand trust and ad performance. Built to meet global standards, it ensures brand messages are delivered in safe and effective environments.


PYLER’s New Standard for Brand Safety in Digital Advertising

Even as digital advertising becomes more sophisticated, many brands still face a fundamental challenge—the risk of ads appearing next to inappropriate content.

Violent, sexually explicit, politically or religiously biased, and hateful content can seriously damage brand perception. According to a global consumer survey, 64% of respondents said they lost trust in a brand after seeing its ad next to inappropriate content.

To solve this problem at the technological level, PYLER developed AiD, a real-time brand safety solution that automatically detects and blocks sensitive content in video-based ad environments—ensuring that brand messages are delivered in safe and effective contexts.


Fighting Fake News and Content Farming

One of AiD’s core priorities has been identifying and blocking fake news and manipulative content formats, such as clickbait or "storytime"-style videos that distort facts or spread false narratives.

On YouTube, this issue extends to “content farming” or “cyber re-uploaders”—channels that repurpose sensational material to exploit algorithms for visibility and revenue. When ads appear on these types of videos, brands can unintentionally become associated with misinformation or social division.

To prevent this, AiD excludes related content under its “Controversial Issues” category. Its AI models are continuously updated to respond to evolving content types and social issues, ensuring scalable and responsive brand protection.


Real-Time Filtering with Multimodal AI

AiD goes beyond basic keyword filtering. It uses multimodal AI to analyze the visual and textual elements of videos simultaneously. Through PYLER’s proprietary Vision Analyzer and Text Analyzer models, AiD classifies and blocks content involving sexual material, hate speech, political or religious bias, and controversial or sensationalized “storytime” narratives, among other categories.

AiD also supports custom filter criteria, allowing campaign-specific brand safety rules. As a result, advertisers can instantly assess content risk levels and ensure that their ads are placed only in safe, brand-aligned environments.
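As an illustration of how per-modality signals and campaign-specific rules can combine, here is a minimal, hypothetical sketch. The category names, score ranges, and thresholds are assumptions made for the example only and do not reflect AiD's actual interface.

```python
from dataclasses import dataclass

# Hypothetical per-modality risk scores (0.0 = safe, 1.0 = high risk); in a system
# like AiD these would come from the vision and text models described above.
@dataclass
class RiskScores:
    vision: dict[str, float]
    text: dict[str, float]

# Campaign-specific blocking thresholds (illustrative "custom filter criteria").
CAMPAIGN_RULES = {
    "sexual_content": 0.30,
    "hate_speech": 0.25,
    "political_religious_bias": 0.50,
    "controversial_storytime": 0.40,
}

def should_block(scores: RiskScores) -> tuple[bool, list[str]]:
    """Block a placement if either modality exceeds the campaign threshold for any category."""
    violations = []
    for category, threshold in CAMPAIGN_RULES.items():
        worst = max(scores.vision.get(category, 0.0), scores.text.get(category, 0.0))
        if worst >= threshold:
            violations.append(category)
    return bool(violations), violations

# Example video: the transcript looks mild, but the visuals trip the sexual-content rule.
example = RiskScores(
    vision={"sexual_content": 0.62, "hate_speech": 0.05},
    text={"sexual_content": 0.10, "controversial_storytime": 0.20},
)
print(should_block(example))  # (True, ['sexual_content'])
```

Taking the worse of the two modality scores per category mirrors the idea that a video can be risky visually even when its transcript reads as harmless.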


Proven Performance in the Field

AiD has already delivered tangible results in live campaigns:

  • 97% reduction in ad exposure on sensitive content

  • 15% improvement in ad performance (e.g., CVR)

  • 6.39x reduction in wasted ad spend on high-risk content

In one case, a brand reduced the share of ad spend landing on sensitive content from 16.1% to just 2.1% (roughly an 87% relative reduction), dramatically improving both efficiency and return on investment. AiD delivers the rare combination of trust protection and performance enhancement.


Built for Global Brand Safety Standards

AiD isn’t just functional—it’s also compliant with global brand safety guidelines.

PYLER is the first Korean company officially registered with the IAB Tech Lab, and AiD evaluates and filters content based on IAB’s globally recognized standards. These guidelines are used by the world’s top advertisers, making AiD a technically verified and transparent solution.


Globally Recognized Technology

AiD is powered by PYLER’s proprietary video AI, which has been internationally acclaimed:

  • CVPR 2022: 2nd place globally in Video AI (competing with Intel, Tencent, ByteDance, etc.)

  • CVPR & ICCV 2023: Research accepted by the world’s top AI conferences

  • 2024: Selected for both NVIDIA and AWS Global Startup Programs

PYLER continues to refine its technology in collaboration with global tech leaders.


Redefining the Basics of Advertising

In today’s digital landscape, advertising is no longer just about what you say—it’s about where you say it.

The context in which ads appear now directly impacts brand trust and consumer decision-making. AiD eliminates the need for manual review or guesswork. It enables advertisers to automatically avoid risky placements and run campaigns with both safety and efficiency in mind.

Now is the time for brands to make a strategic, technology-driven choice—to speak only in spaces that match their values. At PYLER we are committed to creating a safe space for brands to communicate confidently, backed by AI and built for the future.



Image Source: AiD Dashboard

BrandSafety

TrustAndSafety

2024. 4. 4.

EU’s Digital Services Act: Heavy Fines for Failing to Moderate Harmful Content

The EU’s Digital Services Act is reshaping online accountability — with massive fines for non-compliance.
As platforms scramble to moderate harmful content, brands must rethink where their ads truly belong.


What Is the Digital Services Act (DSA)?

The Digital Services Act (DSA) is a comprehensive EU regulation designed to hold online platforms accountable for the spread of illegal and harmful content — such as fake news, hate speech, and child exploitation. Platforms must remove such content swiftly and objectively, label AI-generated content, and ban targeted advertising based on sensitive data such as religion, sexual orientation, or content aimed at children and minors.

Failure to comply can result in fines of up to 6% of global annual revenue.

Source: Naver Encyclopedia
Companies affected: Google, Bing, YouTube, Facebook, X (formerly Twitter), Instagram, TikTok, Wikipedia, Apple, AliExpress, LinkedIn, and more


Regulating Big Tech Responsibility

DSA specifically targets Very Large Online Platforms (VLOPs) — those with over 45 million monthly active users in the EU. So far, 17 platforms and 2 search engines have been officially designated, including Google, Meta, X, and TikTok.

According to EU Internal Market Commissioner Thierry Breton,

“Compliance with the DSA will not only prevent penalties but also strengthen brand value and trust for these companies.”

European Commission President Ursula von der Leyen echoed this, saying:

“The DSA aims to protect children, society, and democracy through strict transparency and accountability rules.”


Enforcement Begins: DSA in Action

When misinformation and violent content spread rapidly across platforms following the Israel-Hamas conflict, the EU launched an official DSA investigation into X, questioning its ability to manage illegal content.

X responded that it had removed or labeled tens of thousands of posts and taken down hundreds of Hamas-linked accounts. Meta also reported deleting 800,000+ pieces of war-related content and establishing a special operations center for rapid content review.

Major platforms are now:

  • Removing recommendation algorithms based on sensitive user data

  • Adding public reporting channels for flagging illegal content

  • Filtering extremist or graphic content more aggressively

These actions are motivated by more than goodwill — DSA violations can trigger massive fines or even temporary bans from operating in the EU.


A Broader Vision: EU’s Digital Rulebook

The DSA is part of the EU’s digital governance trifecta, which also includes:

  • DMA (Digital Markets Act): Prevents anti-competitive practices by “gatekeeper” firms like Alphabet, Amazon, Apple, Meta, ByteDance, and Microsoft

  • DNA (Digital Networks Act): Aims to foster a unified digital market and promote investment and innovation in infrastructure and emerging players

Together, these laws enforce transparency, user protection, and fair competition in the EU digital ecosystem.


What About Korea?

While the EU pushes ahead with strong tech regulation, South Korea has yet to enact a comparable law to hold Big Tech accountable for algorithm transparency or content responsibility.

Civil society groups argue that Korea should move toward a comprehensive legislative framework, especially as:

  • Big Tech dominance threatens media diversity

  • Small businesses and content creators are increasingly dependent on platform decisions

  • Algorithmic news feeds raise concerns about information control

According to Oh Byung-il, head of Korea’s Progressive Network Center:
“Korea has long prioritized nurturing its domestic tech industry while overlooking critical issues like privacy and fair trade. The EU’s example shows it’s time for Korea to start serious discussions.”


Final Thoughts

From fake news to hate speech, the DSA reflects a growing global demand for platform responsibility. With major players like X, Meta, and TikTok scrambling to comply, it’s clear that user safety and algorithmic transparency are no longer optional.

In Korea and beyond, it’s time for governments and platforms alike to acknowledge their role in protecting the digital public — and for brands to ask hard questions about where their ads appear and what values they may be unintentionally endorsing.

AdTech

BrandSafety

DigitalAdvertising

TrustAndSafety

2023. 11. 1.

When Advertising Backfires: How Brands Lose Trust by Paying for the Wrong Placements

One poorly placed ad can damage years of brand trust — just ask Applebee’s.
Discover why brand safety matters more than ever in today’s volatile media landscape.


Are Video Ads Building Brand Value — or Destroying It?

Most of us are familiar with the Russia-Ukraine war, which began on February 24, 2022, when the Russian president announced a "special military operation" and invaded Ukraine — an event that continues to unfold today.

But what does war have to do with advertising?

Source: Wikipedia

In the early hours of the invasion, Applebee’s, a casual American dining chain, found itself at the center of a global controversy. During a CNN broadcast covering air raid sirens over Kyiv, Applebee’s cheerful cowboy-themed commercial aired simultaneously in a split-screen format known as squeezeback, where ads are shown alongside live news to increase viewability.

The result? One half of the screen displayed a warzone, the other half showcased a light-hearted ad.
Viewers were shocked.

Source: CNN/Screenshot

The ad immediately went viral on social media, with many accusing Applebee’s of insensitivity or even indirectly sponsoring war. Despite issuing a prompt statement expressing concern about the war and disappointment in the broadcaster, the damage was done.
A significant investment in premium ad space ended up deeply harming the brand.


Chapter 1:
What Is Your Brand Really Associated With?

Think of a brand — now think of the model or celebrity associated with it.
For many in Korea, the pairing of Gong Yoo and KANU coffee is an iconic example. Since their campaign began in 2011, the brand and spokesperson have grown together in public perception.

That’s why brands carefully select models who align with their values and are unlikely to stir controversy.

But how much effort goes into choosing where the ad will appear?

If Applebee’s had used a high-profile celebrity in that cowboy ad, it’s likely that individual would also have faced backlash. Yet few brands spend even one-tenth as much time choosing ad placements as they do vetting their spokespeople.

Source: CHEQ, Magna & IPG Media Lab (2018)


The Data Is Clear

According to research by CHEQ, Magna, and IPG Media Lab (2018):

  • Purchase intent drops by half when ads appear near harmful content

  • Brand recommendation likelihood drops 50%

Inappropriate ad placements severely damage trust, especially when they’re linked to national tragedies, hate content, or political extremism. For public companies, such reputational risks can even impact quarterly earnings and stock prices.

As more shocking and controversial content floods platforms to chase views and revenue, the risks for brands grow — and fast.


Chapter 2:
Rethinking Priorities in Advertising

What KPIs do marketers track for video campaigns?

CPV? CPM? VTR? Conversion rate?
All are valid metrics — but here's a more important question:

If you walked into your CEO’s office and asked,
“Which matters more — protecting the brand’s value and stock price, or improving media performance KPIs?”

What do you think the answer would be?

Most CEOs would agree: brand integrity always comes first.

Advertising exists to strengthen brand image and ultimately drive growth. But ironically, it can undermine brand equity when not managed with care. Marketers who focus solely on performance numbers may unknowingly put the brand at risk.


The Call to Action:
Make Brand Safety Everyone’s Business

It's time for brands to:

  • Establish robust processes for ad placement review

  • Monitor the content context of their media buys

  • Treat brand safety as a company-wide priority, not just a marketing concern

Digital content is becoming increasingly toxic, extreme, and monetized, and platforms aren’t always transparent. Marketers must now proactively assess the environments where their ads appear — and recognize that it’s no longer just about ROI.

Brand safety isn’t optional. It’s a strategic imperative.
And protecting your brand starts with knowing where your message lives.

BrandSafety

2023. 1. 30.

PYLER: Solving Challenges in the Digital Advertising Market with AI

PYLER's AiD safeguards brands in the risky world of video advertising with AI-powered contextual targeting.
Ranked 2nd globally at CVPR 2022, PYLER proves its cutting-edge capabilities in video understanding AI.


Launching AiD – Protecting Brand Safety in the Era of Video Advertising

PYLER addresses the growing brand safety concerns in the digital video advertising space with world-class Video Understanding AI, offering contextual video marketing solutions tailored for a new era.

In 2022, PYLER proved its technological prowess at CVPR, the world’s top AI and computer vision conference, by achieving outstanding results alongside global tech giants such as Intel, Tencent, and ByteDance. Through proprietary computer vision technologies and a brand safety algorithm built on IAB (Interactive Advertising Bureau) standards, Pyler has emerged as a highly competitive player.

Recognizing the seriousness of brand safety issues in video advertising, PYLER has launched AiD, an AI-powered solution to tackle these challenges head-on. As a leader in brand safety and contextual targeting in Korea, PYLER aims to raise industry awareness and restore an advertising environment where great ads are matched with trustworthy content.


Chapter 1:
Ranked 2nd Globally at CVPR 2022

PYLER was invited as a finalist to CVPR 2022, the world's largest conference on computer vision and pattern recognition, and proudly secured 2nd place in Track 3 of the LOVEU (Long-form Video Understanding) Workshop.

This challenge tested how effectively AI models could learn from instructional videos and scripts, and guide users through step-by-step tasks. In simple terms, the AI was required to interpret tutorial videos and then provide helpful answers when a user asked questions — similar to testing how well the AI "understands" the manual.

The task was particularly challenging, and PYLER’s multimodal model—which integrates video, image, and text data—was praised for its ability to align complex questions with context and deliver user-relevant responses in sequential steps.

Dongchan Park (CTO) and Sangwook Park (ML Lead) of PYLER’s AI Context Lab shared:

“We competed against leading global corporations and research institutions with significantly fewer resources. Achieving 2nd place was meaningful—especially since we ranked 1st in Recall@3, one of the key evaluation metrics.”


Chapter 2:
AiD – Rebuilding Trust in Video Advertising

Brand safety concerns in platforms like YouTube have become severe across global markets, including South Korea. According to Pixability, one-third of total ad spend in 2021 was unintentionally spent on harmful or inappropriate content.

In one notable incident, KOBACO (Korea's only public ad agency) placed public service announcements on adult-themed YouTube channels, sparking controversy. Even major Korean corporations are often unaware of the exact content their ads are appearing alongside.

PYLER identified this market inefficiency—where 16% to 37% of ad spend is wasted on brand-damaging content—and created AiD: a 24/7 AI-powered guardian that continuously analyzes video content, flags risky categories, and blocks harmful ad placements.

Source: 2021 Pixability

While global advertisers now prioritize brand safety even above performance, many Korean advertisers remain uncertain about how to address the issue. PYLER hopes AiD will help normalize the ad ecosystem by giving advertisers peace of mind.

In the long run, AiD also aligns with privacy-first trends. It leverages contextual targeting without relying on third-party cookies, delivering efficient and effective ad performance without compromising user privacy.

BrandSafety

ContextualTargeting

VideoUnderstanding

2023. 1. 30.

Brand Safety Summit New York Recap: From Brand Safety to Brand & AI Safety

The AI Paradox We Now Face

PYLER has consistently emphasized the mission to overcome the AI Paradox—the inherent duality where AI delivers overwhelming productivity while simultaneously becoming the systemic source of threats like deepfakes and misinformation—and the broader Crisis of Trust created by the explosive rise of AI-driven content. Our goal has always been to help build the emerging Infrastructure of Trust. The recent Brand Safety Summit New York confirmed that this philosophy is no longer unique to a few early voices—it is rapidly becoming the shared vision of the global industry.

The Summit was a cooperative forum that moved beyond competition, bringing together leaders from platforms such as TikTok, Meta, and Snap; the world’s largest advertising companies—WPP, Publicis, IPG, and Omnicom—and key verification partners including IAS (Integral Ad Science) and Channel Factory. PYLER holds deep respect for the organizers' efforts in convening such a high-caliber group to share insights and build community around an increasingly complex challenge. The event served as a powerful catalyst for awareness and alignment, reinforcing our conviction that the industry’s overall standard for safety is set to rise significantly.


From Brand Safety to Brand and AI Safety: AI Changes the Scale of Safety

The core theme of this year’s Summit was the revolutionary expansion of the safety discussion. Where traditional Brand Safety centered on the limited context of “next-to” adjacency—whether an ad appears beside safe content—we have now entered an era where safety must account for the entire system in which AI participates in content creation, distribution, and recommendation.

Industry leaders made it clear that AI is fundamentally transforming the Brand Safety paradigm. And this shift is not merely a broader scope; AI is reshaping the very form, logic, and meaning of digital content itself. As a result, the question of safety and trust has moved far beyond placement management and into a deeper need to ensure the transparency and authenticity of content and the algorithms that govern it.

Brand Safety used to be about adjacency. Now, it’s about provenance, authenticity, and algorithmic integrity.


Key Insights from the Summit: Translating Safety into Tangible Performance

The sessions at the Summit made one point unmistakably clear: Brand Safety is no longer a cost or a regulatory burden — it is a strategic investment that directly drives performance.

  • The End of the Trade-Off and Contextual Intelligence:

    The era when advertisers had to choose between “safety” and “performance” is over. Speakers emphasized moving beyond the over-blocking caused by keyword-based exclusion and instead selecting brand-positive content through advanced Contextual Intelligence. When combined with audience data targeting, this approach significantly reduces wasted impressions and delivers measurable improvements in marketing performance. Safety and performance converge when context — not keywords — drives decisions.


  • Closed-Loop Measurement and the Growing Role of RMNs:

    Closed-Loop Measurement demonstrated that advertising in verified, trusted environments leads not only to higher engagement but to actual transactions. The advancement of consumer ID tracking, especially through RMNs (Retail Media Networks), marks a shift toward a world where trust directly converts into revenue. In trusted environments, impressions become outcomes.


  • Agentic AI and Irreplaceable Role of Human Judgment:

    As we enter the era of Agentic AI — systems capable of autonomously handling large parts of media buying and optimization — the Summit underscored a crucial point: human oversight remains essential. Contextual understanding and brand values still require human interpretation to ensure AI-driven decisions are aligned with brand identity. AI can optimize the path — but only humans can define the destination.


  • Auditable AI, Transparency, and Creative Leeway:

    Auditable AI — systems whose decisions can be tracked, reviewed, and verified — was highlighted as a core component of ethical governance. Speakers also emphasized that intentionally curating and managing a brand’s “Aura” while giving creators sufficient Creative Leeway strengthens both performance and authenticity. Accountability builds trust; authenticity sustains it.



PYLER's Observation: The Industrial Context Behind the Safety Discussion

While the Summit discussions focused on the ongoing challenges of the AI era, PYLER noted that these issues closely mirror past industry experiences. The UGC (User Generated Content) era of the 2010s—driven by smartphones and YouTube—democratized content creation but also led to the proliferation of low-quality content and misinformation. Brands being placed next to unsafe material and the inherent limits of human review were early warning signs that were already visible at the time.

The crisis of trust defining the AIGC (AI-generated content) era today can be seen as a reproduction of those same challenges—only now multiplied by the explosive complexity and accelerated speed introduced by AI. This historical parallel provides an essential framework for understanding the current AI safety problem and why the stakes are higher than ever.

The crisis hasn’t changed — only the acceleration has.


What PYLER Saw on the Ground: The Transparency Gap in Video Safety Technology

The overall message of the Summit indicated that the industry is well aware of the trust challenges posed by the AI era and is actively discussing potential paths toward a solution. However, one area remained conspicuously absent from clear answers: Video Safety.

While text- and image-based verification technologies are already being used at scale, there was a noticeable lack of practical examples—or even introductions—of the AI technologies and transparency frameworks required to meaningfully address safety for video content, which remains the center of digital media and the most powerful driver of emotional impact.

This gap is closely aligned with what PYLER observed at TrustCon 2025. Although participants universally acknowledged the need for AI-driven approaches to video content moderation, the absence of concrete sessions, case studies, or references to the underlying technology highlighted a major, unresolved challenge for the industry.

The industry acknowledges the problem — but video remains the unaddressed frontier.


Ultimately, the Core is 'How to Build the Infrastructure of Trust'

Observing the entirety of the Summit’s discussions, PYLER solidified one conviction: the AI era requires more than regulations or platform-level self-governance. The real imperative is to build a technological foundation capable of sustaining trust — the Infrastructure of Trust.

The accelerating speed of deepfakes and the growing sophistication of manipulated video content clearly expose the limits of existing systems. This is why the need for Multi-Modal Video Understanding AI felt even more urgent throughout the Summit.

This technology can simultaneously interpret and evaluate all elements of video — visual signals, audio cues, speech, text, behavior patterns, and narrative progression — as a unified context. In the video era, trust must begin with AI that understands video.

Trust is no longer a policy choice — it is an engineering challenge.


PYLER's Direction Post-Summit is Clearer

The Brand Safety Summit New York was not a venue for providing all the answers. However, it served as a crucial milestone — one that revealed what the industry is truly grappling with, where the pain points lie, and what critical tasks remain unresolved.

What became unequivocally clear is that AI must ultimately solve the challenges created by AI, and that Video Understanding AI holds the final key to trust. Based on the problem awareness and technological gaps underscored throughout the Summit, PYLER will focus even more intently on addressing the industry’s most critical and unaddressed challenge — Video Safety — and in doing so, help build the next stage of the Infrastructure of Trust.

The next era of trust will belong to the companies that can understand video, not just detect it.


The Message from Rob Rasko, President of the Brand Safety Summit Series

Rob Rasko, President of the Brand Safety Summit Series, emphasized that given the current challenges surrounding Brand Safety, it is time to reframe and expand the conversation into the broader discourse of “Brand and AI Safety.” This more comprehensive approach reflects the major shift in digital advertising driven by the rapid introduction of AI.

While traditional Brand Safety focused on the constraints of ad placement, adjacency, and keyword-based evaluation, Brand and AI Safety extends the discussion to all elements of media: campaign placement, suitability, target audiences, and performance. Importantly, this reframing is not an abandonment of Brand Safety but an evolution toward a more unified and effective framework—one that aligns with today’s data-driven, AI-enhanced advertising environment.

AI presents both risks and opportunities as it reshapes media placement and content creation, and the Summit’s mission is to help the industry navigate beyond the risks toward the opportunities.

His message was clear: Brand Safety isn’t being replaced — it’s being redefined for the AI era.

AdTech

BrandSafety

ContextualTargeting

DigitalAdvertising

VideoUnderstanding

2025. 11. 17.

Why Real-Time Context is a Critical Bottleneck for AI – PYLER's Take from SingleStore Now 2025

Attending SingleStore Now 2025 felt less like visiting an event and more like a conversation about the future we’re actively building. This event matters to us because its central theme—that AI's success hinges on a highly responsive, real-time context layer—is not just a future trend, but the core technical problem PYLER has been solving at petabyte scale.

The industry is now waking up to a problem we've long understood: AI models are "context-blind." Statistics from the opening keynote by SingleStore CEO Raj Verma (such as the finding that fewer than 26% of AI projects achieve ROI) underscore that legacy architectures are reaching their limits. They cannot deliver the ultra-low latency, high concurrency, and complex query support needed to provide real-time context.

This challenge is exponentially harder for unstructured video data—the most complex, data-heavy, and high-velocity medium. The keynote discussion resonated deeply with our obsessive focus on solving this high-latency data problem. Real-time context is the difference between preventing brand risk and merely reporting it after the damage is done.


Key Innovations Intersecting with Our Architectural Approach

PYLER team members join SingleStore CEO Raj Verma (front, center) and C-level executives to celebrate the SingleStore Nasdaq closing ceremony in New York City.


Each new capability introduced at the event reflected the same principles we’ve built our platform on — a shared belief in real-time, context-rich data as the foundation of AI. This alignment was clear across three key announcements, which together form a powerful toolkit for the next generation of AI applications.

1. AI/ML Functions: Bringing AI to the Data

SingleStore is embedding AI capabilities (like AI_SENTIMENT(), AI_SUMMARIZE(), and AI_CLASSIFY()) directly into the database, accessible via simple SQL. This eliminates the slow, costly ETL pipelines traditionally required to move data to an external AI model and back. For PYLER, this directly aligns with our philosophy of 'bringing compute to the data.' Moving petabytes of video data for external inference is architecturally unfeasible and slow. In-database functions allow us to enrich our contextual analysis in place, drastically reducing latency: a move from complex batch pipelines to instant, on-the-fly intelligence, and a core principle of our 'Pride-worthy Engineering.'
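A minimal sketch of what this pattern could look like from application code is below. The AI_CLASSIFY and AI_SENTIMENT function names come from the event announcements; their exact signatures, the video_segments table, and the connection details are assumptions for illustration, not our production schema.

```python
import singlestoredb as s2  # SingleStore's Python client (DB-API style usage assumed)

# Placeholder connection details.
conn = s2.connect(host="HOST", user="USER", password="PASSWORD", database="pyler_demo")

# In-database enrichment: classify recently analyzed transcript snippets without
# moving the data through an external ETL-plus-inference hop.
QUERY = """
SELECT
    segment_id,
    AI_CLASSIFY(transcript, 'brand_safe', 'sensitive', 'harmful') AS risk_label,
    AI_SENTIMENT(transcript) AS sentiment
FROM video_segments                       -- hypothetical per-segment transcript table
WHERE analyzed_at >= NOW() - INTERVAL 5 MINUTE;
"""

cur = conn.cursor()
cur.execute(QUERY)
for segment_id, risk_label, sentiment in cur.fetchall():
    print(segment_id, risk_label, sentiment)
conn.close()
```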

2. Zero Copy Attach: Agile Development on Live Data

SingleStore's new “Zero Copy Attach” feature allows developers to instantly attach smaller, isolated compute clusters to a production database without copying any data. This provides complete workload isolation and independent scaling. For PYLER's R&D, this is a massive enabler, as it allows our engineers to experiment fearlessly. We can now test new validation models or AI agents on live, production-scale data without risking production performance or incurring massive data duplication costs. This accelerates our innovation cycle from weeks to days, allowing us to test new trust models without compromising our core service.

3. Aura Analyst: The Natural Language Data Agent

The speed is only half the battle. The other half is accessibility. Aura Analyst is a new "context-aware" natural language agent system that allows business users to query data using plain English, with all queries being auditable and governable. While we may not embed this in customer-facing dashboards immediately, we see immense value in Aura Analyst for our internal operations, precisely because it aligns with our core philosophy of data democratization. We believe that access to data and the opportunity for analysis must be democratically guaranteed to everyone, not just developers. This isn't just an ideal; it's a core value for our high-speed growth. Tools like Aura Analyst can dramatically boost operational efficiency by empowering our non-developer teams to get insights without a SQL bottleneck.


Our Perspective: How We Architected for Trust

This vision of a real-time, context-aware AI future is not just theoretical for us at PYLER—it is the challenge we solve daily. We provide real-time brand safety for global brands like Samsung Electronics and LVMH.

Our previous architecture (on PostgreSQL) faced the exact problems the industry is now discussing. Our service architecture is inherently complex, mixing high-throughput transactional (OLTP) queries from real-time ad metrics with heavy analytical (OLAP) queries from our massive video analysis database. Our previous PostgreSQL-based system could not guarantee latency for this mixed workload, which directly impacted user experience. Our engineering challenge was that traditional OLAP-optimized systems are less efficient when handling high-concurrency ingestion, and OLTP-optimized systems are not designed for complex analytical joins. We were forced into a brittle system of materialized views and nightly batch jobs, which is the very definition of 'stale context.'

This is how we solved the problem: Our engineering decision was to move to an HTAP architecture. By migrating, we eliminated the core bottleneck: the join latency between live ad-metrics data and petabyte-scale video analytics. This new design allows us to serve both query patterns concurrently with radically improved performance, which has been a critical factor in dramatically improving our user experience.
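As a rough illustration of the kind of mixed workload this unlocks, the query below joins freshly ingested ad-metric rows with the large video-analysis table in a single request instead of waiting for a nightly materialized view. The table and column names are hypothetical, chosen only to show the shape of the problem.

```python
# Representative dashboard query (hypothetical schema): live OLTP ingest joined
# directly against the analytical video table, served from the same HTAP database.
DASHBOARD_QUERY = """
SELECT
    i.campaign_id,
    v.risk_category,
    COUNT(*)         AS impressions,
    SUM(i.spend_usd) AS spend_usd
FROM ad_impressions AS i        -- high-throughput, real-time ad-metrics ingest
JOIN video_analysis AS v        -- petabyte-scale video analysis results
  ON v.video_id = i.video_id
WHERE i.served_at >= NOW() - INTERVAL 15 MINUTE
GROUP BY i.campaign_id, v.risk_category
ORDER BY spend_usd DESC;
"""
```

Serving this join live, rather than precomputing it in batch, is what closes the gap between detecting brand risk and merely reporting it.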

The improvement isn't just theoretical. In our internal benchmarks, SingleStore delivered query speeds up to 100x faster for complex OLAP-style analytical queries compared to our previous partitioned PostgreSQL setup. For PYLER, that performance leap translates directly into two critical outcomes: a vastly improved, real-time experience for our user-facing analytical dashboards, and the engineering delta that separates 'brand safety' from 'brand risk.' It's the difference between catching harmful video content before an ad impression or after the damage is done. This focus on how we architect for trust is what defines us.


Beyond the Sessions: A Meeting of Minds

Leaders from PYLER and partner companies exchange insights during a networking cruise at SingleStore Now 2025. From left to right: Jaeho Lee, Posco, Jaenyun Shin, A Platform, Hyeongjun Park, PYLER, DongChan Park, PYLER, Rahul Rastogi, SingleStore, Ranjit Panigrahi, Apple, Hannim Kim, APlatform, and Jaeun Kim, PYLER.


Perhaps just as valuable as the technical sessions was the opportunity to connect with leaders from companies like Apple, Posco, Goldman Sachs, K-Bank, IBM, Kakao, and Adobe. This wasn't just networking; it was an opportunity to exchange ideas with leaders from other data-critical industries—from manufacturing and finance to platform and SaaS.

We found that when we described our unique challenge—processing petabytes of unstructured video data in real time to ensure trust and safety—it resonated deeply with other industry leaders, partners, and data executives at the conference. Whether in finance, logistics, or tech, everyone is facing their own version of the 'real-time context' problem. The conversations confirmed that PYLER is not just solving a niche problem; we are solving a universal, cutting-edge data problem at an extreme scale.


The Future: Trust Through Understanding

PYLER engineers Hyeongjun Park and DongChan Park (center and right) connect with an infrastructure engineer at SingleStore Now 2025. The engineer on the left, an attendee from a quantitative firm, discussed their work in infrastructure engineering.


We left SingleStore Now 2025 with a reinforced conviction. The industry is waking up to a problem we've been solving for years: that AI without real-time context is a liability, not an asset.

The event confirmed that our architectural foundation is sound, but our mission is what truly sets us apart. The era of AI agents is here, but it requires a new standard of validation and trust. We don't just build AI; we engineer the understanding that makes it trustworthy. We’re inspired to see partners like SingleStore building the infrastructure that helps make that possible. The future of AI will not be defined by the models themselves, but by our collective ability to safely and verifiably connect them to the real world.


This piece was written by Hyeongjun Park, Backend Engineer and X-Ops Squad Lead at PYLER.

Tech

2025. 11. 14.

End of the Foundation Model Competition: Artificial World Needs a Validation Layer

For half a decade, the world of AI revolved around a single race: who could build the biggest, smartest, and most general foundation model. It was a battle fought with billions of dollars, GPU clusters the size of small nations, and an obsession with benchmark dominance. But as the dust settles, it is becoming clear that this competition is ending not because anyone has truly won, but because the game itself has changed.

1. The Age of the Model Arms Race

In the early 2020s, AI progress was defined by scaling. The mantra was simple: more data, more parameters, more GPUs. OpenAI, Anthropic, Google, and a few others poured vast sums into foundation models that grew from billions to trillions of parameters. The field of competition was limited to companies that could spend over a billion dollars on computing resources, data, and talent. While the public narrative focused on rivalry, in practice, these models began to converge in architecture, language, and behavior.

2. The Rise of Vertical Intelligence

The next wave of value creation will come from specialized and deeply vertical intelligence, AI that understands a domain with surgical precision rather than encyclopedic breadth. Whether it is medical diagnostics, financial risk, or brand safety in video content, the competitive frontier has shifted from foundation to application infrastructure.

3. The Post Model Era

In this new phase, the differentiator is not who trains the biggest model but who can orchestrate intelligence. Companies that combine reasoning, retrieval, and domain specific data pipelines will outperform those who merely scale compute. The foundation layer will become infrastructure, much like electricity. No startup can compete by building a power plant anymore.

4. What Choices Do We Have

Since 2021, PYLER has built a purpose-built multimodal video understanding model for UGC (User Generated Content) validation and retrieval for brands. To transform from a surviving company into a thriving one, we faced two choices:

  1. Create more B2B AI Agents, which could generate explosive short term revenue.

  2. Build a validation layer for Gen AI applications.


We chose the second path. We believe it is the only way to solve a foundational problem that the world will soon suffer from. The first path, creating more B2B AI Agents for quick revenue, will inevitably be dominated and replaced by companies like OpenAI, Google, and Anthropic.
Consider the OS and Cloud eras. Operating systems had their own built-in defenders and cloud companies had security protocols, yet Okta, Wiz, and Palo Alto Networks built billion-dollar businesses by providing third-party validation for enterprises and nations.
Problems caused by humans can be solved by human capabilities. But the problems that AI will cause are an entirely different matter.

5. The Validation Layer Imperative

Just like the OS and Platform eras, third-party validation will be essential. The core value of Gen AI applications is generating outputs that meet a user's intention and provide satisfaction. However, if these applications validate themselves too aggressively, it can weaken their core generative competence.
At the same time, enterprises and nations cannot validate all AI outputs using only their internal capabilities. There is a critical shortage of infrastructure-level, third-party validators to secure their interests and safety. This is the modern equivalent of the seat belt in the automotive era.


In closing…

The end of the foundation model competition does not mean the entire competition is over. It simply means the participants in that specific race have been decided. The new winners of the next AI era will not be those who built the model, but those who understand how to build the validation layers that protect human profit, rights, and safety.


As steam once symbolized the dawn of the Industrial Revolution, code and data now mark the rise of the AI Revolution. But every revolution needs its validation layer — the mechanism that keeps progress aligned with trust and safety. (Painting – The Gare Saint-Lazare, Claude Monet (1877))

Jaeho Oh is the co-founder and CEO of PYLER.

Jaeho’s Lens

2025. 11. 7.

[Pre-TrustCon 2025 Release] Video, AI, and the Evolving Landscape of Trust & Safety

As AI-generated video content grows at an explosive rate, the challenges of ensuring digital trust and safety (T&S) have become more urgent and complex than ever. TrustCon 2025 will gather global T&S experts to tackle critical issues such as deepfakes, disinformation, and content validation in the age of AI. Pyler will participate to showcase how our Video Understanding AI addresses these challenges with scalable, multimodal solutions. Through this effort, we aim to strengthen platform integrity, improve ad trust, and help build a safer digital future.


TrustCon 2025: Why the Urgency Now?

Beyond its convenience and connectivity, our digital world casts a shadow of its own. Harmful content—hate speech, misinformation, inappropriate images and videos—erodes the integrity of online spaces and threatens the very trust we place in platforms and brands. Safeguarding users and upholding the 'trust' and 'safety' of this digital ecosystem, a field known as Trust & Safety (T&S), has become more critical than ever. T&S extends beyond simply removing harmful content, known as Content Moderation, to encompass the proactive assurance of content suitability and reliability on platforms, which we term Content Validation.

Link: TrustCon 2025

TrustCon 2025 convenes leading T&S professionals from around the globe to diagnose the complex challenges emerging in our rapidly changing digital environment and to forge innovative solutions. TrustCon is not merely a conference; it’s a unique global forum dedicated to sharing real-world cases, learning from past failures, and engaging in deep discussions on T&S's most pressing issues. These include AI ethics, deepfakes and synthetic media manipulation, child safety, disinformation campaigns, and navigating evolving regulatory compliance. It is a critical arena for redefining digital platform responsibilities and collectively shaping a safer online world.


The Age of Video and AI: Unveiling T&S’s New Pandora’s Box

A particularly alarming trend impacting the T&S landscape is the explosive growth of video content coupled with the rapid advancement of Artificial Intelligence (AI). Cisco predicts that by 2027, an astounding 89% of global IP traffic will be video content. This sheer volume intensifies the burden of content moderation and exponentially escalates the complexity of T&S management.

The Invisible Threat: How AI-Generated Harmful Content Erodes Trust and Drains Ad Budgets

Furthermore, the proliferation of AI, especially generative AI, presents a double-edged sword. Sophisticated deepfake videos designed to spread misinformation and new forms of harmful content, such as AI-generated images, pose unprecedented challenges for existing T&S systems to detect and block. Traditional content moderation methods, relying solely on keywords or metadata, prove utterly inadequate in discerning the nuanced context or hidden intent within video content. This limitation directly impedes effective Content Validation, ultimately leading to severe repercussions. When brands' advertisements appear alongside inappropriate content, it erodes brand trust. Indeed, 64% of consumers report losing trust in a brand after encountering its ads next to unsuitable content. As traditional methods fail to cope with an exponentially increasing volume of AI content, this is no longer a challenge that can be overlooked.


Pyler Leadership’s Vision: Unlocking New Possibilities in the Complex World of Video Content Management

Pyler's leadership team, including our CEO, will be actively participating in TrustCon 2025. Our goal is to gain a deeper understanding of the profound challenges faced by content creators and platforms regarding video T&S, and to explore how Pyler can contribute to resolving these critical issues. Through direct engagement with T&S experts at TrustCon, we aim to reaffirm our conviction that AI can tackle the complexities of video content management far more effectively than traditional methods. Moreover, we seek to validate that Pyler possesses the unique technological capabilities to effectively address these formidable challenges today.

Pyler is laser-focused on developing and advancing our proprietary Video Understanding AI model, designed to comprehend video content much like a human would. This sophisticated model leverages a multimodal approach, holistically analyzing video, text, and audio to discern not just superficial elements, but also the nuanced visual, auditory, and contextual meanings within content.
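
As a simplified illustration of that multimodal idea, per-modality signals can be fused into a single judgment so that a harmless-looking frame track cannot mask harmful narration. The sketch below is hypothetical throughout — the interfaces, weights, and scores are illustrative, not Pyler’s production models:

```python
# Minimal, hypothetical sketch of multimodal fusion: per-modality risk scores are
# combined so that one "clean" modality cannot mask a harmful one.
from dataclasses import dataclass

@dataclass
class ModalitySignal:
    modality: str       # "visual", "text", or "audio"
    risk_score: float   # 0.0 (clearly safe) .. 1.0 (clearly harmful)
    confidence: float   # how much weight this modality's model deserves

def fuse_signals(signals: list[ModalitySignal]) -> float:
    """Confidence-weighted average of per-modality risk scores."""
    total_weight = sum(s.confidence for s in signals) or 1.0
    return sum(s.risk_score * s.confidence for s in signals) / total_weight

if __name__ == "__main__":
    # A video whose frames look harmless but whose narration is sensational:
    signals = [
        ModalitySignal("visual", risk_score=0.1, confidence=0.9),
        ModalitySignal("text",   risk_score=0.8, confidence=0.7),
        ModalitySignal("audio",  risk_score=0.7, confidence=0.6),
    ]
    print(f"fused risk: {fuse_signals(signals):.2f}")  # well above the visual-only score
```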

Our commitment to scale is evident: Pyler is already processing over 300 million videos, and our daily processing volume is rapidly increasing. This extensive experience with large-scale data fuels the continuous evolution of Pyler's AI model, making it faster, more accurate, and more scalable in analyzing and understanding video content. As Pyler’s Video Understanding AI model advances, it will revolutionize the efficiency of vast-scale Content Moderation and dramatically enhance Content Validation capabilities—enabling precise judgment of content suitability even within complex contexts. We firmly believe this will not only improve ad efficiency but also fundamentally contribute to resolving T&S challenges across the board, ultimately fostering a safer and healthier digital ecosystem for society.


Beyond TrustCon 2025: Pyler's Commitment to a Trusted Digital Future

TrustCon 2025 serves as a crucial platform for Pyler to deepen our understanding of the T&S landscape and explore collaborative opportunities with global experts. Through this event, we will present concrete solutions to the urgent challenges in video content T&S, demonstrating how our unique AI technology contributes to building a more reliable digital ecosystem. 

Pyler is dedicated to leveraging our Video Understanding AI technology to create social value by ensuring the safety and trustworthiness of online content. We invite you to join us on this new journey, starting at TrustCon 2025, as we strive to build a more secure digital future.

PYLER

TrustAndSafety

2025. 7. 22.

The Invisible Threat: How AI-Generated Harmful Content Erodes Trust and Drains Ad Budgets

AI-generated fake videos are spreading rapidly, misleading vulnerable viewers like the elderly. These videos evade traditional filters, causing ads to appear on harmful content and wasting ad spend. A healthier ad ecosystem requires better content detection and clearer insight into where ads are placed.


In today's digital landscape, video content has become an indispensable part of our daily lives, projected to make up 80-90% of global internet traffic by 2027. Yet, lurking in the shadows of this vast digital space is a new and unsettling threat: harmful or deceptive content generated by artificial intelligence (AI) technology. Recent reports of "AI fake documentaries" circulating on online platforms, with outlandish titles like "Man Who Impregnated 32 Women" or "50-Something Korean Man Impregnated Three Mongolian Beauties," reveal a shocking reality.


Source: YouTube

These videos, crafted by combining AI image generators, synthesized narration, and provocative captions and thumbnails, are racking up millions of views despite their preposterous narratives. The alarming issue is that some elderly viewers are mistaking these fabricated videos for real events. Comments such as "Lucky man, I'm envious" and "Solution to population decline" underscore the severity of this misunderstanding. This phenomenon cunningly exploits the low digital literacy among the elderly and their vulnerability to misinformation. Furthermore, the societal impact of AI-generated content cannot be ignored, as these videos are even consumed in public spaces, causing discomfort to those nearby. The spectrum of harm these videos inflict is vast: they present themselves as "life wisdom" or "life lessons" while also incorporating racist tropes, excessive sexual objectification, and distorted fantasies about age gaps.


The Proliferation of AI-Generated Content and the Advertiser's Dilemma

The unchecked proliferation of such AI content presents advertisers with a significant, often unseen dilemma. In the digital advertising market, the "context" in which an ad appears remains a challenging issue for many advertisers. Traditional brand safety measures have primarily relied on keywords or metadata, but AI-generated content cleverly sidesteps these filters. This is because even sensational content can be disguised with ordinary titles and descriptions, or by subtly rephrasing specific words to bypass filtering mechanisms. Moreover, certain AI audio novels, which have minimal visual harmfulness, are difficult to detect with existing visual-centric filtering systems. 
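
To make that gap concrete, here is a minimal sketch of why metadata-level filtering passes content that only a context-level analysis would catch; the title, tags, and blocklist are invented for illustration:

```python
# Hypothetical illustration: a metadata-only filter versus what the content actually says.
BLOCKLIST = {"violence", "explicit", "gambling"}

def keyword_filter(title: str, tags: list[str]) -> bool:
    """Return True if the uploader-supplied metadata alone looks unsafe."""
    text = (title + " " + " ".join(tags)).lower()
    return any(term in text for term in BLOCKLIST)

# An AI-generated video disguised with an ordinary title and wholesome tags:
title = "Life lessons from an ordinary man"
tags = ["life wisdom", "documentary"]
transcript_excerpt = "...the man who impregnated thirty-two women..."  # what the narration says

print(keyword_filter(title, tags))  # False: the metadata passes, the content slips through
# Catching this requires models that read the transcript, audio, and frames in context,
# not just the words the uploader chose to expose.
```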

This opaque and imprecise video inventory environment directly leads to financial losses for advertisers. According to an analysis by Pyler utilizing a Video Understanding AI model on approximately 10,000 campaigns and over 100 million ad placements, it's estimated that a typical brand spends 20% to 30% of its ad budget on video ads placed within unsafe or unsuitable content. This exposure to harmful videos directly translates into wasted ad spending and brand safety concerns for advertisers. Beyond mere budgetary waste, sponsoring creators of harmful videos can draw criticism regarding a company's corporate social responsibility (CSR). Ultimately, this can damage brand value and negatively impact long-term profitability.
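
To put the 20% to 30% figure in concrete terms: a hypothetical advertiser spending $1 million a month on video ads would see roughly $200,000 to $300,000 of that budget land on content it would never knowingly sponsor.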

The advancement of AI technology is a double-edged sword. While it offers groundbreaking opportunities, it also serves as a tool for mass-producing harmful or deceptive content. Now, more than ever, we must clearly recognize what to trust, how to protect ourselves in the "fake world" created by AI, and precisely where our valuable advertising budgets are being spent. Acknowledging and understanding these complex, multi-layered issues is the crucial first step toward building a healthy and responsible digital advertising environment.

BrandSafety

TrustAndSafety

2025. 7. 17.

Accelerating Video Understanding for ad generation with Visual AI Agents

PYLER supercharges video ad safety with NVIDIA's DGX B200 and AI Blueprint for smarter, safer placements.
Discover how PYLER's real-time video understanding helps brands like Samsung, BYD, and Hana Financial thrive.


At PYLER, we leverage our Video Understanding AI platform, which deploys visual AI agents to ensure brand safety: it verifies that ads are never displayed alongside inappropriate videos and places them in more contextually relevant settings, protecting brand value and enhancing campaign effectiveness.

With the explosive growth of generative AI, the sheer volume of content uploaded to video platforms has skyrocketed. In such an environment, displaying ads next to content that’s not brand-aligned not only increases the marketing cost of reaching the right audiences, but also presents a risk of negative brand impressions and potential reputational damage.

Video Understanding AI is crucial for effectively managing and validating content in an era of massive, high-speed uploads. When properly implemented, it becomes a key asset in safeguarding and enhancing a brand’s overall value.

Video content processing is significantly more complex than image or text processing, demanding substantial computational resources from preprocessing to inference. To handle this real-time influx of video content effectively, a high-performance pipeline utilizing optimized computing resources is essential.

In this post, we’ll discuss how PYLER addresses these challenges by adopting NVIDIA AI Blueprint for video search and summarization (VSS) in conjunction with NVIDIA DGX B200 systems. We’ll explore how we pair NVIDIA software and accelerated computing with PYLER’s proprietary Video Understanding AI to help ensure brand safety at scale.


PYLER Boosts Content Safety with NVIDIA AI Blueprint for VSS and NVIDIA AI Enterprise

PYLER automatically filters, classifies, and analyzes large-scale video data to help advertisers run their campaigns safely and efficiently. To accomplish this, PYLER selected NVIDIA AI Blueprint for VSS, built on the NVIDIA AI Enterprise software platform, offering enterprise-grade support, advanced security measures, and continuous, reliable updates.

NVIDIA AI Enterprise delivers innovative blueprints and commercial stability, enabling PYLER to deploy secure, scalable AI workflows and keep them continuously maintained.

Key advantages provided by VSS Blueprint include:

  1. High-Quality Video Embeddings

    • State-of-the-art embedding models are essential for capturing rich contextual and semantic information from videos. PYLER integrates the NVIDIA NeMo Retriever nv-embed-qa-1b-v2 model for minimal information loss during the embedding and captioning process.

  2. Scalable Infrastructure for training custom VLMs and Large-Scale Video Processing

    • NVIDIA DGX B200 accelerates custom VLM training and delivers up to 30x faster performance and best-in-class energy efficiency compared to its predecessor. 

    • With the immense volume of video content uploaded daily, GPU-based parallel processing is critical for scalability. PYLER utilizes VSS blueprint’s chunk-based parallel processing and GPU node expansion to dynamically distribute workloads and process video data in real-time. We look forward to enabling this on NVIDIA DGX B200 for inference.

  3. Flexible Verification and Search via RAG

    • PYLER rapidly searches and verifies vast amounts of video embeddings, leveraging Context-Aware Retrieval-Augmented Generation (RAG). This supports automated summarization, indexing, Q&A, and content validation tasks, significantly improving inference speed, accuracy, and context-awareness.
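
As a rough illustration of that retrieval step, per-chunk embeddings can be indexed and searched to locate and verify specific moments. The vectors, chunk IDs, and captions below are toy values; the production pipeline relies on the embedding model named above and the VSS Blueprint rather than this hand-rolled index:

```python
# Toy retrieval sketch over per-chunk video embeddings (hand-rolled cosine search;
# vectors, chunk IDs, and captions are invented for illustration).
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# chunk_id -> (embedding, caption), as produced by the ingestion pipeline
index = {
    "video123#00:00-00:30": ([0.9, 0.1, 0.0], "car review intro, calm studio setting"),
    "video123#00:30-01:00": ([0.1, 0.8, 0.2], "graphic crash footage, alarmed narration"),
}

def retrieve(query_embedding: list[float], top_k: int = 1):
    """Return the top_k chunks most similar to the query embedding."""
    ranked = sorted(index.items(),
                    key=lambda item: cosine(query_embedding, item[1][0]),
                    reverse=True)
    return ranked[:top_k]

# "Find moments resembling graphic accident footage" (query embedding is hypothetical)
for chunk_id, (_, caption) in retrieve([0.0, 0.9, 0.3]):
    print(chunk_id, "->", caption)
```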


Expected Outcomes for PYLER

With these integrations, PYLER’s Video Understanding AI software can provide:

  1. Maximized Brand Safety

    • Precisely filter, classify, and recommend safe ad placements, ensuring ads never appear next to inappropriate or harmful content.

  2. Enhanced Brand Trust

    • Provide advertisers with robust and reliable AI models, fostering stronger viewer trust. As brands gain consumer confidence, PYLER’s credibility as an industry leader grows in parallel.

  3. Improved Data Pipeline Efficiency

    • Deploy automated, GPU-accelerated workflows optimized at a scale far beyond traditional MLOps pipelines, significantly increasing operational efficiency and throughput.

  4. Faster Service Launch

    • Leverage NVIDIA’s cutting-edge technology stack, including the VSS Blueprint, NVIDIA Metropolis, NVIDIA Dynamo-Triton (formerly Triton Inference Server), NVIDIA NeMo Retriever, and DGX B200 systems, enabling innovative enterprises to quickly launch and scale AI-powered video services with fewer resource constraints.


Customer Success Stories

1. Samsung Electronics: PYLER AiD Redefines Brand Safety Standards for Product Branding Campaigns

  • As a leading global brand, Samsung Electronics faced challenges protecting its brand from appearing alongside misaligned content on digital platforms like YouTube.
    By integrating PYLER AiD’s real-time AI video analysis into its ad campaigns, Samsung Electronics automatically blocked ads from running on content categories that threaten brand values under global standards, such as violence, explicit content, hate speech, and fake news. The AiD dashboard ensured brand safety and trust with transparent real-time monitoring and performance tracking.

  • Results:

    • Significant Reduction in Harmful Content Exposure: PYLER AiD reduced ad exposure to sensitive or inappropriate content that could threaten brand value by 77%, setting a new standard for brand protection.

    • Optimized Ad Budget Spend: Preventing ads from appearing on unsuitable content allowed Samsung to focus its marketing budget on safe, high-value, brand-aligned placements, greatly increasing advertising effectiveness.

    • Enhanced Brand Trust and Value: Proactively eliminating potential brand crises and consistently delivering a safe and trustworthy brand image to consumers successfully defended and strengthened brand reputation and customer trust.

2. BYD: PYLER Ensures a Successful Korean Market Entry Through Brand Protection and High-Intent Moment Targeting

  • As a global leader in electric vehicles, BYD faced the critical task of successfully launching its first model in Korea, the 'Atto 3'. It was essential to create a positive first impression while proactively managing the risk of ads appearing alongside potentially negative content (e.g., critical reviews, defect discussions) that could harm the new brand's image upon market entry.

  • PYLER's AI-Powered Contextual Targeting and Brand Suitability Management technology precisely targeted BYD ads to positive or neutral YouTube content about EVs and small SUVs, such as vehicle reviews, test drives, and competitor comparisons. Simultaneously, the AI excluded negative videos by analyzing content tone and context. This ensured BYD ads appeared only in positive, relevant, high-intent moments aligned with potential customer interest, achieving contextual targeting and brand suitability at the same time.

  • Results:

    • Maximized Exposure in Brand-Friendly Contexts: Ad placements within safe, positive, and relevant EV/SUV video contexts increased more than 100x compared to previous methods, delivering high relevance and brand suitability to potential customers.

    • Significantly Boosted Potential Customer Interest: By effectively filtering out negative noise and concentrating ads on highly relevant content, BYD captured genuine potential customer interest, driving click-through rates (CTR) nearly 4x higher than baseline campaigns.

    • Successful Market Entry & Enhanced Brand Image: PYLER's integrated AI solution effectively managed the risks associated with new market entry, built a positive brand image, and played a key role in BYD's successful initial launch in the Korean market, receiving over 1,000 pre-order applications in the first week.

3. Hana Financial Group: PYLER AiM Optimizes Performance Across Diverse Campaigns with Automatic Video Placement Recommendation and Targeting

Hana Financial Group, one of South Korea’s largest financial holding companies, manages a diverse portfolio including banking, credit cards, securities, and asset management. Facing the challenge of running multiple YouTube campaigns with distinct goals (attracting public pension accounts, providing retirement pension information, and promoting a public service announcement against illegal gambling), the group needed to reach targeted audiences within relevant content contexts while achieving varied KPIs such as conversion rates, ad efficiency, and user engagement.

To address this, Hana Financial Group implemented PYLER AiM, an AI-powered contextual targeting solution. PYLER’s AI video analysis enabled precise targeting of specific YouTube content moments aligned with each campaign’s objective. For example, the Public Pension campaign focused on retirement planning and celebrity ambassador content, while the Retirement Pension campaign targeted videos about post-retirement finance. The Anti-Gambling PSA featured esports star Faker and targeted gaming and esports content popular with younger viewers, maximizing engagement and campaign effectiveness.

  • Results:

    • Optimized Performance for Each Campaign Goal:

      • (Public Pension) Achieved high average conversion rates exceeding 1.5%, demonstrating effective customer acquisition, with specific target groups surpassing an outstanding 3.6% conversion rate.

      • (Retirement Pension) Showcased dramatic improvements in ad efficiency compared to general targeting, with click-through rates (CTR) increasing more than eightfold and cost per click (CPC) reduced by over 70%.

      • (Public Service Ad) Successfully connected the featured celebrity 'Faker' with fan interests, producing not only a noticeable uplift in CTR but also high user immersion: average view duration per impression and 100% video-completion rates doubled compared to general campaigns. Furthermore, it reached more unique users efficiently, with average frequency halved.

    • Accurate Target Channel Reach: Across all campaigns, PYLER's contextual targeting concentrated ads within highly relevant channels, demonstrating significantly better reach within key target channels compared to general targeting. This confirmed the effective allocation of the ad budget to the most suitable potential customers.


Conclusion and Future Outlook

By adopting agentic AI through NVIDIA AI Blueprint for video search and summarization and DGX B200 systems, PYLER aims to strike a balance between protecting brand value and maximizing advertising efficiency in a market flooded with video content.

With large-scale GPU parallel processing, efficient embeddings, and Context-Aware RAG for search and verification, we will be able to detect negative content almost in real time and reliably adhere to brand guidelines.
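
The following sketch shows chunk-level parallelism in its simplest form, using only the Python standard library. The per-chunk logic is a placeholder, and the real deployment distributes this work across GPU nodes through the VSS Blueprint rather than local processes:

```python
# Chunk-level parallelism in miniature: analyze chunks concurrently, then flag the
# video on its riskiest chunk. The per-chunk scoring is a placeholder.
from concurrent.futures import ProcessPoolExecutor

def analyze_chunk(chunk_id: str) -> tuple[str, float]:
    """Stand-in for per-chunk inference; returns (chunk_id, risk_score)."""
    risk = 0.9 if "crash" in chunk_id else 0.1  # placeholder scoring
    return chunk_id, risk

def analyze_video(chunk_ids: list[str]) -> float:
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(analyze_chunk, chunk_ids))
    return max(risk for _, risk in results)

if __name__ == "__main__":
    chunks = ["vid42#intro", "vid42#crash-scene", "vid42#outro"]
    print(f"video risk: {analyze_video(chunks):.1f}")  # 0.9, driven by the riskiest chunk
```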

Going forward, PYLER plans to continue actively harnessing NVIDIA’s latest AI technology to deliver fast, high-quality services that connect video content safely and appropriately across various domains.

AdTech

ContextualTargeting

PYLER

VideoUnderstanding

2025. 6. 17.

PYLER CEO Oh Jaeho Named to Forbes Asia’s '30 Under 30' List

PYLER CEO Oh Jaeho named to Forbes Asia’s ‘30 Under 30’ for reshaping video advertising with AI.
Recognized for leading PYLER’s rise in brand safety innovation across global markets.


Recognized as a Young Leader Shaping the Future of AdTech

In May 2025, Oh Jaeho, Co-Founder and CEO of PYLER, was named to the Forbes Asia “30 Under 30” list in the Marketing & Advertising category. The list celebrates young leaders across Asia who are transforming their industries, and Oh’s selection marks a significant milestone for Korea’s emerging AdTech ecosystem.


Revolutionizing Ad Placement in Video: PYLER’s AiD

Founded in 2021, PYLER set out to solve a long-standing issue in the video advertising industry: brand safety. The company’s flagship solution, AiD, is an AI-powered platform that ensures ads appear in brand-safe video environments—particularly on platforms like YouTube.

AiD analyzes the content of videos in real time, evaluating tone, context, and messaging to determine whether an ad should be placed. By automating what previously required manual review by agencies and marketers, AiD not only protects brand image but also enhances ad effectiveness at scale.

Today, PYLER works with global brands like Samsung, Bulgari, and BYD, as well as leading advertising agencies including Cheil Worldwide and Innocean.


Proven Growth and Technology with $24M in Total Funding

To date, PYLER has raised a cumulative 34 billion KRW (~$24 million) in funding. In July 2024, the company secured a 22 billion KRW series round from top-tier investors such as Korea Development Bank (KDB), KT Investment, Stonebridge Ventures, and SV Investment, validating its technical excellence and market potential.


Beyond the Forbes List: A Signal of What’s to Come

Being named to the Forbes “30 Under 30” list is more than an accolade—it's a recognition of the transformative work being done in a rapidly evolving digital ad landscape. PYLER continues to lead this transformation by building AI solutions that make advertising smarter and safer.

Oh Jaeho’s inclusion is a strong signal of the global impact PYLER aims to achieve in the years ahead. As the company moves forward, it remains committed to setting new standards in video-based advertising and delivering AI solutions that marketers and brands can trust.


You can view the full list of honorees in the Marketing & Advertising category of Forbes 30 Under 30 – Asia 2025 at the link below:
Forbes 30 Under 30 – Asia 2025

To learn more about one of this year’s honorees, PYLER CEO Jaeho Oh, visit his Forbes profile here:
Jaeho Oh on Forbes

AdTech

PYLER

2025. 5. 15.

PYLER Becomes First in Korea to Deploy NVIDIA DGX B200

PYLER becomes the first in Korea to adopt NVIDIA’s DGX B200, redefining AI infrastructure for AdTech.
With 30x faster performance, PYLER accelerates brand safety, contextual targeting, and global AI leadership.


Pioneering AI Infrastructure with Next-Gen NVIDIA Hardware

On February 27, 2025, PYLER became the first company in Korea to deploy NVIDIA’s latest AI accelerator, the DGX B200, marking a major leap in the company’s AI infrastructure. As NVIDIA’s newest generation AI system, the DGX B200 has drawn significant global attention since its release. PYLER's adoption—preceding even major institutions both locally and abroad—highlights the company's position at the forefront of AI innovation.

A ceremony was held to commemorate the milestone, symbolizing PYLER’s renewed commitment to redefining advertising technology through AI.


DGX B200: A New Benchmark in AI Performance

Equipped with NVIDIA’s next-generation Blackwell GPU architecture, the DGX B200 offers up to 30x improved computational performance compared to its predecessor. It delivers industry-leading energy efficiency and is purpose-built for training and inference of large-scale, video-centric AI models.

For PYLER—whose core business involves real-time video understanding across vast volumes of online content—the DGX B200 is a game-changing addition that will significantly boost its technical capabilities.


Powering the Next Generation of AI AdTech

The DGX B200 will supercharge PYLER’s core AI solutions across the board, especially in three key areas:

  • Brand Safety: Faster and more accurate detection of harmful content and ad placement control

  • High-intent Moment Targeting: Enhanced precision in real-time targeting based on video context

  • Contextual Ad Optimization: Smarter prediction of user responses and ad relevance

PYLER’s flagship solution, AiD, has already expanded its footprint both domestically and globally, collaborating with major advertisers like Samsung Electronics, Nongshim and KT Corporation. This infrastructure upgrade is expected to greatly enhance customer experience and maximize advertising performance.


Building World-Class Video Understanding for Advertising

Jihun Kim, CTO of PYLER, stated:
“With the DGX B200—the first of its kind in Korea—we’ve laid the foundation for building world-class video understanding capabilities tailored to the advertising domain. We will continue to push the boundaries of AI innovation to ensure that brand messages are delivered in the safest and most contextually relevant environments possible.”

Under its strategic partnership with NVIDIA, PYLER is focused on leading the future of AI-powered advertising—from content moderation to contextual AdTech. The adoption of DGX B200 is more than just a hardware upgrade—it marks a pivotal step in realizing PYLER’s vision of raising both the quality and trust of digital advertising through cutting-edge AI.

ContextualTargeting

PYLER

VideoUnderstanding

2025. 2. 27.

PYLER Secures $16.9M in Series Funding to Advance AI Brand Safety Solutions, Bringing Total Investment to $26.2M

PYLER raises $16.9M to scale its AI-powered brand safety solution, AiD.
Trusted by top brands, PYLER aims to lead globally in video understanding and ad transparency.


Recognized for Its Cutting-Edge AI Brand Safety Technology

PYLER, a leading provider of AI-powered brand safety solutions, has successfully secured $16.9M in its latest series funding round. Key investors include Stonebridge Ventures, Korea Development Bank, SV Investment, and KT Investment—underscoring strong confidence in PYLER’s technology and growth potential.


‘AiD’: Helping Brands Regain Control Over Ad Placement

PYLER’s flagship solution, AiD, leverages AI to analyze the context of YouTube videos where brand ads are placed, blocking exposure to harmful or inappropriate content. The solution protects brands from being associated with adult content, hate speech, fake news, and fringe religious material that could damage brand reputation.

With millions of new videos uploaded to YouTube every day, manual review is no longer feasible. In this context, investors recognized AiD’s value in empowering advertisers with greater control and visibility over where their ads appear.


Brand Safety Directly Impacts Consumer Trust and Behavior

According to a joint report published in January by PYLER and the Korea Broadcast Advertising Corporation (KOBACO), 89.5% of consumers said brand safety is important, while 96% stated they would not purchase from advertisers who neglect it. These figures clearly show that brand safety plays a critical role in shaping both consumer trust and purchasing behavior.


On Track to Become a Global Leader in Video Understanding

“Our solution has already been tested and trusted by major Korean brands like Samsung, Hyundai Motor Company, Cheil Worldwide, and Innocean,” said Jaeho Oh, CEO of PYLER. “With this new funding, we aim to establish ourselves as one of the most competitive players in video understanding on a global scale.”

Powered by advanced AI and real-time content analysis, PYLER remains committed to building a trustworthy advertising environment—for both brands and consumers.

AdTech

PYLER

2024. 7. 29.

Brand Safety in Korea: How It Falls Behind Global Standards

Korea still lags behind global standards in brand safety — but change is on the horizon.
Explore how international frameworks and AI solutions like AiD can help close the gap.


A Stark Contrast in Brand Safety Standards

In Korea, discussions around brand safety are still in their early stages. Even before tackling brand protection, the country lacks a fundamental legal framework to systematically develop the advertising industry.

Recently, there has been renewed interest in the Advertising Industry Promotion Act, reintroduced in Korea’s 22nd National Assembly. We hope this will become a stepping stone toward a more structured and responsible advertising ecosystem.

In contrast, many global markets have long recognized the importance of brand safety — not only to protect advertiser reputations, but also to reduce the monetization of harmful or inappropriate content. Let’s take a look at how leading countries are addressing brand safety.


Key International Organizations Leading Brand Safety Efforts

  • IAB (Interactive Advertising Bureau)
    Establishes industry standards for digital advertising in the U.S., including terminology, ad formats, pricing metrics, and implementation guidelines.

  • ARF (Advertising Research Foundation)
    Works with ESOMAR (European Society for Opinion and Marketing Research) to standardize ad effectiveness measurement globally.

  • MRC (Media Rating Council)
    Accredits media rating services and ensures quality and transparency in audience measurement.

  • GARM (Global Alliance for Responsible Media)
    A cross-industry initiative led by the World Federation of Advertisers (WFA). Includes major global advertisers, agencies, media companies, and ad tech providers working together to improve brand safety in digital media.

  • TAG (Trustworthy Accountability Group)
    Develops guidelines and certifications to combat ad fraud and brand risk, while collaborating with governments to localize global standards.

  • BSI (Brand Safety Institute)
    Provides education and certification for professionals focused on brand safety and digital advertising ethics.

  • APB (Advertiser Protection Bureau)
    A U.S. initiative led by the 4As (American Association of Advertising Agencies), focused on empowering ad professionals to assess their understanding of brand safety through tools like the Brand Safety Self-Assessment.


Why Brand Safety Is More Than Just a Brand Issue

Brand safety isn’t just about reputation management — it’s a critical part of maintaining a healthy digital ecosystem. By cutting off ad revenue from harmful or misleading content, the industry can help prevent the commercialization of toxicity and crime.

We believe Korea’s advertising market has the potential to mature into one that values both brand integrity and content responsibility. This shift will require not only updated regulations, but also industry-wide awareness and technical investment.

At PYLER, we’re committed to using our AI-powered video understanding technology to contribute to a cleaner, safer digital environment. We aim to challenge the uncomfortable realities of today’s content economy — and build solutions that move the industry forward.

BrandSafety

DigitalAdvertising

2024. 7. 3.

AiD: Solution That Protects Brand Trust

AiD by PYLER is a real-time brand safety solution that uses multimodal AI to automatically detect and block harmful content, improving both brand trust and ad performance. Built to meet global standards, it ensures brand messages are delivered in safe and effective environments.


PYLER’s New Standard for Brand Safety in Digital Advertising

Even as digital advertising becomes more sophisticated, many brands still face a fundamental challenge—the risk of ads appearing next to inappropriate content.

Violent, sexually explicit, politically or religiously biased, and hateful content can seriously damage brand perception. According to a global consumer survey, 64% of respondents said they lost trust in a brand after seeing its ad next to inappropriate content.

To solve this problem at the technological level, PYLER developed AiD, a real-time brand safety solution that automatically detects and blocks sensitive content in video-based ad environments—ensuring that brand messages are delivered in safe and effective contexts.


Fighting Fake News and Content Farming

One of AiD’s core priorities has been identifying and blocking fake news and manipulative content formats, such as clickbait or "storytime"-style videos that distort facts or spread false narratives.

On YouTube, this issue extends to “content farming” or “cyber re-uploaders”—channels that repurpose sensational material to exploit algorithms for visibility and revenue. When ads appear on these types of videos, brands can unintentionally become associated with misinformation or social division.

To prevent this, AiD excludes related content under its “Controversial Issues” category. Its AI models are continuously updated to respond to evolving content types and social issues, ensuring scalable and responsive brand protection.


Real-Time Filtering with Multimodal AI

AiD goes beyond basic keyword filtering. It uses multimodal AI, which analyzes both visual and textual elements of videos simultaneously. Through PYLER’s proprietary Vision Analyzer and Text Analyzer models, AiD classifies and blocks content related to sexual content, hate speech, political or religious bias, and controversial or sensationalized “storytime” narratives, among other categories.

AiD also supports custom filter criteria, allowing campaign-specific brand safety rules. As a result, advertisers can instantly assess content risk levels and ensure that their ads are placed only in safe, brand-aligned environments.
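
The sketch below shows how campaign-specific rules of this kind could sit on top of per-category model scores. The category names, thresholds, and function signature are hypothetical illustrations, not AiD’s actual taxonomy or interface:

```python
# Hypothetical sketch of campaign-specific filtering rules layered on top of
# per-category classifier scores.
DEFAULT_BLOCKED = {"sexual", "hate_speech", "political_bias", "controversial_storytime"}

def should_block(category_scores: dict[str, float],
                 blocked_categories: set[str] = DEFAULT_BLOCKED,
                 threshold: float = 0.5) -> bool:
    """Block the placement if any blocked category exceeds the campaign's threshold."""
    return any(category_scores.get(cat, 0.0) >= threshold for cat in blocked_categories)

# Scores as they might come from the vision and text analyzers for one video
scores = {"sexual": 0.02, "hate_speech": 0.04, "controversial_storytime": 0.81}

print(should_block(scores))                                 # True under the default rules
print(should_block(scores, blocked_categories={"sexual"}))  # False for a narrower campaign rule
```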


Proven Performance in the Field

AiD has already delivered tangible results in live campaigns:

  • 97% reduction in ad exposure on sensitive content

  • 15% improvement in ad performance (e.g., CVR)

  • 639% reduction in wasted ad spend on high-risk content

In one case, a brand reduced sensitive-content ad spend from 16.1% to just 2.1%, dramatically improving both efficiency and return on investment. AiD delivers the rare combination of trust protection and performance enhancement.


Built for Global Brand Safety Standards

AiD isn’t just functional—it’s also compliant with global brand safety guidelines.

PYLER is the first Korean company officially registered with the IAB Tech Lab, and AiD evaluates and filters content based on IAB’s globally recognized standards. These guidelines are used by the world’s top advertisers, making AiD a technically verified and transparent solution.


Globally Recognized Technology

AiD is powered by PYLER’s proprietary video AI, which has been internationally acclaimed:

  • CVPR 2022: 2nd place globally in Video AI (competing with Intel, Tencent, ByteDance, etc.)

  • CVPR & ICCV 2023: Research accepted by the world’s top AI conferences

  • 2024: Selected for both NVIDIA and AWS Global Startup Programs

PYLER continues to refine its technology in collaboration with global tech leaders.


Redefining the Basics of Advertising

In today’s digital landscape, advertising is no longer just about what you say—it’s about where you say it.

The context in which ads appear now directly impacts brand trust and consumer decision-making. AiD eliminates the need for manual review or guesswork. It enables advertisers to automatically avoid risky placements and run campaigns with both safety and efficiency in mind.

Now is the time for brands to make a strategic, technology-driven choice—to speak only in spaces that match their values. At PYLER, we are committed to creating a safe space for brands to communicate confidently, backed by AI and built for the future.



Image Source: AiD Dashboard

BrandSafety

TrustAndSafety

2024. 4. 4.

EU’s Digital Services Act: Heavy Fines for Failing to Moderate Harmful Content

The EU’s Digital Services Act is reshaping online accountability — with massive fines for non-compliance.
As platforms scramble to moderate harmful content, brands must rethink where their ads truly belong.


What Is the Digital Services Act (DSA)?

The Digital Services Act (DSA) is a comprehensive EU regulation designed to hold online platforms accountable for the spread of illegal and harmful content — such as fake news, hate speech, and child exploitation. Platforms must remove such content swiftly and objectively, label AI-generated content, and ban targeted advertising based on sensitive data such as religion, sexual orientation, or content aimed at children and minors.

Failure to comply can result in fines of up to 6% of global annual revenue.

Source: Naver Encyclopedia
Companies affected: Google, Bing, YouTube, Facebook, X (formerly Twitter), Instagram, TikTok, Wikipedia, Apple, AliExpress, LinkedIn, and more
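
For a sense of scale, the 6% ceiling means a hypothetical platform with €50 billion in global annual revenue could face a penalty of up to €3 billion.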


Regulating Big Tech Responsibility

DSA specifically targets Very Large Online Platforms (VLOPs) — those with over 45 million monthly active users in the EU. So far, 17 platforms and 2 search engines have been officially designated, including Google, Meta, X, and TikTok.

According to EU Internal Market Commissioner Thierry Breton,

“Compliance with the DSA will not only prevent penalties but also strengthen brand value and trust for these companies.”

European Commission President Ursula von der Leyen echoed this, saying:

“The DSA aims to protect children, society, and democracy through strict transparency and accountability rules.”


Enforcement Begins: DSA in Action

When misinformation and violent content spread rapidly across platforms following the Israel-Hamas conflict, the EU launched an official DSA investigation into X, questioning its ability to manage illegal content.

X responded that it had removed or labeled tens of thousands of posts and taken down hundreds of Hamas-linked accounts. Meta also reported deleting 800,000+ pieces of war-related content and establishing a special operations center for rapid content review.

Major platforms are now:

  • Removing recommendation algorithms based on sensitive user data

  • Adding public reporting channels for flagging illegal content

  • Filtering extremist or graphic content more aggressively

These actions are motivated by more than goodwill — DSA violations can trigger massive fines or even temporary bans from operating in the EU.


A Broader Vision: EU’s Digital Rulebook

The DSA is part of the EU’s digital governance trifecta, which also includes:

  • DMA (Digital Markets Act): Prevents anti-competitive practices by “gatekeeper” firms like Alphabet, Amazon, Apple, Meta, ByteDance, and Microsoft

  • DNA (Digital Networks Act): Aims to foster a unified digital market and promote investment and innovation in infrastructure and emerging players

Together, these laws enforce transparency, user protection, and fair competition in the EU digital ecosystem.


What About Korea?

While the EU pushes ahead with strong tech regulation, South Korea has yet to enact a comparable law to hold Big Tech accountable for algorithm transparency or content responsibility.

Civil society groups argue that Korea should move toward a comprehensive legislative framework, especially as:

  • Big Tech dominance threatens media diversity

  • Small businesses and content creators are increasingly dependent on platform decisions

  • Algorithmic news feeds raise concerns about information control

According to Oh Byung-il, head of Korea’s Progressive Network Center:
“Korea has long prioritized nurturing its domestic tech industry while overlooking critical issues like privacy and fair trade. The EU’s example shows it’s time for Korea to start serious discussions.”


Final Thoughts

From fake news to hate speech, the DSA reflects a growing global demand for platform responsibility. With major players like X, Meta, and TikTok scrambling to comply, it’s clear that user safety and algorithmic transparency are no longer optional.

In Korea and beyond, it’s time for governments and platforms alike to acknowledge their role in protecting the digital public — and for brands to ask hard questions about where their ads appear and what values they may be unintentionally endorsing.

AdTech

BrandSafety

DigitalAdvertising

TrustAndSafety

2023. 11. 1.

When Advertising Backfires: How Brands Lose Trust by Paying for the Wrong Placements

One poorly placed ad can damage years of brand trust — just ask Applebee’s.
Discover why brand safety matters more than ever in today’s volatile media landscape.


Are Video Ads Building Brand Value — or Destroying It?

Most of us are familiar with the Russia-Ukraine war, which began on February 24, 2022, when the Russian president announced a "special military operation" and invaded Ukraine — an event that continues to unfold today.

But what does war have to do with advertising?

Source: Wikipedia

In the early hours of the invasion, Applebee’s, a casual American dining chain, found itself at the center of a global controversy. During a CNN broadcast covering air raid sirens over Kyiv, Applebee’s cheerful cowboy-themed commercial aired simultaneously in a split-screen format known as squeezeback, where ads are shown alongside live news to increase viewability.

The result? One half of the screen displayed a warzone, the other half showcased a light-hearted ad.
Viewers were shocked.

Source: CNN/Screenshot

The ad immediately went viral on social media, with many accusing Applebee’s of insensitivity or even indirectly sponsoring war. Despite issuing a prompt statement expressing concern about the war and disappointment in the broadcaster, the damage was done.
A significant investment in premium ad space ended up deeply harming the brand.


Chapter 1:
What Is Your Brand Really Associated With?

Think of a brand — now think of the model or celebrity associated with it.
For many in Korea, the pairing of Gong Yoo and KANU coffee is an iconic example. Since their campaign began in 2011, the brand and spokesperson have grown together in public perception.

That’s why brands carefully select models who align with their values and are unlikely to stir controversy.

But how much effort goes into choosing where the ad will appear?

If Applebee’s had used a high-profile celebrity in that cowboy ad, it’s likely that individual would also have faced backlash. Yet few brands spend even one-tenth of the time choosing ad placements as they do vetting their spokespeople.

Source: CHEQ, Magna & IPG Media Lab (2018)


The Data Is Clear

According to research by CHEQ, Magna, and IPG Media Lab (2018):

  • Purchase intent drops by half when ads appear near harmful content

  • Brand recommendation likelihood drops 50%

Inappropriate ad placements severely damage trust, especially when they’re linked to national tragedies, hate content, or political extremism. For public companies, such reputational risks can even impact quarterly earnings and stock prices.

As more shocking and controversial content floods platforms to chase views and revenue, the risks for brands grow — and fast.


Chapter 2:
Rethinking Priorities in Advertising

What KPIs do marketers track for video campaigns?

CPV? CPM? VTR? Conversion rate?
All are valid metrics — but here's a more important question:

If you walked into your CEO’s office and asked,
“Which matters more — protecting the brand’s value and stock price, or improving media performance KPIs?”

What do you think the answer would be?

Most CEOs would agree: brand integrity always comes first.

Advertising exists to strengthen brand image and ultimately drive growth. But ironically, it can undermine brand equity when not managed with care. Marketers who focus solely on performance numbers may unknowingly put the brand at risk.


The Call to Action:
Make Brand Safety Everyone’s Business

It's time for brands to:

  • Establish robust processes for ad placement review

  • Monitor the content context of their media buys

  • Treat brand safety as a company-wide priority, not just a marketing concern

Digital content is becoming increasingly toxic, extreme, and monetized, and platforms aren’t always transparent. Marketers must now proactively assess the environments where their ads appear — and recognize that it’s no longer just about ROI.

Brand safety isn’t optional. It’s a strategic imperative.
And protecting your brand starts with knowing where your message lives.

BrandSafety

2023. 1. 30.

PYLER: Solving Challenges in the Digital Advertising Market with AI

PYLER's AiD safeguards brands in the risky world of video advertising with AI-powered contextual targeting.
Ranked 2nd globally at CVPR 2022, PYLER proves its cutting-edge capabilities in video understanding AI.


Launching AiD – Protecting Brand Safety in the Era of Video Advertising

PYLER addresses the growing brand safety concerns in the digital video advertising space with world-class Video Understanding AI, offering contextual video marketing solutions tailored for a new era.

In 2022, PYLER proved its technological prowess at CVPR, the world’s top AI and computer vision conference, by achieving outstanding results alongside global tech giants such as Intel, Tencent, and ByteDance. Through proprietary computer vision technologies and a brand safety algorithm built on IAB (Interactive Advertising Bureau) standards, Pyler has emerged as a highly competitive player.

Recognizing the seriousness of brand safety issues in video advertising, PYLER has launched AiD, an AI-powered solution to tackle these challenges head-on. As a leader in brand safety and contextual targeting in Korea, PYLER aims to raise industry awareness and restore an advertising environment where great ads are matched with trustworthy content.


Chapter 1:
Ranked 2nd Globally at CVPR 2022

PYLER was invited as a finalist to CVPR 2022, the world's largest conference on computer vision and pattern recognition, and proudly secured 2nd place in Track 3 of the LOVEU (Long-form Video Understanding) Workshop.

This challenge tested how effectively AI models could learn from instructional videos and scripts, and guide users through step-by-step tasks. In simple terms, the AI was required to interpret tutorial videos and then provide helpful answers when a user asked questions — similar to testing how well the AI "understands" the manual.

The task was particularly challenging, and PYLER’s multimodal model—which integrates video, image, and text data—was praised for its ability to align complex questions with context and deliver user-relevant responses in sequential steps.

Dongchan Park (CTO) and Sangwook Park (ML Lead) of PYLER’s AI Context Lab shared:

“We competed against leading global corporations and research institutions with significantly fewer resources. Achieving 2nd place was meaningful—especially since we ranked 1st in Recall@3, one of the key evaluation metrics.”


Chapter 2:
AiD – Rebuilding Trust in Video Advertising

Brand safety concerns in platforms like YouTube have become severe across global markets, including South Korea. According to Pixability, one-third of total ad spend in 2021 was unintentionally spent on harmful or inappropriate content.

In one notable incident, KOBACO (Korea's only public ad agency) placed public service announcements on adult-themed YouTube channels, sparking controversy. Even major Korean corporations are often unaware of the exact content their ads are appearing alongside.

PYLER identified this market inefficiency—where 16% to 37% of ad spend is wasted on brand-damaging content—and created AiD: a 24/7 AI-powered guardian that continuously analyzes video content, flags risky categories, and blocks harmful ad placements.

Source: 2021 Pixability

While global advertisers now prioritize brand safety even above performance, many Korean advertisers remain uncertain about how to address the issue. PYLER hopes AiD will help normalize the ad ecosystem by giving advertisers peace of mind.

In the long run, AiD also aligns with privacy-first trends. It leverages contextual targeting without relying on third-party cookies, delivering efficient and effective ad performance without compromising user privacy.

BrandSafety

ContextualTargeting

VideoUnderstanding

2023. 1. 30.

© 2025 PYLER. All rights reserved.

pylerbiz@pyler.tech
