BrandSafety · TrustAndSafety · AdTech · DigitalAdvertising

Harmful Content Blocking: From Optional to Default

January 29, 2024

From Twitch exits to Meta lawsuits, platforms are making harmful content blocking the new default.
As user expectations rise, brands must take an active role in ensuring safer digital ad environments.


In today’s digital world, where content creation and consumption are open to everyone, harmful content has become increasingly prevalent. With monetization opportunities expanding, creators are incentivized to produce content that is more provocative, graphic, or shocking — often at the cost of viewer safety.


Dopamine, Digital Addiction, and Content Extremes

The 2021 book “Dopamine Nation” highlights how our brains are becoming increasingly vulnerable to overstimulation in the digital age:

“We now live in a world of abundance, not scarcity. Dopamine triggers are everywhere: gambling, shopping, explicit messages, Instagram, YouTube, TikTok, Facebook... Digital platforms are essentially delivering dopamine in real time — like a modern IV drip to the brain.”
Source: Book description from Kyobo Bookstore

This overstimulation fosters a craving for more intense content, driving creators to push boundaries. However, the tide is turning. Users and tech platforms alike are beginning to acknowledge the damaging effects of harmful content — leading to a wave of new policies and technologies aimed at filtering and blocking such material.


Case Study 1: Chzzk – Building a Safer Alternative to Twitch

With Twitch’s exit from the Korean market, platforms like Chzzk (backed by Naver) and AfreecaTV are competing to fill the void — not just with features, but with stronger content safety standards.

Chzzk leverages Naver’s AI-based CLOVA GreenEye technology to:

  • Filter harmful video content in real time (see the sketch below)

  • Operate a 24/7 three-shift human monitoring team

  • Enforce stricter terms of service, including bans on creators involved in criminal activity, hate speech, or misinformation

These updates reflect broader societal expectations. Creators who spread hate speech or promote discriminatory ideologies can now be denied streaming contracts, even when their involvement is indirect.

Source: Naver
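
Naver has not published how CLOVA GreenEye works internally, so the sketch below is only a rough illustration of the general pattern Chzzk describes: an AI model scores incoming video frames, clear violations are blocked automatically, and borderline cases are routed to the 24/7 human monitoring team. All names and thresholds here are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    HUMAN_REVIEW = "human_review"

@dataclass
class FrameScore:
    stream_id: str
    harm_probability: float  # hypothetical score from a vision model, 0.0-1.0

# Hypothetical thresholds; a real system would tune these per harm category.
BLOCK_THRESHOLD = 0.90
REVIEW_THRESHOLD = 0.60

def moderate_frame(score: FrameScore) -> Verdict:
    """Auto-block clear violations; escalate borderline frames to humans."""
    if score.harm_probability >= BLOCK_THRESHOLD:
        return Verdict.BLOCK
    if score.harm_probability >= REVIEW_THRESHOLD:
        return Verdict.HUMAN_REVIEW
    return Verdict.ALLOW

# A borderline frame goes to the human review queue rather than being
# silently allowed or wrongly blocked.
print(moderate_frame(FrameScore("stream-123", 0.72)))  # Verdict.HUMAN_REVIEW
```

The point of a two-threshold design like this is that neither pure automation nor pure human review scales on its own: the model absorbs the volume, while humans handle the ambiguous middle.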

Meanwhile, AfreecaTV employs a dedicated moderation team of more than 100 people and has developed proprietary AI filtering tools, including:

  • TaegwonS: Real-time detection of explicit content

  • TaegwonA: Monitoring illegal promotions or offensive keywords in text and images

Source: ZUM News


Case Study 2: Meta’s Response to Harmful Content Toward Minors

Under legal pressure from U.S. states, Meta (Facebook and Instagram) announced new safety measures to protect minors from harmful content, including:

  • Automatic content restrictions for self-harm, violence, and eating disorders

  • Age-specific content filters

  • Blocking explicit posts from appearing, even if shared by friends or followed accounts

  • Allowing recovery-related content (e.g., eating disorder recovery) while hiding harmful variations (a carve-out sketched below)

These changes followed lawsuits filed by 41 U.S. state governments, claiming Meta failed to address the impact of addictive content and child exploitation. One case revealed over 100,000 children had received unsolicited explicit content via Meta platforms.

Sources: Digital Times, Yonhap News, The Guardian
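
Meta’s actual ranking and filtering systems are proprietary; the fragment below is only a hypothetical illustration of the carve-out described above, where recovery-oriented posts stay visible to teens while other content in restricted categories is hidden. The labels are placeholders.

```python
# Hypothetical content labels; a real system would use model-assigned taxonomies.
RESTRICTED_FOR_MINORS = {"self_harm", "graphic_violence", "eating_disorder"}
RECOVERY_EXEMPT = {"self_harm_recovery", "eating_disorder_recovery"}

def visible_to_minor(content_labels: set[str]) -> bool:
    """Hide restricted categories from minors, but keep recovery/support
    content visible even though its topic overlaps a restricted category."""
    if content_labels & RECOVERY_EXEMPT:
        return True
    return not (content_labels & RESTRICTED_FOR_MINORS)

assert visible_to_minor({"eating_disorder_recovery"}) is True
assert visible_to_minor({"eating_disorder"}) is False
assert visible_to_minor({"cooking"}) is True
```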


A New Standard for Brands and Platforms

The proactive measures taken by Chzzk, AfreecaTV, and Meta signal a clear industry shift: content safety is no longer optional — it’s the default.

For brands, this shift brings new responsibilities. As digital ads appear across platforms, businesses must ensure they’re not inadvertently funding or appearing alongside harmful content. Consumers increasingly expect brands to prioritize safety, integrity, and social responsibility.

To succeed in this environment, companies must:

  • Vet platforms for content safety tools and features

  • Demand transparency in ad placements (see the sketch after this list)

  • Align brand values with the broader goal of a healthier digital ecosystem
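
As a concrete illustration of the first two points, a brand or its agency can screen candidate placements against its own tolerance levels before bidding. The sketch below is hypothetical: the category names, scores, and thresholds are placeholders, and in practice such checks run inside a DSP pipeline using signals from a verification vendor.

```python
from dataclasses import dataclass

@dataclass
class Placement:
    url: str
    category_scores: dict[str, float]  # hypothetical harm scores, 0.0-1.0

# Categories this brand refuses to fund, with the maximum score it tolerates.
BRAND_SAFETY_LIMITS = {
    "explicit": 0.10,
    "violence": 0.20,
    "hate_speech": 0.05,
}

def is_safe_placement(p: Placement) -> bool:
    """Reject any placement that exceeds the brand's tolerance in any
    category, logging the reason to support placement transparency."""
    for category, limit in BRAND_SAFETY_LIMITS.items():
        score = p.category_scores.get(category, 0.0)
        if score > limit:
            print(f"skip {p.url}: {category}={score:.2f} > limit {limit:.2f}")
            return False
    return True

candidate = Placement("https://example.com/stream/123",
                      {"explicit": 0.02, "violence": 0.35, "hate_speech": 0.01})
if is_safe_placement(candidate):
    print("bid on", candidate.url)
```

Logging why a placement was rejected matters as much as the rejection itself: it is the audit trail that makes the transparency demanded above verifiable.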

As harmful content becomes more pervasive — and detection more sophisticated — brands and platforms must work together to create a safer, more trusted digital space for all users.

© 2025 PYLER. All rights reserved.

pylerbiz@pyler.tech | 19th floor, 396, Seocho-daero, Seocho-gu, Seoul, Republic of Korea (06619)
