Case Study
When existing tools no longer match growing needs
Rebuilding moderation under pressure at COOL Media
Challenge
COOL Media's growth brought complexity. As the company expanded its publisher network, the moderation team had to support more comment streams, each with its own rules and risk profile. Quality assurance became harder to sustain as volume increased.
Solution
The team replaced a basic in-house dashboard with a system built around adjustable thresholds, client-specific rules, and policy-based queues. Automation handles volume, while human moderators make the final decisions.
Results
Most comments are processed automatically, allowing moderators to focus on higher-risk edge cases. The team maintained moderation quality with fewer moderators, while keeping engagement features active for publishers.
Comments as both a growth lever and a moderation risk
COOL Media works with news and media publishers to increase the visibility of their content, in part through commenting modules embedded under articles.
For publishers, comment sections are more than a community feature. They add fresh content, encourage repeat visits, and drive traffic.
At the same time, open commenting introduces risk. Spam, personal attacks, and heated exchanges can quickly damage credibility if left unchecked. For that reason, moderation is offered as an integrated part of the commenting product.
Each publisher follows its own editorial standards, requiring a moderation setup that can adapt continuously.
This is the context in which Yuliia Matsiuk leads the moderation operation.
Building moderation as an operational function
As Moderation Team Lead, Yuliia oversees how moderation policies are defined with clients, translated into rules, and applied consistently by the team.
Depending on volume and demand, the team consists of two to six moderators. All moderation work happens inside a single system. Moderation was always part of the product offering, but the way it was managed evolved over time.
When a basic dashboard stopped being enough
Moderation initially ran through an internal dashboard designed for simpler workflows, such as approving or removing comments, banning users, and viewing limited user history.
When context was needed for evaluation, moderators had to open the live website, locate the comment, read the surrounding thread, and return to take action. Policies were applied from memory and varied per client, which made consistency hard to maintain as volume grew.
Yuliia: “If we needed context, we had to go to the website, find the comment, and read the whole thread. That was very time-consuming.”
The team experimented with automated moderation via a third-party API. While it handled volume, accuracy was unreliable. “To protect quality, we had to manually review comments that had already been automatically moderated,” Yuliia says.
At that point, the challenge was maintaining consistent quality as volume increased.
“It became clear that our existing setup wouldn’t scale with the level of complexity we were handling. We had to change something, because we couldn’t continue working the way we did.”
Finding the right fit
When evaluating alternatives, Yuliia focused on what moderation needed to support on a day-to-day basis. Two considerations were most important:
Client policies changed frequently. After incidents, publishers asked for stricter enforcement. After user complaints, they asked for more leniency. Rules needed to be adjusted quickly, without creating new risk. Yuliia: “We have to create a lot of rules for different websites, and those websites can change things at random moments. You need to be able to go in and quickly adjust something.”
Moderators also needed to be supported during the transition. The new system had to be understandable without extensive training or technical help.
In short, rule changes had to be fast, reversible, and safe to make by moderators themselves.
Adapting to ever-changing policies
During implementation, Yuliia learned that changing a rule was quick. Deciding what to change took more time.
The team started with stricter settings and loosened them gradually, based on what actually appeared in the review queue.
“It’s impossible to get this right from the start. We tested it, looked at what was being flagged, and then adjusted again,” says Yuliia.
Adoption: initial hesitation, then relief
For moderators, the biggest concern at first was the change itself. The old dashboard was simple and limited. The new system exposed more information and options, which felt overwhelming initially.
That hesitation faded within days. Once moderators became familiar with the workflow, the benefits were clear: fewer external links, less context switching, and better visibility into user behavior and history.
Client feedback in practice
Client satisfaction at COOL Media is tracked informally, through incoming feedback.
So far, Yuliia has received no negative feedback about moderation quality. “Honestly, I haven’t heard anything negative about moderation. When feedback does come up, it’s aimed at individual decisions rather than systemic problems.”
The role of humans in automation
Yuliia is clear about the limits of AI moderation. Context matters, and automated systems still struggle to infer intent reliably. A statement directed at a public figure may be acceptable, while the same wording aimed at another commenter may constitute harassment.
Her preference is a system where automation handles volume and prioritization, but humans retain control over behavior, thresholds, and final decisions. “The system does a lot of the work,” she says, “but people need to control how it behaves.”
The goal is not to eliminate human judgment, but to spend it where it matters most: protecting publisher reputation while increasing user engagement.

Highlighted Features
- Text Moderation
- Review Queues
- User Reports
- Automatic Translations
Lasso does a lot of the work for you, but you fully control how it behaves.
Yuliia Matsiuk
Moderation Lead - COOL Media
