Case Study
Ensuring the safety of 45 million monthly players
How CrazyGames enabled player identity at scale, without increasing moderation overhead
Challenge
CrazyGames wanted to introduce custom usernames for its 45 million monthly players while protecting a young audience from abuse and keeping the product team's moderation workload low.
Solution
The product team implemented AI-assisted content moderation for usernames and for user feedback to game developers. The system combines automated decisions with clear review flows, so most content is handled without human intervention while oversight stays in place where it matters.
Results
Moderation became a lightweight, sustainable part of product operations. The team retained control over player safety, prevented abuse from reaching users, and kept daily moderation time low enough to support future social features.
When scale turns small features into big decisions
CrazyGames is a global casual gaming platform where players can instantly access thousands of games through the browser or app. With around 45 million monthly players, the platform operates at a scale where even small product decisions can have outsized consequences.
As CrazyGames grew, so did a familiar product tension: how do you add social features without opening the door to abuse?
For CrazyGames, that question became concrete when they decided to introduce custom usernames.
Giving players an online identity, safely
When Jonas Boonen joined CrazyGames as Head of Product (now VP Product), players couldn’t choose their own usernames. Accounts were assigned names from a predefined list. It was safe, but limiting.
Jonas explains: “Players couldn’t choose their own name, and that made them feel less connected to the platform.”
At the same time, he knew what would happen once users could type freely.
“If you open a door, it gets kicked in quickly,” he says. “We have a relatively young audience, with many under-18 users. Safety is an extra concern.”
The risk wasn’t abstract. Bad experiences cause users to leave, and at CrazyGames’ scale that quickly becomes a reputation and PR risk.
Instead of shipping first and reacting later, the team decided to put moderation in place before opening up the identity feature.
Why moderation became a product responsibility
The CrazyGames product team owns the player-facing platform: profiles, friends, usernames, and discovery. Most games themselves are built by external developers.
That means the platform has influence, but not full control. Guardrails need to live at the platform level.
Custom usernames touched multiple surfaces:
- User profiles
- Friend interactions
- Names shown inside games
Previously, some games allowed players to enter arbitrary in-game names. That created obvious issues.
With the new setup, CrazyGames required games to use the platform username, the one that would be moderated.
This closed a major loophole and made safety enforceable across the ecosystem.
Choosing a moderation solution
CrazyGames evaluated several moderation tools. What mattered in their decision was whether the system fit real product workflows.
“Moderation can’t be fully automated,” Jonas says. “You have to assume some things will be reviewed manually. That process has to work really smoothly.”
The product team itself picked up moderation, so usability and efficient handling of incoming content were important. Moderation isn't a one-off integration; it's something teams live with. Jonas evaluated several established moderation platforms, looking at:
- Moderation beyond raw AI scores
- Clear queues and review flows
- Rule-based logic, not just probabilities
- Visibility into users and behavior over time
Pricing predictability also played an important role. CrazyGames is cautious of tools that are easy to adopt early but become expensive once deeply embedded.
Early reality: when the product team had to step in
At first, moderation relied heavily on profanity and toxicity scores. That approach worked as a basic filter, but its limits showed quickly.
“Some things score 20% toxic and aren’t toxic at all,” Jonas explains. “Others score 15% and feel very toxic.”
To stay safe, anything in a broad “gray zone” was reviewed manually.
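As a rough illustration, this stage behaves like two cutoffs around a gray zone: clearly safe content passes, clearly toxic content is blocked, and everything in between goes to a human. The thresholds below are hypothetical, not CrazyGames' actual configuration:

```ts
// Hypothetical score-based routing: scores below LOW are approved, scores
// above HIGH are rejected, and everything in between lands in a manual queue.
type Decision = "approve" | "reject" | "manual_review";

const LOW = 0.10;  // assumed cutoff, not CrazyGames' real value
const HIGH = 0.80; // assumed cutoff, not CrazyGames' real value

function routeByScore(toxicityScore: number): Decision {
  if (toxicityScore < LOW) return "approve";
  if (toxicityScore > HIGH) return "reject";
  return "manual_review"; // the broad gray zone Jonas describes
}
```

The trouble, as the quotes above suggest, is that raw scores put a lot of content into that middle band, and every item in it costs review time.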
That kept abuse out, but it introduced a new constraint: time.
Moderation wasn’t handled by a separate operations team. Being cautious meant reviewing more content, not less.
The system worked, but it didn’t scale comfortably. The feature was live, safety was under control, yet the review workload kept growing.
This is where many teams get stuck: moderation technically works, but only by consuming more product time than feels sustainable.
Reducing review time with AI-assisted moderation
As Lasso’s AI capabilities evolved, CrazyGames adjusted its setup.
Instead of sending large volumes of borderline content to humans, flagged items are now routed through an AI moderator first. Humans step in only when the AI is uncertain.
“The AI does about 99% of the work now,” Jonas says. “Sometimes we even forget to look at our queues for a while.”
Jonas is confident that Lasso handles the moderation queues correctly; they have become quieter and more predictable.
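A minimal sketch of this escalation pattern, assuming a hypothetical aiModerator function that returns a verdict with a confidence value; the names, shapes, and threshold below are illustrative, not Lasso's actual API:

```ts
// Illustrative escalation flow: an AI moderator handles flagged content first,
// and only low-confidence verdicts are escalated to the human review queue.
interface AiVerdict {
  decision: "approve" | "reject";
  confidence: number; // 0..1
}

const CONFIDENCE_FLOOR = 0.9; // assumed threshold for acting on a verdict

async function moderateFlagged(
  content: string,
  aiModerator: (c: string) => Promise<AiVerdict>,
  enqueueForHuman: (c: string) => Promise<void>,
): Promise<"approve" | "reject" | "escalated"> {
  const verdict = await aiModerator(content);
  if (verdict.confidence >= CONFIDENCE_FLOOR) {
    return verdict.decision; // the AI resolves the vast majority of items
  }
  await enqueueForHuman(content); // humans step in only when the AI is uncertain
  return "escalated";
}
```

The key difference from the earlier setup is that the gray zone no longer maps directly to human work; it maps to a second, more capable automated pass.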
How CrazyGames runs moderation today
Today, moderation is intentionally lightweight.
Usernames are handled through different flows depending on risk:
- Standard usernames are mostly handled automatically
- Repeat offenders receive stricter scrutiny
Rules are adjusted mainly after platform changes or when workload drifts.
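A minimal sketch of what such tiered, rule-based routing could look like; the field names and logic are hypothetical, not CrazyGames' actual data model:

```ts
// Hypothetical tiering: a username from a repeat offender gets stricter
// handling than one from a user with a clean history.
interface UserHistory {
  priorViolations: number;
}

function usernameFlow(history: UserHistory): "standard_auto" | "strict_review" {
  // Repeat offenders bypass the automatic path and go straight to review.
  return history.priorViolations > 0 ? "strict_review" : "standard_auto";
}
```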
Moderation ownership rotates within the product team and takes roughly 15 minutes a day.
CrazyGames also uses moderation for user feedback sent to game developers, tagging spam versus meaningful input so developers can focus on what matters.
Adding user-generated content means accepting new product risk. If you open the door, you need to be prepared for what comes through it, and for the time it takes to manage it properly. As the platform continues to evolve and explore richer social features, moderation is no longer a blocker; it’s a foundation the product team can build on.

Highlighted Features
- AI Moderators
- Review Queues
- User Reports
- Username Moderation
AI moderators reduced manual review from high volume to just 15 minutes per day, a 95% efficiency gain
Jonas Boonen
VP of Product, CrazyGames
