Content Moderation · Feb 16, 2026

What is Community Sift? Features & switch considerations for the phase-out

Ruud Visser

Founder & CEO

Microsoft acquired Community Sift from Two Hat in 2021 and turned it into the moderation backbone for Xbox, EA, and Supercell.

Now they're shutting it down.

TL;DR

  • Community Sift is Microsoft's AI-powered content moderation platform, purpose-built for gaming and online communities. It classifies text, images, video, and usernames in real time.
  • It's sunsetting in 2026. Microsoft has communicated this to existing customers. If you're on the platform, you should be planning your migration now.
  • The technology is strong: 100 billion+ interactions classified monthly, 22 languages, advanced evasion detection, grooming detection. Your replacement needs to match these capabilities.
  • You have options. Several platforms can replace Community Sift depending on your use case. We've compared them in our alternatives guide.

Free Consultation

Need help working through the feature gaps?

Book a free 30-minute call with our migration team. Bring your CS integration docs and any custom classifier logic you've built. We'll map your setup to Lasso's equivalents and flag anything that needs special handling.

Schedule your free migration call

Community Sift in 30 seconds

Community Sift is an AI-powered content moderation platform that combines machine learning with human review workflows to keep online communities safe. It was built for high-volume, real-time environments: the kind where thousands of chat messages fly by every second and a single unmoderated slur can poison the experience for everyone.

The platform covers four content types: text messages, images, video, and usernames. For each, it runs AI classification, assigns severity scores, and takes automated action (allow, flag, or remove) based on rules you configure. Content that falls in a gray area routes to human moderators through a built-in review queue.

Quick Reference

  • Founded: 2015 (Two Hat Security, Kelowna, BC)
  • Acquired: 2021 (Microsoft)
  • Monthly volume: 100B+ interactions
  • Languages: 22+ (native models per language)
  • Customers: Xbox, Electronic Arts, Supercell, Kabam, Flip
  • Status: Sunsetting 2026

This guide covers everything you need to know: how the technology works, the Two Hat-to-Microsoft backstory, what's happening with the sunset, and, most importantly, what to do now.

Whether you're a trust and safety lead evaluating next steps or an engineering manager mapping out a migration timeline, this is the resource you need.

How Community Sift works

At its core, Community Sift follows a three-stage pipeline: ingest, automate, and escalate. Understanding this flow matters for migration planning, because your replacement needs to cover each stage.

Stage 1: Content ingestion

Your platform sends content to Community Sift via its REST API. Every text message, image, video clip, or username gets submitted as an API call. Metadata about the user (account age, history, role) travels alongside the content to inform moderation decisions.
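To make the ingestion step concrete, here is a minimal sketch of bundling a piece of content with user metadata before the API call. The field names and structure are illustrative assumptions, not Community Sift's actual request schema; check the real integration docs for your replacement platform.

```python
import json

def build_submission(content_type, body, user):
    """Bundle content with user metadata for a hypothetical moderation API call.

    content_type: "text", "image", "video", or "username" (illustrative values).
    """
    return {
        "type": content_type,
        "content": body,
        "user": {
            "id": user["id"],
            # Metadata like account age and history informs moderation decisions.
            "account_age_days": user.get("account_age_days", 0),
            "role": user.get("role", "member"),
            "prior_flags": user.get("prior_flags", 0),
        },
    }

payload = build_submission("text", "gg wp everyone", {"id": "u123", "account_age_days": 2})
print(json.dumps(payload, indent=2))
```

The point of the sketch: every content type travels through the same envelope, so your audit (Step 1 below) should capture exactly which metadata fields your integration sends.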

Stage 2: AI classification

The platform runs multiple AI models in parallel. Natural language processing analyzes text for toxicity, hate speech, harassment, profanity, and sexual content. Community Sift's "Unnatural Language Processing" engine handles leet speak (h4t3), intentional misspellings, Unicode tricks, emoji substitutions, and evolving slang. These are the evasion tactics that make basic word filters useless. Image and video scanning uses computer vision to detect nudity, violence, gore, and other visual violations. Contextual analysis considers user behavior patterns, account history, and conversation context to assess severity.

Each piece of content receives a severity score and is automatically allowed, flagged for review, or removed, all before it appears in your community.
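The severity-score-to-action mapping can be pictured as a simple threshold rule. The thresholds here are made-up placeholders; in practice you tune them per content category in the dashboard.

```python
def decide(severity, allow_below=0.3, remove_above=0.8):
    """Map a severity score in [0, 1] to an automated action.

    Threshold values are illustrative, not Community Sift defaults.
    """
    if severity < allow_below:
        return "allow"
    if severity > remove_above:
        return "remove"
    # Gray area: route to the human review queue.
    return "flag"
```

When you migrate, these thresholds are precisely the numbers you need to export and recalibrate, because a new platform's models will score the same content differently.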

Stage 3: Human escalation

Content that falls between clear-cut categories gets routed to human moderators through Community Sift's review queue. Moderators see the content, the AI's classification, the user's history, and the relevant policy. Their actions are relayed back to your platform via webhooks in real time.

The result: AI handles the bulk of volume (Microsoft reports an 88% reduction in moderator workload for enterprise customers), while human judgment handles the nuanced cases.

How It Works

The Community Sift moderation pipeline:

1. Content ingestion via API. Your platform sends text, images, video, and usernames to Community Sift through a REST API. User metadata (account age, role, history) travels alongside the content.

2. AI classification and automated action. NLP, computer vision, and behavioral models run in parallel. Each item gets a severity score and is automatically allowed, flagged, or removed based on your custom rules.

3. Human review and escalation. Ambiguous content routes to human moderators via a built-in review queue. Moderator decisions are relayed back to your platform through real-time webhooks.

Key features

Understanding what Community Sift does well isn't just about appreciating the platform: it's about knowing what your replacement needs to match. These are the capabilities your migration checklist should cover.

Unnatural Language Processing

Community Sift's signature capability. The engine specifically targets evasion tactics: leet speak, intentional misspellings, Unicode character swaps, emoji sequences used as code, and rapidly evolving slang. If your replacement can't handle "h4t3" and "d@mn" as well as "hate" and "damn," you'll see a spike in missed content on day one.
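A toy character-substitution normalizer shows the simplest slice of this problem. Real evasion detection goes far beyond a lookup table (Unicode confusables, emoji codes, evolving slang), so treat this only as a way to test whether a candidate platform handles the basics.

```python
# Common leet-speak substitutions; deliberately incomplete.
LEET = str.maketrans({"4": "a", "3": "e", "1": "i", "0": "o", "@": "a", "$": "s", "7": "t"})

def normalize(text):
    """Fold obvious leet-speak variants back to plain lowercase text."""
    return text.lower().translate(LEET)

normalize("h4t3")  # -> "hate"
normalize("d@mn")  # -> "damn"
```

Feeding strings like these through a candidate platform's test console is a quick way to probe for the day-one spike in missed content described above.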

Multi-language support

22 premium languages natively: English, Arabic, Chinese (Traditional and Simplified), Dutch, Finnish, French, German, Indonesian, Italian, Japanese, Korean, Polish, Portuguese, Romanian, Russian, Spanish, Thai, Turkish, Vietnamese, Tagalog/Filipino, and Hindi. These aren't translation layers. Each language has its own detection models. Make sure your replacement covers every language you currently use.

Image and video moderation

Computer vision for nudity, violence, gore, and other visual policy violations. Adjustable risk sliders per content category. An image-blurring feature to protect human moderators from repeated graphic content exposure.

Custom moderation rules

Rules are collections of conditions paired with actions, for example, "flag any user who signed up in the last 3 hours AND has posted 5+ messages flagged by AI." Conditions target user attributes (account age, country, signup method, email domain, report count) and content attributes (text, AI classification, length). Rules update in the dashboard without code deployments. Export and document all your custom rules before migrating.
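The rule from the text can be sketched as a plain condition-plus-action function. The field names are hypothetical, not Community Sift's actual rule schema; the structure is what matters for your export.

```python
def new_account_spam_rule(user):
    """Flag users who signed up in the last 3 hours AND have 5+ AI-flagged messages.

    Field names ("account_age_hours", "ai_flagged_messages") are illustrative.
    Returns an action string, or None if the rule does not match.
    """
    if user["account_age_hours"] < 3 and user["ai_flagged_messages"] >= 5:
        return "flag"
    return None

new_account_spam_rule({"account_age_hours": 1, "ai_flagged_messages": 7})  # -> "flag"
```

Writing each exported rule down in this condition/action shape makes it straightforward to check, one by one, whether a replacement's rule engine can express it.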

Grooming and predator detection

Purpose-built models for identifying grooming behavior. These are conversational patterns indicating an adult is attempting to build trust with a minor. This was a core focus of the original Two Hat team. If you serve younger audiences, make sure your replacement has comparable depth here.

Dashboard and analytics

Review queues, content and user overviews, policy management, user reports, action logs, and analytics.

Capability | Community Sift coverage | Migration priority
Text toxicity / hate speech | Real-time NLP across 22 languages | Critical
Evasion detection (leet speak, Unicode, slang) | Advanced ("Unnatural Language Processing") | Critical
Image / video moderation | Computer vision with adjustable risk sliders | High
Grooming / predator detection | Purpose-built behavioral models | High
Custom moderation rules | Condition + action rules, dashboard-editable | High
Username moderation | NLP scanning at registration | High
Multi-language support | 22 premium languages, native models | Critical (if multilingual)
Human review queue | Built-in dashboard with moderator tools | High

The story: From Two Hat to Microsoft

Understanding Community Sift's origins helps explain both its strengths and the current situation.

Two Hat Security was founded with a specific mission: make the internet safer for kids. The founding team focused on the hardest problems in online safety: detecting grooming, identifying predatory behavior patterns, and understanding the creative ways people circumvent moderation systems. That child-safety DNA is embedded in Community Sift's architecture.

Two Hat built Community Sift as its flagship product, targeting the gaming industry. Gaming studios needed real-time moderation that could handle the speed and creativity of player chat, and Community Sift's evasion-detection capabilities were a genuine differentiator. The company signed major gaming studios: Supercell, Kabam, Electronic Arts, and others.

In 2021, Microsoft acquired Two Hat, integrating the team and technology into its gaming division. Community Sift moved to Azure infrastructure, gained enterprise support capabilities, and integrated deeply with Xbox's moderation workflows. The acquisition brought scale: 100 billion interactions per month, 22+ language support, and the resources of one of the largest technology companies in the world.

Recently, Microsoft announced to its clients that current contracts would be honored through their scheduled expiration, after which they will not be renewed.

Timeline

The Community Sift story:

  • 2015: Two Hat Security founded in Kelowna, BC
  • 2017–20: Gaming traction; Supercell, Kabam, and EA sign on
  • 2021: Microsoft acquires Two Hat
  • At scale: 100B+ monthly interactions processed
  • 2026: Sunsetting; phase-out communicated

What's happening to Community Sift

Microsoft has notified existing Community Sift customers that service contracts will not be renewed. This hasn't been announced publicly, but the wind-down is underway.

Here's what you need to know:

The product remains operational during the transition. No hard cutover date has been communicated publicly. Your moderation pipeline continues to work. But the timeline has limits, and starting your migration planning now, even before you've chosen a replacement, gives you options that waiting until the last month won't.

Account teams are the source of truth. Microsoft is managing specific timeline details customer by customer. If you haven't been contacted yet, reach out to your account team directly.

Data and configuration are your most urgent concern. Export your custom rule configurations, language-specific thresholds, severity mappings, and historical moderation analytics now, while you still have uninterrupted access. This documentation becomes your requirements checklist for any replacement platform.

Migration timelines run 2–3 weeks for a smooth transition. Teams that start 20–30 days before cutover can also run parallel testing, recalibrate scoring thresholds, and train moderation teams on the new system.

What this means practically: the question isn't whether to move. It's how much runway you give yourself. For a structured migration process, our step-by-step migration guide covers API mapping, policy migration, parallel testing, and staged cutover.

Community Sift status: sunsetting

Microsoft has notified existing customers that Community Sift is being phased out. The product remains operational during the transition period. Current customers should contact their Microsoft account team for specific timeline details and begin evaluating replacement platforms.

How alternatives compare

The content moderation market has matured significantly since Community Sift launched. Several platforms can match or exceed its capabilities depending on your specific needs.

Platform | Best for | Evasion detection | Real-time | Migration difficulty
Lasso Moderation (migration guide available) | Gaming, platforms, API-first teams | Advanced | Yes | Low
ActiveFence | Enterprise threat intelligence | Advanced | Standard | High
CleanSpeak | Small-to-mid teams, simple setup | Moderate | Yes | Low
Hive Moderation | Visual content moderation | Moderate | Yes | Moderate
GGWP | Gaming behavior analytics | Moderate | Yes | Moderate

For detailed breakdowns, see the full alternatives and replacement guide. We sell moderation software at Lasso, so we're obviously not a neutral party here. But we've built the comparison to include where competitors genuinely beat us.

Comparison guide

Evaluating replacement platforms?

We've built side-by-side comparisons with ActiveFence, CleanSpeak, Hive Moderation, GGWP, and more, including where each competitor beats us and what migration looks like.

See all alternatives

Planning your transition

Whether you've already picked a replacement or you're still evaluating, a solid migration process follows five steps:

Step 1: Audit

Document every custom rule, API integration point, webhook configuration, and language setting in your current Community Sift setup. This becomes your requirements checklist. Teams that start integrating a new tool without fully understanding their current setup tend to discover gaps mid-migration.

Step 2: Evaluate

Test 2–3 replacement platforms against your actual content. Send real examples, including the tricky stuff: evasion attempts, context-dependent content, edge cases in your specific languages. Compare accuracy, latency, and false positive rates against your Community Sift baseline.
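One of the metrics worth computing during evaluation is the false positive rate: the share of genuinely safe items that a system flagged or removed. A minimal sketch, assuming you have each platform's decisions alongside ground-truth labels from your own review:

```python
def false_positive_rate(decisions, labels):
    """Fraction of items labeled "safe" that the system did NOT allow.

    decisions: per-item actions, e.g. "allow", "flag", "remove".
    labels: ground-truth labels, with "safe" marking acceptable content.
    """
    safe_decisions = [d for d, label in zip(decisions, labels) if label == "safe"]
    if not safe_decisions:
        return 0.0
    return sum(d != "allow" for d in safe_decisions) / len(safe_decisions)
```

Run the same labeled sample through each candidate and through your Community Sift baseline, and compare this number alongside accuracy and latency.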

Step 3: Map

For your chosen platform, map every Community Sift API endpoint to its equivalent. Map your custom rules to the new rule engine. Map your moderation team's dashboard workflows to the new interface. Our detailed migration walkthrough covers this step by step.

Step 4: Parallel test

Run both systems on real traffic for 2 weeks. Community Sift stays live and handles production decisions. The new platform runs in shadow mode, classifying the same content without taking action. Compare everything.
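Shadow mode can be sketched as a comparison loop: the legacy system's decision is enforced, the candidate's decision is only recorded, and disagreements are collected for review. The classifiers below are toy stand-ins for the two platforms.

```python
def shadow_compare(items, legacy_classify, candidate_classify):
    """Classify each item with both systems; return items where they disagree."""
    disagreements = []
    for item in items:
        live = legacy_classify(item)       # production decision (enforced)
        shadow = candidate_classify(item)  # new platform (logged only)
        if live != shadow:
            disagreements.append((item, live, shadow))
    return disagreements

# Toy classifiers standing in for Community Sift and a replacement:
legacy = lambda msg: "remove" if "badword" in msg else "allow"
candidate = lambda msg: "remove" if "badword" in msg or "b@dword" in msg else "allow"

diffs = shadow_compare(["hello", "b@dword", "badword"], legacy, candidate)
```

Reviewing the disagreement list by hand is where you find threshold miscalibrations before they reach users.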

Step 5: Cut over

Shift traffic in stages: 10%, then 50%, then 100%. Keep Community Sift warm for 30 days after full cutover in case you need to revert. Then decommission.
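Staged cutover is usually implemented by bucketing users deterministically, so each user stays on one platform for the whole stage rather than bouncing between systems per message. A sketch of one common approach (stable hashing); the function name and scheme are illustrative:

```python
import hashlib

def route_to_new_platform(user_id, rollout_percent):
    """Deterministically assign a user to the new platform.

    Hashing the user ID into buckets 0-99 keeps assignment stable across
    requests; raising rollout_percent (10 -> 50 -> 100) widens the cohort.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < rollout_percent
```

Because the hash is stable, moving from 10% to 50% only adds users; nobody who was already on the new platform reverts mid-rollout.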

Migration guide

Ready to plan your migration?

We've mapped Community Sift's API to Lasso's, built an endpoint-by-endpoint migration reference, and outlined a parallel testing framework so you can switch without downtime.

Read the migration guide

Frequently asked questions

Is Community Sift shutting down?

Yes. Microsoft has communicated to existing customers that Community Sift is being phased out in 2026. The exact timeline may vary by customer and contract. If you haven't received direct communication yet, contact your Microsoft account team for specifics. In the meantime, start documenting your current setup and evaluating replacement platforms.

Is Community Sift the same as Sift?

No. Community Sift (by Microsoft/Two Hat) is a content moderation platform for online communities. Sift (sift.com) is a completely different company providing digital fraud prevention. They're not related. If you're looking for fraud prevention, you want sift.com.

Not what you're looking for?

Community Sift (Microsoft) = content moderation. Sift (sift.com) = fraud prevention. GetSift.ai = a different AI company. Three products, three companies.

What happens to my data when Community Sift shuts down?

This depends on your contract terms with Microsoft. Reach out to your account team to clarify data retention, export options, and timelines. As a general best practice, export your historical moderation data, analytics reports, and custom rule configurations as early as possible. Don't wait until access is restricted.

How long do I have to migrate?

That depends on when Microsoft notified you and what timeline they communicated.

What's the best replacement for Community Sift?

It depends on your use case. Gaming studios needing real-time chat moderation with strong evasion detection want a platform that mirrors Community Sift's core strengths. Teams focused on image/video or broader threat intelligence may prioritize differently. We've mapped it all in our alternatives comparison.

What happened to Two Hat?

Two Hat Security was acquired by Microsoft in 2021. The team was integrated into Microsoft's gaming division, and the "Two Hat" brand was retired. The technology they built became Community Sift under Microsoft's umbrella.

Can I still get support for Community Sift?

During the wind-down period, yes: Microsoft is maintaining support for existing customers. Check with your account team for specifics on support availability and SLA changes as the sunset date approaches.


How Lasso Moderation Can Help

At Lasso, we believe that online moderation technology should be affordable, scalable, and easy to use. Our AI-powered moderation platform allows moderators to manage content more efficiently and at scale, ensuring safer and more positive user experiences. From detecting harmful content to filtering spam, our platform helps businesses maintain control, no matter the size of their community.

Book a demo here.

Want to learn more about Content Moderation?

Learn how a platform like Lasso Moderation can help you with moderating your platform. Book a free call with one of our experts.

Protect your brand and safeguard your user experience.
