X Open-Sourced the Algorithm: Why Reach Is Disappearing and How Phoenix Works (Must Read)
On January 20, 2026, X open-sourced its recommendation algorithm, revealing a system entirely powered by a Grok-based model named Phoenix. The release documents the specific engagement signals, weights, and penalties that now determine content visibility.

On January 20, 2026, the codebase for the X recommendation algorithm was released to the public, confirming a transition to a system completely powered by "Phoenix," a Grok-based AI model designed specifically for social media feeds. This system fundamentally changes how reach is calculated by predicting user engagement probabilities rather than relying on static ranking rules.
The Phoenix Model and Grok Integration
The core of the updated algorithm is the Phoenix model, which analyzes every piece of content posted to the platform. Unlike previous iterations, Phoenix utilizes Large Language Model (LLM) capabilities via Grok to predict the likelihood of a specific user interacting with a post. The system scores content based on the probability of 19 distinct positive actions.
To determine what a user is interested in, the model analyzes the last 128 posts the user has engaged with (liked, replied to, or reposted). This creates a rolling window of context, meaning a user's feed is strictly determined by their most recent behavior. Engaging with specific content types immediately recalibrates the feed to show similar material.
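The rolling 128-post context described above behaves like a fixed-size queue: each new engagement pushes out the oldest one. The sketch below is purely illustrative; the class and method names are hypothetical and the released code defines its own data structures. Only the window size of 128 comes from the documentation.

```python
from collections import deque


class EngagementContext:
    """Illustrative sketch of a rolling window of recently engaged posts.

    Only WINDOW_SIZE = 128 mirrors the documented limit; everything else
    here is a hypothetical stand-in for the real implementation.
    """

    WINDOW_SIZE = 128

    def __init__(self) -> None:
        # deque with maxlen evicts the oldest entry automatically.
        self._window: deque[str] = deque(maxlen=self.WINDOW_SIZE)

    def record_engagement(self, post_id: str) -> None:
        # Each like, reply, or repost pushes the post into the window;
        # once 128 entries exist, the oldest context is dropped.
        self._window.append(post_id)

    def context(self) -> list[str]:
        return list(self._window)
```

Because the window is strictly recency-based, engaging with 130 posts leaves only the last 128 in context, which is why a burst of off-topic engagement recalibrates the feed so quickly.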
Engagement Scoring and Ranking Formulas
Content visibility is determined by a specific formula that aggregates weighted probabilities against potential negative offsets:
Final Score = Σ (weight_i × P(action_i)) + NEGATIVE_SCORES_OFFSET
The algorithm prioritizes 19 positive engagement signals. Among these, dwelling on a post and watching a video longer than a few seconds carry massive weight. The longer a user performs these actions, the higher the content is boosted. The full list of positive signals includes:
- Likes, Replies, Reposts, and Quotes
- Profile Clicks and Follows
- Shares (Standard, via DM, or Copy Link)
- Clicking on photos or quotes
- Video View Duration and Post Dwell Time
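The ranking formula above can be sketched as a weighted sum over predicted action probabilities plus a non-positive offset from negative signals. The weights below are invented placeholders for illustration; the actual per-signal weights ship in the released code and are not reproduced here.

```python
# Hypothetical weights for a subset of the 19 positive signals.
# The real values live in the open-sourced code; these are placeholders.
POSITIVE_WEIGHTS = {
    "like": 1.0,
    "reply": 2.0,
    "repost": 1.5,
    "dwell": 10.0,       # dwell time is described as heavily weighted
    "video_view": 10.0,  # as is video view duration
}


def final_score(probabilities: dict[str, float],
                negative_scores_offset: float = 0.0) -> float:
    """Final Score = sum(weight_i * P(action_i)) + NEGATIVE_SCORES_OFFSET.

    `probabilities` maps action names to the model's predicted
    likelihood of that user taking the action; the offset is <= 0.
    """
    positive = sum(POSITIVE_WEIGHTS[action] * p
                   for action, p in probabilities.items()
                   if action in POSITIVE_WEIGHTS)
    return positive + negative_scores_offset
```

For example, a post with a 50% like probability and a 20% dwell probability, carrying a -0.5 negative offset, would score 0.5 + 2.0 - 0.5 = 2.0 under these placeholder weights.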
Negative Signals and the Author Diversity Penalty
The documentation highlights strict penalties for negative user feedback. Four specific actions will actively "deboost" an account, potentially hiding it from timelines if these signals accumulate:
- Clicking "Not Interested"
- Blocking the author
- Muting the author
- Reporting the post
The Author Diversity Penalty
A critical change in this update is the formalization of the Author Diversity Penalty. The system is designed to prevent a single creator from dominating a user's feed. Once a post from an author appears in a user's timeline, the probability of that author appearing again on the same day decreases significantly. This means reach scores decay with each subsequent post in a short timeframe, discouraging rapid-fire posting strategies.
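The decay described above can be modeled as a multiplicative penalty per same-day appearance. The decay factor of 0.5 below is an illustrative assumption, not a constant from the released code, which defines its own decay curve.

```python
def diversity_penalty(base_score: float,
                      prior_appearances: int,
                      decay: float = 0.5) -> float:
    """Illustrative Author Diversity Penalty sketch.

    Each prior same-day appearance of the author multiplies the score
    by `decay`. The 0.5 factor is a placeholder assumption; the real
    decay schedule lives in the open-sourced code.
    """
    return base_score * (decay ** prior_appearances)
```

Under this toy model, an author's third post of the day starting from a base score of 100 would rank at only 100 × 0.5² = 25, which is why rapid-fire posting yields sharply diminishing reach.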
Practical Workflow: Adapting to the Phoenix Algorithm
To align with the mechanics of the Phoenix model, creators and marketers must adjust their publishing workflows to maximize dwell time and minimize negative signals.
- Step 1: Audit Recent Engagement Patterns. Analyze which content formats are driving "dwell" and "video watch time," as these are now heavily weighted positive signals.
- Step 2: Optimize for Direct Messages. Structure content to encourage sharing via DM, as this is treated as a high-value private signal compared to public engagement.
- Step 3: Implement Frequency Capping. Space out posts significantly to avoid the Author Diversity Penalty; rapid posting will result in diminishing returns on reach.
- Step 4: Cultivate In-Network Followers. Focus on acquiring followers, as content served to an existing follower base does not suffer from Out-Of-Network (OON) penalties.
- Step 5: Sanitize Content for Safety Filters. Review content for controversial keywords or aggressive language that might trigger safety filters or "mute" actions.
Common Mistakes
The open-source documentation reveals several failure patterns that directly reduce algorithmic scoring.
- Mistake: Rapid-fire posting.
  Principle: Posting multiple times in quick succession triggers the Author Diversity Penalty, wasting potential reach on decayed impressions.
- Mistake: Triggering "Mute" or "Block" actions.
  Principle: Polarizing or spammy content that causes users to mute or block an account applies a heavy negative weight to the global reach score.
- Mistake: Short, low-context video.
  Principle: Videos that do not meet minimum duration thresholds fail to trigger the "Video Quality View" (VQV) boost, missing a key amplification lever.
- Mistake: Using muted keywords.
  Principle: Including words frequently muted by users will filter content out of timelines entirely for those segments.
- Mistake: Engaging with irrelevant content.
  Principle: For personal accounts, engaging with content (even hate-clicking) recalibrates the "Last 128 Posts" model, flooding the feed with similar undesirable material.
Analyzing how these algorithmic signals manifest across different creative formats can be streamlined through ad intelligence platforms. AdLibrary.com provides a centralized environment to research high-performing ad creatives and validate hypotheses against current algorithmic trends.