A quantitative approach to feature prioritization used by leading product teams worldwide
RICE is a prioritization framework designed to help product teams make objective, data-driven decisions about which features and projects to work on. Developed and popularized by Intercom, it provides a systematic way to evaluate opportunities by scoring them across four key factors: Reach, Impact, Confidence, and Effort.
Unlike subjective prioritization methods that rely on gut feeling or opinion, RICE introduces quantifiable metrics that force teams to think critically about each dimension of a potential project. The result is a numerical score that allows for direct comparison between wildly different initiatives.
The formula: RICE score = (Reach × Impact × Confidence) ÷ Effort. Higher scores indicate higher-priority projects.
RICE forces teams to explicitly consider both the upside (Reach × Impact × Confidence) and the cost (Effort) of every initiative. This balanced approach prevents teams from cherry-picking easy but low-impact work or pursuing high-impact projects that consume disproportionate resources.
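In code, the formula reduces to a single expression. Here is a minimal Python sketch; the function name and signature are illustrative, not from any library:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE score = (Reach × Impact × Confidence) / Effort.

    reach:      people affected per period (e.g., customers per quarter)
    impact:     per-person impact on the 0.25-3 scale
    confidence: estimate confidence as a fraction (0.8 for 80%)
    effort:     total person-months across the whole team
    """
    if effort <= 0:
        raise ValueError("effort must be a positive number of person-months")
    return (reach * impact * confidence) / effort

# 5,000 customers/quarter, high impact, 80% confidence, 3 person-months:
print(round(rice_score(5_000, 2.0, 0.8, 3)))  # 2667
```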
Definition: How many people will this impact within a given time period?
Measurement: Number of users/customers per quarter (or month)
Example: "This feature will reach 5,000 customers per quarter" = Reach score of 5,000
Common metrics: Transactions per quarter, users who see the feature, support tickets per month
Definition: How much will this impact each person?
Measurement: Scored on a multiple-choice scale: 3 = massive impact, 2 = high, 1 = medium, 0.5 = low, 0.25 = minimal
Example: A critical bug fix might be 3.0; a minor UI improvement might be 0.5
Definition: How confident are you in your Reach and Impact estimates?
Measurement: Percentage, where 100% = high confidence, 80% = medium, and 50% = low; anything below 50% is a long shot
Purpose: Prevents "moonshot" projects with uncertain outcomes from dominating your roadmap. Use this to de-prioritize ideas that sound good but lack evidence.
Definition: How much total work will this require from the entire team?
Measurement: Person-months (total time across all team members)
Example: If 2 engineers work for 3 weeks and 1 designer works for 2 weeks, that's 8 person-weeks, or roughly 2 person-months
Includes: Design, engineering, testing, project management, and any other team time required
Tip: Estimate effort for the minimum viable scope: the smallest useful version, not the dream version.
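Summing effort across roles is simple bookkeeping. A sketch of the arithmetic, assuming the common convention of roughly 4 working weeks per person-month (the helper is hypothetical):

```python
WEEKS_PER_PERSON_MONTH = 4  # assumption: ~4 working weeks per person-month

def person_months(assignments: list[tuple[int, float]]) -> float:
    """Total effort from (people, weeks) pairs, converted to person-months."""
    person_weeks = sum(people * weeks for people, weeks in assignments)
    return person_weeks / WEEKS_PER_PERSON_MONTH

# The example above: 2 engineers × 3 weeks + 1 designer × 2 weeks
print(person_months([(2, 3), (1, 2)]))  # 8 person-weeks -> 2.0 person-months
```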
Let's compare five different feature ideas for a SaaS product to see how RICE scoring works in practice:
| Feature | Reach (users/qtr) | Impact (0.25-3) | Confidence (%) | Effort (person-mo) | RICE Score | Priority |
|---|---|---|---|---|---|---|
| Mobile app redesign | 8,000 | 2.0 | 80% | 6 | 2,133 | 3rd |
| Single sign-on (SSO) | 3,500 | 3.0 | 100% | 4 | 2,625 | 2nd |
| Onboarding tutorial | 12,000 | 2.0 | 80% | 3 | 6,400 | 1st |
| Advanced analytics dashboard | 1,200 | 1.0 | 50% | 5 | 120 | 5th |
| Bulk export feature | 4,000 | 1.0 | 100% | 2 | 2,000 | 4th |
Winner: Onboarding tutorial (6,400) - It doesn't lead on impact, confidence, or effort, but it pairs the highest reach with solid impact and reasonable effort, making it the clear priority.
Runner-up: SSO (2,625) - Massive impact (3.0) and high confidence overcome the moderate reach. Critical for enterprise customers.
Deprioritized: Advanced analytics (120) - Low confidence and limited reach result in a poor score despite moderate effort. This might be worth revisiting once you have better data.
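You can reproduce the table's scores and ranking in a few lines. A self-contained sketch using the data from the table above:

```python
# Feature: (reach per quarter, impact, confidence, effort in person-months)
features = {
    "Mobile app redesign":          (8_000, 2.0, 0.80, 6),
    "Single sign-on (SSO)":         (3_500, 3.0, 1.00, 4),
    "Onboarding tutorial":          (12_000, 2.0, 0.80, 3),
    "Advanced analytics dashboard": (1_200, 1.0, 0.50, 5),
    "Bulk export feature":          (4_000, 1.0, 1.00, 2),
}

scores = {
    name: (reach * impact * confidence) / effort
    for name, (reach, impact, confidence, effort) in features.items()
}
# Highest score first, matching the priority column in the table
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:,.0f}")
```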
Gaming the system: Teams sometimes inflate Reach or Impact to prioritize pet projects. Combat this with transparent scoring sessions and documented assumptions.
Paralysis by analysis: Don't spend hours debating whether Impact is 1.75 or 2.0. Make your best estimate and move on.
Ignoring qualitative factors: RICE is a tool, not a replacement for judgment. Strategic initiatives, compliance requirements, or technical debt may trump scores.
RICE is particularly effective for:
For B2B/enterprise SaaS products:
Reach: Accounts affected, not individual users
Impact: Consider revenue impact, not just user value
Note: Weight enterprise customer requests by ARR (see the sketch after this list)

For consumer products:
Reach: DAU/MAU affected
Impact: Tie to engagement, retention, or monetization metrics
Note: Consider viral/network effects in Impact scoring

For internal tools:
Reach: Team members or workflows affected
Impact: Time saved or efficiency gained
Note: Calculate ROI by comparing effort to time saved
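For the B2B case, one concrete way to apply the ARR weighting is to count each requesting account in proportion to its ARR rather than as a flat 1. A minimal sketch; the helper name and the normalize-by-average scheme are assumptions, not a standard:

```python
def arr_weighted_reach(requests: list[dict]) -> float:
    """Reach where each requesting account is weighted by ARR vs. the average.

    requests: [{"account": str, "arr": float}, ...] for accounts asking
    for the feature. An account paying 2x the average ARR counts as two
    "units" of reach; smaller accounts count proportionally less.
    """
    if not requests:
        return 0.0
    avg_arr = sum(r["arr"] for r in requests) / len(requests)
    return sum(r["arr"] / avg_arr for r in requests)

# Three accounts: one large, two small
print(arr_weighted_reach([
    {"account": "Acme", "arr": 120_000},
    {"account": "Beta", "arr": 30_000},
    {"account": "Gamma", "arr": 30_000},
]))  # 3.0 total reach, of which Acme alone contributes 2.0
```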
RICE works well alongside other prioritization methods:
Use ICE (Impact, Confidence, Ease) for rapid hypothesis prioritization in growth experiments - it's simpler and faster for high-volume testing (a minimal scoring sketch follows this list).
Use Kano Model to categorize features by user satisfaction impact before applying RICE - helps set appropriate Impact scores.
Use Value vs. Effort matrices for executive presentations - stakeholders understand 2x2 matrices more intuitively than RICE scores.
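For comparison, ICE drops Reach and replaces person-month Effort with a quick Ease rating, which is what makes it faster to apply. A minimal sketch, assuming the common formulation where each component is rated 1-10 and the ratings are multiplied (some teams average them instead):

```python
def ice_score(impact: float, confidence: float, ease: float) -> float:
    """ICE: Impact × Confidence × Ease, each rated on a 1-10 scale.

    Some teams average the three ratings instead of multiplying;
    either way, the point is speed over precision.
    """
    for value in (impact, confidence, ease):
        if not 1 <= value <= 10:
            raise ValueError("ICE components are rated 1-10")
    return impact * confidence * ease

print(ice_score(7, 6, 9))  # 378: a quick gut-check score for an experiment
```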
Track your outcomes over time to evaluate whether RICE is improving your prioritization: did high-scoring projects actually deliver the Reach and Impact you estimated, and did Effort estimates hold up?
RICE transforms subjective prioritization into an objective, repeatable process. By systematically evaluating Reach, Impact, Confidence, and Effort, product teams can make better decisions, align stakeholders, and focus resources on work that matters most.
Remember: RICE is a tool, not a dictator
Use it to inform decisions, spark productive conversations, and build consensus—but always leave room for strategic judgment and qualitative factors that numbers can't capture.