RICE Scoring Framework: Prioritize Features with Data, Not Opinions
RICE is a prioritization framework developed by Intercom that scores features or initiatives based on four factors: Reach, Impact, Confidence, and Effort. The resulting score provides a standardized way to compare unlike items — a redesigned checkout flow versus a new reporting dashboard versus a performance optimization — on the same scale.
Prioritization meetings without a framework devolve into opinion battles. The loudest stakeholder wins, the HiPPO (Highest Paid Person’s Opinion) dominates, or everything gets marked “high priority” and nothing is actually prioritized. RICE replaces that dysfunction with a numerical scoring system that surfaces the best return on investment.
The Formula
RICE Score = (Reach × Impact × Confidence) / Effort
Each factor is scored individually, then combined. Higher scores indicate higher priority — more value delivered per unit of effort.
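As a sanity check, the formula is simple enough to express as a one-line function. A minimal Python sketch (the function name and signature are illustrative, not from any RICE tooling):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Compute a RICE score: value delivered per unit of effort.

    reach      -- people affected per time period (e.g. users/month)
    impact     -- standardized scale value: 3, 2, 1, 0.5, or 0.25
    confidence -- fraction between 0 and 1 (e.g. 0.8 for 80%)
    effort     -- person-months (must be positive)
    """
    if effort <= 0:
        raise ValueError("effort must be positive")
    return (reach * impact * confidence) / effort

# Checkout redesign: 15,000 reach, impact 2, 80% confidence, 3 person-months
print(rice_score(15_000, 2, 0.8, 3))  # 8000.0
```

Putting Effort in the denominator is what makes the score a return-on-investment measure rather than a raw value estimate.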
The Four Factors
Reach
How many people will this feature affect in a defined time period? Measure Reach in the unit that makes sense for your product: users per month, customers per quarter, transactions per week.
Examples:
- “Redesign checkout flow” — Reach: 15,000 customers/month (everyone who reaches checkout)
- “Add CSV export to admin panel” — Reach: 50 users/month (only internal admins)
- “Improve page load speed by 2 seconds” — Reach: 100,000 users/month (every visitor)
Reach prevents the common trap of building features that matter deeply to one vocal stakeholder but affect almost nobody. A feature used by 50 people needs to have outsized Impact or very low Effort to score well.
Impact
How much will this feature affect each person it reaches? Impact is the hardest factor to quantify, so use a standardized scale:
- 3 = Massive impact (transforms the user experience)
- 2 = High impact (significant improvement)
- 1 = Medium impact (noticeable improvement)
- 0.5 = Low impact (minimal improvement)
- 0.25 = Minimal impact (barely noticeable)
Be honest about Impact. Most features are a 1 (medium) or 0.5 (low). Reserving 3 for truly transformative changes keeps the scale meaningful.
Examples:
- “Redesign checkout flow” — Impact: 2 (reduces cart abandonment significantly)
- “Add CSV export” — Impact: 1 (saves admins 30 minutes per report)
- “Improve page load speed” — Impact: 1 (better experience, lower bounce rate)
Confidence
How confident are you in the Reach and Impact estimates? This factor accounts for uncertainty. Score as a percentage:
- 100% = High confidence — backed by data, user research, or historical precedent
- 80% = Medium confidence — some data, reasonable assumptions
- 50% = Low confidence — gut feeling, no supporting data
- 20% = Speculation — pure guess
Confidence penalizes features where the team is guessing about value. A feature with high potential but no supporting evidence gets discounted, which is appropriate — unvalidated ideas should compete against validated ones at a disadvantage.
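Impact and Confidence only stay meaningful if everyone scores on the same scale, so it can help to reject off-scale values before computing anything. A small sketch (the allowed values come straight from the scales above; the helper itself is hypothetical):

```python
# Allowed values, taken from the standardized scales above.
IMPACT_SCALE = {3, 2, 1, 0.5, 0.25}
CONFIDENCE_SCALE = {1.0, 0.8, 0.5, 0.2}  # 100%, 80%, 50%, 20%

def validate_scores(impact: float, confidence: float) -> None:
    """Raise ValueError if a score is off the standardized scale."""
    if impact not in IMPACT_SCALE:
        raise ValueError(f"impact {impact} not in {sorted(IMPACT_SCALE)}")
    if confidence not in CONFIDENCE_SCALE:
        raise ValueError(f"confidence {confidence} not in {sorted(CONFIDENCE_SCALE)}")

validate_scores(2, 0.8)      # on-scale values pass silently
# validate_scores(2.5, 0.8)  # would raise: 2.5 is not on the Impact scale
```

Restricting scores to the fixed scale also blocks the temptation to split the difference ("call it a 1.5") instead of debating which level a feature actually belongs to.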
Effort
How much work will this feature require? Measure in person-months (or person-weeks for smaller items). Include all work: design, development, QA, documentation, deployment.
Examples:
- “Redesign checkout flow” — Effort: 3 person-months (design + frontend + backend + testing)
- “Add CSV export” — Effort: 0.5 person-months (backend only, straightforward)
- “Improve page load speed” — Effort: 2 person-months (CDN setup + code optimization + testing)
Effort is in the denominator, so higher effort reduces the score. This naturally promotes efficient features — small improvements that affect many users score well.
Worked Example
| Feature | Reach (per month) | Impact | Confidence | Effort (person-months) | RICE Score |
|---|---|---|---|---|---|
| Checkout redesign | 15,000 | 2 | 80% | 3 | 8,000 |
| CSV export | 50 | 1 | 100% | 0.5 | 100 |
| Page speed improvement | 100,000 | 1 | 80% | 2 | 40,000 |
| New onboarding flow | 5,000 | 2 | 50% | 2 | 2,500 |
| Dark mode | 20,000 | 0.5 | 80% | 1 | 8,000 |
Priority order by RICE score: Page speed improvement (40,000) > Checkout redesign (8,000) = Dark mode (8,000) > New onboarding flow (2,500) > CSV export (100).
The page speed improvement scores highest because it reaches every user with meaningful impact. Dark mode ties with checkout redesign despite lower per-user impact because it reaches more people at lower effort. CSV export scores last because its reach is tiny, regardless of how much the 50 admins want it.
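The whole table can be reproduced and ranked in a few lines, which is also a useful way to double-check spreadsheet arithmetic. A Python sketch using the numbers above:

```python
# (feature, reach/month, impact, confidence, effort in person-months)
features = [
    ("Checkout redesign",       15_000, 2,   0.8, 3),
    ("CSV export",                  50, 1,   1.0, 0.5),
    ("Page speed improvement", 100_000, 1,   0.8, 2),
    ("New onboarding flow",      5_000, 2,   0.5, 2),
    ("Dark mode",               20_000, 0.5, 0.8, 1),
]

# Score each feature, then sort highest-first.
scored = sorted(
    ((name, reach * impact * conf / effort)
     for name, reach, impact, conf, effort in features),
    key=lambda pair: pair[1],
    reverse=True,
)

for name, score in scored:
    print(f"{name:24s} {score:>10,.0f}")
```

This prints the same priority order as above: page speed (40,000) first, then the checkout redesign and dark mode tied at 8,000, the onboarding flow at 2,500, and CSV export last at 100.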
Running a RICE Scoring Session
Before the session: Create a spreadsheet with all candidate features listed. Pre-populate Reach estimates using analytics data. Draft initial Effort estimates with the engineering team.
During the session (60-90 minutes):
- Walk through each item and agree on Reach numbers using data where available.
- Discuss and assign Impact scores as a group. This is where the most debate happens — keep it constructive by asking “compared to feature X which we scored a 1, is this more or less impactful?”
- Set Confidence levels honestly. If nobody has data to support an Impact claim, the Confidence should be 50% or lower.
- Validate Effort estimates with engineering. Adjust if the team disagrees with the PM’s initial estimate.
- Calculate scores and review the resulting priority order. Discuss any results that feel wrong — they may indicate a scoring error or a factor you missed.
After the session: Share the scored list with all stakeholders. The RICE score becomes the starting point for sprint planning and roadmap decisions.
RICE Limitations and Adjustments
RICE does not capture strategic value. A feature that supports a key partnership or executive commitment may need to be prioritized regardless of its RICE score. Use RICE as input, not as the sole decision-maker. Strategic overrides should be documented with clear rationale.
RICE does not handle dependencies. Feature A might enable Feature B. If Feature B has a high score but cannot ship without Feature A (which scores low), A needs to move up. Review dependency chains after scoring.
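One simple post-scoring adjustment is to make sure no feature is ranked above a prerequisite it cannot ship without. A sketch of that reordering (the `depends_on` mapping and promotion rule are illustrative, not part of RICE itself, and this assumes the dependency graph has no cycles):

```python
def order_with_dependencies(ranked, depends_on):
    """Reorder a RICE-ranked list so hard prerequisites come first.

    ranked     -- feature names, highest RICE score first
    depends_on -- {feature: prerequisite} for hard dependencies
    """
    ordered = []

    def place(feature):
        if feature in ordered:
            return
        prereq = depends_on.get(feature)
        if prereq:
            place(prereq)  # pull the prerequisite up ahead of the feature
        ordered.append(feature)

    for feature in ranked:
        place(feature)
    return ordered

# Feature B scores higher than A, but cannot ship without A:
print(order_with_dependencies(["B", "C", "A"], {"B": "A"}))  # ['A', 'B', 'C']
```

Note that the low-scoring prerequisite effectively inherits the priority of the feature it unblocks, which is exactly the judgment call the raw scores cannot make for you.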
Effort estimates are often wrong. Engineering estimates improve with experience, but even senior teams miss by 50% or more on novel work. Re-score after technical spikes reduce uncertainty.
RICE works best for product features. For infrastructure work, technical debt, and operational improvements, the Reach/Impact framework fits awkwardly. Consider separate prioritization for technical health items using engineering-specific criteria.
RICE and Other Frameworks
RICE works well alongside MoSCoW — use MoSCoW to define the boundary (Must Have vs Should Have) and RICE to rank within each boundary. Combine with the Eisenhower Matrix for time-sensitive decisions where urgency matters alongside value.
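In practice that combination reduces to grouping by MoSCoW category and sorting by RICE within each group. A sketch (the category assignments for these features are illustrative; the scores are from the worked example):

```python
# (feature, MoSCoW category, RICE score)
items = [
    ("Page speed improvement", "Must Have",   40_000),
    ("Checkout redesign",      "Must Have",    8_000),
    ("Dark mode",              "Could Have",   8_000),
    ("New onboarding flow",    "Should Have",  2_500),
    ("CSV export",             "Should Have",    100),
]

CATEGORY_ORDER = {"Must Have": 0, "Should Have": 1, "Could Have": 2, "Won't Have": 3}

# MoSCoW decides the bucket; RICE ranks within the bucket.
ranked = sorted(items, key=lambda i: (CATEGORY_ORDER[i[1]], -i[2]))
for name, category, score in ranked:
    print(f"{category:12s} {name:24s} {score:>8,}")
```

Here dark mode drops below the onboarding flow despite its higher RICE score, because MoSCoW placed it in a lower bucket. That is the intended behavior: the boundary decision outranks the within-bucket ranking.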
The point of RICE is not to produce a perfect ranking. It is to make the reasoning behind prioritization decisions explicit, comparable, and debatable. When the product owner says “I feel like dark mode is more important than page speed,” RICE gives the team a framework to explore whether the feeling matches the data.