Howzit again, tech legends! Ever feel like you’re stuck in a never-ending tug-of-war between building cool new features and sorting out that mountain of technical debt? Damn, it’s a proper struggle! It’s like trying to decide between a braai with mates this weekend or finally fixing that leaky geyser that the wife has been nagging you about for months.
Good news – you don’t have to rely on gut feel or the loudest voice in the standup anymore! Let’s check out how the RICE and ICE frameworks can be your secret weapon for making lekker decisions that keep both your product managers and your future self happy.
The OG Frameworks: What Are We Working With?
Before we go full-on MacGyver and adapt these frameworks for our technical troubles, let’s suss out what they’re actually about.
The ICE Scoring Model: Quick and Simple as a Rooibos Tea
The ICE model is about as straightforward as directions to the nearest Woolies. It stands for:
- Impact: How much this will move the needle (score 1-10)
- Confidence: How sure you are about your prediction (score 1-10)
- Ease: How simple it’ll be to pull off (score 1-10)
Multiply these three together, and voila! You’ve got your ICE score. Beauty of this system? It’s quick as a flash and perfect for when your product owner wants answers “now-now.”
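If you'd rather let a script do the multiplying, here's a minimal sketch in Python (the function name and the example numbers are just for illustration):

```python
def ice_score(impact: float, confidence: float, ease: float) -> float:
    """ICE score: each input on a 1-10 scale, higher is better."""
    for name, value in (("impact", impact), ("confidence", confidence), ("ease", ease)):
        if not 1 <= value <= 10:
            raise ValueError(f"{name} must be between 1 and 10, got {value}")
    return impact * confidence * ease

# A feature with solid impact (7), decent confidence (6), middling ease (5):
print(ice_score(7, 6, 5))  # 210
```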
The RICE Scoring Model: Adding Some Extra Spice
RICE takes ICE and adds a dash of extra flavour by throwing in the “Reach” factor. Think of it as ICE’s slightly more sophisticated cousin from Cape Town.
- Reach: How many blokes and ladies will this affect?
- Impact: Same as ICE, but scaled from minimal (0.25) to massive (3)
- Confidence: Your certainty level, usually as a percentage
- Effort: The resources needed, typically in person-weeks or story points
The formula goes: (Reach × Impact × Confidence) ÷ Effort
It’s like calculating how much boerewors to buy for a braai based on how many people are coming, how hungry they are, how sure you are they’ll pitch, and how much effort it’ll take to prepare.
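Same idea in code, as a minimal sketch. Note that confidence goes in as a fraction (80% becomes 0.8), and the example numbers are made up:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE score: reach in people (or events) per period, impact on the
    0.25-3 scale, confidence as a fraction (0-1), effort in person-weeks."""
    if effort <= 0:
        raise ValueError("effort must be positive")
    return (reach * impact * confidence) / effort

# 500 users reached, high impact (2), 80% confidence, 4 person-weeks of work:
print(rice_score(500, 2, 0.8, 4))  # 200.0
```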
Adapting for Technical Decision-Making: Getting Technical
Now for the good stuff! Let’s take these frameworks and give them a proper tech makeover.
Technical Reach: It’s Not Just About Users
For technical work, reach isn’t just about user numbers. Think about:
- Number of system components affected (Is this change as widespread as load shedding?)
- Percentage of codebase impacted (More than the potholes on your daily commute?)
- Number of dev teams whose work would be influenced
- How often that dodgy code gets touched
- Future features that would benefit (Like building a highway that’ll make all future trips faster)
Technical Impact: Quantifying the Goodness
Instead of vague “this code is kak” statements, let’s get specific about technical impact:
- Performance improvements (x% faster page loads)
- Reduction in defects (fewer “ag, sorry man” moments with customers)
- Maintainability scores (how much less you’ll curse at 2AM when on call)
- Build time reductions (fewer coffee breaks during deployments)
- Security improvements (fewer chances for skelms to break in)
Rate these from 0.25 (barely noticeable, like a R1 coin in your pocket) to 3 (massive, like finding a R200 note in your old jacket).
Technical Confidence: Being Honest with Yourself
This is about backing up your technical hunches with some proper evidence:
- Historical data from similar work
- What your team’s greybeards reckon
- Results from spike tests
- Hard metrics confirming the problem exists
- Risk assessment (what could go wrong? Jinne, always think about this!)
Express this as a percentage. 100% means you’ll bet your favourite bakkie on the outcome; 50% means it’s as predictable as Joburg traffic.
Technical Effort vs. Ease: The Hard Yards
For RICE, we’re looking at effort:
- Development time (in person-weeks, not “it’ll be quick” promises)
- Testing complexity (from “shame, it’s simple” to “we’ll need the whole team”)
- Operational risk (will this be as disruptive as a taxi strike?)
- Dependencies on other teams
- Learning curve (is this like learning to drive or learning to fly a helicopter?)
For ICE, flip this into ease, where 10 is “lekker easy” and 1 is “harder than finding parking at the mall on Black Friday.”
Metrics That Actually Mean Something
Let’s get specific about what to measure.
Tracking Technical Debt
- Defect ratio: How many bugs per unit of code, say per thousand lines (Higher than the number of mosquitoes at a summer braai? Problem!)
- Code churn: How often recently written code gets rewritten or deleted (faster than you change your mind about load shedding schedules? Problem!)
- Technical debt ratio: The cost of fixing your known issues divided by the cost of building the system in the first place (worked example after this list)
- Code duplication: Are you copy-pasting more than a first-year student?
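To make that technical debt ratio concrete (numbers made up for illustration): if fixing the known issues in a service would take about R200 000 of dev time, and rebuilding that service from scratch would cost around R2 million, the ratio is 200 000 ÷ 2 000 000 = 10%. One common rule of thumb is to start worrying once you're north of 5%.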
Feature Development Efficiency
- Integration complexity: How hard is it to fit this new feature in? Like adding a new plug to a power strip or rewiring your whole house?
- Development time vs similar features: Taking longer than it should? Like when a “quick shop” at Pick n Pay somehow takes an hour?
- Regression rate: How often new features break old ones (The technical equivalent of fixing your TV but breaking your DStv in the process)
Making Subjective Stuff Objective
Hardest part of this whole business? Turning vague tech feelings into solid numbers.
Quantifying Technical Impact
- Before/after benchmarking: Measure before, implement, measure after. Simple as a gatsby sandwich. (There's a small timing sketch after this list.)
- Historical analysis: “Last time we did something like this, it saved us X hours of debugging.”
- Expert scoring: Get the team to rate independently, then discuss – like a wine tasting but for code.
- Risk-based scoring: How much disaster potential does this fix address?
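That before/after benchmarking can be as simple as a stopwatch around the hot path. A minimal sketch, where `old_slow_path` and `new_fast_path` are hypothetical placeholders for whatever you're comparing:

```python
import time
import statistics

def benchmark_ms(fn, runs: int = 20) -> float:
    """Median wall-clock time of fn() over several runs, in milliseconds."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        timings.append((time.perf_counter() - start) * 1000)
    return statistics.median(timings)

# Measure, ship the fix, measure again (both functions are hypothetical):
# before = benchmark_ms(old_slow_path)
# after = benchmark_ms(new_fast_path)
# print(f"Improvement: {1 - after / before:.0%}")
```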
Future Maintenance Costs
- Technical debt interest: Track time spent fixing issues in specific components
- Complexity trends: Is this part of the system becoming the technical equivalent of Sandton traffic?
- Change frequency mapping: Which parts of code change more often than Cape Town weather?
- Developer feedback: Actually ask your developers which parts make them want to leave for Australia
Balancing Business and Technical Needs
We all know the struggle – business wants features yesterday, while engineers want to rebuild everything from scratch. Here’s how to find the sweet spot:
Integration Strategies
- Combined scoring: Calculate both business and technical scores, then blend them like a good potjie (small sketch after this list)
- Technical health budgeting: “We’re spending 20% of our time on technical health, no arguments.”
- Technical debt thresholds: “When it gets this bad, we HAVE to fix it, shame.”
- Opportunity cost calculation: “If we don’t fix this now, it’ll cost us X weeks next quarter.”
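One way to do that blending is a plain weighted average. A minimal sketch, assuming both scores are already normalised to 0-1, and that the 30% technical weight is a team decision, not gospel:

```python
def blended_priority(business_score: float, technical_score: float,
                     technical_weight: float = 0.3) -> float:
    """Weighted blend of normalised (0-1) business and technical scores."""
    if not 0 <= technical_weight <= 1:
        raise ValueError("technical_weight must be between 0 and 1")
    return (1 - technical_weight) * business_score + technical_weight * technical_score

# Great for the business, does nothing for the platform:
print(round(blended_priority(business_score=0.9, technical_score=0.1), 2))  # 0.66
```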
Communication Techniques
- Technical impact storytelling: “This refactoring will let us build features 30% faster next quarter, my bru.”
- Technical debt visualization: Make those problems as visible as Table Mountain on a clear day
- Risk-based advocacy: “If we don’t fix this, there’s a 70% chance of a major outage during Black Friday.”
- Regular technical health reviews: Like going to the doctor for a check-up, but for your codebase
Practical Application: Ready-to-Use Templates
Technical RICE Scoring Template
For each technical initiative, score:
Technical Reach (1-10; cap the summed points at 10)
- Number of services affected (1 point per 10% of total)
- Number of teams impacted (0.5 points per team)
- Percentage of codebase affected (1 point per 10%)
Technical Impact (0.25-3)
- 0.25: Minimal – Small improvements with limited effect (like fixing a typo in a comment)
- 0.5: Low – Noticeable improvements in one aspect
- 1: Medium – Significant improvements in multiple aspects
- 2: High – Major improvements enabling new capabilities (now we’re talking!)
- 3: Massive – Transformative improvements (the technical equivalent of finding a shortcut that halves your commute time)
Technical Confidence (0-100%)
- 100%: Proven solution with clear metrics (Bet your braai on it)
- 80%: Well-understood solution with some uncertainty
- 50%: Experimental approach with significant unknowns
- 30%: Highly speculative (Like predicting the Springboks’ score)
Technical Effort (person-weeks)
- Estimated development time
- Testing requirements
- Operational complexity
- Number of teams involved
Calculate Technical RICE Score: (Reach × Impact × Confidence) ÷ Effort, with Confidence going in as a fraction (80% becomes 0.8)
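If you'd rather not run this template on a whiteboard every sprint, here's a minimal sketch in Python. The cap on reach and all the names are my own additions, and the example numbers are made up:

```python
def technical_reach(pct_services: float, teams_impacted: int, pct_codebase: float) -> float:
    """Reach on a 1-10 scale: 1 point per 10% of services affected,
    0.5 points per team impacted, 1 point per 10% of codebase; capped at 10."""
    raw = pct_services / 10 + 0.5 * teams_impacted + pct_codebase / 10
    return min(10.0, max(1.0, raw))

def technical_rice(reach: float, impact: float, confidence_pct: float,
                   effort_weeks: float) -> float:
    """Impact on the 0.25-3 scale, confidence as a percentage (0-100),
    effort in person-weeks."""
    return (reach * impact * confidence_pct / 100) / effort_weeks

# Touches 40% of services, 3 teams, 20% of the codebase;
# high impact (2), 80% confidence, 6 person-weeks:
reach = technical_reach(40, 3, 20)      # 4 + 1.5 + 2 = 7.5
print(technical_rice(reach, 2, 80, 6))  # 2.0
```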
Technical ICE Scoring Template
If you want a simpler approach:
Technical Impact (1-10)
- System performance improvement (0-3 points)
- Maintainability enhancement (0-3 points)
- Developer productivity impact (0-2 points)
- Risk reduction (0-2 points)
Technical Confidence (1-10)
- Evidence strength (0-3 points)
- Expert consensus (0-3 points)
- Prior experience with similar work (0-2 points)
- Problem understanding (0-2 points)
Technical Ease (1-10)
- Development complexity (reverse scale: 10 = dead simple)
- Testing complexity (reverse scale)
- Operational risk (reverse scale)
- Team familiarity (direct scale: 10 = we could do this in our sleep)
Calculate Technical ICE Score: Impact × Confidence × Ease
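And the ICE template as a sketch, summing the sub-scores above (the dict keys are just labels I made up):

```python
def technical_ice(impact_parts: dict, confidence_parts: dict, ease_parts: dict) -> float:
    """Each dimension's sub-scores sum to a 1-10 value, then the three multiply."""
    return sum(impact_parts.values()) * sum(confidence_parts.values()) * sum(ease_parts.values())

score = technical_ice(
    impact_parts={"performance": 2, "maintainability": 3, "productivity": 2, "risk_reduction": 1},
    confidence_parts={"evidence": 2, "consensus": 3, "experience": 2, "understanding": 2},
    ease_parts={"dev_complexity": 3, "test_complexity": 2, "ops_risk": 2, "familiarity": 1},
)
print(score)  # 8 x 9 x 8 = 576
```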
Real-World Examples: Seeing It In Action
Case Study 1: Database Migration vs. New Analytics Dashboard
Database Migration:
- Reach: 9 (affects 90% of system components)
- Impact: 2 (high – like upgrading from a regular Uno to a turbo version)
- Confidence: 80% (well-understood approach with some migration risks)
- Effort: 12 person-weeks
- Technical RICE: (9 × 2 × 0.8) ÷ 12 = 1.2
Analytics Dashboard:
- Reach: 3 (affects only reporting components)
- Impact: 1 (medium – useful but not revolutionary)
- Confidence: 100% (dead sure about this one)
- Effort: 4 person-weeks
- Technical RICE: (3 × 1 × 1) ÷ 4 = 0.75
Even though the migration requires more effort than building a new deck for your braai area, it scores higher. That’s your answer, china!
Case Study 2: Refactoring Core Library
- Impact: 8 (high impact on maintainability and developer productivity)
- Confidence: 9 (strong consensus among engineers)
- Ease: 4 (moderately complex, requiring significant testing)
- Technical ICE: 8 × 9 × 4 = 288
That’s a score higher than a Springbok victory! Worth considering despite the complexity.
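If you've got the earlier sketches handy, both case studies check out in a couple of lines:

```python
# Case study 1 (confidence goes in as a fraction):
print(rice_score(reach=9, impact=2, confidence=0.8, effort=12))  # 1.2
print(rice_score(reach=3, impact=1, confidence=1.0, effort=4))   # 0.75

# Case study 2:
print(ice_score(impact=8, confidence=9, ease=4))  # 288
```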
Making It Work In Your Organisation
Integration with Development Processes
- For agile teams, incorporate scoring during backlog refinement and sprint planning
- For teams using OKRs, align technical initiatives with quarterly objectives
- For continuous delivery environments, establish regular technical prioritisation checkpoints
- For organisations with separate platform teams, use the frameworks to align platform work with product needs
Evolving the Framework
Like a good potjiekos recipe, this framework should improve over time:
- Review how well your prioritisation decisions worked out
- Track actual outcomes against expected impact
- Refine your scoring criteria based on what you learn
- Adjust component weights as your organisation’s priorities shift
- Document your decisions to ensure consistency (and to show off when you were right!)
There you have it! A proper framework for making technical prioritisation less of a headache than morning traffic on the N1. With these tools, you can turn those vague technical arguments into structured discussions that even your most feature-hungry product manager can understand.
So next time you’re caught between building that shiny new feature and paying down technical debt, you’ll have more than just your gut feel to rely on. You’ll have a lekker system that helps everyone see the bigger picture. Now that’s proper sorted!
Don’t forget to grab a coffee and score those backlog items. Cheers! 🇿🇦