I recently read a piece by Casey Cobb about how we can use the Pareto Principle in software engineering. The Pareto Principle, or the 80/20 principle, is the observation that, for example, 80% of car accidents are caused by 20% of drivers. In the piece, Casey goes into how we can use this knowledge to ship code faster.

https://projectricochet.com/blog/software-development-pareto-principle-and-80-solution

If we apply the Pareto Principle to software, then the following should be true:

  • 80% of features are covered by 20% of the codebase
  • 80% of complexity is in 20% of the codebase
  • 80% of engineering time is spent on 20% of the codebase
  • 80% of customers only care about 20% of the features

The key idea of the piece is that if all of the above are true then we should be able to develop 80% of the application in 20% of the time.
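To make the arithmetic behind that claim concrete, here is a toy sketch in Python. The 1000-hour total and the clean 80/20 splits are my own assumptions for illustration, not figures from the piece: if the hard 20% of the codebase eats 80% of the time, skipping it leaves 80% of the features for 20% of the effort.

    # Toy illustration of the 80/20 claim; all numbers are hypothetical.
    total_hours = 1000             # assumed effort to build the full app
    time_on_complex_code = 0.8     # share of time spent on the hard 20% of the codebase
    features_in_simple_code = 0.8  # share of features covered by the other 80% of the code

    hours_for_simple_part = total_hours * (1 - time_on_complex_code)
    print(f"{features_in_simple_code:.0%} of the features in "
          f"{hours_for_simple_part / total_hours:.0%} of the time "
          f"({hours_for_simple_part:.0f} of {total_hours} hours)")
    # -> 80% of the features in 20% of the time (200 of 1000 hours)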

The Bad

My initial reaction to it was that it is a massive over-simplification of software engineering. It’s not as easy as just stripping out 20% of the code so that everything becomes much quicker. For example, the deployment process may take time to set up and configure, along with other plumbing that is required for the app to run but cannot simply be removed. Similarly, auth and integrating with external APIs often take time depending on how good the docs are, but those can be core to the entire app, not just individual features.

The Good

However, after thinking about it some more, I think the core idea of the piece is sound once proper communication is in place. The most important thing is that designers and product managers lay out requirements more precisely, so the engineers implementing the features can get a better idea of how complex each one is. They can then work out which parts are more valuable and focus on shipping those.

Communication is a two-way street though, and engineers must communicate back which parts require extra time and complexity. Maybe something could be simplified slightly, or work another way, to reduce the eventual complexity. PMs can then decide whether a feature is really worth it, for now at least. Something that may be worth 10 hours of work may not be worth 100 hours.

One More Thing

It is mentioned in an offhand way in the piece how they measure and compare the estimated complexity against the actual complexity. That is, before a feature is started they estimate how long it will take, and at the end they run a mini-retrospective to see how long it actually took. This feedback is then used to make more accurate estimations going forward.

It’s not something I’ve thought about before because I don’t really consider accurate estimations to be of high importance. Instead, they should be a rough guide for judging, relative to other pieces of work, whether something may take longer or shorter. Over time the team should develop a “feel” for how long work should take, but without comparing the actuals against the estimates we can’t see if our estimations are any good. This feedback loop after the work is done helps us hone our estimations.

He uses these estimations to build a fine-grained picture of how long features take, which in the end helps the team plan out work better. If something takes way longer than expected, then maybe that’s a sign of something else going on. One thing I would caution against is using these numbers as a way of measuring performance. They are indicators, not benchmarks. Putting too much emphasis on the numbers will just make engineers overestimate to give themselves some breathing room.
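As a rough sketch of what that estimate-vs-actual feedback loop could look like, you could keep the two numbers side by side and flag the big misses for a mini-retrospective. The feature names and hour figures below are invented for illustration, not taken from the piece.

    # Hypothetical estimate-vs-actual log; names and hours are made up.
    features = [
        {"name": "CSV export", "estimated_hours": 8,  "actual_hours": 12},
        {"name": "Login page", "estimated_hours": 16, "actual_hours": 14},
        {"name": "Audit log",  "estimated_hours": 10, "actual_hours": 25},
    ]

    for f in features:
        ratio = f["actual_hours"] / f["estimated_hours"]
        flag = "  <- worth a mini-retrospective" if ratio > 1.5 else ""
        print(f"{f['name']}: estimated {f['estimated_hours']}h, "
              f"actual {f['actual_hours']}h (x{ratio:.1f}){flag}")

    # The average ratio gives a rough calibration factor for future estimates,
    # used as an indicator rather than a benchmark.
    avg = sum(f["actual_hours"] / f["estimated_hours"] for f in features) / len(features)
    print(f"Average actual/estimate ratio: x{avg:.1f}")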

Final Words

If there’s one thing to take away from this, it’s that communication between software engineers and PMs is vital so that only the work that needs to be done gets done. Adding features for the sake of it bloats the codebase, reduces team performance and in the end affects the company as a whole. By communicating both ways we can trim excess features, and code, so only what is required gets built.