Estimates

Hey everyone. Adam here with a special announcement.
Starting this podcast meant swapping my technical blogging for podcasting. The Small Batches format forces me to iterate, focus, and distill the core point into a five-to-ten-minute episode.
On the other hand, it’s left me without an avenue to share deeper technical knowledge with you. Showing and discussing code just does not work on an audio podcast.
This is why I’m stoked to announce my first technical writing in over three years on my new Substack: Software Kaizen!
The first post is on using statistical process control with R and Datadog to find your SLOs. The post is loaded with the same theory you find in Small Batches, with an added technical how-to.
Moreover, I’m excited to say that friend of the show John Willis, author of an upcoming book on Deming, peer-reviewed this post.
So if you want to get started with SPC and go deeper into the practice behind Small Batches, go to SoftwareKaizen.substack.com. That’s SoftwareKaizen.substack.com.
Expect more long form technical posts in the future.
Alright, back to Small Batches.

Hello and welcome to Small Batches with me, Adam Hawkins. In each episode, I share a small batch of the theory and practices behind software delivery excellence.
Topics include DevOps, lean, continuous delivery, and conversations with industry leaders. Now, let’s begin today’s episode.

I host open office hours at work. It’s a forum for workstream synchronization and for picking my brain. Most topics relate to ongoing project work. I receive the occasional open-ended question on best practices.
Last week an engineer added a topic on how my team does estimates. Game on. I was happy to share my position.
My position is simple: I, and my team, do not estimate individual tickets. Instead, I prefer batch size controls, strict WIP limits, clear priorities, and continuously delivering value.
Let me set the stage.
The engineer works in a scrum team with some form of sprint planning and estimating. My team works differently.
Let’s begin with a mental model for “sprints”.
A sprint is a time box. The aim is to deliver as much as possible inside that time box. A planning session uses priorities, backlogs, sizing, and other factors to pull work into the sprint. The team commits to the work, then off they go, locked into the planned deliverables.
Sidebar: more on mental models in the previous episode, with examples that will make sense later on. OK, back to the main thread.
There is a difference between the espoused model and the model in practice.
My practical experience reduces to a simple question: how many tickets can we pull into this sprint?
The sprint is the parking lot, and the team fits as many cars, trucks, vans, and semis into the lot as possible, with just enough room to carefully maneuver each one in and out.
The allure is that estimating work sizes it. Then we can pull as many different-sized items as needed to load up engineers with as many tickets as “fit”.
This mental model sets the initial conditions for systemic problems that impair delivery. Here’s how.

First, even in the best case, estimates are only accurate in a vacuum.
The estimate loses accuracy the moment it’s put to the test in a dynamic system of conflicting priorities, unplanned work, and varying levels of WIP. Actually doing the work happens in this dynamic system.

Second, even in the best case, accurate estimates have no bearing on priority or business outcomes.
Consider a high-profile product launch with marketing campaigns queued up for a set date. Estimating a single ticket as a day, a week, one point, six points, or an X-Large t-shirt means nothing regarding the outcome of making the launch date. A team can be 100% accurate in their estimates and completely miss the launch. Estimates cannot tell you how you will hit the launch date or what to trade off against it.

Third, all forms of estimates ultimately reduce to time.
Everyone in the value stream innately understands time, so everything filters through that lens. This creates the false assumption that giving team X some Y units of time yields Z deliverable. It’s a false assumption because estimates are static, but work happens in a dynamic system.

Let’s go deeper. There’s something between the second problem of business outcomes and the third problem of time.
Some listeners may think, “But, but Adam! We use story points, which measure difficulty”. OK cool, but no one outside your team cares, and management only understands time. Do you think the executives are planning in points or in time?
Others may be thinking, “Well actually, we estimate in terms of business impact!”. OK cool, but as measured by what exactly? Is this consistent across work items? What’s your %C/A on estimates? And how do these relate to org priorities? Oh, wow, you can answer each of these. But how do you create the business impact? The estimate cannot tell you that. The how is where the value is.
My point here is that estimates have nothing to do with delivering business outcomes over time. They’re a lagging indicator at best, when we have much better leading indicators. This brings me to my fourth point.

Fourth, granular estimation encourages high WIP. WIP is a leading indicator for delivery.
Lower WIP is better. The more we can estimate, the easier it is to reach and exceed 100% utilization. This is when things fall over. It’s plain textbook queueing theory.
It typically goes like this. It’s a two-week sprint in a team of three engineers. One engineer is assigned a two-week ticket, one is assigned two one-week tickets, and another is given ten one-day tickets. Wonderful! All tickets are assigned and all engineers have tickets. Sprint planning complete.
Congrats! The team is at 100% utilization. Any change in the planned work, or any unplanned work, will break the sprint. Oh, and there’s no pair programming either. That’s a whole other line of inquiry, though let’s just say that atomizing tasks to individual engineers is ill-advised.
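For the curious, here’s a minimal sketch of that textbook result, the M/M/1 queue, in Python. The one-unit service time and the exact utilization numbers are illustrative assumptions on my part, not real team data.

```python
# Minimal sketch of the M/M/1 queueing result: mean wait in queue is
# rho / (1 - rho) times the service time, so delay explodes as
# utilization (rho) approaches 100%.

def expected_wait(utilization: float, service_time: float = 1.0) -> float:
    """Mean queue wait for an M/M/1 queue at the given utilization."""
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return utilization / (1.0 - utilization) * service_time

for rho in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"{rho:.0%} utilization -> wait of {expected_wait(rho):.1f}x service time")
```

At 50% utilization, work waits one service time; at 95%, nineteen. A fully loaded sprint sits at the steep end of that curve, where any surprise blows up the plan.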

Fifth, even in the best case, estimates do not encourage swarming on priorities.
Have you heard this before? “We’re fully loaded with tickets on priority one, but the tickets for priority two work are too big, so let’s pull some smaller low priority tickets instead.”
This has negative side effects. It creates more WIP, confuses the priorities, and leaves neglected work behind when high-priority work interrupts and prevents finishing the lower-priority work.

Sixth, extremely large time estimates are accepted.
Teams may estimate a single item in months. Some orgs run this up the chain and decide to block out large chunks of the calendar, sometimes quarters or even halves of a year, for the work. This is a severe red flag for me. It tells me the batch size is way too large and time to value is way off.
I don’t trust anything over a week. There is too much uncertainty after a day, let alone a week. Anything longer is best expressed with the “shrugging intensifies” emoji.

I could continue, though I think this is enough to make my point.
This mental model creates the initial conditions for downstream problems. So what’s the alternative?
The alternative is pull-based work, batch size constraints, aggressive WIP limits, and continuously delivering business value.
I am not opposed to estimates. The only estimate that matters to me is whether the work fits into a predetermined batch size for cadenced pull.
My team’s batch size is a week. If the work is larger than that, then find a smaller batch that delivers something meaningful. If that’s not possible, then study the work until you can. Work doesn’t move until then.
Notice what I did there? I moved the estimating closer to WIP. Batch size may be a proxy for WIP. Controlling the batch size, paired with aggressive WIP controls, keeps work moving.
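Here’s a minimal sketch of that pull rule in Python. The one-week batch, the WIP limit of two, and names like MAX_BATCH_DAYS and can_pull are illustrative assumptions, not a real tool or API.

```python
# A sketch of the pull rule: the only estimate that matters is whether
# the work fits the batch size, and whether there's WIP capacity to pull it.

MAX_BATCH_DAYS = 5  # batch size cap: one week (assumed)
WIP_LIMIT = 2       # aggressive WIP limit (assumed)

def can_pull(estimated_days: int, current_wip: int) -> bool:
    """Work moves only if it fits the batch and WIP is under the limit."""
    return estimated_days <= MAX_BATCH_DAYS and current_wip < WIP_LIMIT

# Work that fits the batch moves; oversized work is split or studied first.
assert can_pull(estimated_days=3, current_wip=1)
assert not can_pull(estimated_days=10, current_wip=0)
```

The point isn’t the code; it’s that the estimate answers exactly one question, fit, and everything else is handled by the WIP controls.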
Remember: watch the work, not the people.
Reducing and refining the batch sizes makes it possible, and easier, to establish standard work. Different work streams may be sorted accordingly, with different batch sizes and capacity allocations. Then it’s possible to continuously deliver business value in each batch, or better yet, even earlier.
This brings us closer to a stable pull-based work system. I’ll close this episode with a quote from Dr. Deming’s “The New Economics” that emphasizes what I’m going for:
The average and limits of variation are predictable with a high degree of belief, over the immediate future. Quality and quantity are predictable. Costs are predictable. Just-in-time begins to take on meaning.

All right that’s all for this batch. Visit https://SmallBatches.fm/96 for links on flow, estimates, and ways to support the show.
I hope to have you back again for the next episode. So until then, happy shipping!
