Iterations & Increments
You're listening to this podcast because you want to improve. You want to become a better developer, manager, or leader. This podcast is a great start, but now I have something to take you to the next level. You need the official Small Batches Way study guide!
The study guide charts a path to software delivery excellence from all the best books, ideas, and practices. The path has four parts: understanding TDD, understanding software architecture, understanding production operations, and understanding continuous delivery.
Get it for FREE at TheSmallBatchesWay.com.
Hello and welcome to Small Batches with me, Adam Hawkins. In each episode, I share a small batch of the theory and practices behind software delivery excellence.
Topics include DevOps, lean, continuous delivery, and conversations with industry leaders. Now, let's begin today's episode.
Dave Farley writes about working iteratively and incrementally in his book Modern Software Engineering. This is simple to say but difficult to comprehend until you see it at work.
My practice coaching engineers through new problems has solidified my thinking around this concept.
Now, I can share a practical example of working iteratively and incrementally. This way of working naturally keeps batches small and the engineers moving.
Iâll step through it in this episode.
The high-level objective was adding new telemetry to an existing service. This required three changes. The first two involved recording that an operation started and completed. The third required producing telemetry from the records of started and completed operations.
Iâll break this down into iterations and increments.
The mental model here is "Hello World" thinking over big bang releases. All changes must be deployable to production. It will take multiple releases to deliver a working system.
Here's an outline.
Increment one is shipping a walking skeleton. The walking skeleton only needs to support future iterations. It involved creating the code paths without the implementation.
Increment one contains two iterations. The first is making the record keeping work. The second is making the telemetry work.
Increment one begins with "Hello World" thinking. The first unknown was the interface between our code and the datastore. We knew there would be two calls: one to record that the operation started and another to record that it completed.
We used TDD to answer that question. We updated the tests for the two code paths to include expectations against a mock datastore. This was enough to learn what information we needed to pass along.
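To make that concrete, here is a minimal sketch of what such a test might look like, in Python with entirely hypothetical names (sync_accounts, record_started, record_completed). The real service and interface were different; the point is that the mock expectations force you to decide what crosses the boundary.

```python
# A minimal sketch of the TDD step, assuming a hypothetical
# OperationStore interface. The mock expectations teach us which
# information the datastore calls need to carry.
from unittest.mock import Mock


def sync_accounts(store):
    """The code path under test: record start, do the work, record completion."""
    store.record_started("sync_accounts")
    # ... the existing business logic runs here ...
    store.record_completed("sync_accounts")


def test_operation_is_recorded():
    store = Mock()

    sync_accounts(store)

    # These expectations define the interface before any real
    # implementation exists.
    store.record_started.assert_called_once_with("sync_accounts")
    store.record_completed.assert_called_once_with("sync_accounts")
```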
We also knew the code didn't have to do anything, just that we needed the calls in there. So, we left the datastore implementation as a no-op.
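In code, that no-op could be as small as this sketch (same hypothetical names as above):

```python
# A hypothetical no-op datastore: it satisfies the interface the
# tests defined, but deliberately does nothing yet.
class NoOpOperationStore:
    def record_started(self, operation: str) -> None:
        pass  # intentionally empty; a real implementation comes later

    def record_completed(self, operation: str) -> None:
        pass  # intentionally empty
```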
That "no-op" is the key. It is OK to deploy no-op changes to production to test your assumptions. Skipping the no-op would have expanded the batch size beyond the bounds of understanding. Remember, small batches.
That "no-op" iteration was deployed to production. Boom, now we know the code path is working. That finished the record-keeping end of the walking skeleton. The next iteration is the reporting end.
The uncertainty for the engineers was how to add the job and emit the telemetry. Put on the "Hello World" hat again. What's a hello world version of this? Create a job that emits a constant value for the telemetry. "Correct" behavior is not essential to the learning.
We created a stub method on the datastore that returned an empty list. That was sufficient to create a cron job that queried the datastore, then emitted a constant value of telemetry.
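Here is a sketch of that "Hello World" job, again with hypothetical names and assuming a generic metrics client supplied by the caller:

```python
# A hypothetical walking-skeleton cron job: query the stubbed store,
# then emit a constant so the metric exists in production.


class StubOperationStore:
    def history(self) -> list:
        return []  # stub: an empty list is enough for the skeleton


def emit_operation_telemetry(store, metrics):
    records = store.history()  # exercises the code path; result unused for now
    metrics.gauge("operations.completed", 1)  # constant value on purpose


# Wired into whatever scheduler runs the job, something like:
#   emit_operation_telemetry(StubOperationStore(), metrics_client)
```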
That commit was deployed to production. The engineers learned how to find the new telemetry and chart it. It was a straight line, as expected.
Increment one complete! That provided a running walking skeleton in production. The next increment was changing that constant line chart to something based on real data.
The next iteration only required changing two files: the datastore tests and the datastore class itself. Fire up an editor to get the red-green-refactor TDD loop going. This iteration completed when the functions for recording starts and completions and for retrieving history worked as expected.
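For illustration, here is the kind of implementation that loop might converge on. The in-memory list is my stand-in; the real class presumably wrote to a durable store.

```python
import time


# A hypothetical implementation arrived at through TDD. An in-memory
# list stands in for whatever durable storage the real service used.
class OperationStore:
    def __init__(self):
        self._records = []

    def record_started(self, operation: str) -> None:
        self._records.append((operation, "started", time.time()))

    def record_completed(self, operation: str) -> None:
        self._records.append((operation, "completed", time.time()))

    def history(self) -> list:
        return list(self._records)
```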
Tests pass? Great. Commit that change, deploy to production, and observe. Iteration complete. Now the code path is properly recording starts and completions. Finishing the telemetry was the next iteration.
This iteration only required changing two files: the job test and the job function. Tests were added for how telemetry was emitted when there were more than zero records.
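A sketch of that iteration, reusing the hypothetical names from earlier. The job function is updated so the emitted value derives from real records instead of the hard-coded constant:

```python
from unittest.mock import Mock


def emit_operation_telemetry(store, metrics):
    # Updated from the walking skeleton: the gauge value now comes
    # from the recorded history instead of a constant.
    records = store.history()
    completed = sum(1 for _, event, _ in records if event == "completed")
    metrics.gauge("operations.completed", completed)


def test_telemetry_reflects_recorded_operations():
    store = Mock()
    store.history.return_value = [
        ("sync_accounts", "started", 1.0),
        ("sync_accounts", "completed", 2.0),
        ("sync_accounts", "started", 3.0),  # started but never completed
    ]
    metrics = Mock()

    emit_operation_telemetry(store, metrics)

    # One completion out of two starts.
    metrics.gauge.assert_called_once_with("operations.completed", 1)
```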
Tests pass? Great. Commit the change, deploy to production, and observe. The engineers introduced a fault in an external system to test a failure mode. That changed the history, thus the telemetry. The constant line was no longer straight! That proved the record keeping and telemetry worked in concert.
Iteration complete. Increment complete. Fully functional system delivered to production.
Sure, there was more to do. The cron job didn't make any use of time-based filters and the implementation in the datastore wasn't optimal. That didn't matter. What matters is the team had iteratively and incrementally delivered a good-enough solution to production. Enhancements and changes will happen in future increments.
I have three takeaways for you.
One, adopt "Hello World" thinking: what is the simplest possible version I can ship? We did this initially by shipping a constant value for the telemetry.
Two, leverage "no-op" changes to step over nonessential decisions. We did this initially by shipping a no-op version of the datastore so we could progress forward.
The second point is only possible with software architecture guided by boundaries. The repository pattern enabled us to do this. We could define an interface for writing and reading data without worrying about how that happened. That allowed us to break consuming and implementing the interface into separate iterations, as sketched below.
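Here's that boundary as a Python abstract class, hypothetical names again. Consuming code depends only on this interface, which is what let the no-op, the stub, and the real implementation swap in across iterations.

```python
from abc import ABC, abstractmethod


# The repository-style seam: callers depend on this interface,
# never on how the data is actually stored.
class OperationRepository(ABC):
    @abstractmethod
    def record_started(self, operation: str) -> None: ...

    @abstractmethod
    def record_completed(self, operation: str) -> None: ...

    @abstractmethod
    def history(self) -> list: ...
```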
Three, optimize for learning. Before you start, ask yourself: what do I need to learn? Then identify the shortest path to learning it by ruthlessly removing the non-essentials.
Here's a variation on the last one from Dave Farley:
"The best way to start is to assume that what you know is wrong and what you think is probably wrong, and then figure out how you could find out how wrong it is."
All right, that's all for this batch. Head over to https://SmallBatches.fm/91 for links to recommended self-study and ways to support the show.
I hope to have you back again for the next episode. So until then, happy shipping!