Sometimes you can relate the decisions and leanings of software development to other events in your life. Recently I had a déjà vu moment while brainstorming a big-bang vs piecemeal approach to decoupling a complex monolith.
A story of a failed family holiday
Let me tell you a small story from my childhood. During our summer school holidays, my siblings and I were adamant about going to Shimla (a beautiful hill station in northern India) for our family holiday. After factoring in the time and budget for that year, my father put forward a clever proposal as a negotiation bid with four teenagers. He suggested that if we waited one more year, we could go to the beaches of Goa (in southern India). A beach is a rare opportunity for north Indians and something that would boost our little pride among friends. The situation came under control and the offer was happily accepted.
All year we bragged to our friends about our plans for the next year. When the next year came round, things were not so different, so another promise of a Goa holiday was made, but for the year after. That was the only option we had. Do I think my father was just making false promises to us? Not at all. It was equally a promise to himself, and he was equally disappointed when we could not make it in successive years. This highlights the very nature of human beings: we long to do things the perfect way, but that has to wait for a suitable time, which is seldom now.
What does this have to do with microservices?
Okay, enough of grumpy kids whinging about failed holiday plans. Let's see what on earth it has to do with microservices. In fact it is not just microservices; it applies equally to any refactoring or re-architecture effort. What I have learnt from those holiday plans is that it often makes sense to do smaller improvements on your legacy application instead of waiting for an uncertain eventual rewrite.
Big bang is not the answer in many situations:
I have witnessed many projects waiting forever for an overhaul or a rewrite from scratch. In the meantime, development teams keep adding to what we call technical debt, and every bit we add makes the big bang even more expensive. And the lucky(?) projects that do make it to a complete rewrite (for good or bad) often go massively over budget and schedule. By the time the rewrite takes its first breath, it is already due for the next one. Such ambitious rewrites also tend to squeeze resources out of new feature development in the meantime. We live in a time where speed to market is becoming critical to business success, so halting new feature development does exactly the opposite.
I am not saying big bang is always bad; it has its place. But you had better be 100% sure that you need it, given its dependence on large budgets and long time frames.
Our first microservice attempt:
On a complex application I worked on, which we proudly called our Majestic Monolith, our main technical priority was a road map to decouple it into smaller sub-systems, both to check its exponentially increasing complexity and to facilitate multiple development streams in parallel. After careful analysis, our team decided to avoid the big bang approach, mostly for the reasons mentioned above. Instead we started by identifying the seams of the various bounded contexts in our application. When the next feature lined up for development as the top business priority, we developed it as a home for the bounded context it belonged to.
In its first iteration this service was just a separate C# project with its own acceptance test suite. It was not completely isolated: it shared the same build pipeline and still depended on some highly coupled monolith components. This was intentional, as our team consciously decided to take one baby step at a time (or a small trip to Shimla every holiday).
In the next release, when a new feature required changes in the same bounded context, this new service became our default choice, and even code from the monolith started making its way into the new service, as shown in the picture below.
The team observed many benefits, especially on the testing side.
- With a clearly defined scope and a separate set of acceptance tests, the team had more confidence in the automated test suites, which meant savings on the regression cycle.
- The team was able to entertain changes and defect fixes in this area much more quickly.
- It was easy to trial new concepts, the main enabler being the contained scope of this new service. Developers in the team had been practicing TDD for a while, but this time we trialled BDD within a controlled scope.
- With the first chip at the big piece of rock, cracks (seams) emerged on the surface, validating our plans for a natural division of the system. This knowledge will play a crucial role in decisions down the track.
On the learning side, we realized we need to lift our game in a few aspects.
- More upfront planning to make the best of the decoupled system, such as confining failures so the rest of the system is protected from them, using patterns like Bulkhead and Circuit Breaker.
- Though coverage of the sub-system itself was excellent thanks to the automated acceptance test suite, the dev and test effort on integration could be improved with better use of tools and techniques such as Consumer Driven Contracts.
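To give a feel for the failure-confinement idea, here is a minimal circuit breaker sketch. Our service code was C#, but the pattern translates directly, so I have sketched it in Java; the class name and threshold are hypothetical, and a production system would use a hardened library rather than this. After a run of consecutive failures the breaker "opens" and short-circuits further calls, so a misbehaving dependency cannot drag the rest of the system down with it:

```java
import java.util.function.Supplier;

// Minimal circuit breaker sketch: after `threshold` consecutive
// failures the breaker opens and short-circuits calls, returning a
// fallback instead of hammering the failing dependency.
class CircuitBreaker {
    private final int threshold;
    private int failures = 0;
    private boolean open = false;

    CircuitBreaker(int threshold) { this.threshold = threshold; }

    <T> T call(Supplier<T> action, T fallback) {
        if (open) return fallback;      // fail fast, protect the caller
        try {
            T result = action.get();
            failures = 0;               // any success resets the count
            return result;
        } catch (RuntimeException e) {
            if (++failures >= threshold) open = true;
            return fallback;
        }
    }

    boolean isOpen() { return open; }
}
```

A real breaker would also half-open after a timeout to probe for recovery, but even this much keeps one failing sub-system from cascading.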
More than proving the need to decouple the application into smaller sub-systems, this exercise impressed us with the benefits to be reaped from the piecemeal approach. Besides the extraction of code, there were many other aspects of the service that needed to be separated from the big pack. We are now looking at extracting a bit at a time, gradually moving code into smaller sub-systems and eventually strangling the monolith, in the manner Martin Fowler describes as StranglerApplication.
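The strangling itself usually starts at a routing facade in front of the monolith: requests for bounded contexts that have been extracted go to the new service, everything else falls through to the monolith untouched. A minimal sketch (again in Java for illustration; the path prefixes and service names are hypothetical, and in practice this sits in a reverse proxy or API gateway rather than application code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Strangler-style routing facade: path prefixes that have been
// extracted are sent to their new service; all other requests still
// reach the monolith, so extraction can proceed one context at a time.
class StranglerRouter {
    private final Map<String, String> extracted = new LinkedHashMap<>();

    void extract(String pathPrefix, String serviceName) {
        extracted.put(pathPrefix, serviceName);
    }

    String route(String path) {
        for (Map.Entry<String, String> e : extracted.entrySet()) {
            if (path.startsWith(e.getKey())) return e.getValue();
        }
        return "monolith";  // default: untouched functionality stays put
    }
}
```

Each release can call `extract(...)` for one more bounded context, which is exactly the one-baby-step-at-a-time rhythm described above.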