Recently I have been encountering more and more cases of a misunderstood best practice being misapplied to justify a bad decision. Part of that uptick surely comes from my own growing experience and knowledge, but part comes from the increasing diversity of knowledge and experience among developers in the field. One practice has stood out as especially damaging to the API ecosystem: the agile tenet of simplicity. This tenet advises the practitioner to add only the functionality and complexity required by the current known requirements. Step back and think about it, and this seems like an obvious and harmless practice to follow. How could creating a design with the lowest possible complexity, or cyclomatic complexity for the computer scientist, ever be a bad decision?
We will cross that bridge when we get to it!
I’ve been hearing this argument consistently with regard to adding functional or design complexity to new API development. The practice of deferring complexity until necessary is generally sound, but it fails utterly when applied to API design. The reason is quite simple: there is only one chance to design an API, ever. But wait, you cry, we can version the API! I’ve previously addressed why versioning is a poor choice; if you pursue it anyway, that ill-advised use of versioning is a tacit admission of this very fact. If there is only one opportunity to define the design of an API, you simply cannot make it any less complex than it will eventually need to be to satisfy the end goals of the API as it evolves.
When best practices go wrong!
The problem stems from a fundamental misunderstanding of best practices as hard and fast rules rather than rules of thumb. Advocates and evangelists loudly tout the benefits of their process, but often fail to acknowledge any scenario in which their best practice simply isn’t. The argument consistently boils down to: this solution is too complex for now; we will go back and fix it later when we have time. But there are a few subtle built-in fallacies that become this approach's Achilles' heel.
The first is the belief that, having taken on this technical debt, the price to repay it will not grow over time, or at worst will grow linearly. There are certainly situations where this might be the case, but they are the exception, not the rule. The term technical debt was coined precisely because of the tendency of the debt to grow like compounding interest, or worse. Worse still, it is very common that the weight of the legacy system, once released, prevents you from ever returning to address the problem at all.
The second is the naive assumption that the future will be less busy, that the team will maintain the desire to fix the flaws, and that its willingness to expend capital on the fix will grow. Case study after case study has shown this to be overly optimistic and simply untrue. As the cost of fixing an implementation or design flaw escalates, the cost-benefit tradeoff becomes ever more biased in favor of not touching ‘what isn’t broken’.
At the end of the day, these are simply the lies that designers, developers, and stakeholders tell themselves and others to justify an increasingly expensive, sub-optimal deliverable.
Even assuming your team is stellar and defies the odds by prioritizing the rework, following through still depends entirely on having the opportunity, and control of all dependencies, to perform the work seamlessly. If there is even a single client outside of your team’s immediate control, your ability to complete this work quickly is severely degraded.
Agile: The buzzwordy catalyst and amplifier
There is nothing earth-shattering here, but I haven’t even touched on the whole story. In the same paper that introduced ‘cyclomatic complexity’, Thomas McCabe also introduced the concept of essential complexity: the complexity innately required for the program to accomplish what it intends. Under the guise of the tenet of simplicity, the essential complexity is often left unsatisfied, because the agile methodology places a burden of proof on additional complexity that is unforgiving and ultimately unsatisfiable. In order to reach the known essential complexity of a program, you first have to prove the added complexity is actually essential. It’s a classic chicken-or-egg problem with no answer. Most often, the process will direct you into failing to meet essential requirements through a failure to define, justify, or evaluate the essentiality of the added complexity.
The business decision, and the business imperative, to do only the work required for now is deaf to technical concerns beyond the short term, regardless of the costs or savings. This isn’t to say developers should always control these decisions, but because the process is heavily biased against technical concerns, it is all the more important to communicate technical pitfalls and their costs beyond the technical audience. The adoption of agile practices has actually increased the importance of a highly knowledgeable technical liaison who can push back when shortsighted goals promise a quick positive payout saddled with negative longer-term value. This is where it all comes back to the misunderstanding of best practices.
These teams are increasingly led by practitioners who do not truly understand the business purpose behind the best practices. Rigid adherence to, and often weaponization of, ‘best practice’ in these design discussions has only served to hide the inevitable costs of poor design until a later date, with the debt compounding relentlessly and unimpeded.
You can’t put design off, so don’t!
I started this piece by saying you cannot iterate your way out of interaction design, so I want to be very clear about which parts of an API design can and cannot be iterated. The design of an API is actually composed of two relatively straightforward and separate concerns, which I will call the interaction design and the semantic design. The interaction design is the complete package of how a client interacts with your service: security, protocol concerns, message responses, and required handling behavior that cuts across multiple resources, among many other things. The semantic design encompasses everything else, and it can and should be created and enhanced over time as domain requirements change.
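To make the split concrete, here is a minimal sketch (all names hypothetical, not from any real service) contrasting the two concerns: a cross-cutting error envelope is interaction design, frozen once clients depend on it, while the fields of an individual resource are semantic design, free to grow with the domain.

```python
# Hypothetical sketch of the interaction/semantic split.

def error_response(code, message):
    # Interaction design: every error, for every resource, uses this one
    # shape. Changing it later breaks every consumer's error handling.
    return {"error": {"code": code, "message": message}}

def widget_resource(widget):
    # Semantic design: the fields of this resource can be added to over
    # time as domain requirements change, without touching the contract
    # for how errors, security, or protocol concerns work.
    return {"id": widget["id"], "name": widget["name"]}

print(error_response("not_found", "No such widget"))
```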
Knowing the interaction design of the API is permanent once completed, it is important not only to get it right, but to ensure the design defines the capability to expand specific functionality that will need to change over time: for example, the use of a new authentication scheme or filtering strategy.
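One way to build in that expansion point is to make the authentication scheme an open dispatch rather than a hard-coded check. The sketch below assumes a simple registry keyed on the `Authorization` scheme name; the handler names and token values are illustrative only.

```python
# Hypothetical sketch: the interaction contract fixes only the shape
# "Authorization: <scheme> <credentials>", while the set of supported
# schemes remains open for later expansion.

AUTH_HANDLERS = {}

def register_scheme(name):
    """Register a verifier for an Authorization scheme (e.g. 'Bearer')."""
    def wrap(fn):
        AUTH_HANDLERS[name.lower()] = fn
        return fn
    return wrap

@register_scheme("Bearer")
def verify_bearer(credentials):
    # Real token validation would go here; a literal check stands in.
    return credentials == "valid-token"

def authenticate(authorization_header):
    """Parse 'Scheme credentials' and dispatch to the registered handler."""
    scheme, _, credentials = authorization_header.partition(" ")
    handler = AUTH_HANDLERS.get(scheme.lower())
    if handler is None:
        # Unknown scheme: rejected today, but the contract already
        # accommodates registering it tomorrow.
        return False
    return handler(credentials)

print(authenticate("Bearer valid-token"))   # True
print(authenticate("Basic dXNlcjpwYXNz"))   # False until a Basic handler exists
```

Adding a `Basic` or `Digest` handler later is then a purely additive change: existing clients never see a difference in the interaction design.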
It is impossible to list every requirement that will fall under the interaction design of your API, but here are some questions I’ve used during the initial design period to exclude the design and implementation of features that can wait:
- Does this feature change the way a consumer interacts with the API?
- Does this feature change the flow of an interaction with the API?
- Could later introduction of this feature break consumer clients?
- Could later introduction of this feature break cached resource resolution?
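The third question is the one I see tripping teams up most often, so here is a small sketch of it (endpoint and field names hypothetical). Returning a bare JSON array is the "simplest thing" today, but bolting pagination onto it later changes the response shape and breaks every consumer; wrapping results in an envelope from day one reserves that room at near-zero cost.

```python
# Hypothetical sketch: why "could later introduction break clients?" matters.

def list_widgets_bare(widgets):
    # The bare array: minimal today, but there is nowhere to put paging
    # metadata later without changing the top-level response shape.
    return widgets

def list_widgets_enveloped(widgets, offset=0, limit=50):
    # The envelope: paging fields ship now (or can be added later) without
    # breaking parsers that only ever read `items`.
    page = widgets[offset:offset + limit]
    return {"items": page, "offset": offset, "limit": limit,
            "total": len(widgets)}

widgets = [{"id": i} for i in range(3)]
print(list_widgets_enveloped(widgets))
```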
With a rigorous initial design session built around these questions, you should be able to determine the essential complexity of your API's interaction design with much higher accuracy, and avoid the cost increases and consumer adoption pain that otherwise come with adding new value to your services in the future.