How to (not) give your first conference talks.

Full disclosure and TL;DR

My talks at RESTFest were not the fairy tale ending of a Cinderella story.  They were really, really bad.  Skip to the bottom if you would like to watch them without reading the results of my retrospective process.

“Luck is what happens when preparation meets opportunity.” – Seneca

If we go by that definition, I was not lucky at RESTFest this year.  My preparation game was seriously lacking.  I had the opportunity, but through hubris, indecision, and a touch of nerves I spoke dispassionately and poorly about topics which deeply engage me.

What happened?

During my preparation for the conference I rode the fence, indecisive about the topics I wanted to present.  The conference added a great workshop by Shelby Switzer on hypermedia APIs and clients, which made me feel like my hypermedia talk would be largely redundant.  In short, I squandered the preparation time by talking myself out of presenting a topic I felt passionate about, without ever being fully convinced.  When I arrived, the environment was far more welcoming and supportive than I had dared anticipate, and I was convinced to develop and present a long-form talk despite conventional wisdom against doing so.  I wasn’t prepared for speaking in this environment, and my lack of preparation came from an entirely unexpected (by me) vector.

This is where the hubris comes in.  For most of my school and professional life I’ve found success in even the most important speaking scenarios by knowing the material well, compiling a light list of touch points, and allowing myself to flow freely through the material.  I was sure, despite all the evidence I read to the contrary, that I would have no difficulty ‘shooting from the hip’ in this way.  Ha ha ha… nope.

Speaking at a conference to an unknown audience is very different from any other form of public speaking I’ve encountered: it requires you as a presenter to have strong confidence in your delivery structure in the absence of a rich interpersonal feedback loop.  I was unwittingly relying on an undeveloped and untested muscle for reading the audience and adapting my tactics to them.  I assumed my confidence in my mastery of the material would be enough to power my presentation with zest and inspire everyone to immediately pick up the hAPI banner and charge forth.  Ha ha ha… nope.

This perfect vision was about as far away from reality as possible, while still delivering any talk at all.

What the audience got instead was a couple of tone-deaf lectures, presented with obvious discomfort and lacking any semblance of passion or inspirational energy.

To the attendees of RESTFest 2017:  Please accept my deepest apologies for putting you through such a difficult talk.  Please also accept my sincerest gratitude for the benefit of the doubt you gave me in allowing me to finish the experience, and for the tremendous support you all showed after my talks.  I did my material a disservice and gave an uncomfortable talk, yet you still welcomed me with honest, constructive, and yet reserved feedback.

Seriously, you guys rock.

Sorry Seneca, I’m not buying it.

I’m choosing to reject Seneca’s definition in this case, because it doesn’t afford me any obvious paths forward.  I will obviously have future talks, but I had this opportunity and failed to truly capitalize on it.  Instead I’m looking at this as a win, not for my pride, but because I can learn from it and use it as a base to move forward.

Being such an introspective person, I’ve been repeatedly beating myself up over this failure since I walked away from the podium.  It’s only gotten worse since I have seen proof that my initial analysis was accurate.  As little as one year ago I lacked any motivation whatsoever to speak at conferences, yet this September I found myself at the very same conference which formed the foundation of my understanding of hAPI architecture.  There, in front of my proxy teachers, colleagues at large, and perhaps someday even my peers, I stood and presented my ideas.

Thanks to Ronnie Mitra for offhandedly diagnosing my condition as “Imposter Syndrome” to explain the sudden nerves which nearly froze me in place.  Feeling nervous about voicing my opinions or ideas was a very new experience for me.  The monotone, lecturing way I spoke stood in stark contrast to the passion (bordering on fervor, to put a positive spin on it) with which I usually discuss anything I care deeply about.  Yet with the support of my family, this community, and my commitment to these goals, I’m not running away.  I have more talks in the near future, and I hope there are yet more still to come.

Why?

Despite my poor performance at RESTFest, these topics are things I’ve become very passionate about; they are all connected and worthy of my time.  Working to refocus the tech industry on providing value to people; making it easier for developers to help people; expanding the definition of developer to include more people; recruiting more people to this cause of helping people: the connecting thread is a deep-seated calling to help people which I’ve uncovered in the last year and a half.  I refuse to look at my poor performance as a failure, because that would be an anchor with a strong pull to stop or change course.  I’m not standing at an inflection point; I’m standing at a fork in the road between the hard path toward my goals and the easy path toward some consolation destination.  This failure of mine is actually an opportunity to prove my resolve and grow.

“If not now, when? If not you, who?” – Hillel the Elder

I’m choosing to view this as a win because a lot more somebodies have to do it, and having seen the opportunities I can’t willfully abandon them.  I was lucky at RESTFest; Seneca’s definition is not the only one.  I may have struck out, but at least I got the chance to bat in the first place.  Obviously this is a rocky start down this path, but I’m choosing to own it: it’s my rocky start.

I usually refrain from discussing things as intimate as this, since my thinking sometimes comes off as alternately grandiose or convoluted, but the recordings are available and I can’t change the past.  I can only control how I respond to it and what I do next.  I’m using this raw disclosure to provide some excuse-free context to the videos and a guiding light to keep myself on course.  I’m claiming responsibility for the lack of preparation and defining a path to grow into this speaking world I find myself in.  Sure, I haven’t given myself easy or short-term goals, but I now have a way to objectively track my progress and observe any deviation on the long path to my goals.

Epilogue

If for some reason you have read this far and you still have the desire to view my talks, I’ve included the links below.

Last warning – As of the writing of this post, I’ve only been able to suffer through the first short talk and about 9 minutes of the second.

Stop burning your customers and users.

Human Conversation Services.

Don’t iterate the interaction design of your API.

Recently I have been encountering more and more cases where a misunderstood best practice is misapplied to justify a bad decision.  I’ve seen a general uptick as I’ve gained experience, partly due to my increased knowledge, but also due to the increasing diversity of knowledge and experience among developers in the field.  One practice in particular has stood out as particularly damaging to the API ecosystem: the agile tenet of simplicity.  This tenet advises the practitioner to add only the functionality and complexity required for the currently known requirements.  If one were to step back and think about it, this would seem like an obvious and harmless practice to follow.  How could creating a design with the lowest possible complexity (cyclomatic complexity, for the computer scientist) ever be a bad decision?

We will cross that bridge when we get to it!

I’ve been consistently hearing this argument in regard to adding functional or design complexity to new API development.  The practice of deferring complexity until necessary is generally sound, but fails utterly when applied to API design.  The reason is quite simple: there is only one chance to design an API, ever.  “But wait,” you cry, “we can version the API!”  I’ve previously addressed why versioning is a poor choice; nevertheless, if you pursue that option, the ill-advised use of versioning is a tacit admission of this fact.  If there is only one opportunity to define the design of an API, you simply cannot make it any less complex than it will need to be to satisfy the eventual end goals of the API as it evolves.

When best practices go wrong!

The problem comes from a fundamental misunderstanding of what best practices are: rules of thumb, not hard and fast rules.  Advocates and evangelists loudly tout the benefits of their process, but often fail to acknowledge the existence of any scenario where their best practice simply isn’t best.  The argument consistently boils down to: this solution is too complex for now; we will go back and fix it later when we have time.  But there are a few subtle built-in fallacies which become this approach’s Achilles’ heel.
The first is the belief that the price to repay this technical debt will not grow over time, or at worst will grow linearly.  There are certainly situations where this might be the case, but they are the exception, not the rule.  The term technical debt was coined because of the tendency for the debt to grow like compounding interest, or worse.  Worse still, it is very common that the weight of the legacy system, once released, will actually prevent you from ever returning to address the problem at all.
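The compounding claim is easy to see with a little arithmetic.  A minimal sketch, where the 15% growth per release cycle is a made-up rate purely for illustration:

```python
# Toy model of deferred technical debt: the repair cost compounds every
# release cycle the fix is postponed.  The growth rate is an illustrative
# assumption, not a measured figure.
def deferred_cost(initial_hours: float, growth_rate: float, cycles: int) -> float:
    """Estimated hours to repay the debt after `cycles` postponements."""
    return initial_hours * (1 + growth_rate) ** cycles

# A 10-hour fix deferred for 10 cycles at 15% balloons to roughly 40 hours.
```

Even under this gentle model the debt quadruples in ten cycles; a linear-growth assumption would have predicted 25 hours.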
The second is the naive assumption that the future will be less busy, that the team will maintain its desire to fix the flaws, and that its fortitude to expend capital to meet the requirements will grow.  Case study after case study has shown this to be overly optimistic and simply not true.  As the cost to fix an implementation or design flaw escalates, the cost-benefit tradeoff of leaving the code in place becomes ever more biased in favor of not touching ‘what isn’t broken’.
At the end of the day these are simply the lies told by designers, developers, and stakeholders to themselves and others to justify an increasingly expensive, sub-optimal deliverable.
Even assuming your team is stellar and defies the odds by prioritizing the rework, following through is still completely dependent upon having the opportunity, and control of all dependencies, to seamlessly perform the work.  If there is even a single client outside of your team’s immediate control, your ability to complete this work quickly is severely degraded.

Agile: The buzzwordy catalyst and amplifier

There is nothing earth-shattering here, but I haven’t even touched on the whole story.  In the same paper that introduced ‘cyclomatic complexity’, Thomas McCabe also introduced the concept of essential complexity: the complexity innately required for the program to accomplish what it intends.  Under the guise of the tenet of simplicity, the essential complexity is often left unsatisfied, because the agile methodology places a burden of proof on additional complexity which is unforgiving and ultimately unsatisfiable.  In order to reach the known essential complexity of a program, you first have to prove the added complexity is actually essential.  It’s a classic chicken-or-egg problem with no answer.  Ultimately this most often results in the process directing you to fail to meet essential requirements, through a failure to define, justify, or evaluate the essentiality of the added complexity.
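For readers unfamiliar with the terms, cyclomatic complexity counts the independent paths through a piece of code, roughly decision points plus one.  A small sketch, with pricing rules invented purely for illustration:

```python
def shipping_cost(weight_kg: float, express: bool, international: bool) -> float:
    """Three independent decision points give a cyclomatic complexity of 4
    (decisions + 1).  Removing a branch lowers the number, but if the
    business genuinely needs all three rules, 4 is the essential
    complexity: you cannot simplify below it and still do the job."""
    cost = 5.0                 # base rate
    if weight_kg > 10:         # decision 1: heavy-parcel surcharge
        cost += 2.0
    if express:                # decision 2: express multiplier
        cost *= 1.5
    if international:          # decision 3: customs flat fee
        cost += 10.0
    return cost
```

The tenet of simplicity would ship this with only the branches currently demanded; the point above is that an API’s interaction design doesn’t get a second chance to grow the missing ones.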
The business decision, and business imperative, to do only the required work for now is deaf to technical concerns outside of the short term, regardless of the costs or savings.  This isn’t to say developers should always be in control of these decisions, but because the process is heavily biased against technical concerns, it is very important to communicate technical pitfalls and their costs beyond the technical audience.  The adoption of agile practices has actually increased the importance of a highly knowledgeable technical liaison who can push back when shortsighted goals will provide a quick positive payout saddled with negative longer-term value.  This is where it all comes back to the misunderstanding of best practices.
These teams are more often being led by practitioners who don’t truly understand the business purpose of the best practices.  Rigid adherence to, and often weaponization of, ‘best practice’ in these design discussions has only served to hide the inevitable costs of poor design until a later date, with the debt relentlessly compounding unimpeded.

You can’t put design off, so don’t!

I started this off by saying you can’t iterate the interaction design, so I want to be very clear about which parts of an API design can and cannot be iterated.  The design of an API is actually composed of two relatively straightforward, separate concerns, which I will call the interaction design and the semantic design.  The interaction design is the complete package of the way a client will interact with your service.  It includes security, protocol concerns, message responses, and required handling behavior which cuts across multiple resources, among many other things.  The semantic design encompasses everything else, and it can and should be created and enhanced over time as domain requirements change.
Knowing the interaction design of the API is permanent once completed, it’s important not only to get it right, but to ensure the design defines the capability for expansion of specific functionality which will need to change over time, for example the use of a new authentication scheme or filtering strategy.
It is impossible to list every requirement which will fall under the interaction design of your API, but here are some questions I’ve used during the initial design period to exclude the design and implementation of features which can wait:
  •  Does this feature change the way a consumer interacts with the API?
  •  Does this feature change the flow of an interaction with the API?
  •  Could later introduction of this feature break consumer clients?
  •  Could later introduction of this feature break cached resource resolution?
With a rigorous initial design session utilizing these questions, you should be able to determine the essential complexity of your API’s interaction design with much higher accuracy, and prevent the cost increases and consumer adoption pain of adding new value to your services in the future.
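The question of whether a later addition could break consumer clients is easiest to see in code.  A minimal sketch, with invented payloads, contrasting a client bound to the exact response shape with one that reads only what it needs:

```python
import json

# A client bound to the exact shape of today's response breaks the moment
# a field is added...
def brittle_total(payload: str) -> float:
    price, qty = json.loads(payload).values()   # positional unpack of ALL fields
    return price * qty

# ...while a client that reads only what it needs tolerates additive change.
def tolerant_total(payload: str) -> float:
    doc = json.loads(payload)
    return doc["price"] * doc["quantity"]

ORIGINAL = '{"price": 4.0, "quantity": 3}'
EXPANDED = '{"price": 4.0, "quantity": 3, "currency": "USD"}'  # one field added
```

Calling `brittle_total(EXPANDED)` raises a ValueError, while `tolerant_total` returns the same total for both payloads: the addition was non-breaking only for the consumer that didn’t over-bind.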

Unleashing generic hypermedia API clients

A truly RESTful API has been called many things: hypermedia web API, ‘the rest of REST’, HATEOAS (the world’s worst acronym), or perhaps the newest, hAPI.  Regardless of what you call it, this concept has long been proclaimed to solve nearly all of your most difficult design problems when building a web service interface.  There is plenty of evidence to support the claims made by hypermedia evangelists over the years; however, one glaring omission is likely the cause of the slow adoption of hypermedia in RESTful services: how do you consume this service, and what do all of these link relations mean?  Building an effective hypermedia client is a more complex task than consuming a CRUD API, and an extremely difficult question to answer has been: when do the benefits outweigh the cost of complexity?  Once past this hurdle, how does a consumer know how to interact with the service?

It is no wonder adoption of a superior design is so slow when a more complex design leads to more complex clients.  The primary selling points of this style are longevity, scalability, and flexibility; however, the benefit of these traits is seen over a long period of time, making the complexity a difficult tradeoff to evaluate at the start.

We are all very familiar with good, seemingly simple hypermedia clients.  In fact, you are likely using your favorite one right now to read this.  If we know so much about building good hypermedia clients, why are hypermedia APIs still not the de facto standard?

The key to enabling adoption of hypermedia APIs is very simple: make them easier to consume.  The Open API Initiative, through the Swagger specification, has demonstrated the power and appeal of standard formats to enable rapid adoption of best practices in accelerated development cycles.  I often call out the shortcomings of the specification, but it is critical to understand the cause of its successful proliferation across the web at large.  The trick is to apply the lessons learned from this success to driving the adoption of semantic hypermedia.  To make a hypermedia API easier to consume, you create generic clients which encapsulate the complexity by establishing and adhering to a strict HTTP behavior profile.  Then you subscribe to or publish a semantic profile of the application, adding domain boundaries to the messages and actions.  Finally, you allow clients to tailor their hypermedia through requested goals of supported interaction chains.

Often hypermedia is used to augment CRUD services described by binding formats like OAS.  In this scenario it simply can’t be relied on to drive the interaction with the service, as it has no guaranteed, bounded range of responses.  Establishing a range for the hypermedia domain semantics is critical to transitioning the role of hypermedia from augmentation to the vehicle for application state and resource capabilities.

The takeaway here is simple: if you want the robust flexibility offered by hypermedia APIs, then your focus should be on enabling strong generic hypermedia clients.  To build strong generic hypermedia clients, you need to adhere to strict service behavioral profiles to isolate the domain from the underlying protocol behavior.
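To make the generic-client idea concrete, here is a minimal sketch.  The in-memory `SERVICE` table stands in for HTTP responses, and the paths and link relations are invented; the point is that the client navigates by relation name, never by a constructed URL:

```python
# Stand-in for a hypermedia service: each representation carries its data
# and a map of link relations to the URLs they resolve to.  Paths and rels
# here are illustrative, not a real API.
SERVICE = {
    "/":          {"data": {}, "links": {"orders": "/orders"}},
    "/orders":    {"data": {}, "links": {"latest": "/orders/42"}},
    "/orders/42": {"data": {"status": "shipped"}, "links": {}},
}

def follow(entry_point: str, *rels: str) -> dict:
    """A generic client: start at the entry point and walk link relations.
    If the service relocates /orders/42 tomorrow, this code does not change,
    because the only knowledge it holds is the vocabulary of relations."""
    doc = SERVICE[entry_point]
    for rel in rels:
        doc = SERVICE[doc["links"][rel]]
    return doc["data"]
```

A consumer asks `follow("/", "orders", "latest")` and gets the order data; the domain vocabulary (`orders`, `latest`) is the contract, while the protocol-level behavior stays inside the generic client.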

Hypermedia APIs: Use extensive content negotiation

In my last post I touched on how important it is to insulate consumers from the immediacy of a breaking change.  Nothing you can do as a designer will allow you to create, on the first try, a perfect API which will never require change.  What you can and should do is reduce the likelihood of a breaking change as much as is feasible, and then allow consumers to gradually adopt changes on their own schedule.  In this post I’ll discuss the need for extensive content negotiation.

It has been noted, in the comments on these very guidelines no less, that there is a striking similarity between the 9th guideline and this, the 11th, as both rely on or discuss content negotiation.  Much like the first guideline, to embrace the HTTP protocol, the benefits, constraints, and reasons for content negotiation are sufficiently broad to merit multiple discussions to be properly addressed.  It is imperative that a designer avoid hypermedia formats which prescribe URL patterning, because this can distract proper attention from resource representation and affordance design.  The goal of this discussion is to address the rest of the content negotiation constraints, to prepare your designs for interaction with real traffic volume and diverse consumer demands.

As the API designer, your job is to provide the simplest service you possibly can to your consumers.  CRUD APIs described by OAS (Swagger) often struggle with complex designs where domain functionality doesn’t map well to four methods.  Other solutions like GraphQL provide excellent results for captive audiences and internal services, but for external consumers often result in the same poor consumer experience.  Quite simply, consuming the service correctly requires too much knowledge about how the service is built.  So how do you avoid making these same mistakes with hypermedia APIs?  You allow your consumers to interact with your service just about any way they want.  The fact is you will never be able to guess all the particular ways a consumer will want to interact with your service or tailor their requests, so don’t try.  The solution is to build your service as generically as possible and allow the consumers to choose the interaction mediums that will be used.

What all should be negotiated?  The short answer is everything you can reasonably support which adds to the consumer experience.  A longer, non-exhaustive list of potential negotiation points:

  • Hypermedia Format (Content-Type)
  • Filter Strategy
  • Query Strategy
  • Pagination Strategy
  • Cache Control Strategy
  • Goals
  • Vocabulary
  • Sparse Fieldsets
  • Representation or Document Shaping

It’s a long list; does your service really need to support all of those negotiation points?  It should aim to support all of these and more, if they are reasonable and feasible for your service domain.  Yes, this adds a lot of complexity, but it’s crucial to focus on the consumer experience and the long-term payoff of creating a service which will happily satisfy consumer needs for years to come.

These negotiation points are all critical to supporting a wide breadth of consumers, but they are also central to providing service flexibility over time.  A service designed from the beginning to be generic, and to support a wide range of options for many different properties, already has the capability to support one more option in any particular property.  When a new hypermedia format comes out, or a new standard filter strategy, your service already provides multiple options for these properties, and supporting the change is nothing more than plugging in the appropriate functionality.  You can’t know what formats will be wanted in 5 years, but your service has been designed to account for changes over time, and the required upkeep is vastly lower than any alternative presented to date.

Design your API to negotiate with your consumers as much as possible, and you will have an enduring service your consumers will love to use for years.

Hypermedia APIs: Use flexible non-breaking design

In my last post in this series of hypermedia API guidelines, I discussed the need to decouple the design and implementation details of your API from the constraints of any particular format.  You likely aren’t designing your own format, but it is a good decision to avoid formats which require URL patterns, as they can cause confusion and increase the odds that a consumer will make calls directly to URLs.  In this post I’d like to go through the follow-up guideline to ‘don’t version anything’, which will fill in the remaining gaps in dealing with resource and representation change.  To support long-term API flexibility, your design should leverage a strict non-breaking change policy, with a managed, long-lived deprecation process.

As time passes, an API’s design can lose relevance to the piece of reality it is built to model.  Processes change, properties change, and priorities change, so it is crucial to maximize flexibility for change over time.  When using hypermedia APIs, it is important to understand the three types of changes you can make to your profile, and the appropriate way to manage each kind.  Optional changes modify the representations and their actions without any effect on current consumers and their bindings.  Required changes make additions to the profile which can be gracefully handled by the generic client.  Breaking changes, such as removing items from the profile, require a client update to maintain compatibility.

In traditional statically bound API styles, handling even optional changes would likely lead directly to consumer client changes, as the representations of resources are strongly coupled to the consumer.  However, a generic hypermedia client is intentionally dumb when it comes to the properties of resources, so the addition of any unknown property simply behaves in the default manner.

The story of required changes is much the same as that of optional changes.  The highly coupled service-consumer relationship requires constant maintenance and attention to continue to function.  A hypermedia API consumer client manages required changes through standard approaches: generic fields which are required can be flagged to the consumer as invalid without requiring any strong bindings in the consumer client.

In this way, the two changes which represent any difficulty for hypermedia APIs are the required and breaking changes.  In the case of a required change, a previously valid representation is no longer valid because a new property has been added, or a new action has been added to a representation which was not previously expected and has no client binding.  A breaking change is a representation or action being removed from the profile which has previously been required or bound by consumers.  With these definitions, it’s clear the real difficulty is in addressing the breaking changes.  The solution to breaking changes can again be found in the very first guideline I discussed: use the HTTP protocol to advertise change.

Previously in these discussions I have noted how a hypermedia API will manage the range of bounded contexts available to consumers.  Diving into this concept a little further, the primary benefit of supporting a range of bounded contexts is to allow transparent, incremental versioning and consumer preference in resource representations.  Many leading tech organizations and methodologies stress the importance of versioning the API, unaware or uncaring that doing so sows the seeds of future breaking changes.  By tracking the changes to your representations in the supported vocabularies, your service is able to leverage the HTTP 3xx response code family to inform consumers that change is imminent while still respecting their interaction in the vocabulary they know.  This allows consumers to upgrade gracefully on their own schedule, and greatly reduces the occurrence of high-stress deadlines caused by your service’s evolution.  Through nuanced activity tracking and API orchestration, you will have an accurate view of exactly when particular representations or portions of the API are no longer in use, allowing you to confidently sunset old functionality knowing it will not likely result in a rude awakening for one of your extremely valuable customers.
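A sketch of what leveraging the 3xx family might look like at the edge of a service.  The paths, replacement mapping, and sunset date below are illustrative assumptions; the Sunset header itself is standardized in RFC 8594:

```python
# Deprecated resources answer with a permanent redirect to their replacement
# plus a Sunset date, so existing consumers keep working today and can
# migrate on their own schedule.  All concrete values are made up.
RETIRED = {"/orders/by-legacy-id": "/orders"}   # old location -> replacement

def handle(path: str) -> dict:
    """Minimal response builder: advertise change instead of breaking."""
    if path in RETIRED:
        return {
            "status": 308,  # Permanent Redirect: same request, new location
            "headers": {
                "Location": RETIRED[path],
                "Sunset": "Sat, 01 Jun 2030 00:00:00 GMT",  # removal date
            },
        }
    return {"status": 200, "headers": {}}
```

A generic client that honors redirects follows the move automatically, while the Sunset header gives everyone else a dated warning instead of a 3am surprise.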

By leveraging the protocol in the standard way, we can keep breaking changes from immediately impacting consumers and demanding their full attention.  As I’ve mentioned elsewhere, creating a good consumer experience is critical to the success of your API, and a great way to keep consumers happy with your service is to not break their clients at 3am on a Saturday night.

 

Your API is your product: even if you have a UI

I’ve recently discussed the problems of nearsightedness in API design through a comparison of an OAS (Swagger) API to a hypermedia API.  Those discussions were very technical, targeted at an audience of API designers, and largely avoided business and economic ramifications.  In this post I’d like to take a step back, remove my technical hat, and talk about the business, economic, and human benefits of supporting and maturing the proliferation of hypermedia APIs.  I’ll go into some differences between these two options from a business perspective to demonstrate the massive value of hypermedia APIs.  I will end with a little on a related topic: the undue influence on the direction of technology from the venture-capital-backed world of hyper growth.

Your API is your product.

Let’s talk about the elephant in the room: the extremely common misconception that an API is nothing more than your UI app’s gateway to data.  The term REST, as the industry understands and uses it, reduces the value proposition of your API development to little more than a gateway between your product and how you store your product’s data.  The API may provide some functionality, enhance performance, and shape the data in a way which is beneficial to the UI app development team, but it provides no value in itself.  Not only does this go directly against the path of progress toward the API economy, it wastes the opportunity to save time and money on redundant parallel effort.  You have driven initial costs up with duplicated development effort, and maintenance costs also go up due to similar bugs in many places.  Perhaps the most critical effect of this mistake is that you have almost certainly increased the time to market for your entire solution.

Your customer is everyone outside your API.

It may be difficult to look at your API as anything more than a means to provide your real products with the data they need to create value, but this thinking is guaranteed to hit your bottom line in a big way.  The API is your product, and anyone interacting with it is your customer.  The internal team developing your new mobile app?  Your customer.  The group responsible for maintaining your web application?  Your customer.  The outside parties looking to utilize your service without having to use your mobile and web apps?  Your customer.  Each of these groups shares the exact same goal: they want to utilize the functionality your API provides as quickly and easily as possible.  If your own employees and your customers all share the same goal, you are missing a tremendous opportunity to capture efficiency gains by making your API easier to use.  Anyone using your API, regardless of their affiliation with your company, wants to learn as little as possible about your API to meet their goals.

Solutions like OAS offer short-term benefits to your product.  Developers can quickly get up and running, there is ample documentation explaining the ins and outs of your system, and users can leverage your products very quickly.  The catch is that these solutions perform extremely poorly in the long term.  Over time your customers will need to constantly maintain their code, continue to read and understand your business models to meet their goals, and watch their quick solutions turn into a nightmare of legacy code to fix.  The ample documentation you thought was such a victory becomes a high barrier to entry as your service matures.  The result is an extremely dissatisfied customer: one who won’t refer your product to a friend or colleague, and who is only waiting for a better opportunity to present itself before they leave your product in the past.  When they are gone, they aren’t likely to come back; they already know how bad it is to use your product.

When you decide to use a product like OAS to form the foundation of your API, you prioritize your needs over the needs of your customers.  The short-term benefits of OAS disappear quickly, but the long-term negative effects on your business and brand will be extremely costly to remedy.  A product which puts the priorities and goals of the customer front and center will drive referral sales, creating buzz and goodwill in the marketplace surrounding your brand.  If you want to create long-term goodwill and revenue security, you need to prioritize how people feel about using your product.

You can’t sell a mega product.

Technology is forcing businesses to change how they sell their products to the market.  The concept of selling an entire package of solutions is quickly yielding to selling smaller, incremental sets of solutions which can be independently acquired and used when the customer needs them.  Digital products are quickly being commoditized down to the logic and value they add to your customers’ business processes.  If your business model is not changing, you will likely soon find your target market has dried up.  Inferior but modularized products will take the place of yours as customers learn to carve out the functionality they need without the unnecessary costs or complexity of larger bundled solutions.  Every organization is in a race to the bottom on the cost to provide products and services; if you force a customer to buy products they don’t need, you are continuously inviting them to seek alternatives.  When you promote goodwill and engagement with your customers, the easiest sales channel, your current customers, becomes even easier for selling enhanced services.  Your customers are more likely to buy additional products from you when they are already satisfied with their current solutions.

You can’t sell a mega product, but you might be able to sell a customer all the parts of one.  Fixed API designs like OAS make segmenting your products difficult and unintuitive, while requiring a lot of management overhead which cuts deeply into the product’s margin.  The dynamic design of hypermedia APIs allows your product to be segmented naturally.  This enables marketing initiatives to directly target specific functionality and customer pain points, while adding very little overhead to eat into profits.  If your segmentation isn’t intuitive and it isn’t easy to determine where the functional and license boundaries lie, your customer experience suffers, dragging your future sales potential down with it.

Your business probably isn’t hyper growth.

It is difficult to look at the success of companies like Facebook, Netflix, Amazon, and Uber and resist the temptation to copy the way they operate; however, it’s very likely the needs of this niche market do not match those of your organization or your industry.  The move-fast-and-break-things mentality of Silicon Valley and other venture-funded startup hubs pairs extremely well with the short-term benefits of API designs like OAS.  In the venture world a problem two weeks away can feel like two or three lifetimes.  Companies who intend to hyper-scale for acquisition aren’t concerned with their customers’ success in two and a half years, because after they sell in two years it will be someone else’s problem.  Google and Amazon build, try, and sunset so many products that it would be folly to waste time on long-term benefits to themselves, let alone to customers.  If you are reading this, there is a very good chance your needs don’t align with such short-term goals.  Trying to operate using the same tools and methods as these hyper-scale companies will do a disservice to your customers and your brand.  Your business model is likely concerned with customer satisfaction in two, five, even ten years.  Hyper-scale companies have developed, and continue to develop, good tools for their needs; my advice is to look carefully to see whether those tools fit your needs, because it’s likely they don’t.

Hypermedia APIs are simultaneously an extremely proven design and an unexplored frontier.  The internet itself runs on the very same principles as a well-designed hypermedia API.  Developer tools for this space currently lag behind alternatives like OAS, but they have the same potential for speed to market, prototyping, and integration tooling.  Investments in developing hypermedia APIs and the tooling around them are investments in the future on the scale of decades.  Hyper-scale companies have created tooling which prioritizes their goal of short-term gains; if your company is not primarily interested in short-term gains, it is up to you to create the tools which prioritize the long-term benefits that match your goals.  The long-term benefits to your company and your customers of developing a hypermedia API have no equal; there isn’t even a good alternative to compare.  If your business is concerned about long-term market sustainability, revenue, and customer retention, you should be looking into hypermedia APIs.

Hypermedia APIs: Decouple your design from a format

In my last post I discussed the start of the next group of guidelines: using vocabulary-provided goals to curate hypermedia interactions with the service.  This exciting idea allows truly domain-driven interaction with your service while remaining stateless and easy to consume.  The next guideline is more of a cautionary tale: the design should be decoupled from the hypermedia format of choice.

When hypermedia is discussed today, I imagine the conversation ventures around the room comparing the different available formats.  You’ll hear mention of HAL, JSON API, JSON-LD, Hydra, Siren, and Collection+JSON, among others.  The pros and cons of each are weighed, and eventually a consensus is reached and the team decides to use format ‘X’.  The particular format picked is irrelevant to this discussion; however, there is a chance the format picked will include something it shouldn’t: specifications for URL patterns.  The problem is that even as the hard-won victory of building the hypermedia API and client allows near-effortless consumption of the service, formats which specify URL patterns greatly increase the odds a consumer will cheat and bind to a URL other than the root.

This isn’t the only concern with supporting a hypermedia format at specific URL patterns, however.  Suppose you had a requirement to support another format as well?  Not a problem: that format can just use the same URLs, since hypermedia makes the URLs irrelevant.  But what happens when the requirement for a third format comes in, and this one also prescribes specific URL patterns?  Things start to break down here, and the service needs to start managing the context between multiple endpoints which are synonymous with each other.  This creates a variety of problems you really don’t want to have to deal with, like reduced caching and cache inconsistency, as different URLs aren’t cached as the same resource.
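One way to avoid synonymous endpoints entirely is to serve every format from a single URL using HTTP content negotiation, so caches always see one resource.  Here is a minimal sketch of that idea; the serializer functions and media type strings are illustrative stand-ins, not a complete implementation of either format:

```python
# Sketch: serving multiple hypermedia formats from ONE URL via the
# Accept header, so a cache never sees two URLs for the same resource.

def to_hal(resource):
    # HAL nests links under a reserved "_links" key.
    return {"_links": {"self": {"href": resource["href"]}}, **resource["data"]}

def to_siren(resource):
    # Siren carries links as a list of {rel, href} objects.
    return {
        "links": [{"rel": ["self"], "href": resource["href"]}],
        "properties": resource["data"],
    }

SERIALIZERS = {
    "application/hal+json": to_hal,
    "application/vnd.siren+json": to_siren,
}

def negotiate(accept_header, resource):
    """Pick a serializer from the Accept header; None means 406."""
    for media_type in accept_header.split(","):
        serializer = SERIALIZERS.get(media_type.strip())
        if serializer:
            return media_type.strip(), serializer(resource)
    return None, None  # caller would answer 406 Not Acceptable

person = {"href": "/person/1", "data": {"name": "Ada"}}
media, body = negotiate("application/vnd.siren+json", person)
```

Because both representations live at `/person/1`, adding a third format is one more serializer, not a parallel URL hierarchy.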

The easy, short, and best answer is simply to avoid the formats which prescribe a URL pattern.  If that’s not possible, leave out those portions of the format you can; and if that doesn’t work, hopefully one of the other fantastic alternatives will provide the right set of attributes to fit your initial use case.

Hypermedia APIs: The user has goals so listen!

In my last post I addressed the worst acronym ever, HATEOAS, and how to truly have hypermedia drive the stateful interaction of your application.  That discussion rounded out the more standard guidelines for creating hypermedia APIs, laying a nice foundation for the next four guidelines, which are part of the forthcoming hAPI specification intended to drive adoption of hypermedia APIs through reduced complexity and better tools.  In this first post of the series addressing the next stage of hypermedia APIs, I would like to address goals.  Specifically, the goals of the consumers of your API, and putting some serious effort behind helping them achieve those goals.

If you take a step back and look at the interaction with a CRUD API, you should see a usage pattern which relies heavily on understanding the service provider’s implementation model in order to know which interactions are required, and in which order, to accomplish a larger goal.  Due to my history in the financial sector and common familiarity with banking, I tend to use the creation of a checking account as an example to demonstrate the issue.

Suppose you had the following CRUD APIs:

/account
/address
/person

If you wanted to create a new checking account, considering documentation like OAS doesn’t show larger interaction arcs, what would you do?  The most likely implementation scenario requires you to create a person with an address, and then use this person and address to create an account.  The problem is I had to reason my way to this conclusion as the service implementer.  As a consumer I don’t, and shouldn’t be forced to, care about the concerns of implementing a service in order to consume it.  This was an easy enough example, so you might be inclined to shrug it off as manageable.  If you aren’t so inclined, then I’m sorry for your trouble, because you have been through the same pain I have; it wasn’t fun.  If, however, you have so far been spared the joy of integrating a service designed around someone else’s undocumented internal data model of confusion, then I have concocted just the elixir to prove how real this problem is.
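The implicit ordering can be sketched in a few lines.  The endpoints come from the example above; the `post` function is a fake transport standing in for a real HTTP client, and the payload fields are hypothetical:

```python
# Sketch of the call ordering a consumer must reverse-engineer from a
# CRUD banking API; nothing in /account, /address, or /person documents
# this sequence, so the consumer must infer the provider's data model.
import itertools

_ids = itertools.count(1)

def post(path, payload):
    # Stand-in for an HTTP POST that returns the created resource with an id.
    return {"id": next(_ids), **payload}

# The "correct" order: address, then person, then account.
address = post("/address", {"street": "1 Main St"})
person = post("/person", {"name": "Ada", "address_id": address["id"]})
account = post("/account", {"type": "checking", "person_id": person["id"]})
```

Every foreign-key field here (`address_id`, `person_id`) is knowledge of the provider’s internals that has leaked into the consumer’s code.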

Assuming at this point we’re all on the same page regarding the previously mentioned pain, we need to look at the hypermedia solution, which is without a doubt a much friendlier interface to a very similar, albeit muted, frustration.  You see, despite engagement with the service being significantly easier, I am still required to understand the domain model and internal composition structure well enough to make value judgements about how to crawl the service intelligently.  As a consumer discovering the service correctly, I have two choices: try everything, or try everything while guessing at relationships.  Neither sounds particularly appealing, but at least it’s better than using CRUD.

I propose a third option: as part of the vocabulary for the service, define domain-relevant goals which a consumer can provide to express their intent in consuming the service.  In the banking example above, it would be much easier if the profile linked by the home document contained a goal of “new-customer-create-account” which I could provide to the server, so it could tailor the hypermedia of its responses to steer my client towards the new-account goal.  Hypermedia APIs are a great leap forward in usability; by pairing goals with hypermedia we can greatly reduce the interaction difficulty and increase the speed at which we can integrate and release new APIs.
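A server-side sketch of goal curation, under stated assumptions: the goal arrives as a hypothetical query parameter, and the rel names and goal table are illustrative, not part of any published specification:

```python
# Sketch: the server filters the hypermedia it returns so a client that
# states a goal is steered along one interaction arc instead of having
# to crawl (or guess at) the whole service.

ALL_LINKS = {
    "create-person": "/person",
    "create-address": "/address",
    "open-account": "/account",
    "close-account": "/account/close",
}

# Hypothetical goal -> the ordered rel names that accomplish it.
GOALS = {
    "new-customer-create-account": [
        "create-person", "create-address", "open-account",
    ],
}

def links_for(goal=None):
    """Return only the hypermedia relevant to the stated goal."""
    if goal is None:
        return ALL_LINKS  # no goal: full discovery
    return {rel: ALL_LINKS[rel] for rel in GOALS[goal]}
```

A client declaring “new-customer-create-account” never even sees the close-account control, so there is nothing irrelevant to guess about.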

The hypermedia API designer should look not only to create the appropriate vocabulary for the service, but also to encapsulate the larger goals of the domain to provide stateless hypermedia curation for their consumers!  Together this will reduce the amount of knowledge a consumer needs about a particular domain or implementation in order to successfully consume it.

Hypermedia APIs: hypermedia is the state.

In my previous post in this guidelines series I discussed the many reasons versioning should not be introduced into your API, despite the existence of convenient tricks to hide some of the side effects for a time.  Many leading tech organizations argue the opposite, which might be reasonable when the likelihood of any particular API living long enough to get past v1 is extremely low.  However, these organizations aren’t in the business of creating reliable, flexible, and enduring APIs for long-term consumers.  In this post I’ll discuss perhaps the most mispronounced and ill-conceived acronym in an industry obsessed with acronyms: HATEOAS.

The terrible and often thrown-about acronym stands for “Hypermedia as the engine of application state” and is possibly the most frustrating part of Roy Fielding’s entire dissertation.  Once understood, the concept is simple; the problem is that, for better or more likely worse, this short sentence represents the entire discussion of hypermedia in the paper.  For one of the fundamental tenets of a RESTful application, Roy spent precious little time in his dissertation expanding on this rather arcane phrase.  To be fair to Dr. Fielding, he is quoted as saying he did want to actually graduate, his dissertation is foundational to many movements within technology, and an in-depth discussion of HATEOAS would have been a large additional undertaking.  With that in mind, I’ll go through it quickly.

The concept boils down to a very simple principle: state transitions in the application should be invoked through hypermedia-driven links.  You can throw out the endless discussions on ‘nounifying’ verbs and shoehorning data models into a CRUD paradigm to match the four commonly used HTTP methods.  Your resources are your resources, and their representations are whatever the domain models require; that’s good, because as previously discussed you should spend a lot of attention on your resource representations.  By clearly separating stateful transitions from your representations you maintain a stateless interaction and greatly reduce the complexity of your representation design.  This still leaves us with the need to present the actions currently available on any particular resource to a consumer of the API.

The good news is we’ve already set the stage, through the vocabulary definition in the profile, with everything required to utilize hypermedia to control the actions of resources.  Regardless of the hypermedia format you use, there are two components to the hypermedia you need to present for resource actions: the link and the rel name.  The link is the URL for the consumer to follow to submit the next request.  This link may be templated in certain cases but should generally be provided to the consumer fully composed.  The URL is provided by the service and is not constructed by the consumer; however, it can be augmented with query parameters for filtering, sorting, sparse field sets, and more.  The rel name is a word in the vocabulary which corresponds to an action a particular resource can take.

The final piece of the puzzle is how we take the vocabulary of resource and action representations and turn it into an engine of application state.  Up until this point the actions described in the vocabulary have consisted solely of a name; the rest of the definition is the type of transition and the messages, or templated messages, to send.  In general terms, you have safe, unsafe, and idempotent actions which can be performed on resources.  Through the use of a protocol binding profile we can get a good mapping of the profile semantics to HTTP request types.
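A minimal sketch of such a protocol binding profile, assuming a simple one-method-per-transition-type mapping; the vocabulary entries and the exact method choices are illustrative, not taken from a published binding:

```python
# Sketch: each action in the vocabulary declares only its transition
# type; a binding profile resolves that type to an HTTP method, so the
# vocabulary itself never mentions HTTP.

HTTP_BINDING = {
    "safe": "GET",        # no state change
    "idempotent": "PUT",  # repeatable state change
    "unsafe": "POST",     # non-repeatable state change
}

VOCABULARY = {
    "locate": {"transition": "safe"},
    "rename": {"transition": "idempotent"},
    "run": {"transition": "unsafe"},
}

def bind(rel_name):
    """Resolve an action's HTTP method through the binding profile."""
    return HTTP_BINDING[VOCABULARY[rel_name]["transition"]]
```

Swapping the binding profile (say, to a different protocol entirely) would not touch the vocabulary, which is exactly the decoupling being described.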

With all of these pieces in place, we have all of the components necessary to start our engines!  The hypermedia in a representation is included dynamically in the message, as the state of the resource requires, to give the user choices in interacting with that resource.  Suppose you had a collection of person resources: your home document guides your hypermedia client to the root of the person resources, which is a collection of all of them.  Here it would be helpful to include links such as self, profile, and next, providing both documentation context to the client through profile if necessary, and an easy way for the client to know immediately how to interact with the collection.  Perhaps the first person in the collection is of interest, and the client navigates to the link for that individual person resource.  Suppose the person had a ‘doing’ property which could be sitting, standing, walking, or running, and the current value for this person is standing.  The server would then include hypermedia controls named ‘sit’, ‘walk’, and ‘run’.  The exciting part is that the client can navigate entirely based on the vocabulary presented to it.  Because the message and protocol bindings are provided, the client is merely responsible for composing the message as described, from the parts available in the links provided by the service itself.  The client composes a run message and submits it, using the appropriately bound HTTP method, to the URL provided in the link; the resource state transition is handled without the client ever having to consider anything about the protocol itself.
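The person example above can be sketched on the server side.  The transition table and URL shapes are illustrative assumptions; the point is only that the controls included in a representation depend on the resource’s current state:

```python
# Sketch: the server attaches only the hypermedia controls valid for
# the resource's CURRENT state, so the client never sees an action it
# cannot take.

TRANSITIONS = {
    "sitting": ["stand"],
    "standing": ["sit", "walk", "run"],
    "walking": ["stand", "run"],
    "running": ["walk"],
}

def represent(person):
    """Build a representation with state-appropriate controls."""
    links = {
        "self": f"/person/{person['id']}",
        "profile": "/profiles/person",  # documentation context
    }
    # Add one control per transition legal from the current state.
    for rel in TRANSITIONS[person["doing"]]:
        links[rel] = f"/person/{person['id']}/{rel}"
    return {"doing": person["doing"], "_links": links}

doc = represent({"id": 7, "doing": "standing"})
```

A standing person is offered sit, walk, and run, but never a redundant “stand” control; when the state changes, the next representation changes with it, with no client-side logic required.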

With the power of these orchestrated interactions and the right clients, you can quickly interact with a hypermedia API with very little prior knowledge of the service or its domain.  With this interaction model you don’t try to squeeze behavior into a four-verb vocabulary; you write the vocabulary and messages which make sense for your task, and use a protocol binding to map them directly to interactions.  We now have a more in-depth understanding of the mechanism which makes the URL pattern irrelevant to the consumer.

Hypermedia APIs: Swagger is not user friendly

As a developer who has been designing and implementing APIs for the past five years, I have found integrating external services to be a key component of developing every product.

As you sit in your design meeting, scrum, or stand-up, the moment a new integration is mentioned you can see the pause spread throughout the room and trepidation cross each colleague’s mind.  It is painfully obvious everyone is sharing variations on the same thoughts and questions.  How good is the documentation for this service?  How long will it take to wade through the idiosyncrasies and bugs to a stable implementation?  Without knowing the specifics, everyone in the room is instantly aware of the landmines waiting for them.

These common concerns are entirely justified; the quality range of the services you may have to integrate presents a near-limitless combination of difficulties.  An entirely undocumented service isn’t even the worst case, as untrustworthy but thorough documentation can be much worse than discovery by trial and error.

Unfortunately, when implementing our own services we often overlook or deprioritize the ease of use of our designs for the end user.  It’s an easy trap to fall into with deadlines and deliverables, and that is precisely why it is so important to use designs and tools which make this simple.  Through the specification wars of the last five years, the CRUD-REST industry has settled on the Swagger specification (now the OpenAPI Specification, OAS) as the standard for application design.  While this represents real improvement over snowflake services, a vocabulary-driven hypermedia approach gives us all the beneficial properties of OAS as well as the long-term benefits of flexibility, adaptability, and a reduced burden on getting the initial design perfect.

There are two primary problems with the solution provided by OAS: it tightly couples clients to the service through URLs, and it requires orchestrating client changes in step with service changes.

The first problem is easier to understand: by hardcoding the resource hierarchy to a URL and a specific representation, you now require tight and explicit versioning for clients to safely consume the service.  Any developer familiar with SOAP web services should notice the similarities: OAS is the WSDL for a SOAP-like service without an envelope, using curly braces and three extra HTTP methods.  The same arguments against the tight interface binding of SOAP services are becoming increasingly relevant when discussing the cons of OAS services.  The ramifications are immediately felt, but familiar tooling has silenced detractors enough to satisfy the majority into adopting the specification.

The second problem is much more nuanced, but far more frustrating to contend with because it is not immediately felt.  The designs of SOAP and OAS lend themselves well to situations where the same group or company controls both the service and the client.  If you distribute an SDK to wrap your service calls, distribute your own mobile applications, or support web applications under your control, then the negative effects of the style aren’t felt until you need to perform the first major upgrade to these clients.  In this situation you can manage the negatives to a degree.  The difficulty is entirely unnecessary, but resisting the temptation to wait and deal with the problem when it comes up is hard.  You are certainly aware the process will be difficult and will consume resources and time, but the time and resources you are committing to change management are in the future, and your current deadlines are fast approaching.

The worst effects of this ill-advised tradeoff are felt when you do not control any portion of your API’s consumers.  This applies to cases as small as an internal microservices architecture or as large as your company’s external APIs, and it will hit your bottom line directly.  If you deploy microservices which are tightly coupled to URLs and representations, you will need to manage the service dependency trees to fully deploy changes.  Assuming no change made in one service breaks another, you have invited the complexity of a massive organization like Netflix to solve a relatively small problem.  If a change does break another service, you have lost a large portion of the benefits of a microservices architecture by tightly coupling two or more services which should be independent.  The benefits of the architectural style to the development team are obvious, but you may lose more time and resources managing the DevOps than you gain from development.  If your public-facing APIs change frequently, forcing your consumers to modify their clients to meet your needs, then you shouldn’t be surprised to see some of those clients explore or exit to your competitors.  Introducing breaking changes forces a slow release pattern, and requires your consumers, as well as your internal team, to manage multiple versions of your API.  As the difficulty of maintaining an integration with your service increases, the likelihood of your clients looking for alternative providers rises from a real chance to a near certainty.

The obvious question you are probably asking is: how is hypermedia any different?  If my clients bind to a domain vocabulary, haven’t I just moved the binding point with the same result?

The answer is no.  When transitioning from a CRUD API to a hypermedia API, you move from statically binding consumers to services to binding them dynamically.  Hypermedia APIs by their nature should be discovered at each use.  Hypermedia clients should only ever have the root URL of the service statically bound.  The vocabularies can and should change over time to support changes in the understanding of the domain, or actual changes to the domain itself; however, it is now possible to gracefully support clients as they migrate themselves, at their own pace, to newer portions of the vocabulary.  The client is no longer responsible for managing which version, or effective version, of your service it is interacting with on a per-call basis; the service handles this for the consumer.  Architecturally it may be necessary, or simply easier for deployment, to run multiple effective versions of a service to support this graceful transition; the key takeaway is that the consumer is completely unaware of these URL changes.  The consumer is simply discovering, caching, and composing resource representations with metadata through links as it interacts with the service.  Any changes made would propagate to all clients by the end of the maximum caching period set by the service.  Any interactions using now-malformed or expired resource representations, or moved resources, can be managed with ETag headers and HTTP 3xx response codes.  Clients are bound to the vocabulary, which means they simply look for resources and link rel names they know, while caching information to reduce extra calls to the service for resource and service metadata.
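The dynamic-binding idea can be sketched in a few lines.  The `RESPONSES` table fakes the service (a real client would issue HTTP GETs, handle caching, ETags, and 3xx redirects); the URL shapes are arbitrary and deliberately invisible to the client:

```python
# Sketch: a dynamically bound client. Only the root URL is hard-coded;
# every other URL is rediscovered on each interaction by following rel
# names from the vocabulary.

RESPONSES = {
    "/": {"_links": {"people": "/v2/person"}},            # home document
    "/v2/person": {"_links": {"first": "/v2/person/1"}},  # collection
    "/v2/person/1": {"name": "Ada", "_links": {}},        # item
}

def fetch(url):
    # Stand-in for an HTTP GET (plus caching and redirect handling).
    return RESPONSES[url]

def follow(rel_path, root="/"):
    """Walk rel names from the root; the client never composes a URL."""
    doc = fetch(root)
    for rel in rel_path:
        doc = fetch(doc["_links"][rel])
    return doc

ada = follow(["people", "first"])
```

If the service moved everything from `/v2/...` to `/v3/...` tomorrow, only the `_links` values would change; the client’s `follow(["people", "first"])` call would keep working untouched, which is the graceful migration described above.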

This is a slightly more complex integration model, but the development of libraries to manage the increased complexity can relieve consumers of even more of their burdens, allowing them to focus on their true goal, whether that is creating a UI or consuming the service for some other useful purpose.