How to (not) give your first conference talks.

Full disclosure and TL;DR

My talks at RESTFest were not the fairy tale ending of a Cinderella story.  They were really, really bad.  Skip to the bottom if you would like to watch them without the results of my retrospective process.

“Luck is what happens when preparation meets opportunity.” – Seneca

If we go by that definition, I was not lucky at RESTFest this year.  My preparation game was seriously lacking.  I had the opportunity, but through hubris, indecision, and a touch of nerves I spoke dispassionately and poorly about topics which deeply engage me.

What happened?

During my preparation for the conference I rode the fence, indecisive about which topics I wanted to present.  The conference added a great workshop by Shelby Switzer on hypermedia APIs and clients, which made me feel like my hypermedia talk would be largely redundant.  In short, I squandered the preparation time by talking myself out of presenting a topic I felt passionate about, without ever being fully convinced either way.  When I arrived, the environment was far more welcoming and supportive than I had dared anticipate, and I was persuaded to develop and present a long form talk despite conventional wisdom against that kind of move.  I wasn’t prepared for speaking in this environment, and my lack of preparation hit me from an entirely unexpected (by me) vector.

This is where the hubris comes in. For most of my school and professional life I’ve found success in even the most important speaking scenarios by knowing the material well, compiling a light list of touch points, and allowing myself to flow freely through the material.  I was sure, despite all the evidence I had read to the contrary, that I would have no difficulty ‘shooting from the hip’ in this way.  Ha ha ha… nope.

Speaking at a conference to an unknown audience is very different from any other form of public speaking I’ve encountered; it requires you as a presenter to have strong confidence in your delivery structure in the absence of a rich interpersonal feedback loop.  I was unwittingly relying on an undeveloped and untested muscle for reading the audience and adapting my tactics to it.  I assumed my confidence in my mastery of the material would be enough to power my presentation with zest and inspire everyone to immediately pick up the hAPI banner and charge forth.  Ha ha ha… nope.

This perfect vision was about as far away from reality as possible, while still delivering any talk at all.

What the audience got instead was a couple of tone-deaf lectures, presented with obvious discomfort and lacking any semblance of passion or inspirational energy.

To the attendees of RESTFest 2017:  Please accept my deepest apologies for putting you through such difficult talks.  Please also accept my sincerest gratitude for the benefit of the doubt you extended in allowing me to finish the experience, and for the tremendous support you all showed after my talks.  I did my material a disservice and gave an uncomfortable talk, yet you still welcomed me with honest, constructive, and yet reserved feedback.

Seriously, you guys rock.

Sorry Seneca, I’m not buying.

I’m choosing to reject Seneca’s definition in this case, because it doesn’t afford me any obvious paths forward.  I will have future talks, but I had this opportunity and failed to truly capitalize on it.  Instead I’m looking at this as a win, not for my pride, but because I can learn from it and use it as a base to move forward.

Being such an introspective person, I’ve been repeatedly beating myself up over this failure since I walked away from the podium.  It’s only gotten worse since I’ve seen proof my initial analysis was accurate.  As little as one year ago I lacked any motivation whatsoever to speak at conferences, yet in September I found myself at the very same conference which formed the foundation of my understanding of hAPI architecture.  There, in front of my proxy teachers, colleagues at large, and even someday perhaps my peers, I stood and presented my ideas.

Thanks to Ronnie Mitra for offhandedly diagnosing my condition as “Imposter Syndrome” to explain the sudden nerves which nearly froze me in place.  It was a very new experience for me to feel nervous voicing my opinions or ideas.  The monotone, lecturing way I spoke stood in stark contrast to the passion (bordering on fervor, to put a positive spin on it) with which I usually discuss anything I care deeply about.  Yet with the support of my family, this community, and my commitment to these goals, I’m not running away.  I have more talks in the near future, and it is my hope there are yet more still to come.

Why?

Despite my poor performance at RESTFest, these topics are things I’ve become very passionate about; they are all connected and worthy of my time.  Working to help refocus the tech industry on providing value to people; making it easier for developers to help people; expanding the definition of developer to include more people; recruiting more people to this cause of helping people: the connecting thread is a deep-seated calling to help people which I’ve uncovered in the last year and a half.  I refuse to look at my poor performance as a failure, because a failure would be an anchor with a strong pull to stop or change course.  I’m not standing at an inflection point; I’m standing at a fork in the road between the hard path towards my goals and the easy path towards some consolation destination.  This failure of mine is actually an opportunity to prove my resolve and grow.

“If not now, when? If not you, who?” – Hillel the Elder

I’m choosing to view this as a win because a lot more somebodies have to do this, and having seen the opportunities I can’t willfully abandon them.  I was lucky at RESTFest; Seneca’s definition is not the only one.  I may have struck out, but at least I got the chance to bat in the first place.  Obviously this is a rocky start down this path, but I’m choosing to own it.  It’s my rocky start.

I usually refrain from discussing things as intimate as this, since my thinking sometimes comes off as alternately grandiose or convoluted, but the recordings are available and I can’t change the past.  I can only control how I respond to it and what I do next.  I’m using this raw disclosure as a way to provide some excuse-free context for the videos and a guiding light to keep myself on course.  I’m claiming responsibility for the lack of preparation and defining a path to grow into this speaking world I find myself in.  Sure, I haven’t given myself easy or short-term goals, but I now have a way to objectively track my progress and observe any deviation on the long path to my goals.

Epilogue

If for some reason you have read this far, and you still have the desire to view my talks I’ve included the links below.

Last warning – as of the writing of this post, I’ve only been able to suffer through the first short talk and about 9 minutes of the second.

Stop burning your customers and users.

Human Conversation Services.

A pragmatic review of OAS 3

Disclaimer

Before I go any further I want to address the elephant in the room. Obviously I consider myself a hypermedia evangelist, and I’m aware it is easy to make ivory tower arguments from this perspective. I am also an application architect, which requires frank pragmatism: today’s OK solution is generally much preferred to next year’s better one.  In most of my previous posts I’ve focused my discussions on the distance between where we are as an industry, where I think we should go, and why it’s important.

Getting started

As part of my process of preparing for my upcoming talks at APIStrat on API Documentation and Hypermedia Clients, I’ve been reviewing the specification in depth for highlights and talking points.

On one of my first forays into the new world of Twitter, I rather tongue-in-cheekily (https://twitter.com/hibaymj/status/865054487119089665) pointed out, as a hypermedia evangelist, my issue with the specification.  Going back, I would probably express the thought differently, but the crux of the issue is that OAS does not support late binding.

I’ll get back to this point later, because first I want to talk about the highlights of the specification to acknowledge and applaud the hard work put into such a large undertaking.  Looking back on the state of the art of APIs only 10 years ago, it’s easy to see the vast improvements our current standards and tooling provide.

At this point I’m going to assume most readers have googled the changes to the format in OAS 3.  My aim with this post is not to focus on the changes, but to evaluate OAS as it exists in the current version.

The Great Stuff

Servers Object

This is a very powerful element for the API designer, allowing design-time orchestration constraints to be placed on the operation of the services. It can greatly enhance the utility of OAS in many scenarios, including but not limited to API gateways, microservices orchestration, and implicit support for CQRS designs on separate infrastructure without an intermediary.
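As a sketch of the kind of design-time constraint this enables, a hypothetical OAS 3 fragment might declare separate server environments with templated variables (all URLs and names below are illustrative, not from any real API):

```yaml
servers:
  - url: https://api.example.com/v1
    description: Production
  - url: https://{region}.staging.example.com/v1
    description: Regional staging
    variables:
      region:
        default: us-east
        enum:
          - us-east
          - eu-west
```

A gateway or orchestration tool can select among these entries at deploy time, which is what makes scenarios like routing commands and queries to separate infrastructure possible without an intermediary.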

Components

My previous experience with OAS 1.2 led to a lot of redundancy, which the components structure of the current version very elegantly eliminates.  The elegance stems from the design choice of composition over definition, allowing for reuse without redundancy.  It simplifies the definition of body, header, request, and response components, as reuse becomes a matter of composition.  The examples section is a developer experience multiplier, which is welcome and should be strongly encouraged.
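A minimal sketch of that composition style, with hypothetical schema names, shows how a component is defined once and reused by reference:

```yaml
components:
  schemas:
    Error:
      type: object
      properties:
        code: { type: integer }
        message: { type: string }
  responses:
    NotFound:
      description: Resource not found
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/Error'
paths:
  /widgets/{id}:
    get:
      parameters:
        - name: id
          in: path
          required: true
          schema: { type: string }
      responses:
        '404':
          $ref: '#/components/responses/NotFound'
```

Every other operation that can return a 404 points at the same `#/components/responses/NotFound` entry rather than redefining it.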

Linking

As a hypermedia evangelist, my approval of this section should not come as a surprise.  It mirrors in concept many of the beneficial aspects of an external profile definition like ALPS, and is a welcome addition to the spec.
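For illustration, a hypothetical links entry might tell a consumer that a field in one response feeds a parameter of another operation (the operation and field names are invented):

```yaml
paths:
  /users/{id}:
    get:
      operationId: getUser
      responses:
        '200':
          description: A single user
          links:
            userOrders:
              operationId: getUserOrders
              parameters:
                userId: '$response.body#/id'
              description: Fetch the orders belonging to this user
```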

Callbacks

The standardization of the discovery or submission of webhook endpoints within the application contract itself is a very good step toward supporting increased interoperability, both internally and between organizations.
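Sketched in OAS 3 terms, a subscription operation can declare the out-of-band calls the service will later make to a consumer-supplied URL (the shapes below are illustrative):

```yaml
paths:
  /subscriptions:
    post:
      requestBody:
        content:
          application/json:
            schema:
              type: object
              properties:
                callbackUrl: { type: string, format: uri }
      responses:
        '201':
          description: Subscription created
      callbacks:
        onEvent:
          '{$request.body#/callbackUrl}':
            post:
              requestBody:
                content:
                  application/json:
                    schema: { type: object }
              responses:
                '200':
                  description: Event received
```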

Runtime Expressions

With the inclusion of this well-defined runtime expression format, OAS removes a large amount of ambiguity for consumers and tool developers. It allows the API designer to add a lot of value by enhancing ease of use for consumers and integrators.
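A few examples of the expression format give a sense of what consumers and tooling can resolve against the live HTTP exchange:

```yaml
# Runtime expression examples (resolved at request/response time):
#   $url                     the full request URL
#   $method                  the HTTP verb used
#   $request.path.id         a path parameter from the request
#   $request.header.accept   a header value from the request
#   $response.body#/id       a JSON Pointer into the response body
```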

A Mixed Bag

These items are included here simply because a tool’s utility isn’t determined when it is created.  The optional nature of the definitions and use cases of the responses object and the discriminator opens them up to the potential for unnecessary ambiguity and misuse.

Responses Object

All of the benefits I mentioned in the components section also apply to the responses object. My concern centers on the enumeration of the different expected responses.  The authors deserve credit for immediately pointing out that this shouldn’t be relied on as the full range of possible responses.  My experience has shown that designers, tool developers, and end consumers are prone to missing this kind of fine print, subsequently over-relying on these types of features.

Discriminator

For the purpose it serves, I think the discriminator as defined is a very elegant solution which helps to differentiate OAS from standard CRUD.  It allows for the use of hierarchical and non-hierarchical polymorphism alike, for more concise and reusable designs.  However, it still fundamentally ties the API to data formats defined at design time.
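A common sketch of the pattern (hypothetical Pet/Cat/Dog schemas) shows how the discriminator selects a concrete schema from a design-time mapping:

```yaml
components:
  schemas:
    Pet:
      type: object
      required: [petType]
      properties:
        petType: { type: string }
      discriminator:
        propertyName: petType
        mapping:
          cat: '#/components/schemas/Cat'
          dog: '#/components/schemas/Dog'
    Cat:
      allOf:
        - $ref: '#/components/schemas/Pet'
        - type: object
          properties:
            livesLeft: { type: integer }
    Dog:
      allOf:
        - $ref: '#/components/schemas/Pet'
        - type: object
          properties:
            goodBoy: { type: boolean }
```

Note that every resolvable type must already appear in the mapping: this is precisely the design-time binding mentioned above.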

Room for Improvement

The Extension Mechanism

With obvious resemblance to the now long-deprecated format of custom HTTP headers, this section should follow the spec’s own well-designed components format.  This upgrade could use the composition rules defined within the spec to enable much better support from tooling developers and more consistent interoperability.

It’s All Static

While the authors have done an excellent job moving a lot of the static portions out of the spec, it is still fundamentally static at its core.  Fortunately the static nature of the format is largely limited to a small section of the document, allowing designers and developers much more room to innovate after design time.

Intertwined Protocol and Application Design

In computer science it is always immensely difficult to know precisely where to create boundaries for improved separation of concerns.  The OAS specification was not created in an ivory tower bubble.  It was created to solve real problems in real time.  Unfortunately, it still bears scars from this period by mixing protocol design concerns with application design concerns.  Each application design component is also able to declare protocol properties, in a mix which doesn’t allow for protocol portability.  If protocol concerns like HTTP headers and response codes were abstracted into external definitions or formats, then the reuse of OAS could bridge nearly all relevant protocols.  However, there would be one thing left to prevent specification portability – the path.

Path Is The Base Abstraction

Getting back to the point raised in my cheeky tweet: by using the URL path as the primary abstraction, the specification creates the possibility of many future operational, developmental, and maintenance issues.  Recently even the quickly growing GraphQL community has joined voices with hypermedia proponents to point out how this subtle design flaw can develop into severe issues.

Bringing It All Together

The purpose of this post isn’t to point out all the flaws in OAS, but to give a pragmatic review of the state of the specification.  If you want a more in-depth analysis, take a look at Swagger isn’t user friendly.

In the end, if you’re going to opt for an alternative to hypermedia, then OAS is about as close as you can get at this point.  The ecosystem fits extremely well in the wide berth between a single-user service and massive scale where every byte counts.  If your service design hasn’t been updated in the last 10 years or is nonstandard, it’s very likely OAS 3 would be a massive improvement, and it represents today’s best ‘good enough’ solution.

Some of these necessary improvements are easy to handle; others will require more finesse to mitigate, if they are addressed at all.  One thing is clear: if your project is still using custom API designs, or you spend too much time managing older service designs, and you don’t have time to contribute to a hypermedia alternative, then OAS is worth your serious consideration.

A RESTed thank you!

Last week, from Thursday through Saturday, I had the privilege of attending RESTFest to speak, listen, and learn.  Much thanks has already gone out to the organizers Benjamin, Mike, Shelby, and Ryan; they deserve it and more for their efforts to organize, finance, and run the event as smoothly as they did.

About a year ago I started down this path towards a deeper understanding of REST services and API design, and while this is the first time I’ve attended RESTFest, the talks from past years provided a very strong educational foundation. That understanding allowed me to direct the next stages of my research through specifications and papers. I have a strong belief in the responsibility of the successful to strengthen the ladder they climb behind them, to allow more to follow.  Over the next few weeks I’m sure to glean even more knowledge from my time in Greenville; it’s already obvious the organizers and veteran attendees of RESTFest have built a very strong ladder.

In hindsight it was probably a mistake to rewrite my talks to fit the extra-session conversations on different API design approaches.  While the material I wanted to present was the same or similar, I didn’t have the opportunity to rehearse, scrutinize the potential reactions to the slides, or practice my delivery.  I do believe there was room for improvement in the delivery, the organization of the slides, and the focus of the talks.  However, despite these shortcomings, I faced nothing but helpful and considered feedback from others to improve my delivery and presentation.

The uniquely safe environment they created gave someone like me the opportunity to learn firsthand the differences between speaking at a lunch and learn and at a conference, among many other things.  Lessons ranged from a touch of speaker wisdom to the limitations of AWS managed services. In my mind all of these constructive critiques, and the utter lack of negativity, only further demonstrate the safe educational environment at RESTFest.

Thank you to those who financially, logistically, vocally, or in any other capacity have helped make RESTFest past and present happen.  I can say unequivocally I wouldn’t have the knowledge I have now without your effort and support.

RESTFest was fun, it was educational, it was productive, it was nerdy, and I liked it.

See ya next year Greenville!

Don’t iterate the interaction design of your API.

Recently I have been encountering more and more cases of a misunderstood best practice being misapplied to justify a bad decision.  I’ve seen a general uptick as I’ve gained experience, due both to my increased knowledge and to the increasing diversity of knowledge and experience among developers in the field. One practice has stood out as particularly damaging to the API ecosystem: the agile tenet of simplicity.  This tenet advises the practitioner to add only the functionality and complexity required for the current known requirements.  On its face this seems like an obvious and harmless practice to follow.  How could creating a design with the lowest possible complexity (cyclomatic complexity, for the computer scientist) ever be a bad decision?

We will cross that bridge when we get to it!

I’ve been consistently hearing this argument with regard to adding functional or design complexity to new API development.  The practice of deferring complexity until necessary is generally sound, but fails utterly when applied to API design.  The reason is quite simple: there is only one chance to design an API, ever.  But wait, you cry, we can version the API! I’ve previously addressed the poor choice of versioning; nevertheless, if you pursue this option, the ill-advised use of versioning is a tacit admission of this fact.  If there is only one opportunity to define the design of an API, you simply cannot make it any less complex than it will need to be to satisfy the eventual end goals of the API as it evolves.

When best practices go wrong!

The problem comes from a fundamental misunderstanding of best practices as hard and fast rules rather than rules of thumb.  Advocates and evangelists loudly tout the benefits of their process, but often fail to acknowledge the existence of any scenario where their best practice simply isn’t.  The argument consistently boils down to: this solution is too complex for now, we will go back and fix it later when we have time.  But there are a few subtle built-in fallacies which become this approach’s Achilles’ heel.

The first is the belief that the price to repay this technical debt will not grow over time, or at worst will grow linearly.  There are certainly situations where this might be the case, but they are the exception, not the rule. The term technical debt was coined precisely because of the tendency for the debt to grow like compounding interest, or worse.  Worse still, it is very common that the weight of the legacy system, once released, actually prevents you from ever returning to address the problem at all.

The second is the naive assumption that the future will be less busy, that the team will maintain the desire to fix the flaws, and that the fortitude to expend capital on meeting the requirements will grow.  Case study after case study has proven this is overly optimistic and simply not true.  As the cost to fix an implementation or design flaw escalates, the cost-benefit tradeoff of leaving the code in place becomes ever more biased in favor of not touching ‘what isn’t broken’.

At the end of the day, these are simply the lies told by designers, developers, and stakeholders to themselves and others to justify an increasingly expensive, sub-optimal deliverable.

Assuming your team is stellar and defies the odds by prioritizing the rework process, following through is still completely dependent upon having the opportunity, and control of all dependencies, to seamlessly perform the work.  If there is even a single client outside of your team’s immediate control, your ability to complete this work quickly is severely degraded.

Agile: The buzzwordy catalyst and amplifier

There is nothing earth-shattering here, but I haven’t even touched on the whole story.  In the same paper that introduced ‘cyclomatic complexity’, Thomas McCabe also introduced the concept of essential complexity: the complexity innately required for the program to do what it intends to accomplish.  Under the guise of the tenet of simplicity, the essential complexity is often left unsatisfied, because the agile methodology places a burden of proof on additional complexity which is unforgiving and ultimately unsatisfiable.  In order to reach the known essential complexity of a program, you first have to prove the added complexity is actually essential.  It’s a classic ‘chicken or the egg’ problem with no answer.  Ultimately, this will most often direct your actions toward failing to meet essential requirements, through a failure to define, justify, or evaluate the essentiality of the added complexity.

The business decision, and business imperative, to do only the work required for now is deaf to technical concerns outside of the short term, regardless of the costs or savings.  This isn’t to say developers should always be in control of these decisions, but because the process is heavily biased against technical concerns, it is very important to communicate technical pitfalls and their costs beyond the technical audience.  The adoption of agile practices has actually increased the importance of a highly knowledgeable technical liaison who can push back when shortsighted goals will provide a quick positive payout saddled with negative longer-term value.  This is where it all comes back to the misunderstanding of best practices.

These teams are more often being led by practitioners who don’t truly understand a best practice’s business purpose.  Rigid adherence to, and often weaponization of, ‘best practice’ in these design discussions has only served to hide the inevitable costs associated with poor design until a later date, with the debt relentlessly compounding unimpeded.

You can’t put design off, so don’t!

I started this off by saying you can’t iterate the interaction design, so I want to be very clear about which parts of an API design can and cannot be iterated.  The design of an API is actually composed of two relatively straightforward and separate concerns, which I will call the interaction design and the semantic design.  The interaction design is the complete package of the way a client will interact with your service.  It includes security, protocol concerns, message responses, and required handling behavior which cuts across multiple resources, among many others.  The semantic design encompasses everything else, and it can and should be created and enhanced over time as domain requirements change.

Knowing the interaction design is permanent once completed, it’s important not only to get it right, but to ensure the design defines the capability to expand specific functionality which will need to change over time, for example the use of a new authentication scheme or filtering strategy.

It is impossible to list every requirement which will fall under the interaction design of your API, but here are some questions I’ve used during the initial design period to exclude the design and implementation of features which can wait.
  •  Does this feature change the way a consumer interacts with the API?
  •  Does this feature change the flow of an interaction with the API?
  •  Could later introduction of this feature break consumer clients?
  •  Could later introduction of this feature break cached resource resolution?
With a rigorous initial design session utilizing these questions, you should be able to determine the essential complexity of your API interaction design with much higher accuracy, and prevent the cost increases and consumer adoption pain that come from adding new value to your services in the future.

Unleashing generic hypermedia API clients

A true RESTful API has been called many things: hypermedia web API, ‘the rest of REST’, HATEOAS (the world’s worst acronym), or perhaps the newest, hAPI.  Regardless of what you call it, this concept has long been proclaimed to solve nearly all of your most difficult design problems when building a web service interface.  There is plenty of evidence to support the claims made by hypermedia evangelists over the years; however, one glaring omission is likely the cause of the slow adoption of hypermedia on RESTful services.  How do you consume such a service, and what do all of these link relations mean?  Building an effective hypermedia client is a more complex task than consuming a CRUD API, so an extremely difficult question to answer has been: when do the benefits outweigh the cost of complexity?  And once past that hurdle, how does a consumer know how to interact with the service?

It is no wonder adoption of a superior design is so slow when a more complex design leads to more complex clients.  The primary selling points for this style are longevity, scalability, and flexibility; however, the benefit from these traits is seen over a long period of time, making the complexity a difficult tradeoff to evaluate at the start.

We are all very familiar with good, seemingly simple hypermedia clients.  In fact, you are likely using your favorite one right now to read this.  If we know so much about building good hypermedia clients, why are hypermedia APIs still not the de facto standard?

The key to enabling adoption of hypermedia APIs is very simple: make them easier to consume.  The Open API Initiative, through the Swagger specification, has demonstrated the power and appeal of standard formats to enable rapid adoption of best practices in accelerated development cycles. I often call out the shortcomings of the specification, but it is critical to understand the cause of its successful proliferation across the web at large. The trick is to apply the lessons learned from this success to driving the adoption of semantic hypermedia.  To make a hypermedia API easier to consume, you create generic clients which encapsulate the complexity by establishing and adhering to a strict HTTP behavior profile.  Then you subscribe to or publish a semantic profile of the application, adding domain boundaries to the messages and actions.  Finally, you allow clients to tailor their hypermedia through requested goals of supported interaction chains.

Often hypermedia is used to augment CRUD services using binding formats like OAS.  In this scenario it simply can’t be relied on to drive the interaction with the service, as it has no guaranteed, or an unbounded, range of responses.  Establishing a range for the hypermedia domain semantics is critical to transitioning the role of hypermedia from augmentation to the vehicle for application state and resource capabilities.

The takeaway here is simple: if you want the robust flexibility offered by hypermedia APIs, then your focus should be on enabling strong generic hypermedia clients.  To build strong generic hypermedia clients, you need to adhere to strict service behavioral profiles to isolate the domain from the underlying protocol behavior.
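To make the idea concrete, here is a minimal sketch of the defining behavior of a generic client: it navigates by link relation names advertised in the response, never by hardcoded URL paths.  The HAL-style `_links` shape and the relation names are illustrative assumptions, not from any real service.

```python
# A generic hypermedia client resolves link relations found in the
# representation itself; it never constructs URL paths on its own.
# The HAL-style "_links" shape below is an illustrative assumption.

def follow(representation, rel):
    """Return the href advertised for a link relation, or fail loudly."""
    links = representation.get("_links", {})
    if rel not in links:
        raise KeyError(f"service does not advertise the '{rel}' relation")
    return links[rel]["href"]

# A hypothetical response body from the service:
doc = {
    "name": "widget-42",
    "_links": {
        "self": {"href": "/widgets/42"},
        "orders": {"href": "/widgets/42/orders"},
    },
}

print(follow(doc, "orders"))  # the client never built this path itself
```

Because the client only knows relation names, the service is free to move resources around; only the profile of relations must stay stable.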

Hypermedia APIs: Use extensive content negotiation

In my last post I touched on how important it is to insulate consumers from the immediacy of a breaking change.  Nothing you can do as a designer will allow you to create, on the first try, a perfect API which will never require change.  What you can and should do is reduce the likelihood of a breaking change as much as is feasible, and then allow consumers to gradually adopt changes on their own schedule.  In this post I’ll discuss the need for extensive content negotiation.

It has been stated, in the comments on these very guidelines no less, that there is a striking similarity between the 9th guideline and this, the 11th, as both rely on or discuss content negotiation.  Much like the first guideline, to embrace the HTTP protocol, the benefits, constraints, and reasons for content negotiation are sufficiently broad to merit multiple discussions to be properly addressed.  It is imperative a designer avoids hypermedia formats which prescribe URL patterning, because this can draw attention away from resource representation and affordance design.  The goal of this discussion is to address the rest of the content negotiation constraints, to prepare your designs for interaction with real traffic volume and diverse consumer demands.

As the API designer, your job is to provide the simplest service you possibly can to your consumers.  CRUD APIs like OAS (Swagger) often struggle with complex designs when domain functionality doesn’t map well to 4 methods.  Other solutions like GraphQL provide excellent results for captive audiences and internal services, but for external consumers often result in the same poor consumer experience. Quite simply, consuming the service correctly requires too much knowledge about how the service is built.  So how do you avoid making these same mistakes with hypermedia APIs?  You allow your consumers to interact with your service just about any way they want.  The fact is you will never be able to guess all the particular ways a consumer would want to interact with your service or tailor their requests, so don’t try.  The solution is to build your service as generically as possible and allow the consumers to choose which interaction mediums will be used.

What all should be negotiated?  The short answer is everything you can reasonably support which adds to the consumer experience.  A longer, non-exhaustive list of potential negotiation points:

  • Hypermedia Format (Content-Type)
  • Filter Strategy
  • Query Strategy
  • Pagination Strategy
  • Cache Control Strategy
  • Goals
  • Vocabulary
  • Sparse Fieldsets
  • Representation or Document Shaping
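As a sketch of what driving a few of these points could look like from the consumer side: only the Accept header below is standard HTTP content negotiation; the Prefer header mechanism exists (RFC 7240), but the token names shown are invented for illustration, and the parsing helper is deliberately naive.

```python
# Hypothetical request headers driving several negotiation points.
# Accept is standard content negotiation; the Prefer tokens are
# illustrative, not from any published profile.
headers = {
    "Accept": "application/hal+json;q=1.0, application/vnd.collection+json;q=0.8",
    "Prefer": "pagination=cursor; filter=rql; fields=name,price",
}

def negotiated_format(accept_header):
    """Pick the highest-q media type from a simple Accept header."""
    candidates = []
    for part in accept_header.split(","):
        media, _, params = part.strip().partition(";")
        q = 1.0  # absent q defaults to 1.0
        for p in params.split(";"):
            if p.strip().startswith("q="):
                q = float(p.strip()[2:])
        candidates.append((q, media.strip()))
    return max(candidates)[1]

print(negotiated_format(headers["Accept"]))
```

The service performs the mirror-image selection on its side and reports what it chose, so both parties converge on an interaction style without any hardcoded agreement.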

It’s a long list; does your service really need to support all of those negotiation points?  It should aim to support all of these and more, if they are reasonable and feasible for your service domain.  Yes, this adds a lot of complexity, but it’s crucial to focus on the consumer experience, and on the long-term payoff of creating a service which will happily satisfy consumer needs for years to come.

These negotiation points are all critical to supporting a wide breadth of consumers, but they are also central to providing service flexibility over time.  A service designed from the beginning to be generic, and to support a wide range of options across many different properties, already has the capability to support one more option in any particular property.  When a new hypermedia format comes out, or a new standard filter strategy, your service already provides multiple options for these properties, and supporting the change is nothing more than plugging in the appropriate functionality.  You can’t know what formats will be wanted in 5 years, but your service has been designed to account for changes over time, and the required upkeep is vastly lower than any alternative presented to date.

Design your API to negotiate with your consumers as much as possible, and you will have an enduring service your consumers will love to use for years.

Hypermedia APIs: Use flexible non-breaking design

In my last post in this series of hypermedia API guidelines, I discussed the need to decouple the design and implementation details of your API from the constraints of any particular format.  You likely aren’t designing your own format, but it is a good decision to avoid formats which require URL patterns, as they can cause confusion and increase the odds a consumer will make calls directly to URLs.  In this post I’d like to go through the follow-up to the ‘don’t version anything’ guideline, which fills in the remaining gaps in dealing with resource and representation change.  To support long term API flexibility, your design should leverage a strict non-breaking change policy, with a managed, long lived deprecation process.

As time passes, an API’s design can lose relevance to the piece of reality it is built to model.  Processes change, properties change, and priorities change, so it is crucial to maximize flexibility for change over time.  When using hypermedia APIs, it is important to understand the three types of changes you can make to your profile, and the appropriate way to manage each kind.  Optional changes modify the representations and their actions without any effect on current consumers and their bindings.  Required changes make additions to the profile which can be gracefully handled by a generic client.  Breaking changes, which remove items from the profile, require a client update to maintain compatibility.

In traditional, statically bound API styles, even optional changes would likely lead directly to consumer client changes, because the representations of resources are strongly coupled to the consumer.  A generic hypermedia client, however, is intentionally dumb when it comes to the properties of resources, so any unknown addition is simply handled in the default manner.

The story of required changes is much the same.  In the highly coupled service and consumer relationship, they demand constant maintenance and attention to continue to function.  A hypermedia API consumer client manages required changes through standard approaches: newly required generic fields can be flagged to the consumer as invalid without requiring any strong bindings in the consumer client.

Seen this way, the two changes which represent any difficulty for hypermedia APIs are the required and breaking changes.  In the case of required changes, a previously valid representation is no longer valid because a new property has been added, or a new action has been added to a representation that the client has no binding for.  A breaking change removes a representation or action from the profile which consumers have previously required or bound to.  With these definitions, it’s clear the real difficulty is in addressing breaking changes.  The solution to breaking changes can again be found in the very first guideline I discussed: use the HTTP protocol to advertise change.
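The three change types above can be classified mechanically.  This is a hedged sketch, assuming a profile is reduced to a dict of field names with a required flag; a real profile carries far more structure:

```python
# Sketch: classify a profile change as breaking, required, optional, or
# none. The profile shape (field name -> {"required": bool}) is a
# hypothetical simplification for illustration only.

def classify_change(old_profile, new_profile):
    removed = set(old_profile) - set(new_profile)
    if removed:
        return "breaking"  # removals force a client update
    added = set(new_profile) - set(old_profile)
    if any(new_profile[name].get("required") for name in added):
        return "required"  # generic clients can flag new mandatory fields
    if added:
        return "optional"  # additions current consumers safely ignore
    return "none"

old = {"name": {"required": True}}
new = {"name": {"required": True}, "nickname": {"required": False}}
kind = classify_change(old, new)  # "optional"
```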

Previously in these discussions I have noted how the hypermedia API will manage the range of bounded contexts available to consumers.  Diving into this concept a little further, the primary benefit of supporting a range of bounded contexts is to allow transparent, incremental versioning while honoring consumer preference in resource representations.  Many leading tech organizations and methodologies stress the importance of versioning the API, unaware or uncaring of the fact that doing so sows the seeds of future breaking changes.  By tracking the changes of your representations in the supported vocabularies, your service is able to leverage the HTTP 3xx response code family to inform consumers that change is imminent while still respecting their interaction in the vocabulary they know.  This allows consumers to upgrade gracefully on their own schedule, and greatly reduces the occurrence of high stress deadlines caused by your service’s evolution.  Through nuanced activity tracking and API orchestration, you will have an accurate view of exactly when particular representations or portions of the API are no longer in use, allowing you to confidently sunset old functionality knowing it will not result in a rude awakening for one of your extremely valuable customers.
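One possible shape of “leveraging the 3xx family” is a permanent redirect paired with a Sunset header (RFC 8594).  The sketch below models a response as a plain dict for simplicity; the Location value is hypothetical:

```python
# Consumer-side sketch: detect that the service is advertising a move
# and a sunset date, instead of discovering the change as a breakage.
# Response is a plain dict; a real client would read an HTTP library's
# response object.

def check_deprecation(response):
    """Return (next_url, sunset_date) if the service advertises a move."""
    if response["status"] in (301, 308):
        headers = response["headers"]
        return headers.get("Location"), headers.get("Sunset")
    return None, None

resp = {
    "status": 308,
    "headers": {
        "Location": "/profiles/account;v=2",  # hypothetical target
        "Sunset": "Sat, 01 Mar 2025 00:00:00 GMT",
    },
}
next_url, sunset = check_deprecation(resp)
```

A client seeing a Sunset date can log it, warn its operators, and migrate on its own schedule, which is exactly the graceful upgrade path described above.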

By leveraging the protocol in the standard way, we can keep breaking changes from immediately impacting consumers and demanding their full attention.  As I’ve mentioned elsewhere, creating a good consumer experience is critical to the success of your API, and a great way to keep consumers happy with your service is to not break their clients at 3am on a Saturday night.


Your API is your product: even if you have a UI

I’ve recently discussed the problems with nearsightedness in API design by comparing an OAS (Swagger) API to a hypermedia API.  Those discussions were very technical, targeted at an audience of API designers, and largely sidestepped the business and economic ramifications.  In this post I’d like to take a step back, remove my technical hat, and talk about the business, economic, and human benefits of supporting and maturing the proliferation of hypermedia APIs.  I’ll go through some differences between these two options from a business perspective to demonstrate the massive value of hypermedia APIs, and I’ll end with a related topic: the undue influence on the direction of technology from the venture capital backed world of hyper growth.

Your API is your product.

Let’s talk about the elephant in the room: the extremely common misconception that an API is nothing more than your UI app’s gateway to data.  The term REST, as the industry understands and uses it, reduces the value proposition of your API development to little more than a gateway between your product and how you store your product’s data.  It may provide some functionality, enhance performance, and shape the data in a way which is beneficial to the UI app development team, but it provides no value in itself.  Not only does this go directly against the path of progress towards the API economy, it wastes the opportunity to save time and money on redundant parallel effort.  You drive initial costs up with duplicated development effort, and maintenance costs go up due to similar bugs in many places.  Perhaps the most critical effect of this mistake is that you have almost certainly increased the time to market for your entire solution.

Your customer is everyone outside your API.

It may be difficult to look at your API as anything more than a means to provide your real products with the data they need to create value, but this thinking is guaranteed to hit your bottom line in a big way.  The API is your product, and anyone interacting with it is your customer.  The internal team developing your new mobile app?  Your customer.  The group responsible for maintaining your web application?  Your customer.  The outside parties looking to utilize your service without having to use your mobile and web apps?  Your customer.  Each of these groups shares the exact same goal: they want to utilize the functionality your API provides as quickly and easily as possible.  When your own employees and your outside customers all share the same goal, making your API easier to use is a tremendous opportunity to capture efficiency gains.  Anyone using your API, regardless of their affiliation with your company, wants to learn as little as possible about your API to meet their goals.

Solutions like OAS offer short term benefits to your product.  Developers can quickly get up and running, there is ample documentation explaining the ins and outs of your system, and users can leverage your products very quickly.  The catch is that these solutions perform extremely poorly in the long term.  Over time your customers must constantly maintain their code, continue to read and understand your business models to meet their goals, and watch their quick solutions turn into a nightmare of legacy code to fix.  The ample documentation you thought was such a victory becomes a high barrier to entry as your service matures.  The result is an extremely dissatisfied customer: one who won’t refer your product to a friend or colleague, and who is only waiting for a better opportunity to present itself before they leave your product in the past.  When they are gone, they aren’t likely to come back; they already know how bad it is to use your product.

When you decide to use a product like OAS to form the foundation of your API, you prioritize your needs over the needs of your customers.  The short term benefits of OAS disappear quickly, but the long term negative effects on your business and brand will be extremely costly to remedy.  A product which puts the priorities and goals of the customer front and center will drive referral sales, creating buzz and goodwill in the marketplace surrounding your brand.  If you want to create long term goodwill and revenue security, you need to prioritize how people feel about using your product.

You can’t sell a mega product.

Technology is forcing businesses to change how they sell their products to the market.  The concept of selling an entire package of solutions is quickly yielding to selling smaller, incremental sets of solutions which can be independently acquired and used when the customer needs them.  Digital products are quickly being commoditized down to the logic and value they add to your customers’ business processes.  If your business model is not changing, you will likely soon find your target market has dried up: inferior but modularized products will take the place of yours as customers learn to carve out the functionality they need without the unnecessary cost and complexity of larger bundled solutions.  Every organization is in a race to the bottom on the cost of providing products and services; if you force a customer to buy products they don’t need, you are continuously inviting them to seek alternatives.  When you promote goodwill and engagement with your customers, selling enhanced services through your easiest sales channel, your current customers, gets even easier.  Your customers are more likely to buy additional products from you when they are already satisfied with their current solutions.

You can’t sell a mega product, but you might be able to sell a customer all the parts of one.  Fixed API designs like OAS make segmenting your products difficult and unintuitive, while requiring a lot of management overhead which cuts deeply into the product’s margin.  The dynamic design of hypermedia APIs allows your product to be segmented naturally.  This enables marketing initiatives to directly target specific functionality and customer pain points, while adding very little overhead to eat into your profits.  If your segmentation isn’t intuitive and it isn’t easy to determine where the functional and license boundaries lie, your customer experience suffers, dragging your future sales potential down with it.

Your business probably isn’t hyper growth.

It is difficult to look at the success of companies like Facebook, Netflix, Amazon, and Uber and resist the temptation to copy the way they operate; however, it’s very likely the needs of this niche market do not match your organization’s or your industry’s needs.  The move fast and break things mentality of Silicon Valley and other venture capital funded startup hubs pairs extremely well with the short term benefits of API designs like OAS.  In the venture world a problem two weeks away can feel like two or three lifetimes.  Companies who intend to hyper scale for acquisition aren’t concerned with their customers’ success in two and a half years, because after they sell in two years it will be someone else’s problem.  Google and Amazon build, try, and sunset so many products that it’s folly for them to spend time worrying about long term benefits to themselves, let alone customers.  If you are reading this, there is a very good chance your needs don’t align with such short term goals, and trying to operate using the same tools and methods as these hyper scale companies will do a disservice to your customers and your brand.  Your business model is likely concerned with your customers’ satisfaction in two, five, even ten years.  Hyper scale companies have developed, and continue to develop, good tools for their needs; my advice is to look carefully to see whether those tools fit your needs, because it’s likely they don’t.

Hypermedia APIs are simultaneously an extremely proven design and an unexplored frontier.  The internet itself runs on the very same principles as a well-designed hypermedia API.  Developer tools in this space do currently lag behind alternatives like OAS, but they have the same potential for speed to market, prototyping, and integration tooling.  Investments in developing hypermedia APIs and the tooling around them are investments for the future on the scale of decades.  Hyper scale companies have created tooling which prioritizes their goal of short term gains; if your company is not primarily interested in short term gains, it is up to you to create the tools which prioritize the long term benefits that match your goals.  The long term benefits to your company and your customers of developing a hypermedia API have no equal; there isn’t even a good alternative to compare.  If your business is concerned about long term market sustainability, revenue, and customer retention, you should be looking into hypermedia APIs.

Hypermedia APIs: Decouple your design from a format

In my last post I discussed the start of the next group of guidelines with the use of vocabulary-provided goals to curate hypermedia interactions with the service.  This exciting idea allows truly domain driven interaction with your service, while remaining stateless and easy to consume.  The next guideline is more of a cautionary tale: decouple your design from your hypermedia format of choice.

When hypermedia is discussed today, I imagine the conversation ventures around the room discussing the different available formats.  You’ll hear mention of HAL, JSON API, JSON-LD, Hydra, Siren, and Collection+JSON among others.  The pros and cons of each are weighed, and eventually a consensus is reached and the team decides to use format ‘X’.  The particular format picked is irrelevant to this discussion; however, there is a chance the format picked will include something it shouldn’t: specifications for URL patterns.  The problem is that while the hard won victory of building the hypermedia API and client allows near effortless consumption of the service, formats which specify URL patterns greatly increase the odds a consumer will cheat and bind to a URL other than the root.

However, this isn’t the only concern with supporting a hypermedia format at specific URL patterns.  Suppose you had a requirement to support another format as well?  Not a problem: that format can just use the same URLs, as hypermedia makes the URLs irrelevant.  But what happens when the requirement for a third format comes in, and this one also prescribes URL patterns of its own?  Things start to break down here, and the service needs to start managing the context between multiple endpoints which are synonymous with each other.  This creates a variety of problems you really don’t want to have to deal with, like reduced caching and cache inconsistency, because different URLs aren’t cached as the same resource.

The easy, short, and best answer is to simply avoid the formats which prescribe a URL pattern.  If that’s not possible, leave out those portions of the format which you can, and if that doesn’t work, hopefully one of the other fantastic alternatives will provide the right set of attributes to fit your initial use case.
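The defensive consumer-side habit looks something like this sketch: bind only to the root, and resolve every other URL from the document’s links by relation name.  The document shape here is a hypothetical simplification of formats like HAL; real link objects carry more attributes:

```python
# Sketch: a client that never constructs URLs from patterns. It asks
# the hypermedia document for a relation and uses whatever href the
# service advertised, keeping URLs fully opaque.

def follow(document, rel):
    """Resolve the next URL from the document's links, never from a pattern."""
    links = document.get("links", {})
    if rel not in links:
        raise LookupError(f"service does not advertise rel {rel!r}")
    return links[rel]

root = {
    "links": {
        "self": "/",
        "accounts": "/x7f3/accounts",  # opaque; the service may reshape this freely
    }
}
next_url = follow(root, "accounts")
```

Because the client holds no URL knowledge beyond the root, the service can rearrange its URL space, or add a second format, without breaking anyone.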

Hypermedia APIs: The user has goals so listen!

In my last post I addressed the worst acronym ever, HATEOAS, and how to truly have hypermedia drive the stateful interaction of your application.  That discussion rounded out the more standard guidelines for creating hypermedia APIs, creating a nice foundation of understanding for the next four guidelines, which are part of the forthcoming hAPI specification to drive adoption of hypermedia APIs through reduced complexity and better tools.  In this first post addressing the next stage of hypermedia APIs, I would like to address goals.  Specifically, the goals of the consumers of your API, and putting some serious effort behind helping them achieve those goals.

If you take a step back and look at the interaction with a CRUD API, you should see a usage pattern which relies heavily on understanding the service provider’s implementation model: you must work out which interactions are required, and in which order, to accomplish a larger goal.  Due to my history in the financial sector and common familiarity with banking, I tend to use the creation of a checking account as an example to demonstrate the issue.

Suppose you had the following CRUD APIs:

/account
/address
/person

If you wanted to create a new checking account, considering documentation like OAS doesn’t show larger interaction arcs, what would you do?  The most likely implementation scenario requires you to create a person with an address, and then use this person and address to create an account.  The problem is that I had to reason my way through to this conclusion as the service implementer; as a consumer I don’t, and shouldn’t be forced to, care about the concerns of implementing a service in order to consume it.  This was an easy enough example, so you might be inclined to shrug this off as manageable.  If you are not so inclined, then I’m sorry for your trouble, because you have been through the same pain I have, and it wasn’t fun.  If, however, you have so far been spared the joy of integrating a service designed around someone else’s undocumented internal data model of confusion, then I have concocted just the elixir to prove how real this problem is.
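To make the inferred ordering concrete, here is a sketch of the sequence a consumer ends up hard-coding against the CRUD design above.  The payloads and the FakeClient are hypothetical stand-ins; the point is that the ordering lives only in the consumer’s head:

```python
# The call order below is nowhere in the API's documentation; the
# consumer had to reverse-engineer it from the implementer's data model.

class FakeClient:
    """Stand-in for an HTTP client so the sequence can run on its own."""
    def __init__(self):
        self.calls = []

    def post(self, path, body):
        self.calls.append(path)
        return {"id": len(self.calls), **body}

def create_checking_account(client):
    person = client.post("/person", {"name": "Ada"})             # step 1: inferred
    address = client.post("/address", {"person_id": person["id"],
                                       "street": "1 Main St"})   # step 2: inferred
    return client.post("/account", {"person_id": person["id"],   # step 3: inferred
                                    "type": "checking"})

client = FakeClient()
account = create_checking_account(client)
```

Every consumer of this API rediscovers, and then hard-codes, the same three-step dance.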

Assuming at this point we’re all on the same page regarding the previously mentioned pain, we need to look at the hypermedia solution, which is without a doubt a much friendlier interface to a very similar, albeit muted, frustration.  You see, despite the ease of engagement with the service being significantly better, I am still required to understand the domain model, internal implementation, and composition structure well enough to make value judgements about how to crawl the service intelligently.  As a consumer correctly discovering the service, I have two choices: try everything, or try everything while guessing at relationships.  Neither sounds particularly appealing, but at least it’s better than using CRUD.

I propose a third option: as part of the vocabulary for the service, define domain relevant goals which a consumer can provide to express their domain intent for consuming the service.  In the banking example above, it would be much easier if the profile linked by the home document contained a goal of “new-customer-create-account” which I could provide to the server in order to tailor the hypermedia of the responses and steer my client towards the new account goal.  Hypermedia APIs are a great leap forward in usability; by using goals with hypermedia we can greatly reduce the interaction difficulty and enhance the speed at which we can integrate and release new APIs.
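One minimal way to picture goal curation: the server keeps an ordered plan per goal and advertises only the next sensible action in its hypermedia response.  The goal name follows the banking example above; the action names and plan structure are hypothetical illustrations:

```python
# Server-side sketch of stateless goal curation. Given the consumer's
# declared goal and the actions they have completed so far, the server
# decides which single action to advertise next.

GOALS = {
    "new-customer-create-account": [
        "create-person",
        "create-address",
        "create-account",
    ],
}

def curate(goal, completed):
    """Return the next action to advertise for this goal, or None when done."""
    for action in GOALS[goal]:
        if action not in completed:
            return action
    return None

step1 = curate("new-customer-create-account", set())               # "create-person"
step2 = curate("new-customer-create-account", {"create-person"})   # "create-address"
```

The interaction stays stateless: the consumer carries its progress, and the server merely filters the hypermedia it returns, so a generic client can walk the goal without any domain knowledge.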

The hypermedia API designer should not only look to create the appropriate vocabulary for the service, but also to encapsulate the larger goals of the domain to provide stateless hypermedia curation for their consumers!  Together these will allow you to reduce the amount of knowledge a consumer needs about a particular domain or implementation in order to successfully consume it.