Design Patterns In HTTP
We’ve made the case on this blog that REST is the wrong way to try to understand HTTP. We’ve also said that HTTP is worthy of study if only because the Web runs over it. We’ve even contributed a few introductory blog posts to the subject.
But that raises the question—what’s the big picture here? Is HTTP’s success due to the Web, or is it the other way around? Is there some brilliant insight that HTTP captures, or is Fielding’s dissertation on REST, which he didn’t expect anyone to read (pretty typical for a dissertation), the best we can hope for?
The Big Picture
HTTP is a distributed hypermedia application protocol. The thing we call the Web is the main application using it. If you want to build a different application, it makes sense to build a protocol for that application. Right?
Which is, in fact, what many developers do, tunneling custom application protocols via HTTP. And it’s an understandable instinct. Tim Berners-Lee’s original design documents discuss why HTTP isn’t itself based on an RPC implementation, as does Fielding’s dissertation. And Fielding even goes so far as to argue that HTTP is not a general-purpose application protocol.
Hypermedia Means Information Graph
But here’s where the law of unintended consequences kicks in. The phrase distributed hypermedia obscures the thing it describes. What we’re really talking about here is an information graph overlaid onto a network graph. Which is an incredibly general-purpose thing, even more so because the network in question is the Internet.
Tim Berners-Lee was entirely aware of this from the start, which is probably why he’s still talking about the importance of linked data. Unfortunately, Berners-Lee tied these ideas to RDF, OWL, and SPARQL, none of which have taken off. But Berners-Lee and Fielding had also embedded this idea in another technology—one which had already taken off: HTTP. HTTP was optimized not for hypertext, but for hypermedia, including data. Features like content negotiation and caching don’t particularly care what they’re negotiating or caching, be it hypertext, images, video, or…data.
Unsurprisingly, a protocol optimized for information graphs overlaid onto the Internet turns out to be pretty general purpose. That insight is exactly the reason why frameworks like Relay and Falcor are graph-based. (And why they’re misguided: they’re tunneling one graph-based protocol via another, while implementing a strict subset of its features.) We’ll appeal here to the Pareto Principle and say that 80% of Internet applications can be mapped into an information graph overlaid onto the Internet. More succinctly, most Internet applications can be modeled as distributed hypermedia. Thus, HTTP is not merely an application protocol for distributed hypermedia, but for most Internet applications.
Or, it could be, anyway. Much like going from a database to an object model, modeling an application as an information graph requires deliberate and sometimes significant effort. For many developers, the payoff for making this effort is unclear and so naturally they’re reluctant to do it. On the other hand, Falcor and Relay, combined, had nearly 40,000 downloads this past month alone. So it’s equally clear that developers are willing to make the effort once they understand the payoff.
In fact, we can pretty much just lift the pitches made by Falcor and Relay and adapt them for use with HTTP. Here’s the What Is Falcor? section from the Falcor Web site with the changes necessary to make it about HTTP:
~~Falcor~~HTTP is ~~middleware~~a protocol. It is not a replacement for your application server, database, or MVC framework. Instead ~~Falcor~~HTTP can be used to optimize communication between the layers of a new or existing application.
We can do a similar exercise with Relay, but let’s instead look at the three main benefits that Relay promises:
Declarative: “Never again communicate with your data store using an imperative API.” HTTP has precisely the same design goal—to model an application as distributed hypermedia, with a uniform interface.
Colocation: “Queries live next to the views that rely on them, so you can easily reason about your app.” Again, HTTP has the same design goal of allowing the client to drive interactions with the server via content negotiation, and to cache information locally as much as possible.
Mutations: “Relay lets you mutate data on the client and server using GraphQL mutations, and offers automatic data consistency, optimistic updates, and error handling.” HTTP provides similar features via caching and push. HTTP does not itself provide error handling, but it does standardize error conditions so they can be handled uniformly—and if we were writing a framework, we could provide the rest.
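To make “handled uniformly” concrete, here’s a minimal sketch of a client reacting to HTTP’s standardized error conditions with no application-specific knowledge. The `Outcome` type and `classify` function are hypothetical names invented for this illustration, not part of any real library; the status-code semantics are HTTP’s own:

```python
# Hypothetical sketch: because HTTP standardizes error conditions,
# a generic client can decide how to react without knowing the application.
from enum import Enum

class Outcome(Enum):
    SUCCESS = "success"
    REDIRECT = "redirect"     # 3xx: follow the Location header
    FIX_REQUEST = "fix"       # 4xx: the request itself is wrong
    RETRY = "retry"           # transient failure; retry idempotent requests

def classify(status: int) -> Outcome:
    """Map any HTTP status code to a generic client reaction."""
    if 200 <= status < 300:
        return Outcome.SUCCESS
    if 300 <= status < 400:
        return Outcome.REDIRECT
    if status in (429, 503):  # explicitly retryable (often with Retry-After)
        return Outcome.RETRY
    if 400 <= status < 500:
        return Outcome.FIX_REQUEST
    return Outcome.RETRY      # remaining 5xx: server fault

assert classify(404) == Outcome.FIX_REQUEST
assert classify(503) == Outcome.RETRY
```

A framework could hang retry policies, redirect following, and error reporting off a mapping like this once, instead of every application reimplementing it.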
Fast! Reliable! Secure!
In short, what’s missing here is the open source project page that promotes HTTP with the gusto of any modern open source project. And, of course, the libraries that leverage the features of HTTP instead of encouraging developers to write their own custom protocols. Even the best of the existing HTTP libraries fail to do this in a way that makes developers’ lives simpler. This is why developers are flocking to Relay and Falcor instead of learning more about HTTP. Even though HTTP has similar design goals, and even though Relay and Falcor are running over HTTP, it’s actually easier for developers to get results with Relay and Falcor.
We can say this with confidence because we’ve spent a lot of time experimenting with frameworks like Resourceful and hapi, looking at efforts like Swagger and RAML, and even building our own tools. (One of which we’re really excited about, but that’s another post.) It’s revealing that neither Resourceful nor hapi even mentions HTTP or REST anymore on its promo page. Swagger and RAML simply boast that they can help you build REST APIs without really explaining why you would bother. (And they also get the concept of a resource wrong, so…yeah.)
What would the promo page for a real HTTP framework look like, one that aimed to be as helpful as Falcor or Relay? Maybe a bit like this:
Simple: Reason more easily about your code, knowing clients won’t get unexpected responses from the server.
Reliable: Validate requests and responses using well-defined schemas, to catch problems in development instead of production.
Secure: Schema validation and an extensible authorization mechanism make it easy to catch bad requests, even before they reach your server!
Adaptable: Clients can upgrade incrementally, at the resource level, rather than all at once, big-bang style.
We could go on. All we’re doing, obviously, is taking the features supported by HTTP and putting them in terms of the real, often overlooked, benefits.
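The “Reliable” item above, for example, is just request validation done early. Here’s a deliberately minimal sketch—the schema format (field name mapped to an expected type) is invented for illustration; a real framework would use something like JSON Schema tied to a media type:

```python
# Hypothetical sketch: validate a request body against a declared schema
# before it reaches application code, catching problems in development.
SCHEMA = {"title": str, "pages": int}  # illustrative toy schema

def validate(body: dict, schema: dict) -> list[str]:
    """Return a list of validation errors; empty means the body is valid."""
    errors = []
    for field, expected in schema.items():
        if field not in body:
            errors.append(f"missing field: {field}")
        elif not isinstance(body[field], expected):
            errors.append(f"{field}: expected {expected.__name__}")
    return errors

assert validate({"title": "HTTP", "pages": 300}, SCHEMA) == []
assert validate({"title": "HTTP"}, SCHEMA) == ["missing field: pages"]
```

Because the schema can be advertised via the media type, a gateway can reject malformed requests before they ever reach the origin server—the “Secure” bullet above.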
Design Patterns In HTTP
Back in 1994, Design Patterns was published, and it was enormously influential within the software development community. There was an inevitable backlash, but the basic idea is sound. A design pattern is just a structured description of how to solve a particular class of programming problem. Since network programming is a subclass of programming, we can make use of design patterns to help us there, too.
As it happens, this is a big part of what’s missing when we talk about HTTP. We first noticed this after publishing our post describing HTTP as a distributed key-value store. For a lot of developers, that insight was an epiphany. But since HTTP isn’t actually a distributed key-value store, it’s more accurate to talk about that as a design pattern. And HTTP is loaded with design patterns for networked applications, which, again, isn’t surprising, since most networked applications map pretty well into distributed hypermedia applications, which is what HTTP was designed to support. What’s missing is the description of each such design pattern.
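That key-value design pattern can be made concrete in a few lines. In the sketch below—an illustration, not a real client or server—an in-memory dict stands in for an origin server; the URL acts as the key, the representation as the value, and the HTTP methods map directly onto store operations:

```python
# Sketch of the "distributed key-value store" design pattern:
# URL = key, representation = value; status codes follow HTTP semantics.
class KeyValueHTTP:
    def __init__(self):
        self.store = {}  # stands in for an origin server

    def request(self, method, url, body=None):
        if method == "GET":
            return (200, self.store[url]) if url in self.store else (404, None)
        if method == "PUT":                 # idempotent "set"
            created = url not in self.store
            self.store[url] = body
            return (201 if created else 200, body)
        if method == "DELETE":              # idempotent "delete"
            self.store.pop(url, None)
            return (204, None)
        return (405, None)                  # Method Not Allowed

kv = KeyValueHTTP()
assert kv.request("PUT", "/users/1", {"name": "Ada"}) == (201, {"name": "Ada"})
assert kv.request("GET", "/users/1") == (200, {"name": "Ada"})
assert kv.request("DELETE", "/users/1") == (204, None)
assert kv.request("GET", "/users/1") == (404, None)
```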
We say “content negotiation” but we don’t talk about how it helps address the versioning problem. We talk about caching without talking about how it helps optimize network traffic. We take all that as obvious…but if it were obvious, we’d (a) have more frameworks to help us do it and (b) Relay and Falcor would probably be those frameworks instead of the frameworks they are.
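Take the caching example. The way HTTP caching optimizes network traffic is via conditional requests: a cache revalidates a stored response with `If-None-Match`, and a `304 Not Modified` means “use your copy,” so the body never crosses the wire again. A minimal sketch of the server side (the function and values are illustrative, not a real implementation):

```python
# Sketch of conditional requests: a 304 saves retransmitting the body.
def server(etag_on_server, if_none_match):
    """Return (status, body) given the current ETag and the client's cached one."""
    if if_none_match == etag_on_server:
        return (304, None)                 # Not Modified: no body on the wire
    return (200, "full response body")     # fresh copy, with a new ETag

# First request: nothing cached yet, full transfer.
assert server('"v1"', None) == (200, "full response body")
# Revalidation: the cache presents its stored ETag and skips the transfer.
assert server('"v1"', '"v1"') == (304, None)
# Resource changed: ETag mismatch, fresh body returned.
assert server('"v2"', '"v1"')[0] == 200
```

None of this requires the server to know whether the body is hypertext, an image, or data—which is exactly the generality the pattern description should call out.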
Here’s how a design pattern for content negotiation might look (adapted from SOAPatterns.org):
Requirement: How can servers introduce incremental changes without breaking existing clients?

Problem: As requirements change and new clients are introduced, clients may have disparate data requirements.

Solution: Each client includes its preferred format (such as a schema version) in each request. The server provides each client its preferred format.

Application: Particularly useful in place of API versioning. Data types can be versioned incrementally, allowing clients to upgrade gradually to newer versions. Also useful for negotiating compression, language, and character encodings, and even allowable requests.

Mechanism: HTTP supports this through custom media types and the use of the Accept and Content-Type headers.

Impacts: Caches must maintain caching metadata to ensure cached responses match the client request. Clients must specify supported formats in each request.
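The solution half of this pattern can be sketched in code. This is an illustrative implementation only—the vendor media types are made up, and a production server would use a complete Accept parser with wildcards and tie-breaking rules:

```python
# Sketch of server-driven content negotiation: the client ranks formats
# in Accept (with q-values); the server picks the best one it can produce.
def parse_accept(header):
    """Parse an Accept header into (media_type, quality) pairs, best first."""
    prefs = []
    for part in header.split(","):
        fields = part.strip().split(";")
        mtype, q = fields[0].strip(), 1.0
        for param in fields[1:]:
            key, _, value = param.strip().partition("=")
            if key == "q":
                q = float(value)
        prefs.append((mtype, q))
    return sorted(prefs, key=lambda p: -p[1])

def negotiate(accept, available):
    """Return the client's most-preferred media type the server can serve."""
    for mtype, _ in parse_accept(accept):
        if mtype in available:
            return mtype
    return None  # real servers would answer 406 Not Acceptable

# A v2-aware client gets v2; an older client still gets v1 — no breakage.
available = ["application/vnd.app.v1+json", "application/vnd.app.v2+json"]
assert negotiate("application/vnd.app.v2+json, application/vnd.app.v1+json;q=0.5",
                 available) == "application/vnd.app.v2+json"
assert negotiate("application/vnd.app.v1+json",
                 available) == "application/vnd.app.v1+json"
```

This is the versioning payoff in miniature: the server ships v2 alongside v1, and each client upgrades whenever it chooses, at the resource level.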
Here’s a partial list of other such design patterns: self-describing messages, edge caching, content-based caching, partial updates, global identifiers, parameterized identifiers, load balancer, security gateway, message decorator, discovery, reflection. Again, this shouldn’t be surprising. HTTP supports everything the Web needed, because, if it hadn’t, the Web would just be known today as the failed successor to Gopher. Since the Web is a distributed hypermedia application, also known as an information graph, stored across a network, HTTP also has everything most Internet applications need. We just either don’t know it’s there or don’t know how to make use of it.
Create More Web
Solving that problem is obviously beyond a single blog post. Maybe even beyond the scope of several blog posts. Perhaps it requires an entire book…we’ll stop there, since we still haven’t published our book on remote work. (It’s coming, we swear.) Still, it’s worth doing, because the real payoff of building HTTP-based hypermedia applications is that they create more Web—which means your application doesn’t live on an island, but is part of that giant information graph we call the Web.