HTTP Is The New Lisp
This is Greenspun’s Tenth Rule:
Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
You can tell how old it is by the fact that C and Fortran are given as examples of popular languages. We don’t personally know this Greenspun person, but let’s just assume he hung out with Ada and Alan back in the day.
At any rate, this quote inspired us a while back to invent the rather immodestly named Yoder's Tenth Rule:
Any sufficiently complicated API contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of HTTP.
In other words, HTTP is the new Lisp.
We’ve Already Got One
HTTP addresses the concerns common to network applications. These are things like naming, authentication, caching and compression, and so on. These features were designed by a pioneer of open source, whose resume now includes something about, oh, you know, designing the network application protocol for the Web.
You’d be justified in thinking that the first place we should look when confronted with a new requirement for an HTTP-based API is, well, within HTTP itself. But we don’t do that.
Case Study: Resumable Uploads
Let’s consider an increasingly common scenario: resumable uploads. This scenario hits close to home for us, because we’ve implemented this feature for several clients now (turns out sharing videos is a thing the kids are into these days). Amazon, Google, and Dropbox have all taken a crack at implementing this feature, and they’ve all done it differently. So we have to re-implement it each time.
Mind you, they are all functionally equivalent. If our experience is at all representative of the rest of the industry, the lack of a standard interface for this is likely costing the industry tens, or even hundreds, of millions of dollars, money that might have otherwise made its way to you, the developer, in the form of higher wages. (Yes, open standards can make you wealthier. Who knew?)
And it didn’t have to be that way.
Instead, we could be living in a world where all APIs offered the same interface for resumable uploads, just as they do for, say, caching and compression. But in order to understand why that didn’t happen, let’s first take a step back so that we can see the larger problem. Namely, the tendency to dismiss the design and architecture of HTTP because we believe we can do better.
Dropbox Knew Better
We’re going to make an example out of Dropbox. We use Dropbox, it’s a great service, and the people who built their API are undoubtedly wonderful people. But they would also have a better API today if they’d taken the time to better understand HTTP.
One of the most basic ideas in HTTP is that resources all have a uniform interface. Resources as diverse as videos and bank accounts can all be treated the same way, using a common set of methods. This idea of a uniform interface is fundamental to the design of HTTP.
In an API for uploading and downloading files, you’d send a PUT request to the file URL to upload it, and a GET request to that same URL to download it. But Dropbox chose to have developers POST to endpoints named for the operation being performed. This is effectively a remote procedure call via HTTP. Which is interesting exactly because HTTP’s design is most definitely not based on RPCs. In fact, the RPC approach Dropbox chose is, more or less, completely the opposite of having a uniform interface.
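The contrast shows up in the shape of the requests themselves. A minimal sketch (the URLs here are hypothetical, not Dropbox's actual endpoints):

```python
# Uniform interface: one resource URL, and the standard method carries
# the meaning of the operation.
uniform_upload = "PUT /files/report.pdf HTTP/1.1"    # write the resource
uniform_download = "GET /files/report.pdf HTTP/1.1"  # read the same resource

# RPC style: the operation is encoded in the URL, and the method is
# just a carrier (almost always POST).
rpc_upload = "POST /files/upload HTTP/1.1"
rpc_download = "POST /files/download HTTP/1.1"

# With the uniform interface, generic intermediaries (caches, proxies)
# can reason about the request; in the RPC style, every operation is
# opaque and only the vendor's SDK knows what it means.
assert uniform_upload.split()[1] == uniform_download.split()[1]  # same URL
assert rpc_upload.split()[1] != rpc_download.split()[1]          # different URLs
```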
Hopefully it’s uncontroversial to suppose that the authors of HTTP were familiar with RPCs, and that they could have based the design of HTTP on RPCs if they had thought that was a good idea. But they didn’t think it was a good idea, and we might even expect they had compelling reasons for thinking that.
Regardless, the API designers at Dropbox believed they knew better.
Again, this isn’t about hating on Dropbox. Lots of other APIs are effectively RPC-based. But Dropbox is a successful company that presumably has the resources to study HTTP and build the API in any style they choose. And this is version 2 of the API, so we can reasonably assume going the RPC route was a considered decision. And that they really do believe that an RPC-style approach is a better choice than the key-value semantics baked into HTTP.
It’s safe to say, Dropbox was not looking to HTTP for answers.
Dropbox Versus HTTP
This brings us to the Dropbox API for resumable uploads. You begin with a call to /upload_session/start, which creates an upload session and uploads the first part. From there you can add additional parts with /upload_session/append. Finally, you call /upload_session/finish with the last part.
This makes perfect sense. Start, append, finish. What could be simpler?
Maybe that’s the wrong question. Suppose for a moment that Dropbox had been looking to HTTP for answers. In this alternate universe, they might have come across the specification describing range requests:
Hypertext Transfer Protocol (HTTP) clients often encounter interrupted data transfers as a result of canceled requests or dropped connections. When a client has stored a partial representation, it is desirable to request the remainder of that representation in a subsequent request rather than transfer the entire representation. Likewise, devices with limited local storage might benefit from being able to request only a subset of a larger representation, such as a single page of a very large document, or the dimensions of an embedded image.
Translation: HTTP supports resumable downloads—out of the box. But we need to support resumable uploads. Now, contrary to popular belief, HTTP is, by design, minimally specified, and the designers apparently didn’t feel there was any compelling need to specify an interface for resumable uploads. One plausible reason for this was to simplify things for implementors. Resumable downloads, as specified, can take advantage of HTTP caching. So there was a potential performance advantage in including them in the spec, at minimal cost. Even then, it’s only an optional spec. And since HTTP does not address write caching (because that would require replication between caches, a significant increase in implementation complexity), there was perhaps no benefit to addressing resumable uploads.
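Resuming a download with what's already in the spec takes exactly one header. A sketch (the helper function is ours, shown only to make the mechanics concrete):

```python
def resume_download_headers(bytes_already_stored: int) -> dict:
    """Request everything from the given offset onward, using the
    open-ended byte-range syntax from the range-requests spec
    (e.g. 'bytes=500-' means 'all except for the first 500 bytes')."""
    return {"Range": f"bytes={bytes_already_stored}-"}

# A client that stored the first 500 bytes before the connection dropped
# asks only for the remainder; a capable server answers with a
# 206 Partial Content response rather than the whole representation.
print(resume_download_headers(500))  # {'Range': 'bytes=500-'}
```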
But how would that help us with resumable uploads? Well, HTTP explicitly allows for extensions to the core protocol. (In fact, range requests are defined as an optional extension.) Range requests are already specified in detail. To further quote from the spec:
The following are examples of Content-Range values in which the selected representation contains a total of 1234 bytes:
The first 500 bytes:
Content-Range: bytes 0-499/1234
The second 500 bytes:
Content-Range: bytes 500-999/1234
All except for the first 500 bytes:
Content-Range: bytes 500-1233/1234
The last 500 bytes:
Content-Range: bytes 734-1233/1234
We’d need only extend HTTP to allow range requests for uploads. Our API would now offer support for both resumable uploads and downloads (which would be handy on a mobile client), arbitrary byte ranges, and range caching.
More importantly, by incrementally extending HTTP, instead of layering an RPC API on top of it, we’ve made it possible for other vendors to use the exact same interface. Not just a compatible interface, the same interface. Again, think of how HTTP handles caching and compression. We don’t think of each API as providing an interface for caching and compression, we think in terms of the API supporting those features within HTTP.
Dropbox didn’t do this because they weren’t looking for it, which is the deeper lesson here. Of course, it’s not all on Dropbox. Anyone else could have done the same thing.
Like say, Google.
Google Versus HTTP
Google also whiffed on using range requests in their Cloud Storage API. Instead, their idiosyncratic RPC-style API introduced new HTTP headers that appear to do the exact same thing as headers already in the spec.
Here’s a quote from the Cloud Storage API docs on performing resumable uploads:
Add the following HTTP headers:
X-Upload-Content-Type. Required. Set to the MIME type of the file data, which will be transferred in subsequent requests.
X-Upload-Content-Length. Optional. Set to the number of bytes of file data, which will be transferred in subsequent requests.
Content-Type. Required if you have metadata for the file. Set to application/json; charset=UTF-8.
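Side by side, the duplication is plain (the values here are illustrative):

```python
# Google's resumable-upload initiation headers...
google_style = {
    "X-Upload-Content-Type": "image/gif",
    "X-Upload-Content-Length": "47022",
}

# ...and the headers HTTP already defines for exactly this information.
standard_http = {
    "Content-Type": "image/gif",
    "Content-Length": "47022",
}

# Each custom header is a standard header with a vendor prefix bolted on.
assert all("X-Upload-" + name in google_style for name in standard_http)
```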
Ironically, in the version of the spec Google was likely using when they designed this feature (a similar interface was used in older APIs), Content-Length was defined only a few paragraphs above Content-Range. It’s like losing the spelling bee by misspelling penumbra after you just nailed feuilleton.
Update: Nicolas Grilly pointed out (thank you, Nicolas) that Google’s Cloud Storage API does, in fact, allow you to use Content-Range in exactly the manner we’re suggesting. However, this capability is introduced in a section on error handling, after describing the RPC-style API. You have to study the example—again, in a section on error handling—to realize that the RPC-style API is redundant. This is even more like our spelling bee metaphor, in that you end up implementing both APIs on the client (presuming you want error handling). Still, it’s exciting to see a step in the right direction.
Update: It turns out the Content-Range documentation is there, just in a tab that we didn’t notice. Still, we’re going to stick to our guns here, since Content-Range is given second billing to the RPC approach. (Thanks again to Nicolas for pointing this out.)
Amazon Versus HTTP
We hate to spoil the suspense, but Amazon S3 does more or less the same thing as Dropbox. Not to the point where you can just use the same code, of course. That would be too easy. However, it is close enough that we can be confident that extending range requests to support uploads would have worked. Just like it would have worked for Dropbox. Or Google.
You May Say I’m A Dreamer
Imagine for a moment that one of these three vendors had spent enough time studying the HTTP specification to realize they could simply (if not trivially) extend it to support resumable uploads. Imagine further that, seeing this shining beacon of interoperability, the other two vendors adopted the same approach. At that point, we’d have a de facto standard for resumable uploads. Which, with Google, Amazon, and Dropbox behind it, would likely have become an actual standard, much like the current spec for resumable downloads.
Indulge us by continuing just a bit further down that rabbit hole. Whether an actual or de facto standard (and, remember, either way, we’re talking about only a modest extension to the current standard), it wouldn’t take long for Web frameworks like Rails, Django, or Express (on the server), and Backbone, Ember, Angular, and so on (on the client), to add support for it, too. We can even imagine iOS and Android adding support. Why wouldn’t they?
After all, the effort of supporting the standard would be worthwhile—even obligatory—no different than supporting gzip compression or JSON content types, because you could use it to handle large file uploads for virtually any service, be it Dropbox, S3, or Google Storage! The end result would have effectively been that resumable uploads were built into the Web, no different, in principle, from caching and compression.
Hiding In Plain Sight—For Two Decades
Let’s look again at the HTTP spec, specifically the section concerning the 206 (Partial Content) status code. It tells us that after making a range request, we can expect a response which looks more or less like this:
HTTP/1.1 206 Partial Content
Date: Wed, 15 Nov 1995 06:25:24 GMT
Last-Modified: Wed, 15 Nov 1995 04:58:08 GMT
Content-Range: bytes 21010-47021/47022
Content-Length: 26012
Content-Type: image/gif
This HTTP response message says:
Good news! The server can and is responding to your range request, for the 26,012 bytes between the 21,010th and the 47,021st bytes, inclusive, for an image that is probably the first-ever picture of a cute kitten on the Internet, given that it was last modified roughly twenty years ago.
This last piece of information is especially exciting because it implies that range requests have been part of the standard since 1995!
We can infer, thanks to Yoder’s Tenth Rule, that there are developers somewhere, right now, younger than the features of HTTP they are badly re-implementing.
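Everything a resuming client needs is in that one Content-Range header. A sketch of pulling it apart (the parser is ours, shown only to make the arithmetic explicit):

```python
import re

def parse_content_range(value: str) -> tuple:
    """Parse a value like 'bytes 21010-47021/47022' into
    (first_byte, last_byte, total_length)."""
    m = re.fullmatch(r"bytes (\d+)-(\d+)/(\d+)", value)
    first, last, total = (int(g) for g in m.groups())
    return first, last, total

first, last, total = parse_content_range("bytes 21010-47021/47022")

# Ranges are inclusive on both ends, so the byte count agrees with the
# Content-Length in the example response above.
assert last - first + 1 == 26012
assert total == 47022
```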
If A Spec Is Superseded, But No One Read It…
We can all remember a time when this was all we knew about HTTP: GET, POST, and sometimes PUT. Oh, and DELETE, I think? 200 is okay. Three-hundreds do something with forms. Four-hundreds are bad. Five-hundreds are worse. 419, I’m a teapot.
As we’ve seen, this mentality is limiting. HTTP is a sophisticated and wildly successful protocol. Combine all the calls made to all the development frameworks ever written and they still would only amount to a fraction of the calls made via HTTP in the past hour. SWAG. At the very least, we ought to be able to learn something from it. Right?
As an industry, we’ve been insisting for two decades now that we’re quite certain that tunneling custom (typically RPC-based) protocols via HTTP is for sure better than simply using HTTP as it was designed. We justify that by saying we know what our applications need, and all we need is a simple RPC mechanism.
Let’s move on to our next case study: one of the most widely used features of HTTP, Basic Auth.
Browser Vendors Versus HTTP
If you’re a developer, you use HTTP Basic Auth with APIs all the time.
If you’re not a developer, the only experience you’ve ever had with HTTP Basic Auth is an ugly and annoying modal dialog.
There is, of course, nothing in the spec for Basic Auth which says that it has to be implemented as an ugly and annoying modal dialog. The spec speaks entirely in terms of cryptography, security concerns, and HTTP headers.
But for some reason, browser vendors adopted this unpleasant convention. The result: nobody wants to use Basic Auth in a browser. Why would they? I mean, it’s not only annoying and ugly, but you can’t log out once you log in.
Instead, we re-build this functionality into every application and framework we write, in virtually every language we write in. The scale of redundant effort here makes our resumable uploads scenario look like a mere byte in a Lord Of The Rings stream—the entire trilogy. Ibid.
Sure, Basic Auth isn’t perfect. Indeed, some people really hate it. But it’s good enough for the GitHub API, so it’s good enough for a lot of other scenarios, too. The only critical problem with Basic Auth is how browsers have implemented it.
Imagine if you could just use an HTML form to send an HTTP Basic Auth request. How much time might that have saved over the past two decades? How much money? (And, remember, this kind of drag on our collective productivity ultimately depresses our wages. So this shit is literally costing you money.)
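There is no deep magic to re-implement, either; the entire mechanism is one header. A sketch, using the canonical example credentials from the Basic Auth spec (RFC 7617):

```python
import base64

def basic_auth_header(user: str, password: str) -> dict:
    """Build an Authorization header per RFC 7617:
    base64-encode 'user:password' and prefix it with 'Basic '."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

# The RFC's own example: user 'Aladdin', password 'open sesame'.
assert basic_auth_header("Aladdin", "open sesame") == {
    "Authorization": "Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ=="
}
```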
It Worked Out Okay For Lisp
Today, HTTP is the new Lisp. It’s that thing that we condemn ourselves to endlessly and badly reimplementing. We’ve collectively chosen to disregard HTTP’s design and architecture, in spite of the fundamental role it’s played in the growth of the Web, and its unprecedented success as an application protocol.
That’s on the browser vendors, like Mozilla and Microsoft, and the API vendors, like Google and Amazon, but it’s also on the rest of us, because we’re not demanding, or even asking for, better support for HTTP. And we’re not making use of, or even learning from, what’s already there.
Changing the way we think is hard. Nevertheless, we hope one day to strike down our variant of Greenspun’s Tenth Rule. Please, take some time to at least understand HTTP—the new specs make that easier than ever—and maybe we can redirect all that time and money into building unique and beautiful things. There are also a few books on the subject.
After all, in the end, it worked out okay for Lisp.