In Defense of JavaScript

Sir Wilfred knows JavaScript isn’t broken.

A long-running theme on this blog is that the Open Web matters. And, further, that the Web isn’t broken, it’s just fine, thank you very much. This includes JavaScript, which is, of course, a central part of the Open Web stack. Although it might be more fun to laugh about peculiar edge cases, or bemoan the size of the most recent spec, the reality is that JavaScript is a powerful and widely deployed language. And, ladies and gentlemen of the jury, we will show not only that JavaScript is not broken, but, further, that the recent evolution of the language is nothing short of genius.

The Case For The Prosecution

As part of our defense of JavaScript, we’ll start with the opposing thesis, which is that it’s completely broken. It’s full of misfeatures dating back to the origins of the Web. Instead of simply replacing it, we just keep adding more features, many of which are hamstrung by the need for backward compatibility. The result is inconsistent and buggy implementations, a collision of conflicting idioms, and a language that tries to be all things to all people. At this point, JavaScript is more a compile target than a real language. Did we learn nothing from C++?

That’s my summary of the case for the prosecution. How’d I do?

WebAssembly Is The New JavaScript

Let’s take these one at a time, starting with the backwards compatibility argument. Why not simply introduce a new language? Surely the magnitude of the changes in ES6 is comparable in scope to a new language. And surely some of the compiler technology under the hood can be repurposed for a language that isn’t handicapped by what amounts to ancient history (1995) in the computer industry. So why not make a clean start of things?

Here’s what you missed: that’s exactly what is happening.

Hello, Web Assembly. From Brendan Eich’s blog:

It’s by now a cliché that JS has become the assembly language of the Web. Rather, JS is one syntax for a portable and safe machine language, let’s say. Today I’m pleased to announce that cross-browser work has begun on WebAssembly, a new intermediate representation for safe code on the Web.

This work, an Open Web effort supported by all the major browser vendors, is even better than a new language, because it offers a compile target instead of a single language. In other words, developers will have their choice of languages. Of course, this is simply a refinement of a process that’s already happening. But it’s nonetheless a tremendous step forward:

[…] once browsers support the WebAssembly syntax natively, JS and [Web Assembly] can diverge, without introducing unsafe or inappropriate features into JS just for use by compilers sourcing a few radically different programming languages.

The Light Of A Thousand CoffeeScripts

In other words, JavaScript won’t have to be all things to all people anymore. And languages targeting the browser can go off and find themselves, without worrying about mom and dad.

This is important: part of CoffeeScript’s success was also a huge constraint: “It’s just JavaScript.” That was never really true (just look at the generated JavaScript when you use extends), but the point is that the underlying semantics of the two languages are the same.

Instead of fighting the notion of JavaScript as a compile target, WebAssembly embraces it. And, in the process, it frees up JavaScript to be itself, too. Which is a language with a long history, probably billions of dollars worth of running code, and dozens of real-world use cases that warrant standardization to ensure interoperability into the future.

There Aren’t Any New Features, Just New Standards

Does JavaScript have some weird misfeatures? Yes. It’s had them for many years. And it still became the most widely deployed language in the world. Maybe there were better languages out there. Maybe the world isn’t fair. But the fact is, we have all this running, working (mostly) code out there. We have three choices:

1. Leave the language alone.
2. Break backward compatibility and fix its flaws.
3. Extend the language without breaking existing code.

No one is really arguing for the second option because that’s an obviously terrible idea. So really we’re talking about either leaving the language alone or updating it without breaking existing code.

However, here’s the next thing you missed: the first option isn’t really an option. Why? Because developers were already pursuing the third option en masse. JavaScript’s ongoing evolution is simply a reflection of what developers are already doing with it.

Take a look at the number of JavaScript Promise libraries out there. Or the transpilers people built to tame asynchronous code. And, of course, there’s our own favorite, CoffeeScript, from which ES6 borrowed liberally.

So what we really have are two completely different choices: extend the standard to support some or all of this evolution, or don’t. The advantage of standardization is pretty clear: increased interoperability. Instead of a dozen slightly incompatible Promise libraries, there’s one standard everyone can build upon. Instead of requiring transpilers to simplify asynchronous programming, you just add support for it to the language. And so on.

The disadvantage of standardization? Uh…is there one? Since backwards compatibility is preserved, everything that was already working still works. And if you want to keep doing what you were doing before, you’re free to do so. The only difference is that you now have the option to take advantage of standard libraries and language constructs. Oh, and the spec is larger. But honestly, how many of you are really affected by that?

Complexity Is A Shell Game

I suppose one disadvantage is that the underlying implementations get more complex, but this is really six of one and a half-dozen of the other. The extensions to the standard are largely (entirely?) based on existing use. We haven’t increased the amount of complexity; we’ve just moved it from libraries and transpilers into the JavaScript engines.

The real world is complex. That’s why every major language platform ends up turning into a gas giant: Java, C++, CLOS, and, of course, JavaScript. And then there are the reactions to this complexity: people fled the complexity of C++ to Java, then from Java to Ruby and Python, and, with the emergence of Node.js, ironically, from Ruby and Python to JavaScript.

And some of us fled JavaScript to CoffeeScript. Others have embraced TypeScript or Clojure. Which is all fine. But it doesn’t mean that the complexity wasn’t there. We use lots of the new features from within CoffeeScript, including, significantly, Generator functions and Promises. Why? Because, in our experience, they can dramatically simplify asynchronous programming. That’s an artifact of real world complexity that every language that wants to support concurrency must handle somehow. Maybe you were happy handling it some other way and you don’t like the choices that were made. But, again, there’s nothing that prevents you from continuing to do what you were already doing.

Maybe JavaScript Actually Got It Right

In other words, mature languages don’t bloat because of design-by-committee. They bloat because the real world is complicated, interoperability is valuable, and breaking backwards compatibility isn’t really an option. In fact, you could make a pretty good argument that JavaScript has bloated pretty close to minimally in the face of these constraints. Take, for example, one of the more bizarre-looking new features, the interface for accessing iterators:

var a = [1, 2, 3, 4, 5]
var i = a[Symbol.iterator]()

That looks pretty weird, right? But here’s the thing. Lots of people have extended the Array prototype. That was probably not a great idea, but they did it, and now that code is out there. And you can’t just add a function to the global namespace because, again, that would break a lot of existing code. So we might prefer an interface like this:

var i = a.createIterator()

or

var i = createIterator(a)

either would likely break existing code. So the folks involved in ES6 came up with a clever, if ugly, workaround. And since iterators would mostly be used in the context of the language’s looping constructs or libraries, that particular interface would not often be used directly. Of course, there are edge cases, like everyone over in CoffeeScript-land, where the looping constructs don’t know about ES6 iterators. Which sucks, but I can still recognize that this was probably the optimal path forward.

Ladies And Gentlemen Of The Jury…

The bottom line is we have extensions to the language that will encourage both simplicity and interoperability and that are probably close to minimal in light of real-world use. And, crucially, you can still use whatever subset of JavaScript you prefer, if that’s what you want.

Even if you’re just implacably unhappy, we now have an entirely new language embedded inside the old one that will ultimately offer a clean slate for language designers to replace JavaScript.

And all this has been done in the form of open standards, meaning no single vendor controls them or can manipulate them for its own ends. That’s pretty important, kids.

It’s pretty much the best of all worlds, actually. So maybe we did learn from C++, after all. (Or maybe C++ is just the world’s most misunderstood programming language. But I’ll save that rant for another day.)