Picture this: you’re waiting on some code to compile. You’re using one of those fancy package managers and/or build tools, so all your dependencies are compiled and/or installed automatically!! Yay! Right? Meanwhile, there are screenfuls of text scrolling by in your terminal, often at alarming speeds… there’s
configure checking for the presence of like 100 different system headers… or something (come to think of it, does anyone actually know what all those checks are for??) … now there’s some Haskell code being compiled, more screenfuls of text… oop, some code just randomly downloaded from the internet… now some tests running. Seems this dependency has some “doctests”, which means it’s actually parsing and running Haskell code embedded as comments in the code. Gee, that’s rather silly… why is this build process even running the tests if I haven’t modified the code?? I wonder if there’s some sort of flag to disable that. You know what, forget it, just let the damn tests run, by the time I… oop, now it’s installing some huge C library, what… on earth? I know I’m going to be using max like .01% of that library in my code, why do I have to download and compile all of it up front??! This is getting ridiculous. A few minutes later… Oh, geez, the random bloated C dependency I won’t even use is missing some obscure file and crashes during the build… apparently my environment wasn’t configured properly. You stupid computer!! Why did you even start ‘building’ if the program was effectively ill-typed?? You just wasted like 10 minutes of my time when I could have been actually creating something! Google searches… trying various stuff. Hmm, maybe I’ll try IRC. No surprise, your question gets ignored in IRC… multiple times, while some people argue vehemently about the color of the bikeshed. (Blue! No, red!!) Five hours later, you’re mentally checked out and strongly considering a career switch to basket weaving.
Good grief! Is all this junk really necessary? No. Compilation, linking, the whole notion of “building” programs as a phase separate from the act of editing them is a relic from the punchcard era. We should eliminate these needless distinctions. Programs being edited should be immediately available for evaluation, without the programmer having to do anything special. Sounds like I want “continuous compilation” or “continuous builds”, right? Actually, no, I want no builds at all. I want to eliminate concepts that no longer serve a useful purpose.
Is this really possible? Yes. Let’s return to first principles. Why do we “compile” code? Well, a few things happen during compilation:

- The code is parsed and typechecked, and errors are reported if it’s ill-formed or ill-typed.
- The code is translated to some binary “compiled” form, which is written out to a file.
Then there’s often a third step, linking, which takes these separately compiled files and resolves all references between them, producing something you can actually evaluate. Sometimes we wait until runtime to even check whether the references exist and generate arcane errors if something is missing!
In Unison, the editor constrains the program to be well-formed and well-typed. So there is no need for a separate parsing or typechecking phase; we get instantaneous feedback while editing, and programs typecheck (and of course are well-formed) by construction. So when I say let’s stop “building” and “compiling” code, I’m not saying “let’s throw out static types”; static types are awesome! But let’s not write a bunch of code as a blob of text then submit it to the typechecker as a separate phase from editing.
What about this second phase, emitting some sort of binary “compiled” form of our program to a file? What about a separate linking phase? Are those still needed? Well, no! Or rather, the work is still needed, but it doesn’t have to happen in separate phases: it can be interleaved with the evaluation of our program, rather than done in its entirety up front while the programmer sits and waits.
It is useful to convert code to some form that can be efficiently executed. But we don’t need to block the user and do all this work up front, and we certainly don’t need to dump the result of this work to a file. We can do the translation on demand, along with linking, while the program is evaluating, and if the translation takes a while we can run it in the background while temporarily relying on a simple interpreter. Obviously, this isn’t a new idea; it’s what existing JIT compilers do. I’m just looking at this from the perspective of the overall developer experience.
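The on-demand scheme can be sketched in a few lines. This is a simplification in Python with made-up names (real JITs compile in the background and generate machine code, not Python callables), but it shows the shape of the idea: interpret a definition the first time it’s needed, cache a compiled form, and use the fast path from then on — the user is never blocked on a separate compilation phase:

```python
compiled_cache = {}  # hash of definition -> compiled form (here: a callable)

def interpret(definition, arg):
    """Slow path: stand-in for a tree-walking interpreter over source."""
    return definition(arg)

def compile_definition(definition):
    """Stand-in for real code generation; here we just return the callable."""
    return definition

def evaluate(hash_, definition, arg):
    # Fast path: reuse the compiled form if we already produced one.
    if hash_ in compiled_cache:
        return compiled_cache[hash_](arg)
    # Slow path: interpret right away, and compile for subsequent calls.
    result = interpret(definition, arg)
    compiled_cache[hash_] = compile_definition(definition)
    return result

square = lambda x: x * x
print(evaluate("#sq", square, 3))   # first call goes through the interpreter
print(evaluate("#sq", square, 4))   # later calls hit the compiled cache
```

Note that evaluation produces a correct answer either way; compilation is purely an optimization that happens alongside it, never a gate in front of it.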
Here’s how things (will) work in Unison:
There are no installation instructions of the form “cd into this directory, issue this command… unless you’re on Mac, in which case first install X, Y, and Z, make sure it’s a Tuesday, not a full moon, and you didn’t have a burrito for lunch”. There are no separate build tools equipped with their own ad hoc configuration languages. If someone has written Unison code you’d like to use, you obtain a link to a Unison hash, instruct your Unison node to sync that hash (all missing dependencies are synced automatically of course), and immediately you have something you can evaluate, use, and build upon. The hash is also used to verify that you have indeed received the reference you specified.
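The verification step falls out of content addressing. Here’s a minimal sketch in Python (the function names are mine, and Unison’s actual hashing scheme differs in its details): the hash both names a definition and checks it, because the hash you requested must equal the hash of the code you actually received:

```python
import hashlib

def address(code: str) -> str:
    """Name a definition by the hash of its (canonical) serialized form."""
    return hashlib.sha3_256(code.encode("utf-8")).hexdigest()

def sync(requested_hash: str, received_code: str) -> str:
    """Accept synced code only if it matches the hash we asked for."""
    if address(received_code) != requested_hash:
        raise ValueError("received code does not match the requested hash")
    return received_code

code = "factorial n = product [1..n]"
h = address(code)
assert sync(h, code) == code   # honest transfer: accepted, no trust needed
```

A corrupted or tampered transfer fails the check automatically, so there’s nothing for the user to configure or verify by hand.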
This is a nicer UX, since the user isn’t ever blocked on ‘compilation’ and can run programs as fast as they can edit them, but it’s also potentially more efficient, since JIT compilers have much more information at their disposal! (They have information about the program after it’s been linked, and various runtime statistics.)
I don’t think any of this is new or particularly controversial… yet we continue using the same old metaphors and thinking from a bygone era of computing. Why?