Month: June 2022

Snapshotting is like compiling but better

TL;DR: The final output of a traditional compiler like GCC bears a family resemblance to a Microvium snapshot, but the snapshotting paradigm is both easier to use and more powerful because it allows real application code to run at build time and its state to persist until runtime.

What is snapshotting?

My Microvium JavaScript engine is built on the paradigm of creating a VM snapshot as the deployable build artifact rather than creating a traditional compiled binary. As a developer, when you run the Microvium engine on your desktop with a command like microvium main.js, it will execute the script until all the top-level code is complete and then output a snapshot file containing the final VM state. The snapshot file can then be “resumed” on an embedded device using Microvium’s embedded C library (for more details, see Getting Started). Although Microvium is designed especially for microcontrollers, the principle of snapshotting goes beyond the embedded space.
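
As a rough sketch of that workflow (the variable, function, and export ID below are my own illustrative choices, not a prescribed structure), a main.js might look like this:

// main.js: running `microvium main.js` executes this top-level code on the
// desktop and then writes a snapshot (main.mvm-bc) containing the final state
let greetCount = 0;   // initialized at build time, carried into the snapshot

function sayHello() {
  greetCount++;
  return 'Hello, world! Call number ' + greetCount;
}

// vmExport (covered in more detail later in this post) makes sayHello callable
// from the embedded C host after the snapshot has been resumed on the device
vmExport(1, sayHello);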

Comparing to GCC

For this post, I’ll mostly compare Microvium to GCC:

gcc main.c         # Compile a C program with GCC
microvium main.js  # "Compile" a Microvium program

These two commands are analogous. Both produce a single file as the result, and this file is what you want to deploy to the target environment. In the case of GCC, the output is of course the executable (e.g. a.out), while in the case of Microvium, it’s the snapshot (main.mvm-bc).

Both of these commands do some kind of compilation as part of the process. GCC translates your function code to machine instructions, while Microvium translates to virtual machine instructions (bytecode instructions).

Constants

Both the GCC output and the Microvium output have a section for constants, including function code. You may be familiar with this as the .text section. Among other things, this contains constant values, such as:

// JavaScript
const x = 42;
// C
const int x = 42;

… but better

You can do this in Microvium but not in C:

const x = foo();

function foo() {
  return 42;
}

In C, it’s a compile-time error to call runtime functions for the calculation of constants. But in Microvium, this is perfectly legal since there is no distinction between compile-time and runtime — there is really only runtime before the snapshot and runtime after the snapshot, although informally we may refer to the former as “compile-time” and the latter as “runtime”.

Apart from just being cleaner and easier to use, the Microvium snapshotting paradigm here allows computationally-intensive constants to be calculated at build time, using arbitrary functions and libraries that might also be useful at runtime.
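
For example, here’s a sketch (assuming array push and bitwise operators are available in the build-time environment; the polynomial choice is just illustrative) of a CRC-8 lookup table computed by ordinary code at build time and stored in the snapshot as a constant:

// JavaScript: runs at build time; only the finished table ends up in the snapshot
const CRC8_TABLE = makeCrc8Table(0x07);   // 0x07: a common CRC-8 polynomial

function makeCrc8Table(poly) {
  const table = [];
  for (let i = 0; i < 256; i++) {
    let crc = i;
    for (let bit = 0; bit < 8; bit++) {
      crc = (crc & 0x80) ? ((crc << 1) ^ poly) & 0xFF : (crc << 1) & 0xFF;
    }
    table.push(crc);
  }
  return table;
}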

Variable initializers… but better

For non-constant variables, both GCC and Microvium have an output section for the initial1 value of all the variables, which is copied into RAM at runtime. You may know this traditionally as the .data section.

But a major difference between them is that a Microvium snapshot also contains heap data.

// JavaScript
let arr = [1, 2, 3];

// C
int* arr = malloc(3 * sizeof(int));  // ! Can't do this (in top-level code)

The best you could do in C for the above example would be to have an init function that runs early in the program to set up the initial runtime state. Snapshotting is better because this initial structure can be established at build time.
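
For instance, here’s a sketch (the field names are made up for illustration) of top-level code assembling an object graph at build time; the populated structure is simply part of the snapshot’s heap when the device resumes it, with no init function needed at runtime:

// JavaScript: runs at build time
const channels = [];
for (let i = 0; i < 4; i++) {
  channels.push({ id: i, enabled: i < 2, samples: [] });
}
const config = { name: 'default-config', channels: channels };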

Modules … but better

Microvium and C both have support for structuring your code in multiple files which get bundled into the same output artifact:

import { foo } from './foo.js'
#include "foo.h"

With the #include here, the dependent module implementation (e.g. foo.c) is not automatically compiled and linked into the program by GCC — you need to separately list foo.c to be compiled by GCC, or orchestrate the dependencies using a makefile.

But in the case of a Microvium import, the import statement itself is executed at build time, performing the module resolution, loading, parsing, and linking at build time, as well as executing the top-level code of the imported module. The top-level code of the imported module may in turn import other modules, transitively importing the whole module graph and executing its top-level initialization code.
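
A small sketch of what that means in practice (the file names and contents are illustrative):

// foo.js: this top-level code runs at build time, when the module is imported
export const squares = [];
for (let i = 0; i < 10; i++) squares.push(i * i);

// main.js: the import below loads, parses, and runs foo.js at build time,
// so `squares` is already populated by the time the snapshot is taken
import { squares } from './foo.js';
vmExport(1, i => squares[i]);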

Preprocessor… but better

Both C and Microvium support “compile-time” logic:

// C
#if USE_FOO_1
#define foo foo1
#else
#define foo foo2
#endif
// JavaScript
const foo = USE_FOO_1 ? foo1 : foo2;

Of course, you already knew that, because all the examples so far have demonstrated the fact that snapshotting allows you to run JavaScript at compile time. But I want to emphasize some of the key reasons why the snapshotting paradigm is better for this:

  • You don’t need two different programming languages (e.g. the preprocessor language and the C language).
  • Your “compile-time” code has the full power of the main language.
  • Your runtime code carries over the state from your compile-time code.

So in some sense, this unifies the preprocessor language with the main language. This applies similarly to other “compile-time languages”, such as makefiles and linker scripts. Or if you’re coming from the JS world, consider how the snapshotting paradigm obviates the need for a webpack.config (see Snapshotting vs Bundling).

But what about using the preprocessor to conditionally include different runtime logic, such as for different devices? For example, consider the following C code:

int myFunction() {
#if SOME_CONDITION
  doX();
#else
  doY();
#endif 
}

Of course, it’s easy to see how this example might translate to JS:

function myFunction(someCondition) {
  if (someCondition) {
    doX();
  } else {
    doY();
  }
}

Now we have the bonus that our unit tests can inject someCondition to test both cases.
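
Here’s a sketch of such a test (using plain assertions rather than any particular test framework, with doX and doY stubbed out for illustration):

// JavaScript: exercise both branches by injecting someCondition
let lastCall;
function doX() { lastCall = 'doX'; }
function doY() { lastCall = 'doY'; }

function myFunction(someCondition) {
  if (someCondition) {
    doX();
  } else {
    doY();
  }
}

myFunction(true);
if (lastCall !== 'doX') throw new Error('expected doX to be called');
myFunction(false);
if (lastCall !== 'doY') throw new Error('expected doY to be called');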

But doesn’t the JavaScript version of myFunction mean that we now have both the doX and doY branches (and someCondition itself) at runtime, taking up ROM space?

That’s why I’ve also been developing the experimental Microvium Boost: an optimizer that analyzes a snapshot and removes unused branches of code. For example, if the analysis shows that someCondition is always true in your program, it can remove it as a parameter from myFunction and also remove the call to doY as dead code. This is still experimental but has shown significant success so far.

Host exports… but better

So far we’ve been considering an executable output from GCC, but it would be more accurate to compare a Microvium snapshot with a compiled shared library (e.g. DLL). Like a shared library, Microvium snapshots do not have a single entry point but may contain many exported functions to be resolved at runtime on the device.

A Windows DLL suits the analogy better than a Linux shared library, so in this section the examples will use msbuild rather than GCC.

Both a Microvium snapshot and a DLL binary contain a section for dynamic linking information — a table that associates relevant functions in the DLL with a number2 so that the host program using the DLL/snapshot at runtime can find them.

In the case of a DLL, you can provide the compiler with a DEF file that tells the compiler what to put into the DLL export table. If you wanted to export the functions foo, bar, and baz from the DLL with IDs 1, 2, and 3 respectively, your DEF file might look like this:

LIBRARY   MyLibrary
EXPORTS
   foo   @1
   bar   @2
   baz   @3

// C
int foo() { return 42; }
int bar() { return 43; }
int baz() { return 44; }

The equivalent in Microvium would be as follows:

// JavaScript
function foo() { return 42; }
function bar() { return 43; }
function baz() { return 44; }

vmExport(1, foo);
vmExport(2, bar);
vmExport(3, baz);

You may have noticed a recurring theme in this post: the Microvium snapshotting paradigm doesn’t require a whole new language in order to do different build-time tasks. In this case, Microvium doesn’t require a DEF file (or a special __declspec(dllexport) language extension), since vmExport is just a normal function. This is just simpler and more natural.

Another recurring theme here is that the snapshotting approach is more powerful, allowing you to do things that are impossible or impractical in the traditional paradigm. Take a look at the following example in Microvium:

// JavaScript
for (let i = 1; i <= 3; i++) {
  vmExport(i, () => 41 + i);
}

This has the same overall effect as the previous code, adding 3 functions to the export table of the deployed binary with IDs 1, 2, and 3, returning 42, 43, and 44 respectively. But having vmExport be a normal function means that we now have the full power of the language for orchestrating these exports, for writing an abstraction layer over the export system, or for outsourcing this logic to a third-party library.
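
Here’s a sketch of such an abstraction layer (the helper and command functions are made up for illustration):

// JavaScript: export every function in a command table, using its index
// (offset by a base ID) as the export ID
function exportAll(baseId, fns) {
  for (let i = 0; i < fns.length; i++) {
    vmExport(baseId + i, fns[i]);
  }
}

function reset() { /* ... */ }
function readTemperature() { /* ... */ }
function setLed(on) { /* ... */ }

exportAll(1, [reset, readTemperature, setLed]);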

Side note: a more subtle point in the for-loop example above, for advanced readers, is its code cohesion. The single line of code mixes both compile-time and runtime code (vmExport(i, ...) and () => 41 + i respectively), but keeps the related parts of both in close proximity. This is the difference between temporal cohesion (grouping code by when it’s run) and functional cohesion (grouping code by what feature it relates to) (see Wikipedia). A common disadvantage of having separate build-time or deploy-time code (e.g. a DEF file, makefile, linker script, webpack.config, dockerfile, terraform file, etc.) is that it pushes you into temporal cohesion, which in turn damages modularity and reusability.

Conclusion

The idea of deploying a snapshot rather than a traditional compiled binary opens up a whole new paradigm for software development. The end result is very similar — a binary image with sections for different memory spaces, compiled function code, constants, initial variable values, and export/import tables — but the snapshotting paradigm is both simpler and more powerful.


  1. In the context of Microvium, the word “initial” here refers to the initial state when the snapshot is resumed, not the initial state when the program is started, since the program starts at build time, and variables in JavaScript start with the value undefined. 

  2. DLL exports can be by name or number, but Microvium exports are only by number, for efficiency reasons, so that’s what I’m using in the analogy here. 

Microvium is very small

TL;DR: The Microvium JavaScript engine for microcontrollers takes less than 16 kB of ROM and 64 bytes of RAM per VM while idle, making it possibly the smallest JavaScript engine to date with more language features than engines 4x its size.


I’ve designed Microvium from the ground up with the intention for it to be tiny, and it’s been an absolute success in that sense. Microvium may be the smallest JavaScript engine out there, but it still packs a punch in terms of features.

*Edit: this post was originally written when Microvium was around 8.2 kB of ROM. Since then, new features have been added. As of August 2023, Microvium is now 12 kB.

Does size matter?

Size often matters in small MCU devices. A large proportion of microcontroller models available on the market still have less than 64 kB of flash and less than 2 kB of RAM. These are still used because they’re smaller, cheaper, and consume less power than their larger counterparts. All the microcontrollers I’ve worked with in my career as a firmware engineer have had ≤ 16 kB RAM.

Some might say that you shouldn’t even want JavaScript on such small devices, and certainly in some cases that would be true. But as I pointed out in my last post, juggling multiple operations in firmware can be both easier and more memory efficient if the high-level logic is described in terms of a language like JavaScript, even if that’s the only thing you’re using it for.

Even on larger devices, do you really want to dedicate a large chunk of the memory to a JavaScript engine? A smaller engine is a smaller commitment to make — a lower barrier to entry.

How does it compare?

If I Google “smallest JavaScript engine for microcontrollers”, the first one on the list is Elk. Elk is indeed pretty tiny. For me, it compiles to just 11.5 kB of flash1. Microvium compiled with the same settings compiles to about 12 kB — in the same ballpark.

What about RAM?

The amount of RAM Elk uses is not pre-defined — you give it a buffer of RAM of any size you want, but it needs to be at least 96 bytes for the VM kernel state. Microvium takes 36 bytes for the kernel state.

But where there’s a massive difference in memory requirement is that Elk requires all of its memory allocated upfront, and keeps it for the lifetime of the VM. If your script’s peak memory in Elk is 1 kB then you need to give it a 1 kB buffer at startup, so its idle memory usage is 1 kB. Microvium on the other hand uses malloc and free to allocate when needed and free when not needed. Its idle memory usage can be as low as 88 bytes. In typical firmware, idle memory is much more important than peak memory, as I explained in my last post.

What about the feature set? This is another area where Microvium and Elk diverge significantly. The following table shows the differences:

                                                 Microvium   Elk
var, const (Elk supports let only)               ✔           ✗
do, switch, for                                  ✔           ✗
Computed member access a[b]                      ✔           ✗
Arrow functions, closures                        ✔           ✗
try/catch                                        ✔           ✗
async/await                                      ✔           ✗
Modules                                          ✔           ✗
Snapshotting                                     ✔           ✗
Uses intermediate bytecode (better performance)  ✔           ✗
Parser at runtime                                ✗           ✔
ROM                                              12 kB       11.5 kB
Idle RAM                                         88 B        Lots
Peak kernel RAM                                  36 B        96 B
Slot size (size of simple variables)             2 B         8 B

The only thing that Elk can do that Microvium can’t do is execute strings of JavaScript text at runtime. So if your use case involves having human users directly provide scripts to the device, without any intermediate tools that could pre-process the script, then you can’t use Microvium and you might want to use Elk, mJS, or a larger engine like XS. On the other hand, if your use case has at any point a place where you can preprocess scripts before downloading them to the device then you can use Microvium.

Comparing with mJS

But Cesanta, the maker of Elk, also made a larger JS engine with more features: mJS, which is probably the closest match to Microvium in terms of feature set. mJS lets you write for-loops and switch statements for example.

Since they’re closely matched for intent and features, I did a more detailed comparison of mJS and Microvium here. But here’s a summary:

                                                 Microvium   mJS       Elk
var, const (mJS supports let only)               ✔           ✗         ✗
Template strings                                 ✔           ✗         ✗
Arrow functions and closures                     ✔           ✗         ✗
try/catch                                        ✔           ✗         ✗
async/await                                      ✔           ✗         ✗
ES Modules                                       ✔           ✗ *       ✗
do, switch, for                                  ✔           ✔         ✗
Computed member access a[b]                      ✔           ✔         ✗
Uses intermediate bytecode (better performance)  ✔           ✔         ✗
Some built-in functions                          ✗           ✔         ✗
Parser at runtime                                ✗           ✔         ✔
ROM                                              12 kB       45.6 kB   11.5 kB
Slot size                                        2 B         8 B       8 B

* but mJS does support a non-standard load function

I’ve lumped “some built-in functions” into one box because it’s not a language feature as such. mJS has a number of built-in functions that Microvium doesn’t have – most notably print, ffi, s2o, JSON.stringify, JSON.parse, and Object.create. You can implement these yourself in Microvium quite easily without modifying the engine (or find implementations online), which gives you the option of choosing what you want rather than having all that space forced on you2.
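
As a sketch of what that looks like, here’s a deliberately incomplete JSON.stringify-style function (it handles only numbers, booleans, strings without escaping, and arrays; enough to show the pay-as-you-go idea, not a spec-compliant implementation):

// JavaScript: a "builtin" provided as an ordinary library function
function stringify(value) {
  if (value === null) return 'null';
  const t = typeof value;
  if (t === 'number' || t === 'boolean') return '' + value;
  if (t === 'string') return '"' + value + '"';   // note: no escaping in this sketch
  // Beyond primitives, only arrays are handled in this sketch
  let result = '[';
  for (let i = 0; i < value.length; i++) {
    if (i > 0) result += ',';
    result += stringify(value[i]);
  }
  return result + ']';
}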

In terms of features, mJS is a more “realistic” JavaScript engine, compared to Elk’s minimalistic approach. I wouldn’t want to write any substantial real-world JavaScript without a for-loop for example. Like Microvium, mJS also precompiles the scripts to bytecode and then executes the bytecode, which results in much better performance than trying to parse on the fly. Engines like Elk that parse as they execute also have the unexpected characteristic that comments and whitespace slow them down at runtime.

But the added features in mJS mean that it costs a lot more in terms of ROM space — about 4x more than Elk and Microvium.

Microvium still has more core language features than mJS, making it arguably a more pleasant language to work in. These features are actually quite useful in certain scenarios:

  • Proper ES module support is important for code organization and means that your Microvium modules can also be imported into a node.js or browser environment. You can have the same algorithms shared by your edge devices (microcontrollers), backend servers, and web interfaces, to give your users a unified experience (see the sketch after this list).
  • Closures are fundamental to callback-style asynchronous code, as I explained in my previous post.
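
To make the module-sharing point concrete, here’s a sketch (the module contents are illustrative) of a single file imported unchanged by a Microvium script at build time and by node.js on the server:

// thermostat.js: shared logic with no engine-specific code
export function shouldHeat(currentTempC, targetTempC, hysteresisC) {
  return currentTempC < targetTempC - hysteresisC;
}

// On the device (Microvium) and on the server (node.js) alike:
//   import { shouldHeat } from './thermostat.js'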

Conclusion

I’m obviously somewhat biased since Microvium is my own creation, but the overall picture I get is this:

  • Microvium is the smallest JavaScript engine that I’m aware of3
  • In this tiny size, Microvium actually supports more core language features than engines more than 4x its size. Some of these features are really useful for writing real-world JS apps.
  • Having said that, Microvium has fewer built-in functions — it’s more of a pay-as-you-go philosophy where your upfront commitment is much less and you bring in support for what you need when you need it.
  • The big trade-off is that Microvium doesn’t have a parser at runtime. In the rare case that you really need a parser at runtime, Microvium simply won’t work for you.

Something that made me smile is this note by one of the authors of mJS in a blog post:

That makes mJS fit into less than 50k of flash space (!) and less than 1k of RAM (!!). That is hard to beat.

https://mongoose-os.com/blog/mjs-a-new-approach-to-embedded-scripting/

I have great respect for the authors of mJS and what they’ve done, which makes me all the more proud that Microvium is able to knock this out of the park, beating what the seasoned professionals have called “hard to beat”. Of course, this comes with some trade-offs (no parser at runtime and no built-in functions), but I’ve achieved my objective of making a JavaScript engine that has a super-low upfront commitment and will squeeze into the tiniest of free spaces, all while still including most of the language features I consider to be important for real-world JavaScript apps.


  1. All of the sizes quoted in this post are when targeting the 32-bit ARM Cortex M0 using GCC with optimization for size. I’m measuring these sizes in June 2022, and of course they may change over time. 

  2. The ffi in mJS is something that would need to be a built-in in most engines, but Microvium’s unique snapshotting approach makes it possible to implement the ffi as a library just like any of the other functions. 

  3. Please let me know if you know of a smaller JS engine than Microvium.