Microvium Closure Variable Indexing

TL;DR: In many situations, a program in Microvium bytecode can access closure variables with just a single-byte bytecode instruction. The instruction contains a 4-bit index that either indexes into the current lexical scope or recursively overflows to the next-outer lexical scope, cascading up the lexical scope chain until the variable is found. In this post, I discuss some of the journey and the details of how this works.


What is a closure variable?

A closure is a function that accesses variables outside its local lexical environment. See the MDN article on this for a better explanation.

What I mean by a “closure variable” in this post is a local variable that is accessed by a closure1. For example, in the following code, callback is a closure because it accesses x which is outside its own local variables, and correspondingly x is a “closure variable” by this definition because it’s accessed by callback:

function foo() {
  let x = 10;
  setTimeout(callback, 1000);
  function callback() { 
    console.log(x);
  }
}

In Microvium (and other JavaScript engines), closure variables are treated differently from normal local variables because they need to outlive the stack frame in which they are declared. The variable x here needs to survive beyond the return of foo. This is done by allocating the slots for these variables on the heap instead of the stack.

If a variable is demoted to the heap, all access to that variable is via the heap, even if it’s being accessed locally. For example:

function foo() {
  let x = 10;
  console.log(x);  // <--- this is also accessing `x` on the heap
  setTimeout(callback, 1000);
  function callback() { 
    console.log(x);
  }
}

There is a static analysis pass in Microvium that decides whether a variable is a closure variable or not. In principle, it’s safe to allocate all variables on the heap rather than doing this analysis, but the heap is expensive, so as an optimization some variables can be promoted to the stack if they are not used by closures.

For the sake of the rest of this blog post, I will pretend that all variables are closure variables, even if I do not show the closure that accesses them. Or equivalently, I will pretend that there is no optimization analysis to determine which variables can be promoted to the stack.

Background: how it might have worked

Before I get to describing how it does work today, let me describe how I previously implemented it before having my eureka moment. You can skip this section if you want to get straight to the answer instead of going through the journey as I did.

The state of the virtual machine needs to keep track of the current lexical scope. So let’s add a new machine register called scope which points to the current lexical scope.

A goal with Microvium is to keep the engine implementation small. So rather than introducing a new allocation type for environment records and new bytecode instructions to access them, maybe we can reuse an existing type and existing instruction.

A natural solution that might come to mind is to use an Object, where the property keys of the object correspond to the variable names. Microvium has existing bytecode instructions ObjectGet and ObjectSet to get and set properties on an object.

However, Objects are very expensive at runtime. Each property is stored in memory as a key-value pair taking 4 bytes, and property lookup is a linear-time search through the properties to find the one with the right key.

Since we can statically determine the number of variables, a better choice of container for our variables would be a fixed-length array, where we statically compute an index for each variable rather than using its name. In Microvium, a fixed-length array is quite efficient, having constant-time random access to any slot by its index and each slot only consumes 2 bytes.

So let’s think about compiling the following JavaScript to IL:

let x, y, z;
x = 10;
y = 20;
z = 30;

We will have a fixed-length array with 3 slots for these 3 variables, and our new scope register will point to this fixed-length array to say that this is the current scope. Whenever we need to access one of these variables, we will read and write to the array.
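
Something like this, where each box is a 2-byte slot:

[ header | x | y | z ]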

I’m showing the allocation header here for completeness. These arrays are heap-allocated, so they each require this implicit memory slot for information about the size and type of the allocation.

So, the following is the IL sequence that might be produced for the above JavaScript:

// let x, y, z;
ArrayNew(3)         // Allocate 3 slots on the heap
StoreReg('scope')   // Save the array in the "scope" register

// x = 10;
LoadReg('scope')    // Fetch the current scope
Literal(10)
ArraySet(0)         // Set the first slot in the array

// y = 20;
LoadReg('scope')    // Fetch the current scope
Literal(20)
ArraySet(1)         // Set the second slot in the array

// z = 30;
LoadReg('scope')    // Fetch the current scope
Literal(30)
ArraySet(2)         // Set the third slot in the array

Side note: Microvium IL is based on a stack machine (see Wikipedia). The instruction Literal(10) pushes the value 10 to the top of the stack. The instruction ArraySet(0) pops the literal value off the stack, and pops the array reference off the stack (which was previously pushed by LoadReg('scope')) and then assigns the 0th slot in the array.
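
To illustrate, here is the first assignment again, with the stack contents shown after each instruction:

// x = 10;
LoadReg('scope')    // stack: [scope]
Literal(10)         // stack: [scope, 10]
ArraySet(0)         // stack: [] (writes 10 into slot 0 of the scope array)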

Nested scopes

What if there are nested lexical scopes, as in the following JavaScript code:

function foo() {
  let x;
  function bar() {
    let y;
    function baz() {
      let z;
      x = 10;
      y = 20;
      z = 30;
    }    
  }
}

In the above code, z is accessed in the same scope in which it is declared, as before. But x and y are accessed from parent (outer) scopes relative to the expressions that access them. So we need a way for the IL to access parent scopes.

Remember that the inner scopes can be instantiated multiple times for each single instantiation of an outer scope. For example, if bar() is called twice within foo, then there will be 2 instances of variable y for every one instance of variable x. So we can’t just put x, y, and z in the same array. We need each lexical scope to be in its own array, and we need a way to reference an outer scope from an inner scope.

It might seem like we need to abandon the fixed-length array as the underlying storage for these variables, since arrays don’t naturally form chains. But if we think about it for a moment, we could reserve one of the slots in the fixed-length array as a pointer to the parent fixed-length array. Let’s mentally reserve the first slot in each array as the pointer to its parent scope.

In the above JS, we have 3 distinct lexical scopes, and so we may land up with a scope chain as follows:
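
Roughly, it looks like this, with slot 0 of each array pointing to its parent:

scope register
      |
      v
[ header | parent | z ]    (baz’s scope)
            |
            v
[ header | parent | y ]    (bar’s scope)
            |
            v
[ header | parent | x ]    (foo’s scope)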

Now, the IL to read variable x may look as follows:

LoadReg('scope')  // Get the current scope (the one containing z)
ArrayGet(0)       // Read the parent scope (the one containing y)
ArrayGet(0)       // Read the parent scope (the one containing x)
ArrayGet(1)       // Read variable x

Each of these instructions encodes to 1 byte, so this is a 4-byte sequence in total.

I thought this was a pretty good solution. It didn’t add any extra instructions to the IL instruction set, and so didn’t make the engine any bigger. An important goal in Microvium is to keep the engine small.

The final solution

In the end, I decided that closures were too important to have it cost so many instructions to read and write to them. Rather than emitting multiple IL instructions just to read or write a single variable, it seemed quite logical to bake this behavior into the engine itself, and add two additional instructions to the IL instruction set: LoadScoped and StoreScoped. I wanted to keep these instructions both compact and efficient, and that’s where the design challenge is interesting.

Consider the following JavaScript with a few more variables to make the pattern clear:

function foo() {
  let a;
  let b;
  function bar() {
    let c;
    let d;
    function baz() {
      let e;
      let f;
      a = 10;
      b = 20;
      c = 30;
      d = 40;
      e = 50;
      f = 60;
    }
  }
}

The assignment statements in this example, such as a = 10, are accessing the variables in one of 3 different lexical scopes. How can we design the instruction format for LoadScoped (and StoreScoped) so that the bytecode can specify which scope is being accessed as well as which variable in that scope?

The eureka moment for me was when I realized that I could use a single mapped index that specifies both the scope and the variable within that scope. I’ll represent this mapping in the following diagram, with the index on the left and the variable it maps to on the right (the allocation headers are omitted here for clarity).

The scopes in this design form a waterfall. Small indexes are accessing the inner-most lexical scope. Larger indexes “overflow” into the next outer closure scope, repeating until the variable is found.
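
Concretely, for the example above (recalling that slot 0 of each scope array is reserved for the parent reference), the mapping works out as:

index 0 → parent slot of baz’s scope (never emitted by the compiler)
index 1 → e
index 2 → f
index 3 → parent slot of bar’s scope (never emitted by the compiler)
index 4 → c
index 5 → d
index 6 → parent slot of foo’s scope (never emitted by the compiler)
index 7 → a
index 8 → b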

So, the IL LoadScoped(1) would load variable e, and LoadScoped(8) would load variable b, for example. We can determine the indexes through static analysis — numbering the variables in the closest scope first and then the ones in the next closest, etc2.

Implementation in C

My original concern with adding new bytecode instructions was that it would make the engine much bigger and more complicated. But this design can be implemented efficiently in the engine, adding only a little bit more complexity. See the following C code for finding the variable with index index3:

uint16_t* arr = registers->scope;
do {
  // The length of the array is in its header word
  uint16_t len = arr[-1] & 0xFFF;

  // Is the variable in this scope?
  if (index < len) return arr[index];

  // Otherwise, cascade/overflow to the outer scope. Slot 0 holds the
  // reference to the parent scope, which on a 16-bit platform is already
  // a native pointer, hence the cast
  arr = (uint16_t*)arr[0];
  index -= len;
} while (1);

Actual instruction format

For the curious, the actual bytecode instruction format for LoadScoped in its simplest form is just 8 bits, where the upper nibble is the opcode (“LoadScoped”) and the lower nibble is the 4-bit index, allowing up to 15 closure variables to be addressed.

StoreScoped is the same, but with a different opcode.
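
As a rough sketch of the encoding (the opcode value below is a placeholder for illustration, not the real Microvium opcode numbering):

#include <stdint.h>

#define VM_OP_LOAD_SCOPED_1 0x8 // placeholder opcode nibble

// Single-byte form: upper nibble is the opcode, lower nibble is the
// 4-bit closure variable index
uint8_t encodeLoadScoped1(uint8_t index) {
  return (uint8_t)((VM_OP_LOAD_SCOPED_1 << 4) | (index & 0x0F));
}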

For the even-more-curious, if your code has more variables than will fit in the 4-bit index, there are 16-bit and 24-bit4 instruction variants as well, as described in the Microvium technical documentation.

Conclusion

I’m very satisfied with the final design so far. It brings closures up as first-class citizens in Microvium by making closure variable access almost as space-efficient as local variables, in terms of both bytecode space and memory usage, while also having only a small CPU cost. I think this is important because closures are very common in real-world JavaScript code, especially when programming with a functional style, and also because closures may in future form the foundation for the implementation of other features such as generators and async-await.

This has been a good reminder to me that it’s worth thinking deeply about designs and considering different options, rather than jumping straight in and implementing whatever comes to mind first. In this case, the final design was both simpler to implement and more efficient.

There’s a lot I haven’t talked about in this post, such as how the array is created in the first place and how the scope register changes as control moves between different lexical scopes. But that can be for another time.


  1. If you have a better name for a “closure variable”, please let me know 

  2. Note that one complexity with this design is that the index for a single variable is different depending on which scope the code is accessing it from. The index is not an absolute property of the variable itself. 

  3. This is not the actual code. The actual code needs to deal with error checking and multiple address spaces, for example. 

  4. Why would anyone ever need to address more than 255 closure-scoped variables? I heard a story that the C# compiler assumed that nobody would ever need more than 65536 variables in a single function, but that assumption was violated by a code generator that was generating massive functions. So I’m cautious about saying “nobody will ever need more than 255 variables” when it’s only 2 lines of code to support it. Maybe I’ll change my mind in the future. 

New Garbage Collector!

It’s only been a few weeks since the Alpha release of Microvium and I’ve already almost rewritten the engine! There are two major changes I wanted to make to improve performance:

  1. I’ve moved to a great new garbage collection algorithm which is much faster and has a smaller memory footprint in both RAM and codespace.
  2. I’ve changed the data model to allow Microvium values to embed native pointers on 16-bit platforms.

The Old Data Model

To understand the new data model, it will help if I first explain the old one, so you can see how they compare.

Microvium “slots” are 16-bit. The term “slot” is what I’m using to refer to a memory location that holds a value. For example, a variable is a “slot”, and an object property has a slot for its value (and another slot for its key), etc.

In the old data model, Microvium reserved the top 2 bits of the 16-bit value to be a tag, leaving a 14-bit space for data associated with that tag.

The 2-bit tag tells you how the 14 bits should be interpreted:

  • Tag 00₂: the data is a 14-bit signed integer
  • Tag 01₂: the data is a 14-bit offset into heap memory
  • Tag 10₂: the data is a 14-bit offset into data memory
  • Tag 11₂: the data is a 14-bit offset into the bytecode

Having values fit into 16 bits is convenient for the kind of device that Microvium targets: small microcontrollers with < 64kB of memory. Many of these are 16-bit architectures, because 16 bits is all you need to address all your memory, and moving to 32 bits increases the silicon size and power consumption.

14-bit integers are large enough for many purposes, such as counters, loop variables, indexes, byte manipulation, etc. For situations where more than 14 bits are required, Microvium has the above-mentioned “offset” values which point from the 16-bit slot to a larger memory allocation, somewhere in ROM or RAM. These larger memory allocations can hold any number of other value types, such as arrays, strings, 32-bit integers, function bytecode, etc.

Depending on the characteristics of the referenced data, they could be stored in either ROM or RAM. This design uses offsets rather than explicit pointers for 2 reasons:

  1. Not all devices can fit a pointer into 14 bits. In fact, most can’t.
  2. When a pointer is stored in ROM (i.e. in the bytecode image), it can’t know ahead of time what its address is (since the image could be placed anywhere in ROM).

Side point: what I’ve called “data memory” and “heap memory” here are both memory regions in RAM, but data memory is permanently allocated while heap memory is garbage collected. This is analogous to the difference between the following two allocations in C:

MyStruct myVariable1; // Permanently allocated
MyStruct* myVariable2 = malloc(sizeof(MyStruct)); // Could be freed later

This 16-bit data model seems pretty clever (or at least to me it did), but there are a number of issues with it and optimizations we can make. The first issue is that each of these respective offsets can only be 14 bits, giving an addressable space of only 16kB. On the whole, I don’t mind Microvium targeting use cases where applications can’t exceed 16kB, but it does feel awfully tight. This is compounded by the fact that the engine itself compiles to about 7kB1, so the engine is necessarily quite chunky relative to the largest bytecode programs it can run.

The other issues are easiest to see when comparing it to the new data model:

The New Data Model

The new data model uses a number of clever tricks to improve both performance and addressable size on 16-bit architectures.

In the new model, the tag is not a fixed size. It is 1 or 2 bits, and now in the lower position of the word rather than the upper.

If the lowest bit is a 0, the word is interpreted as a 16-bit pointer:

This makes the assumption that pointers are always even, which is a good assumption since Microvium heap allocations are always 2-byte aligned. This means that once we’ve checked the low bit, we don’t need to do any bit manipulation to extract the pointer value: the value is already a 16-bit pointer.

This means that — at least on 16-bit architectures — a native pointer can be stored directly in a memory slot (variable) without any need for encoding and decoding. This speeds up a lot of things, but especially the garbage collector (which I’ll talk about in a moment). This also means that Microvium can now address 4x the RAM that it could previously, since it can address 64kB of space.

This speedup is quite significant. The heap in Microvium is not allocated contiguously from the host (doing so would make it difficult to expand the heap dynamically when more space is needed), so resolving a 14-bit heap offset, as per the old scheme, involved a function call with fairly complex logic to determine what heap fragment the offset belongs to, and to find the allocation being referenced. With the new scheme, this all goes away.

Bytecode (ROM) Pointers

Not everything can be addressed with 16 bits, even on a 16-bit architecture. In particular, ROM is not necessarily addressable with 16-bit pointers, since ROM may be larger than RAM, or it may be stored in a completely different address space (or even on an external device such as a serial flash).

Microvium intends to cater for these common situations, so the next kind of value that can be stored in a slot is as follows, having the lower 2 bits with the value 01₂ and the upper 15 bits being an offset into the bytecode image:

Again, this makes use of the fact that bytecode addresses will be even (this was not necessarily true before, but I’ve made it true in this latest version). So to convert this value to an offset, we just have to shift right by 1 bit.

This only gives us up to 32kB of addressable bytecode ROM, but this is still pretty good. The bytecode file itself can be up to 64kB in size, but the ROM segment is limited to 32kB. I think this is more than acceptable for the kind of use cases that Microvium is currently intended for.

This leaves us with the last tag, 11₂, to represent integers:

We still only have signed 14-bit integers, like before. But even for these, this scheme is more efficient than the old! In the old scheme, to decode a 14-bit integer from the 16-bit word, we needed to perform a sign-extension, since the upper 2 bits (tag bits) were always zero, even when representing negative values. A 14-bit-to-16-bit sign extension typically involves multiple operations. In this new scheme, we need only perform a single signed-right-shift by 2 bits, and the sign will automatically be correct!
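
Putting the three cases together, the decoding logic amounts to something like the following sketch (illustrative only: the real engine folds this into its instruction dispatch, and the pointer cast assumes a 16-bit platform):

#include <stdint.h>

void exampleDecode(uint16_t v) {
  if ((v & 1) == 0) {
    // Lowest bit 0: v is already a native 16-bit pointer into RAM
    uint16_t* p = (uint16_t*)(uintptr_t)v; // no decoding required
    (void)p;
  } else if ((v & 3) == 1) {
    // Tag 01: offset into the bytecode image; addresses are even, so one
    // right-shift recovers the 15-bit offset (up to 32kB of ROM)
    uint16_t bytecodeOffset = v >> 1;
    (void)bytecodeOffset;
  } else {
    // Tag 11: 14-bit signed integer; a single arithmetic right-shift
    // recovers the value with the sign already correct
    int16_t n = (int16_t)v >> 2;
    (void)n;
  }
}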

The New Garbage Collector

I like it when code gets smaller instead of bigger, and the new garbage collector (GC) is one of those cases. And with the new data model, the garbage collector is an order of magnitude faster than it used to be.

All the principles and objectives from my previous garbage-collection post still apply. In fact, I was well into the new design by the time I finished the implementation of the old GC, but I wanted to get the alpha release out before embarking wholesale on a major overhaul. This animation from last time summarizes the kind of environment that Microvium is meant to be operating in, where the circles represent different modules or tasks, and only one or a few at a time are expected to be active/scheduled, taking center stage and consuming their peak memory2:

The key takeaway from the aforementioned post is that the memory consumed by a task for only a short burst is typically significantly cheaper than the amount of memory it permanently reserves. This is because the average permanent memory per task is multiplied by the total number of tasks, while average peak memory per task is only multiplied by the number of simultaneously-active tasks running at their peak.

Single-threaded environments, of which I’m a huge fan, are even better for this since they enforce that there is only one “simultaneously-active” task — micro-tasks are scheduled serially and so each can consume all the resources for a short time, as long as they clean up thoroughly before the next microtask in the queue. If microtasks are kept short (e.g. less than 1ms) then responsiveness is still good and the system can still be “real-time-enough” for many real-world applications.

The new garbage collector is based on Cheney’s Algorithm (wiki). The general idea is this:

  • When it’s time to garbage collect, the GC finds reachable (live) allocations and moves them to a new virtual heap.
  • It then discards the old heap all at once.

The coolest new feature of the new Microvium GC is that it costs nothing at all to free an allocation! Dead allocations are simply never visited by this algorithm, and just get freed all in one go when the algorithm discards the old heap.

All living allocations have a cost during collection (this is true of all GC algorithms3). But also recall from my last post on GCs that microcontrollers can have 1000x more processing power relative to memory4 than desktop machines, so the fact that living allocations incur some (small) amount of processing time during every collection is probably not as impactful as the fact that living allocations continue to consume memory after the collection. In other words, don’t worry about how much CPU overhead a garbage collector has, and instead worry about designing your scripts such that their idle memory consumption is small, as you should be doing regardless.

This algorithm makes it fairly cheap to make tons of temporary allocations for intermediate computations or for passing data between functions, etc. And then when the task’s turn completes and the GC is run to clean up, these temporaries are discarded with no processing or memory overhead.

Since allocations in the Microvium heap are contiguous5, creating new allocations is also cheap — it’s almost just bumping a pointer forward6.

In the following animation, blocks represent allocations, and a block turns gray to represent that the allocation has become unreachable7 — blue blocks are live (reachable) allocations. The program8 can allocate quickly and cheaply by just bumping the allocation cursor/pointer forwards. Dead objects remain on the heap until collection. The collector then copies the live allocations to a new space and the old space is discarded in one fell swoop.

With the new data model, tracing the reachability graph is almost trivial: for each variable, check if the lowest bit is a zero. If it is, then the variable holds a 16-bit pointer to a reachable allocation. If not, the variable holds anything else — an integer or a pointer to ROM (which doesn’t need tracing). Reachable objects are copied into the new memory space and their internal slots are in turn investigated the same way.

Collection with this algorithm is quick and proceeds in a single pass with constant memory overhead — no mucking about with mark bits, free lists, work queues/stacks, etc.
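
In sketch form, the whole collection looks something like this (names are illustrative; the sketch pretends every allocation holds only value slots and has an even size, whereas the real engine consults the allocation type in its header; pointer casts assume a 16-bit platform):

#include <stdint.h>
#include <string.h>

#define FORWARDED 0xF000u // assumed marker for an already-moved allocation

static uint8_t* writeCursor; // bump cursor in the new (to-) space
static uint8_t* scanCursor;  // Cheney scan cursor in the new space

static uint16_t copyValue(uint16_t v) {
  if (v & 1) return v; // integer or ROM reference: nothing to trace
  uint16_t* p = (uint16_t*)(uintptr_t)v;
  if (p[-1] == FORWARDED) return p[0]; // already moved; slot 0 holds the new address
  uint16_t size = p[-1] & 0x0FFF;      // payload size in bytes, from the header
  uint16_t* newP = (uint16_t*)(writeCursor + 2);
  newP[-1] = p[-1];      // copy the allocation header
  memcpy(newP, p, size); // copy the payload into the new space
  writeCursor += 2 + size;
  p[-1] = FORWARDED;     // leave a forwarding record in the old space
  p[0] = (uint16_t)(uintptr_t)newP;
  return p[0];
}

// `roots` are the entry points (registers, globals, stack slots);
// writeCursor must already point at the start of a fresh, empty space.
void collectGarbage(uint16_t* roots, int rootCount) {
  scanCursor = writeCursor;
  for (int i = 0; i < rootCount; i++)
    roots[i] = copyValue(roots[i]);
  while (scanCursor < writeCursor) { // copying may append more work
    uint16_t size = ((uint16_t*)scanCursor)[0] & 0x0FFF;
    uint16_t* slots = (uint16_t*)(scanCursor + 2);
    for (uint16_t i = 0; i < size / 2; i++)
      slots[i] = copyValue(slots[i]);
    scanCursor += 2 + size;
  }
  // The old space can now be discarded in one go
}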

Bonus: Array and Object compaction

As a bonus, this new implementation of the GC also compacts arrays and objects on the fly.

Dynamically-sized arrays consume more space than needed so that appending to them is quick: every time they run out of space, their capacity is doubled to make room for twice as many elements. The GC algorithm truncates the capacity of these arrays on the fly, so they use the smallest possible amount of space.

The case is similar for objects. Dynamic objects in Microvium are represented in memory by a linked-list of property cells (key-value pairs with a next pointer), so that appending new properties is quick. But the linked list consumes twice the amount of space of an equivalent contiguous array, so the GC compacts these linked lists into contiguous property arrays on the fly. This gives the best of both worlds, making it efficient to link a new property cell onto an existing object, but having that property cell subsumed into the contiguous array during GC compaction so that each property takes the smallest possible space while dormant9.
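
To make the linked-list form concrete, a property cell has roughly this shape (the name and layout here are illustrative):

#include <stdint.h>

// A property cell in linked-list form (illustrative sketch)
typedef struct PropertyCell {
  uint16_t next;  // pointer to the next cell in the chain
  uint16_t key;   // 2-byte property key
  uint16_t value; // 2-byte property value
} PropertyCell;   // compaction keeps only key and value: 4 bytes per property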


  1. This is measured on an msp430, with floating-point support disabled. Presumably this number will grow over time 

  2. Read the full post for a more complete explanation 

  3. But can be mitigated on desktop/server machines by the use of generational garbage collectors, an unnecessary complexity for MCUs. 

  4. To put it another way, consider that many MCUs have the CPU power to loop through their entire RAM in the order of milliseconds, while desktop-class machines would do the same in the order of seconds 

  5. At least, contiguous within the large chunks of memory allocated from the host as the heap grows 

  6. Bumping the allocation pointer forward, checking for overflow, and assigning the 2-byte allocation header. 

  7. Unreachable means that there is no way for your program/script to access it anymore, so it is subject to garbage collection 

  8. The program is called the mutator in GC parlance. 

  9. A property in compacted form is 4 bytes: a 2-byte key and a 2-byte value. A property in linked-list form consumes extra bytes for the linked list pointers and the allocation header of each cell. 

Microvium Boost
It's like magic

TL;DR: This Microvium plugin (in development) optimizes Microvium bytecode by statically determining which variables and properties are accessible and how they might be accessed (read vs write), to decide whether to safely store them in ROM or remove them completely from the bytecode.


With the recent alpha release of Microvium, I’ve since turned my attention to a complementary piece of the puzzle, which I’m calling Microvium Boost (the name may change).

Microvium Boost is an optimization plugin for Microvium1. It hooks into the Microvium pipeline to optimize a snapshot of bytecode. As a recap, the snapshot is the file that you would download to the target MCU; it’s a capture of the full running state of the Microvium virtual machine — see the Concepts page in the Microvium documentation.

Like a typical executable file, a bytecode file contains separate sections for data variables versus constants and functions. I’ve called these sections data and rom2 (these are closely analogous to the .data and .text segments produced by a C compiler). When a virtual machine is restored (loaded) on the target MCU, the data section is copied into RAM so the program can read and write it, while the rom section is accessed directly from the bytecode image in ROM during execution (assuming you store the bytecode in ROM).

If I can summarize what Microvium Boost does in a single sentence, it’s this:

Microvium Boost determines the best memory section for each element in the snapshot bytecode, or if it can be completely discarded.

For some things, the best memory section is obvious. For example, function code is immutable and will always be stored in ROM. In the future, frozen objects could also probably be put straight into ROM without any analysis, but Microvium does not currently support freezing.

For all other variables and objects in the snapshot, we can store them in ROM if they are not going to ever be mutated at runtime3. It’s the job of Microvium Boost to determine this ahead of time through static analysis.

A related problem that Microvium Boost solves is choosing whether an element in the snapshot needs to be kept at all. For example, it removes functions that are never called, and objects and variables that are never accessed.

This is a notoriously difficult problem to solve accurately. Take a look at a previous post of mine where I demonstrated that Webpack’s static analysis for tree shaking is quite easy to fool into producing erroneous results.

The best way to show the kind of thing that Microvium Boost can do is by a few examples.

Examples

Moving values to ROM

The following is a simple example4. Here we have a script that creates an object with two properties and then exports a function to the host5 which returns one of those properties.

var obj = { // Stored in ROM by optimization
  x: 5,
  y: 6  // Removed by optimization
};

function run() {
  var temp = obj.y;  // Removed by optimization
  return obj.x;
}

// Export the `run` function to be called by the MCU host
exportValue(0, run); 

In the above case, Microvium Boost determines that object obj is not mutated (not just the variable, but the object itself), and that property y and variable6 temp are not used at all in a particular application.

It’s worth highlighting that var temp = obj.y does not count as a use of property y, since temp is not used. What counts as “usage” is determined lazily, with only those values used in I/O being real usage7, and all other intermediate values only being tentatively needed in case they aid in the computation of I/O.

Built-in Functions and Objects

There is another optimization that Microvium Boost does in the previous example. There are actually a number of built-in library functions that are also culled from the bytecode because they aren’t used (for example, Array.push is not used anywhere here).

This is quite an important feature of Microvium Boost: the ability to have a rich common library of useful functions that any particular script might employ, and have the unused parts of it dropped from the snapshot on a case-by-case basis depending on usage.

Function Parameters

Here’s another interesting example, with the optimization results noted as comments:

var w = 1; // Removed by optimization
var x = 2;
var y = 3;
var z = 4; // Removed by optimization

function run() {
  // Arguments `w` and `z` are removed by optimization
  var result = bar(w, x, y, z); 
  return result;
}

// Parameters `a`, `d` and `e` are removed by optimization
function bar(a, b, c, d, e, f) {
  return b + c + f;
}

exportValue(0, run);

These optimization opportunities might seem obvious when you read the code, but it’s actually quite a difficult problem to solve in the face of multiple possible callers and ambiguity in the function target. To highlight this complexity, consider if we changed run to have the following implementation, which also optimizes just fine with Microvium Boost:

function run(b) {
  var f;

  if (b) f = bar;
  else f = foo; // (Assuming that we define a function `foo`)

  // Arguments `w` and `z` are removed by optimization
  var result = f(w, x, y, z); 
  return result;
}

In this example, the call f(w, x, y, z) might either be calling bar or foo, and it’s impossible to know ahead of time which it is because it depends on the value b passed from the host. If we (the optimizer) remove a parameter from bar, we equally need to remove it from foo so that the parameters for f are consistent. And in general, bar and foo might similarly be called from multiple call sites in the program, so removing a parameter from bar may have implications that affect calls to foo elsewhere.

How does it work?

Microvium Boost is a whole-program optimizer that works differently to any other optimizer I know of. I’ve been working on the concepts behind the algorithm for some years now — they are borrowed from the type inference algorithm for MetalScript.

Naturally, an algorithm like this is beyond what I can explain properly in a blog post, but here is the general idea.

There are two steps to the algorithm:

  1. Determine the dependency relationship between elements of the snapshot (i.e. the program), by stepping sequentially through the source code8. For example, a call operation is related to the corresponding function(s) it calls, such that both have consistent expectations about the parameters to be passed.
  2. Use the relationships to determine consistent facts about the snapshot. For example, to decide whether a particular parameter can be culled from a particular function signature.
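
As a very rough sketch of the second step, reduced to a plain reachability pass (the real analysis infers richer, mutually-consistent facts than a single boolean, and all names here are made up):

#include <stdbool.h>

#define MAX_NODES 1024 // illustrative bound

// edges[a][b] means: if element `a` is required, element `b` is required too
static bool edges[MAX_NODES][MAX_NODES];
static bool required[MAX_NODES];

// Propagate "required" outward from the IO roots (e.g. the instruction that
// returns a value to the host); anything left unmarked can be culled.
void propagate(const int* roots, int rootCount, int nodeCount) {
  int stack[MAX_NODES];
  int top = 0;
  for (int i = 0; i < rootCount; i++) {
    if (!required[roots[i]]) {
      required[roots[i]] = true;
      stack[top++] = roots[i];
    }
  }
  while (top > 0) {
    int n = stack[--top];
    for (int m = 0; m < nodeCount; m++) {
      if (edges[n][m] && !required[m]) {
        required[m] = true; // mark on push so each node is visited once
        stack[top++] = m;
      }
    }
  }
}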

The following diagram is the resulting dependency graph for the “Function Parameters” example earlier. I’ve left the labels off the graph just to give a general sense of it9.

In simplified terms, each node in the graph corresponds to an element in the snapshot, such as an instruction, parameter, property, or global variable. The arrows are the inferred directional relationships between these elements. For example, an instruction which reads a global variable is related to the variable that it reads (if the former is not culled, the latter cannot be culled).

Nodes colored solid blue are those determined to be required in the final, optimized snapshot, while the empty circles represent elements that can be culled (much of the culled program is off-screen).

In this particular diagram, as annotated below, the solid-blue root node on the left is the particular IL instruction that returns control to the host: it passes the final value to the host (it is a point of output IO), and so the instruction that generates result cannot be culled from the snapshot. The dependency graph then pulls in a cascade of other operations and parameters that are transitively required in order to produce the single return value, ending on the far right of the graph with the global variables x and y, which are the ultimate leaf dependencies required to compute the aforementioned output return value to the host. The rest of the nodes are not too important to this discussion at a high level except that they connect the result to the variables x and y from which the result is ultimately derived.

Here’s the corresponding source code again for reference:

var w = 1; // Removed by optimization
var x = 2;
var y = 3;
var z = 4; // Removed by optimization

function run() {
  // Arguments `w` and `z` are removed by optimization
  var result = bar(w, x, y, z); 
  return result;
}

// Parameters `a`, `d` and `e` are removed by optimization
function bar(a, b, c, d, e, f) {
  return b + c + f;
}

exportValue(0, run);

If you change the source code by removing return result, Microvium Boost will indeed cull global variables x and y, since that whole island of the dependency graph is anchored only by the data required by the return operation.

It’s Magic

I subtitled this post “It’s Magic” because of Microvium Boost’s ability to deal with dynamic information, which seems almost impossible to me, despite the fact that I wrote the code for it!

We could make the example a bit more dynamic to illustrate this by making the call to bar indirect, as in the following code:

var w = 1;
var x = 2;
var y = 3;
var z = 4;

function run() {
  var result = call(bar, w, x, y, z);
  return result;
}

function call(func, arg1, arg2, arg3, arg4) {
  return func(arg1, arg2, arg3, arg4);
}

function bar(a, b, c, d, e, f) {
  return b + c + f;
}

exportValue(0, run);

The inferred dependency graph for this looks roughly the same, with an extra layer of nodes between the root return operation and the leaf global variables from which the returned information is ultimately derived. And just like before, it determines that w and z are not used, along with the corresponding arguments to call and bar.

Why Microvium Boost?

This kind of analysis is expensive to compute, but I think it’s well suited to the kind of scenario that Microvium is targeting, where the machine performing the optimization is thousands of times more powerful than the microcontroller device on which the optimized bytecode will run; where every byte and every instruction counts.

I think Microvium Boost enables a style of script programming that was previously infeasible for these kinds of situations.


  1. Microvium Boost is not part of the open-source Microvium codebase 

  2. There is actually a third section in Microvium which you don’t see in a C compiler, which is called gc and is the initial state of the heap in the same way that data is the initial state of the global variables. The principles that apply to data extend to gc 

  3. I believe the XS engine from Moddable takes a different approach, which is to store everything in ROM *until* it’s mutated at runtime, but this requires extra runtime overhead and indirection 

  4. All these are working examples, mostly copied from the Microvium Boost automated test cases 

  5. The host of a typical Microvium program will be the surrounding C firmware on which the VM runs. 

  6. My choice of var over let or const for these examples is for readers who aren’t familiar with JavaScript, and may mistake the meaning of const or be confused by the name let.

  7. Values used to discriminate control paths are counted as a special case of I/O, since most control paths eventually lead back to the host. 

  8. Actually, by stepping through the instructions in the intermediate language (IL)  

  9. Just ask me if you want to see the full, labeled graph. 

Microvium Alpha Release!

Microvium version 0.0.9 is published on npm, marking the first alpha release of Microvium!

I’m pleased with how quickly this has come together in the 4 months since I started the project back in February. Be ready for good things still to come! Subscribe to my blog if you want to be notified of new developments.

Please feel free to take a look, play around with it, and let me know your thoughts.

Here I have it running on a 16-bit device with 16 kB of ROM1 and 2 kB of RAM. This is roughly the scale of device that Microvium is primarily focused on.

The basic functionality is complete, but be aware that it’s not intended for production systems as of yet — I’m calling this an “alpha” release because the functionality is present but the testing is not complete and there are likely a few inevitable wrinkles still to be surfaced and ironed out.


  1. This device actually has 16 kB of FRAM, not flash, but I’m not using the read-write capabilities of the FRAM in this example. 

Microvium Garbage Collector

Edit: This garbage collector is superseded by the new-and-improved one. The design principles and objectives covered in this post still apply.


The Microvium garbage collector is here!

The garbage collector handles the automatic freeing of unused heap memory in the VM.

Microvium makes some interesting and perhaps unique tradeoffs with garbage collection which I’ll focus on in this post.

I’ll divide this post into two parts to cater for different audiences: I’ll start with a summary of the features at a high level, for those who don’t care about the details and design decisions, and then I’ll go into more detail.

Summary of Features

In no particular order, here are some features of the Microvium memory management system:

  1. Allocations are dynamically-sized, each having a 2-byte allocation header.
  2. Allocation headers are optional in some circumstances, so allocations can be even more compact1
  3. Allocations can be as small as 2 bytes2. For example, single-character strings are stored as 2-byte allocations, having a character3 and an implicit null terminator.
  4. The garbage collector is compacting (wiki: Mark-Compact). So, there is no heap fragmentation.
  5. Allocating on the heap is cheap and is constant-time — there is no searching through free lists.
  6. Data structures used for garbage collection are allocated only during garbage collection — no structures are persisted while the VM is idle.
  7. Memory is acquired from the host in chunks as-needed, so users don’t need to decide ahead of time how much memory they want to commit to a particular VM.

I’ll add in two limitations for completeness:

  1. The memory for a VM is currently limited to 16 kB
  2. A VM temporarily uses about twice its memory during the garbage collection phase.

Design Considerations

Microvium is for MCUs

While Microvium can and does run fine on desktop-class machines, it’s optimized primarily for microcontrollers. Microcontrollers have quite different characteristics that influenced the design decisions of the garbage collector (GC):

RAM is small relative to processing power.
A 3 GHz desktop computer might have 16 GB of RAM, while a 3 MHz MCU might have 16 kB of RAM — a thousand times less processing power but a million times smaller RAM (obviously this is a very rough calculation to illustrate a point). This is relevant to the GC design because it should be heavily biased towards having a small RAM footprint, since RAM is relatively more expensive than CPU on a microcontroller.

There is no separate CPU cache
Most MCUs I’ve dealt with have no cache — all memory is accessed in constant time4, and often in a single instruction cycle. This affects the design of a GC because things like locality, order-of-access, and prefetching are not factors that need to be considered.

Microvium expects small allocations

Microvium is optimized for 16-bit platforms. Its native value type is a constant-sized 16-bit dynamically-typed value. Anything that doesn’t fit in this small size needs to be heap-allocated, including strings, 32-bit integers5, floats, arrays, objects, etc.

Since Microvium expects these kinds of values to be frequently allocated on the heap instead of the stack, the heap is designed to be fast to allocate onto, allocations shouldn’t incur a lot of memory overhead, and allocations shouldn’t require a lot of padding.

For this reason, Microvium allocations are only 2-byte aligned, so that the most padding ever required will be 1 byte, and only for odd-sized allocations.

Microvium is not the main show

Another significant design consideration is that a Microvium program is not expected to typically be the main program running on a device. Rather, it is optimized for situations where there is an existing firmware application and a Microvium script is only consulted occasionally. The rest of the time, the script is “dormant” and should occupy as few resources as possible.

I think this point is important, so let me explain in a bit more detail…

The way I visualize this design consideration is to think that any one Microvium program is only one out of a number of “tasks” or “components” that exist in the firmware application as a whole. It doesn’t matter whether the other components of the application are other Microvium programs or are C modules — the design considerations are the same.

For argument’s sake, let’s say that some application has 12 components. In the following diagram, I’m representing the RAM space as the dark circle, and each of the colored circles is a “component” occupying RAM, and the space in the middle is vacant memory. Let’s say that each component in this diagram is dormant, but is designed to become active in a particular scenario.

You can think of these as 12 “modes” or “activities”. To use a concrete example, consider an application with Bluetooth support, where one of these 12 components is the part of the firmware that handles Bluetooth communication. Most of the time, there’s no Bluetooth activity, and so the component of the application which handles Bluetooth communication is dormant.

When there is some Bluetooth activity, we can think of it as if the Bluetooth component takes the stage for a short time to do its work, and then afterward, packs itself up and goes back to its idle/dormant state. While the component is active, it will likely consume more memory, as it juggles Bluetooth messages in memory, or whatever else it needs to perform its activities. We hope that in most cases, when it’s done processing the Bluetooth event, it can pack up and release memory before waiting dormant until the next event. While it’s packed up, the “stage” is clear (there is free memory) for other components to wake up and do what they need to do to respond to other events as they come.

This model works best when you’re operating in a single-threaded environment.

This is the kind of situation that Microvium has been optimized for.

Why do I bring this up?

In this model, the memory that an average component uses while dormant is 12 times more costly than the additional memory it uses while active, because the average dormant memory is multiplied by the number of dormant components, while in this model, there is only one active component at a time.

While this example is an oversimplification, I think it still illustrates the point I’m making: for applications with dynamic memory allocation, memory held by a component for a long time is typically more expensive than memory held for a short time (see also Short-lived memory is cheaper). Microvium can be used in situations where this is not the case, but it’s optimized for situations where this is the case.

So, a VM that incurs a lot of memory for a very short time is considered better than one that has much less memory but holds onto it forever. This principle applies to any part of the VM, such as the garbage collector, or the Microvium engine as a whole, or user-level JS code. But here we’re talking about the implication for the GC design: if the GC has memory overhead that it holds for a long time, this is more costly than memory overhead for a short time.

What kind of memory overhead is related to the GC? Here are some examples:

  • Some GC algorithms store extra data in the allocation headers6. This is permanent overhead proportional to the dormant heap size of the VM, and so is expensive. Microvium does not do this.
  • Some managed heaps can get fragmented. Even if fragmentation only caused the VM to consume an additional 20% dormant memory, by our above example, this is roughly equivalent to consuming 20%*12 = 240% additional peak memory. So fragmentation is expensive. Microvium uses a compacting collector to fully eliminate fragmentation; to “squeeze” out all the unused spaces before a VM goes dormant.

On the other hand, the Microvium garbage collector uses quite a lot of memory while collecting. In fact, during a typical collection, the amount of memory consumed by the VM may be double for a brief time.

This is because the Microvium collector is actually a semispace collector. At the time of collection, it allocates a whole new heap for the VM, and then copies reachable objects from the old heap to the new one, before discarding the old heap.

The above design considerations are critical to understanding this design choice: for situations where a Microvium script is a small component in a larger firmware system, and is dormant during most of the firmware’s activities, the overwhelming metric to optimize is how well the VM can be “packed up” while it’s dormant, and Microvium achieves this with great success.


  1. Property cells, for example, are 6 bytes each and do not use an allocation header 

  2. Excluding the allocation header 

  3. Strings are utf-8 encoded 

  4. For internal SRAM 

  5. Or any integer larger than 14-bits 

  6. Such as mark bits and forwarding pointers 

Microvium Status Update
June 2020

I’ve made a lot of progress on Microvium over the past month and will share some of the details here for those who are interested. I think it’s really close to being ready for an alpha release, with the garbage collector being the only remaining piece of work before it can start being used.

Summary

In no particular order, here are some of the recent developments, which I’ll discuss in more detail below for those who are interested.

  • Dynamically-sized arrays!
  • Bytecode disassembly (now you can see what bytecode Microvium is producing!)
  • Code coverage analysis of the native engine implementation
  • Control over integer overflow behavior

Additionally, Microvium now has a lot more “flesh” than it used to — a wide variety of scripts now run and compile correctly which didn’t a month ago, making the engine usable for real-world scripts.

For readers interested in the details, read on! For everyone else, thanks for stopping by, and see you next time!

Dynamic Arrays

Microvium now has the much-needed support for dynamically-sized arrays.

Arrays are simple to create. For example, the following code creates an array of 3 numbers:

arr = [1, 2, 3];

As with full JavaScript, you can change the length of the array simply by assigning to indexes beyond the current upper bound of the array, for example arr[3] = 4 to extend the array to 4 elements. Nice and simple. (You can also assign to the length property of the array, or call arr.push)

Arrays are implemented internally using a structure like the following diagram, where each square is a 2-byte word (this example array consumes 20 bytes of memory to hold 4 numbers):

An array consumes two allocations on the heap: one which has the metadata describing the array, and another allocation for the array elements, with the former pointing to the latter.

Originally, I was planning to use a single allocation, but the issue is that when arrays grow, they need to be reallocated, which involves updating all the pointers to them. This is possible, and I was brainstorming some ways of doing so, but in the end the cleaner solution seemed to be the one above with two distinct allocations, and only one of them needing to grow.

To make the growth of arrays efficient, arrays have both a “capacity” and a “length”. The capacity of an array is the number of available slots in memory before the array will need to be reallocated. The capacity is invisible to the user. The length is the .length property of arrays and accessible to the user.

For array literals, the allocated capacity is exactly the length of the literal array. For example, the literal [1, 2, 3] has both a length and a capacity of 3. Thereafter, every time an array’s capacity needs to grow, it will be reallocated with twice the previous capacity. In our previous example, extending the array from 3 to 4 elements causes the capacity to go from 3 to 6, resulting in 2 words of padding (the 2 gray squares in the diagram). Until adding a 7th element (or higher index), the capacity would not need to grow again.
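
The growth rule itself is tiny; something like the following sketch (illustrative, not the engine’s actual code):

#include <stdint.h>

// Doubling rule for array capacity growth
static uint16_t growCapacity(uint16_t capacity, uint16_t requiredLength) {
  if (capacity == 0) capacity = requiredLength;
  while (capacity < requiredLength)
    capacity *= 2; // double until the required length fits
  return capacity;
}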

Advanced readers may observe that the memory structure here has no space for non-index properties. This is an intended design limitation of Microvium, that you can’t add arbitrary properties to arrays as you can in JavaScript. For example, you can’t say arr.p = someValue. I think this is totally acceptable for most applications.

I’m quite happy with this design. Initially, the 8-byte (4-word) overhead really plagued me, but I’ve come to terms with it, and certainly, this design is very clean.

Bytecode Disassembly

A really great feature that I added to Microvium recently is the ability to decode snapshot bytecode and, in particular, to generate disassembly for bytecode files. This is great because it allows us to really get a close look at the structure of a snapshot bytecode file.

Let’s say we have the following script:

vmExport(42, run);
function run() {
  console.log('Hello, World');
}

We can get a bytecode disassembly for this by running the following command in the terminal:

microvium script.js --map-file bytecode.txt

I called it a “map-file” because it creates a human-readable map of the bytecode. You can see the generated output here if you’re interested.

The main thing is that it allows us to see byte-for-byte what’s occupying a bytecode file. Remember that these bytecode files fully describe the virtual machine state at the time the snapshot was taken, including all the variables, functions, and heap state. This is one of the reasons I developed this feature: so I can see inside the working state of a virtual machine, taking snapshots at various points to see a complete breakdown of memory usage.

I think this feature will also help to demystify snapshots for end users, since it makes it easy to see what’s inside them.

Code Coverage Analysis

One of the concerns that was plaguing the back of my mind was that I had done a lot of development work for the native engine but had little idea how much of it was tested and working correctly. For something as critical as a virtual machine, this was unacceptable.

I thought that something like code coverage would probably resolve this. I have a test suite with a bunch of self-testing Microvium scripts, and it would be great for some off-the-shelf tool to tell me how much of my C code was exercised by these tests.

Unfortunately, I couldn’t find any off-the-shelf solutions that were adequate while not being out of my price range. But the solution I came up with is even better, if I may say so myself!

It’s not the prettiest, but you get used to the syntax quite quickly. Here’s a sample from the C code that shows what I’m talking about. This is the code that handles the ObjectSet opcode, which is for setting a property on an object. Take a look at the CODE_COVERAGE macro calls:

Basically, I’ve generously sprinkled these CODE_COVERAGE markers all over the code. When running in production, these marker macros don’t expand to anything and so don’t consume resources. When running in the testing environment, these macros expand to function calls which record that each marker is hit at runtime.
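
A sketch of how such a marker can be defined (the real macros also carry the auto-assigned IDs and the hit-recording machinery, so treat the names here as illustrative):

#ifdef COVERAGE_MODE // assumed build flag, for illustration
  void coverageHit(int id); // the test harness records that marker `id` ran
  #define CODE_COVERAGE(id) coverageHit(id)
#else
  #define CODE_COVERAGE(id) // expands to nothing in production builds
#endif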

But wait! That’s not all!

I have a script which parses the C code in search of these markers, and automatically fills in information about each:

  • It automatically fills in the unique identifier for each marker (in the selected line in the example, this is 124). I used identifiers instead of just relying on line numbers because they remain consistent during code movement. The script automatically cleans up duplicate or missing identifiers.
  • It automatically appends the // Hit or // Not hit comment, depending on whether the marker is hit or not during the tests (and you can see in the screen capture that I’ve tweaked my IDE to colorize these distinctly).

But even better than typical code coverage tests, we extend the idea to test for value cases as well. For example, there is a bytecode opcode for loading some common literal values, and there are 8 different possible literals that it can produce, such as null, undefined, true, false, etc. (the exact values are not relevant to the point I’m about to make). These options might have been implemented in a switch statement (handling the null case, the true case, etc), in which case a CODE_COVERAGE marker on each switch case would tell us whether the particular literal value has been tested. However, it was more efficient to implement this as a table of literal values, with a simple lookup. To get test coverage data for this kind of implementation, I created a different macro which I call TABLE_COVERAGE:

The details are not important. What’s relevant here is that the test runner has identified that there are 8 values in the lookup table, and that the code is hitting all 8 of them (see the highlighted portion of the image above). This is really valuable information, to know that there exists some code in the test suite which is exercising all these options, even though they don’t appear as distinct code paths.

Integer Overflow Control

Microvium is all about being micro — having a small footprint in terms of RAM and ROM. When it comes to squeezing the most out of the ROM footprint of the engine itself, the overflow behavior of numbers in C and JavaScript is frustrating, to say the least.

In JavaScript, all number operations are conceptually performed on 64-bit floating point numbers. In a C implementation of these semantics, we don’t need to store the full 64-bit float. The Microvium engine represents the number 1.0 with a 14-bit integer and the number value 2_000_000_000 (2 billion) with a 32-bit integer in memory. It’s not until you exceed the signed 32-bit range that Microvium needs to resort to the full floating-point representation in memory, and the expectation is that this will almost never happen in practice.

However, the challenge is that most operations on numbers theoretically need to check for overflow in order to correctly do the promotion from int to float in the rare case it occurs, and this takes up precious program space as well as degrading performance.

To make this more concrete with an example, consider that the smallest signed 32-bit integer is -0x80000000. If we negate this using the operator -x, in JavaScript we should get the number 0x80000000, but this is no longer within the signed 32-bit number space, and so in C, we would consider this to be an overflow. The overflow in C is technically undefined behavior, but in practice, -0x80000000 will almost always give us exactly what we started with, which is -0x80000000.

Let me say that again: in C, -(-0x80000000) == -0x80000000, when dealing with signed 32-bit integers.

This is the only integer in C that can overflow an int32 during negation (-). To implement the correct JavaScript semantics in Microvium, we need to have a test for exactly this case.
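
As a sketch (not the exact Microvium code), the checked negation looks something like this:

#include <stdint.h>
#include <stdbool.h>

// Returns false if the negation overflows the int32 range, in which
// case the engine must fall back to a floating-point result
static bool negate_int32(int32_t x, int32_t* result) {
  if (x == INT32_MIN) return false; // -(-0x80000000) would overflow
  *result = -x;
  return true;
}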

To make things worse, C does not provide any standard way of checking for overflow. GCC provides some language extensions for this, but a cross-platform engine can’t assume they exist.

If this were the only overflow case, then a few extra checks and branches wouldn’t be worth worrying about. But the reality is that many or most number operations can overflow, and they sometimes do so in cases or ways that you wouldn’t necessarily expect or care about.

I’ll briefly give 3 more examples to drive home the point:

Negating Zero

I said above that -0x80000000 is the only integer that can overflow in negation in C. But in Microvium, this isn’t true. In particular, in Microvium, negative zero is not an integer, because it is distinct from positive zero. The behavior of Microvium in the case of negative zero is best illustrated by the corresponding test case:

  // Without overflow checks enabled, the negation of integer zero is integer zero
  assertEqual(8 / -(1-1), overflowChecks ? -Infinity : Infinity);

As you can see above, if overflow checks are enabled, then negating integer zero results in the floating point -0, which when used in the denominator above will give us the answer -Infinity.

But if overflow checks are disabled, then the result of negating zero is still zero, as most C programmers would probably expect.

The reason for the (1-1) gymnastics is that -0 is parsed as a literal with the value negative zero, whereas -(1-1) is the negation operator acting on integer zero.

Side note: You might think that division by any kind of zero is an overflow, but in Microvium, the naked division operator / is always a floating-point operator. There is a separate integer division opcode in Microvium which is syntactically represented as x / y | 0. I might discuss this more at a later stage.

Integer Addition

We all know that integer addition can overflow, but I bring this up as an example of how hard it can be to test for integer overflow. After a lot of research, the fastest way I could find to check for overflow on integer addition was one presented in a comment on this blog post. Basically, if c = a + b, then overflow can be tested as:

if (((a ^ c) & (b ^ c)) < 0) /* overflow */;

On a 16-bit machine, doing four 32-bit operations and a branch is not the cheapest, when all you wanted to do was add two numbers together.

To make matters worse, this particular solution determines overflow retroactively, after the signed addition has already wrapped, which is technically undefined behavior in C. But pragmatically, I expect this to work in the overwhelming majority of real-world C environments, so I’m using it anyway.
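
Putting it together, a checked 32-bit addition along these lines might look as follows. This is a sketch, not the exact Microvium code:

#include <stdint.h>
#include <stdbool.h>

// Returns false if a + b overflows the int32 range. Note that the
// addition itself may wrap, which is the undefined behavior referred
// to above; in practice it wraps as expected on most platforms.
static bool add_int32(int32_t a, int32_t b, int32_t* result) {
  int32_t c = a + b;            // may wrap on overflow
  if (((a ^ c) & (b ^ c)) < 0)  // sign of c differs from both a and b
    return false;               // overflow: promote to float instead
  *result = c;
  return true;
}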

Right Shift

In JavaScript, the left and right shift operators (<<, >> and >>>) always yield a result that fits into the signed 32-bit range, except for one case: the zero-fill right shift by zero (>>> 0) of a negative number. Take a look at these examples, and see if you can spot the odd one out (hint: it’s the last one!):

// For positive numbers
1 | 0   // = 1          Bitwise OR
1 & 1   // = 1          Bitwise AND
1 << 0  // = 1          Bitwise left shift
1 >> 0  // = 1          Bitwise sign-propagating right shift
1 >>> 0 // = 1          Bitwise zero-fill right shift

// For negative numbers
-1 | 0   // = -1          Bitwise OR
-1 & -1  // = -1          Bitwise AND
-1 << 0  // = -1          Bitwise left shift
-1 >> 0  // = -1          Bitwise sign-propagating right shift
-1 >>> 0 // = 0xFFFFFFFF  Bitwise zero-fill right shift

Since all other cases yield correct results when interpreting the result as if cast to a signed 32-bit integer, Microvium treats the last case as an overflow. If overflow checks are disabled, the result of the last case will be -1 instead of 0xFFFFFFFF.
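
In C, the check for this one case is cheap, because any nonzero shift amount clears the top bit. As a sketch (not the exact Microvium code):

#include <stdint.h>
#include <stdbool.h>

// Zero-fill right shift. Returns false if the result doesn't fit in
// the signed 32-bit range, which can only happen when shifting a
// negative number by zero.
static bool shr_zero_fill(int32_t x, uint8_t amount, int32_t* result) {
  uint32_t r = (uint32_t)x >> (amount & 0x1F); // JS masks shift counts to 5 bits
  if (r > (uint32_t)INT32_MAX) return false;   // overflow: promote to float
  *result = (int32_t)r;
  return true;
}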

I can understand the reasoning behind the defined behavior in JavaScript: a zero-fill right shift then always consistently produces an unsigned result, no matter how much you’re shifting by. But it breaks my intuition about how bitwise operators work in JavaScript. Why does 0xFFFFFFFF | 0 not also produce 0xFFFFFFFF (it produces -1)? Why is the result of most bitwise operators interpreted as if the 32nd bit were a sign bit, even when the operator has no special understanding of signedness? And then why break that rule with the zero-fill right shift, which is just as much a bit-manipulation operation as bitwise AND and OR?

In Microvium, overflow behavior is configurable

My solution in Microvium, if you haven’t already figured it out from some of these examples, is to make overflow checks optional. It’s a compilation flag presented as a #define in the port file, and the flag is also visible to the unit tests so that they can determine the expected result of various operations.
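
Concretely, the idea is something along these lines in the port header. The macro name here is illustrative only, not necessarily the real define:

// 1 = JS-conformant number semantics (overflow checks and int-to-float
// promotion); 0 = smaller, faster C-style wrapping behavior
#define MVM_PORT_INT32_OVERFLOW_CHECKS 1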

Snapshotting vs Bundling

TL;DR
Bundling and snapshotting are two different ways of packaging a program for deployment. This post is a somewhat-biased overview of why snapshotting is clearly superior in many respects.


I recently wrote up an explanation of Microvium snapshotting, along with this hopefully-helpful animated diagram:

In short: the diagram depicts a running Microvium (JavaScript) application that is suspended and transferred to a new environment where it continues where it left off. For the typical use of Microvium, the original environment might be a development machine or back-end server, and the new environment might be a low-resource microcontroller.

I want to expand on snapshotting a bit more in this post and, in particular, I want to contrast it to an alternative: bundling.

Background

JavaScript doesn’t have a preprocessor as C does. I’ve argued before that this is a great thing: having all behavior represented the same way, in the same language, rather than needing a separate preprocessor language, template meta language, linker script language and makefile language, etc. It also means there is no distinction between compile-time and runtime: everything is runtime. It eliminates needless concepts. I like this ideal and I want to stick to it.

But if you don’t have #include, how do you access behavior declared in other source code files? (other modules)

In JavaScript, unfortunately, there are lots of different ways of importing other modules, and they don’t have consistent semantics. Today, let’s talk about probably the most common one, at least in node.js applications, which is the use of a function called require(). Importing a module like this means that you are using the CommonJS system.

Bundling

A bundler is a way of pre-combining all the JavaScript files in your project (and other files, to some extent) into one (or a few) files for distribution. This is particularly needed in web applications, which could consist of hundreds or thousands of files, but you don’t want the user’s web browser to have to download each of these files separately.

Bundling is not a passive operation like creating a zip file. A typical bundler will actually analyze your code to figure out which files in your file system it imports, and repeats this transitively to pull in all the required dependencies.

Bundling sounds an awful lot like preprocessing or compilation — it’s a step that happens before runtime, that needs to statically analyze your code in order to produce a distributable output. Something smells funny here.

JavaScript is not a language designed for such static analysis, and I argue that it never should be.

Ok, let’s look at a concrete example of a bundler: Webpack.

In the following code, module a, which we’ll assume is the entry point, imports module b, and prints a value from it.

// a.js
const b = require('./b');
console.log(b.bValue);
// b.js
exports.bValue = 5;

This does what you expect it to do. Now let’s pack it with webpack1.

webpack --entry ./a

On my machine, this outputs a single js file, which when run, correctly prints the value 5 to the console.

Ok, but, how did it know to bundle module b into the result, so that it actually runs correctly?

The answer appears to be: it makes an educated guess by looking at the source code and seeing that it calls require with an argument of ./b.

This sounds so dodgy. But I guess that people had JavaScript that needed bundling, and bundlers seemed like the only way to fix it.

It’s easy to mess this process up. Let’s change a to the following code which does the same thing as before if evaluated directly in node.js, but which causes completely different Webpack behavior:

// a.js
const b = require('./' + 'b'[0]);
console.log(b.bValue);

What does webpack do with this?

Err, well, the resulting bundle still correctly outputs 5 to the console when executed, but now the bundle is 105 kB instead of 5 kB, at least on my machine. Why? Webpack has clearly given up trying to figure out statically what I’m importing and has instead included everything in the folder as part of the bundle, which in my case happens to include both the output files and webpack itself (so Webpack is packing itself into the bundle!).

But, it gets worse. Let’s change module a to the following code, which again does the same thing when run in node.js, but causes completely different webpack behavior:

// a.js
const b = arguments[1]('./b');
console.log(b.bValue);

This source runs in node because require is actually a parameter passed to the CommonJS module code, and it so happens to be the second parameter.

Webpack doesn’t see require at all in this source, so it assumes it doesn’t need to include anything else in the bundle. Consequently, the bundle throws an exception when run in node.js.

Sure, this example here is abusive towards Webpack — I’m picking pathological cases to expose Webpack’s Achilles’ heel.

A better solution: snapshotting

It’s my blog; I’m allowed to make bold, opinionated claims about what things are “better” than others. Snapshotting is better than bundling.

A Microvium snapshot fulfills the same purpose as a Webpack bundle, in the sense that the snapshot is a highly-efficient representation of all the resources that the JavaScript application needs in order to run in a future environment. But it’s better than a bundle because:

  1. It does not require a third-party tool to make it
  2. It does not depend on shady guesswork about what’s being imported. Rather, the application code actually runs and performs its imports using the full semantics of the language.
  3. It also does not require any declarative representation of what should or should not be included in the bundle, à la tsconfig.json or webpack.config.json. I really dislike these external declarative files, because they add needless complexity to the application.

Let’s get concrete, translating the earlier example to leverage Microvium snapshotting:

// a.js
import * as b from './b';
import ffi from 'ffi';

ffi.export('restore', whenRestored);
function whenRestored() {
  console.log(b.bValue);
}
// b.js
export const bValue = 5;

When run on a suitable host, the above will print 5 to the console, the same as the Webpack equivalent does. It does this by performing the imports at runtime before the snapshot is taken; the snapshot can then be deployed to an environment where importing might not even be supported at all.

Let me see if I can draw another diagram to illustrate this:

Application imports dependencies (e.g. other modules) and then moves to a new environment

Please note that the ES6 module import syntax is not the reason why this works. It will work equally well with dynamic import() when that’s been implemented. In fact, there are working test cases that interact with resources directly using node’s fs module.

The pattern isn’t new

Web developers are already familiar with the idea of two distinct phases of runtime execution: there is before the page has loaded, and there is after the page has loaded. It is the same application executing across both phases, but the application has access to different resources after the page has loaded (for example, all the DOM elements and images).

To run part of your application after the page has loaded, you simply subscribe a callback to the load event:

window.addEventListener('load', whenLoaded);
function whenLoaded() {
  console.log('The page is fully loaded');
}

The similarity between this and the Microvium example should be clear. Snapshotting is just the mechanism that is used to transfer the running application from the development/build environment to where it will run on the user’s device.


  1. Assuming you’ve installed webpack globally 

Microvium Modules
Design thoughts

Today, I completed the first working version of the module system for Microvium, and I want to share some of the design considerations and the conclusions I came to. The design as it stands is the result of quite a few iterations, and it’s likely to change again in the future as we iterate on Microvium before its first proper release. Any feedback and suggestions are welcome.

This will be a long and technical post. Most of my readers will probably want to skip this one. There’s also a more succinct overview of the module API available here:

https://github.com/coder-mike/microvium/blob/master/doc/js-host/module-api.md

There is some overlap between what Microvium achieves and what SES Compartments achieve, in the sense that both create isolated environments that allow JavaScript code1 running inside the respective environments to gain access to other modules through configurable mechanisms (e.g. hooks) specified from outside the environment (e.g. in the host). So naturally, some of the inspiration for the Microvium API has come from the Compartment API. But I’ve also diverged somewhat from some of the decisions made for the Compartment API to date and made different trade-offs, and I wanted to informally document my thought process.

Note that modules are only a consideration when running Microvium on a desktop host, for example in node.js, so the API I’m referring to here is the JavaScript host Microvium API. On the microcontroller host, there is neither the ability to import modules nor parse source text, and interaction with the firmware host is not done through modules for efficiency reasons.

Design Objectives

I want the design of the Microvium API to be clean and simple to understand and use. Bear in mind that, at least initially, the target users for Microvium are probably firmware developers who are looking at potentially adopting Microvium as a new and foreign third-party tool. This is different from the target users for Compartments, who are likely to already be entrenched in the JavaScript ecosystem, and who, when confronted with SES and Compartments, will be learning about a standard feature of JavaScript rather than a foreign, third-party tool.

Simplicity, simplicity, simplicity!

Modules are a complicated topic when you start digging into the details of circular dependencies, caching, relative specifiers, resolvers, etc. So it’s been quite a challenge to create an API that abstracts all of that while still remaining true to the specification!

As part of keeping things simple, I want the API to require the user to understand as few new concepts as possible when explaining Microvium (e.g. in the documentation). As I go through the design in the rest of this post, I’ll highlight some of the areas where I’ve removed the need for the user to understand various concepts by removing those concepts from the API design completely. My favorite kind of design work is all about removing and simplifying.

Some other objectives:

  • To remain independent of the file system or any particular module specifier scheme
  • To provide complete and perfect isolation and sandboxing of the app running inside the Microvium virtual machine.
  • To have interoperability between node modules and Microvium modules, where desired
  • To remain independent of the particular caching (memoization) policy used for modules, while remaining correct according to ECMAScript specification requirements for module caching.

The best way to lead you through the design is by example. I’ll start with the simplest examples and then build them up in complexity, explaining the thinking and design choices as I go.

Examples

The following are working2 examples, but bear in mind that the Microvium implementation is, in general, probably still less than half complete, so don’t expect to generalize too much.

Example: No Modules (One Module)

I’ll start with an example without any imports or exports. For a user that doesn’t need support for multiple modules, the module system should not add any cognitive load.

A simple hello-world in Microvium is as follows:

import Microvium from 'microvium';

// Let's say we have this source text
const sourceText = `print('Hello, World!')`;

// Create a new virtual machine
const vm = Microvium.create();

// The virtual machine is going to need a "print" function
vm.globalThis.print = console.log;

// Import the source text into the virtual machine
vm.evaluateModule({ sourceText });

(Be aware that the above code is evaluated prior to running on the target device)

There are already a couple of interesting things to note here:

  • Globals are provided by mutating globalThis. Internally this is implemented as a proxy whose properties represent the global variables of the virtual machine. For users who don’t care about dividing code into modules, providing things through globals is quite powerful. I’ve kept the name consistent with the Compartment API.
  • In contrast with the SES Compartment API, the Microvium create function does not accept a set of endowments. This is for the practical reason that many globals in Microvium are likely to refer to objects created inside the VM itself (otherwise they wouldn’t appear in the snapshot), and so can’t be created until the VM exists to create them in.
  • The SES Compartment API has an evaluate function which could be used to evaluate JavaScript “script” text (as opposed to module text). In Microvium, for simplicity, I decided that all source text should be interpreted using module semantics, so there is no evaluate function. This eliminates one point of confusion for Microvium users since there is no need to understand “script” vs “module” (limiting the number of new concepts for non-JS users to absorb). This is only possible because Microvium is a completely new tool, and the language it supports is already a subset of full ECMAScript.
  • Observant readers will notice that Microvium.evaluateModule() actually accepts an object and not a string of source code. The reason for this will become more clear later, but it doesn’t add a significant amount of cognitive overhead in simple cases like this.

I’m not sure what the evaluateModule method should be named — I may change this in future, and I’ve changed it about 10 times since I started doing the design. It accepts source text and returns a module object (i.e. module namespace object) representing the exports of the module source text. This is in contrast to any of the Compartment methods which only accept module specifiers3.

There are multiple reasons I chose not to expose a method that receives a module specifier. I’ll elaborate as we go through more examples, but here are two reasons:

Firstly, it would mean that the user would need to provide multiple implementation points, just to get a single module running in the VM. The Compartment API requires that the user provide an importHook and a resolveHook to the compartment constructor prior to being able to process imports, so that the internal Compartment machinery knows what to do with the specifier. This may be more acceptable in the context of Compartments, since the evaluate function at least provides a starting point for new users to get familiar with Compartments without needing to understand all the concepts like “what is a hook?”, “what is resolving?”, “why is resolving different to importing?” etc. As I say, in Microvium, I wanted to introduce as few new concepts as possible, while still retaining the same generality if possible.

The signature for the evaluateModule method is as follows:

interface Microvium {
  /**
   * Imports the given source text as a module.
   *
   * Returns the module namespace object for the imported 
   * module: an object whose properties are the exports of 
   * the module.
   */
  evaluateModule(moduleSource: ModuleSource): ModuleObject;

  // ...
}

(I’ve used the name ModuleObject because it is an object representing the module. The Node.js documentation just calls the analogous thing “the module” — the thing returned by require — but I wanted to disambiguate the instantiated module from the source text of the module, both of which are in some respects “modules”).

I find this aspect of the design reasonably elegant — it accepts source code as input, and gives you back a module as output. All the intermediate steps are abstracted away, which I think is beautiful. The only concepts that the user needs to understand here are the existence of the source code and that of the resulting module, which are both well-understood concepts. In particular, I’ll note that this example (so far) has no dependence on the following concepts:

  • Hooks
  • Specifiers
  • Resolvers, loaders, etc.
  • Files and file systems

It’s important to me that this is the case, since the demonstration of Microvium with a “Hello, World” example should not overwhelm the user with new concepts.

The name evaluateModule is similar in principle to the Compartment API’s evaluate method, but works on module source text instead of script source text.

Example: Single Export

It follows naturally, and probably without need for explanation, that a module can have exports which are accessible to the host:

const sourceText = `export const x = 5`;
const { x } = vm.evaluateModule({ sourceText });
console.log(x); // 5

Again, this is all very natural, and we haven’t yet needed to add any new concepts into the user’s mental model.

One point I’ll touch on briefly is the fact that these kinds of direct object references between the host and the virtual machine are severed in a snapshot — a native C host has no way of accessing the module object of a module in a VM restored from a snapshot. Exports to a C host, or any host that resumes a VM from a snapshot (e.g. in node.js using Microvium.restore()), are done through a separate mechanism (see the now-slightly-outdated getting-started guide for examples of native exports).
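
For a sense of what that separate mechanism looks like on the C side, here is a rough sketch. The function names follow the getting-started guide, but the signatures and the export ID below are approximations for illustration, not a reference:

#include "microvium.h"

// Sketch: restore a VM from snapshot bytes and call one of its
// natively-exported functions by numeric export ID
static void runSnapshot(const uint8_t* snapshot, uint16_t snapshotSize) {
  mvm_VM* vm;
  mvm_restore(&vm, snapshot, snapshotSize, NULL /* context */, NULL /* import resolver */);

  const mvm_VMExportID exportID = 1; // hypothetical ID chosen at snapshot time
  mvm_Value entryPoint;
  mvm_resolveExports(vm, &exportID, &entryPoint, 1);

  mvm_call(vm, entryPoint, NULL /* result */, NULL /* args */, 0);
}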

I did attempt at one stage to consolidate the ES module system with the Microvium native FFI system, but came to the conclusion that it wasn’t reasonable to do, since it would require a lot of complexity and overhead with C hosts.

Example: Single Import

Now let’s say that we have a module that imports another module:

const sourceText = `import x from 'y'`;
// The following will throw "Module not found: 'y'"
vm.evaluateModule({ sourceText });

The above value for sourceText is a string of Microvium code which attempts to import the default export from the module identified by the module specifier 'y' and binds it to the name x. It doesn’t do anything with x, but that’s fine for illustration for the moment.

So far in this example, we haven’t yet provided any mechanism for the VM to actually perform the import, so it will throw to say that it can’t find the module4. Remember that Microvium is implemented independently of the file system, or of any particular method of understanding module specifiers — they could refer to files, web URLs, or database IDs.

So, how do we provide the module 'y' to the VM so that the script5 can import it?

A challenge with modules is that the specifiers in general can be relative to the importing module. Consider in node.js that a module could import '../m', which is importing the file named m in the directory that is the parent of the directory containing the module performing the import. So, we can’t just give the VM a global resolution that says that the specifier 'y' maps to some particular module, since it may map to different modules depending on the context and the specifier scheme being used. We need to do it in a way that allows the specifier 'y' to change meaning depending on which module in the VM is doing the import.

This is another point where SES compartments and Microvium diverge. With compartments, the Compartment constructor accepts a resolveHook as one of its options. The resolve hook interprets the meaning of a specifier. For example, it may search the file system to find the appropriate module source text (but not load it, yet).

The resolve hook for a compartment is a function which accepts a module specifier and a referrer string and returns a string that absolutely identifies the module6. The referrer string identifies the module doing the importing so that the resolver can produce different results depending on which module is performing the import.

For Microvium, I think it’s unnecessary complexity to have the idea of an intermediate “thing” for users of the API to understand, namely the resolved/full module identifier (and also the “referrer”). It’s not even easy to explain what this is. In the case of node.js, it’s probably a normalized, absolute file path to the module source text file. In the case of the web, it might be a URL that identifies where the script should be downloaded from. I’d prefer not to have to explain this in the Microvium docs, so I’ve deleted the concept from the API design.

Example: importDependency

The solution I came up with in Microvium is to pass a callback into the evaluateModule method itself. Let’s look at another example, here with code that doesn’t throw:

const sourceText = `
  import { x } from 'y';
  print(x); // 10
`;
const moduleY = { x : 10 };

vm.evaluateModule({ 
  sourceText,
  importDependency: specifier => moduleY 
});

I’ll start with the disclaimer that the techniques used in these examples would likely not be used in production software. We are still in the realm of “demonstrating the public API of the virtual machine so that it can be understood”, and not “demonstrating the typical/recommended way to use the API”.

In our example, of course, we’re ignoring the specifier and always returning the module moduleY. This would only be correct behavior if you intended all module specifiers to alias the module y, but hopefully it’s obvious how the example could be extended to use the specifier to discriminate between multiple possible module objects, possibly by looking in a table of available modules, or by iterating the file system (more on that in a moment).

To understand this example, we need to introduce two new concepts to the user7:

  1. Module specifier: This is not a “new” concept for JavaScript programmers, but might be for C programmers, since it is a concept distinct from “file path”. However, it’s easy enough to explain that the module specifier is “the string text that appears after from in an import statement”.
  2. importDependency

I’m quite conflicted about the name “importDependency” here — any suggestions are welcome as to alternative terms to use. The importDependency callback8 is a function that takes a module specifier and returns the corresponding module object, in the context of the importing module.

This feels reasonably elegant and intuitive to me. Whenever we import a module, we are also telling the VM how to import nested dependencies relative to that module. It’s completely abstracted from the location of those nested dependencies, the mapping of module specifiers to the corresponding modules, and the mechanism by which they are retrieved and cached (more on caching later).

I’ll note here that the module object returned in the example by the importDependency callback is actually an object residing in the host, not the VM. The VM will interact with the host object through something like a proxy. This kind of module is analogous to the node.js core modules, which the node documentation says are “compiled into the binary” — i.e. they are baked into the host rather than being implemented as part of the application, but appear to the application as local objects. (Tying the analogy to our example, the host is the node.js code and the application is the Microvium script). It would be fine for the importDependency callback to return a VM object as well (of which a VM module is a special case), and the membrane would handle it the natural way (more on this later).

It might be worth highlighting at this point that the module that importDependency returns is just a normal/ordinary object. In the ES spec, module namespace objects are exotic objects with special internal properties that are inaccessible to user code. I don’t know if I’m violating the spec by allowing the reified module namespace object to be any object, but it certainly makes the API a lot easier to understand and use.

I want to emphasize that, so far, we’ve been able to demonstrate the module system of Microvium without requiring multiple files. Each example has just been a few lines of JavaScript code. I personally think that this is a huge plus when it comes to attracting newcomers to a new tool — being able to demonstrate something as complex as multi-module code without actually requiring multiple files in the examples. This is particularly useful for explaining this API with snippets like is done in this post. Of course, it naturally raises the question in the user’s mind of what to do with actual Microvium source code files on the user’s hard drive, but I like the fact that these concepts can be introduced independently of each other. A user familiar with node.js could quite easily see how to read a source file from disk instead of using a literal string if they so desired (although there are better ways, which I’ll introduce in a moment).

The benefits go beyond communication. Being able to treat arbitrary objects as modules makes it easy to programmatically synthesize modules on demand. An example of this would be an attenuated version of an existing module. Depending on whether the attenuation should exist inside the VM or outside, the attenuator could either produce a new module object or code-generated Microvium source code (which is in turn used to get a new module object).

I really like the fact that the relative nature of specifiers is captured by the callback being per-module. This eliminates the need for the concept of some kind of “full specifier”, e.g. an absolute file path or URL, or something like that. Of course, in most production projects, there may indeed be full specifiers existing under the hood somewhere, but I like the fact that the surface area of the VM API is independent of what this specifier looks like, if it exists at all. Full specifiers need not exist, or, if they exist, they could exist in any form, such as using a composite object as an absolute specifier (e.g. a tuple composing a database name with a record ID, if the module code is stored in a database). This is not to say that any particular method is advised, but rather to say that the VM is abstracted and decoupled from the choice.

Example: Importing Source Text

Importing source text instead of a host module introduces nothing new. It just involves putting together the machinery we’ve already covered so far in the post:

const moduleACode = `
  import { b } from 'moduleB';
  print(b); // 10
`;
const moduleBCode = `export const b = 10`;
// Import moduleACode, which will import module B
vm.evaluateModule({ 
  sourceText: moduleACode,
  importDependency: specifier => {
    if (specifier === 'moduleB') {
      return vm.evaluateModule({
        sourceText: moduleBCode
      });
    }
  }
});

Nothing new here, but the nesting is starting to look ugly. Let’s refactor to clean up the nesting, and we can up the game a bit while we’re at it by adding a circular dependency…

Circular Dependencies and Module Caching

In the following code, I’ve created a modules “table” with two modules — moduleA and moduleB, and an importDependency function which is shared by both the modules and the bootstrap code. So, importDependency here is treated as a universal way of importing modules, with no support for relative paths (after all, we can make up whatever resolution rules we want).

const importDependency = specifier => 
  modules[specifier] && vm.evaluateModule(modules[specifier]);

const modules = {
  'moduleA': {
    sourceText: `
      import { b, printA } from 'moduleB';
      export const a = 5;
      print(b); // 10
      printA(); // 5
    `,
    importDependency
  },
  'moduleB': {
    sourceText: `
      import { a } from 'moduleA';
      export const b = 10;
      export const printA = () => print(a); 
    `,
    importDependency
  },
};

// Import moduleA which will import moduleB
importDependency('moduleA');

You may be surprised to see that this doesn’t result in an infinitely recursive cycle of loading modules. This leads me to the point of this section: module caching.

Node.js talks about this concept of caching modules. The term “caching” may be better described as “memoization”, since it is not an optional cache for performance reasons but rather something that is required in order to implement the described semantics: multiple calls to require will not cause the module code to be executed multiple times, and the same module object will be returned each time9.

Node.js caches the modules based on the full file path, but the ECMAScript standard doesn’t require caching to the same level. The spec says (emphasis added):

Each time [HostResolveImportedModule] is called with a specific referencingScriptOrModule, specifier pair as arguments it must return the same Module Record instance if it completes normally.

Multiple different referencingScriptOrModule, specifier pairs may map to the same Module Record instance. The actual mapping semantic is implementation-defined but typically a normalization process is applied to specifier as part of the mapping process. A typical normalization process would include actions such as alphabetic case folding and expansion of relative and abbreviated path specifiers.

https://tc39.es/ecma262/#sec-hostresolveimportedmodule

To translate this, my understanding is that multiple import statements within a single importing module must resolve to the same module, and multiple imports across the same or different modules are permitted to resolve to the same module, using some host-defined normalization process such as resolving absolute file paths.

Microvium implements the ECMAScript requirement by essentially consolidating multiple imports of the same specifier within a module into a single physical import entry. This does not require a cache at all.

Microvium facilitates node-style module caching (e.g. caching based on file paths) by providing the guarantee that if the same exact ModuleSource object (by reference identity) is passed to evaluateModule, then the same module object will be returned.

This doesn’t feel as elegant to me as the rest of the API, but it does make some level of sense:

  • The evaluateModule method takes a ModuleSource object (the one with the sourceText property) and returns a ModuleObject. Iff the caller passes a distinct ModuleSource, they will receive a distinct ModuleObject. Put another way, iff they instantiate a new ModuleSource, they will get a corresponding new instantiation of the ModuleObject (by having the module source code executed again).
  • This solution is still completely decoupled from the representation that the user may choose for full specifiers, if they choose to use full specifiers at all.
  • The user code is free to control the caching however they choose, by simply caching the corresponding ModuleSource objects according to their caching scheme. Later, I’ll show an example with a node-style importer and its corresponding caching system.
  • A special case of the above point is that the user is able to have multiple module instantiations with the same “full specifier” (e.g. multiple instantiations of a module with the same file path), should they choose to do so. This was not a goal of the design, but is a side effect of the fact that the API is decoupled from the notion of “full specifier”.

nodeStyleImporter

Writing the dependency importer by hand is fine for advanced users and library writers, and is great for the above examples that explain the Microvium API, but most Microvium users probably just want to know how to use Microvium scripts the same way they would use node.js scripts, and for that, there’s the nodeStyleImporter, which is a function that creates an initial import hook given some options and a Microvium VM to import into:

function nodeStyleImporter(vm: Microvium, options: ModuleOptions): ImportHook;

An “import hook” in this sense is just a mapping from specifier strings to module objects, something like the require function in node.js:

type ImportHook = (specifier: string) => ModuleObject | undefined;

Here’s an example usage:

const moduleOptions: ModuleOptions = {
  // Allow the importer to access the file system, but only for 
  // subdirectories of the specified project directory.
  accessFromFileSystem: 'subdir-only',

  // Specify the root directory of the project, from which
  // initial imports will be resolved
  basedir: 'my/project/directory',

  // A set of "core" modules: those which can be imported from
  // any Microvium module with the exact same absolute specifier
  coreModules: {
    // Example of core module implemented as VM source text
    'a-core-module': './a-core-module.mvms', 
    // Example of core module implemented by the host
    'another-core-module': require('a-module-in-the-host') 
  },

  // Allow Microvium modules to import `fs`, `http`, etc.
  allowNodeCoreModules: true,
};

const vm = Microvium.create();
const importer = nodeStyleImporter(vm, moduleOptions);
importer('./the-entry-module');

This example delegates the complex work of resolving, importing, and memoization policy to the nodeStyleImporter, which is an importer that uses the resolve npm package alongside node’s fs module to find and read the source files for execution.

The full example can be seen as a test case in the github repo. I won’t explain it here because at this point we’re no longer talking about the Microvium API and we’re talking about an optional utility that helps in a specific scenario. I show it here as a demonstration of what can be done using the described Microvium API.

The nodeStyleImporter implements the importing of modules from the file system, as well as allowing Microvium scripts to transparently import node modules by indirectly invoking require, giving the Microvium app access to npm modules, among other things.

Conclusions

I’m pretty pleased with this design. It’s still early in Microvium development, so I’m sure the API will change again before the release, but I’m pleased by where it is at the moment.

I think it achieves all the goals: being simple and easy to use, minimizing the number of unnecessary concepts, and being compliant with the ES spec. It facilitates the handling of circular dependencies and it isn’t coupled to the file system. It doesn’t dictate a particular caching/memoization scheme. It can deal transparently with modules loaded inside the VM as Microvium source text alongside those provided and implemented in the host.


  1. In the case of Microvium, only a subset of JavaScript is supported 

  2. The examples hopefully work, but I haven’t actually tested them verbatim. 

  3. A module specifier is the string that comes after from in an import ... from ... statement 

  4. For the purposes of isolation, the error message is the same whether the module is unavailable or whether no import hook is provided at all, since the script should not be able to distinguish these scenarios 

  5. I use the word “script” generically in this post to refer to source text written in the Microvium language 

  6. I don’t think the SES spec yet specifies whether these identifiers are strings — I’ll be interested to see if this is specified 

  7. As you know, I’m very aware of how many concepts need to be introduced and how quickly we need to introduce them 

  8. I’m calling this a callback and not a hook, because it’s not hooking into existing behavior but rather implementing the required behavior 

  9. This is generally true, but there are some caveats that I won’t discuss here 

Microvium

Recently, I started working on a bytecode compiler and virtual machine which I’ve currently called Microvium.

(For those who have been following me, you’ll know I’ve also been working on a full JavaScript compiler called MetalScript for a while. MetalScript is still in progress and probably will be for some time — it’s not a small undertaking. Microvium is not MetalScript.)

Quite simply, the objective of Microvium is to provide a way to run small scripts on microcontrollers, particularly for very low-resource devices (for example, those with less than 2 kB of available RAM).

I chose the name “Microvium” because it sounds similar to “Micro VM” (the original name) but is less generic. In particular, there is already an npm module named microvm.

There are already solutions that do this, so why am I creating a new one?

Microvium will have its own unique approach and tradeoffs which will give it advantages and disadvantages compared with other existing solutions, making it suitable in different scenarios. I’ll be discussing these in more detail in upcoming posts, but briefly, the two main focuses for Microvium are:

  1. The ability to run part of my favorite language (JavaScript) on a tiny device. I currently have it running on a 16-bit MCU where the whole firmware and scripts are using 8 kB of ROM and about 500 B of RAM.
  2. Leveraging the MetalScript idea of suspending a VM on your desktop computer to have it resume later on the embedded device. The heavy lifting of parsing and importing can be done on the desktop, while the device can just continue with the “easy” stuff.

Early Prototype Released

Last week we released the first working prototype to npm. Check it out on Github:

https://github.com/coder-mike/microvium

This hardly counts as a “release”, since really it doesn’t run anything except the “Hello, World!” example, so don’t go download it just yet. Really, it was just a test run of the release process, and to get a sense of what it would look like to use it. The exercise was worthwhile since it resulted in a few changes to Microvium to make it easier to use, and to simplify the concepts.

Subscribe to my blog to get more updates and stay tuned for an actual release, hopefully in the not-too-distant future.