Category: JavaScript

Single-threading is more memory-efficient

TL;DR: Single-threading with super-loops or job queues may make more efficient use of a microcontroller’s memory over time, and Microvium’s closures make single-threading easier with callback-style async code.

Multi-threading

In my last post, I proposed the idea that we should think of the memory on a microcontroller not just as a space but as a space-time, with each memory allocation occupying a certain space for some duration of time:

I suggested that we should then measure the cost of an allocation in byte-seconds (the area of the above rectangles as bytes × seconds), so long as we assumed that allocations were each small and occurred randomly over time. Randomness like this is a natural byproduct of a multi-threaded environment, where at any moment you may coincidentally have multiple tasks doing work simultaneously and each taking up memory. In this kind of situation, tasks must be careful to use as little memory as possible because at any moment some other tasks may start up and want to share the memory space for their own work.

This leaves a random chance that at any moment you could run out of memory if too many things happen at once. You can guard against this by just leaving large margins — lots of free memory — but this is not a very efficient use of the space. There is a better way: single threading.

Single threading

Firmware is sometimes structured in a so-called super-loop design, where the main function has a single while(1) loop that services all the tasks in turn (e.g. calling a function corresponding to each task in the firmware). This structure can have a significant advantage for memory efficiency. In this way of doing things, each task essentially has access to all the free memory while it has its turn, as long as it cleans up before the next task, as depicted in the following diagram. (And there may still be some statically-allocated memory and “long-lived” memory that is dynamic but used beyond the “turn” of a task).

Overall, this is a much more organized use of memory and potentially more space-efficient.
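
For concreteness, a super-loop main function might be sketched roughly like this (the task names are just made up for illustration):

while (1) {
  serviceCommsTask();    // each task gets a turn...
  serviceDisplayTask();  // ...and may use all the free memory during its turn...
  serviceSensorTask();   // ...as long as it releases it before returning
}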

In the multi-threaded model, if two memory-heavy tasks require memory around the same time, neither has to wait for the other to be finished — or to put it another way, malloc looks for available space but not available time for that space. On the other hand, in a super-loop model, those same tasks will each get a turn at different times. Each will have much more memory available to them during their turn while having much less impact on other tasks the rest of the time.

An animated diagram you may have seen before on my blog demonstrates the general philosophy here. A task remains idle until its turn, at which point it takes center stage and can use all the resources it likes, as long as it packs up and cleans up before the next task.

So, what counts as expensive in this new memory model?

It’s quite clear from the earlier diagram:

  • Memory that is only used for one turn is clearly very cheap. Tasks won’t be interrupted during their turn, so they have full access to all the free memory without impacting the rest of the system.
  • Statically-allocated memory is clearly the most expensive: it takes away from the available memory for all other tasks across all turns.
  • Long-lived dynamic allocations — or just allocations that live beyond a single turn — are back to the stochastic model we had with multi-threading. Their cost is the amount of space × the number of turns they occupy the space for. Because these are a bit unpredictable, they also have a cost in that they add to the overall risk of running out of memory, so these kinds of allocations should be kept as small and short as possible.

Microvium is designed this way

Microvium is built from the ground up on this philosophy — keeping the idle memory usage as small as possible so that other operations get a turn to use that memory afterward, but not worrying as much about short spikes in memory that last only a single turn.

  • The idle memory of a Microvium virtual machine is as low as 34 bytes1.
  • Microvium uses a compacting garbage collector — one that consolidates and defragments all the living allocations into a single contiguous block — and releases any unused space back to the host firmware. The GC itself uses quite a bit of memory2 but it does so only for a very short time and only synchronously.
  • The virtual call-stack and registers are deallocated when control returns from the VM back to the host firmware.
  • Arrays grow their capacity geometrically (they double in size each time) but a GC cycle truncates unused space in arrays when it compacts.

See here for some more details.

Better than super-loop: a Job Queue

The trouble with a super-loop architecture is that it services every single task in each cycle. It’s inefficient and doesn’t scale well as the number of tasks grows3. There’s a better approach — one that JavaScript programmers will be well familiar with: the job queue.

A job queue architecture in firmware is still pretty simple. Your main loop is just something like this:

while (1) {
  if (thereIsAJobInTheQueue) 
    doNextJob();
  else
    goToSleep();
}

When I write bare-metal firmware, often the first thing I do is to bring in a simple job queue like this. If you’re using an RTOS, you might implement it using RTOS queues, but I’ve personally found that the job-queue style of architecture often obviates the need for an RTOS at all.
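
As a rough sketch of what I mean (the names here are my own, not from any particular library; in C this would typically be a fixed-size ring buffer with an interrupt-safe enqueue), a minimal job queue might look something like this:

// A job is just a function to be run later, on the main loop's turn
const jobs = [];

function enqueueJob(job) {
  jobs.push(job);           // called by interrupts/events to request work
}

function thereIsAJobInTheQueue() {
  return jobs.length > 0;
}

function doNextJob() {
  const job = jobs.shift(); // take the oldest job
  job();                    // run it to completion; jobs should be short
}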

As JavaScript programmers also know, working in a cooperative single-threaded environment has other benefits. You don’t need to think about locking, mutexes, race conditions, or deadlocks. There is less unpredictable behavior and there are fewer heisenbugs. In a microcontroller environment especially, a single-threaded design also means you save the cost of having multiple dedicated call stacks permanently allocated for different RTOS threads.

Advice for using job queues

JavaScript programmers have been working with a single-threaded, job-queue-based environment for decades and are well familiar with the need to keep jobs short. When running JS in a browser, long jobs mean that the page becomes unresponsive, and the same is true in firmware: long jobs make the firmware unresponsive — unable to respond to I/O or service accumulated buffers, etc. In a firmware scenario, you may want to keep all jobs below 1 ms or 10 ms, depending on what kind of responsiveness you need4.

As a rule of thumb, to keep jobs short, they should almost never block or wait for I/O. For example, if a task needs to power on an external modem chip, it should not block while the modem boots up. It should probably schedule another job to handle the powered-on event later, allowing other jobs to run in the meantime.

But in a single-threaded environment, how do we implement long-running tasks without blocking the main thread? Do you need to create complicated state machines? JavaScript programmers will again recognize a solution…

Callback-based async

JavaScript programmers will be quite familiar with the pattern of using continuation-passing style (CPS) to implement long-running operations in a non-blocking way. The essence of CPS is that a long-running operation should accept a callback argument to be called when the operation completes.

The recent addition of closures (nested functions) as a feature in Microvium makes this so much easier. Here is a toy example one might use for sending data to a server in a multi-step process that continues across 3 separate turns of the job queue:

function sendToServer(url, data) {
  modem.powerOn(powerOnCallback);

  function powerOnCallback() {
    modem.connectTo(url, connectedCallback);
  }

  function connectedCallback() {
    modem.send(data);
  } 
}

Here, the data parameter is in scope for the inner connectedCallback function (closure) to access, and the garbage collector will automatically free both the closure and the data when they aren’t needed anymore. A closure like this is much more memory-efficient than having to allocate a whole RTOS thread, and much less complicated than manually fiddling with state machines and memory ownership yourself.

Microvium also supports arrow functions, so you could write this same example more succinctly like this:

function sendToServer(url, data) {
  modem.powerOn( () => 
    modem.connectTo(url, () => 
      modem.send(data)));
}

Each of these 3 stages — powerOn, connectTo and send — happen in a separate job in the queue. Between each job, the VM is idle — it does not consume any stack space5 and the heap is in a compacted state6.

If you’re interested in more detail about the mechanics of how modem.powerOn etc. might be implemented in a non-blocking way, take a look at this gist where I go through this example in more detail.

Conclusion

So, we’ve seen that multi-threading can be a little hazardous when it comes to dynamic memory management because memory usage is unpredictable, and this also leads to inefficiencies because you need to leave a wider margin of error to avoid randomly running out of memory.

We’ve also seen how single-threading can help to alleviate this problem by allowing each operation to consume resources while it has control, as long as it cleans up before the next operation. The super-loop architecture is a simple way to achieve this, but an event-driven job-queue architecture is more modular and efficient.

And lastly, we saw that the Microvium JavaScript engine for embedded devices is well suited to this kind of design, because its idle memory usage is particularly small and because it facilitates callback-style asynchronous programming. Writing code this way avoids the hassle and complexity of writing state machines in C, of manually keeping track of memory ownership across those states, and the pitfalls and overheads of multithreading.


  1. Or 22 bytes on a 16-bit platform 

  2. In the worst case, it doubles the size of heap while it’s collecting 

  3. A super-loop also makes it more challenging to know when to put the device to sleep, since without some extra work the main loop doesn’t know whether there are any tasks that need servicing right now.

  4. There will still be some tasks that need to be real-time and can’t afford to wait even a few ms in a job queue to be serviced. I’ve personally found that interrupts are sufficient for handling this kind of real-time behavior, but your needs may vary. Mixing a job queue with some real-time RTOS threads may be a way to get the best of both worlds — if you need it. 

  5. Closures are stored on the virtual heap. 

  6. It’s in a compacted state if you run a GC collection cycle after each event, which you would do if you cared a lot about idle memory usage. 

Short-lived memory is cheaper

TL;DR: RAM on a microcontroller should not just be thought of as space but as space-time: a task that occupies the same memory but for a longer time is more expensive.

MCU memory is expensive

If you’re reading this, I probably don’t need to tell you: RAM on a microcontroller is typically very constrained. A 3 GHz desktop computer might have 16 GB of RAM, while a 3 MHz MCU might have 16 kB of RAM — a thousand times less processing power but a million times smaller RAM. So in some sense, RAM on an MCU may be a thousand times more valuable than on a desktop machine. Regardless of the exact number, I’m sure we can agree that RAM is a very constrained resource on an MCU1. This makes it important to think about the cost of various features, especially in terms of their RAM usage.

Statically-allocated memory

It’s common especially in smaller C firmware to just pre-allocate different pieces of memory to different components of the firmware (rather than using malloc and free). For example, at the global level, we may declare a 256-byte buffer for receiving data on the serial port:

uint8_t rxBuffer[256];

If we have 1kB of RAM on a device for example, maybe there are 4 components that each own 256 B. Or more likely, some features are more memory-hogging than others, but you get the idea: in this model, each component in the code owns a piece of the RAM for all time.

Dividing up RAM over time

It’s of course a waste to have a 256-byte buffer allocated forever if it’s only used occasionally. The use of dynamic memory allocation (malloc and free) can help to resolve this by allowing components to share the same physical memory at different times, requesting it when needed, and releasing it when not needed.

This allows us to reconceptualize memory as a mosaic over time, with different pieces of the program occupying different blocks of space-time (occupying memory space for some time).

When visualizing memory like this, it’s easy to start to feel like the cost of a memory allocation is not just the amount of memory it locks, but also the time that it locks it for (e.g. the area of each rectangle in the above diagram). In this sense, if I allocate 256 bytes for 2 seconds then it costs 512 byte-seconds, which is an equivalent cost to allocating 512 bytes for 1 second or 128 bytes for 4 seconds.
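
Just to spell out the arithmetic of this cost model (a trivial sketch):

// Cost of an allocation in byte-seconds: space multiplied by the time it's held
const byteSeconds = (bytes, seconds) => bytes * seconds;

console.log(byteSeconds(256, 2)); // 512
console.log(byteSeconds(512, 1)); // 512 (the same cost)
console.log(byteSeconds(128, 4)); // 512 (also the same cost)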

Being a little bit more rigorous

Skip this section if you don’t care about the edge cases. I’m just trying to be more complete here.

This measure of memory cost is of course just one way of looking at it, and it breaks down in edge cases. For example, on a 64kB device, a task2 that consumes 1B for 64k seconds seems relatively noninvasive, while a task that consumes 64kB for 1s is much more costly. So the analogy breaks down in cases where the size of the allocations is significant compared to the total memory size.

Another way the model breaks down is if many of the tasks need memory around the same time — e.g. if there is some burst of activity that requires collaboration between many different tasks. The typical implementation of malloc will just fail if there is no memory available right now, rather than, say, blocking the thread until the requested memory becomes available, as if the memory were a mutex to be acquired.
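
To make that contrast concrete, here’s a purely hypothetical sketch (in JavaScript, with names I’ve made up) of an allocator that treats memory like a mutex: callers wait for space to become available rather than failing immediately. Real malloc does not behave like this.

let freeBytes = 1024;     // imaginary pool of available memory
const waiters = [];       // tasks waiting for memory to become available

function allocAsync(size) {
  return new Promise(resolve => {
    const tryAlloc = () => {
      if (size <= freeBytes) {
        freeBytes -= size;      // claim the space
        resolve();
      } else {
        waiters.push(tryAlloc); // wait until something is released
      }
    };
    tryAlloc();
  });
}

function release(size) {
  freeBytes += size;
  const retry = waiters.shift();
  if (retry) retry();           // give a waiting task another chance
}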

But the model is accurate if we make these assumptions:

  • The size of the individual allocations is small relative to the total memory size
  • Allocations happen randomly over time and space

Under these assumptions, the total memory usage becomes a random variable whose expected value is exactly:

The expected allocation size × the expected allocation duration × the expected allocation frequency

We could also calculate the probability that the device runs out of memory at any given point (I won’t do the calculations here).
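
If you want to convince yourself of that expected value, here’s a small Monte Carlo sketch (the numbers are arbitrary, and it assumes sizes and durations are independent):

// Simulate small allocations arriving randomly and compare the time-averaged
// memory usage against (mean size) × (mean duration) × (arrival rate).
const meanSize = 16;       // bytes
const meanDuration = 2;    // seconds
const ratePerSecond = 5;   // allocations per second
const dt = 0.01;           // simulation time step in seconds
const totalTime = 5000;    // total simulated seconds

let live = [];             // currently-live allocations: { size, expiresAt }
let usageSum = 0;
let steps = 0;

for (let t = 0; t < totalTime; t += dt) {
  if (Math.random() < ratePerSecond * dt) {           // roughly Poisson arrivals
    live.push({
      size: meanSize * 2 * Math.random(),             // uniform, mean = meanSize
      expiresAt: t + meanDuration * 2 * Math.random() // uniform, mean = meanDuration
    });
  }
  live = live.filter(a => a.expiresAt > t);           // free expired allocations
  usageSum += live.reduce((sum, a) => sum + a.size, 0);
  steps++;
}

console.log('Simulated average usage:', usageSum / steps);
console.log('Predicted:', meanSize * meanDuration * ratePerSecond); // 160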

Conclusion

In situations where memory allocations can be approximated as being small and random, the duration of a memory allocation is just as important as its size. Much more care must be taken to optimize memory usage for permanent or long-lived operations.

I’m not very happy with this stochastic viewpoint and all these edge cases. It means that at any point, we could randomly exceed the amount of available memory and the program will just die. Is there a better way to organize memory space-time so we don’t need to worry as much? I believe there is… and I’ll cover that in the next post.


  1. Another thing that makes it a constrained resource is the lack of virtual memory and of the ability to page memory in and out of physical RAM, so the RAM size is a hard limit

  2. When talking about this space-time model, I think it’s easier to talk about a “task” than a “component”, where a task here is some activity that needs to be done by the program over a finite stretch of time, and will consume resources over that time. 

Can you parse this?
JavaScript Corners

What does the following JavaScript mean:

const x = await / +y; const z = await / +y;

Hint: it’s a trick question.

The answer depends on the context, as is demonstrated by the following snippet:

function foo() {
  const y = 10;
  const await = 5;
  const x = await / +y; const z = await / +y;
  console.log(x);
}
async function bar() {
  const y = 10;
  const x = await / +y; const z = await / +y;
  console.log(x);
}
foo(); // Prints 0.5
bar(); // Prints / +y; const z = await /10

Within the context of an async function, await acts as a keyword, and the thing after await is expected to be an expression. In JavaScript, an expression that starts with a forward slash is a regular expression literal, and that literal ends at the next unescaped forward slash. The +y at the end is then string concatenation, so both the regular expression and y are converted to strings, and the concatenated result is "/ +y; const z = await /10".
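
To make the parse explicit, the statement inside bar is equivalent to the following regrouping (my own rewriting; note that await binds more tightly than +):

const x = (await / +y; const z = await /) + y; // a regex literal, awaited, then concatenated with y
console.log(x); // prints "/ +y; const z = await /10"

Note that in this reading there is no declaration of z at all; the whole thing is a single declaration of x.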

This interpretation is easier to visualize if the syntax highlighting identifies and colorizes the respective parse tokens as follows:

Outside of the context of an async function, await is just a normal identifier and has no special meaning (this is important so that the introduction of the await syntax to the JavaScript language didn’t modify the meaning of existing JavaScript code which might have used await as a variable or parameter name).

If syntax highlighting were correct, as in the above images, the difference would be pretty obvious. Unfortunately, I had to photoshop the above images, since VS Code highlights both examples the same way, and both incorrectly:

Global To-String
JavaScript Corners – Part 9

(This is Part 9 in my series on JavaScript corner cases).

Here’s another one.

In JavaScript, global variables are properties of the global object. By default, the global object is like any other, and inherits from the Object.prototype  object. Object.prototype comes with a number of its own properties, such as the toString method. So, that means that toString is also a global variable1.

console.log('toString' in global); // prints true
console.log(toString === global.toString); // prints true
console.log(global.toString()); // prints [object global]
console.log(toString()); // prints [object Undefined]. Why is this?

Everything seems expected, except the last line, which might seem a little confusing. The toString() call is clearly invoking a function using a reference to that function, where the base of the reference is the global object, right? (Take a look at my posts on references). So surely toString() and global.toString() mean the same thing?

Wrong.

There’s a subtlety here. The unqualified toString reference actually has a base value2 that is the global environment, which “knows about” the global object but is not exactly the global object. The base object for the global environment is actually always the value undefined. See here in the spec. This is why it prints “[object Undefined]”.
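
For comparison, you get the same output if you explicitly call toString with undefined as the this value:

console.log(Object.prototype.toString.call(undefined)); // prints [object Undefined]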

 


  1. To qualify as a global variable, there is actually an additional criterion. The property of the global object must not be listed in the set of unscopables on the global object. In this case, toString is not listed as an unscopable, since it was introduced into JavaScript before the existence of the unscopables feature, and for backwards compatibility it remains that way. 

  2. Recall that a reference has two components: the base (the thing that holds the value being referred to), and the name of the thing being referred to. For example, obj.x refers to the property named x on the object obj.

JavaScript Corners – Part 9
Node.js With-Statement Bug

What does the following evil code print?

var x = 'before';
var obj = { x };
with (obj) {
  x = (delete x, 'after');
}
console.log(x);

If you’re not sure, don’t worry — neither are current JavaScript engines. Firefox prints “after”, while Edge, IE, and Node.js print “before” (node v7.9.0). I believe that Firefox is correct in this case.

The tricky statement is obviously the following one, which sets a property on an object in the same statement that deletes the property:

x = (delete x, 'after');

(Side note: if you’re not very familiar with JavaScript, the relevant language features that are being used here are the delete operator, comma operator, and the good ol’ evil with statement).

What we expect to happen

The statement var x introduces a new variable at the script scope1.

The { x } expression creates a new object with a single property2 x, where the value of x  is copied from the variable x in the outer scope, so it has the initial value of ‘before’.

The with  statement brings the properties of the object obj into scope in a new lexical environment.

The statement x = (delete x, ‘after’) should perform the following steps:

  1. Evaluate the left hand side
  2. Evaluate the right hand side
  3. Assign the value from the right hand result, to the reference created when evaluating the left hand side

When the left hand side is evaluated, the property x will be found in object obj. The base value of the reference is the object, not the script variable scope.

The right hand side evaluates to ‘after’, but in the process it deletes the property x  from obj. However, the reference on the left hand side should still refer to “the property named ‘x’ on the object obj“, even though the property with that name is now deleted.

When the assignment happens, it should create a new property named ‘x’ on object obj, with value ‘after’. The variable x in the outer scope should be left unaffected.

In this case, I think Node.js gets the wrong answer.
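
A simpler variant without the with statement, where engines do agree, shows the same principle:

const obj = { x: 'before' };
obj.x = (delete obj.x, 'after'); // the reference to obj.x survives the delete
console.log(obj.x);              // prints "after"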


  1. Theoretically, the script scope is the global scope. But in Node.js, scripts are wrapped in a module wrapper that changes the behavior of global vars. This doesn’t affect the outcome of this experiment though 

  2. Bonus fact. Object literals inherit from the global intrinsic object Object.prototype, which has other properties on it, such as toString. So when I say that it has a single property, it would be more accurate to instead say that it has a single own property 

JavaScript Corners – Part 8
References (Continued)

Given an object o  with a member function f  that prints out what the this value is:

const o = {
  f() {
    console.log(
      this === global ? 'global' :
      this === undefined ? 'undefined':
      this === o ? 'o':
      '-');
  }
}

We know what the following prints:

o.f();  // prints "o"

And we know what the following prints1:

const f = o.f;
f(); // prints "global"

I always thought that the difference came down to the fact that o.f()  is actually invoking a different operator — something like a “member call operator”.

However, what do you think the following prints?

(o.f)();

My guess, up until today, would have been that this prints “global”, since with the parentheses, this is no longer invoking the member call operator, but is instead invoking the call operator.

But I was wrong. There is no such thing as a “member call operator”. Rather, the “call” operator just behaves differently depending on whether the target of the call is a value or a reference2.

So this actually prints “o”.

(o.f)(); // prints "o"

But hang on. Why didn’t the parentheses coerce o.f to a value?

One might have expected the parentheses to automatically dereference o.f, something like the following examples that use the logical OR and comma operators to coerce the target to a value instead of a reference:

(o.f || 0)(); // prints "global"
(0, o.f)(); // prints "global"

Indeed, this could have been the case for bare parentheses as well, but the language designers chose not to do it that way, so that the delete and typeof operators still work when extraneous parentheses are provided:

delete o.f; // The "correct" way to delete a property
delete (o.f); // This also works

 


  1. assuming the use strict directive isn’t provided in this case 

  2. To be more accurate, the call also behaves differently depending on whether the target reference refers to a property of an object or to a variable in an environment record

JavaScript Corners – Part 7
Calls and With Statements

Here’s a quick one. What does the following print? (Assuming not in strict mode)

function foo() {
  console.log(this.name);
}

const bar = { foo, name: 'Bar' };
global.name = 'Global';

foo();         // Case 1
bar.foo();     // Case 2
with (bar) {
  foo();       // Case 3
}

In non-strict mode, the naked function call foo() gets a this value that is the global object. So the first case prints “Global”.

In the second case, we’re invoking foo as a member of bar, and so the this value is bar (it prints “Bar”).

The last case is the most interesting, and the most useless (since with statements are strongly discouraged, and cannot be used in strict mode at all). The this value in this case is actually bar. JavaScript recognizes that the function foo here is being invoked within the context of a with statement, and implicitly uses the bar object as the this value. This prints “Bar”.
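
In other words, case 3 behaves roughly as if you had supplied bar as the this value explicitly (an approximation of mine, not literally what the engine does internally):

foo.call(bar); // prints "Bar", just like case 3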

JavaScript Corners – Part 6

In what order does the following evaluate?

a()[b()] = c()[d()] = e()[f()];

TL;DR Answer

get a
call a
get b
call b
get c
call c
get d
call d
get e
call e
get f
call f
get e.f
set c.d
set a.b

Step 1: Variable Access

First off, what does this code even mean? If you’re not intimate with JavaScript, this might seem like a very confusing line of code. In fact, even if you’re familiar with JavaScript, this can be confusing.

So let’s break it down, starting with:

a

The expression a loads the value a from the surrounding scope1. This is done by searching up the scope chain until a is found.

There are a number of different types of scopes in JavaScript, including those that correspond to blocks (like the inside of a for-loop), functions (the contents of a function), and objects (scopes that are created using a with statement, or the global scope).

For our purposes, let’s define a at the global scope. You’ll see why in a moment. Assuming we’re working in Node.js, the global object is called global, and properties of the global object are part of the global scope2.

global.a = 42;
console.log(a); // prints 42

But, since we’re interested in the order of evaluation, it would be useful to know when the value a is accessed. Luckily, in JavaScript, you can define properties that have a getter and/or setter, which we can use to log when the global variable is accessed:

Object.defineProperty(global, 'a', {
  get: function() {
    console.log('get a');
    return 42;
  }
});
console.log(a); // prints "get a" followed by "42"

Great! We can now see when the global variable “a” is accessed. There aren’t many languages where you can do that. Hooray for JavaScript.

We may want to define more globals this way, so let’s refactor this to use a helper:

function defineGlobal(name, value) {
  Object.defineProperty(global, name, {
    get: function() {
      console.log(`get ${name}`);
      return value;
    },
    configurable: true
  });
}
defineGlobal('a', 42);
console.log(a); // prints "get a" followed by "42"

Step 2: Calling the function

Now let’s look at the following statement:

a()

This is, unsurprisingly, a function call. It first evaluates a, as indicated above, by fetching a from the current scope. Then it calls a as a function. Nothing special going on here.

But to make this work with our a, we’re going to need to make sure that a is defined as a function, and not the value 42. So let’s change our getter to return a function:

defineGlobal('a', function() {
  console.log('call a');
  return 42;
});
console.log(a());
// get a
// call a
// 42

To answer our original question, we’re going to need to create a whole bunch of functions. So let’s again refactor this into a helper:

function defineFunction(name, body) {
  defineGlobal(name, function() {
    console.log(`call ${name}`);
    return body();
  });
}
defineFunction('a', () => 42);
console.log(a());

Step 3: Member access

The expression x[y], in JavaScript, is a property lookup. It evaluates the expressions x and y, and then finds the property on the object x that has the name resulting from the expression y. Here’s a snippet that illustrates this:

defineGlobal('x', { myProp: 42 });
defineGlobal('y', 'myProp');
console.log(x[y]);
// get x
// get y
// 42

If you’re not very familiar with JavaScript, it’s important to note that the property name used here is "myProp", and not "y". The property name is the result of evaluating y.

Again, it will be useful to know exactly when the property is accessed, so let’s use a getter instead:

defineGlobal('x', {
  get myProp() {
    console.log('get x.myProp');
    return 42;
  }
});
defineGlobal('y', 'myProp');
console.log(x[y]);
// get x
// get y
// get x.myProp
// 42

Here I’ve just used the ES6 getter syntax, rather than using defineProperty.

As before, we’re going to need to do this a few times, so let’s create a helper function:

function createObject(objectName, propertyName, propertyValue) {
  return {
    get [propertyName]() {
      console.log(`get ${objectName}.${propertyName}`);
      return propertyValue;
    },
    set [propertyName](v) {
      console.log(`set ${objectName}.${propertyName}`);
      propertyValue = v;
    }
  };
}
defineGlobal('x', createObject('x', 'myProp', 42));
defineGlobal('y', 'myProp');
console.log(x[y]);

Step 4: Assignment

The last piece of the puzzle is the assignment operator. Consider the following code:

x = y

The assignment operator, like the other operators so far, will evaluate each operand, and then perform some operation on the results. In the above case, x is evaluated, then y is evaluated, and then the result of y is assigned to the result of x.

But wait. What do you mean “the result” of x?

The model that JavaScript uses internally is that x actually evaluates to a reference. This is a type in JavaScript which you’ve probably never heard of. A reference value consists of two components:

  • A base value, that tells you what container the value is stored in
  • A name, that tells you which value in the container is being referred to

In this case, the expression x evaluates to a reference that has the following attributes:

  • A base value that is the global object
  • A name that is the string "x"

In other words, the reference value is something like the English description “the property x on the global object”. When you assign to x, you are assigning to “the property x on the global object”. When you delete x, you are deleting “the property x on the global object”.

The expression y also evaluates to a reference, but the assignment operator coerces that reference to the actual referenced value. The same thing is done in expressions such as x + y or x(y).

Here’s another example of an assignment:

x.y = z

In this case, the base value of the reference is the object x, and the name is y.  The assignment sets the value referred to as “the property ‘y’ of the object x”. Similarly, you can do delete x.y to delete “the property ‘y’ of the object x”.

In a more detailed consideration of the above example, x and z evaluate to references. Both x and z are coerced to values (dereferenced, by fetching the property or variable), and a third reference is created that refers to the property y on the object x.

But, what order does this occur in? To find out, let’s use our trusty helper functions:

defineGlobal('x', createObject('x', 'y'));
defineGlobal('z', 42);
x.y = z
// get x
// get z
// set x.y

This might come as a little bit of a surprise. The left hand side (including fetching x) is evaluated before the right hand side (z), and only then does the assignment take place. In some ways, one expects the opposite — one expects that the left hand side of an assignment is not considered until the right hand side has been evaluated.

This seems to be a general rule in JavaScript. Operands are evaluated from left to right, and then the operator is executed. Perhaps an exception to this rule of thumb is that the short-circuiting operators such as && must necessarily execute part of the operation without all the operands fully evaluated.
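
For example, using the same helpers as before (p and q are just illustrative globals of my own):

defineGlobal('p', false);
defineGlobal('q', 42);
p && q;
// get p
// (q is never accessed, because && short-circuits on a falsy left operand)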

Side note: in languages such as C++, the order of evaluation of the left and right hand sides of most operators is not defined. The compiler can choose to evaluate them in whatever order it thinks is best, or even evaluate them simultaneously (e.g. if the CPU has multiple cores). JavaScript is different, in that the specification lays out a specific, unambiguous ordering.

We can follow this to its logical conclusion, and determine the order of execution of the whole of the original program in question:

defineFunction('a', () => createObject('a', 'b'));
defineFunction('b', () => 'b');
defineFunction('c', () => createObject('c', 'd'));
defineFunction('d', () => 'd');
defineFunction('e', () => createObject('e', 'f'));
defineFunction('f', () => 'f');

a()[b()] = c()[d()] = e()[f()];
// get a
// call a
// get b
// call b
// get c
// call c
// get d
// call d
// get e
// call e
// get f
// call f
// get e.f
// set c.d
// set a.b

Can we abuse it? (Advanced)

The reason I started looking into this at all is that I was trying to discover a way to “see” references. They are objects that exist in the execution model, but are never shown explicitly to the user of the language, so do they really need to exist at all?

This is important to me because I’m writing a JavaScript compiler, and I need to know whether references are best left as just a description mechanism in the ECMAScript specification, or whether they should be considered to be real entities with real allocated memory in the runtime.

So, can we design an example, that unequivocally proves that there must be a reference allocated in memory at some point?

Here’s my attempt:

let resolveZ;

defineGlobal('z', new Promise(resolve => resolveZ = resolve));

async function asyncAssignment() {
  x[y] = await z;
}

defineGlobal('x', createObject('x1', 'y1'));
defineGlobal('y', 'y1');
asyncAssignment();
console.log('...it should be waiting for the result of z at this point...');
const o = createObject('x2', 'y2');
defineGlobal('x', o);
defineGlobal('y', 'y2');
asyncAssignment();
// Let's switch out property 'y2' for a new one, to make sure it's not holding a
// pointer to the property itself, but is instead recalling it by name
delete o.y2;
Object.defineProperty(o, 'y2', {
  set: value => {
    console.log(`set x.y2 (redefined) to ${value}`);
  }
});
asyncAssignment();
// And lastly, let's delete x from the global scope
delete x;
console.log('...now we are going to resolve the promise for z...');
resolveZ(42);
// get x
// get y
// get z
// ...it should be waiting for the result of z at this point...
// get x
// get y
// get z
// get x
// get y
// get z
// ...now we are going to resolve the promise for z...
// set x1.y1
// set x.y2 (redefined) to 42
// set x.y2 (redefined) to 42

What I’ve done here is break up the x[y] = z assignment using the await operator. The await operator will suspend the statement (and the rest of the async function), allowing us to swap out various things in the environment to see if we can mess with the operation while it is suspended. What we’re trying to prove here is that the reference itself must be preserved in memory from the time that the operation is suspended to the time that it is resumed (when z is resolved).

To make it even more apparent, I’ve executed the async function multiple times, trying different ways to “mess” with the pending operations.

Conclusions

This experiment has proven to me that references are “almost” tangible objects. We can see that they must exist in memory under some circumstances, and that they are not simple “pointer” values — they must refer to both the object and the property name.

This leads to some interesting results when it comes to the order of evaluation of various expressions. While this knowledge isn’t needed for everyday programming scenarios, it helps to have a deeper understanding of what’s going on so that we know where the limit lies.

  1. Known in ECMAScript as a Lexical Environment 

  2. There is an interesting recursion here, since the value global here is also a globally scoped binding, which means the global property on the global object points to itself. You can see this if you have a statement like console.log(global.global.a) 

JavaScript Corners – Part 5

Here’s a quick one:

let f = function() {};
console.log(f.name); // prints f

This was quite unexpected to me. It’s the only case I’ve ever seen where the left-hand side of an assignment can affect the right-hand side.

This only happens once. Once the anonymous function has a name, it can’t be re-named:

let f = function() {};
let g = f;
console.log(g.name); // prints f

Interestingly, this doesn’t seem to work with destructuring:

let [f] = [function(){}];
console.log(f.name); // prints ''

This implies that a function expression gets its name (or lack of one) at the point where it’s instantiated, based on its syntactic position: by the time the anonymous function has been added to the array literal, it is already permanently nameless, and the later destructuring assignment to f can’t rename it. It’s not the destructuring itself that suppresses the name, as can be seen in the following example:

let [f = function(){}] = [];
console.log(f.name); // prints f

It also doesn’t seem to work with anonymous functions passed as parameters:

(function(f) {
  console.log(f.name); // prints ''
})(function() {});

It also doesn’t seem to work with anonymous functions evaluated from more complicated expressions:

let foo = (42, function() {});
console.log(foo.name); // prints ''

 

JavaScript Corners – Part 4

Here’s another quick one. It relates to the scoping rules with parameter expressions.

const x = 'a';
const y = 'b';
const z = 'c';
function foo(x, f = () => [x, y, z], y = 2) {
  var x = 10;
  var y = 11;
  const z = 3;
  return f;
}
const f = foo(1);
console.log(f());

Function foo has three parameters: x, f, and y. The key thing here is that the default value for f is a function that closes over the lexical scope surrounding it, and we’ve designed the function in such a way that when we call it, it “tells” us the values of x, y, and z as seen from that surrounding scope.

So the question is, which values of x, y, and z does f close over? There are global variables x, y, and z, there are parameters x and y, and there are local variables x, y and z. The parameter x is intentionally ordered before the function f is initialized, while parameter y is positioned to be after the function f is initialized.

I’ll spare you the time of running the code yourself. On my machine, in node.js 7.0.0, the output is [ 1, 2, 'c' ]. Clearly no local variables have been used, and f is capable of seeing parameter y even though it’s declared after f. The output is the same whether or not the code is executed in strict mode.

One of the most interesting things here, to me, is that the local variable x is not an alias for the parameter x.  This is strange when you consider the following code:

function foo(x) {
  var x;
  return x;
}
console.log(foo(42));

Given that local variables are apparently not aliases of the parameters with the same name, you would expect the output of the above to be undefined, since the variable x is not initialized, even though the parameter x is. However, the output is actually 42.

Where did we go wrong in our logic? Let’s see if we can get an example that explores what’s going on:

function foo(x, y, f = () => [x, y]) {
 var x;
 var y = undefined;
 return { x, y, f };
}
const { x, y, f } = foo(42, 43);
console.log([x, y]);
console.log(f());

The output of this is:

[ 42, undefined ]
[ 42, 43 ]
This clearly tells me that variable x is not an alias of parameter x, but rather a separate variable that is initialized to the same value as parameter x.

Also, as a side note, it tells me that there is a subtle semantic distinction between initializing a variable to undefined and not initializing a variable at all, which is very interesting.

I should note that we’ve explored this empirically, rather than deducing the behavior from the spec. When I run the same experiment in Firefox I get a different result — both instances of y are undefined.

Which is correct?

I believe the V8 (node.js) implementation is more accurate in this case. There’s a note in the ECMAScript specification1 that says:

NOTE A separate Environment Record is needed to ensure that closures created by expressions in the formal parameter list do not have visibility of declarations in the function body.

I think this note is self-explanatory, and it tells us that the closure function in the parameter list is not meant to be able to see the local variables of its parent function.

P.S.

I’m going add one last investigation to this post. The above quote applies only if there are parameter expressions (if some parameters have default values). If there are no parameter expressions, then apparently this second “environment” is not created, and so the variable x should be an alias for the parameter x.

At first I thought that there was no way that we could ever test this distinction. Can you think of a way to test this difference?

Spoiler alert, I did think of a way. If the code is not executing in strict mode, then the arguments variable holds aliases for the parameters, which makes it possible to mutate parameters without referring to them by name. This is important, because once you have a local variable with the same name, there is no way to know if the local variable is a copy of the parameter, or an alias of the parameter, based purely on code that identifies it by name.

Take a look at the following experiment:

function foo(x, y) {
 var x;
 console.log(x);        // outputs "1"
 arguments[0] = 2;
 console.log(x);        // outputs "2"
}
foo(1, 42);

First note that y is not used in this code sample, but you’ll see why I added it in a moment.

The first log output of “1” shows that the variable x is either a copy of parameter x or an alias of parameter x. Then we continue by modifying the parameter x, without touching the variable x. Then we log the value of the variable x and see that it’s also changed — because in fact the variable and parameter x are both symbols for the same binding.

But now look at this slight modification to the code:

function foo(x, y = 42) {
  var x;
  console.log(x);      // outputs "1"
  arguments[0] = 2;
  console.log(x);      // outputs "1" again!
}
foo(1, 42);

Note that again y is not used. In fact, to keep this a controlled experiment, y’s default is even set to the same value that is passed as an argument. The contents of the initializer are not relevant though, because the initializer is never evaluated. What is important in this case is that there is an initializer expression at all, regardless of whether it executes or not.

The surprising thing, as I noted in the comment on the snippet, is that the second output is now 1 instead of 2, resulting from the fact that the mutation of the argument does not change the variable.

This is not a bug in node.js, but just an interesting quirk of the spec, in another corner of JavaScript.

 


  1. https://tc39.github.io/ecma262/#sec-functiondeclarationinstantiation