Concept: Distributed IoT Programs using Microvium
In a recent post, I wrote about an idea I have for making the development of distributed applications significantly easier by building an infrastructure that allows a single process (running instance of a program) to fork itself onto different target machines. In this post, I’m going to make the idea more concrete by exploring how this might look in an IoT application using Microvium.
TL;DR: the snapshotting paradigm of Microvium would allow a platform to host a unified cross-environment API (a single API spanning the firmware, cloud, frontend, and IaC) and would facilitate factoring application code in a way that is more modular and maintainable, by keeping related code together even when its functionality spans multiple physical machines. The paradigm makes it easy to deploy shared information between the client and the server, such as shared code tables for compression or shared secrets for security, without the shared information needing to be manually managed or exposed to contention in a public namespace such as a database or global configuration repository. Unifying the cloud and firmware platforms also opens up a possible PaaS revenue model for Microvium, which may help to support future development.
Background: The Grow Station
A lot has happened in my life recently. My partner and I are expecting a baby, and we’ve bought our first home!
Among other things, the new home has given me something I’ve dreamed of for a long time: a dedicated “maker room” (or at least a semi-dedicated collapsible table in part of a room!)
The first thing I’m building at my maker corner is a controller for my “grow station” — an indoor enclosure I have for my herbs and seedlings, with some DIY “upgrades” off Amazon: a grow light, heater, and fan.
I bought the Arduino Nano 33 IoT, which is perfect for this task, with enough I/O and built-in Bluetooth and WiFi, so I can host a proper web UI for it rather than messing about with buttons and displays. I’m putting together a basic control circuit for it (schematic here if you’re interested).
The Arduino Cloud platform was remarkably easy to get set up with. It didn’t take long before I had a test project with the onboard IMU reporting to a gauge widget on a cloud dashboard. And for this project, that’s probably all I need (a few gauges, for temperature, humidity, soil moisture, control state, etc, and maybe some controls for overriding the state if necessary).
But I can’t help but wonder: wouldn’t this be a great opportunity to use Microvium? And taking it one step further, perhaps it’s an opportunity to test-drive the “distributed apps” paradigm that I was recently brainstorming, but using Microvium rather than node.js.
Whether or not I actually do it, the process of thinking about it has given me some insights that I want to share here.
Extension Design Pattern
Taking a step back, I’d like to contrast two different ways of structuring an application. This is easiest to illustrate by way of an example, and the example I’ll use is VS Code Extensions.
VS Code has what it calls Extensions, which are downloadable modules that extend the functionality of VS Code, such as providing support for new programming languages, code formatters, etc. It also defines Capabilities, such as the menu, keyboard shortcuts, configuration system, storage, etc.
Each extension can implement support for each capability. For example, a code formatter may implement menu capability support so that the extension appears in the VS Code menu. So, we can think of a matrix of extensions and capabilities, with each intersection being some piece of behavior described in code.

We know that this is a successful way of structuring code. Each extension can be developed essentially independently, and they have access to shared resources such as the menu or storage through a common API.
Code in this pattern is grouped by extension, not by capability. You download a whole extension at once, complete with all its capabilities, since they come as a single bundle.
An alternative way of structuring VS Code might have been to group code by capability rather than by extension. Unfortunately, I think most applications are structured this way. For example, if you open up the code for a typical MVC web application, you might find a group of files for “models”, a separate group of files for “controllers”, and a separate group of files for “views”.
If a feature of the app has a view, you won’t find the view for the feature in a “feature” folder/file, you’ll find it in the “views” folder/file.
When a new feature is added to a project like this, it generally requires opening up each of these different “capability” folders (models, views, controllers, etc) and adding new code to each of them to support the new feature. There is typically no way to add a “feature” to one of these applications in the same way you would add an extension in VS Code, as a cohesive group of feature-related code that can register itself into the view system, the controller system, and the model system, as capabilities of this “extension”.
The result is a codebase that’s more difficult to add new features to, and more difficult to maintain, because each feature is implemented in diffuse code spread throughout different components of the project.
I’ve been singing this song for many years. Here’s a post from 7 years ago saying basically the same thing.
Structure of Distributed Apps
This latter way of structuring code is especially prominent in distributed systems. Each “feature” in a distributed system is often spread across multiple places. An “add user” feature, for example, may involve “add user” code on the frontend to provide the button/menu item, and an “add user” API method to handle the logic, maybe with an “add user” stored procedure in the database, or an “add user” event in the event store. The “add user” feature in this example is diffusely represented in the code, and thus more difficult to maintain.
With IoT, the difference is even more pronounced. I’ve worked at multiple IoT companies and found that many new features requested by customers involved the addition or modification of code across all layers in the stack, from the firmware, to the backend server, to the database schemas, to the frontend. There was generally no way to add an “extension-like” module that could implement just a single feature by plugging itself into the firmware, the backend, and the database, all through one “Extensions API” analogous to the one VS Code has.
With today’s technology, it’s very difficult to do it any other way. Without a lot of work, we’re somewhat tied into grouping our code by the physical server it’s running on. All the code for “such-and-such web API” goes together, all the database schemas go together, all the client-side code goes together, even if there are single logical features that span all of these “capabilities”/environments.
As I’ve spoken about previously, I think there’s a better way. The Grow Station may be an opportunity to test-drive a new paradigm in a simplified use case.
What might it look like?
For this simple proof-of-concept, I’d like to just display the temperature of the Grow Station on the web UI. And I’d like to structure it such that “showing the temperature” is part of one cohesive module of code, with capabilities that span all the systems required to achieve its purpose: reading from the analog-to-digital converter, sending messages from the client to the server, and updating the UI.

The code for this extension[1] could look something like this:
```js
/* display-temperature.js */

// Displays the current ambient temperature (at the device) in the Web UI
export function displayTemperature(extensionApi) {
  const mcu = extensionApi.mcu;

  // Define a connection between the device and the server to send temperature readings
  const comms = extensionApi.newCommsChannel();

  // Some firmware code to read the temperature and send it to the server every second
  const pin = mcu.getAnalogPin('A2');
  mcu.setInterval(() => {
    const temperature = pin.read();
    comms.pushMessageToServer(temperature);
  }, 1000 /* every 1 second */);

  // Define a front-end control to show the temperature
  extensionApi.registerUiControl(domElement => {
    domElement.textContent = 'Waiting for first reading';
    comms.addEventListener('messageFromDevice', value => {
      domElement.textContent = `The ADC value is ${value}`;
    });
  });
}
```
(I’ve elided the calibration required to convert the ADC reading to a temperature, since it’s unimportant to the concept.)
In the above snippet, we have code spanning all 3 capabilities under a unified “extensions API”:
- The `getAnalogPin` and `pin.read` API methods read from the analog-to-digital converter on the MCU.
- The `Node.textContent` DOM API method is used to write to the DOM.
- The `newCommsChannel` API method establishes a new communications channel between the device and the server, so that the device can `pushMessageToServer` and the server can listen for `messageFromDevice` events.
Note that the `newCommsChannel` API method has behavior that spans multiple devices (and could be implemented in terms of a lower-level cross-device API).
The `mcu.setInterval` API method sets a repeating timer on the microcontroller. Its callback will be invoked on the MCU itself.
Similarly, `registerUiControl` registers that a new container should be added to the Web UI DOM to display something for this temperature reader, and its callback will be called when the Web page and DOM element are loaded.
The code in this snippet spans at least 3 different physical machines, with different supported API methods[2]:

- `newCommsChannel` and `registerUiControl` are cross-infrastructural and must run on the build/deployment machine.
- `pin.read` runs on the MCU.
- `comms.addEventListener` and `Node.textContent` run on the server/frontend.
How does the code get to these different environments?
An extension API as described above could be implemented on top of Microvium’s snapshotting ability.
- On the build machine, the host program will set up the API and import the script. (See this example of how to run a script in Microvium with a custom API; a minimal sketch follows this list.)
- Running on the build machine, the host accepts the calls to `newCommsChannel`, `mcu.setInterval`, and `registerUiControl` to build up metadata about how the distributed application looks as a whole (in this case, it has one communications channel, one UI control, and one callback to execute when the MCU starts up). This metadata is somewhat similar to the IaC code you might expect in some modern cloud applications.
- When the script has run to completion, it’s snapshotted (as is the typical practice with a Microvium program).
- The snapshot must be distributed to the firmware and to the web application, with metadata about which respective callbacks of the snapshot to run in those environments.
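To make the build step concrete, here’s a minimal sketch of the build-time host using the Microvium Node.js library. The `newCommsChannel` and `registerUiControl` globals are the hypothetical extension API (flattened into plain globals for simplicity); `Microvium.create`, `evaluateModule`, and `createSnapshot` are the library’s documented entry points, as I understand them:

```js
/* build.js — a minimal sketch of the build-time host (the extension API is hypothetical) */
const Microvium = require('microvium');
const fs = require('fs');

const vm = Microvium.create();

// Each extension-API call made while the script runs at build time records metadata
const metadata = { commsChannels: 0, uiControls: 0 };
vm.globalThis.newCommsChannel = () => metadata.commsChannels++; // returns a channel id
vm.globalThis.registerUiControl = uiCallback => {
  // A real implementation would keep uiCallback reachable inside the VM so that it
  // survives into the snapshot and can be invoked later on the frontend
  metadata.uiControls++;
};

// Run the extension script to completion on the build machine
vm.evaluateModule({
  sourceText: `
    const channel = newCommsChannel();
    registerUiControl(domElement => { /* runs later, on the frontend */ });
  `,
});

// Freeze the VM state; the same snapshot is later resumed in each target environment
fs.writeFileSync('app.mvm-bc', vm.createSnapshot().data);
console.log(metadata); // { commsChannels: 1, uiControls: 1 }
```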
Is it bad form?
Some might argue that it would be bad form to put code for completely different environments together, such as those who argue that SQL shouldn’t be embedded in application source code.
Here’s a Stack Overflow thread discussing some of the disadvantages of structuring code this way. The discussion focuses on SQL in particular, but many of the same concerns may apply to embedding “firmware code” in the “server code” etc. Reading the comments, there are strong opinions on both sides of the debate.
I’m not going to recapitulate all the points on either side of the debate, but one point I’ll talk about is coupling.
Coupling
The earlier code example is highly coupled to the technologies on both ends of the stack. For example, it uses the hypothetical pin `A2` on the embedded device, and it uses the DOM API on the frontend.
What if you wanted to use a different MCU or circuit, where the temperature reading comes from a different ADC pin, or from a digital sensor on the I²C port? It seems you would need to open the code again and make modifications, which violates the open/closed principle. The same applies if you wanted to move away from a DOM-based frontend, such as by making a native mobile app instead of a web app.
But is the concern here really with the idea of embedding code from heterogeneous systems together, or is it just a limitation of the way we’ve structured the solution within that framework?
As with everything, if we anticipate certain kinds of changes down the line (changes to the database technology, the UI technology, or the MCU), we can put in the appropriate abstractions. Let’s say that we want our example to be decoupled from the exact source of the temperature reading. We could restructure the code like this if we wanted to:
```js
/* display-sensor-in-ui.js */
export function displaySensorInUi(extensionApi, sensor) {
  const comms = extensionApi.newCommsChannel();

  extensionApi.mcu.setInterval(() => {
    const temperature = sensor.getReading();
    comms.pushMessageToServer(temperature);
  }, 1000);

  extensionApi.registerUiControl(domElement => {
    domElement.textContent = 'Waiting for first reading';
    comms.addEventListener('messageFromDevice', value => {
      domElement.textContent = `The ADC value is ${value}`;
    });
  });
}
```
```js
/* my-sensor.js */
export function MySensor(extensionApi) {
  const pin = extensionApi.mcu.getAnalogPin('A2');
  const getReading = () => pin.read();
  const sensor = { getReading };
  return sensor;
}
```
```js
/* app.js */
import { MySensor } from './my-sensor';
import { displaySensorInUi } from './display-sensor-in-ui';

export function App(extensionApi) {
  const sensor = MySensor(extensionApi);
  displaySensorInUi(extensionApi, sensor);
}
```
I’m not completely sure that this is the right structure, but hopefully it shows the principle. In the above refactoring, we now have two subcomponents that make up the App, with a clean dependency hierarchy.
It would be easy within this design to swap out the sensor for a different kind of sensor.
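For example, a digital I²C temperature sensor could expose the same `getReading` shape. The `getI2c` and `readRegister` methods here (and the 0x48/0x00 register layout, typical of TMP102-style sensors) are assumptions for illustration:

```js
/* my-i2c-sensor.js — a hypothetical drop-in replacement for MySensor */
export function MyI2cSensor(extensionApi) {
  // Assumed API: open the I²C device at address 0x48
  const device = extensionApi.mcu.getI2c({ address: 0x48 });
  // Same shape as MySensor, so displaySensorInUi doesn't need to change
  const getReading = () => device.readRegister(0x00);
  return { getReading };
}
```

In `app.js`, only the construction line changes: `const sensor = MyI2cSensor(extensionApi);`.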
All three components can theoretically span all three environments. Although the `MySensor` component only uses MCU hardware at present, it has access to the full extension API should it need it.

Hopefully, this example demonstrates that there is another world of possibilities for abstraction and clean design that doesn’t necessitate breaking down the app along physical machine boundaries.
Why structure code this way?
I can see many reasons to structure code in this cross-environment way.
Shared Secrets and Configuration
The code examples in this post have demonstrated the construction of a shared resource (`newCommsChannel`) which is accessible by both the edge device and the UI. It gets constructed while running on the build machine and travels via the distributed snapshot to each of its final runtime resting places. The `comms` object that travels in the snapshot could encapsulate shared secrets, such as an API key or certificate.
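As a sketch of what that might look like, both endpoints could close over a key generated once at build time (`generateSecretKey` is a hypothetical API here):

```js
/* secure-channel.js — hypothetical sketch of a channel with a baked-in shared secret */
export function makeSecureChannel(extensionApi) {
  const comms = extensionApi.newCommsChannel();

  // Generated once, at build time; the same value travels in every deployed snapshot
  const apiKey = extensionApi.generateSecretKey();

  const send = message => comms.pushMessageToServer({ apiKey, message });
  const onMessage = handler =>
    comms.addEventListener('messageFromDevice', ({ apiKey: key, message }) => {
      // Both sides closed over the same apiKey, with no registry or config file involved
      if (key === apiKey) handler(message);
    });

  return { send, onMessage };
}
```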
Example: Coded Logger
For a different example that also demonstrates the encapsulation of shared information, let’s say that we want to transmit logs to the server, but that we’re very bandwidth constrained (IoT data can be expensive), so instead of transmitting log messages as strings, we want to transmit them as numeric codes. We could easily implement a library that encodes the string messages as numeric codes and transparently handles the decoding on the other side:
```js
/* logger.js */
export function makeLogger(extensionApi) {
  const comms = extensionApi.newCommsChannel();

  // Keep a registry of message strings (shared between client and server)
  const definedMessages = [];

  // The set of strings in this example must be pre-registered
  const defineMessage = text => definedMessages.push(text);
  const codeForMessage = text => definedMessages.indexOf(text);
  const messageForCode = code => definedMessages[code] ?? 'Unknown message';

  // When we want to send a message, we send the index instead of the text
  const sendMessage = text => comms.pushMessageToServer(codeForMessage(text));

  // When the message is received, we look up which message was being referred to
  const onReceiveMessage = handler =>
    comms.addEventListener('messageFromDevice', message => handler(messageForCode(message)));

  return { defineMessage, sendMessage, onReceiveMessage };
}
```
```js
/* example-usage.js */
import { makeLogger } from './logger';

export function App(extensionApi) {
  const logger = makeLogger(extensionApi);

  // At build time, we define some message strings to send at runtime
  logger.defineMessage('Message A');
  logger.defineMessage('Message B');

  // At runtime on the MCU, we occasionally send a message
  extensionApi.mcu.setInterval(() => logger.sendMessage('Message A'), 600);
  extensionApi.mcu.setInterval(() => logger.sendMessage('Message B'), 700);

  // Register a UI control so we can see the logs on the frontend
  extensionApi.registerUiControl(domElement =>
    logger.onReceiveMessage(message =>
      domElement.appendChild(document.createTextNode(message))));
}
```
The example is highly simplified, but hopefully it communicates the point. The `makeLogger` unit fully encapsulates all the behavior required to encode strings as small integers and recover the strings on the other side, and it does so by sharing information between the client and the server (the table of `definedMessages`).
Imagine trying to implement that functionality in the “traditional” structure (the functionality of transmitting message/error codes instead of strings, to save space). If you’ve implemented a solution to this problem before in a traditional system, you’ll know that it’s not simple, and will generally involve a lot of manual work to coordinate the registry of codes between the client and server. Such solutions can’t easily be bundled into a neat little package and shared as a reusable library the way that `makeLogger` can.
Note in particular that `definedMessages` is a private variable of `makeLogger`, fully encapsulated and inaccessible to anything else in the system. Most traditional approaches to this problem would require the code registry to be a globally accessible resource, such as a `defined-messages.h` file or a database table with codes and their associated messages.
Also, note that two different calls to `makeLogger` would make two independent loggers, each with its own private state, operating independently. In most traditional approaches, it would be a major overhaul to introduce a second code registry, whereas in the `makeLogger` approach it’s easy to imagine multiple registries being common (e.g. a third-party library may make its own logger).
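As a quick illustration, reusing the `makeLogger` unit from above:

```js
import { makeLogger } from './logger';

// Two independent loggers, each with its own private definedMessages registry
const appLogger = makeLogger(extensionApi);
const libLogger = makeLogger(extensionApi);

appLogger.defineMessage('App started');     // code 0 in appLogger's registry
libLogger.defineMessage('Lib initialized'); // also code 0, in libLogger's registry; no collision
```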
The logger produced by `makeLogger` could easily be injected into code that needs it, so there is less coupling in this solution as well.
Example: IoT “secrets”
This “Grow Station” project has been my first exposure to Arduino. One thing Arduino has is this mechanism for sharing “secrets” with the code (information you don’t want to put in the repo itself), such as the WiFi credentials. See here: Store Your Sensitive Data Safely When Sharing a Sketch.
The way it’s implemented in Arduino, you write the text `SECRET_X` in your code (where `X` can be anything), and the IDE magically finds it and lists `X` in a separate IDE tab where you can enter a value.
The idea is good, but flawed. For example, the IDE will pick these statements up as containing secrets as well, just because they contain the text `SECRET_`:
```cpp
int x = NOT_A_SECRET_X;
const char* y = "this is not a SECRET_Y";
```
The platform could get around all these issues, such as by using a proper C++ parser, but then it starts to become a lot of work.
But with the Microvium paradigm, we could just define secrets as part of the extensions API. Imagine that we had a `defineSecret` method:
```js
export function Wifi(extensionApi) {
  // Define a secret named "ssid"
  const ssid = extensionApi.defineSecret('ssid');
  const password = extensionApi.defineSecret('password');

  const connect = () => {
    console.log(`Connecting to SSID ${ssid.value}`);
    // ...
  };

  return { connect };
}
```
In this example, `defineSecret` would be called at build time, and could add the secret into a user-editable repository of secrets somewhere. The current value of that secret can be returned from `defineSecret` and embedded into the snapshot for use at runtime.
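On the build machine, a sketch of the host-side logic could be as simple as the following (the `secrets.json` layout and the error behavior are assumptions, and how the function is exposed to the VM is elided):

```js
/* Build machine — hypothetical host-side implementation of defineSecret */
const fs = require('fs');

// Secrets live in a local file that is excluded from source control (assumed layout)
const secrets = JSON.parse(fs.readFileSync('secrets.json', 'utf8'));

const defineSecret = name => {
  if (!(name in secrets)) {
    // A real implementation might prompt the user, or invoke the secret's custom editor
    throw new Error(`Secret "${name}" has no value yet; please add it to secrets.json`);
  }
  // The returned object is captured by the calling script and frozen into the snapshot
  return { name, value: secrets[name] };
};
```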
This is much more amenable to composition and reusability. For example, the `Wifi` unit could be wrapped up in a third-party library. Also, if you had multiple WiFi modules for some reason, you could dynamically generate different secrets for connecting to different modules, such as with `defineSecret(wifiName + '.ssid')`. Or maybe you only intend to connect to unsecured networks, in which case you could conditionally define the secret:
```js
const password = onlyUnsecured ? undefined : extensionApi.defineSecret('password');
```
And lastly, what if your secrets aren’t strings? `defineSecret` could potentially accept a UI hook so that the code can define its own secret editor:
```js
extensionApi.defineSecret('mySecret', {
  editor: domContainer => {
    domContainer.appendChild(document.createElement('input'));
    // Etc.
  }
});
```

Or, using an editor provided by a third-party library:

```js
import certificateSecretEditor from 'third-party-library';

extensionApi.defineSecret('myCertificate', certificateSecretEditor());
```
Example: IoT “Cloud Variables”
ArduinoCloud also has this idea of “cloud variables”. See here: https://docs.arduino.cc/cloud/iot-cloud/tutorials/iot-cloud-getting-started#5-creating-variables
Basically, cloud variables are variables reported by the edge device, such as temperature, and can be visualized on the frontend with gauges or other controls.
The Arduino implementation of this idea requires a lot of “magic” behind the scenes. It requires hand-configuring the variables through a UI. It code-generates the firmware definitions of these variables, along with the hookup code to report them to the server. It works great, as long as you stick to the simple cases. But the solution isn’t modular or customizable. You can’t create your own variable types and controls, or import third-party variable types. Variables can’t be composed into “super variables” (e.g. temperature and humidity combined into one). And third-party libraries can’t create their own variables.
You’ve already seen a Microvium-style solution to this problem (the problem of visualizing sensor state on a frontend), and, unlike the Arduino Variables solution, the Microvium solution would achieve all of these goals out-of-the-box without any code generators or other magic (beyond just the magic of distributing and executing the snapshots).
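To illustrate, here’s a sketch of a reusable “cloud variable” written as an ordinary library function over the same hypothetical extension API:

```js
/* cloud-variable.js — a sketch of a composable cloud variable */
export function makeCloudVariable(extensionApi, readValue, intervalMs = 1000) {
  const comms = extensionApi.newCommsChannel();

  // On the MCU: periodically sample and report the value
  extensionApi.mcu.setInterval(() => comms.pushMessageToServer(readValue()), intervalMs);

  // On the frontend: render the latest value (a real version might register a gauge widget)
  extensionApi.registerUiControl(domElement => {
    comms.addEventListener('messageFromDevice', value => {
      domElement.textContent = String(value);
    });
  });
}
```

A “super variable” combining temperature and humidity would then just be composition: `makeCloudVariable(extensionApi, () => ({ temperature: tempSensor.getReading(), humidity: humiditySensor.getReading() }))`.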
Conclusion
I think this idea is worth trying out, given all the advantages it presents. It could be done using Microvium in its current state, but it would be significantly less elegant without support for closures, which is the feature I’m working on at the moment[3]. All of the examples above use closures.
On a non-technical note, the mixing of server-side and client-side code may uncover a potential revenue model for Microvium, similar to how ArduinoCloud has a paid tier for its PaaS/SaaS offering. Having a revenue model may be critical to Microvium’s success and the success of this paradigm, since it’s hard to get much development done without the resources to support and promote it.
[1] I’m using the term “extension” here to try to associate it with the VS Code extension pattern, but it might be more accurately called a “feature”. ↩
[2] Calling an API method from a device that doesn’t support it should just throw, such as calling `registerUiControl` when running on the MCU. ↩
[3] Although I’ve been somewhat distracted this year. ↩