# Functional Futures: Lunatic with Bernard Kolobara

May 31st, 2022

In this month’s episode of Functional Futures, our guest is Bernard Kolobara – the creator of Lunatic, an Erlang-inspired runtime for WASM.

In the episode, we talk about Lunatic, WebAssembly, BEAM (Erlang’s VM), and more.

You can check out the full episode on our YouTube channel or listen to the audio version on our podcast page.

Below you can read some of the highlights of the episode, edited for clarity.

## Highlights from our conversation with Bernard Kolobara

### What is Lunatic?

Jonn: I learned about Lunatic a while ago because my friend and one of the earliest Serokellers, Ante Kegalj, was contributing to your stuff. It’s a small world. Can you please tell our viewers what Lunatic is, how it evolved, and what’s the story behind the name?

Bernard: Lunatic is basically a WebAssembly runtime inspired by Erlang’s principles. It’s an attempt to bring some of the ideas from the Erlang world to all programming languages that can be compiled to WebAssembly.

When I say ideas, I mean we’re bringing the concurrency model based on processes that don’t share any state with each other, the message sending between processes, linking, building abstractions such as supervision trees, and also the distributed part of Erlang. So you can spawn a process in a different node and just have it magically run and send messages to it.

The interesting part is actually the WebAssembly component of it.

Bernard: I can give an example of how Rust currently works with our library; I think this will paint a picture of how the fundamentals underneath work. Basically, I first attempted to do something similar to Erlang in Rust, and if you want to do that, you kind of need to use async Rust and tasks in async Rust. But you always have this issue that Rust applications have one memory space. So if you have a task, you spawn it, it’s running, and if it messes something up in this memory space, corrupts some data, then even if you restart it, you won’t get fresh memory like in an Erlang process. This fault-tolerance part is always missing and hard to get.

There are other issues with async Rust. For example, if you have some computationally intensive part in Rust and you run it without yielding back to the executor, it’s going to get the thread stuck, and the responsiveness of the system will suffer. There are also various ergonomic issues, because you cannot have async traits in Rust. So there were a bunch of different issues.

In Erlang, it feels so elegant – you write your code in a linear way, no coloring of functions, no async/await, and it still works in a highly performant, I would say, scheduled way with [inaudible]. My idea was: “how can I have this experience in Rust?”

At the same time, I also kind of discovered WebAssembly, and WebAssembly instances are really similar to Erlang processes. Basically, each WebAssembly instance gets its own linear memory, which is similar to an Erlang process’s heap memory, and they have their own stack.

What we do with Lunatic is insert preemption points into the bytecode. We take a Rust application, compile it to WebAssembly, and insert what’s basically a reduction counter, if you’re familiar with Erlang. After the application runs for some time, it will yield back. So you can write any code – you can even cross-compile, take some C code base and link it together with your Rust code base – and everything stays responsive, yields back regularly, and works fine.
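The reduction-counter idea can be sketched in plain Rust. This is only an illustration of the mechanism – Lunatic’s real instrumentation operates on WebAssembly bytecode during compilation, and the budget value here is made up:

```rust
// Hypothetical per-process reduction budget (Lunatic's real value differs).
const REDUCTIONS_PER_SLICE: u32 = 4_000;

struct Process {
    reductions: u32,
    yields: u32, // how many times control was handed back to the scheduler
}

impl Process {
    fn new() -> Self {
        Process { reductions: REDUCTIONS_PER_SLICE, yields: 0 }
    }

    // Called at every loop back-edge by the instrumented code: decrement
    // the budget, and when it runs out, yield and start a fresh slice.
    fn bump(&mut self) {
        self.reductions -= 1;
        if self.reductions == 0 {
            self.yields += 1; // in Lunatic: suspend and reschedule the process
            self.reductions = REDUCTIONS_PER_SLICE;
        }
    }
}

// A loop that cannot be entered without executing a reduction-count
// instruction, so even an arbitrarily long loop yields periodically.
fn run_loop(iterations: u32) -> u32 {
    let mut p = Process::new();
    for _ in 0..iterations {
        p.bump();
    }
    p.yields
}

fn main() {
    // 10_000 iterations with a 4_000 budget yields control back twice.
    println!("yields: {}", run_loop(10_000));
}
```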

Basically, through WebAssembly we introduced a new concurrency model to Rust. You can not only spawn async tasks – now you can also spawn WebAssembly instances, which are small, isolated units of computation that have a mailbox (so you can send messages to them), and you can also link them. We try to bring some of the Erlang ideas to Rust in particular right now, but we are also focused on bringing them to other languages that compile to WebAssembly, because our main abstraction is this WebAssembly instance.

Regarding the name: in the beginning, I actually did not know about WebAssembly, and when I wanted a more highly performant Erlang, I started working on a mix between Lua and Erlang, because I knew Lua had a really amazing JIT compiler. I was doing a Lua runtime with some elements of Erlang, and I named it Lunatic. The logo, if you notice, is a moon – it’s also inspired by Lua. So that’s actually where the name comes from. I just kept the name, but the project has gone in a completely different direction now, and it does not really have any parts of Lua inside of it.

### Why WebAssembly?

Jonn: Could you tell more about what WebAssembly actually is, and why you picked WebAssembly and not JVM, for example?

Bernard: I can give a short intro to WebAssembly for people that are not familiar with it. Basically, I think the “Web” part of the name in WebAssembly is a bit unfortunate. I mean, nowadays WebAssembly is mostly deployed inside of the browser, so it makes sense, but WebAssembly is basically a bytecode definition. It’s a really simple bytecode definition that maps well to modern CPU instructions. So you can compile it really fast to some really efficient code that the computers can execute.

It’s the opposite of the JVM: it doesn’t have anything else. Like, it doesn’t have a garbage collector, it doesn’t have any APIs, it’s completely blank. And that’s why it’s such a good target for languages such as C, C++, and Rust, because it doesn’t assume any runtime. It’s always meant to be embedded somewhere else.

So you compile some code to WebAssembly, and then you run it, for example, in the browser, and then you can expose some APIs to the WebAssembly code through JavaScript. So you can talk with JavaScript, but this interface is always just, like, integers and floats – there are no string types, it’s always super low-level. WebAssembly gives you a way to abstract away from the hardware a bit.

We also use this in the distributed part of Lunatic. You can spawn different nodes running different operating systems and different CPU architectures. So, basically, you can say “run this Rust function on this other machine”, and we transfer the module in the background, compile it to the native architecture, and then run it there.

Once you have this kind of bytecode abstraction, you can do a lot of interesting stuff – once you’re limited to machine code, the code is, first of all, not portable. But we also use WebAssembly to insert instructions for reduction counting. Basically, what we do is, if you are running a loop, for example, we make sure that you cannot enter the loop without running at least one reduction-count instruction, so you cannot have an infinite loop that just occupies a thread, occupies all the resources, and never yields back. We insert these at strategic points so that you never go through too many instructions in a row without one.

It’s not static analysis on Rust or C code – we can take any WebAssembly bytecode, analyze it, and put the instructions there; it’s a step during the just-in-time compilation.

We use a compiler called Cranelift that allows you to generate efficient machine code from WebAssembly. While this process is going on, we just put extra points like “now yield back to the scheduler”, so the whole system stays responsive and we get this low latency part that Erlang promises.

I think it’s really interesting because Erlang gives you low latency, but if you want high performance, you kind of need to drop down to C. Then you have NIFs – Rustler is also a popular project – so you can link native Rust modules, but once you do this, you lose all the responsiveness of the Erlang virtual machine, and then you are on your own again. You need to write code that periodically yields back, and if you just pick a library for image processing, you cannot really modify it to stop doing image processing for a moment and yield back.

And we don’t really need to care about that – you can cross-link multiple projects and have a codebase that’s half C, half Rust, and it will just work.

What’s also important about WebAssembly, as I mentioned, is that it basically does not have any APIs. We call them host functions – they are the layer between the code running inside of the WebAssembly module and the host environment. A good example of a host environment is the browser: you can expose some functions to interact between the browser and the WebAssembly module. And what we do with Lunatic is expose functions so you can spawn new processes and send messages. Many constructs that live in a programming language also don’t make sense here – for example, threads in Rust. What would it even mean to spawn a thread in a browser, where most of the execution is single-threaded? So this functionality is just not available when you compile down to WebAssembly.

This helps us eliminate all the errors we would have if somebody running a process suddenly started spawning threads, because that is just not allowed by design. WebAssembly gives us a bunch of these limitations that we use to create this nice environment – we set the rules of what you can and cannot do, and we provide a concurrency model, so you can get really close to that Erlang feeling in your applications.
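The host-function boundary described here can be mimicked with a toy registry in plain Rust. This is a hypothetical sketch, not Lunatic’s real API: the host decides which named functions exist, everything crosses the boundary as plain integers, and anything not registered (like thread spawning) simply does not exist from the guest’s point of view:

```rust
use std::collections::HashMap;

// Host functions take and return plain i64s, mirroring the fact that
// the WebAssembly boundary only speaks integers and floats.
type HostFn = fn(&mut HostState, i64) -> i64;

struct HostState {
    mailbox: Vec<i64>, // stand-in for "messages sent to a process"
}

fn host_send(state: &mut HostState, msg: i64) -> i64 {
    state.mailbox.push(msg);
    0 // success code
}

struct Runtime {
    state: HostState,
    exports: HashMap<&'static str, HostFn>,
}

impl Runtime {
    fn new() -> Self {
        let mut exports: HashMap<&'static str, HostFn> = HashMap::new();
        exports.insert("send", host_send); // deliberately no "spawn_thread"
        Runtime { state: HostState { mailbox: Vec::new() }, exports }
    }

    // What a guest call to an imported host function boils down to:
    // look the name up in the table, or fail because it was never exposed.
    fn call(&mut self, name: &str, arg: i64) -> Result<i64, String> {
        match self.exports.get(name) {
            Some(f) => Ok(f(&mut self.state, arg)),
            None => Err(format!("host function `{name}` is not exposed")),
        }
    }
}

fn main() {
    let mut rt = Runtime::new();
    rt.call("send", 42).unwrap();
    assert!(rt.call("spawn_thread", 0).is_err()); // not allowed by design
    println!("mailbox: {:?}", rt.state.mailbox);
}
```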

### OTP in Lunatic

Jonn: You mentioned that you don’t want to impose architecture and stuff like this onto the users. I wonder what you are doing about OTP and do you ship with it?

Bernard: I think there are two parts of the VM. First of all, there is the fundamental runtime that’s implemented in Rust, compiles to native code, and then runs the WebAssembly. And then there are these guest libraries that you use to write applications, which build on the capabilities exposed by the underlying platform. In the platform, we provide the minimum amount of things – processes, linking them together, messaging, mailboxes – things that you cannot really do in the guest space. But everything else we provide as libraries; for example, we have supervisors in our Rust library and some abstractions that are similar to GenServer, so you can write GenServers and supervisors.

It’s also similar to Erlang because OTP is a library, and so we provide this also as a library on top of it. Now we are focused a lot on Rust, but these abstractions really depend on the language you are using. In Rust, we take advantage of the type system to express how our supervisor works, what the children are, and the supervisor strategy. It depends a lot on the Rust type system. We wanted to make the programming model feel really close to the language.

What we do in Rust, for example, is stick to Cargo and the default tools. So if you know Rust, you basically just change a Cargo config saying the runner is Lunatic […] So if you do `cargo run`, it compiles and runs directly on Lunatic; if you do `cargo test`, everything works out of the box. So the developer experience feels native.
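Concretely, the Cargo setup described here usually boils down to a few lines in `.cargo/config.toml`. The keys below reflect Lunatic’s documented setup for the `wasm32-wasi` target at the time (with the `lunatic` binary on your PATH) and may change between releases:

```toml
# Build for the WASI target by default, and run the produced
# .wasm artifact with the lunatic runtime instead of natively.
[build]
target = "wasm32-wasi"

[target.wasm32-wasi]
runner = "lunatic"
```

With this in place, `cargo run` and `cargo test` transparently execute on Lunatic.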

And we also want the libraries to feel native to the Rust developers. In the beginning, it was really hard for us to take some of these Erlang concepts that are pretty dynamic and kind of shove them into the Rust type system. Some things did not feel natural, and we needed multiple iterations until we got it right. But I think, at this point, we have a really good foundation for it and we take advantage of Rust’s type system.

For example, we have session types, which we introduced in the latest release. You can spawn processes that communicate and define the protocol in the type system up front, and if, for example, you drop this communicating object early, the process panics, and then the linked process panics as well. So if one part hangs or something, the type system kind of ensures that everything blows up and you don’t just have a hanging process somewhere. We try to take the best parts of Erlang and the best parts of Rust and its type system, and merge them together into something that’s really powerful.
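The flavor of encoding a protocol in the type system can be shown with a small typestate sketch in plain Rust. This is not Lunatic’s session-type API – all names here are invented – it just illustrates how a wrong protocol step becomes a compile error instead of a runtime one:

```rust
use std::marker::PhantomData;

// Protocol states: first send an id, then a name, then the session is done.
struct AwaitingId;
struct AwaitingName;
struct Done;

struct Session<State> {
    log: Vec<String>,
    _state: PhantomData<State>,
}

fn open() -> Session<AwaitingId> {
    Session { log: Vec::new(), _state: PhantomData }
}

impl Session<AwaitingId> {
    // Consuming `self` means the old protocol state is unusable afterwards.
    fn send_id(mut self, id: u32) -> Session<AwaitingName> {
        self.log.push(format!("id:{id}"));
        Session { log: self.log, _state: PhantomData }
    }
}

impl Session<AwaitingName> {
    fn send_name(mut self, name: &str) -> Session<Done> {
        self.log.push(format!("name:{name}"));
        Session { log: self.log, _state: PhantomData }
    }
}

impl Session<Done> {
    fn close(self) -> Vec<String> {
        self.log
    }
}

fn main() {
    // open().send_name("x") // does not compile: wrong protocol step
    let log = open().send_id(7).send_name("alice").close();
    println!("{log:?}");
}
```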

Jonn: But can I still compile a WASM program that will send processes arbitrary garbage?

Bernard: Yes, you can, definitely. We also expect people to sometimes write in different languages – say, one process in C and another in Rust – so the messages sent between them are just buffers, because we cannot make any assumptions. You can serialize any data you want into a buffer and deserialize it on the other side, and we leave the serialization format and so on up to the developer. It’s basically: “here’s a buffer, write something in it, and we will ship it to the other process and let it know it arrived”.
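A minimal sketch of that “just a buffer” contract in plain Rust – the length-prefixed format here is invented for illustration; Lunatic itself just ships the bytes, and in practice a Rust process might use serde/bincode while a C process writes the bytes directly:

```rust
// Encode a message as: 4-byte little-endian length prefix + UTF-8 payload.
fn encode(msg: &str) -> Vec<u8> {
    let mut buf = Vec::with_capacity(4 + msg.len());
    buf.extend_from_slice(&(msg.len() as u32).to_le_bytes());
    buf.extend_from_slice(msg.as_bytes());
    buf
}

// Decode on the receiving side; returns None for malformed buffers,
// since the runtime makes no guarantees about the contents.
fn decode(buf: &[u8]) -> Option<String> {
    let len = u32::from_le_bytes(buf.get(..4)?.try_into().ok()?) as usize;
    let payload = buf.get(4..4 + len)?;
    String::from_utf8(payload.to_vec()).ok()
}

fn main() {
    let wire = encode("hello from another process");
    assert_eq!(decode(&wire).as_deref(), Some("hello from another process"));
    println!("round-tripped {} bytes", wire.len());
}
```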

### Go and Lunatic

Jonn: What about other languages? Like, for example, Go. Some people like Golang – and I understand them now that it has generics – but one of the things they sometimes mention when they explain why they like Golang is the fact that it interacts well with WASM.

How hard would it be for me as a Golang user to make use of Lunatic’s capabilities, host functions, etc.?

Bernard: The Go compiler – I don’t think it compiles to WebAssembly, but there is a project called TinyGo. It’s basically Go meant for embedded devices, and it has great support for WebAssembly. I’ve never tried it in practice, but I know the simple examples at least work. So I can take a hello-world application and compile it to WebAssembly, and because it also uses WASI, the WebAssembly System Interface, Lunatic understands those calls – you can open files, write files, write to standard output, read from standard input. But what we don’t have is a library for the specific Lunatic capabilities: spawning new processes, sending messages. This does not exist yet, so somebody would need to contribute it.

We are mostly focused on Rust at the moment – we want to nail this story first before we expand from that point on – but I think it’s completely feasible to do this. And what’s also interesting, as I mentioned: because WebAssembly does not assume anything and is such a simple bytecode, once you compile Go code to WebAssembly, you cannot really do stack switching inside of WebAssembly. So goroutines don’t really work when you compile for the browser. They make it work because they have part of this written in JavaScript: when you call something that’s supposed to await, they kind of reschedule it and then jump between JavaScript and the WebAssembly module, and it feels like goroutines are running concurrently. But this obviously doesn’t work in Lunatic, because we don’t have a JavaScript runtime, and we don’t have a host function that would take care of this.

So you could just use our concurrency model with Go – you would not have goroutines, but you could use processes and mailboxes. It really depends on the language, I think – the type system and the possibilities – but maybe it could feel nice in Go: you would kind of have Go written the Erlang way. And I think the simple examples, like reading from files, work out of the box right now – you can just compile it, and it runs no problem. But somebody would still need to contribute the parts that take care of, for example, opening networking connections. This is not yet part of WASI, so it would need to be worked on.

Jonn: So for me, if I want to, let’s say, interact with Go programs, I would need to write a Rust wrapper, essentially, and then proxy messages into stdin.

Bernard: The Lunatic functions for sending messages are pretty simple – just: write into this buffer, this slice of memory, and then send it to this process ID. If you created a few of those wrappers, you could already make use of the message-sending capabilities, so you don’t actually need to proxy the standard input. I think it would not be too much work even if you just hand-coded the small parts that you need.

At the moment, I feel like there is this opportunity, because people nowadays discuss a lot how you should structure your applications. Like, should you do microservices, should you do monoliths. [Inaudible.] I feel like this kind of architecture comes naturally to Erlang, because you’re already building your application out of small components – processes – and if you just take them and put them on different computers, they no longer talk through memory but through messages to each other. So it kind of naturally scales, and you don’t need to change your programming model.

With Lunatic, you also get the possibility to write in different languages. Now you’re not in Erlang, but you have a WebAssembly module written in Rust and another in Go, and they also talk. While microservices usually talk over HTTP, these use the native message sending. It’s more performant because it goes through fewer layers. But at the moment, as I mentioned, we’re mostly focused on Rust, so we have not explored much of this whole world of different languages interacting and how it should work.

I know with C and Rust it’s easy, because you can always link C code inside of Rust, so you can take care of some libraries – if you need a C library, in some cases it just works. But I think it’s also important to mention that you are sometimes limited in what you can compile to WebAssembly, because some C code just has assembly embedded in it. You cannot really take arbitrary x86 assembly and compile it to WebAssembly – it would not make sense. So some applications don’t even compile, the compiler just would not work. There are always limitations there, but I think it could enable some interesting architectures.

### Would you recommend Lunatic to people new to WASM?

Jonn: Would you recommend Lunatic as a way to get familiar with the world of WASM for beginners like me who maybe know Rust or Erlang but don’t know anything about WASM? And, if so, could you maybe highlight some repositories from which WASM beginners like us can draw inspiration?

Bernard: There are, I think, two roads you can take with WebAssembly. The first and, I would say, most common one is “I want to run some Rust code, some C code in the browser”. Lunatic cannot help you much with this part, because we are mostly focused on the back end at the moment. There are many great libraries you can use – for Rust, there’s wasm-bindgen, which basically helps you map Rust types to JavaScript types and makes the communication between them seamless. So if you’re focused on the browser and want to run some Rust there, that’s a great choice.

But if you would like to run some WebAssembly on the back end, I think Lunatic is a great choice. We are fairly stable – it’s been about two years in development now. We went through a few rewrites and iterations, but at the moment it works well.

I wrote a Telnet chat application; I think it’s a good starting point because it’s not that trivial. It also uses some Rust dependencies that compile to WebAssembly, and everything works. So, basically, how it works: Telnet is a really simple protocol – you connect to this server, and every keystroke you type is sent to the server. The UI is rendered on the server, and then just a diff of escape sequences is sent back to your terminal, and it changes. It’s a bit like Phoenix LiveView, but for the terminal, if you’re familiar with that.

[…]

I think it displays all the major parts of Lunatic, and it’s actually not a big application – I wrote it over a weekend – so you can comprehend it pretty fast. It’s a good starting point if you want to get a feel for how writing Lunatic-style Rust code feels, and for WebAssembly on the back end in general.

We would like to thank Bernard for being the guest of this episode!

If you would like to hear more from him, you can follow him or the Lunatic runtime on Twitter. Additionally, you can check out Lunatic on GitHub or join their Discord.

If you want to hear more from our podcast, be sure to subscribe to us on your favorite podcast platform!
