In this month’s episode of Functional Futures, our guest is Oscar Spencer – a co-author of a functional programming language called Grain that compiles to WebAssembly.
Listen to the episode to learn more about Grain, WebAssembly, and how to sell functional programming to developers.
Below, you can find some of the highlights of the episode, edited for clarity.
Highlights from our conversation with Oscar Spencer
Jonn: Hello, Oscar, was my introduction fair?
Oscar: Hello! Yeah, I think your introduction was pretty fair, that’s pretty much me in a nutshell – just sort of sitting around doing WebAssembly all day – all day, every day.
Jonn: Right, and you’ve been doing it for a while, right? You started Grain a long time ago – back in 2017, right?
Oscar: Yep, 2017.
Jonn: So were you a WASM guru already then and decided that you should make a language, or how did that go?
Oscar: At the time, I’d actually just completed a compilers course, and I thought it’d be really interesting if I put something together with a buddy of mine that targeted WebAssembly. And then, you know, we did that – I’ll never forget the shock on our faces when we managed to bind a DOM event from inside of WebAssembly, that was absolutely amazing. I think we both jumped out of our chairs with excitement: “Hey, we clicked the button, and WebAssembly happened!”
So that was really exciting, and it’s crazy to think that that was back in 2017. In a way, I feel like I grew up with WebAssembly, so I’ve just been here for the ride all along, learning as I go, and I mean, it’s been great.
Jonn: A lot of people these days are more or less in the position you were in five years ago when you started working on Grain. How did you learn the concepts of the WASM virtual machine and how it’s executed? Did you just read the specification like 20 times until it clicked? What was the process of learning like for you?
Oscar: Funny enough, we had a lot of help. The Grain compiler initially targeted x86. I don’t want to call it boring, but by today’s standards it was maybe a little bit boring in that sense. We had the official WebAssembly spec online that everyone could read, but there was also an actual reference implementation of WebAssembly written in OCaml.
That felt like a little bit of a cheat code for us because it didn’t take too much for us to figure it out. All we did was download this WASM library from OPAM and swap it in as our target. We were like: “Hey, we’re gonna replace most of these instructions with WebAssembly ones.” Doing stuff like register allocation was super easy because in WebAssembly we have unlimited registers, so it’s actually not too bad to get started, especially if you’re an OCaml nerd like we were and still are. We had a pretty easy time of it. So, yeah, definitely a little bit of a cheat code in the beginning.
But after that it’s mostly just, you know, reading specs, chatting with folks. There are so many lovely people who are happy to help you understand anything that’s going on in the community. You just gotta reach out and say hi, and folks are more than happy to help you out.
Jonn: What is the place to go for a beginner to ask questions about WASM and its peculiarities?
Oscar: There is an official WebAssembly Discord that folks can join, and people chat about everything WebAssembly in there, things that you might not even imagine people are doing with WebAssembly – folks are chatting about it in there.
I think it’s good to join these circles, but also the specific circles for the actual languages and frameworks you want to use. In those circles you can get a lot more specific. Folks are going to have a little bit better answers for “how do I do this exact thing in Rust?” or “how do I do this exact thing in Grain?”.
And folks are gonna be able to get you really good answers because, like you said, WebAssembly is still pretty new. It’s not easy to figure out, for example, once you’ve managed to produce a WebAssembly module, how you actually go deploy that and run it in production, right? There are tools out there that can make it easier for you, there are folks who are running WebAssembly in Kubernetes and getting really crazy with it, but the way that you’re gonna find out about these things right now is through that personal chatting with folks, mostly on different Discords.
We are gonna get to the point where we’ve got all the tooling in place and it’s super easy and everything’s well documented, but we’re still not quite there yet as a community. But we’re rapidly getting there.
Jonn: Can you tell us a little bit about your vision for Grain? If I had to do an elevator speech for Grain, I would say it happens to be the Golang of WASM. Is that a fair analogy?
Oscar: Yeah, sort of, in a way. I think my long-term vision for Grain is to be that easy entry point for people into WebAssembly with the very sly plan in the background of letting functional programming take over the future.
The whole point is to have this language that’s extremely approachable. And, I think, this is a concept that I have really picked up while looking at the React framework, looking at ReasonML, languages like this. You can take some pretty “advanced functional concepts”, but if you present them in a way that folks say, “Hey, actually, yeah, this makes a lot of sense, I’m not getting bugs in my code,” when you’re able to tell those stories about how Messenger.com went from 10 bug reports a day to 10 bug reports a year after switching to ReasonML, when you’re able to tell these stories – that gets people really excited. And so we’re trying to do a lot of that with Grain. We can give you all of this type safety, we can give you all these functional features where you’re going to be able to write really good production code, but it’s not scary, it’s approachable.
We’re breaking a couple of rules in Grain. Like, we have let mut – mutable bindings – and that probably scares some people, but we want to have these places where folks can come in and really feel like they’re at home, feel like they’re writing a language that they’re super comfortable in and that they just really feel good about. I mean, along the way it’s going to be teaching them functional concepts and […] just exposing people to these things and getting them excited about it.
So that’s a lot of where we want to go with Grain – just providing a rock-solid developer experience, just having folks feel really good writing whatever programs they want. Basically being that 90 percent: 90 percent of the programs you want to write, you can write in Grain, and the other 10 percent of WebAssembly you need, you can write in Rust. That’s sort of where we’re aiming as far as how high-level a language we want to be.
Jonn: To clarify one point here: I’ve seen a lot of comparisons between Grain and Elm as well. My personal experience with Elm is that it’s roughly 80–20, but the stuff in that 20 is really borderline impossible. Like, you have to jump through such hoops to get to the 20 percent of use cases that the 80 percent doesn’t cover that it almost makes me question whether I should even go for it in the first place. How do you go about creating escape hatches for people who know what they’re doing and really need them?
Oscar: That’s something that we’ve been toying with. We want to make sure folks are able to do things in Grain that we don’t necessarily endorse, and so we have a couple of features for this. One of those, for example, is an attribute you can put on functions called unsafe, which allows you to say: “Hey, I’m just going to drop down and write low-level WebAssembly right here, right now.” This is not something that I want 99 percent of users to do; however, we’re going to make sure that it’s possible. For one, it made things a little bit easier for us: the entire runtime for Grain is written in Grain, and we probably wouldn’t have been able to do that without being able to write some bare WebAssembly. But it’s about providing escape hatches like this in the language, especially when it comes to, for example, binding to a WebAssembly host. Right now, doing that is some low-level code, and that’s not something that I expect every user to do, so we’re working on projects like [inaudible] that allow you to automatically generate Grain code to bind to these interfaces, so you don’t have to do that sort of stuff.
But yeah, we want to make sure that we have these things in Grain so folks can go wherever they want. And I’ll even add on to that. One of the big exciting things about WebAssembly is the WebAssembly Component Model where we want to be able to mix languages, we want to be able to pull in components from all over the place.
So sure, maybe you don’t want to write that cryptography library in Grain, but that’s okay, you can pull in Rust’s amazing performant cryptography library, be really happy with it, and integrate it into your Grain code. Are we there right this second? No, but we can actually get there.
And I’m telling you, if someone came into the Grain Discord and said, “Hey, I’m trying to statically link to this Rust binary,” I’d say, “Okay, let’s make it happen,” because that’s something we can actually make happen if folks are asking for it now. But this is definitely where WebAssembly is going in the future, and we’ll have these different ways for you to get that last 10 percent that you might not have wanted to write in Grain.
Jonn: That’s really cool and very, very pleasant to hear! You mentioned bindings to host platforms. As far as I understand, these are the kinds of capabilities that are provided by the host – for example, the file system would be one such binding, or localStorage in a browser.
Oscar: Yeah, exactly. WebAssembly code runs in a sandbox, so it’s completely sealed off, with absolutely zero way to speak to the outside world at all. And this is how WebAssembly gets to be secure by default.
The way we start adding capabilities back (because maybe you do want to talk to a file, or the network, or something) is via host functions. There is a set of host functions for common system calls – WASI, the WebAssembly System Interface – which provides a standard way to interact with all of these different system calls on a system. So we have that, but, additionally, your host can provide whatever host functions it wants to do all sorts of things.
For example, there are a couple of game engines where they give you some host functions that say “here is the controller input” or “here is a buffer that you can write to for the graphics”. And that’s super cool, and there’s gonna be all sorts of different hosts providing all kinds of different host functions.
Even at Suborbital, we want folks to be able to do things like talk to a database very easily, and, of course, we could let you go make a network connection and do all that fun stuff, or we can just give you a function that says “hey, I want to query this thing” and go from there. So that’s what host functions are about. Of course, folks need to be able to bind to those host functions, but they’re very low-level things. If you’ve ever written C bindings into a language, it’s sort of exactly that – you’re writing those sort of bindings to what these host functions look like. Like I said before, we are working a lot on automating a lot of this, so folks don’t have to do it themselves.
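To make the host-function idea concrete, here is a minimal sketch using the standard WebAssembly JS API in TypeScript. The hand-assembled module bytes and the env.log import are invented for illustration – this is not Grain or Suborbital code, just the general shape of a host granting one capability to a sandboxed module:

```typescript
// A hand-assembled WebAssembly module (binary format v1) that:
//   - imports a function "env" / "log" with type () -> ()
//   - exports a function "run" that simply calls that import.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" magic + version 1
  0x01, 0x04, 0x01, 0x60, 0x00, 0x00,                   // type section: one type, () -> ()
  0x02, 0x0b, 0x01, 0x03, 0x65, 0x6e, 0x76,             // import section: module "env"
  0x03, 0x6c, 0x6f, 0x67, 0x00, 0x00,                   //   name "log", func kind, type 0
  0x03, 0x02, 0x01, 0x00,                               // function section: one func, type 0
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01, // export section: "run" -> func 1
  0x0a, 0x06, 0x01, 0x04, 0x00, 0x10, 0x00, 0x0b,       // code section: call 0; end
]);

// The host decides which capabilities exist: here, "log" appends to an array.
const calls: string[] = [];
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes), {
  env: { log: () => calls.push("logged from WASM") },
});

// Invoking the sandboxed export triggers the host-provided function.
(instance.exports.run as () => void)();
```

The module has no way to call log unless the host chooses to supply it in the import object – that is the capability model in miniature.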
I think my long-term vision for Grain is to be that easy entry point for people into WebAssembly with the very sly plan in the background of letting functional programming take over the future.
Oscar: Funny enough, back in 2017, the Grain compiler was just written in pure OCaml. The reason for OCaml is – you know, that ML part stands for “meta language” – it is the perfect language for writing a language. I will take that to my grave, no one can tell me otherwise. Developing the compiler in OCaml has been an absolute dream. If you want to tell me that OCaml is bad for all these other use cases, whatever, but for building a compiler, OCaml is fantastic.
I’m not exactly sure when it happened – I think it was maybe 2019 or 2020 – when we made the cutover to ReasonML, and the hilarious reason for that is that there aren’t a lot of OCaml developers out there. They definitely exist, we’re here, we’re a community, but there’s not that many of them. And at this point, ReasonML had gotten more popular, probably quite a bit more popular than OCaml. It was a pretty easy switch – we could just run a tool to convert all the OCaml into ReasonML, and all of a sudden we had a full community of people who might be interested in coming and contributing to the compiler – open source is hard, and we need as much help as we can get. So that’s the origin of why we’re in ReasonML: it’s still the same glorious OCaml programming, just used to write our language.
To your other point of how we get ReasonML to emit WebAssembly – it’s not actually the ReasonML compiler doing that. The Grain compiler just ends up being a regular binary, and the actual WebAssembly emitting happens via a project called Binaryen. Binaryen essentially gives you an IR to target, made up of just WebAssembly instructions, and then it can serialize and deserialize modules for you. It’s very convenient and, of course, on top of that, it has loads and loads of optimizations.
We could have used a full LLVM back end – Binaryen also sits behind LLVM-based toolchains like Emscripten – but we didn’t need that. We knew: “Hey, we’re targeting WebAssembly, let’s just skip all of the LLVM bits and go straight to Binaryen,” because we still get all the same WebAssembly optimizations we’d get going through LLVM. Not the LLVM IR ones – those can be pretty powerful – but that’s something we can take care of ourselves.
But yeah, we wrote some OCaml bindings to Binaryen, and that’s what we use to generate WebAssembly.
Jonn: How do you make sure that these optimizations kind of make sense for your runtime, and did you need to make any adjustments in the process of using Binaryen?
Oscar: Just to clarify, we do have a full suite of optimizations as a part of the Grain compiler that we do run. You can sort of think of these optimizations as enabling Binaryen to do more optimizing. So it’s mostly saying: “Okay, we might have a bit of code that at the WebAssembly level Binaryen wouldn’t necessarily understand how to optimize, but it’s something that we totally understand how to optimize, and we can take care of that.”
So we have a bunch of our own passes that we run, and then after that, we additionally can have those Binaryen optimizations. It’s a lot of sort of adapting our code to make sure we’re putting out WebAssembly modules that are easily optimizable.
Jonn: From what I know about WASM so far, WASM itself does not define a runtime – it’s up to the users of the WASM spec to figure out how they actually want to run this bytecode. And this is where it gets intellectually tricky for me: what do you mean when you say runtime?
Oscar: Yeah, it is a tricky one. When I say runtime in Grain, I’m specifically referring to two things. The first thing is memory allocation and garbage collection – how do we actually manage all of this memory and everything we’re doing with all these objects that we’re allocating. That’s the first big piece. And that’s incredibly important.
There is a whole WebAssembly proposal for garbage collection that’s in the works, which we are incredibly excited for. I think it is going to be a major piece in getting modules across languages to link together. That’s the number one concern you have right now: you can have two WebAssembly modules that each do something, but if you make them talk to each other, they’re just gonna write over each other’s memory.
The second bit of the Grain runtime is all of the support infrastructure: things like the toString function, stuff like that. It’s the language support code that you expect to exist in the language, but that code has to actually live somewhere – it’s not just given to us for free.
Jonn: In terms of Grain, I understand that you’ve made the runtime yourself. Can you talk a little bit about its properties: is it strict, is it lazy, what’s your garbage collection strategy, etc.?
Oscar: So, one thing about WebAssembly is – some people feel that it’s kind of annoying, other people feel that it’s fine – that you write WebAssembly as though it’s running on a stack machine. You push values onto the stack, you pop values off the stack, you have a good time. That’s all fine and dandy. But one of the things that WebAssembly does not allow you to do is inspect your stack. There isn’t really a way to see what values are on the stack or which values are live, which means walking your stack to do garbage collection is not something you can do with WebAssembly today.
The way folks have gotten around this is they’ve implemented their own stack, and they have the actual WebAssembly code manipulate a stack that lives in memory. That’s one way to do it. For us, we chose to use the WebAssembly stack as intended, and that rules out a whole class of garbage collectors we could potentially use. So instead we do reference-counting garbage collection: we keep track of how many references there are to a thing, and once that count goes to zero, we reclaim that memory.
That’s really all that happens there, it’s fairly simple. But it’s still a garbage collector, which means it is a pain to maintain, and so we’re always looking up to the WebAssembly spec gods and asking: “Please deliver GC.” We will be so grateful when we no longer have to maintain this. Nothing hurts my soul more than a GC bug and the hours and hours lost to debugging.
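As an illustration of the reference-counting idea – a deliberately simplified TypeScript sketch, not Grain’s actual runtime, which manages raw WebAssembly memory:

```typescript
// Illustrative reference counting: each allocation carries a count;
// retain/release adjust it, and the resource is reclaimed the moment
// the count drops to zero.
class RefCounted<T> {
  private count = 1; // the creator holds the first reference
  private freed = false;
  constructor(public value: T, private onFree: (v: T) => void) {}

  retain(): this {
    this.count++;
    return this;
  }

  release(): void {
    if (this.freed) throw new Error("release after free");
    if (--this.count === 0) {
      this.freed = true;
      this.onFree(this.value); // "reclaim the memory"
    }
  }
}

const freedLog: string[] = [];
const obj = new RefCounted("buffer", (v) => freedLog.push(`freed ${v}`));
obj.retain();  // a second reference appears: count = 2
obj.release(); // one reference goes away: count = 1, still alive
obj.release(); // last reference gone: reclaimed immediately
```

One classic caveat of pure reference counting, and part of why GC bug hunts hurt: cycles never reach a count of zero, so they are never reclaimed without extra machinery.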
In terms of language properties, we’re strict: when you write something, it’s evaluated right there – you don’t have to worry about whether it’s lazy or happening later. That might come as a disappointment to people who want to see lazy evaluation everywhere, but yeah, no, we’re pretty strict.
The reason for OCaml is – you know, that ML part stands for “meta language” – it is the perfect language for writing a language. I will take that to my grave, no one can tell me otherwise. Developing the compiler in OCaml has been an absolute dream. If you want to tell me that OCaml is bad for all these other use cases, whatever, but for building a compiler, OCaml is fantastic.
Keeping up with the WASM spec
Jonn: This actually brings me to the following question. Throughout this interview, you say a lot of phrases like “as of today, WASM has this property”. How do you cope with changes in the spec, because I assume that you want to be on the forefront at all times?
Oscar: I think that’s one of our biggest challenges right now. There are some runtimes that we support and that we want to support that are maybe a little bit behind the times in keeping their WebAssembly internal engine up to date to actually be able to run some of this stuff.
And that’s a problem because we want to continue moving fast, we want to be using all the latest features and whatnot. So we have a handful of compiler flags to deal with this that turn off specific classes of instructions. For example, we have a “no bulk memory” flag: hey, just don’t use bulk-memory instructions, go use polyfills of these things instead. Which is a little sad, because we do want to be on the forefront of things.
I will say, a lot of folks are pretty good at updating their runtimes. We probably will end up having just more flags to turn features on and off, but come the end of the year, I think we might get a little more strict, I might start just dropping old WebAssembly features. Because, yeah, WebAssembly is rapidly evolving, we’re seeing proposals land, things are happening.
I think a really big one is tail calls, for example. We waited a long time for WebAssembly to have support for tail calls, and now we do. And if anyone ever opens an issue on Grain saying, “Hey, I wrote a tail-recursive function but it blew the stack,” I will not be happy, to say the least, because this is a feature that exists in WebAssembly, but not all runtimes support it, so by default we don’t have that flag turned on – you do have to know to turn it on for certain runtimes.
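The failure mode is easy to reproduce in any engine without tail-call elimination. The sketch below is TypeScript (V8 does not eliminate tail calls in JavaScript either), with invented functions:

```typescript
// A syntactically tail-recursive sum. Without tail-call support in the
// engine, every call still consumes a stack frame, so a large input
// overflows the call stack.
function sumRec(n: number, acc: number = 0): number {
  return n === 0 ? acc : sumRec(n - 1, acc + n);
}

// The mechanical workaround a compiler (or a programmer) applies:
// turn the tail call into a loop, so stack depth stays constant.
function sumLoop(n: number): number {
  let acc = 0;
  while (n > 0) {
    acc += n;
    n--;
  }
  return acc;
}

console.log(sumLoop(1_000_000)); // 500000500000
try {
  sumRec(1_000_000); // blows the stack at default stack sizes
} catch (e) {
  console.log(e instanceof RangeError); // true: call stack exceeded
}
```

With engine-level tail calls enabled, the recursive form runs in constant stack space, which is exactly why the feature matters for functional languages.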
So yeah, I think it is going to come to the point where we just don’t support old features. For proposals that were marked finished a year ago, I think we can safely say: “Hey, yeah, you need to go upgrade your engines if you want to continue using the language.”
But, honestly, I think some of the old versions of Grain are pretty good too – folks might be stuck on one version for a little while, but I think that’s okay.
Jonn: In terms of you personally – or your team, rather: you mentioned that there are features being implemented for WASM, but are there breaking changes? Do you need to do a lot of refactoring to accommodate those new features or breaking changes?
Oscar: I have to say, the WebAssembly team did an amazing job at making WebAssembly be in a position where it didn’t have to have a set of breaking changes. The binary format – the major version has not changed since launch, we’re still on version one of WebAssembly, so there’s no sort of random bytes that are going to just straight up crash the runtime other than new instructions that have been introduced.
And so some runtimes, when they’re trying to load a module, might just say “oh, I don’t recognize that opcode, and I’m gonna die.” And you know, that’s okay, it happens. But in terms of all the tooling to prepare the WebAssembly, you know, all that stuff continues to work.
So we haven’t had too much trouble in terms of breaking changes – no “Ugh, here they go again, breaking stuff” – it’s actually been quite pleasant. I think there’s a proposal right now for WebAssembly feature detection, if I’m not mistaken, to make sure that “Hey, I know that this runtime is going to support these things,” so you can compile different versions of your modules depending on what features are available. It’ll be pretty nice when we can do stuff like that. As long as we’re able to support a couple of flags that turn features on and off, we’ll be okay.
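Until that proposal lands, hosts can probe by hand with WebAssembly.validate, which is roughly the primitive the feature-detection idea builds on. A sketch – the byte arrays are the smallest possible module and a deliberately wrong version header:

```typescript
// Hand-rolled feature probing: hand the engine a tiny module and ask
// whether it validates, instead of dying mid-load on an unknown opcode.
const minimalModule = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // "\0asm" magic
  0x01, 0x00, 0x00, 0x00, // binary version 1 (unchanged since launch)
]);
console.log(WebAssembly.validate(minimalModule)); // true

// Flip the version bytes and the engine rejects the module up front.
const badVersion = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d,
  0x02, 0x00, 0x00, 0x00, // nonexistent version 2
]);
console.log(WebAssembly.validate(badVersion)); // false
```

In practice you would validate a tiny module that uses the instruction you care about – bulk memory, tail calls – and pick a build of your module accordingly.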
I have to say, the WebAssembly team did an amazing job at making WebAssembly be in a position where it didn’t have to have a set of breaking changes. The binary format – the major version has not changed since launch, we’re still on version one of WebAssembly, so there’s no sort of random bytes that are going to just straight up crash the runtime other than new instructions that have been introduced.
Jonn: In terms of your early work, if we go a little bit back in history, did you hit the nail on the head from the first attempt?
Oscar: Oh, absolutely not. [laughs]
Jonn: So, then, what was your experience with refactoring the compiler code, or the bits of the runtime that you wrote, etc.?
Oscar: That experience there was sort of the journey of going from a toy language to like a real, actual, serious language. Because, yeah, in the beginning the code base was kind of just the result of a school project, and I don’t know about you, but I’m not really trying to ship code from a school project to production, right? So it was a lot of refactoring, rebuilding a lot of pieces from the ground up to make them production class.
Ever since then, I’ve grown a ton as an engineer, and other folks on the team have grown a ton as engineers in general, but also as compiler engineers – understanding more concepts and the things we should be doing right as we’re developing.
I think a lot of it was refactoring code that way, refactoring a lot of the ideas we had in the language, of how we want to reason about things, really nailing down: “Okay, where exactly do we want the language to go?” Because in the beginning, we didn’t necessarily know, we were just kind of having fun, and we saw an opportunity – if we put in a little bit of effort, a little bit of elbow grease, we could actually build something that people are gonna love to use and actually want to use.
Because, fundamentally, a lot of the languages we have for WebAssembly right now are just super low-level, and that’s fine for some folks writing WebAssembly, they can deal with that. But the vast majority of people probably want to write a bunch of high-level code. And so it was a lot of: how can we mold Grain to fit that bill more, how can we get Grain to a place where it’s super approachable for people to write code and run it, and where it runs reasonably fast? Is it gonna be as fast as a very serious Rust module? Maybe not, but most people probably don’t need that, given how fast WebAssembly is.
So it’s a lot of just reworking a lot of the ideas of the language, and, of course, there’s a lot of code with that too. But yeah, the road to 1.0 is paved with many, many features and things we want to get done. A huge one that folks have been asking for is macros. We want to have macros in the language. That’s a big, hairy feature that hopefully we’ll get a chance to start working on soon.
OCaml / ReasonML
Jonn: Does OCaml or ReasonML, now, help you formulate your ideas better, or do you do whiteboard brainstorming and then kind of shove your idea into the type system? How does it go?
Oscar: It’s definitely a lot of both. But I have to say, my thinking has been so shaped by OCaml, it’s kind of ridiculous. I think the major thing that OCaml has done for me is that it really makes me think about data first. Which is very different from a lot of other languages, where you might be thinking about a model, about an object and the things you can do to that object. The thing that I love about OCaml (and this has definitely started making its way into Grain) is thinking about the data first.
That means you’re thinking about the types first. The very first thing you think about is: okay, how am I gonna model this data? That’s the very first step, and then we start thinking about transforming that data, moving it around, and doing things to it.
I think that’s a lot of how the Grain experience works out as well. There’s this particular goal that we want to achieve, it’s not “Hey, let’s just start writing code and hopefully we make it there.” No, it’s, like, “This is what we’re gonna achieve,” and it sort of builds itself out from there.
I remember way back in school we had this thing called the design recipe, which essentially was thinking about your data, thinking about your transforms, and even thinking about your tests up front. […] And that’s a lot of what we do today, it’s really thinking about that end case first of what is the real problem that we’re trying to solve, modeling it from there, and then the very last step is “Okay, now we’ll implement it.”
I definitely have been influenced a lot by this sort of idea.
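The data-first flow described above can be sketched in TypeScript with a discriminated union. The Payment type and its variants are invented for illustration – the point is only the order of work: model the data first, write the transforms second:

```typescript
// "Data first": pin down the possible shapes before writing any behavior.
type Payment =
  | { kind: "cash"; amount: number }
  | { kind: "card"; amount: number; last4: string }
  | { kind: "voucher"; code: string };

// Only once the data is modeled do we write the transforms over it.
function describe(p: Payment): string {
  switch (p.kind) {
    case "cash":
      return `cash: ${p.amount}`;
    case "card":
      return `card ending ${p.last4}: ${p.amount}`;
    case "voucher":
      return `voucher ${p.code}`;
  }
}

console.log(describe({ kind: "card", amount: 25, last4: "4242" }));
```

Each transform is then forced to account for every shape the data can take, which is the "design recipe" discipline encoded in the type system.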
Jonn: That’s really cool, and it’s great that these sorts of approaches also make their way into frontend as well with types these days.
Jonn: For refactoring, I think type systems have obvious benefits. To name one: as you change a thing in one place, the compiler tells you what you missed.
Oscar: Oh yeah, that’s the entire development process in the Grain compiler. It’s really funny, especially when I explain the Grain compiler code base to folks – I’m always telling them that it’s actually really simple, we’re literally just moving from IR to IR. So, essentially, you go to the compilation step at the very end and make a code change that says: okay, I want this data to output this WebAssembly code. And then the compiler graciously guides you all the way back through every phase of the compiler, making sure you implement how to lex this, how to parse this, how to get to A-normal form, how to do all these different things.
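A toy version of that "IR to IR" experience, in TypeScript: each phase is a typed function, so changing the final output type breaks every upstream phase until it is updated. The three IRs here are invented, drastically simplified stand-ins for a real compiler’s:

```typescript
// Three tiny "IRs": a source AST, an A-normal-form-ish step, and a
// final list of WebAssembly-style instructions.
type Ast = { op: "add"; lhs: number; rhs: number };
type Anf = { bind: string; op: "add"; args: [number, number] };
type Wasm = string[];

// Each compiler phase is just a typed function between IRs.
const toAnf = (a: Ast): Anf => ({ bind: "t0", op: a.op, args: [a.lhs, a.rhs] });
const toWasm = (a: Anf): Wasm => [
  `i32.const ${a.args[0]}`,
  `i32.const ${a.args[1]}`,
  `i32.${a.op}`,
];

// Change the Wasm type at the end, and the type checker walks you back
// through toWasm and toAnf until every phase agrees again.
const compile = (a: Ast): Wasm => toWasm(toAnf(a));

console.log(compile({ op: "add", lhs: 2, rhs: 3 }).join(" / "));
// i32.const 2 / i32.const 3 / i32.add
```

The types do the guiding: there is no way to wire the phases together inconsistently and still compile.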
It’s really cool and that is the development experience I want everyone to have: make one small change, and then don’t try and guess, let’s not have engineers walking around being like: “Oh, should I change this thing in that file?” No, let’s have a tool that can tell me how to do it. That’s huge.
And it’s funny because we have some of these features in Grain that people think are revolutionary. Like, with pattern matching, if you forget a case, the compiler tells you: “Hey, you missed an edge case, and, by the way, here’s an example of the code that you missed.” For people who’ve never seen this before, it blows their minds – they’re like, “Whoa, that’s insane, this is really how you avoid bugs,” and you’re like, “Yeah, it’s crazy, this has existed forever, it’s just not in the mainstream languages – but it’s here, and you can avoid all sorts of classes of bugs.”
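TypeScript can approximate that exhaustiveness check with a never-typed default branch – a common idiom, sketched here with an invented Direction type. Add a variant, forget a case, and the assignment to never stops compiling, much like the warning described:

```typescript
// Exhaustiveness checking via `never`: if a new variant is added to
// Direction and a case is forgotten, `missed` no longer typechecks.
type Direction = "north" | "south" | "east" | "west";

function toVector(d: Direction): [number, number] {
  switch (d) {
    case "north": return [0, 1];
    case "south": return [0, -1];
    case "east":  return [1, 0];
    case "west":  return [-1, 0];
    default: {
      // Unreachable today; becomes a compile error if a case goes missing.
      const missed: never = d;
      throw new Error(`unhandled direction: ${missed}`);
    }
  }
}

console.log(toVector("east")); // [1, 0]
```

ML-family compilers like OCaml (and Grain) go further by also printing an example of the missed pattern, which is the part that tends to blow people’s minds.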
That’s one of my favorite things about languages like OCaml – the fact that you write some code, and if it compiles, it’s correct. It may not do what you wanted it to do, but it’s definitely consistent – it’s exactly what you wrote. And that’s amazing to me, and I think that’s kind of lovely: as you’re following all these compiler errors, by the time you actually get the program to compile, it usually works “on the first try”. It’s not really your first try, because you ran the compiler command like 12 times to get to this point, but by the time you get there, you’re usually done, and you can breathe a sigh of relief, say, “Ah, I implemented this feature,” and submit your PR.
Jonn: Do you have any problem convincing people that that’s good? The argument I hear a lot is that untyped – or uni-typed – languages are languages for rapid prototyping, and that you can’t move as fast with ML-like languages or Rust, or something like that. Do you get this sort of pushback, and if yes, how do you respond to it?
Oscar: I think about this a lot, and I preach it to the team all the time: the biggest thing with Grain is really how we tell our story. It’s all about how we talk about the language. Which is why you don’t see me going around talking about how you do some really complex functional thing in Grain – you never hear me say those sorts of things. It’s all about “here’s how I can develop faster”, “here’s how I can build better programs”. And I think when you come with that mindset, a lot of people are a lot more open to it.
Like, when you write that plus sign on that variable, the compiler knows that thing is a number, which means this argument is a number, which means that in all the functions where you use it, it’s a number – and the compiler is gonna make sure you’re being consistent about it.
And then someone might hit you with: “Oh, but the types aren’t in the code, I can’t see the types, so how do I know what types these things are?” But these days, having amazing editor support is huge. In VS Code, for example, you’ve got code lenses where every single statement tells you, “hey, this is the type of this thing,” and you can hover over literally any value, and it’s going to tell you what it is.
You alleviate a lot of these concerns and you tell people, hey, you can write this exact same code, you don’t even have to think about the types, you can just write the code like you might in Python. You just write the code, you don’t think about it, you run it, and you’re done. The difference is the compiler will let you know, in our case, that, hey, you did something a little bit wrong, and just make sure you clarify this or fix this thing, so that way it’s obvious not only to other people reading the code, but also to you and the compiler.
And then it’s actually pretty easy to get people on board with this. When you tell them that it’s just super magical and it’s gonna make sure you never make a mistake, they’re really excited by that. When you tell them that you can eliminate null pointer exceptions from your code entirely, it’s something you never need to worry about, they get excited by that.
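On the null-pointer point: languages like Grain replace null with an explicit Option type that the compiler forces you to handle. Here’s a rough sketch of that idea in TypeScript (the Option encoding and names below are ours for illustration, not Grain’s API):

```typescript
// Option modeled as a discriminated union: absence is an ordinary value
// the compiler makes you handle, so there is no null to dereference.
type Option<T> = { kind: "Some"; value: T } | { kind: "None" };

function greet(name: Option<string>): string {
  switch (name.kind) {
    case "Some":
      // The compiler has narrowed `name` here, so `name.value` is safe.
      return `Hello, ${name.value}`;
    case "None":
      return "Hello, stranger";
  }
}

console.log(greet({ kind: "Some", value: "Oscar" })); // Hello, Oscar
console.log(greet({ kind: "None" }));                 // Hello, stranger
```

If you forget the `None` case, the code fails to compile – which is exactly how the “never worry about null” guarantee is enforced.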
So I think it’s all about how we talk about these things and how we get people thinking about them. Because, yeah, then you’re gonna get less pushback. And I think a lot of people, as they’ve come to Grain, they’ve sort of seen that, like, “Oh, yeah, this is fine”, like, “I can write this code, I feel really comfortable, I can use this library, it’s not a huge deal”. And then, of course, the language slowly, sneakily starts adding more functional concepts on you, but that’s okay – you accept it with open arms, because you’re enjoying how good it feels to write your programs, how confident you feel about them, and that they’re doing the right thing.
It’s really cool, and that is the development experience I want everyone to have: make one small change, and then don’t try to guess. Let’s not have engineers walking around asking: “Oh, should I change this thing in that file?” No, let’s have a tool that can tell me how to do it. That’s huge.
Questions from the audience
Jonn: All right, we have a spicy one. What do you think is the best entry point right now for functional programmers to start working with WASM? Should I just learn Rust?
Oscar: What a spicy one. Okay, so there are a couple of different ways you can look at this. Rust has enough functional features, I think, for your average functional person to be pretty happy. You’ve got the strong borrow checker that’s going to be telling you how to live your life, you’ve got pattern matching, all sorts of things, so you totally can learn Rust if you are just trying to do some functional programming.
I think it really depends on the level at which you want to write the code. If you want to write super-duper low-level code, yeah, go ahead and pick up Rust, but if you’re looking for something a bit easier, a bit friendlier, then I think the answer is going to be: pick up Grain.
In terms of functional programming languages, right now those are going to be your really good options. However, there is a Haskell WebAssembly compiler called Asterius, and so if you specifically are a Haskell person and you want Haskell, you can check out that project as well, and you’ll probably be pretty happy because it’s exactly Haskell.
Some people just get a result type and they’re happy, that’s all they care about, but, like, there’s different levels to it.
Jonn: I think we have one last question. “Oscar said that WebAssembly is used like 80 percent server-side. It’s kind of surprising. Could we get a brief elevator pitch for server-side WebAssembly?”
Oscar: [laughs] Oh, you shouldn’t have asked me that. Yeah, there’s so much you can do with WebAssembly on the server. It goes back to that thing we were talking about earlier, about WebAssembly being that write-once-run-anywhere language. Not to completely upset the JVM, but that’s sort of, in a way, what it’s doing right now.
There is a WebAssembly runtime called WAMR – the WebAssembly Micro Runtime – that thing runs on literally anything. If you want to run WebAssembly on a Nintendo DS, you totally can, it’s like that level. And so you can actually get your WebAssembly code into all these places.
So, with WebAssembly’s sandboxing capabilities and other things like that, a use case that we couldn’t really support before suddenly becomes possible: you can run arbitrary code from people you don’t know, safely, and that is kind of insane. That opens up a ton of opportunities.
So what we do at Suborbital is we extend your SaaS applications, whatever kind of application you have, with this ability for your users to write functions that do things in your app. And previously that’s something that maybe you had to run some custom Docker container solution for, which was still probably a little bit unsafe, but now we can actually completely lock down that sandbox and say: “No, you are allowed to do this thing and this thing only, that’s it.”
Being able to do that is actually kind of huge. There’s a lot of use cases like this, and then it even goes into replacing Docker containers: “Hey, actually we don’t need these big hefty containers to do something. If I can have a WebAssembly module that does this one task, I can then go and link all these modules together and do all this work. I’m all set.”
And that’s how we end up having a lot of WebAssembly happen on the server – and even going into edge computing as well. It’s like: hey, how do I actually push compute out to my users? Shipping massive binaries and containers and deploying them all around the world is really cumbersome, but when I have a WebAssembly module that’s 100 KB, it’s very easy for me to say: “Okay, actually, have this one module on a server in India and run it.” That’s something that’s much easier to consider, and it’s actually very serious. And I think that’s what makes us really excited about the capabilities of WebAssembly.
Big thanks to Oscar for being a guest on Functional Futures! 🙏
If you want to hear more from us, be sure to subscribe to the Functional Futures podcast on your favorite podcast platform. 😉