Rust is Not a Functional Language

We at Serokell love Rust. We also love functional languages like Haskell and OCaml. However, there appears to be a bit of confusion as to whether Rust can be called a functional language, too. This article aims to dispel that confusion. To spoil the conclusion: Rust isn’t a functional language.

Not to imply that Rust is bad in any way: it does what it sets out to do very well. However, while it borrows some ideas from the functional programming paradigm, it doesn’t follow it at the core. Which is okay, not every language needs to follow a single paradigm. But calling Rust a functional language is selling both Rust and FP short.

What’s a functional language anyway?

To decide whether Rust could be considered a functional language, we first need to understand what we call a functional language in the first place.

There isn’t a cut-and-dried definition, unfortunately. There are a few possibilities, spanning a spectrum from overly strict to overly lax. Let’s look at a few.

The theoretical underpinnings

The strictest possible definition would be based on the theoretical foundation of the language. In short, if a language uses lambda calculus as its theoretical framework, it’s a functional language. We have a series of articles on lambda calculus if you’re interested in the details.

This seems quite straightforward. However, it ends up being overly strict.

For one, Lisp, which is broadly considered to be a functional language, isn’t, strictly speaking, based on lambda calculus. Hence, under this definition, Lisp wouldn’t count as a functional language.

For two, not all programming languages are based on a strict theoretical framework to begin with, so this definition would be entirely inapplicable to those. And that includes a lot of popular languages, e.g., JavaScript, C, C++ — and, yes, Rust.

Still, there are some languages that satisfy this definition: Haskell, various ML variants, Agda, Idris, Coq, to name a few.

First-class functions

One oft-repeated definition of a functional language is “a language with first-class functions”. This basically means that functions can be passed around as regular values with no limitations.

This definition does capture one of the cornerstones of functional programming. In practice, however, it ends up being a bit too broad, and a little too vague.

Firstly, it’s not entirely precise: it doesn’t specify what we mean by “functions”, exactly. For the sake of argument, if we mean “a thing one could pass arguments into and get some result out of”, then function pointers technically fit the bill, and suddenly C, having first-class function pointers, is a functional language. So this is, evidently, too broad.

We can narrow this down if we say by “functions” we actually mean “closures”. That is, functions that can optionally capture their environment (i.e. variables in scope at the definition site). Indeed, most functional languages don’t even make a strict distinction between the two. This is a bit more useful, but may still be a little bit surprising: JavaScript has first-class closures, thus it’s a functional language under this definition.
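To make the distinction concrete, here’s a minimal Rust sketch (the names are made up for illustration): a plain function carries no environment, while a closure can capture a variable from the scope it’s defined in.

fn add_one(x: i32) -> i32 {
    x + 1
}

fn main() {
    let offset = 10;
    // A plain function: no environment to capture.
    let f: fn(i32) -> i32 = add_one;
    // A closure: captures `offset` from the enclosing scope.
    let g = move |x: i32| x + offset;
    println!("{} {}", f(1), g(1)); // 2 11
}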

Now, it’s debatable whether JS can be strictly considered a functional language or not, but it at least supports the functional programming style, so this definition is not entirely impractical.

On JS and functional style

This is a bit of a tangent, but while we’re here… I find it useful to differentiate between the functional programming paradigm and the functional programming style. The former refers to a certain approach to conceptualizing and reasoning about programs, the latter refers to, well, a certain style of writing the code.

Naturally, there’s a bit of an overlap between the two, however, there is a distinction to be made in relation to languages. Languages that support and/or are based on the functional paradigm, generally encourage (or even require) thinking about programs in a certain way, which naturally leads to code written in the functional style. On the other hand, languages that only support the style require a tremendous amount of self-discipline to adhere to the style strictly — because they not only don’t encourage it but oftentimes it feels like they actively discourage it.

For that reason, I never quite understood the idea of “functional JS” — I’ve never seen it actually work beyond toy examples, strictly speaking: in real code bases I’ve seen, there’s always a fair bit of imperative horror, which is haphazardly swept under the rug. But some people swear by it, and I’m not really a JS developer, so who am I to judge?


Other language features

If we want to refine this “first-class closures” definition further, we can add more language features as prerequisites. This approach is rather popular, and it is descriptive. Let’s go over some of the more popular ones.

Immutability

This is one of the more often cited features. Broadly, immutability refers primarily to data structures and variables. By immutable data structures, we mean that if we need to change something in, say, a list, we create a — likely shallow — copy of the list with a certain element changed. By immutable variables, which is, strictly speaking, a misnomer, we mean that there are no “variables” per se — the value doesn’t change once assigned. Formally speaking, it’s more correct to call these “bindings”, as in, we “bind” a computation (or its result) to a name — which we can then reuse in place of said computation.
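To make the data-structure half concrete, here’s a minimal Rust sketch of “changing” a list by producing a modified copy (a plain Vec, for simplicity; a proper persistent data structure would share most of its memory instead of copying):

fn set_elem(xs: &[i32], idx: usize, value: i32) -> Vec<i32> {
    // Build a new vector with one element replaced;
    // the original slice is left untouched.
    xs.iter()
        .enumerate()
        .map(|(i, &x)| if i == idx { value } else { x })
        .collect()
}

fn main() {
    let original = vec![1, 2, 3];
    let updated = set_elem(&original, 1, 42);
    println!("{:?} {:?}", original, updated); // [1, 2, 3] [1, 42, 3]
}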

However, it’s all a bit fuzzy: while yes, immutability promotes good functional style, it’s neither strictly required nor necessarily desirable for a pragmatic language. Case in point, in Haskell, values are considered immutable, but if you need mutability, you can have it via STRef, IORef, MVar, etc. – although those only work in specific contexts (specific monads, to be precise).

The point is, if something is, after all, mutable, the effects shouldn’t be externally observable beyond the end result of a computation, so as not to “infect” the rest of the code. Which neatly brings us to…

Functional Purity

Functional purity refers to the property of functions, which, to be pure, must not have side effects. That is, functions are functions in the strict mathematical sense: they map inputs to outputs, but nothing else — they don’t mutate memory, they don’t perform I/O, etc.

This also happens to imply immutability, at least to a degree, because externally visible mutations are, indeed, a side effect. However, complete strict purity is entirely impractical. After all, I/O is a side effect, so a completely side-effect-free programming language wouldn’t be able to do anything interesting — like print the result to the screen.

Referential Transparency

Referential transparency is more of a consequence of purity and immutability. In short, it means that any name can be replaced by either its definition or whatever it ultimately evaluates to, without changing the observable behavior of the program (in practice, usually modulo performance). This improves code readability a fair bit: one doesn’t need to have complete knowledge of all the code paths to understand what a given name represents. There is a bit of a caveat though: name shadowing (i.e. defining a locally scoped name that “shadows” the same name from an outer scope) complicates things a bit: readability may suffer a lot if name shadowing is abused.
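A tiny Rust sketch of the idea (illustrative names only): as long as a binding is pure, it can be swapped with its definition freely; the moment mutation enters, the equivalence breaks.

fn area(w: u32, h: u32) -> u32 {
    w * h // pure: same inputs always give the same output, no side effects
}

fn main() {
    let a = area(3, 4);
    // Referential transparency: using `a` is interchangeable with
    // writing out `area(3, 4)`; the observable behavior is identical.
    println!("{}", a + a);
    println!("{}", area(3, 4) + area(3, 4));

    // With mutation, the equivalence breaks down.
    let mut counter = 0;
    let mut bump = || { counter += 1; counter };
    let first = bump();
    let second = bump();
    // `bump()` is not interchangeable with its result: each call
    // observably changes `counter`, so `first` and `second` differ.
    println!("{} {}", first, second); // 1 2
}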

This is not a “language feature” per se, as much as a direct consequence of using functional style. Still, a language that promotes referential transparency is closer to the Platonic ideal of a functional language than one that doesn’t.

Higher-Order Functions

Higher-order functions are functions that accept other functions as arguments. So your map, fold, etc.
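For instance, in Rust (just an illustration):

fn apply_twice<F: Fn(i32) -> i32>(f: F, x: i32) -> i32 {
    // A higher-order function: `f` is itself a function.
    f(f(x))
}

fn main() {
    println!("{}", apply_twice(|x| x + 3, 10)); // 16

    // The standard library is full of HOFs: map, filter, fold, ...
    let sum_of_squares: i32 = (1..=4).map(|x| x * x).fold(0, |acc, x| acc + x);
    println!("{}", sum_of_squares); // 30
}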

This is a natural consequence of first-class functions. However, what is usually implied by this is that the language has to support at least rank-1 parametric polymorphism (or be dynamically typed/untyped), and that using HOFs should be reasonably ergonomic without sacrificing type safety. Which brings us to…

Parametric polymorphism

Parametric polymorphism is slightly complicated to explain in one sentence. We explored a more detailed description when discussing typed lambda calculus but the long and short of it is this: parametric polymorphism allows us to use so-called type variables in the type signature. At the use site, type variables are instantiated to concrete types. The implementation must be the same regardless of the specific instantiation. This last part makes it distinct from ad-hoc polymorphism, a.k.a. function overloading, which has unique implementations for each specific instantiation, and thus isn’t universal, i.e. doesn’t allow calling the overloaded function with any type, only the types the function has overloads for.

In many programming languages, parametric polymorphism is better known as generics, and in C++ as templates. There are some subtle differences, but either can encode at least rank-1 parametric polymorphism.
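A minimal rank-1 example, in Rust syntax (illustrative only):

// Rank-1 parametric polymorphism: `T` is chosen at the call site,
// and the implementation is identical for every `T`.
fn first<T>(pair: (T, T)) -> T {
    pair.0
}

fn main() {
    let a = first((1, 2));         // T = i32
    let b = first(("foo", "bar")); // T = &str
    println!("{} {}", a, b);
}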

Parametric polymorphism is a feature of a type system, and as such only applies to strictly statically typed languages. If a language is strictly statically typed, however, parametric polymorphism is absolutely necessary for HOFs to be useful, so it’s quite a hard prerequisite.

Unconstrained parametric polymorphism also happens to be quite useful for formally proving things about the code — or indeed using the language as a proof assistant. Every parametrically polymorphic type corresponds to a theorem, and every term of such a type to a proof (see Philip Wadler’s “Theorems for Free” paper for the more practical examples, or the Curry-Howard isomorphism in general).

On ranks

Ranks of parametric polymorphism refer to what can and can’t be polymorphic, exactly. Rank-1 allows top-level functions to be polymorphic, rank-2 also allows immediate function arguments to be polymorphic, and rank-N allows arbitrary polymorphic nesting. Functional languages like Haskell make heavy use of rank-2 polymorphism for various abstraction techniques, lenses and ST being the two most common examples. But to get a vague sense of it, here’s a fictional rank-2 Rust function that applies the same unary operation to both f32 and i32:

fn apply_to_both(f: impl for<T: Num> Fn(T) -> T, a: f32, b: i32) -> (f32, i32) {
    (f(a), f(b))
}

This doesn’t actually work, not even with the nightly non_lifetime_binders feature, but it hopefully gets the idea across. Note also this is actually constrained polymorphism, due to T: Num — I was struggling to come up with a short and simple, yet somewhat practical, unconstrained example.
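For what it’s worth, Rust does support this kind of quantification today, but only over lifetimes, via higher-ranked trait bounds. Here’s a lifetime-only analogue of the function above that does compile (a sketch, with made-up names):

// "Rank-2", but only over lifetimes: `f` must work for every lifetime 'a,
// not just one picked in advance by the caller.
fn apply_to_both<F>(f: F, a: &str, b: &str) -> (usize, usize)
where
    F: for<'a> Fn(&'a str) -> usize,
{
    (f(a), f(b))
}

fn main() {
    let lengths = apply_to_both(|s: &str| s.len(), "hello", "hi");
    println!("{:?}", lengths); // (5, 2)
}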


Algebraic datatypes

Algebraic datatypes usually refer to the language support for product types (i.e. tuples and records), which most languages have, and sum types (a.k.a. data-carrying enums, a.k.a. discriminated unions), which are somewhat rare. They’re called algebraic because they correspond to structures in abstract algebra, but that’s mostly irrelevant to the discussion at hand — “a rose by any other name” and all that.
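In Rust terms (the types here are invented purely for illustration):

// A product type: a Point has an x *and* a y.
struct Point { x: f64, y: f64 }

// A sum type: a Shape is a Circle *or* a Rect, each variant
// carrying its own data.
enum Shape {
    Circle { center: Point, radius: f64 },
    Rect { corner: Point, width: f64, height: f64 },
}

fn main() {
    // Constructing a value picks exactly one variant of the sum.
    let _shape = Shape::Circle { center: Point { x: 0.0, y: 0.0 }, radius: 1.0 };
}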

ADTs are indeed a prominent feature of languages like Haskell and ML. However, they ultimately have little bearing on whether a language can be considered functional or not – it’s easy enough to imagine an imperative language with ADTs or a functional one without. Nice to have, though.

ADTs usually come in a package with pattern matching.

Pattern matching

Pattern matching is also called “destructuring assignment” in some contexts — usually in languages that don’t have proper sum types, and thus pattern matching essentially boils down to syntactic sugar for extracting fields out of records.

There’s also a kind of pattern matching in OOP languages like Java and C# nowadays. However, there it’s closely intertwined with classes and inheritance, rather than ADTs, so similar, but different, and confusingly called the same.

Usually, in its basic form, pattern matching (at least the one we’re interested in here) comes as a case, match, &c, expression, which allows matching on specific variants of a sum type.

It’s a bit of a necessity with ADTs, since it serves as the eliminator for sum types (i.e., how else would one get the value out of, say, an Option?).
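A small Rust illustration:

fn describe(opt: Option<i32>) -> String {
    // `match` eliminates the sum type: every variant must be handled,
    // and the payload is bound by the pattern.
    match opt {
        Some(n) if n > 0 => format!("positive: {}", n),
        Some(n) => format!("non-positive: {}", n),
        None => String::from("nothing"),
    }
}

fn main() {
    println!("{}", describe(Some(3)));
    println!("{}", describe(None));
}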

Recursion

As a reminder, recursion refers to the ability of a function to call itself (directly or indirectly).

It is more or less mandatory, as functional languages generally don’t have imperative looping constructs like for or while. This directly follows from immutability and purity: indeed, updating a loop counter is a side effect. The implication is that the only way to achieve looping is recursion.

This has an important consequence, namely, a functional language has to support tail call elimination and/or guarded recursion, so as not to blow out the stack in long loops.
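For reference, here’s what a tail-recursive “loop” looks like, written in Rust for consistency with the rest of the article (a sketch; as discussed later, Rust itself makes no guarantee about eliminating the call):

// A tail-recursive loop: the recursive call is the very last thing the
// function does, so a compiler *could* reuse the current stack frame.
fn sum_to(n: u64, acc: u64) -> u64 {
    if n == 0 {
        acc
    } else {
        sum_to(n - 1, acc + n)
    }
}

fn main() {
    println!("{}", sum_to(10, 0)); // 55
}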

Expression-oriented syntax

Basically, everything is either a declaration or an expression; there are no “statements” in the traditional imperative sense per se.

This is mostly a feature of using lambda calculus as a guiding principle, if not outright as a formal foundation of the language. Again, more of a consequence than a prerequisite really, but it has some useful consequences, like the lack of the need for the dreaded ternary operator.

What about Rust?

Circling back to Rust, it does not satisfy the first, overly strict, definition. For one, there’s no formal semantics definition for Rust to begin with, despite the efforts of the RustBelt project. For two, even if there were, it wouldn’t be based on lambda calculus in any substantial manner, as Rust semantics allow for flagrant mutability.

Let’s consider the other criteria.

  • Parametric polymorphism. Yes, Rust has parametric polymorphism via generics, albeit essentially rank-1 (rank-2-style quantification is available via HRTBs, but only over lifetimes). It also supports constrained parametric polymorphism, a feature it borrowed from Haskell. That alone doesn’t make Rust functional, but it’s a start.

  • Algebraic datatypes. Yes, Rust has algebraic datatypes and pattern matching. This alone, again, doesn’t make Rust functional, but it’s a feature historically associated with FP languages.

  • Pattern matching. Yes, Rust has pattern matching, and it’s the kind of pattern matching one would find in functional languages like Haskell or ML. This still doesn’t make Rust functional.

  • Expression-oriented syntax. This is arguably one feature that might be largely responsible for the whole claim that Rust is a functional language. Yes, Rust does have this, in contrast to more C-like languages. There is, however, an important distinction to make. In functional languages, expressions are, generally, pure, i.e., side-effect-free. This is very much not the case in Rust. Indeed, there are “expressions” in Rust that have type (), which could more properly be understood as statements. Case in point, assignments: a = b is an “expression” whose value is (). Likewise, a let binding in Rust is a statement, and a block ending in one evaluates to () – none of which would make much sense in a typical FP language, where let … in … is itself a value-producing expression. (There’s a small sketch of this below.)

    As far as I can tell, the idea of returning () for what can be effectively understood as “statements” was inspired by Haskell’s monadic actions that cause a side effect but do not return a meaningful result. Haskell uses m () (a unit type wrapped in a monad) to signify such cases. But in Haskell, such actions only make sense in monadic contexts that emulate imperative machinery to some extent (most commonly, State and IO). By the same logic, this only makes sense in Rust because, at the end of the day, it’s imperative.
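    A small, admittedly contrived, sketch of such ()-typed “expressions”:

    fn main() {
        let mut x = 0;
        // `x = 5` is an expression, but its value is the unit type ().
        let assignment_result: () = { x = 5 };
        // A block ending in a statement evaluates to () as well.
        let block_result: () = { x += 1; };
        // Neither "expression" carries a value; both exist purely for
        // their side effect on `x`.
        println!("{} {:?} {:?}", x, assignment_result, block_result);
    }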

So, these were some features characteristic of FP languages that Rust shares. Don’t get me wrong, these are all great features, and the way they’re used in Rust makes a lot of sense. However, they alone are not enough to make Rust a functional language. Here is where things rapidly go downhill for the claim that Rust is functional.

  • Immutability. Rust famously uses the “immutable by default” approach. However, in functional languages, it’s more than just “by default”, it’s the way it’s supposed to be most of the time, and if you really need mutability, you have to work for it a bit. With Rust, this is very much not the case: in practical terms, if you want to write something useful in Rust, you’ll be peppering mut all over the place. This is, again, perfectly fine, but that’s very much not the functional style.

  • Purity. The long and short of it, Rust doesn’t care about functional purity. One can write pure functions, of course. But, again, there are too many cases where one must use a function that changes some externally observable state, which is, by definition, a side effect. One could, if they really wanted to, use immutable data structures to approximate this to a point. But then again, even cloning an Arc already has observable side effects, so it’s a losing battle.

  • Referential transparency. Fundamentally impossible in Rust due to mutability and especially interior mutability. Furthermore, a common Rust style encourages name shadowing, which, while it doesn’t outright preclude referential transparency, largely negates its code readability benefits.

  • Recursion. Rust, naturally, has recursion. However, it doesn’t have guaranteed tail call elimination. Yes, it was a bit of a surprise for me when I learned it. LLVM will apply tail call optimization whenever it feels like it, but one can’t rely on it, and Rust makes no guarantees whether it will be applied or not in any given case. The direct consequence is that one should prefer loops unless an algorithm is fundamentally recursive. This, again, is perfectly fine by itself, but it’s not functional style, not by a long shot.

  • First-class functions. First-class functions, or rather, function references, are a thing in Rust. But that’s also true in C and C++, so that’s not a surprise. Closures, however, are not necessarily as “first-class” as one might hope. Functional languages generally don’t differentiate between types of a function and a closure, making this point moot. Rust, however, has to differentiate between the two, and so it does. But that introduces a lot of complications into actually using closures as “first-class” values, and there’s also a fair number of limitations that come with the fact that one can’t name the type of a closure and has to define the closure type via trait bounds instead. In effect, closures in Rust are roughly about as “first-class” as in C++14, and less “first-class” than in JavaScript.

  • Higher-Order Functions. This follows from closures not being as “first-class” as one would hope. The moment one tries to write a HOF accepting (or returning) non-trivial closures, things get complicated in a hurry; the sketch below illustrates the kind of ceremony involved. Additionally, while HOFs are arguably a necessary feature to consider a language functional, they’re not sufficient: after all, you can get HOFs in C++, with similar caveats as in Rust, and nobody seems to argue that C++ is a functional language.
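    To sketch what “complicated in a hurry” can look like (all names here are invented for the example): every closure has its own unnameable type, so a HOF that wants to accept, return, or store closures has to pick between generics, impl Trait, and boxed trait objects.

    // Accepting a closure: its type can only be referred to through a
    // generic parameter (or `impl Fn...`), never named directly.
    fn apply<F: Fn(i32) -> i32>(f: F, x: i32) -> i32 {
        f(x)
    }

    // Returning a closure that captures its environment: the concrete type
    // is unnameable, so we either use `impl Fn` (one fixed closure type)...
    fn adder(n: i32) -> impl Fn(i32) -> i32 {
        move |x| x + n
    }

    // ...or box a trait object, e.g. when different branches produce
    // different closures.
    fn pick(double: bool) -> Box<dyn Fn(i32) -> i32> {
        if double {
            Box::new(|x: i32| x * 2)
        } else {
            Box::new(|x: i32| x + 1)
        }
    }

    fn main() {
        println!("{}", apply(adder(10), 5)); // 15
        println!("{}", pick(true)(5));       // 10
    }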

So, here’s what we have at the end of the day. Rust borrows a few concepts from functional languages, putting its own twist on some of them, but most of these are either not strictly inherent to functional languages (ADTs, pattern matching) or emerge as a consequence of a language being functional (expression-oriented syntax). As for the cornerstone features of functional programming, they either don’t enter the picture at all (functional purity, tail call elimination, referential transparency) or end up comparatively limited (first-class functions, higher-order functions).

Hence, it’s hard to call Rust a functional language, whichever definition you might choose to use.

So is Rust bad then?

Does this all make Rust dys-functional (pardon the pun)? No, not at all! While Rust might not satisfy a definition of a functional language, it’s still a great language! This article primarily aims to dispel some terminological and categorical confusion I’ve observed, not besmirch Rust.

The point I want to convey, then, is that the natural point of comparison for Rust isn’t functional languages like Haskell, ML &c, but rather systems programming languages like C and C++.

Fundamentally, Rust is closer to C++ than to, say, ML. However, it should be noted that Rust borrows some proven solutions, typically found in FP languages, to address some specific pain points found in C and C++. Let’s look at some examples:

  • Constrained parametric polymorphism, also known as Wadler-Blott type classes, called “traits” in Rust, addresses the specific C++ (and OOP in general) pain point of “inheritance is hard, actually”. Well, to be precise, Rust doesn’t do OOP at all. But to solve the pragmatic issue of expressing ad-hoc polymorphism, Rust opts to borrow Wadler-Blott type classes from Haskell, which are, themselves, as Wadler put it, “[a way] to make ad-hoc polymorphism less ad hoc”.

  • ADTs and pattern matching essentially address the pain point of exception handling. Rust’s solution here is, basically, don’t do exceptions. C’s alternative to exceptions is equally cumbersome, however. As is Go’s use of, essentially, product types (that is, pairs) to return a nullable error along with the result. Rust reaches for the same pragmatic solution that’s been used in FP languages since the 1970s, namely sum types.

  • Expression-oriented syntax, as far as I can tell, doesn’t address any specific pain point. However, as someone with plenty of experience in both C++ and Haskell, I can make an educated guess as to how it came to be. One thing I find very much missing from C++ adjacent languages is a nicer way to do conditional initialization. Yes, C++ has the ternary operator, but when you have something beyond the simple “if a then b else c” type thing, it quickly becomes a nigh-unreadable bug-prone mess that one would rather avoid. Hence, you commonly end up with something like this instead:

    int x;
    if (foo) {
        x = 1;
    } else if (bar) {
        x = 2;
    } else {
        // compute baz and quux
        if (baz && quux) {
            x = 3;
        } else if (!baz && !quux) {
            x = 4;
        }
    }
    // do stuff with x
    

    This is okay(-ish) in terms of readability, but did you notice a bug? Indeed, things are likely to break horribly if foo = bar = false, and baz != quux, because x ends up uninitialized. This is a very simple and straightforward example, but it’s a class of bugs that I’ve seen quite a few times in C and C++ code bases, and which are fundamentally impossible in Haskell, not least because there are no “if statements” in Haskell, only “if expressions”.

    I suspect that Rust’s “everything is an expression” was motivated by a similar thought process. In addition, an if expression is not always the best fit, especially for complex conditions, and making match (and loop!) into expressions as well makes the language simultaneously more uniform, more predictable, and more expressive.
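    For contrast, here’s roughly the same logic in Rust (a sketch, with the conditions hardcoded to the combination that breaks the C++ version): since if is an expression, the compiler insists that every path produce a value, so this particular class of bug simply can’t be written.

    fn main() {
        let (foo, bar, baz, quux) = (false, false, true, false);
        let x = if foo {
            1
        } else if bar {
            2
        } else if baz && quux {
            3
        } else if !baz && !quux {
            4
        } else {
            // The compiler forces us to handle this case; in the C++ version
            // it was silently missing and `x` stayed uninitialized.
            0
        };
        println!("{}", x); // do stuff with x
    }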

Conclusion

While Rust can’t be in good faith called a functional language, it’s one of the nicest systems languages I’ve had the pleasure of dealing with, and I have enough data points to compare against.

Rust strategically borrows some of the concepts from functional programming to address a lot of pain points common to other systems languages, notably C++ and C. First and foremost, Rust is a pragmatic solution to the problem of C++'s complexity (and bug-prone-ness), and the aim, as far as I know, was never to make a fully-fledged functional systems programming language, but rather just make a better systems programming language.

Arguably, Rust did achieve its goal of being “a better systems programming language” to a very reasonable extent (compiler soundness bugs notwithstanding). Calling it a functional language, when it’s not, at the end of the day, diverts attention from its real value.

Rust is a great systems programming language, mostly adhering to the procedural paradigm. It does borrow a few features common to functional languages, but those features are not what makes a language functional — they’re just good pragmatic solutions to common programming language problems.
