From Whitespace to Idris: Interview With Edwin Brady

Article by Jonn Mostovoy
October 29th, 2021
24 min read

As part of our video interview series, we interviewed Edwin Brady, the creator of Idris, a dependently-typed programming language.

In the interview, we discussed two of the programming languages Edwin has helped create: Whitespace and Idris. Edwin also shared some tips and tricks about language creation and talked about future plans for Idris.

If you want to watch the full interview, you can do that on our YouTube channel.


In this post, we have collected some of the highlights of the interview, edited for clarity.

Interview with Edwin Brady

You’re perhaps best known for your work on Idris. Can you please give us a brief elevator pitch for the language: who it’s for and why it’s relevant these days?

Yeah, okay. Let’s be clear that it’s really a research project. It’s not something that I’d expect you to take and use in production, so I’ll answer with that in mind.

Actually, the vision is to bring state-of-the-art software verification techniques into mainstream software development. Obviously, I’m not going to claim we’re there yet, but the decisions we make, the design choices we make, are all about getting to a point where you’ve actually got no excuse not to prove the properties of your program that you care about, because it’s easier than not doing so. So that’s the goal. It’s all based on what I now like to call type-driven development, by analogy with test-driven development. The idea is that using the type system, you can be as precise as you want about how your program is going to behave, and then arrive at a working program by a conversation with the machine. It’s always good to have a short sound bite, isn’t it, so I say: programming is a conversation. That’s the four-word sound bite.
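For a flavor of what “being precise in the type” can mean, here is the classic vector-append example in Idris; the type alone guarantees that the output length is the sum of the input lengths:

```idris
import Data.Vect

-- The type records the behaviour: appending an n-element vector to an
-- m-element vector must produce an (n + m)-element vector, and the
-- type checker verifies every clause against that promise.
append : Vect n a -> Vect m a -> Vect (n + m) a
append []        ys = ys
append (x :: xs) ys = x :: append xs ys
```

Starting from a type like this and filling in the definition interactively, hole by hole, is the “conversation with the machine” that type-driven development refers to.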

Yeah, that’s really cool, and it can also propagate to other practitioners – take, for example, the work by Kostis and his team from Uppsala on success typing for Erlang.

Sure, yes. Actually, maybe that’s an important thing to say early on: when designing a new language and putting it out there, you have to decide what it means to be successful. And success for an academic would not be having everybody use the language – in some sense, that would be a nightmare, because you’ve got to support the thing, and a couple of colleagues and I were not going to be able to support it on our own. Success, on the other hand, is getting the ideas out there and used by other people.

So, for example, Haskell picking up full dependent types, or something like Rust – I occasionally get tagged in issues on Rust RFCs about how to bring in more dependent types. That, to me, is success: people knowing the ideas and wanting to take them into more mainstream systems. I wouldn’t say that success typing has necessarily been influenced by dependent types, but it’s certainly been influenced by the fact that a lot of people have thought that type checking is a good idea.

[…]

It’s very easy to think of tools like Dialyzer, or type checkers in general, as being there to tell you off when you’ve got it wrong, and that’s absolutely not the way we should look at programming languages. Programming languages are there as a tool to help us interface with the machine – in fact, you know, in some sense it’s a human-computer interaction problem, and it’s probably better if we thought of it that way a bit more.

But yeah, it’s still all too easy, when you get an error message, to think of it as a teacher telling you off: “see me after class”. And I guess with success typing, as the name implies, it’s not so much about being told off; it’s about getting closer to something more precise. I think that’s what our tools should be doing – helping you. In type-driven development, we put the type first, but that doesn’t necessarily mean the type is the final type. As we write the program, we learn more about the problem, we refine the type a bit more, and we get a bit of feedback from the type checker; maybe we decide that the type needs to be more precise, maybe we decide it needs to be less precise. But all of that needs to be done with tool support. It’s not that we should just write the whole program and then feed it to the oracle.


Programming languages are there as a tool to help us interface with the machine – in fact, you know, in some sense it’s a human-computer interaction problem, and it’s probably better if we thought of it that way a bit more.


I find the story of how you started developing compilers very interesting. Let’s try to trace your footsteps along the path of compiler development, perhaps going all the way back to your early years at university.

It probably goes back further still, because (I’m sure this will be a shock to everybody) I have always been a massive geek, you know. I’ve been into computers basically since I was very young. My parents bought a BBC Micro when I was about five years old, so I don’t actually remember not knocking about with computers. Early on, I was learning BBC BASIC and interacting with the machine, and always thinking: “Oh, I wonder if there are other ways of interacting with the computer.” I’ve always had this fascination with other possible programming languages.

So eventually, I went to university to do computer science, which, in hindsight, was the only thing I was ever going to do at university. When it came to picking my honors project, there was one whose headline was “a compiler to teach compiling”, and the idea was to write a small language but have visualization tools for things like the parse tree, lexical analysis, and the runtime system, so that you could trace how the source language ended up as the target language. I loved doing that, I had enormous fun doing it, so that was probably the first compiler I wrote. It wasn’t a tremendously useful language (I think the most complicated type it had was strings), but it was how I learned about parsers, runtimes, and compilers.

So what happened next? I got a proper job for a bit after that, working on a natural language processing system at a spin-off company from Durham University, and even there I was working on domain-specific languages. I was working on an inference system – there were facts in a knowledge database and questions being asked, they were being encoded, and we were inferring answers to the questions. In the end, it was a programming language problem. On the face of it, it didn’t look like one, but it was, because it was “make the runtime fast”, “have a parser that allows domain experts to state facts they can add to the database”.

And after a couple of years of that, I decided it would be fun to go back to university and learn a bit about type theory. That’s where I learned about Epigram – that was James McKinna and Conor McBride. James McKinna was my supervisor, and I did quite a bit of work with Conor McBride. They were designing a new language, Epigram, and my thinking then was: “You know, this is all very cool, what we can express here is very cool, but it would be nice if we could run these programs. How do we make this useful to a developer?” And so my Ph.D. ended up being about how you compile dependent types, and it turned out there were some problems with just erasing types. Erasing types on its own doesn’t really work, because you end up with a lot of stuff at run time that’s not actually a type but is still compile-time-only information – so how do we erase that? Everything I’ve done in all of my language implementation work has really been starting from that point of “these ideas are very cool; how do we run them, how do we eventually get them to end users”, where the end user is a mainstream software developer or a practitioner.

And somewhere along the way, you did some work on esoteric languages.

I wouldn’t go as far as calling that work – I mean, that literally was one evening after the pub. A couple of us just went out for a beer on a Saturday night (because, as I might have mentioned, I’ve always been a massive geek), and we were talking about programming languages and thinking: “Well, you don’t really need many characters to encode a programming language. And it’s not just that you don’t need many characters to encode it on disk or in memory – you can actually give some semantics to those characters. Let’s see what happens if we stick with just the whitespace characters.” So I went home, and because I’d been doing quite a bit of work at that point on runtime systems, I knew how to make a stack machine, for example – I had one that I’d been working on as the target of Epigram, so I knew how they worked. And Haskell is a great language for making prototypes of this sort of thing very quickly, without having to worry about memory management, for example, and it just kind of worked. We noticed that it was March the 30th, so we thought (this was before everyone got bored of April Fools’ jokes, or maybe it wasn’t): “Okay, let’s just post this on the internet on April the 1st and see what happens.” And that is the thing that I’m sure has had the most impact of anything I’ve ever done, and probably ever will, and I’m okay with that.

It’s just a thing that, if you happen to be working in programming languages and you think “hey, wouldn’t this be funny” – if you have that kind of sense of humor – is a quick thing to bash out, and so we did.

[…]

Every so often it comes up out of the blue. So one day – and maybe the person who did this is watching – in about 2008, I got a printout at work of, I don’t know, just a load of blank sheets of paper, like scrap paper, and there was a letter attached saying: “There’s a problem with my Whitespace code, I can’t figure it out. I don’t want to email it to you because I don’t want it to be intercepted by anyone. I wonder if you could help.” My plan had always been to highlight a bit of it, post it back, and say: “Here’s your problem.” But I lost it, and I was never able to send it back. So if it was you who sent me your code, I’m sorry I wasn’t able to help you. I thought that was really funny, and I’m very sad that I don’t know who it was or where the code went, because – well, I guess if I ever find it, I can post it back and say: “Sorry it took me 15 years to debug your code, it was just a bit tricky to type it all in.”

Students around the world are told: read the Dragon Book, and then you’ll be able to write compilers. I would really like you to talk a little bit about the difference between the Dragon Book and other materials, and where you should go after reading the Dragon Book.

These days, I wouldn’t necessarily recommend starting with the Dragon Book. I haven’t read the latest edition, so I don’t want to say things that are unfair. There was a lot in the edition I was reading (the second edition) when I was doing my compiler-to-teach-compiling project, and I learned a lot of good stuff from it, but it was based on how people wrote parsers and lexers at the time. These days, you certainly need to know how to write a parser, but the way you write a parser these days is probably with a parser combinator library. You kind of need to know how to refactor your grammar in such a way that it’s not hopelessly inefficient, but you don’t really need to know all of the details that the Dragon Book goes into. Similarly, you don’t really need to know a huge amount about code generation unless that happens to be the thing you’re interested in. I’ve never really written a code generator, for example – not in the sense of spitting out assembly code; I’ve written code generators in the sense of writing out C or Scheme, or another target language.
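As a rough illustration of the parser-combinator style – a hand-rolled sketch in Idris, not any particular library’s API – the idea is that small parsers compose into bigger ones with ordinary functions:

```idris
-- A parser consumes characters and either fails or returns a result
-- together with the remaining input.
Parser : Type -> Type
Parser a = List Char -> Maybe (a, List Char)

-- Match a single expected character.
char : Char -> Parser Char
char c (x :: xs) = if x == c then Just (c, xs) else Nothing
char _ []        = Nothing

-- Sequence two parsers, feeding the first result into the second parser.
andThen : Parser a -> (a -> Parser b) -> Parser b
andThen p f s = case p s of
                     Nothing      => Nothing
                     Just (x, s') => f x s'

-- Try one parser; if it fails, try the alternative on the same input.
orElse : Parser a -> Parser a -> Parser a
orElse p q s = case p s of
                    Nothing => q s
                    r       => r
```

Refactoring a grammar for this style mostly means eliminating left recursion and factoring common prefixes, rather than the full LR table construction the Dragon Book describes.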

So, definitely know the basics, definitely have something like the Dragon Book to hand so that if you’re not sure how to do something, you can probably find it in there. But if you are interested in working on a new language, I think the most important thing is to know about other languages, how other people have tackled the problem of explaining things to a computer, know the trade-offs.

That might involve dabbling in functional languages and object-oriented languages, knowing a bit about linear type systems (so knowing a bit about Rust), and a bit about Prolog and Lisp – just knowing the history of where things have come from. You don’t have to write big, sophisticated programs in all of these languages, but you do need to know what features exist. Because if you do want to design a new language – which, I mean, why not, it’s fun – it’s good to have something different about it. So don’t just come up with a new syntax for an existing language because you think a different syntax would be nice. Try to take inspiration from lots of different languages that you’ve tried out, and see if you can put them together in a new and exciting way. That’s kind of what programming language research is about, to some extent – putting old features together in new ways and seeing how well they work.

So, I would say, don’t get too hung up on understanding everything in the Dragon Book; have it at hand so you know how other people have solved long-standing problems, but it’s better to know how other languages work. Also, I think it’s well worth knowing a little bit of type theory – just the basics of lambda calculus and type theory – again because it’s good to know how people have solved problems in the past. It’s a lot easier to learn type theory than it was 10 or 20 years ago: there are more things you can read and, of course, there are languages like Idris and Agda that you can use to learn those concepts. So yeah, learn a bit of theory, make sure you know a few languages, and look things up in the Dragon Book where you need to.


That’s kind of what programming language research is about, to some extent – putting old features together in new ways and seeing how well they work.


When I have to explain to someone what Idris is, I kind of wave my hands and say: “Well, you know Haskell, you know Agda and Coq – it’s kind of somewhere in between.” If you had to compare Idris to practical typed languages like Haskell and to proof assistants, where would you put it?

Firstly, I love that we’re in a world where you can say “practical typed languages like Haskell” – it’s fantastic that we’ve got to that point. So, thinking back to the beginning of working on Idris, there’s the question of whether I would still start a project like Idris now that Haskell exists, and I think the answer is probably yes, because Haskell has got to this point where you add more and more language extensions.

So, you’ve got Haskell 98 that essentially nobody uses (that’s not completely true – there are people who stick to Haskell 98, but most people will stick to a small, or maybe a large, collection of favored extensions). But the point is there are extensions: you never quite know which Haskell you’re at, you never quite know what’s coming next. There’s all this wonderful new power coming into Haskell, and you see people do fantastic things with it, but at some point I would want to consolidate all of that.

How do you take all of these wonderful features and make them simple? I don’t like to use the word “simple”, but I’m going to use it here in the sense of “small” – make the language as small and orthogonal as possible. So – partly by accident, but partly because all of the things that you might want to express using hundreds of Haskell extensions can instead be done by writing a type-level program in just the same language as your ordinary term-level language – I’m going to assert that Idris is significantly simpler than Haskell. And I don’t even say “type-level programs”, I just say “programs”, because they are the same thing, aren’t they?
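A minimal example of that point: in Idris, a plain function can compute a type, and that function is written in exactly the same language as every other function.

```idris
-- An ordinary function that happens to return a Type.
StringOrNat : Bool -> Type
StringOrNat True  = String
StringOrNat False = Nat

-- The result type of `describe` depends on the value of its argument;
-- no separate type-level language or extension is involved.
describe : (b : Bool) -> StringOrNat b
describe True  = "a string"
describe False = 42
```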

So yeah, if one were to start again on a language like Haskell, knowing what we know now, I would hope it would be something like Idris, but lazy. Maybe that’s where we’d want Haskell to be. More minor differences are things like prioritizing interactive editing. Haskell’s picked up a bit of that lately with interactive tooling, and particularly with typed holes, but one of the things that makes type systems hard – especially expressive type systems like Haskell’s, like full dependent types – is that getting the whole program right is challenging, and you get error messages that essentially say: “Your program doesn’t make sense, but it’s up to you to figure out why, because I can’t give you the full details.” Interactive editing is essential once you have more sophisticated programs to work with and to work on, and really thinking about not only what complete programs look like but also what partially written programs look like is crucial. That’s another thing that I think Idris takes a lot more seriously than GHC Haskell does, though GHC Haskell and various tooling are taking it more seriously now. So that’s how I compare it with Haskell.

On the other side, comparing with Agda and Coq – again, thinking back to when I was starting to work on Idris, Agda was around, but Agda 2 was in its infancy. There had been an earlier version of Agda [inaudible], so I had all the components, and I thought I might as well not be tied to other people’s design choices – I might as well have a go at my own. At that point, I was coming at it more from the direction of what kind of programs we will run, what kind of programs mainstream software developers will want to run. So I kind of started from the back end and worked towards the front end.

And that’s what I’d done in my Ph.D. work anyway – things like type erasure, so that you could compile dependently typed programs at least moderately efficiently. Agda, I think, was starting more from the other direction: “What kind of things can we express in the type system, what kind of interesting things can we prove?” I think we’ve got to a point now where we’ve kind of met in the middle. It’s great that both Agda and Idris exist, because every time one gets a feature, the other one has to catch up. That sort of competition, I think, is really healthy.

You know, I go a lot to Agda meetings, they come along to Idris meetings, and we each learn what the other one’s done. At the last Agda meeting, for example, I shared some tricks for how I’ve made Idris 2 a lot faster than Idris 1, because I want Agda to be as fast as Idris 2 so that I have to rush ahead again. But I think the contrast is really about where the initial focus of the language was. Agda has always been more about proving interesting things, and Idris has always been more about how we can make it easier for mainstream software development to prove more things. If I can explain to another programmer why my program works, I should also be able to explain it to the compiler, so the compiler can keep me right on it – and that’s kind of what we aim to do.

The contrast with Coq is, I think, quite a bit bigger. Coq really is primarily a theorem prover. You can extract programs from it, and people do that in a very significant way, but I don’t really think of it as a dependently typed programming language. It is based on dependent type theory underneath – that’s the way you do your proofs – but what people will typically do is write down a specification and then derive a working program from that specification, rather than add some indices to types and then write programs over those more expressive types. It encourages a very different approach to writing programs than Idris and Agda do.

All of these systems sit at different points in the design space. Haskell is sitting at a point of “we’ve had pure functional programming for a long time, let’s make the type system a bit better.” Idris is maybe what we would do if we were to start again: “we like the Haskell syntax, but we want to consolidate all of the fancy type system features.” Agda is similar but with a focus on proving. And Coq is really – I don’t want to say just a theorem prover, because it is a lot more than that, people write very big, sophisticated software packages in it – but it definitely has more of a focus on theorem proving.

Maybe there’s a spectrum – Haskell, Idris, Agda, Coq – of where you are on the programming side and where you are on the proving side. That’s oversimplifying, but I guess what is the YouTube stream for, if not oversimplifying?


Interactive editing is essential once you have more sophisticated programs to work with and to work on, and really thinking about not only what complete programs look like but also what partially written programs look like is crucial.


In your elevator pitch, you mentioned that Idris is a research language, but when we talk about research languages, we really talk about two things at the same time: the language itself being the object of research, and the language being a platform for research. Would it be fair to say that Idris is both at the same time – that Idris itself is the object of research, but it also enables PLT research or dependent types research?

We certainly do both. Part of the reason for Idris existing in the first place was that I just wanted a dependently typed language that I could write programs in. And then I thought: “Okay, once that’s finished, I’ll just move on and write the programs.” But of course it’s not going to work out like that, is it? Because you suddenly think of other things you might want your language to do.

So there are definitely those two sides of “how do you make the language better” and “what kind of interesting things can you do in the language”, and I think we do both. At the minute, I suppose the focus would be on metaprogramming: how do we make the language nicely extensible, how do we add the reflection features that make theorem proving easier? Because, you know, like it or not – it’s not really a theorem prover primarily, but if you add sophisticated types, you’re probably going to have proof obligations arise naturally that you just need to deal with. How do we make that easier to do – that’s certainly a language design question.

Then there’s the question of dealing with protocols and states – okay, it’s a pure language, but the reality is that the outside world has things that exist in certain states, so how do we represent that sort of thing? That’s where linearity and linear types come in: how do we bring them into the language and make them usable? All these questions just arrive naturally once you start writing programs, which is fine. That’s one kind of research we do with Idris.

Actually, those two kinds of research – how you develop the language and what you can do with the language – really feed off each other, because as soon as you try to do something, you find you might need another feature, and then you have to ask: “Should I add the feature? Can I do this in the library? What’s the best way to approach this?” But yeah, I think people do both.

You have mentioned linearity. As far as I understand, this is one of the standout features of Idris 2 when compared to Idris 1.

Yeah, the linearity part is still experimental, though people are using it in practice. This is all part of quantities: Quantitative Type Theory (QTT) is the core language that Idris 2 is based on. It’s a fancy-sounding name, but all it really means is that every variable has a quantity attached to it. QTT itself is agnostic about what the quantities can be, but for the sake of simplicity, and to avoid having too much in one go, in Idris 2 they can be either zero, one, or unrestricted.

So zero means never used at run time, one means used exactly once at run time, and unrestricted is the world we used to be in, where everything has no restrictions. Linearity is interesting, and there’s all sorts of fun we can have with it, but by far the most useful is the zero quantity – because, thinking back to where I came in working with dependent types, it was all about the fact that there’s a lot of stuff in the types that’s going to leak into runtime if we’re not careful.
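A sketch of what the quantities look like in Idris 2 source: the `0` annotation marks an argument as compile-time only, so the type checker rejects any attempt to use it at run time.

```idris
import Data.Vect

-- n is erased (quantity 0): it exists only for type checking, so the
-- length has to be recomputed by walking the vector.
lengthErased : {0 n : Nat} -> Vect n a -> Nat
lengthErased []        = Z
lengthErased (_ :: xs) = S (lengthErased xs)

-- With an unrestricted implicit n, the value is available at run time
-- and can simply be returned. Writing `= n` in the erased version
-- above would be rejected by the type checker.
lengthKept : {n : Nat} -> Vect n a -> Nat
lengthKept xs = n
```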

I tried all sorts of techniques for getting rid of that. I came up with the idea of noticing which arguments could be uniquely determined by other arguments and just erasing them, and then I had a student who had a whole-program inference mechanism that could tell you what was never going to be used and erase it. But quantities make the whole thing much simpler, because the type checker will tell you whether something is needed at runtime or not. If you say something is compile time only, the type checker tells you if you violate that, and that has proved extremely useful, because if you’re not very careful, you can have things leaking into your program that you really didn’t intend. Not only does the program become slower – the complexity becomes worse. So QTT, with the zero quantity, really stops that being a problem.

And, you know, once you’ve got zero and unrestricted, you might as well see what you can do with other quantities. It’s fun to see what we can encode with linearity – it’s kind of ugly to encode anything interesting at the moment, but I guess that’s one of the language design things we’re going to be working on: how do you make that less ugly? It’s probably going to involve some work on the core type system to make it nicer to work with, but I think that’s certainly going to be a line of research over the next little while. For most users, you won’t really notice that Idris 2 is based on QTT and Idris 1 isn’t, except in places where you’d written programs with things accidentally leaking into the run time – in which case, Idris 2 will just tell you that a supposedly compile-time thing is actually needed at runtime, and you might want to think about whether that was necessary or not.

What are your plans for the future of Idris, and, aside from metaprogramming, are you working on some new and exciting research?

I’ll probably start with metaprogramming, because that is the new and exciting research. The primary goal for Idris right now (because I don’t have an awful lot of time to work on it at the moment) is performance. I think it’s absolutely crucial for a system like this to be responsive. Now, we can’t always be responsive, because the nature of a dependently typed language is that there are type-level programs, and if you write a slow type-level program, I can’t help you – I can’t magically make your program fast.

But on the assumption that your type-level program is fast enough, we need to respond as quickly as possible. So yeah, the primary focus is performance. One way I’m thinking about that at the moment is taking advantage of the fact that Idris is implemented in Scheme: we can use the Scheme evaluator to do compile-time evaluation of expressions, and in my current experience that has made the evaluator about 20 times faster – somewhere between 10 and 20, depending on what I feed it. That’s crucial for metaprogramming: we’re running programs at compile time, so they’d better be quick.

One project that some colleagues are working on, and I’m helping out a bit with, is called Frex. Frex is a library – I guess the headline would be automatically proving equivalences. Take an algebraic theory and an expression in that theory – say, monoids, groups, whatever – take the Idris type that expresses an equation, translate it into a form that Frex understands, do the proof, and then that proof is something you can use in your program.

Metaprogramming is essential to making that work effectively and usably, and an awful lot of my effort over the last couple of years has been making Idris fast enough to compile the Frex library in a reasonable time – to make it responsive. Our definition of “responsive” here is basically Jakob Nielsen’s: under one second ideally, and if it’s over 10 seconds, it’s a bug. We’re generally doing well on the “under one second” there. So yeah, my primary thing is performance, but what can we do with metaprogramming?

Other things would be type providers – this is where you run a program to calculate what a type is going to be. How do you know that you’re interacting with a database correctly? A database has a schema, so maybe the compiler could run a program to query the database schema, so that you know the rest of your program is interacting properly with that database.
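The underlying mechanism is that types are first-class values, so a function can compute a type from data. A hypothetical provider might fetch the schema from the real database at compile time; in this sketch, the schema and the column-type names are simply made up for illustration:

```idris
-- Map (hypothetical) SQL column type names to Idris types.
ColType : String -> Type
ColType "INTEGER" = Int
ColType "TEXT"    = String
ColType _         = ()

-- Compute a row type from a schema: a list of (name, column type) pairs.
RowType : List (String, String) -> Type
RowType []             = ()
RowType ((_, t) :: cs) = (ColType t, RowType cs)

-- RowType [("id", "INTEGER"), ("name", "TEXT")]
-- reduces to the nested pair type (Int, (String, ()))
```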

Another thing we might use that for: I have a student working on verifying properties of state machines. You could take a graphical description of a state machine in DOT format, parse that DOT format, generate an Idris type that captures the meaning of the state machine, and then try to do proofs about whether it’s always possible to get into a given state – a kind of mini model checker in the type system. Again, that needs good metaprogramming to work effectively, because we’d really like to be able to just draw the state machine, and that’s your program: through type providers and metaprogramming, you look at the picture, and that is your API. It all comes down to metaprogramming. Everything interesting I have in mind seems to come down to good metaprogramming.


We would like to thank Edwin for the interview!

If you would like to read more interviews with programming language creators, educators, and companies that use FP in their daily life, head to our interviews section. Additionally, we sometimes release video interviews on our YouTube channel, so be sure to subscribe!
