Haskell in Enterprise: Interview with Rob Harrison

We’ve all heard about Haskell success stories from famous companies like Meta and Tesla. But did you know that Haskell is successfully used in plenty of enterprises, many of which you wouldn’t think of as being at the forefront of technology?

Today’s guest is Rob Harrison, a Lead Architect at Flowmo.co who has worked as a technical lead on projects for clients like Vodafone and Tesco. In this interview, we talk about his experience and the techniques he uses to bring the power of functional programming to consulting projects.

Read on to discover how Haskell can help large enterprises and what steps one should take to introduce it.

Our interview with Rob Harrison

Hi, Rob! Could you give our readers some introductory information about yourself?

I have had quite an unusual journey to software engineering actually. I studied electronics, so my first introduction to software was to solve specific problems, and I think that’s quite good practice in general. I always think having a project or goal in mind is super important to drive curiosity and learn the right-tool-for-the-job, rather than focusing on the general or theoretical. The real world is asynchronous, not procedural, and circuits gave me a good grounding in this.

I dropped out of university to go and work in the music industry, which turned out to be a great decision for me. I was able to learn from the last generation of truly analogue recording engineers on large-format consoles and analogue tape machines. It didn’t hurt that I could help repair the equipment, and that helped train my brain for modelling complex systems. I think that is when I realised that I could do some things that others couldn’t, particularly around visually modelling systems. It was like components had an emotion, or some sort of synesthesia thing. Many years later, when I found out I was Autistic (Asperger’s, now known as ASD), that explained a lot.

I always had an interest in servers and Linux, but building several websites provided me with a challenge to help drive my interest in programming beyond pure curiosity. I wasn’t just happy building the sites either, I wanted to know how to host them and then how to host them in a highly available way.

At that time I was less confident and doubted my own ability to self-study, so I went to work for a web development company to ‘learn from the professionals’. However, when I started, I realised that I knew a lot more than I thought I did (and, in this case, more than my colleagues at the time). I was developing a passion for business and entrepreneurship by following the tech sector so closely, so I left and started my first company, which was an agency.

When the iPhone came out, we started getting more and more requests from customers for app development, so I retrained and moved the company away from services and towards products. By doing this, I was quickly offered a good position at a growing startup, AppInstitute, as a Platform Architect, to help them improve their drag-and-drop app building product. This was at a time when Docker was emerging, Google was pivoting to Linux containers, and Kubernetes was just getting started, so I had the chance to really simplify highly available stacks for the first time, using techniques I’d played around with in LXC many years earlier.

I can’t overstate how massive this change was for the industry, as for the first time it really empowered developers to use the right tool for the job. You could encapsulate all your application’s dependencies into something that would execute just the same locally as it would in deployment, and it wasn’t as heavy as VMs or as difficult as other techniques at the time. Finally, as a dev, you felt like you could, for example, kick off an external system tool or library and be confident it would respond the same way on every system.

I worked briefly for another company and for myself for a bit, before joining the team at Flowmo.co and that’s where I’ve been since. We’re an agency contractor for many businesses, including Vodafone, Sky, the Invictus Games and many others.

We build software and tooling, largely for corporate businesses. We usually get hired for our expertise or because we can turn a project around quicker than an internal team, which is often too busy or doesn’t have the specialists in-house. Flowmo.co is an agency, which poses different challenges than product development, where I worked earlier in my career. It means every project is different and allows me to use a lot of different technologies. In order to lead a team effectively, I often have to evaluate technologies and make logical decisions.

I’m a Lead Architect, which means my involvement starts at the very beginning of a project, where I recommend tooling and advise on how the project is built. Sometimes I lead the team during development of the project right the way through to the end and code on a day-to-day basis as part of the team. Sometimes I consult other teams and help them steer the project in the right direction. It’s important to me that I code every day and keep up-to-date with changing technologies regularly so I can give good advice.

It’s very odd for me to be recommending a relatively old language such as Haskell; usually I’m working with the trendy new thing. So it says something about Haskell, and the benefits I’ve seen, that I feel those benefits significantly outweigh the costs. It’s always a personal risk for me when I advise against a trend, and I need to have a very good reason to back up my statements.

Rob Harrison, Lead Architect at Flowmo.co

You have a book in progress. Can you talk more about what it’s about and what is your current experience writing it?

My working title is ‘Theory in the Category of Code’. Basically, I find myself on a daily basis having the same discussions over and over about what I consider to be essentially solved problems, questions already answered by industry best practice. My book started out as a way to share a chapter with my team instead of having the same conversation yet again.

For example, I can’t believe we’re still having to have conversations about the efficacy and efficiency of TDD. For me, on anything but the smallest projects, TDD is a must to avoid the project grinding to a halt as it grows.

We are living through a slow period of change towards functional declarative programming and away from procedural imperative approaches. This led me to research where some of these ideas came from originally, and subsequently to discovering category theory, which I see as a beautiful and highly useful framework for thinking about software and logic and relating this to the physical world around us.

Of course I’m highly aware now that my brain doesn’t necessarily work the same way as other people’s do due to my Autism diagnosis. However, moving through physical space is something we all have in common. When I’ve worked with my team and used category theory diagrams and spatial logical thinking about the problem domain with them, it has been exceptionally practical for working things out together and forming system domain documentation.

I’m not saying category theory or our brand of its use in logical deduction is perfect, but I do think it is underutilised in the software profession. If the team is working in a functional style with types, where the Curry-Howard-Lambek isomorphism applies, then working with category theory as a tool is a really great experience. If they are not, then it’s also highly useful, but it might not have a 1:1 relationship with the project code it applies to.

I really want to help introduce basic category theory thinking to software development teams working in industry that might not be as aware of it as those in academia, and that’s another reason I’m writing my book.

What do you think is the single biggest problem in the software development industry that functional programming solves?

If I were to pick one, I’d have to say parallelisation/concurrency. It’s a recent problem really, because for much of the history of software, there was only one CPU or core to execute anything on. This fact, along with the increase in available RAM, seems to be the driver towards the functional declarative style in the industry as a whole. Mental models of computing where the programmer thinks their computer is still doing one instruction at a time are really no longer valid.

The first thing you get from functional programming is immutability. This gives you a guarantee that the value you are using isn’t going to be changed by some other thread while you’re performing an operation on it. This eradicates a huge group of bugs that are not only really difficult to fix but sometimes also really difficult to replicate. It removes the ‘need’ to add locks into pure code, while Haskell also provides other shared-state options, such as Software Transactional Memory, and mutable container concepts, such as MVars, for effectful code that needs shared state.
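As a small, hedged sketch of the Software Transactional Memory idea mentioned above (the counter and function names are made up for illustration), here is a transactional counter: each increment runs inside `atomically`, so a read-modify-write can never interleave with another thread’s update, with no manual locks.

```haskell
import Control.Concurrent.STM
import Control.Monad (replicateM_)

-- Bump a shared TVar counter n times. Each modifyTVar' runs inside
-- `atomically`, so concurrent callers could never observe a torn update.
incrementN :: Int -> IO Int
incrementN n = do
  counter <- newTVarIO (0 :: Int)
  replicateM_ n (atomically (modifyTVar' counter (+ 1)))
  readTVarIO counter

main :: IO ()
main = incrementN 1000 >>= print  -- prints 1000
```

The same `atomically` block would stay correct if the increments were spread across threads with `forkIO`, which is the point: the transaction, not a lock, carries the safety guarantee.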

One of the great things about working in a functional language, rather than just in a functional style in an imperative language, is that you have certain guarantees about the libraries that you’re using. They cannot, or at least should not, be mutating the values they give you. It’s one thing to do the right thing yourself, but unless you write all those libraries yourself as well, you have no guarantees.

The other concurrency thing you get from working in a functional declarative language is the ability to write the problem definition and not the mechanics of the solution. This means that rather than writing imperative for loops, you can write lazy declarative maps and folds/reducers. This separates the problem definition from the execution. You as the programmer don’t care how the machine iterates through the collection, just that it executes the function you provide for each element.
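The contrast above can be shown in a few lines. This is a minimal sketch (the function name is made up): the pipeline states *what* the result is, a fold over the squared even numbers, and says nothing about how the collection is traversed.

```haskell
-- Declarative maps and folds instead of an imperative loop: no loop
-- counter, no mutable accumulator, just the problem definition.
sumOfEvenSquares :: [Int] -> Int
sumOfEvenSquares = foldr (+) 0 . map (^ 2) . filter even

main :: IO ()
main = print (sumOfEvenSquares [1 .. 10])  -- prints 220
```

An imperative version would fix the iteration order in code; here the runtime is free to fuse or otherwise rearrange the traversal.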

This empowers the language designers to decide what the best way of accomplishing your task would be, at both compile time and again at runtime. The aim of the game is knowledge encapsulation. If some new technique of getting to the same result faster is discovered, you don’t have to modify your code to see the benefits. Of course you can also optimise by ‘hinting’ to the compiler, for example by using the parallel library in Haskell and computing the result across multiple threads.
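The ‘hinting’ with the parallel library might look like the following sketch (the `expensive` function is a stand-in for real work). `parMap` sparks one evaluation per element; on a single core the result is identical, it just runs sequentially.

```haskell
import Control.Parallel.Strategies (parMap, rdeepseq)

-- Stand-in for a genuinely expensive pure computation.
expensive :: Int -> Int
expensive n = sum [1 .. n]

-- parMap hints that each element may be evaluated in parallel; the
-- rdeepseq strategy forces each result fully inside its own spark.
results :: [Int]
results = parMap rdeepseq expensive [10000, 20000, 30000]

main :: IO ()
main = print (sum results)
```

Note that the code itself never mentions threads; compiling with `-threaded` and running with `+RTS -N` is what actually spreads the sparks across cores, which is exactly the separation of problem definition from execution described above.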

Of course doing anything on a small case is possible and we work in an industry where almost anything is possible. This sometimes monopolises the discussion around best practices, but I’m generally concerned with larger projects. Less so about what can be shown to work at the small scale, more so about what can be shown to work at every scale.

As a developer who works in an agency, which features of Haskell best enable you to deliver well-working software to your clients in a fast manner?

I think I’d have to say the compiler and the type system. For users who are new to Haskell, the error messages from the compiler can at first be a bit cryptic, but very quickly they become invaluable.

People forget how something as simple as sum types can have massive benefits in comparison to languages that lack this core data type. It’s crazy that other languages don’t provide a sum type at all. They really are half of the picture and provide huge simplification opportunities. It’s like a language supporting AND but not OR, and yet experienced Haskell developers sometimes take this for granted.
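To make the AND/OR analogy concrete, here is a tiny illustrative example (the domain and names are invented): `Payment` is a sum type, a value is exactly *one* of the alternatives, while `Voucher`’s fields are a product, it carries a code *and* a value. The compiler can then warn when a pattern match misses a case.

```haskell
-- A sum type: exactly one of these alternatives at a time ("OR").
data Payment
  = Cash
  | Card String          -- card number
  | Voucher String Int   -- voucher code AND remaining value (a product)

-- Exhaustive pattern match; GHC warns (-Wincomplete-patterns) if a
-- constructor is ever added here but not handled.
describe :: Payment -> String
describe Cash           = "cash"
describe (Card number)  = "card ending " ++ lastN 4 number
describe (Voucher c v)  = "voucher " ++ c ++ " worth " ++ show v

lastN :: Int -> [a] -> [a]
lastN n xs = drop (length xs - n) xs

main :: IO ()
main = putStrLn (describe (Card "4929123456781234"))  -- prints "card ending 1234"
```

In languages without sum types, this usually becomes a class hierarchy or a tag field plus nullable members, and the “one of” invariant lives in the programmer’s head rather than in the type.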

I suppose my next favourite feature would be laziness. Its ability to simplify and make the code we write more readable is, in my opinion, very nice. But for many it can take a lot of getting used to.
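A tiny illustration of that readability win: with laziness you can describe an infinite structure and consume only the part you need, so the definition reads like the maths rather than like a loop with an exit condition.

```haskell
-- All squares below 100, taken from an infinite list. Laziness means
-- map (^ 2) [1 ..] is only evaluated as far as takeWhile demands.
smallSquares :: [Int]
smallSquares = takeWhile (< 100) (map (^ 2) [1 ..])

main :: IO ()
main = print smallSquares  -- prints [1,4,9,16,25,36,49,64,81]
```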

Finally, I think the parallel library is really great. I use it a lot. The parallel and concurrency work we do in Haskell just wouldn’t be possible in other languages due to the number of libraries we use. Also, having the guarantees from the type system is key to making reusable concurrent code. I think there are more developments to come in this area.

In the future, I’m hopeful that runtimes will be able to make dynamic decisions about what to run concurrently by measuring execution. I think this is feasible but a difficult challenge. Right now, we focus on describing problems declaratively so that they are independent of the runtime.

Among other projects, you have worked with Haskell at Vodafone. Could you tell our readers about the experience, the results, and what you learned from it?

I think one thing we’ve learned is that with a combination of Haskell’s strong type system, the compiler producing practically provably correct code, and with TDD utilising property-based testing (generative testing) via QuickCheck, we can write some really rock solid code. Incredibly stable and bug free, with great uptime metrics.
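To show the QuickCheck style mentioned here (the run-length coder is an invented example, not the Vodafone code): instead of hand-picking test cases, you state a law and let the library generate hundreds of random inputs looking for a counterexample.

```haskell
import Data.List (group)
import Test.QuickCheck

-- Toy function under test: run-length encode a string.
encode :: String -> [(Char, Int)]
encode = map (\g -> (head g, length g)) . group

decode :: [(Char, Int)] -> String
decode = concatMap (\(c, n) -> replicate n c)

-- The property (law): decoding an encoding returns the original input.
prop_roundTrip :: String -> Bool
prop_roundTrip s = decode (encode s) == s

main :: IO ()
main = quickCheck prop_roundTrip  -- e.g. "+++ OK, passed 100 tests."
```

When a property fails, QuickCheck also shrinks the failing input to a minimal counterexample, which is often exactly the edge case no one would have written a unit test for.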

We always design our systems with high availability and redundancy; we utilise Kubernetes and NoSQL databases in environments configured to be highly scalable. So we design systems expecting them to fail and regularly test our production systems by simulating failures. Deploying Haskell containers has been ultimately a lot more stable than we really required, which is great.

However the real gems have been finding bugs that we never would have considered via property testing. I’ve had an experience where I delivered the first version of a tool to a client for review expecting feedback and basically everything worked the first time. That tool is still in daily use, and it’s still on the same version number! Obviously that’s an incredibly rare anecdote and not what you should expect or come to rely on, but it’s amazing that it can happen at all.

Developer flow and focus is so important to just getting things done. When you’re coding in Haskell, you find less need to execute your code over and over, instead letting the hints from the compiler guide you constantly towards something that executes. This is a really great workflow.

I think I’ve definitely learned a lot about training teams on these Haskell projects. I found things worked best when focusing on fundamentals and understanding. Usually if someone is struggling to understand something, there was an underlying assumption that, once corrected, allowed them to clear up a whole bunch of confusion. Not just knowing how to do something but why.

Simple Haskell with long descriptive naming of values is the way to go with teams, as there’s no need to use single letter naming that looks like an equation; it’s not going to make your code any faster and is going to slow down everyone.

In what ways can Haskell improve large codebases?

I think large codebases in general are an issue. I always try to keep applications small and targeted. Particularly with Haskell, the size of a codebase can become an issue if compile time becomes too large. This can be overcome by linking, and by making smaller applications for specific jobs and messaging between them.

At Flowmo.co, we use microservices to make this easy. They also enable a best-tool-for-the-job strategy, over just using what is available in the language or toolset you’re using.

In our Data Science department, we use a lot of Python, just because it is the language of data science at the moment. However, Python is really executing C most of the time, so we can provide Python libraries written in Haskell via the foreign function interface.
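The Haskell side of that arrangement might look like this sketch (module and symbol names are illustrative, not Flowmo.co’s actual code): `foreign export` makes GHC emit a C-callable symbol, which you compile into a shared library and load from Python with `ctypes` after initialising the GHC runtime with `hs_init`.

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
module Fib where

import Foreign.C.Types (CLong (..))

-- Pure Haskell implementation.
fib :: Integer -> Integer
fib n = go n 0 1
  where
    go 0 a _ = a
    go k a b = go (k - 1) b (a + b)

-- Exported C symbol: `long hs_fib(long)`, callable from ctypes.
foreign export ccall hs_fib :: CLong -> CLong
hs_fib :: CLong -> CLong
hs_fib = fromIntegral . fib . fromIntegral
```

On the Python side you would load the resulting `.so`, call the RTS’s `hs_init` once, and then call `hs_fib` like any other C function; the marshalling stays at plain C types at the boundary.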

How can category theory and functional programming be useful for software developers working on run-of-the-mill projects?

I think category theory can be at its most useful when used as part of the day-to-day routine of development. I don’t think it’s only limited to working in large teams, but large projects and large teams are very different. Even working on your own, category theory is really so handy.

For me, it is like a lens to look at the world, and particularly the world of functional programming software projects. It’s like a set of train tracks, or a snap-to-grid button in a piece of software. It allows you to think about problems spatially, in the physical world, write less code and identify discrepancies in your mental model of the problem domain early. By forcing oneself to use a limited set of simple rules, it supercharges code reuse and reduces codebase size, thereby increasing maintainability.

What is your opinion about using a functional programming style in languages that are not necessarily full-on FP like TypeScript or Swift?

I’ve done a lot of Swift and about two projects in TypeScript. I used to have the opinion that writing in the functional style in imperative languages was not a great thing to do. My opinion on this front has completely reversed, partly due to the increase in RAM available to machines and partly due to the need to heavily parallelise code due to increasing CPU core count.

Mostly though, I think writing in a functional style in other languages is simpler. If you don’t change the values of your variables once set, then you won’t get confused about what they are set to. Of course, you can’t be sure about third party libraries and how they will handle the variables you send to them.

In Swift, I use a lot of structs (rather than objects) because they are pass-by-value, which helps a lot. In Python and JavaScript, you kind of need to have an agreed upon convention within the team, which makes it more difficult.

I think JS developers have been burned a bit by TypeScript, as they don’t see many of the benefits before they put the hard work in. In general, I think more type safety is a good thing. If you have to use JS, I would highly recommend getting to grips with TypeScript, getting over that initial learning hurdle, and upgrading legacy codebases. I would hate to have to work in a dynamically typed language. So many of the mistakes we all make on a daily basis are caught by the typechecker, which avoids many of the runtime exceptions we do see during testing. With dynamic typing, however, I’m most concerned about the runtime exceptions we don’t see during testing and that only become apparent in production.

What tips would you give to software development teams that want to try out functional programming?

Firstly, I would say don’t be scared to check out a functional language. There is a good reason these exist, and they are highly optimised for the job compared to imperative languages with bolted-on functional features. You will get so many more benefits working in a language that is pure, functional, and typesafe, but you won’t see all those benefits if you try to do FP in an imperative language, and that can actually put a lot of people off. You may think that by going half-way you’re taking less of a risk, but actually I think the better strategy is to go all in, but on a smaller thing.

I would advise that you look at your application and identify small easy chunks that can logically be isolated into independent processes. If you’re experienced in the functional language, pick easy chunks that would benefit from optimisation. All languages allow the execution of external processes and these can also run asynchronously, so this can sometimes be the easiest way to experiment with functional languages like Haskell and incorporate them into existing projects. Obviously the use of things like Linux containers to host microservices can make this easier but don’t take on too many challenges simultaneously.

If you are going to write command-line tools, I would very much recommend the optparse-applicative Haskell library, or similar, to make sure your tools have great help sections via the -h flag. This will give other teams, and your future self, a much better experience using your tooling.
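A minimal optparse-applicative sketch, with the tool and option names invented for illustration: the parser is built applicatively from option descriptions, and adding `helper` is what makes `-h`/`--help` print a generated usage section.

```haskell
import Options.Applicative

-- Illustrative options record for a hypothetical tool.
data Opts = Opts
  { optInput   :: FilePath
  , optVerbose :: Bool
  } deriving (Eq, Show)

optsParser :: Parser Opts
optsParser =
  Opts
    <$> strOption
          (long "input" <> short 'i' <> metavar "FILE"
             <> help "Input file to process")
    <*> switch
          (long "verbose" <> short 'v' <> help "Enable chatty output")

main :: IO ()
main = do
  opts <- execParser $
    info (optsParser <**> helper)
         (fullDesc <> progDesc "Process a file" <> header "mytool - a demo")
  putStrLn ("Processing " ++ optInput opts)
```

Running the compiled binary with `-h` then prints the header, usage line, and per-option help, all derived from the parser itself, so the documentation can’t drift out of sync with the accepted flags.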

Try incorporating Haskell into projects this way first before trying something like building a C library via the Foreign Function Interface. You and others can easily get put off by the complexity of this interface layer, when really Standard Input and Standard Output can be a good starting point. When everyone is sold on the idea and the benefits can be seen, that is the time to start introducing complexity gradually and iteratively.
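The stdin/stdout route can be as small as this sketch: a Haskell filter that any language can shell out to. `interact` streams lazily, so it works on pipes of any size; the transformation here is a trivial placeholder.

```haskell
import Data.Char (toUpper)

-- Placeholder line-by-line transformation; swap in real logic here.
transform :: String -> String
transform = unlines . map (map toUpper) . lines

-- Read everything from stdin, write the result to stdout.
main :: IO ()
main = interact transform
```

From an existing project you would then just spawn the binary as a subprocess, e.g. `echo hello | ./filter` from a shell, or `subprocess.run` from Python, with no FFI layer to maintain.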

One last word of advice is to trust the maths, by which I mean trust the logic. I’m no maths whizz, and I don’t think you need to be to do FP. Just know that if you’re building something logical, it will be provably easier in FP. Problems occur when trying to build something illogical, and that is just FP’s way of telling you that there’s a problem with what you’re trying to do and it doesn’t make sense. That’s one of the great benefits of FP, and it will help you towards a logical solution that will execute in all conditions. At the end of the day, that’s what we all want, isn’t it?


Big thanks to Rob for chatting with us!

If you would like to read more interviews about the use of Haskell in production, we have a whole series on the topic. 🤯 And if you want to hear more from us, don’t forget to follow us on Twitter, where we post new articles and videos every week!
