Functional Futures: Functional Programming and Web3 with Brooklyn Zelenka

Recently, we started a new podcast called Functional Futures, where we interview people that are working on the future of functional programming. Our first guest was Brooklyn Zelenka, the co-founder & CTO of FISSION, an applied research company developing local-first and user-controlled applications. She is also the author of Witchcraft, a library for writing Haskell “fan-fiction” in Elixir.

In the interview, we talked about her path towards becoming a developer, functional programming, Witchcraft, and Web3.

You can listen to the episode in your favorite podcast app or watch the recording on our YouTube channel.

Below you can find highlights from the interview, edited for clarity.

Highlights from our interview with Brooklyn Zelenka

Learning programming

Jonn: I would actually like to start by asking you how you ended up pushing buttons to make computers do stuff. As far as I understand, you didn’t start your career as a coder?

Brooklyn: First off, thank you so much for having me here and for the nice introduction, and yeah, you’re right. I didn’t start off as a coder. I initially was studying music theory and composition and did a little bit of that – some film scoring and things like that – for very small, independent things, but, you know, it’s a hard way to make a living. And because I was in music, I was also doing a lot of concert posters for the other students in school, so I ended up with some Photoshop and Illustrator skills, worked as a graphic designer for a little bit (there’s a lot more demand for that than for a classical composer), and worked a bunch of random odd jobs. I ended up at a startup, and they said: “Hey, do you think you could do a little bit of front-end development? Because we wear a lot of hats, and you’re a designer, you could maybe do some of that stuff.” They sent me off for the weekend with a couple of books, and I came back on Monday and they were like: “You seem better at this than you are at graphic design, could you just, like, keep programming?” So I did that.

This was before Node was really the clear winner. We were using a JVM-based backend that would let you mix and match all the different JVM languages, and this company definitely was mixing all of them. So I had to pick up a bunch of languages really, really quickly if I wanted to interact with the backend – you know, Groovy, JRuby, Clojure, a whole bunch. I found I really liked it, and so I got really into, as I like to say, “collecting programming languages”, got really into PLT, and the rest is history, I guess.

Jonn: Before we start talking about PLT at large and maybe some particular details, I would also like to ask you the following. Right now, you are a CTO, which is essentially a leadership position. So I think it will be very interesting for our viewers to hear about your path to technical leadership, and, given that you got into technical leadership reasonably quickly, which parts of your background contributed to that?

Brooklyn: There are two parts to it, I guess. One is: I ended up doing a lot of management in places that are just not related to tech at all. My first job, through high school and university, was at a restaurant, in the kitchen, and as you work your way up through the ranks, you learn to manage there – which is, you know, I wouldn’t recommend restaurant internal management practices, it’s mostly screaming – but you learn from those things what not to do, really. I also worked odd jobs in retail and ended up really quickly in management there as well.

My co-founder Boris often talks about people who have the common sense gene turned on, and so I think mostly it was that I had some degree of common sense. I would look at something and go: “Hmm, that doesn’t seem right, let’s fix it, let’s get everybody aligned, let’s make sure everybody’s unblocked, and not just have the thing be broken forever.” And so you end up, over the years, developing some experience with that, and I think that then carries over into other fields, because a lot of it is interpersonal skills.

The other part, from the purely technical leadership side, is: I’d worked at this small startup, I’d done some consulting on my own, and I’d built up a skill set in a bunch of languages, which, unintentionally, I think, had some proof-by-intimidation to it. I would talk to somebody and be like: “oh, it’s like in Clojure, it’s like in Haskell, you kind of do it like this”, and people are like: “oh, she must really know what she’s talking about.” But at the time, I had like two years of experience. So you end up getting thrown into leadership positions or running teams more quickly from the purely technical angle as well.

Jonn: The normal path of getting there is to go to a university, study computer science for a while, figure out around the third year which branch interests you the most, become a classically trained specialist, and then work on social skills afterwards. There are big advantages to the classical approach as well, because this way people kind of have to allocate time where they are more or less dedicated to studying computer-science-related subjects. Maybe you can share some organizational tips and tricks? How did you manage to persevere in a world full of distractions and keep learning?

Brooklyn: It’s true, my background is, let’s say, messy at best. What I’ve noticed from people who went the classical route is that they’ve been exposed to things that they may not have had an interest in at the time: they’re forced to take a compilers class, let’s say. People that are self-taught can kind of wander around and explore the things that are interesting to them. I have both the advantage and disadvantage of “everything is interesting to me”. I was doing this in my early to mid 20s and was literally just working 17 hours a day, seven days a week, and just reading books, writing code, looking at other people’s code, getting into older books, and all of this stuff. And that’s really been consistent throughout my career: just reading, learning, talking to people, going to conferences, picking people’s brains, and every time I feel like I’ve hit a plateau, trying to find some other new area that might be of interest in which to grow.

And that often takes me into these, as you mentioned before, sort of the boundaries between different areas, because I’ll be reading a book on, whatever, compiler design and distributed systems, and go: “how can I apply compiler optimization to distributed computation?” The answer is, you know, yes. And oftentimes other people have already done that and I’m reinventing the wheel. In a university setting, your professor would say: “you’re reinventing the wheel, go look at this paper”, whereas for me or other self-taught people it’s more of a random walk. But you can end up in a place where, if you’re passionate about it, you’ve gone really deep into some areas that other people may not have because it’s not this received curriculum. So it has advantages and disadvantages.

I also saw this back in music school, because I was primarily self-taught prior to going to university, and then I was getting rigorous classical training, and it was the same sort of thing. When I was self-taught, I didn’t know that a given piece was supposed to be very difficult to learn, and then you go to an audition and they say: “Wow, you’re playing Prokofiev.” And then getting the really rigorous “No, these are the exercises, you need to do them in this order, listen to these records”, and so on, was also helpful. So I think they’re just different, but you have to have a lot of self-discipline to go the self-teaching route.

Jonn: When I personally observe non-coders, or people who weren’t primarily coders, transitioning into computer programmers, their initial exposure to programming is usually in some sort of scientific language, or maybe some C-like language, or a scripting language like PHP or Lua. A very frequent pattern I find is that people who follow this career path tend to stay within those boundaries: maybe they will learn a C-like language and Python, or they will learn Java and Kotlin, but their scope tends to be justifiably limited. You are, for sure, one of the most polyglot developers that I’m aware of. Aside from the fact that the JVM has many frontends, what’s your secret to never stopping?

Brooklyn: I think a big part of it early on was impostor syndrome and just saying, “everybody else obviously knows how to write Ruby, I need to learn how to write Ruby”, and then, “oh, there’s this Python thing, they actually look kind of similar, what are the differences?” I found that I liked learning about these things, starting from just the absolute surface and then discovering: “Okay, there are these families of languages. How are they different? Can I – both to deepen my understanding of things, but also just out of pure interest – express object-oriented style in Standard ML?”, right, or whatever, which has ended up being a bit of a theme in my career, this remixing of ideas.

On the polyglot side, early on I was writing mostly JS on the front end, PHP, and then JVM languages – I did a whole bunch of PHP. Then, while consulting, I worked as a Rubyist for a few years.

I found myself drawn to FP very quickly as something that seemed to make sense, that felt rigorous, that I enjoyed the aesthetics of. And Clojure being one of my first languages, I fell in love with Lisps. Then people would mention ML, and so I would go and look at, you know, Standard ML, or Haskell, or OCaml.

I ended up running the overarching unified Functional Programming Meetup here in Vancouver, and a lot of people were coming in who wanted to learn about it. They had no exposure, they were just told functional programming is interesting, and so I would have to meet them where they were at.

It’s like: “Okay, great, you’re writing Python all day and you want to learn what a monad is – let’s write a monad in Python and just see what that looks like.” I was really the advocate for FP and I was trying to get other people into it, and that meant that I took on more of the burden on the teaching side: “Okay, well, let’s meet you where you’re at” – I’m going to go and learn a little bit of whatever their language is and try to present something closer to where they’re coming from.

Jonn: Yeah, it’s one of my favorite quotes: in order to really learn something, you first have to teach it.

Witchcraft

Jonn: For those viewers or listeners who are not familiar with what Witchcraft is, maybe it’s time to introduce it briefly.

Brooklyn: Maybe the best way is to describe how Witchcraft came to be, because that probably gives a lot of the context of what it is. I was at the Vancouver functional programming meetup, and Vancouver had a lot of Rubyists, and everyone was getting frustrated with the lack of concurrency support built into the language, so a lot of people were looking around, and Elixir kind of has Ruby-ish aesthetics. It’s a very different language, but it comes from a lot of Rubyists working on it, even from the Rails core team.

And so people were coming in wanting two things: “I want to understand all of these scary abstractions that I’ve heard about and get over that fear, and I want to learn Elixir.” They were coming in mostly from Ruby, and I had to teach them Elixir and functors, applicatives, and monads at the same time.

We started with: “Okay, well, let’s just do some live coding to show that we have things that are functor-like in Elixir, but it’s not quite the same.” In Elixir, we have the Enum module, and it always outputs a list at the end, whereas we want to get back the same data type that we started with. So I wrote a functor instance and went: “Okay, well, this is kind of interesting, what if I turn this into a larger project?” Well, we’re missing a few things in Elixir – all of the really classic FP stuff like partial application, an identity function, things like this.
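
To make that concrete, here is a minimal sketch of the “functor in Elixir” idea – a plain protocol whose `map` gives back the same data type it received, unlike `Enum.map/2`, which always returns a list. This is illustrative only and is not Witchcraft’s actual implementation:

```elixir
defprotocol MyFunctor do
  @doc "Apply `fun` to every value inside `data`, keeping the shape of `data`."
  def map(data, fun)
end

defimpl MyFunctor, for: List do
  def map(list, fun), do: Enum.map(list, fun)
end

defimpl MyFunctor, for: Map do
  # Map the values, keep the keys, and return a Map
  # (Enum.map on a map would return a list of tuples).
  def map(map, fun), do: Map.new(map, fn {k, v} -> {k, fun.(v)} end)
end

MyFunctor.map(%{a: 1, b: 2}, &(&1 * 10))
#=> %{a: 10, b: 20}

Enum.map(%{a: 1, b: 2}, fn {k, v} -> {k, v * 10} end)
#=> [a: 10, b: 20]   (a list, not a map)
```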

So over the course of that weekend, I think, I wrote Quark, which is the small pieces that you put together to compose and decompose functions in Elixir. That gave several people aha moments, and then I thought: “Okay, well, maybe I should flesh out this functor-in-Elixir idea.” And I ended up doing quite a lot: there are comonads in there, semigroups, all kinds of stuff. I implemented do notation directly in Elixir as macros – Elixir doesn’t have a type system, but it reads pretty close: you do have to tell it “I’m in this context” at the top, but once you’re inside, it’s almost identical to reading Haskell code. I also went pretty far with that, abusing the macro system so that we could do some property-based checking at compile time, things like that.
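
For readers who haven’t seen these “small pieces” before, here is the flavor of what such a library provides – written as plain, hypothetical Elixir for illustration, not Quark’s actual API:

```elixir
defmodule Pieces do
  # The identity function.
  def id(x), do: x

  # Right-to-left composition of two one-argument functions.
  def compose(f, g), do: fn x -> f.(g.(x)) end

  # Turn a two-argument function into a chain of one-argument functions.
  def curry(f), do: fn a -> fn b -> f.(a, b) end end
end

add = Pieces.curry(fn a, b -> a + b end)
inc = add.(1)                                    # partial application
double_then_inc = Pieces.compose(inc, &(&1 * 2))
double_then_inc.(10)                             #=> 21
```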

Elixir has something like a type class – it’s called a protocol – but one protocol can never inherit from another. So you can’t have these towers of abstraction. So I used macros again to implement checking that the type has an instance of the prerequisite class, and we’ll actually go and check that at compile time.

So that’s Witchcraft. It ended up actually getting used in production in a few places. Mainly people use it for error handling, because they might be coming from Haskell and they want that cleaner error-handling style. I’m told that a few banks are using it, which is always scary to hear. And, you know, some web projects as well, because they can define their own classes and towers of abstraction, things like this, which is always nice to hear from people.

In terms of onboarding folks into FP by using this sort of thing, I’ve found that in the past couple of years – the last five years in particular – functional idioms have become much more widespread. So people are mainly coming in having already seen TypeScript, as a baseline.

At one point, I started writing a “Haskell for TypeScript devs” gitbook. As I was teaching, if there was something where I would write a comparison for somebody – here’s how it would look in TypeScript, here’s how it looks in Haskell – I’d just, you know, stick it in that guide, and that repo was getting more stars than almost anything else at the time, in terms of pace.

So there’s definitely an appetite for learning these things. I’m the CTO at a scrappy startup, so I have no time for such a project, but if somebody ever wanted to pick that up, I think it would be really helpful for a lot of people; these translations seem to be really useful.

Jonn: A little more technical question: you mentioned that you use macros in Witchcraft to basically express constraints between different type classes and to define a type class here or there using those constraints. To me that’s pretty impressive; I’d be interested in how you achieved that.

Brooklyn: Let’s start with type classes. It actually ends up not being too difficult to implement; it’s mostly just thinking outside the box. It ends up producing just a bunch of protocol instances, which is a language-level feature.

People generally think that a macro goes from AST to AST as a pure function, and, actually, in some languages that’s true, but in the case of Elixir you can do kind of whatever: you can do side effects, assertions, all of these things, run arbitrary code. And so essentially it says: “Okay, I’ve compiled so far, I’m waiting to see these modules get built, and when they are, is there a protocol instance – which is a built-in feature of the standard library – for this type?” If there isn’t, then it throws an error with a nice message and all that stuff.
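
Here is a stripped-down sketch of that mechanism – a hypothetical macro that runs during compilation and checks whether a protocol already has an implementation for a type. It leans on Elixir’s built-in `Protocol.assert_impl!/2`; the module name is made up, and Witchcraft’s real machinery is more involved (including friendlier error messages):

```elixir
defmodule InstanceCheck do
  # Call this inside a module body; the check runs while that module compiles.
  defmacro assert_instance!(protocol, type) do
    quote do
      # Protocol.assert_impl!/2 (standard library) raises if `type` has no
      # implementation of `protocol`, failing the build.
      Protocol.assert_impl!(unquote(protocol), unquote(type))
    end
  end
end

# Usage (runs during compilation of MyModule):
#
#   defmodule MyModule do
#     require InstanceCheck
#     InstanceCheck.assert_instance!(Enumerable, List)  # fine – List implements Enumerable
#     InstanceCheck.assert_instance!(Enumerable, Atom)  # raises, the build fails
#   end
```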

The DSL for doing this is: instead of having defprotocol and defimpl (for the implementation), you have defclass and definst (for the instance), which then desugar into just a regular protocol and instance – but we run these checks while the macro is running.

And then I was looking at it like: “Well, there’s no built-in type system, what if we added some limited property checking to make sure that you had reasonable instances?” That’s probably the most controversial part of this – maybe a step too far – because now your compilation, which normally in Elixir moves very, very quickly, gets stuck running 100 checks on each of these things, including when you’ve just imported the library.
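
As a rough, hand-rolled illustration of what “property checking at compile time” can look like in Elixir (not Witchcraft’s actual checks): code placed directly in a module body runs while the module compiles, so it can probe a law on random inputs and fail the build if the law doesn’t hold.

```elixir
defmodule CheckedAtCompileTime do
  # This loop runs while the module is being compiled, not at runtime:
  # probe the functor identity law, map(xs, &id/1) == xs, on random lists.
  for _ <- 1..100 do
    len = Enum.random(1..10)
    xs = Enum.map(1..len, fn _ -> Enum.random(-1_000..1_000) end)

    if Enum.map(xs, fn x -> x end) != xs do
      raise "functor identity law failed for #{inspect(xs)}"
    end
  end
end
```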

So I have this very long-running PR, we’ll say. I keep trying to get back to it, and then I look at it for 10 minutes, try to get back into context, and then I’m out of time. But essentially, the idea is to be able to either disable those checks at compile time or turn them into tests instead. It only needs to run them (today) the first time you compile the module, though. So when you install the package, it takes a bit of time, and then after that it’s very quick again, which is good.

And then the other piece, just in case there’s somebody out there listening who wants to do these things: in Elixir, you can quote some code and turn it into AST, and so it feels very natural to write what looks like regular code and then essentially use it as a template. I gave up on that almost immediately, because when you’re doing these more complex things, where you’re really generating transforms from AST to AST, I just work directly in the AST, in the tuples, and it ends up being a lot easier to reason about, even though there’s this initial step of: “Okay, I have to actually understand how the syntax works underneath.”
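
For readers who haven’t touched Elixir macros, here is a tiny example of the two styles she contrasts: `quote` turns ordinary-looking code into AST tuples, and you can also build those `{name, metadata, arguments}` tuples by hand.

```elixir
ast = quote do: 1 + 2
# => {:+, [context: Elixir, import: Kernel], [1, 2]}

# The same node written directly as a tuple: {function, metadata, arguments}.
by_hand = {:+, [], [1, 2]}

Code.eval_quoted(ast)      #=> {3, []}
Code.eval_quoted(by_hand)  #=> {3, []}
```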

Deep functional programming

Jonn: In the Witchcraft repository, you use the term deep functional programming. I think I understand it, but I would like you to explain it in your own words.

Brooklyn: It’s really an ad hoc phrase. In FP, there are lots of different techniques, and there are several families of languages underneath. You know, we’ve been talking here about Haskell and Elixir, which feel very different at a high level. Elixir wasn’t designed to be a pure functional programming language; it was designed to solve practical problems – the ones Erlang was built for at Ericsson. It happens to end up with a lot of FP flavor, but it’s certainly not a pure language. In fact, the whole VM is really oriented around side effects.

Deep functional programming is sort of a placeholder, because I don’t have a better term for using particular abstractions and going really deep. Calling something an algebraic language A) scares people off, and B) there are lots of things that are algebraic outside of algebraic data types. So it’s really trying to capture the part that most people think is scary – these deep techniques that aren’t the first thing you’ll learn when doing FP.

I think it’s the Fantasyland Institute – the people that used to run, or maybe still do run, LambdaConf – that used to have a list of beginner, intermediate, and advanced FP concepts. Let’s say that’s roughly the trajectory for a lot of people, so deep FP is more the intermediate-and-beyond area.

Jonn: You mentioned that over the past few years you’ve noticed higher-order functions and other FP concepts becoming more mainstream. Where would you put the water line for deep functional programming compared to the sinking ship of imperative programming?

Brooklyn: Programming is so broad, there are so many people. It partly depends on the particular subculture. But looking around, there are a lot of packages in the Node ecosystem, on npm, that implement these things; there are people who would like to write OCaml or Haskell, or whatever, and want to bring some of these concepts back in. There’s a version of these – as I like to joke about them – Haskell fan-fiction libraries in essentially every language now, so a lot of people are getting exposed to these ideas.

We’re also seeing them start to end up in other places. [inaudible], which is an industrial research lab, has sort of repurposed bidirectional lenses for doing schema management between arbitrary schemas. They’re using a slight variation on, say, the lens library, but it’s still the same basic idea. In Swift, I saw somebody had ported parser combinators because – there’s a learning curve, but once you understand them – they end up being a really nice way of working.

So more people are getting exposed to these things, and when you already have a background in something – just the absolute classics: map, filter, reduce – or you understand that you can build your own higher-order functions, or that there are relationships between different things that are more “principled”, there’s at least a foothold and a jumping-off point now. You don’t have to go to the Gang of Four OO patterns and say: “Well, it’s kind of like a Facade,” or something like that. You can say: “Here’s the actual underlying idea, it’s a little bit like these other things you’ve already been exposed to,” and that’s super helpful.

Even in the functional programming group here – which I should probably clarify: it sounds like it’s mostly, you know, Brooklyn shows up and teaches a bunch of people about FP – pre-pandemic, we were running various language learning groups: there was a Haskell learning group, a Clojure learning group, things like that, and then events where people could come and just present a topic or an idea that they’re working on. So it was really intended to be this pipeline where you could take people from absolute beginner all the way through to pretty advanced with a lot of these things.

In the past few years – again, it’s hard to say, the past two years in particular, but the past several years – it has become much, much easier, just because people have seen this stuff before, there are more learning resources, there are more books. Typically, people will have taken a run at this kind of thing before trying it out.

At one point I was consulting at a company that was doing a lot of TypeScript, and they would run into problems, and I’d say: “Oh, well, you know, there’s a solution for that, and here it is.” And they would be like: “Wow, that looks really elegant.” Like, I know, it’s a monad, it’s the scary thing that you’ve been running away from, and here’s an actual use case. Now you have a practical application of it, because you’re working in a typed language with higher-order functions and you’re trying to move in this direction. You don’t have as much help from the compiler to do it, but you can. So yeah, it’s become just way easier, and I like the metaphor you’re using: there’s this sinking ship and then this rising water line as people get exposed to things.

I think Rust is going to be huge in the next couple of years. It doesn’t have an exact feature-for-feature match with Haskell, but there is a lot of higher-order function stuff happening there.

And people are getting exposed to types as well. So I think it’s only going to become easier. Simon Peyton Jones says that the idea of Haskell isn’t that it should be the number one language, but that it should influence all other languages, and I think it’s been tremendously successful at that when you look at all the fan-fiction and the language features we’re seeing in very mainstream languages today.

Web3

Jonn: Lately, I’ve switched to the Fediverse – Mastodon and platforms like it – and basically 90 percent of my feed these days is people saying how Bitcoin kills puppies, how Web3 is BS, etc. And to be completely honest, many of the arguments made in my Mastodon feed look kind of appealing to me. Coming from a cryptocurrency background, I know how many questionable substances are floating on the surface of these modern technologies, but I honestly don’t quite understand what Web3 means. So yeah, if you could put it in a positive light or rebut the common arguments against it, that would be really nice.

Brooklyn: I agree with some of the negative comments about it, so let’s get into a nuanced discussion here.

Web3 is definitely broader than blockchains. I entered the broad space in 2016–2017 (like, working on it professionally, not just playing around with things on the side). I was brought in to work at a fintech company that was doing stocks and bonds – securities on the blockchain, cross-border, all regulated – and they needed a programming language that was formally verifiable but also legible to a lawyer.

So a non-developer lawyer has to be able to read this thing, and it also has to produce verifiably correct code that would then run, and we would be able to plug in things like “here are the regulations in the US”, “here are the regulations in South Korea”, and they would have to overlap, plus all the extra logic for the particular stock, let’s say. As I sometimes like to say, the point we got to was the unholy union of Prolog and COBOL, because it was essentially a business language with constraints. So even going back then, we were using this term Web3 broadly – a term which was, yeah, coined by somebody in the blockchain space.

If you take a step back and look at the broader picture of what’s happening, […] the overarching idea is that we’re trying to return the web to its original founding principles. When the W3C was created, it had five main values: decentralization, non-discrimination, bottom-up design, universality, and consensus. (Consensus in a different sense – of people actually running things – not consensus as a distributed-systems mechanism.) Those all hold very strongly for this community of people who are working on things. So I’d say it’s larger than blockchain. It also includes things like – I’m sure we’ll dive into a few of these in a moment – what some people call the Distributed Web or DWeb, where you can self-host data or have it hosted in many places, and you can retrieve data by the hash of its bytes. So, a little bit like BitTorrent. In fact, BitTorrent probably fits under the broad category of Web3 as well; it just started before the term existed.
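
As a toy illustration of that “retrieve data by the hash of its bytes” idea (hypothetical Elixir, not any real DWeb API): a content-addressed store keys each blob by its hash, so whoever fetches it can verify they received exactly what they asked for, no matter which peer served it.

```elixir
defmodule ToyContentStore do
  defp cid(blob), do: :crypto.hash(:sha256, blob) |> Base.encode16(case: :lower)

  # Store a blob under the hash of its bytes and return the content address.
  def put(store, blob) do
    id = cid(blob)
    {id, Map.put(store, id, blob)}
  end

  # Fetch by content address and re-hash to verify we got the right bytes.
  def get(store, id) do
    with {:ok, blob} <- Map.fetch(store, id),
         ^id <- cid(blob) do
      {:ok, blob}
    else
      _ -> :error
    end
  end
end

# {id, store} = ToyContentStore.put(%{}, "hello, web3")
# ToyContentStore.get(store, id)  #=> {:ok, "hello, web3"}
```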

Blockchains are in there; they are one tool of many as we’re moving towards a world that – I mean, the entire web is a distributed system – uses more and more distributed-systems techniques and makes things self-verifying. That’s kind of the broad technical unifying theme, along with having users control their data rather than the big cloud providers. I don’t know about you, but I don’t want to see Amazon, Microsoft, and Google own all of the infrastructure for all of the future. And there are some people – you know, Cloudflare and Fastly and a few others, fly.io – that are working on doing things at the edge, but it’s a really difficult challenge to go up against the ten-thousand-pound gorilla which is Amazon, or AWS, who will just put in data centers wherever you are.

And these technologies fundamentally say: “okay, well, things don’t live in one particular location, they’re owned by the user, they can be completely local, they can be offline and continue to work”. And we’re not saying no to AWS – AWS could absolutely participate, but so could everyone else. I could use my excess computing resources to serve files or run computation, or whatever.

The two main complaints I think with blockchains in particular are: there’s a lot of scams, which is true, and there’s proof of work, which uses a horrendous amount of energy, which is also true.

So, a huge number of scams – yup, absolutely. There are lots of scams in the world broadly, and this is unifying fintech with, you know, systems geeks, so you end up with an area that today is really focused around money. There are more applications than that, because what it really is, is global distributed consensus: a way of everyone getting on the same page about some piece of data. But that’s not the only way to do these things.

You might not need global “everyone’s on the same page about what the state is”; you might have a more granular thing that only two parties are doing, and then you don’t need a blockchain at all. But yeah, scams exist. They’re already regulated – like, Ponzi schemes are illegal – and you need to be careful when buying anything online, really. It’s just that there’s been so much upside for people in the past couple of years that they’re heavily incentivized to take risky actions. So that’s the scary portion that absolutely needs to get taken care of. And if there’s a market crash, maybe we can get back to building things again.

And proof of work – you know, if there are any Bitcoin maximalists listening to this, they’re really not going to like what I have to say here: proof of work uses a lot of electricity, is actually not that secure, and most projects except for Bitcoin are moving off of it. When I say it’s less secure, I mean there’s this flywheel where you make money from mining bitcoin, and then you can use that to buy more graphics cards to mine more bitcoin, and over time this centralizes into a few providers. Even though everybody can participate, you end up with a couple of really big providers that are really calling the shots about what’s getting in and what isn’t.

So there are the other two systems. Proof of stake is more of a voting system and uses social and economic mechanisms instead of burning processor cycles to secure the network; it does still tend to lead to a handful of people having most of the say over what happens or not on the consensus mechanism, but you can design it to be more equal. And then there’s a bunch of effort going into things like proof of history, which is just using recursive hashing – whoever has the highest hash wins, and then everybody can synchronize against that immediately, because now they have the latest hash and can just keep going – which is an interesting solution. So there are lots of people trying to make this way less energy-intensive, and I think that’s a potential future.

But again – without wanting to over-focus on blockchains and, broadly, digital scarcity; we could have an entire podcast about that, I’m sure – tools like IPFS or [inaudible] or Secure Scuttlebutt (SSB) are the early picks and shovels for liberating data from a particular location: having essentially a CDN that’s based on who’s looked at something recently, kind of like how BitTorrent works, in a collaborative setting. And you can extend this to all kinds of things. If we’re going to talk about electricity usage: why am I re-computing solutions to problems over and over again when somebody else has probably run that computation before, in a lot of applications? What if we could post even our subcomputations, and then we’d have a giant memoization table that we can start pulling data out of – and the more people participate in such a network, the more efficient it gets. This is all stuff that’s really on the edge, in every sense of the term, but it is very much under the umbrella of Web3 and where Web3 is trying to go.
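
A minimal sketch of the recursive-hashing idea mentioned above (illustrative Elixir, not any particular project’s protocol): each step hashes the previous hash, so the sequence can only be produced one step at a time, yet anyone can cheaply re-verify it from the seed.

```elixir
defmodule HashChain do
  # One step of the chain: hash the previous hash.
  def next(prev_hash), do: :crypto.hash(:sha256, prev_hash)

  # Produce `steps` successive hashes starting from a seed.
  def chain(seed, steps) do
    Enum.scan(1..steps, :crypto.hash(:sha256, seed), fn _, prev -> next(prev) end)
  end
end

# HashChain.chain("genesis", 3) |> Enum.map(&Base.encode16(&1, case: :lower))
```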

Jonn: I pose as a person who doesn’t know anything about Web3, but I’m also, let’s say, using verifiable credentials in my projects. Does it mean that I’m part of the movement?

Brooklyn: Yeah, absolutely. Verifiable credentials are a W3C spec for asserting things about a person or an organization. The really classic example today is: you want to go to a bar, you need to be 18 years of age or older, and you show them your driver’s license – and your driver’s license also has your home address and a bunch of other information about you. What this lets you do instead is: the government can sign something that says this person is over 18 – not even your age, just literally that this person is over 18. They have a private key, so they can prove that they have some information that nobody else does, and now you have a verifiable credential that’s signed by the government.
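
To make the signing part concrete, here is a toy sketch (plain Elixir using Erlang’s `:crypto` module; the claim format and DID are made up and nothing like the real W3C data model): an issuer signs a minimal “over 18” claim, and anyone holding the issuer’s public key can verify it without ever seeing a birth date or address.

```elixir
# The claim deliberately contains nothing but the assertion being made.
claim = ~s({"subject":"did:example:alice","claim":"age_over_18"})

# The issuer (e.g. a government agency) holds the private key.
{pub, priv} = :crypto.generate_key(:ecdh, :secp256r1)
signature = :crypto.sign(:ecdsa, :sha256, claim, [priv, :secp256r1])

# A verifier only needs the claim, the signature, and the issuer's public key.
:crypto.verify(:ecdsa, :sha256, claim, signature, [pub, :secp256r1])
#=> true
```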

In this case, it’s actually something the government is already doing in British Columbia, the province where Vancouver is. So this is happening for sure; it’s a useful thing. And when you think of Web3 and governments together, people usually go: “No, it’s the weird hackers that are doing Bitcoin.” No – governments are absolutely doing these things too.

It is a system that is local-first, so it doesn’t necessarily have to go to a database (in some cases yes, in some cases no), it’s self-verifying, and it protects more of your personal information, which I think is something we’re, as a society, coming to realize is really important. So yeah, absolutely: verifiable credentials and decentralized identifiers (DIDs), which typically go together, are absolutely part of Web3 broadly. There’s a lot of interesting work happening on both of these at the Decentralized Identity Foundation (DIF).

And part of why I think people have a hard time with the term Web3 is that the scams and the coins are the ones getting all the media attention, and in the past couple of months in particular, the term has shifted to mean more of that, whereas it used to be broader. I’m hoping we get back to the broader term, because there are all these other interesting things that have nothing to do with those other bits.

We use DIDs pretty heavily at FISSION, both to make apps work completely offline and to reverse the control: the user creates a key pair in the browser – the Web Crypto API lets you have a non-exportable key, so nobody can take your key and run off with it; it’s more secure, at least – and then they register with other services. So they’ll register with, say, FISSION’s service to say: “Hey, please store my stuff”, which is all encrypted, and the user has encrypted it.

So it ends up being both of these: things like verifiable credentials, which are a form of digital scarcity – you can say that this is an individual person, I can verify it directly, and nobody else has that – and also the privacy and user-control aspects, which are really core to Web3.

Jonn: Yeah, yeah, and I asked about verifiable credentials for a good reason. Recently, I was trying to understand how we are doing authentication in 2021, and I was reading: “well, okay, we still have these identity providers” and I was like: “What? Why? What does it even mean? Why is someone providing my identity to me?”

And I mean, if you think about how people say that we need to go passwordless, etc. – I think that this is all irrelevant. What is relevant is, as you say, seeing who has to own which data. Like, my data has to be owned by me, and we have very simple cryptographic primitives to allow me to do so.

[…]

I mean, again, I don’t know much about Web3, but it doesn’t seem like Web3 people are evil. Look at Brooklyn, how can you say that? [both laugh]

Brooklyn: That’s what we want you to think. [both laugh again]


We would like to thank Brooklyn for the interview! If you would like to hear more from her, you can follow her on Twitter.

To read more of our interviews with developers, compiler experts, and people using functional programming in production, head to our interview section or follow us on Twitter to stay updated about new interviews and podcast episodes.
