Haskell in Production: Textcontent

In our Haskell in Production series, we interview developers and technical leaders from companies that use Haskell for real-world tasks. We cover benefits, downsides, common pitfalls, and tips for building useful Haskell products.

Today’s guest is Marc Scholten, the Founder and CEO of digitally induced. In this interview, we discuss their product, Textcontent: a text-generating tool that helps marketers create a distinct style and tone of voice. Marc explains how it differs from other options on the market and how they use Haskell in the product.

Interview with Textcontent


Since our last interview, you’ve started working on a new product, Textcontent. Could you please tell us how the idea was born and what the product looks like today?

At the end of 2022, we started looking deeper into the AI space. This was a few months before OpenAI released ChatGPT.

We played around with the OpenAI APIs and initially wanted to build something specifically outside the dev tooling space. We came up with the idea of a tool that helps ecommerce stores automatically generate ads for their product catalog. While exploring the idea, we showed a very early demo to some of our existing customers. Initially, we avoided building ecommerce integrations to save some time. But this choice led us to discover that the more interesting use case for the product is actually social media marketing, and marketing in general.

Then ChatGPT was launched, which helped a lot in educating our B2B audience on what is possible right now.

Today, Textcontent is a comprehensive B2B marketing tool that uses AI to generate marketing content across a variety of platforms. It’s used by more than 1,000 people across 300 businesses. Some common use cases include:

  • creating marketing posts for LinkedIn;
  • drafting newsletters and press releases;
  • creating texts for ecommerce product pages;
  • creating content for marketing websites.

Initially, we wanted to use lots of different AI models. We quickly learned that our customers don’t really care about the AI model behind the scenes, so we stuck with the latest available GPT versions.

From a technical perspective, you could say that Textcontent is “just a GPT wrapper” app. Aravind Srinivas, Cofounder of Perplexity.ai, recently mentioned in his talk at Stripe Sessions that Perplexity.ai was also just a GPT-3.5 wrapper in the beginning. He said, “There’s no shame in being a GPT wrapper.” In the end, what matters is that we build something people want, not that a custom proprietary AI powers it. (Most of SaaS is also technically just a Postgres wrapper.)

On the other hand, it’s often the case that the technical sophistication of a product is inversely correlated with its commercial success. Textcontent is something of a proof of that.

Why did you choose Haskell as the programming language for Textcontent?

I wrote the first prototype of the product myself. So the first choice was pretty much what I was familiar with.

How large is your Haskell team?

1–2 people. It’s a small but highly profitable team. The product is simple enough that it doesn’t require a large team.

How does Textcontent utilize Haskell’s functional programming paradigm to implement machine learning algorithms, and what advantages does this approach offer over imperative programming styles?

Given the small team size and no venture capital, we don’t have enough resources to implement proprietary machine learning algorithms. Technically, we’re truly just a GPT wrapper.

Here are some specific advantages of Haskell we’ve seen:

  • Our problem space is mostly a compiler problem. (When Haskell is your tool, everything seems to be a compiler problem somehow :D)
  • We take in user configuration, such as the content type (a LinkedIn post, for example), the language, the brand voice, and knowledge about the user’s company and products, and then compile that into a GPT prompt. This is fundamentally just compiling a data structure to a prompt string (see the first sketch after this list).
  • When we launched initially, the limited GPT context size was a big challenge. Users entered long strings into our system, and the compiled GPT prompt then overflowed its maximum size. Thanks to Haskell’s type system, we could easily refactor our compiler function to deal with this.
  • Another early challenge was that OpenAI’s API was very unreliable as they scaled up ChatGPT. Our requests often returned some tokens and then crashed in the middle of generating a sequence of text. Users reported that their text was nearly finished generating when everything was deleted from the screen and the system started generating from the beginning. Haskell’s powerful library ecosystem let us write our own OpenAI bindings that restart such a failed request while keeping the already generated tokens. So when a request fails after generating 90% of the required tokens, the retried request only needs to produce the remaining 10% (see the second sketch below).
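
To make the compiler framing concrete, here is a minimal sketch of what compiling user configuration into a prompt could look like. All type and field names are hypothetical illustrations, not Textcontent’s actual code.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Text (Text)
import qualified Data.Text as Text

-- Hypothetical configuration record: content type, language, brand voice,
-- and company knowledge, mirroring the inputs described above.
data ContentType = LinkedInPost | Newsletter | PressRelease

data PromptConfig = PromptConfig
    { contentType :: ContentType
    , language    :: Text
    , brandVoice  :: Text
    , companyInfo :: Text
    }

-- "Compile" the configuration data structure down to a single prompt string.
compilePrompt :: PromptConfig -> Text
compilePrompt config = Text.unlines
    [ "Write a " <> describe (contentType config) <> " in " <> language config <> "."
    , "Use the following brand voice: " <> brandVoice config
    , "Background about the company: " <> companyInfo config
    ]
  where
    describe LinkedInPost = "LinkedIn post"
    describe Newsletter   = "newsletter"
    describe PressRelease = "press release"
```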
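
The retry behavior from the last point can be sketched as well. This is a simplified illustration of the idea rather than the actual ihp-openai code; requestCompletion is a hypothetical stand-in for a streaming OpenAI call that continues from an already generated prefix.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Text (Text)

-- A streaming request either finishes, or fails after producing a prefix.
data StreamResult = Finished Text | FailedAfter Text

generateWithRetry
    :: Int                        -- remaining retries
    -> Text                       -- tokens accumulated so far
    -> (Text -> IO StreamResult)  -- request a completion continuing the given prefix
    -> IO Text
generateWithRetry retries accumulated requestCompletion = do
    result <- requestCompletion accumulated
    case result of
        Finished rest -> pure (accumulated <> rest)
        FailedAfter partial
            | retries > 0 ->
                -- Keep the tokens we already have; only regenerate the rest.
                generateWithRetry (retries - 1) (accumulated <> partial) requestCompletion
            | otherwise -> pure (accumulated <> partial)
```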

Could you describe the process of developing and training ML models within the Haskell environment? Haskell isn’t typically the first choice for ML projects; ecosystems like Python have a much larger choice of libraries. How do you overcome this challenge?

Given that we didn’t train our own models, I’ll answer this question in a different way: we definitely had trouble finding libraries for certain kinds of tasks. For example, at some point we needed to figure out the token count of a string, and there was no Haskell library for that. We had to write those functions ourselves: we looked up an existing JavaScript library and used GPT to rewrite the JS functions in Haskell. It required some manual changes, but it worked.
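
As an illustration of the shape of such a function, here is a crude stand-in. It only uses OpenAI’s rule of thumb that one token is roughly four characters of English text; it is a heuristic, not the ported tokenizer functions the team actually wrote.

```haskell
import Data.Text (Text)
import qualified Data.Text as Text

-- Crude token-count estimate: roughly one token per four characters of
-- English text, bounded below by the word count. A real implementation
-- needs the model's actual tokenizer vocabulary and merge rules.
estimateTokenCount :: Text -> Int
estimateTokenCount text = max wordBased charBased
  where
    charBased = Text.length text `div` 4
    wordBased = length (Text.words text)
```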

Another library we were missing was one for connecting to our Pipedrive CRM. Whenever a new user signs up through the self-service flow, we automatically create a new deal in Pipedrive so that someone from sales reaches out. Again, we had to write our own functions for this, but it was pretty simple, as it’s just a few HTTP calls.
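
As a sketch of how small such an integration can be, here is what creating a deal could look like with http-conduit. The endpoint and payload follow Pipedrive’s public “add a deal” API, but treat the details as an assumption rather than Textcontent’s actual code.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (object, (.=))
import Network.HTTP.Simple

-- Create a deal in Pipedrive so that sales can follow up on a new signup.
createDeal :: String -> String -> IO ()
createDeal apiToken dealTitle = do
    request <- parseRequest
        ("POST https://api.pipedrive.com/v1/deals?api_token=" <> apiToken)
    let requestWithBody = setRequestBodyJSON (object ["title" .= dealTitle]) request
    response <- httpNoBody requestWithBody
    print (getResponseStatusCode response)
```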

The existing OpenAI binding libraries in the Haskell ecosystem didn’t compile when we tried to use them, so we eventually had to write our own, which we later extracted into ihp-openai.
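
At its core, such a binding boils down to a single authenticated POST. The sketch below shows the general shape of a request against OpenAI’s chat completions endpoint; it is not the ihp-openai API itself.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (Value, object, (.=))
import qualified Data.ByteString.Char8 as BS
import Network.HTTP.Simple

-- Minimal chat completion request: a JSON POST with a bearer token.
chatCompletion :: String -> String -> IO Value
chatCompletion apiKey userMessage = do
    initialRequest <- parseRequest "POST https://api.openai.com/v1/chat/completions"
    let body = object
            [ "model"    .= ("gpt-3.5-turbo" :: String)
            , "messages" .=
                [ object [ "role"    .= ("user" :: String)
                         , "content" .= userMessage ] ]
            ]
        request = setRequestBodyJSON body
            $ addRequestHeader "Authorization" (BS.pack ("Bearer " <> apiKey))
              initialRequest
    response <- httpJSON request
    pure (getResponseBody response)
```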

How does your team ensure code quality and maintainability in Haskell, particularly for complex AI algorithms?

Code quality is not really a concern for us; speed of product development and time-to-market are. Thanks to Haskell, we could ship new features really quickly and keep up with the rapidly changing market we’ve found ourselves in.
