Catching Runtime Errors at Compile Time with TypeScript

Catching runtime errors at compile time - with TypeScript?

Hi, I’m Domagoj Cerjan. I work for Oradian, a SaaS company that enables financial institutions in some of the poorest and most remote regions of the world to reap the benefits of moving their operations to the cloud. I’m part of the frontend team, where I vigilantly bore my colleagues with FP and dabble in all things TypeScript, React and Redux. In my free time, I develop 3d rendering engines for fun, and since that is a domain I know well, I often try out different languages and approaches to solve the same problem over and over again.

Intro

Today, NVIDIA’s high-end GPUs can do real-time raytracing, open-source AMD GPU drivers rock, and cross-platform gaming is more of a reality than a dream. Yet a few things remain as they were decades ago: 3d graphics is a domain where languages such as C/C++ have been and still are used almost exclusively for writing game engines, and GPU interfacing APIs are still plentiful - from the old and tried varieties of OpenGL and DirectX 9-11 to the modern and extremely low-level Vulkan, DirectX 12 and Apple’s Metal.

While C and C++ do provide us with the ability to write really fast programs, they fall short in a lot of other areas, from memory safety, debuggability and ease of use to portability and flaky type systems. Even though one can do wonders with C++ templates, which are in themselves a pure FP language with a weird syntax, these are not languages someone can pick up in a week or two.

With cross-platform in mind, one could argue that there is but one platform available to everyone, on any flavour of Linux, Unix or Windows, be it on mobile phones, tablets, personal computers or even washing machines and Roombas (because IoT). That platform is, of course, the Web. Developing 3d accelerated games for the web is nothing new: WebGL has been with us for some time, WebGL 2.0 is already supported in the newest Firefox, Chrome and Opera, and there already exist well battle-tested engines such as Three.js - or one could even use Emscripten to port any of the C/C++ engines to the web, more or less.

But what if we want to write an engine using something more modern, specifically targeting the web browser as the environment and trying to make it as robust as possible? Well, I chose TypeScript because of its pretty powerful type system, awesome tooling and amazing integration with IDEs (emacs via tide, VSCode by itself, …), and WebGL 2.0 because this engine is my toy and will probably never see the light of day, so there are no issues with only supporting a few web browsers. I intentionally chose not to use Idris here, even though it compiles down to JavaScript, because I wanted to see how far I can push TypeScript’s type system.

The problem

After the lengthy intro, a little heads up - the WebGL 2.0 API is still a low-level-ish API dealing with a bunch of byte buffers, with very specific rules about which values can go into which arguments of functions, and about when those functions can even be invoked and in which order. It’s a mess for anyone who is used to writing and/or looking at pure functional code. In essence, 3d APIs are as far away from purely functional as you can possibly get. You are dealing with the outside world, after all ;)

Speaking of said rules - they often come from tables in the WebGL specification that define which combinations of arguments are valid for a certain API function. This post will deal with how to make one of the most commonly used functions - gl.texImage2D, the function that uploads an image to a texture - safe from runtime errors caused by not obeying those rules.

The Solution

First, to understand what can go wrong, let’s look at the gl.texImage2D signature:

void gl.texImage2D(
  target: GLenum, // Of interest
  level: GLint,
  internalformat: GLenum, // Of interest
  width: GLsizei,
  height: GLsizei,
  border: GLint, // **Must** be 0
  format: GLenum, // Of interest
  type: GLenum, // Of interest
  pixels: ArrayBufferView, // Of interest
  srcOffset: GLint
) // _May_ throw DOMException

Sources: MDN, Khronos WebGL 2.0 Specification

For simplicity, we will only look at internalformat and format, since what we apply to them can just as easily be applied to target, type and pixels.

The values of the format and type arguments depend on the value of the internalformat argument, which is in no way visible from the function prototype alone. To know which combinations of values are legal, one must take a look at the specification and carefully craft the code so it can never pass invalid values to gl.texImage2D, obeying the rules defined in the tables there.
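
To make the problem concrete, here is a small sketch (assuming a WebGL2RenderingContext named gl) of a call that type-checks perfectly well against the stock WebGL typings - every argument of interest is just a GLenum, i.e. a number - and still fails at runtime, because the spec pairs RGBA8 with format RGBA and type UNSIGNED_BYTE:

declare const gl: WebGL2RenderingContext

// Compiles without complaint: the compiler only sees numbers.
gl.texImage2D(
  gl.TEXTURE_2D,
  0,
  gl.RGBA8,                      // internalformat
  256, 256, 0,
  gl.RG,                         // illegal: RGBA8 requires gl.RGBA
  gl.FLOAT,                      // illegal: RGBA8 requires gl.UNSIGNED_BYTE
  new Uint8Array(256 * 256 * 4),
  0
) // generates INVALID_OPERATION at runtime instead of a compile-time error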

Now, dependent types come as a natural solution that lets us avoid writing a bunch of runtime code to make sure no invalid values get passed down. Thankfully, TypeScript has actually had a kind-of dependent type system since TypeScript 2.8.

TypeScript Conditional Types

Let’s assume we have a renderer capable of rendering Mesh, Light and ParticleEmitter objects to the screen:

type Renderable = Mesh
                | Light
                | ParticleEmitter

const renderer = (context: WebGLRenderingContext) =>
  (renderables: Renderable[]): void =>
    { ... }

Now, some time passes and we get another entity our renderer should deal with - Gizmo - but only if the renderer was created from a WebGL 2.0 context. How do we deal with that without having to do nasty runtime checks and throw exceptions?

Welcome conditional types to the scene:

type WebGL1Renderable = Mesh
                      | Light
                      | ParticleEmitter

type WebGL2Renderable = WebGL1Renderable
                      | Gizmo

// actual conditional type
type Renderable<Context> = Context extends WebGL2RenderingContext ? WebGL2Renderable
                         : Context extends WebGLRenderingContext  ? WebGL1Renderable
                         : never
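
As a quick sanity check, here is how Renderable resolves for a few concrete contexts (these aliases are purely illustrative):

type ForWebGL2 = Renderable<WebGL2RenderingContext>   // = WebGL2Renderable
type ForWebGL1 = Renderable<WebGLRenderingContext>    // = WebGL1Renderable
type ForCanvas = Renderable<CanvasRenderingContext2D> // = never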

And then enforce the context type via:

type ValidContext = WebGLRenderingContext
                  | WebGL2RenderingContext
                  | CanvasRenderingContext2D

const renderer = <Context extends ValidContext>(context: Context) =>
  (renderables: Renderable<Context>[]): void =>
    { ... }

As an added extra, that never will cause an additional compile-time type error if we provide a CanvasRenderingContext2D as the context to renderer:

const webgl1renderer = renderer(canvas.getContext('webgl')!)
webgl1renderer([Mesh('a mesh'), Light('a light')])
// no error
webgl1renderer([Mesh('a mesh'), Light('a light'), Gizmo('a gizmo')])
// error -> Gizmo is not found in the type union of WebGL1Renderable

const canvas2drenderer = renderer(canvas.getContext('2d')!)
canvas2drenderer([Mesh('a mesh'), Light('a light')])
// error -> Mesh and Light do not have anything in common with 'never'

Conditional types are a powerful tool that gives us the freedom to ignore some runtime errors, because the compiler will not even allow us to write a codepath that would lead to said runtime errors in the first place. But we have a problem: for our gl.texImage2D function, we want to type-check against values, not types.

Well, we need to cheat a bit now. Welcome the TypeScript enum to the scene - together with literal values, one of the few language constructs that live in both the type and value realms.

TypeScript Enums

Like many other languages, TypeScript supports enums. A TypeScript enum is nothing more than a dictionary of key-value pairs where, by default, the values are numbers starting from 0, unless they are explicitly specified as other numbers or as strings. In order to bring GLenum values, which are nothing more than unsigned integers, into the type realm, we will create an enum.
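
To illustrate with a toy enum (not part of the engine code), each member has a concrete runtime value and is, at the same time, its own literal type:

enum Direction {
  Up,        // 0 by default
  Down,      // 1
  Left = 10, // explicitly specified
  Right      // 11, continues from the previous value
}

const d: Direction = Direction.Up // value realm: d === 0 at runtime
type OnlyUp = Direction.Up        // type realm: a type with exactly one value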

NOTE: This part becomes tedious since it boils down to copying what is defined in the spec tables into the code via enums:

For the internalformat argument we use this enum:

enum PixelInternalFormat {
  // Sized Internal Formats
  // rgba
  RGBA4   = 0x8056,
  RGBA8   = 0x8058,
  RGBA8I  = 0x8D8E,
  RGBA16I = 0x8D88,
  ... // and so on and so on
}

For the format argument we use this enum:

enum PixelFormat {
  LUMINANCE_ALPHA = 0x190A,
  LUMINANCE       = 0x1909,
  ALPHA           = 0x1906,
  RED             = 0x1903,
  ... // and so on and so on
}

For the type argument we use this enum:

enum PixelType {
  UNSIGNED_SHORT_5_6_5 = 0x8363,
  BYTE                 = 0x1400,
  FLOAT                = 0x1406,
  ... // and so on and so on
}

Putting it all together

Armed with conditional types and a bunch of enums, we now have everything we need to implement a function that will cause compile-time type errors when we try to pass a wrong combination of values to the internalformat, format and type arguments.

Implementing the AllowedPixelFormat dependent type:

type SizedRGBAPixelFormat = PixelInternalFormat.RGBA8
                          | PixelInternalFormat.RGBA16F
                          | PixelInternalFormat.RGBA32F
                          | ... // and a bunch more

type AllowedPixelFormat<P extends PixelInternalFormat> =
  P extends SizedRGBAPixelFormat ? PixelFormat.RGBA :
  P extends SizedRGBPixelFormat  ? PixelFormat.RGB  :
  P extends SizedRGPixelFormat   ? PixelFormat.RG   :
  P extends SizedRedPixelFormat  ? PixelFormat.RED  :
  ... // and a bunch more
  never
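
The wrapper below also relies on AllowedPixelType and AllowedPixelBuffer, whose full definitions are elided in this post. A minimal sketch in the same spirit could look like this - the concrete rows, such as RGBA8 allowing UNSIGNED_BYTE, or RG16F allowing FLOAT or HALF_FLOAT, come straight from the spec tables:

type AllowedPixelType<P extends PixelInternalFormat> =
  P extends PixelInternalFormat.RGBA8 ? PixelType.UNSIGNED_BYTE :
  P extends PixelInternalFormat.RG16F ? PixelType.FLOAT | PixelType.HALF_FLOAT :
  ... // one branch per spec table row
  never

// The buffer type follows from the pixel type alone in this sketch;
// the channel count from the format only affects the required buffer
// length, which the type system cannot see.
type AllowedPixelBuffer<F extends PixelFormat, T extends PixelType> =
  T extends PixelType.FLOAT         ? Float32Array :
  T extends PixelType.HALF_FLOAT    ? Uint16Array  :
  T extends PixelType.UNSIGNED_BYTE ? Uint8Array   :
  ... // and so on for the remaining pixel types
  never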

And the safe function wrapper:

const safeTexImage2D = (gl: WebGL2RenderingContext) => <
  InternalFormat extends PixelInternalFormat,
  Format         extends AllowedPixelFormat<InternalFormat>,
  Type           extends AllowedPixelType<InternalFormat>,
  Buffer         extends AllowedPixelBuffer<Format, Type>
>(
  pixelInternalFormat: InternalFormat,
  pixelFormat:         Format,
  pixelType:           Type,
  width:               number,
  height:              number,
  pixels:              Buffer
): void =>
  gl.texImage2D(
    gl.TEXTURE_2D, // target, for simplicity just set to gl.TEXTURE_2D
    0, // level
    pixelInternalFormat, // internalformat
    width, // width, shocking
    height, // height, also shocking :)
    0, // border, which must be 0 because the specs say so
    pixelFormat, // pixel format, dependent on pixelInternalFormat
    pixelType, // pixel type, dependent on pixelInternalFormat
    pixels, // pixels buffer, its type dependent on both pixelFormat and pixelType
    0, // offset
  )

And all that is left now is to call this safe wrapper function and witness the awesomeness of dependent types in TypeScript:

const texImage2D = safeTexImage2D(canvas.getContext('webgl2')!)

// legal
texImage2D(PixelInternalFormat.RGBA, PixelFormat.RGBA, PixelType.UNSIGNED_BYTE, 256, 256, validBuffer)
// illegal
texImage2D(
  PixelInternalFormat.RG16F, // constraint to FLOAT | HALF_FLOAT comes from here
  PixelFormat.RG,
  PixelType.UNSIGNED_BYTE, /*
  ^---------------------- Argument of type 'PixelType.UNSIGNED_BYTE' is not assignable to
                          parameter of type 'PixelType.FLOAT | PixelType.HALF_FLOAT' */
  256,
  256,
  validBuffer, // has to be exactly 256 * 256 * 2 * (2 | 4) bytes big:
               // 2 channels from PixelFormat.RG,
               // 2 or 4 bytes per channel from HALF_FLOAT or FLOAT
)

Conclusion

By applying the same approach, it is possible to create safe wrapper functions over unsafe functions coming not only from WebGL but from other commonly used APIs with complex rules that are not immediately visible from the function signatures. The power to prove that if the code compiles, it will not end up in a codepath that might result in weeks’ worth of debugging headaches is priceless in the world of large codebases and should be appreciated.

The fact that even TypeScript, a typed dialect of JavaScript, is headed towards dependent types in its type system serves as a strong indicator that dependent types are the future.

And remember, catch errors at compile time, not at runtime :)
