Racket, OpenGL, and You

The Racket Situation

Racket is great. Odds are, if you are reading this, you're here more because of the word "Racket" in the title and less because of the word "OpenGL". And that is great. But if so, you may be disappointed to find that this post has very little to do with Racket.

So here's the good news: you absolutely can render to a racket/gui canvas in real time with a modern OpenGL context, using all of the latest OpenGL features your machine supports, and it works great.

The catch is that Racket is barely involved in the process. I would encourage you to consider this to be a good thing as well. Real time rendering is very time sensitive, and any source of extra overhead in your hot path adds up fast. If you are looking to render at 60 fps, you have a time budget of about 16.6 milliseconds to draw a frame. You can free up some time by dropping to a lower frame rate, but this has the cost of worse temporal aliasing (objects in motion have choppy movement), longer input latency, physical discomfort, and so on. If you drop below 15 fps, your application starts to look more like a slide show than animation. Personally, I don't recommend dropping below 30 fps. Racket is fast, but it isn't as fast as C++, and you have to pay a cost every time you hit the FFI. When you issue rendering commands too far apart, you're spending part of your frame budget with the GPU idle. Racket has a lot of invisible performance gotchas that are very hard to pin down, like contracts and implicit conversions.

Racket is an incredibly powerful tool when you use it for its strengths. And so are low level languages like C++. So if nothing else, what I hope you take away from this is that you have to think about rendering code in terms of hot paths and cold paths. The hot paths are anything that issues rendering work to the GPU or might otherwise stall the GPU. The cold paths are everything else.

With all that in mind, the goal here is that Racket owns the high level UI via racket/gui, and Racket performs high level functions like handling user input, loading resources, creating procedural content, running your game sim, and so on. Some of these functions are colder than others, but the renderer is the hot path, and we want that implemented in a backend and running largely on its own thread.

Cold Start

The first thing you need to do before you can do any rendering is create your OpenGL context. This is really easy in racket/gui:

(define gl-config (new gl-config%))
(send gl-config set-legacy? #f)
(send gl-config set-sync-swap #t) ; Set this false if you don't want vsync.

(define canvas
  (new canvas%
       [parent your-frame-here]
       [style (list 'gl 'no-autoclear)]
       [gl-config gl-config]
       [min-width 200]
       [min-height 200]))

And that is pretty much it. There are a few other configuration options available on the gl-config% object, most of which pertain to the format of your back buffer.

Some options aren't exposed, and you will have to live with them or get creative. For example, if you have vsync on, as I do above, the swap interval is always set to 1, which means vsync expects you to produce a new frame for every display refresh (typically 60 Hz). In this case, that is usually what you want anyway, so whatever.

If you need to get down to brass tacks with how the context creation is implemented, you can find the Windows implementation here, the X11 implementation here, and (I'm not totally sure) the OSX implementation here.

The different platform implementations are inconsistent! The GLX (X11) variant will attempt to set up an OpenGL 4.5 context, and if that fails, it'll count backwards to 1.5 until it finds a version your machine accepts. This may be very surprising if you absolutely require features that are only available in, say, 4.6 core. Meanwhile, the wGL (Windows) variant never sets up a core context, so you will always get a legacy (1.5) context, which will confuse GPU debugging tools. Both of these problems can be worked around, which will be discussed later.

I don't have an OSX machine, so unfortunately at this time I cannot say what pitfalls you will encounter, or how to overcome them.

Warming Up

The remainder of the OpenGL initialization is going to happen in a C++ backend. To start, you only need to expose one function to Racket, which Racket should call after showing your canvas widget to initialize OpenGL. When Racket calls this function, the call must be wrapped in call-as-current (a method on the canvas's GL context).

When your backend's setup function is called, it will need to do three things: first, create something called a shared context; second, use an OpenGL library loader to finish setting up OpenGL; and third, set up your render thread. I highly recommend using Glad to generate the library loaders for OpenGL, GLX, and wGL. A shared context is an OpenGL context that is able to share resources (such as the back buffers you already created) with an existing OpenGL context. Shared contexts also allow OpenGL to be used in a thread-safe fashion, as described below.
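Tying those together, your backend's entry point might be laid out roughly like the sketch below. The name SetupRenderer is my own, and you will also need whatever export decoration your toolchain wants (for example __declspec(dllexport) on Windows) so Racket's FFI can find the symbol. The body of each step is covered in the rest of this section.

// Hypothetical single entry point that Racket calls (wrapped in call-as-current)
// once the canvas has been shown.
extern "C" void SetupRenderer()
{
    // 1. Load the platform's context creation extensions with glad
    //    (gladLoadWGL / gladLoadGLX) and create a shared context from
    //    the context Racket made current.
    // 2. Load the OpenGL entry points with gladLoadGL.
    // 3. Start the render thread (see "Getting HOT" below).
}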

On Windows, you will need to call wglGetCurrentDC and wglGetCurrentContext to access the device context and rendering context that you created earlier in Racket. These can be called before gladLoadWGL, and glad_wgl.h pulls in the necessary headers for them, but you will need to link against opengl32.lib. Your code will probably look something like this so far; adjust as needed:

#if _WIN64
HGLRC UpgradedContext;
HDC RacketDeviceContext = wglGetCurrentDC();
HGLRC RacketGLContext = wglGetCurrentContext();
if (gladLoadWGL(RacketDeviceContext))
{
    std::vector<int> ContextAttributes;

    // Request OpenGL 4.2
    ContextAttributes.push_back(WGL_CONTEXT_MAJOR_VERSION_ARB);
    ContextAttributes.push_back(4);
    ContextAttributes.push_back(WGL_CONTEXT_MINOR_VERSION_ARB);
    ContextAttributes.push_back(2);

    // Request Core Profile
    ContextAttributes.push_back(WGL_CONTEXT_PROFILE_MASK_ARB);
    ContextAttributes.push_back(WGL_CONTEXT_CORE_PROFILE_BIT_ARB);

    // Terminate attributes list.
    ContextAttributes.push_back(0);

    UpgradedContext = wglCreateContextAttribsARB(RacketDeviceContext, RacketGLContext, ContextAttributes.data());
    wglMakeCurrent(RacketDeviceContext, UpgradedContext);
}
else
{
    // TODO: Halt and catch fire.
}
#endif

if (gladLoadGL())
{
    std::cout << "Qapla'!\n";
    std::cout << glGetString(GL_RENDERER) << "\n";
    std::cout << glGetString(GL_VERSION) << "\n";
}

If all goes well, you'll have a shared context to use for your actual rendering, and the OpenGL version your application needs all set up and ready to go. The next step is to create your render thread.

Wait, what about Linux?

GLX is very similar, but requires a little more work to get right. The biggest caveat here is that X11 is not thread safe, and therefore GLX is not thread safe 😭. Issuing OpenGL commands on a separate OS thread will crash when racket/gui attempts to do GTK stuff at the same time. I'll talk about this a bit more in the next section.

As described by this racket/gui code comment, there is also an oddity where GLX will raise an X error if you call glXCreateContextAttribsARB requesting an OpenGL version that is not available on your system. You should set up an appropriate error handler so you can shut down your application gracefully if the OpenGL version you need is not available; a sketch of this follows the GLX code below.

As for the GLX APIs themselves, they're mostly the same as their wGL counterparts: glXGetCurrentDisplay and glXGetCurrentDrawable together replace wglGetCurrentDC, glXGetCurrentContext replaces wglGetCurrentContext, glXMakeCurrent replaces wglMakeCurrent, glXCreateContextAttribsARB replaces wglCreateContextAttribsARB, and glXSwapBuffers replaces SwapBuffers.

There's some additional bookkeeping you have to do. You'll need to call glXQueryContext to query the GLX_SCREEN and GLX_FBCONFIG_ID values from the current context, and then pass that GLX_FBCONFIG_ID into glXChooseFBConfig to retrieve the matching GLXFBConfig. Don't forget to call XFree on glXChooseFBConfig's return value once you copy out the data you need from it. glXCreateContextAttribsARB, glXMakeCurrent, and glXSwapBuffers work pretty much the same as their wGL counterparts, but you have to pass in a little more info. So for Linux, your code will look something like this:

#elif defined(__GNUC__)
Display* RacketDisplay;
GLXDrawable RacketDrawable;
GLXContext UpgradedContext;
if (gladLoadGLX(nullptr, 0))
{
    RacketDisplay = glXGetCurrentDisplay();
    RacketDrawable = glXGetCurrentDrawable();
    GLXContext RacketGLContext = glXGetCurrentContext();

    int Screen;
    int ConfigId;
    glXQueryContext(RacketDisplay, RacketGLContext, GLX_SCREEN, &Screen);
    glXQueryContext(RacketDisplay, RacketGLContext, GLX_FBCONFIG_ID, &ConfigId);

    std::vector<int> ConfigAttributes;
    ConfigAttributes.push_back(GLX_FBCONFIG_ID);
    ConfigAttributes.push_back(ConfigId);
    ConfigAttributes.push_back(None);

    int Count;
    GLXFBConfig Config;
    GLXFBConfig* Found = glXChooseFBConfig(RacketDisplay, Screen, ConfigAttributes.data(), &Count);
    if (Count == 1)
    {
        Config = *Found;
        XFree(Found);
    }
    else
    {
        XFree(Found);
        // TODO: Halt and catch fire.
    }

    std::vector<int> ContextAttributes;

    // Request OpenGL 4.2
    ContextAttributes.push_back(GLX_CONTEXT_MAJOR_VERSION_ARB);
    ContextAttributes.push_back(4);
    ContextAttributes.push_back(GLX_CONTEXT_MINOR_VERSION_ARB);
    ContextAttributes.push_back(2);

    // Request Core Profile
    ContextAttributes.push_back(GLX_CONTEXT_PROFILE_MASK_ARB);
    ContextAttributes.push_back(GLX_CONTEXT_CORE_PROFILE_BIT_ARB);

    // Terminate attributes list.
    ContextAttributes.push_back(0);

    // TODO: Wait what was that about a XSetErrorHandler...! 😵
    UpgradedContext = glXCreateContextAttribsARB(RacketDisplay, Config, RacketGLContext, true, ContextAttributes.data());
    if (UpgradedContext == NULL)
    {
        // TODO: Halt and catch fire.
    }
    glXMakeCurrent(RacketDisplay, RacketDrawable, UpgradedContext);
}
else
{
    // TODO: Halt and catch fire.
}
#endif

if (gladLoadGL())
{
    std::cout << "Qapla'!\n";
    std::cout << glGetString(GL_RENDERER) << "\n";
    std::cout << glGetString(GL_VERSION) << "\n";
}
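As for that XSetErrorHandler TODO: here's a rough sketch of installing a temporary X error handler around the glXCreateContextAttribsARB call. The handler and flag names are my own invention; the rest reuses the variables from the snippet above.

// Hypothetical handler: record the failure instead of letting Xlib kill the process.
static bool ContextCreationFailed = false;
static int ContextErrorHandler(Display* ErrorDisplay, XErrorEvent* Event)
{
    ContextCreationFailed = true;
    return 0;
}

// Around the context creation call:
int (*OldHandler)(Display*, XErrorEvent*) = XSetErrorHandler(ContextErrorHandler);
UpgradedContext = glXCreateContextAttribsARB(RacketDisplay, Config, RacketGLContext, true, ContextAttributes.data());
XSync(RacketDisplay, False); // Make sure any pending error is delivered before we check the flag.
XSetErrorHandler(OldHandler);
if (ContextCreationFailed || UpgradedContext == NULL)
{
    // The OpenGL version you need is not available; shut down gracefully.
}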

Getting HOT

So, the render thread. Once you have the library loaded, you'll want to start your render thread, which will finish the rest of your setup and then enter its steady state. The first thing the thread needs to do is call wglMakeCurrent (or its GLX equivalent) to associate the new rendering context with your new thread. The rest of the setup is whatever your application needs, e.g. compiling shaders. Finally, it enters a loop that runs until the application is shut down.

Or at least, I wish that were the whole story, but as I mentioned above, this isn't really possible on Linux due to thread safety issues, so you're going to have to make some compromises. Instead of having C++ create a thread where it loops on the rendering code, you'll have to have Racket call your renderer directly every frame. There are a lot of ways to do this, but everything I've tried so far results in about ±1 ms of noise in the time between frames. This approach probably won't result in dropped frames for simple projects, but it is problematic for more serious work.

There's no reason you can't do both though, and if you do, then you might end up with something like this:

// Created during setup, e.g. RenderThread = new std::thread(Renderer); on Windows.
std::thread* RenderThread;
std::atomic_bool RenderThreadLive = true;

void Renderer()
{
#if _WIN64
    wglMakeCurrent(RacketDeviceContext, UpgradedContext);
#elif defined(__GNUC__)
    glXMakeCurrent(RacketDisplay, RacketDrawable, UpgradedContext);
#endif

    // On Windows this is a loop; on Linux the body runs once per call from Racket.
#if _WIN64
    while (RenderThreadLive.load())
#endif
    {
        static int FrameNumber = 0;
        ++FrameNumber;

        double DeltaTime;
        double CurrentTime;
        {
            using Clock = std::chrono::high_resolution_clock;
            static Clock::time_point StartTimePoint = Clock::now();
            static Clock::time_point LastTimePoint = StartTimePoint;
            Clock::time_point CurrentTimePoint = Clock::now();
            {
                std::chrono::duration<double> FrameDelta = CurrentTimePoint - LastTimePoint;
                DeltaTime = FrameDelta.count();
            }
            {
                std::chrono::duration<double> EpochDelta = CurrentTimePoint - StartTimePoint;
                CurrentTime = EpochDelta.count();
            }
            LastTimePoint = CurrentTimePoint;
        }

        // Your Rendering Code Here!

#if _WIN64
        SwapBuffers(RacketDeviceContext);
#elif defined(__GNUC__)
        glXSwapBuffers(RacketDisplay, RacketDrawable);
#endif
    }
}


Sorry, I got really excited. I really love rendering.

Now What?

This code already manages buffer swapping, so do not do it in Racket as well. If you're just targeting wGL, there is no event scheduling required to make it work, and you'll get a buttery smooth responsive UI on both the racket/gui side and the OpenGL side. If you're targeting GLX, well, some experimentation will be needed to schedule your frame draws in a way that you are happy with. Marking the renderer function with #:in-original-place? #t and #:blocking? #t and repeatedly hammering it from a Racket place seems to work out ok, but maybe there is a better way.
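For the GLX path, the "renderer function" that Racket hammers can just be a thin per-frame export from the backend. A minimal sketch (the name RenderFrame is my own) might be:

#if defined(__GNUC__)
// Hypothetical per-frame entry point for Racket to call. With the while loop
// compiled out on Linux, Renderer() draws exactly one frame and swaps the buffers.
extern "C" void RenderFrame()
{
    Renderer();
}
#endif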

Beyond that, you'll probably want to add some more functions to your backend for communicating with Racket. For example, you'll need a way to shut down if the renderer encounters a fatal error, you'll want some kind of error reporting for debugging and for bug reports, and you'll want some way to pass in user input, resize events, and so on. How to best manage this is left as an exercise for the reader.
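For instance, a shutdown hook for the Windows render thread only needs to flip the atomic flag and join the thread. A rough sketch (the name ShutdownRenderer is my own, and it assumes RenderThread was created with new std::thread(Renderer) during setup) might be:

// Hypothetical shutdown hook for Racket to call when the frame closes.
extern "C" void ShutdownRenderer()
{
    RenderThreadLive.store(false);
    if (RenderThread)
    {
        RenderThread->join();
        delete RenderThread;
        RenderThread = nullptr;
    }
}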