The Dev Blog

Rendering Pixels in Rust with egui

Recently I've been trying out egui as a candidate framework for my SDR project, uda. There's a lot I like about egui, and I think its immediate-mode GUI is well suited to SDR, which tends to involve lots of visualizations.

One chart that uda will need is a waterfall plot, which visualizes signal energy across frequencies over time. egui does not seem to provide the waterfall plot primitive we'd need, so it makes sense to consider building our own. Specifically, we'll probably want to build a big contiguous buffer of pixels and then ask egui to show our buffer on the UI somewhere. Over time, as we collect more samples, we'll keep making gradual changes to our buffer and then refreshing the UI.
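Concretely, each refresh only needs to scroll the buffer by one row and write in the newest line. Something like this sketch is what I have in mind (push_row is a hypothetical helper of my own, not an egui API):

        // A rough sketch of the rolling-buffer update: drop the oldest row
        // of RGBA pixels and append the newest one at the bottom.
        fn push_row(pixels: &mut [u8], width: usize, new_row: &[u8]) {
            let row_bytes = width * 4; // RGBA: 4 u8s per pixel
            assert_eq!(new_row.len(), row_bytes);
            // Shift everything up one row, discarding the oldest.
            pixels.copy_within(row_bytes.., 0);
            // Write the newest row into the space freed at the bottom.
            let start = pixels.len() - row_bytes;
            pixels[start..].copy_from_slice(new_row);
        }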

This post walks through a few options I tried for rendering a waterfall plot on my UI. The same techniques should apply for anyone looking to splat some pixels onto their egui UI, including any sort of algorithmically derived pixel content: fractal renderers, Conway's Game of Life, and so on. This post will not get into fragment shaders or other GPU acceleration, although those would likely offer even better performance.

Approach 1: egui texture + ColorImage

My first attempt at putting some pixels on my UI was to build an egui texture and then update it every frame with texture.set().

In our setup:

        // Allocate a placeholder texture up front; we'll overwrite its
        // contents on every rendered frame.
        self.texture = cc.egui_ctx.load_texture(
            "my_texture",
            ColorImage::example(),
            egui::TextureOptions::NEAREST,
        );
        self.pixel_height = 800;
        self.pixel_width = 800;
        // Row-major RGBA buffer: 4 u8s per pixel.
        self.pixels = vec![0; self.pixel_width * self.pixel_height * 4];
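For reference, the snippets in this post assume an app struct roughly along these lines (the field names are my own; your layout may differ):

        // Hypothetical app state assumed by the surrounding snippets.
        struct WaterfallApp {
            // egui-managed texture we overwrite each frame.
            texture: egui::TextureHandle,
            // Waterfall dimensions in pixels.
            pixel_width: usize,
            pixel_height: usize,
            // Row-major RGBA pixel data: 4 u8s per pixel.
            pixels: Vec<u8>,
        }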

And then at every following rendered frame:

        // Update our rolling buffer of pixels elsewhere into self.pixels,
        // stored in RGBA format (4 u8s per pixel).

        // Now ask egui to render it.
        // Note that this calls .collect() internally, creating a copy of
        // the whole buffer.
        let image = egui::ColorImage::from_rgba_unmultiplied(
            [self.pixel_width, self.pixel_height],
            &self.pixels,
        );
        self.texture.set(image, egui::TextureOptions::NEAREST);
        let ui_image = egui::Image::new(&self.texture).max_width(800.0);
        ui.add(ui_image);

This does work, and it seems to be the commonly suggested way to get an image onto the UI. Unfortunately, it creates more churn than we need: even with a release build, this came at a moderate CPU cost on my laptop.

As a baseline, without the waterfall plot, the entire application as it exists so far sits at around 15% CPU utilization (of a single core). It's a very simple application, but it does gather samples from my RTL-SDR, run an FFT on them, and plot the FFT. Once the waterfall plot was added to this same application, single-core CPU utilization increased to 45-50%.

This is passable performance, but I wanted to see if I could put together something better.

Approach 2: Custom egui wgpu Integration

The issue I saw with the first approach was that we incurred more copies of our big buffer than we needed. The buffer only needs one new line of pixels written per refresh, so any full-buffer copy costs considerably more than the algorithm generating the actual content.

This led me to some searching. What we'd really prefer is to keep a texture handle that's backed directly by our buffer. egui (wisely) does not offer this, and it's likely this will lead us to writing some unsafe code.

I considered using this winit + egui + wgpu template as an alternative to eframe. This would be no small change: abandoning eframe would give us very fine-grained control of the rendering process, but to me it felt like more trouble than it would likely be worth for this single use case. I ultimately abandoned the idea, as I felt it would lead too far off the happy path for what is otherwise a pretty standard graphical interface.

Approach 3: frame.register_native_glow_texture

While researching the second approach, I noticed an interesting-sounding function in eframe: register_native_glow_texture. From the description, this could be just the thing we need. My searching turned up little in the way of usage examples, so I did a bit of experimenting to see if I could get it working for my waterfall plot.

First we do some setup:

        // Grab the glow context from eframe's CreationContext.
        // (These calls require the glow backend, with glow::HasContext in scope.)
        let glow = cc.gl.as_ref().unwrap();
        self.tex = unsafe {
            glow.create_texture().unwrap()
        };
        self.tex_id = None;

And then a little more work to render:

        let glow = frame.gl().unwrap();
        unsafe {
            // Here we can do some custom slicing or zooming on our pixels
            // if we want; I've omitted the slicing for clarity.
            let data = glow::PixelUnpackData::Slice(Some(&self.pixels));
            glow.bind_texture(glow::TEXTURE_2D, Some(self.tex));
            glow.tex_parameter_i32(glow::TEXTURE_2D, glow::TEXTURE_MIN_FILTER, glow::NEAREST as i32);
            glow.tex_parameter_i32(glow::TEXTURE_2D, glow::TEXTURE_MAG_FILTER, glow::NEAREST as i32);
            // Upload the whole buffer to the texture.
            glow.tex_image_2d(
                glow::TEXTURE_2D,
                0,                        // mipmap level
                glow::RGBA as i32,        // internal format
                self.pixel_width as i32,
                self.pixel_height as i32,
                0,                        // border (must be 0)
                glow::RGBA,               // source format
                glow::UNSIGNED_BYTE,      // source type
                data,
            );
        }

        if self.tex_id.is_none() {
            // Our newly discovered function call gives us an egui texture id
            // from our glow texture.
            self.tex_id = Some(frame.register_native_glow_texture(self.tex));
        }
        // Create an egui texture object pointing at our glow texture.
        let texture = egui::load::SizedTexture::new(
            self.tex_id.unwrap(),
            [self.pixel_width as f32, self.pixel_height as f32],
        );
        let image = egui::Image::new(texture).max_width(800.0);
        ui.add(image);

To make this work, we need just a very little bit of glow. If we were instead using the wgpu backend for eframe, I suspect a similar approach could work, but I haven't tested it. glow doesn't offer much in the way of documentation, but its functions behave nearly identically to their OpenGL counterparts, so the Khronos references make a good resource, e.g. glTexImage2D for glow's tex_image_2d.
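One caveat: a texture created by hand through glow isn't cleaned up automatically when the app closes. A minimal sketch of freeing it, assuming the glow backend (eframe hands the glow context to on_exit):

        // In our eframe::App implementation:
        fn on_exit(&mut self, gl: Option<&eframe::glow::Context>) {
            // Free the glow texture we created by hand; egui won't do it for us.
            if let Some(gl) = gl {
                unsafe { gl.delete_texture(self.tex) };
            }
        }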

After making this change, I measured CPU utilization again. This time, we clock in at 22% of a single core. That's still significantly higher than our no-plot baseline of 15%, but much better than the 45% of the first approach. For now, this seems like an acceptable cost for an important component. It's a bummer to have to reach for unsafe code, but the tradeoff seems worthwhile here. For smaller regions that can be drawn more cheaply, this method is probably not worth the hassle.
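One likely next step on the performance front would be to upload only the newest row each frame instead of re-uploading the whole texture. glow exposes tex_sub_image_2d (OpenGL's glTexSubImage2D) for exactly this kind of partial update. I haven't tested this in uda yet, but it would look roughly like:

        // Hypothetical partial update: re-upload only the row that changed.
        // Assumes the texture was already allocated once via tex_image_2d.
        let row = self.pixel_height - 1; // e.g. the freshly written bottom row
        let row_bytes = self.pixel_width * 4;
        let start = row * row_bytes;
        unsafe {
            glow.bind_texture(glow::TEXTURE_2D, Some(self.tex));
            glow.tex_sub_image_2d(
                glow::TEXTURE_2D,
                0,                  // mipmap level
                0,                  // x offset
                row as i32,         // y offset
                self.pixel_width as i32,
                1,                  // height: a single row
                glow::RGBA,
                glow::UNSIGNED_BYTE,
                glow::PixelUnpackData::Slice(Some(&self.pixels[start..start + row_bytes])),
            );
        }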

Hopefully this helps another dev who just wants to put some pixels on an egui window. I may return to this topic if I revisit the performance of this component, but next I hope to work on commands and interactivity for uda. Seeing the waterfall plot come to life has been a great moment in this project's development.