Learning Rust and WebGPU

I’ve been looking at getting more into graphics programming. So far I’ve mostly worked within game engines and dabbled a bit in OpenGL.

Why Choose WebGPU?

Vulkan: Super modern and powerful, but the API is really verbose and has a pretty steep learning curve. You get amazing performance and control, though.
DirectX 12: The go-to for Windows; modern and fast with lots of low-level control, but it’s Windows-only (though DirectX 11 is still pretty common).
OpenGL/WebGL: Been around forever and works everywhere, but has a lot more CPU overhead and lacks modern graphics features. No native compute shaders.
Metal: Apple’s graphics API. Super powerful with simpler syntax, but I don’t have a Mac.
WebGPU: A bit higher level, as it maps to modern APIs like Vulkan and Metal. Simpler syntax to get started, yet it still offers modern features and great performance versus WebGL. Great multi-platform targeting, and it easily runs in the browser, which will be nice for sharing projects! Some good documentation exists, luckily, but not nearly as much as for the more mature options.

C++ or Rust?

I had to decide between using C++ with Google’s Dawn implementation or Rust with the wgpu crate for my WebGPU journey. ...

September 26, 2025

Creating Volumetric Fog of War

While I was working on Inner Alliance, I needed to implement fog of war, but I realized a simple 2D screen effect wouldn’t cut it with a dynamic 3D camera. That led me down the path of raymarched fog. Here, we’ll take a look at how I built it in Unity URP.

Intro to Ray Marching

Raymarching is a powerful rendering technique that casts rays from the camera and steps along them to sample shapes or volumes. Instead of relying on complex meshes, it uses signed distance functions (SDFs) to represent objects like spheres, fog, or clouds. ...
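The post itself builds this in Unity shaders, but the core stepping loop is easy to sketch on the CPU. Here’s a minimal Rust sketch of a sphere-SDF raymarcher (function names and constants are my own illustration, not code from the post):

```rust
// Signed distance from point p to a sphere of the given radius at the origin.
fn sphere_sdf(p: [f32; 3], radius: f32) -> f32 {
    (p[0] * p[0] + p[1] * p[1] + p[2] * p[2]).sqrt() - radius
}

// March along a ray from `origin` in (normalized) direction `dir`, stepping by
// the distance the SDF reports; returns the hit distance if a surface is reached.
fn raymarch(origin: [f32; 3], dir: [f32; 3], radius: f32) -> Option<f32> {
    let mut t = 0.0_f32;
    for _ in 0..128 {
        let p = [
            origin[0] + dir[0] * t,
            origin[1] + dir[1] * t,
            origin[2] + dir[2] * t,
        ];
        let d = sphere_sdf(p, radius);
        if d < 1e-3 {
            return Some(t); // close enough: we hit the surface
        }
        t += d; // safe step: the SDF guarantees no surface closer than d
        if t > 100.0 {
            break; // ray escaped the scene
        }
    }
    None
}

fn main() {
    // Camera at z = -5 looking down +z toward a unit sphere at the origin.
    println!("{:?}", raymarch([0.0, 0.0, -5.0], [0.0, 0.0, 1.0], 1.0)); // hits near t = 4
    println!("{:?}", raymarch([0.0, 3.0, -5.0], [0.0, 0.0, 1.0], 1.0)); // misses: None
}
```

Fog volumes replace the hard hit test with density accumulated at each step, but the marching loop is the same idea.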

June 7, 2025

Shadow Detection: Why Render Textures Were a Mistake

During a game jam centered around the theme of light, I worked on a project that required a shadow detection system. You can check out the game here. The challenge was to detect when objects were in light or shadow and identify the closest shadow to an object. Initially, I thought render textures would be an elegant solution, but this approach turned out to be a mistake. Here’s why. The Initial Appeal of Render Textures The idea was to use a camera positioned above the scene to capture light and shadow data into a render texture. This texture could then be sampled to determine if an object was in shadow or to find the nearest shadow by analyzing surrounding pixels. It seemed promising because it provided a dynamic map of the environment, avoiding the performance cost of casting multiple rays while also letting us easily find the closest shadow. ...
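To make the sampling idea concrete, here’s a rough CPU-side sketch (in Rust for illustration, not the project’s actual Unity code; all names are mine): treat the captured render texture as a grid of shadow/light flags, check an object’s own cell, and scan for the nearest shadowed cell.

```rust
// A captured shadow map as a grid of flags: true = shadow, false = light.
// Returns the coordinates of the shadow cell nearest to (x, y), if any,
// by brute-force scan (fine for small maps; a spiral/ring search scales better).
fn nearest_shadow(grid: &[Vec<bool>], x: usize, y: usize) -> Option<(usize, usize)> {
    let mut best: Option<((usize, usize), i64)> = None;
    for (cy, row) in grid.iter().enumerate() {
        for (cx, &shadow) in row.iter().enumerate() {
            if !shadow {
                continue;
            }
            let dx = cx as i64 - x as i64;
            let dy = cy as i64 - y as i64;
            let d2 = dx * dx + dy * dy; // squared distance avoids a sqrt
            if best.map_or(true, |(_, b)| d2 < b) {
                best = Some(((cx, cy), d2));
            }
        }
    }
    best.map(|(pos, _)| pos)
}

fn main() {
    let grid = vec![
        vec![false, false, false, false],
        vec![false, false, false, true], // shadow at (3, 1)
        vec![false, false, false, false],
    ];
    // Object at (1, 1): is it in shadow, and where is the closest shadow?
    println!("in shadow: {}", grid[1][1]);
    println!("nearest: {:?}", nearest_shadow(&grid, 1, 1));
}
```

This captures why the approach looked appealing: one texture read answers “am I in shadow?”, and a neighborhood scan answers “where is the nearest shadow?” without casting rays.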

October 28, 2024

Real Time Rendering Notes

Some notes I’ve made while reading Real-Time Rendering, Fourth Edition.

Chapter 2 Summary: The Graphics Rendering Pipeline

Application → Geometry Processing → Rasterization → Pixel Processing

Overview

The graphics rendering pipeline is a core concept in real-time graphics. Its purpose is to generate a 2D image from a virtual 3D environment, which includes objects, light sources, and a virtual camera. The pipeline ensures that objects are appropriately rendered by processing their geometry, materials, light interactions, and textures, among other factors. ...

September 5, 2024