On the latest Nightshift Galaxy weekly dev stream I demonstrated Scaffold, the specialized level-editing tool I’m building inside the Unreal Level Editor.
I gave an impromptu introduction to my motivations and inspirations live, but on reflection the topic deserves a written deep-dive.
In designing tools I’m guided by three high-level goals:
- Productivity. As a solo developer, I need to automate as much of my workflow as possible, so I can focus where my specific skill-set adds value and avoid labor-intensive tasks that aren’t core to gameplay.
- Individuality. A risk of using a generic game engine, like Unreal or Unity, and common marketplace assets is producing a generic game. By developing unique systems, I can enable unique gameplay that has market differentiation.
- Performance. For an action game with a high skill-ceiling there is no “correctness vs. performance” tradeoff; 60+ FPS is a correctness requirement. Furthermore, in my experience, to confidently reach a larger audience on midrange commodity devices, you can’t simply chip away at perf problems in the profiler later – you must architect for performance from the start.
Scaffold addresses these goals by exposing (i) an interactive interior design tool that prioritizes power-user hotkeys and hackability, (ii) gameplay systems that supplement Unreal’s built-in collision and navigation with opinionated parallel systems, and (iii) data-structures (inspired by 90s game engines) that are efficient by-construction.
Background: Convex Decomposition
To explain the details of the system, we first need to establish some background in Convex Geometry that’s relevant to the design. In a nutshell, the convex-hull property describes any shape where the straight line segment connecting any two points within the shape is itself fully contained in the shape.
Sample Convex Shapes in 2D
This is in contrast to concave shapes that have notches or holes cut out of them.
Sample Concave Shapes in 2D
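To make the definition concrete, here’s a quick 2D convexity test – a simple polygon is convex exactly when every turn along its boundary bends the same way, which we can check with cross products. (A minimal sketch in plain C++; the names are my own.)

```cpp
#include <vector>

struct Vec2 { float x, y; };

// Cross product of (b - a) and (c - b): positive for a left turn,
// negative for a right turn, zero for collinear points.
static float Cross(const Vec2& a, const Vec2& b, const Vec2& c) {
    return (b.x - a.x) * (c.y - b.y) - (b.y - a.y) * (c.x - b.x);
}

// A simple polygon is convex if all turns along its boundary
// bend in the same direction.
bool IsConvex(const std::vector<Vec2>& poly) {
    bool left = false, right = false;
    const size_t n = poly.size();
    for (size_t i = 0; i < n; ++i) {
        float c = Cross(poly[i], poly[(i + 1) % n], poly[(i + 2) % n]);
        if (c > 0) left = true;
        if (c < 0) right = true;
    }
    return !(left && right); // mixed turn directions => concave
}
```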
Most spatial algorithms in games, particularly raytracing and pathfinding, implicitly rely on the convex-hull property by partitioning complex scenes into a network of connected convex-volumes through a process called Convex Decomposition.
In this data-structure, the faces of the volumes fall into two categories: Walls that close off the outside of the composed shape, and Windows along shared internal edges that are logically open.
Convex Decomposition
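In code, such a decomposition can be little more than a list of volumes whose faces either act as walls or point at the neighboring volume behind a window. Here’s a hypothetical sketch (not Scaffold’s actual layout), which the later examples build on:

```cpp
#include <vector>

struct Vec3  { float x, y, z; };
struct Plane { float nx, ny, nz, d; }; // points p on the face satisfy n·p = d

constexpr int kNoNeighbor = -1;

// A face is a "wall" if neighbor == kNoNeighbor, otherwise a
// "window" opening into volumes[neighbor].
struct Face {
    Plane plane;                  // outward-facing boundary plane
    int   neighbor = kNoNeighbor;
};

struct ConvexVolume {
    std::vector<Face> faces;
};

struct Decomposition {
    std::vector<ConvexVolume> volumes;
};
```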
To trace a ray through the scene, we start by identifying which volume contains the origin of the ray (usually from context). Because of the convex-hull property we know we can extend a straight line all the way to a point on the edge, which we identify through ray-plane intersection. Then we check whether that edge is a “wall,” in which case we’ve successfully identified the hit, or a “window,” in which case we go back and repeat the process for the newly-entered volume.
Raycasting within a Convex Decomposition
This algorithm is local: for volumes with a fixed capacity on the number of sides, each trace has O(1) complexity relative to the size of the scene, as opposed to a brute-force algorithm, which inspects every single wall and therefore has O(n) complexity.
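Continuing the sketch above, the trace loop might look roughly like this (hedged: a production version needs epsilon tolerances and degenerate-case handling):

```cpp
#include <optional>

static float DotN(const Vec3& v, const Plane& p) {
    return v.x * p.nx + v.y * p.ny + v.z * p.nz;
}

struct Hit { int volume; int face; float t; };

// Walk the ray volume-to-volume: find the nearest exit face via
// ray-plane intersection, stop at walls, continue through windows.
std::optional<Hit> TraceRay(const Decomposition& d, int startVolume,
                            Vec3 origin, Vec3 dir, float maxT) {
    int vol = startVolume;
    float tEnter = 0.0f;
    while (true) {
        const ConvexVolume& v = d.volumes[vol];
        float bestT = maxT;
        int bestFace = -1;
        for (int f = 0; f < (int)v.faces.size(); ++f) {
            const Plane& p = v.faces[f].plane;
            float denom = DotN(dir, p);
            if (denom <= 0.0f) continue;               // not an exit direction
            float t = (p.d - DotN(origin, p)) / denom; // ray-plane intersection
            if (t >= tEnter && t < bestT) { bestT = t; bestFace = f; }
        }
        if (bestFace < 0) return std::nullopt;          // ray ended inside volume
        const Face& face = v.faces[bestFace];
        if (face.neighbor == kNoNeighbor)
            return Hit{vol, bestFace, bestT};           // wall: we have our hit
        vol = face.neighbor;                            // window: keep walking
        tEnter = bestT;
    }
}
```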
We extend this algorithm to loose props and moving actors by associating with each volume a list of primitives – what I call an “object bucket” – which is searched for a hit before advancing to the far edge.
Raycasting with Loose Object “Buckets”
Assigning a fixed capacity to the buckets keeps the cost O(1). When we consider that every actor is doing its own traces to move every frame, that gives us something like O(n) total complexity, versus brute force, in which every actor checks every other actor for O(n²) total complexity – unacceptable for interactive applications.
If the buckets have variable capacity, we can avoid the “everyone crowds into one volume” edge-case by recursively subdividing buckets spatially, forming a tree-like structure. For a uniform distribution of actors, this has O(log n) trace complexity and O(n log n) total complexity, which is the typical threshold in game programming for “fixable via perf profiling.”
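Here’s what a self-subdividing bucket might look like, sketched as a 2D quadtree for brevity and reusing Vec2 from the convexity sketch (the capacity and names are arbitrary, and a real implementation would also cap recursion depth):

```cpp
#include <array>
#include <memory>
#include <utility>
#include <vector>

struct AABB { Vec2 min, max; };

// A spatial bucket that splits into four quadrants once it exceeds a
// fixed capacity, giving O(log n) lookups for uniform distributions.
struct BucketNode {
    static constexpr size_t kCapacity = 8;

    AABB bounds;
    std::vector<std::pair<int, Vec2>> objects;           // (id, position)
    std::array<std::unique_ptr<BucketNode>, 4> children; // null until split

    void Insert(int id, Vec2 pos) {
        if (children[0]) { Child(pos).Insert(id, pos); return; }
        objects.push_back({id, pos});
        if (objects.size() > kCapacity) Split();
    }

private:
    Vec2 Mid() const {
        return { (bounds.min.x + bounds.max.x) * 0.5f,
                 (bounds.min.y + bounds.max.y) * 0.5f };
    }

    BucketNode& Child(Vec2 pos) const {
        Vec2 m = Mid();
        return *children[(pos.x > m.x ? 1 : 0) | (pos.y > m.y ? 2 : 0)];
    }

    void Split() {
        Vec2 m = Mid();
        for (int i = 0; i < 4; ++i) {
            children[i] = std::make_unique<BucketNode>();
            children[i]->bounds = {
                { (i & 1) ? m.x : bounds.min.x, (i & 2) ? m.y : bounds.min.y },
                { (i & 1) ? bounds.max.x : m.x, (i & 2) ? bounds.max.y : m.y } };
        }
        // Push the overflowing contents down into the new children.
        for (auto& [id, pos] : objects) Child(pos).Insert(id, pos);
        objects.clear();
    }
};
```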
Navigation
Another use-case for convex decomposition is pathfinding. By placing waypoints in the middle of each “window” we know we can connect them across their shared volumes with straight lines (again, due to the convex-hull property). Furthermore, any actor inside a convex volume can make its way to any of that volume’s waypoints, so as long as a connected path exists we can draw a path between any two points that is free of collision.
Pathfinding on a Convex “Navigation Mesh”
The algorithm for determining whether such a path exists is beyond the scope of this writeup, but it’s called the A* Search Algorithm and there’s lots of easily-searchable literature about it online.
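Building the input graph for that search from the decomposition sketched earlier is mechanical, though: one node per window, plus an edge between every pair of windows that share a volume. A hedged sketch (FaceCenter is an assumed helper that averages the window’s corners):

```cpp
#include <utility>
#include <vector>

struct NavGraph {
    std::vector<Vec3> waypoints;            // one per window, at its center
    std::vector<std::pair<int, int>> edges; // guaranteed straight-line links
};

// Assumed helper: center point of face `f` of volume `v`.
Vec3 FaceCenter(const Decomposition& d, int v, int f);

NavGraph BuildNavGraph(const Decomposition& d) {
    NavGraph g;
    // Collect the waypoint ids touching each volume.
    std::vector<std::vector<int>> byVolume(d.volumes.size());
    for (int v = 0; v < (int)d.volumes.size(); ++v)
        for (int f = 0; f < (int)d.volumes[v].faces.size(); ++f) {
            int n = d.volumes[v].faces[f].neighbor;
            if (n == kNoNeighbor || n < v) continue; // walls / already emitted
            int wp = (int)g.waypoints.size();
            byVolume[v].push_back(wp);
            byVolume[n].push_back(wp);
            g.waypoints.push_back(FaceCenter(d, v, f));
        }
    // The convex-hull property guarantees straight-line reachability
    // between any two windows of the same volume.
    for (const auto& wps : byVolume)
        for (size_t i = 0; i < wps.size(); ++i)
            for (size_t j = i + 1; j < wps.size(); ++j)
                g.edges.push_back({wps[i], wps[j]});
    return g;
}
```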
Collision and Navigation in General-Purpose Game Engines
A wrinkle in understanding how these concepts apply to modern general-purpose game engines, like Unity and Unreal, arises from the architecture of their level editors. Both eschew the explicit construction of a spatial decomposition, and instead expose to the designer a simple list of loose objects that are hand-placed one-by-one (“Actors” in Unreal, “GameObjects” in Unity). Therefore, to apply these algorithms, dynamic data structures are built at run-time, on the game device, using heuristic, fine-tuned methods.
For collision and raycasting, the spatial partition is built via an invisible lattice of implicitly-linked convex grid-cells (think of the lattice in Minecraft, except each cell holds an object-bucket rather than a solid voxel).
Here’s a 2D “Quadtree” illustrated. The convex decomposition is limited to boxes and object-buckets, with no true “walls.”
The volume containing the ray’s origin is determined via quantization, and the trace proceeds in the same way outlined above, except the grid-cells have only “window” edges, so we’re only collision-checking the buckets. This step of quickly filtering via local buckets is described in the literature as “Broad-Phase Collision.”
Tracing a ray through a broadphase spatial partition (greatly simplified).
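The quantization step itself is just a floor-divide per axis. A minimal uniform-grid sketch (the octree version nests this lookup; it assumes positions lie inside the grid’s extent):

```cpp
#include <cmath>
#include <vector>

struct GridCell { std::vector<int> bucket; }; // loose-object ids

struct UniformGrid {
    float cellSize = 100.0f;
    int   nx = 0, ny = 0, nz = 0;  // cell counts per axis
    std::vector<GridCell> cells;   // nx * ny * nz cells, row-major

    // Quantize a world-space position (relative to the grid origin)
    // to its containing cell.
    GridCell& CellAt(Vec3 p) {
        int ix = (int)std::floor(p.x / cellSize);
        int iy = (int)std::floor(p.y / cellSize);
        int iz = (int)std::floor(p.z / cellSize);
        return cells[(iz * ny + iy) * nx + ix];
    }
};
```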
Because a rectangular grid cell cleanly divides into 8 sub-cells, this lattice data structure is called an “Octree.” Due to their simplicity, Octrees are the most common spatial partitions, but other interesting schemes exist, and the solution in production often has several interesting optimizations that are beyond the scope of this summary – but the principle is the same. Both Unreal and Unity have long relied on the PhysX library for this task, though Unreal has transitioned to an in-house solution called Chaos. Other notable solutions are Jolt, which was originally developed for the game Horizon Forbidden West, and Havok, which has been around a long time, most recently powering Nintendo’s Breath of the Wild.
“I’ll just develop my indie game the same way a multimillion dollar company develops theirs” is not exactly a winning strategy.
For navigation, general-purpose engines rely on hand-placed volumes within which many raycasts are continuously performed on background threads at runtime, asynchronously populating connected “ground shapes” on which pathfinding can be performed. This process is more art than science, and relies on a vast number of fine-tuning parameters that demand continuous designer tweaking for good results. Unreal, in particular, relies on the Recast library for this task.
Screenshot of the Recast Navigation system, from their official GitHub page.
The downside of these general-purpose dynamic solutions is that their CPU and RAM requirements are higher than those of precomputed explicit decompositions, and they’re more likely to exhibit the variable O(n log n) complexity that demands laborious perf-tuning. Furthermore, they transfer all the work of constructing the partitions to the end-user’s gaming device, rather than allowing static scene data to be “cooked” ahead of time on developer workstations.
Looking Forward by Looking Backward
These solutions are workable in a large-team production setting, but in my case they resisted all three of my high-level goals. Therefore, I began searching for alternatives – not to replace these features whole-cloth, but to supplement them with parallel systems for specific cases where they’re not well-suited.
The fact that, in principle, they all share a common data structure gave me the feeling that I should be able to make one tool that does “double duty.”
In searching for inspiration I began thinking about mid-90s shooters, which accomplished a lot of what I want on very modest low-power commodity PCs, in particular DOOM (1993) and DESCENT (1995). How did those games work? And could I exploit those legacy techniques to surpass the performance and productivity limitations of the general-purpose solutions?
DOOM: Binary Space Partitioning
Playthrough of DOOM, developed by id Software in 1993
Loose objects like demons and bullets are pretty sparse in the original DOOM, with wall-collision accounting for most of the runtime complexity. The DOOM level editor allows mostly arbitrary placement of walls, but then runs a special decomposition algorithm called Binary Space Partitioning (BSP) to break them down into explicit convex regions.
Screenshot of the original DOOM level editor.
The idea is to pick a wall at random and subdivide the remaining walls into those in front of and those behind that wall’s plane (splitting walls that straddle the plane into two parts). For those two subsets you rinse and repeat until you’ve exhausted all the walls, and what remains at the leaf-nodes of the subdivision is a convex partition of the scene, which can be computed and stored ahead-of-time with the level-data.
Diagram from Valve’s Source Engine documentation demonstrating convex-decomposition via Binary Space Partitioning.
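The recursive build is compact. Here’s a 2D sketch (ClassifyWall and SplitWall are assumed helpers, and real builders pick splitters heuristically to minimize splits rather than grabbing the first wall):

```cpp
#include <memory>
#include <utility>
#include <vector>

struct Wall { Vec2 a, b; };

enum class Side { Front, Back, Straddles };

// Assumed helpers: classify a wall against the splitter's infinite
// line, and cut a straddling wall into front and back fragments.
Side ClassifyWall(const Wall& splitter, const Wall& w);
std::pair<Wall, Wall> SplitWall(const Wall& splitter, const Wall& w);

struct BspNode {
    Wall splitter;
    std::unique_ptr<BspNode> front, back; // leaves bound convex regions
};

std::unique_ptr<BspNode> BuildBsp(std::vector<Wall> walls) {
    if (walls.empty()) return nullptr;
    auto node = std::make_unique<BspNode>();
    node->splitter = walls.front();
    std::vector<Wall> front, back;
    for (size_t i = 1; i < walls.size(); ++i) {
        switch (ClassifyWall(node->splitter, walls[i])) {
        case Side::Front: front.push_back(walls[i]); break;
        case Side::Back:  back.push_back(walls[i]);  break;
        case Side::Straddles: {
            auto [f, b] = SplitWall(node->splitter, walls[i]);
            front.push_back(f);
            back.push_back(b);
            break;
        }
        }
    }
    node->front = BuildBsp(std::move(front)); // rinse and repeat
    node->back  = BuildBsp(std::move(back));
    return node;
}
```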
The original DOOM simplified these BSPs by imposing the restriction that all walls are perfectly vertical, so the partitioning could be done mostly in 2D, though later games like Quake and Unreal Tournament used the same technique with sloped geometry, too.
While BSPs are interesting to study to understand how broad-phase collision developed historically, I didn’t really feel like there was much for me to exploit.
DESCENT: Convex By-Construction
Another interesting case I looked at was Descent, which billed itself as the first “full 3D” shooter with complex volume-filling geometry, as opposed to being mostly-flattened onto the horizontal plane like DOOM.
Playthrough of Descent, developed by Parallax Software and published by Interplay in 1995
I came to better appreciate how they implemented this by playing around with the mod community’s level editor, DLE, where levels are built up from convex “segments” that are each six-sided cuboid hexahedrons. At first glance this seems like a restrictive limitation that would give rise to boxy rectangular chambers, but creative linking across edges and corners can actually produce pretty much any shape you want. Additionally, restricting faces to all be quads simplifies how textures and materials are applied, as opposed to having to account for weird triangular topology.
Screenshot of the Descent Level Editor. It looks like a mesh editor at first glance, but you’re actually editing linked cuboid “segments.”
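The underlying data model is tiny – each segment is eight corners and six faces, with each face either solid or linked to a neighbor. A hypothetical reconstruction (not Descent’s actual source):

```cpp
#include <array>
#include <vector>

constexpr int kNoLink = -1;

// A Descent-style "segment": a six-sided convex cuboid. Each side is
// either solid (kNoLink) or a window into a neighboring segment.
struct Segment {
    std::array<int, 8> corners;   // indices into the shared vertex pool
    std::array<int, 6> neighbors; // per-face segment index, or kNoLink
    std::array<int, 6> materials; // one material per quad face
};

struct SegmentLevel {
    std::vector<Vec3>    vertices;
    std::vector<Segment> segments;
};
```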
In 2018 the original devs reconvened to develop a spiritual successor, Overload, with a community level-editor and tutorial series which expands on the same ideas in thoughtful ways. What really caught my eye was how they extended the idea of “applying materials” to quad-faces into assembling kit-pieces which have a modern, rather than low-poly, look. For example, rock faces have rough geometry that fills in crevices to give those areas an organic look.
First Video in a Tutorial Series on the Overload Level Editor
Playing with these tools was a big aha moment for me – by building up from convex shapes there’s no need to do any kind of “decomposition” like BSP, because the space is partitioned by-construction.
I decided to use this as a starting-point and then go in my own direction.
Nightshift Galaxy Scaffold
Instead of building a completely separate application like the Overload team did, I decided to build inside the Unreal Level Editor. This is a tradeoff – on the one hand it limits the ease of offering modding tools to the community, which is something on my mind. On the other hand, it greatly reduced how much I needed to code (since the application-framework/asset-handling/undo systems were already in place), and let me focus on my value-adds, as opposed to re-implementing all the other loose-object handling that Unreal already does perfectly fine.
Gamedev Streaming resumes _tomorrow_ at noon, PST. Thinking of doing a demo of the “scaffold” tool I’ve been making for level design, and maybe a little lecture of scene geometry in 3D. #indiegamedev pic.twitter.com/eaWQEAOSvv
— Max! (@xewlupus) April 3, 2025
Early Sneak Peek of Scaffold running inside the Unreal Level Editor.
I won’t go into all the boring details; suffice it to say, for the Unreal-heads out there, I started with the SplineComponentVisualizer as a reference and built my own tools from there. It took about a week and roughly 2,500 lines of code to get an MVP. I have all the basic operations – selection modes for vertices, edges, faces, and “cubs” (my name for a convex segment – short for “cubby hole”), basic extrude and bridge operations, as well as “stamps” for common shapes like cubes, rings, and cylinders.
For the grey “blockout” walls, I’m using the DynamicMesh object from the GeometryScripting extension (though I intend to replace them with instanced decoration kit meshes), and to create “real collision” trigger volumes for each cub I added a CubComponent subtype of PrimitiveComponent by referencing the implementations of ShapeComponent and BoxComponent.
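For the curious, the skeleton of that component looks something like this – illustrative only; everything beyond the engine types is my own naming, and the real implementation involves more than what’s shown:

```cpp
// CubComponent.h -- illustrative skeleton, not the real implementation.
#pragma once

#include "CoreMinimal.h"
#include "Components/PrimitiveComponent.h"
#include "PhysicsEngine/BodySetup.h"
#include "CubComponent.generated.h"

UCLASS()
class UCubComponent : public UPrimitiveComponent
{
    GENERATED_BODY()

public:
    // Report bounds for the convex "cub" so culling and overlap
    // queries work, mirroring what ShapeComponent does.
    virtual FBoxSphereBounds CalcBounds(const FTransform& LocalToWorld) const override;

    // Hand a convex BodySetup to the physics system so the cub can
    // act as a "real collision" trigger volume.
    virtual UBodySetup* GetBodySetup() override;

private:
    UPROPERTY()
    TObjectPtr<UBodySetup> CubBodySetup;
};
```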
I wrote a “scaffold raytracing” test which, even in its simple quick-and-dirty implementation, is 100x faster than doing ordinary raytraces with Unreal’s built-in collision. It’s not something I’d use for player bullets or general-purpose traces, but for specialized batches, like enemy bullet-hell patterns, it opens up a lot of opportunities.
Post-stream: updated the BulletManager to not only use framerate-independent time, but do sub-frame position sampling for higher-precise patterns. #ScreenshotSaturday #shmup
🎶 Venus Wars OST (Joe Hisaishi) 🎶 https://t.co/otWOFaQrOE pic.twitter.com/K1Alnu0J8v
— Max! (@xewlupus) March 1, 2025
Footage from an earlier bullet-hell experimentation stream.
But the real coup for me was that, as I expected, Scaffold data also works as a navigation system! I generate a waypoint for each shared cub-face, do a quick check to see whether the “floor” of the cub is a wall or a window, and quickly generate a navigation graph that works for both ground-units and aerial units with no extra labor.
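The generation pass is short. A hedged sketch, reusing the segment layout from the Descent section (treating face 4 as the “floor” is an assumption for illustration, and FaceCenterOf is an assumed helper):

```cpp
#include <vector>

// One waypoint per shared face; ground units only use waypoints of
// cubs whose floor face is a solid wall, aerial units use them all.
struct NavPoint { Vec3 pos; bool groundReachable; };

constexpr int kFloorFace = 4; // assumed face ordering

// Assumed helper: center point of face `f` of segment `s`.
Vec3 FaceCenterOf(const SegmentLevel& level, int s, int f);

std::vector<NavPoint> BuildCubWaypoints(const SegmentLevel& level) {
    std::vector<NavPoint> points;
    for (int s = 0; s < (int)level.segments.size(); ++s) {
        const Segment& seg = level.segments[s];
        bool hasFloor = (seg.neighbors[kFloorFace] == kNoLink); // floor is a wall
        for (int f = 0; f < 6; ++f) {
            int n = seg.neighbors[f];
            if (n == kNoLink || n < s) continue; // walls / already emitted
            points.push_back({FaceCenterOf(level, s, f), hasFloor});
        }
    }
    return points;
}
```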
Update from yesterday’s stream — the Scaffold level editor is now generating nav-data for AI-pathfinding in addition to blockout wall collision. #screenshotsaturday pic.twitter.com/qaUStE3V1j
— Max! (@xewlupus) April 5, 2025
First test of Aerial Navigation working inside Scaffold.
At this point I’m confident in the direction of the solution, so now I’m fleshing out missing features and adding more “smart” operations so I can lay out spaces faster. The big difference between Nightshift and Descent is that I want to exit the Spaceships/Asteroid-Bases/Sewer-Tunnels/Caves/Industrial-Interiors and fly around the outside, too, so I’ll need to start thinking about an “exterior hull” toolset as well. But I want to make a half-dozen maps with validated gameplay, first.