Dwarf Engine 0.1 Release Announcement 🥳
After years of on-and-off development on this little side project of mine, I’m proud to announce that I have reached my first milestone!
You can find the builds on the GitHub Release Page.
But before I tell you all about it, allow me a moment to reminisce…
1. The beginnings
On November 25th 2021, I pushed my first commit to the Dwarf Engine repository—although development had actually started about a month earlier.
What began as a need for a simple template project (to practice graphics programming and avoid repeatedly implementing the fundamentals) quickly evolved into the vision of a full-blown 3D application. I wanted to quickly author scenes, edit shaders on the fly, and have complete control over and full transparency into the rendering pipeline.
And I wanted that for any graphics API: OpenGL, Vulkan, and Direct3D.
I realized that achieving this required several things: a project-based workflow, dynamic runtime asset loading and management, a graphics API abstraction layer, and an editor with a GUI to tie everything together.
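To make the graphics API abstraction layer idea a bit more concrete, here is a minimal sketch (hypothetical names and signatures, not the engine’s actual interface):

```cpp
#include <memory>

struct Mesh;      // GPU mesh handle (placeholder)
struct Material;  // shader + texture bindings (placeholder)

// Illustrative abstraction: the editor and renderer only ever talk to this
// interface, never to OpenGL, Vulkan, or Direct3D directly.
class IGraphicsDevice {
public:
  virtual ~IGraphicsDevice() = default;
  virtual void Clear(float r, float g, float b, float a) = 0;
  virtual void DrawMesh(const Mesh& mesh, const Material& material) = 0;
};

// Each backend provides its own implementation behind a factory like this.
std::unique_ptr<IGraphicsDevice> CreateDevice(/* selected API */);
```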
Originally developed under the name Simple 3D Engine, the project was also how I set out to get familiar with C++ and OpenGL, my first choice for the graphics API backend implementation.
Yes, you heard that right—I had to start from scratch. My previous experience with both C++ and OpenGL was pretty shallow, limited to what I needed to pass my computer science classes.
Since I primarily use Windows on my personal machines, the first thing I came across when researching how to set up a C++ project was Microsoft’s Visual Studio. I followed a few tutorials, wrote a simple “Hello, World!” program, and got familiar with the syntax. Everything went smoothly. I even managed to render some models on screen, thanks to LearnOpenGL.
So far, so good.
Then I tried setting up the project on the Ubuntu installation on my laptop—and accidentally opened Pandora’s box, revealing the horrors of the fragmented state of the C++ ecosystem…
2. My development experience with C++
On Windows, I was greeted by Microsoft’s Visual Studio suite of tools, which gave me a compiler and build system right out of the box.
On Linux, I was immediately overwhelmed. No Visual Studio. No “Build and Run” button in sight. I had to install a compiler myself—and then write a makefile to build the project manually.
I felt stuck.
So what do we do when we are stuck? Research!
For me, it’s crucial to understand the tools I use on a fundamental level. So my first task was to understand what C++ actually is—and to figure out what Visual Studio had been quietly doing for me behind the scenes.
Learning that C++ is essentially just a standard, and that there are different compiler implementations, got me thinking:
How do I choose the right one?
There I was. Standing at the edge of a deep and dark hole in the ground. A rabbit hole. Ready to jump, as it called out for me.
And then, out of nowhere, someone pulled me back and gave me a good shake.
It was… myself?
“I know you!” I shouted. “I know your insatiable hunger for knowledge, and your obsession with making informed decisions—making the right decisions.”
I blinked a few times, still dazed. What is happening? I asked myself. I had been in this position countless times before. So why was there another me, shouting at me?
“But does it matter right now?” I continued. “Look around you!”
I turned my head to the left—there was the Trello board I had created to structure my project vision.
Then to the right—the fragile first programs in C++, barely working.
Beyond that? Nothing.
I understood now. For the time being, any compiler would do.
I just went with GCC.
And promptly found myself near another rabbit hole:
How do I make my build process cross-platform?
CMake was the most widespread answer.
Disliked by seemingly everyone, yet it did exactly what I needed: generate a Makefile on Linux and a Visual Studio project/solution on Windows. With it, I could centralize my project’s configuration and free myself from the shackles of any single operating system.
But it also meant I had to set up a lot more manually—and I still didn’t know how to handle external dependencies.
Still relatively fresh at the time was vcpkg, a package manager by Microsoft. I gave it a try and found it decent.
I liked the manifest mode, and the simple integration into CMake was a huge relief at the time.
The last piece hidden under the fog of war was the entire compile process itself.
It’s something you inevitably need to understand when working with dependencies—especially when the linker starts throwing cryptic errors at you.
“Translation units”, “dynamic vs. static linking”, “compiler-specific arguments”—I’ve slain all of these beasts.
Though, some still come back to haunt me from time to time.
In the end, I was very satisfied with my setup. I could quickly switch between machines and get a build running in no time using VS Code and the CMake extension.
I still run this setup to this day.
3. I can’t do everything by myself
In light of the sheer size of my vision, I knew early on that doing everything myself wasn’t feasible.
Having worked extensively with web technologies, I was already used to not reinventing the wheel. So I revisited my requirements and started researching some general-purpose libraries that could help.
For the asset database, I found a data-driven approach—like the ECS library entt—to be the right tool (sketched below).
For the user interface, I settled on Dear ImGui.
Model importing was entrusted to assimp.
And for cross-platform window management, I started with GLFW, eventually switching to SDL.
These choices allowed me to focus on the core features of the Dwarf Engine.
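To give a rough sense of the data-driven approach entt enables, here is a minimal sketch (the components are made up for this post, not taken from the engine):

```cpp
#include <entt/entt.hpp>
#include <cstdio>

// Illustrative components: plain data, no behavior.
struct Transform { float x, y, z; };
struct MeshRef   { int assetId; };

int main() {
  entt::registry registry;

  // An entity is just an id; its data lives in attached components.
  const auto entity = registry.create();
  registry.emplace<Transform>(entity, 0.0f, 1.0f, 0.0f);
  registry.emplace<MeshRef>(entity, 42);

  // Systems iterate over views of all entities sharing a set of components.
  registry.view<Transform, MeshRef>().each(
      [](const Transform& t, const MeshRef& m) {
        std::printf("Draw asset %d at (%.1f, %.1f, %.1f)\n",
                    m.assetId, t.x, t.y, t.z);
      });
}
```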
I understand the appeal of implementing everything yourself—you are fully in control, and it does feel kind of awesome.
But I had to set my ego aside. I wanted this thing to actually work.
If I hadn’t made those compromises, I’m sure I wouldn’t be anywhere near where I am right now.
4. Learning by failing
From here on, I got down to business. With the vastness of the internet at my fingertips, I rapidly implemented feature after feature.
Here’s a glimpse of what the Dwarf Engine looked like about two years ago:

At the time, development felt like smooth sailing. I worked on whatever I felt like. Every bit of programming taught me something new—about C++, about the project’s evolving requirements, and about myself as a developer.
I was moving fast. Maybe a bit too fast.
What I didn’t notice was the technical debt quietly piling up behind the scenes. As the project grew, so did the issues. First came dependency cycles. Then, design problems started rearing their heads. When hammer and nail were no longer enough to patch things up, I took a step back—and saw the textbook definition of spaghetti code staring back at me.
I was stuck.
And what do we do when we are stuck? Research!
The design issues I was facing weren’t new to the world of programming. Thankfully, I had access to the collective wisdom of those who had been there before me. Many of their solutions come in the form of design patterns and principles—a kind of architectural toolkit full of rules, best practices, and warnings. Among the most well-known is the set of object-oriented design principles called SOLID.
The concepts were easy to grasp in theory. But applying them? That took time.
First, I had to unravel the mess—splitting responsibilities into dedicated classes and hiding them behind abstract interfaces. From then on, all communication between modules happened through those interfaces.
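Roughly, the pattern looks like this (a simplified sketch with hypothetical names, not the engine’s actual classes):

```cpp
#include <memory>
#include <string>

// Abstract interface: the only thing other modules get to see.
class ITextureLoader {
public:
  virtual ~ITextureLoader() = default;
  virtual bool Load(const std::string& path) = 0;
};

// Concrete implementation, hidden behind the interface.
class OpenGlTextureLoader : public ITextureLoader {
public:
  bool Load(const std::string& path) override {
    // ... read the file at `path` and upload it to the GPU ...
    return true;
  }
};

// Consumers receive the interface through their constructor (dependency
// injection), so they never depend on a specific backend.
class AssetDatabase {
public:
  explicit AssetDatabase(std::unique_ptr<ITextureLoader> loader)
      : mLoader(std::move(loader)) {}

  bool ImportTexture(const std::string& path) { return mLoader->Load(path); }

private:
  std::unique_ptr<ITextureLoader> mLoader;
};
```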
This added a ton of boilerplate. It fragmented the codebase and gave me a few more grey hairs. It wasn’t fun. The project was already sizable, and refactoring it forced me to rethink a lot of decisions.
It took nine months of on-and-off development.
But I did it. And it was worth it.
Sure, adding new features now means writing a bit more code up front—but it integrates cleanly, and maintenance is so much easier. Debugging is straightforward. I finally feel like I’m building on solid ground. (Pun intended)
5. The Result
After all the refactoring, head-scratching, and code cleanup, I finally reached a place I’m proud of. A big part of the workflow is now up and running—on both Windows and Linux.
Take a look:

What follows is the current feature set—no dreams, no promises, just what’s already working today.
🚀 5.1 Project Launcher
- Create new projects with:
  - Custom name and path
  - Choice of Graphics API (currently OpenGL only)
  - Pre-made templates to get started quickly
- Keep track of existing projects and jump back in at any time
🛠️ 5.2 Editor
📦 5.2.1 Importing & Managing Assets
- Hot-reloading: Assets update live when changed on disk (sketched after this list)
- Textures (bmp, jpeg, png, tga, tiff)
  - Import settings: color space, mipmaps, flip G channel
  - Preview in Inspector
- Shaders
  - Auto-detected by file extension (e.g. .vert, .frag)
- Materials
  - Combine shaders & textures into reusable, serializable structures
  - Preview in Inspector
- Models (obj, fbx, gltf)
  - Preview in Inspector
- Scenes
  - Serializable containers for your entire world
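The hot-reloading mentioned at the top of this list boils down to noticing that a file changed on disk. A minimal polling-based sketch (the engine may well use a different mechanism, such as native file watchers):

```cpp
#include <filesystem>
#include <string>
#include <unordered_map>

namespace fs = std::filesystem;

// Hypothetical helper: returns true if the file changed since the last check.
class FileWatcher {
public:
  bool HasChanged(const fs::path& path) {
    std::error_code ec;
    const auto stamp = fs::last_write_time(path, ec);
    if (ec) return false;                  // missing or unreadable file

    auto it = mStamps.find(path.string());
    if (it == mStamps.end()) {             // first time we see this asset
      mStamps.emplace(path.string(), stamp);
      return false;
    }
    if (it->second != stamp) {             // timestamp moved: reload the asset
      it->second = stamp;
      return true;
    }
    return false;
  }

private:
  std::unordered_map<std::string, fs::file_time_type> mStamps;
};
```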
🎬 5.2.2 Scene Authoring
- Create, save, and load scenes
- Add and organize entities in a hierarchy
- Move them around with intuitive controls
📈 5.2.3 Performance Monitoring
- Real-time frametime graph and FPS counter
- VRAM usage, detailed breakdowns, and render device info
🎥 5.2.4 Free Camera Controls
- Smooth fly camera
- Adjustable clipping planes and field of view
🎨 5.2.5 Rendering Settings
- Control framebuffer resolution and multisampling
- Pick your tonemapping flavor: Reinhard, AGX, or ACES (Reinhard sketched below)
- Fine-tune exposure
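As a rough idea of what the tonemapping step does, here is a simplified per-channel Reinhard sketch, assuming exposure is applied as a plain multiplier beforehand (the engine’s actual shader code may differ):

```cpp
// Illustrative CPU-side version of an HDR -> LDR tonemapping step.
struct Color { float r, g, b; };

// Reinhard maps [0, inf) into [0, 1); AGX and ACES use fitted curves instead.
inline float Reinhard(float c) { return c / (1.0f + c); }

Color Tonemap(Color hdr, float exposure) {
  hdr.r *= exposure;
  hdr.g *= exposure;
  hdr.b *= exposure;
  return { Reinhard(hdr.r), Reinhard(hdr.g), Reinhard(hdr.b) };
}
```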
📊 5.2.6 Statistics Report
- Keep track of:
  - Draw call count
  - Total triangles and vertices rendered
🧭 5.2.7 Optional Grid Overlay
- Customize opacity, Y offset, and toggle it on/off
🔥 5.3 Rendering Pipeline
- Forward rendering architecture (for now)
- Dynamic draw call generation from the scene
- Material- and shader-based sorting (sketched after this list)
- Mesh batching to reduce state changes
- OpenGL state caching to minimize overhead
- HDR rendering with post-processing tonemapping
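To illustrate the sorting step, here is a simplified sketch (not the engine’s actual renderer code): draw calls are keyed by shader and material, so consecutive calls share as much GPU state as possible before batching kicks in.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Simplified draw call record; the real thing carries far more data.
struct DrawCall {
  uint32_t shaderId{};
  uint32_t materialId{};
  uint32_t meshId{};
};

// Shader in the high bits, material in the low bits: sorting by this key
// groups calls so shader and material switches happen as rarely as possible.
inline uint64_t SortKey(const DrawCall& dc) {
  return (static_cast<uint64_t>(dc.shaderId) << 32) | dc.materialId;
}

void SortDrawCalls(std::vector<DrawCall>& calls) {
  std::sort(calls.begin(), calls.end(),
            [](const DrawCall& a, const DrawCall& b) {
              return SortKey(a) < SortKey(b);
            });
}
```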
🐞 5.4 Known Issues
- No transparency sorting (yet)
- Stability on non-Ubuntu distros is unknown
- Relative transforms in parent-child hierarchies are off
- Gizmo-based rotations may break Euler angles
- Lights exist, but aren’t wired into rendering (yet)
6. What’s next?
There’s still a lot to do, but for the upcoming 0.2 release, I’ve carved out a solid plan. It’s all about smoothing out workflows, improving usability, and laying the groundwork for more advanced rendering features.
🎛️ 6.1 Editor Improvements
- Drag & drop external files directly into the asset browser
- Outlines for selected entities
- Multi-object selection
- Texture channel preview (R, G, B, A)
- Thumbnail generation for assets
- Better, snappier camera controls
- Refactored asset input fields in the Inspector (include thumbnails instead of just names, better performance)
- Fix gizmo-based entity transforms (no more cursed rotations)
- Smarter shader inputs: let the user wire them up (like MVP matrices or app time) instead of hardcoding them
- Move certain settings (like tonemapping, exposure) from project-level to scene-level
🎨 6.2 Rendering Features
- Skybox support with selectable material
- IBL (Image-Based Lighting)
- Transparency ordering
- Feed actual scene lights to shaders (no more baked-in lights)
7. Finishing words
If you made it this far, thank you. Writing this post was a joy. Revisiting old screenshots and video clips gave me some serious “whoa, I made this?” moments.
Future blog posts will likely be shorter, more focused, and dive into specific features or rendering research. Whether it’s real-time rendering quirks, engine design, or shader tinkering, I want to document it all.
So, if you’re:
- Curious about graphics programming,
- Looking for a minimal native engine to play with shaders and scenes,
- Or just want to follow along on the journey…
Stick around! There’s plenty more to come.
P.S. I’d love your support in any form! Whether that’s testing new releases, sharing feedback, sending interesting resources my way, or just dropping by the Discord to chat. 😊
🔗 GitHub Project
☕ Support me on Ko-fi
👋 Support me on Patreon
🕹️ Join the Discord