Microbrush 3 3D Modeler
Hello! :)

One of my hobby projects is a little 3D modeler/level editor for brush-based engines such as Quake or Source. I originally started working on it because I was really annoyed by how long it took to make basic brushwork in Valve's Hammer. Some inspiration for it stems from the Radiant series of level editors.

It requires Win7 or higher to run and renders its stuff with OpenGL 3.3. If your graphics driver is up to date, it should work. It's portable, so it just needs to be unzipped and can then be used from the target folder.

Project page: http://shrinker.seriouszone.com/projects/IterationX/Microbrush3/
After unzipping, run "First start and tasty fresh cookies!.bat" and follow the instructions. For a reference of the configured shortcuts, have a look at the config.cfg file.

Here I've recorded myself building some random stuff with it: https://www.youtube.com/watch?v=wjjB8MLjvJ4

This is a Twitter account to go with it: https://twitter.com/shrinker42

At the moment, the focus is almost exclusively on brush work. The editor can't process texture or entity information yet, so please don't resave an existing world with it unless you want to get rid of all textures and entities. :)

It supports loading and saving
- its own textual or binary formats
- Half-Life 1 .map files
- Half-Life 2 .vmf files
- Quake 3 .map files

Additionally, you can run an export in its textual format with computed data (such as polygons) included, or export the same data as a Wavefront .obj file.

Pretty much all of the editor's business logic lives in plugin code that's compiled in the setup stage. To look under the hood, check out data\sources\plugins.
The grid supports being skewed/rotated or configured to display ellipses instead of parallel lines. The respective shader, as well as the shader used to color the brushes, can be found in data\shaders.

Hope this is useful to some. :)

Edit: updated URL
 
Note: The d = n · p0 representation is always unambiguous if n is normalized. If not, there are infinitely many {n, d} combinations per plane, too.
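
A minimal sketch of that derivation (hypothetical Vec3/Plane types, not taken from any of the editors discussed here): three points give the normal via a cross product, and normalizing pins down the scale of {n, d}:

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Plane { Vec3 n; float d; };  // points p on the plane satisfy dot(n, p) == d

// Three points (e.g. the face points from a .map file) determine the plane.
Plane planeFromPoints(Vec3 p0, Vec3 p1, Vec3 p2)
{
    Vec3 n = cross(sub(p1, p0), sub(p2, p0));
    float len = std::sqrt(dot(n, n));
    n = { n.x / len, n.y / len, n.z / len };  // normalizing pins down the scale; without it,
                                              // {k*n, k*d} describes the same plane for any k != 0
    return { n, dot(n, p0) };
}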
Re 
Yeah, I see your point about the plane representation. And now that I think of it, the {n, d} representation is a derived value for me too, because I actually store the plane in a three-point format, which exhibits the same properties you describe. The advantage of the three-point representation is that it maps directly to the face points in the .map file.

I don't understand how you map your plane representation to the three points in the map files. How do you compute these three points so that they are integer? Or do you use floats in the map file? If so, that's a source of infinite trouble for mappers as the slightest deviations produce microleaks. These things become even more difficult once you have vertex manipulation and arbitrary rotation.

If you have a smart solution for these problems, I'd love to hear it because I have been struggling with these issues for a long time in TB. As long as you limit yourself to simple things like 90 deg rotation and clipping (with integer clip points), everything is good. But more complex tools that rely on the vertex representation fuck everything up. 
 
I use floats and write them into the files as they are. I discourage arbitrary rotation for brushes, but if that's needed, it becomes the problem of the next tool in the pipeline. At the moment I'm not rounding any values, and if you use only the regular tools and align your work normally, you don't get such odd values at all. 
Yup 
That's the beauty of the limited toolset. When I started I wanted the exact same thing - just do everything with the 3 point clipper. But then everyone moaned how much they need vertex editing, etc. 
 
"But then everyone moaned how much they need vertex editing, etc."

Well, yeah. A level editor without vertex editing is basically useless... 
 
I never moaned about it. TB was already pretty much feature complete when it came out and it basically stopped me using WC 1.6 
Excuse My Ignorance, But 
"there was a lot of fine-tuning the algorithms for computing... for merging brushes and stuff."

is that not what QBSP does? 
No 
Qbsp merges faces. It hardly cares about brushes at all. 
 
@WarrenM: I'm sorry then. :(

@Shamblernaut: QBSP might be able to do all of those calculations, but I can't just harness them out of the box for my own realtime rendering of stuff. That's why both I and SleepwalkR had to implement our own versions of the respective algorithms. This was needed to turn a list of plane representations into polygons that you can see. :) 
 
Ha! I meant useless for Quake editing ... your project is cool in its own right. 
If I Pick The Pie Icon Often Enough, Do I Get Actual Pie? 
I could edit Quake and Doom 3 levels in Radiant just fine without needing its vertex edit mode.

o_o 
You Can 
But vertex editing just makes some things much easier to do, esp. if it doesn't create invalid brushes like WorldCraft does. 
This Icon Has Actually Valid Chinese Symbols In It 是的 
I just find it pretty harsh to label it _useless_ for making level geometry. I've already produced a lot of geometry with it just fine. 
 
Apologies. I used hyperbole to make a point. 
 
Just released a new version, with the "drag facing planes" mode finally implemented (key F): http://shrinker.beyond-veils.de/projects/IterationX/Microbrush3/

And SleepwalkR made a case to me for proper undo/redo, so I'm now pondering how I could implement it. 
 
I've always wondered, what IS a good way to implement undo/redo?

Maya, for example, has a class for every action you can perform, instantiated and stacked every time you perform one, and each one contains full information for reversing itself and maintaining references/indices in the process.

Effective, but kind of overkill? You're essentially writing all your functionality twice (backwards and forwards), wrapped in a shitload of cpp glue. 
That's The Textbook Solution 
Aka the command pattern. Even though it is a pest to implement, it has a number of advantages. Mostly because you can easily extend it to support macros in your app. And with command collation you can keep memory requirements low. 
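
As a rough illustration (the names here are made up, not Maya's or any editor's actual API), the pattern boils down to commands that know how to apply and revert themselves, plus two stacks:

#include <memory>
#include <vector>

struct Command {
    virtual ~Command() = default;
    virtual void apply()  = 0;  // do / redo
    virtual void revert() = 0;  // undo
};

class History {
    std::vector<std::unique_ptr<Command>> undoStack, redoStack;
public:
    void perform(std::unique_ptr<Command> cmd) {
        cmd->apply();
        undoStack.push_back(std::move(cmd));
        redoStack.clear();  // a fresh edit invalidates the redo branch
    }
    void undo() {
        if (undoStack.empty()) return;
        undoStack.back()->revert();
        redoStack.push_back(std::move(undoStack.back()));
        undoStack.pop_back();
    }
    void redo() {
        if (redoStack.empty()) return;
        redoStack.back()->apply();
        undoStack.push_back(std::move(redoStack.back()));
        redoStack.pop_back();
    }
};

Command collation would then merge, say, a long run of tiny drag commands into a single entry before it settles on the undo stack.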
 
Yeah, that's the really, really nice thing about Maya - a python window that'll let you do damn near anything (like write a .mdl exporter!)

That's for a very complex environment, though. Quake maps only contain two fairly simple kinds of data, one a pile of keyvalues and the other a pile of planes. 
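
Roughly, with illustrative structs (not any editor's actual types):

#include <map>
#include <string>
#include <vector>

// The two piles: key/value pairs per entity, and brush faces given as
// three points plus a texture name, as in the .map format.
struct Face   { float p0[3], p1[3], p2[3]; std::string texture; };
struct Brush  { std::vector<Face> faces; };
struct Entity {
    std::map<std::string, std::string> keyValues;  // e.g. "classname" -> "info_player_start"
    std::vector<Brush> brushes;                    // empty for point entities
};
struct QuakeMap { std::vector<Entity> entities; }; // worldspawn is conventionally the first entity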
 
A dumb way is to just make a copy of the document after every action and store that in a stack. 
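
As a sketch, assuming the document type is cheap enough to copy by value (MapDoc here is just a stand-in):

#include <string>
#include <vector>

struct MapDoc { std::string everything; };  // stand-in for the full document state

class SnapshotHistory {
    std::vector<MapDoc> undoStack, redoStack;
public:
    // Push a copy of the current document right before each new action is applied.
    void recordBefore(const MapDoc& doc) { undoStack.push_back(doc); redoStack.clear(); }

    void undo(MapDoc& current) {
        if (undoStack.empty()) return;
        redoStack.push_back(current);
        current = undoStack.back();
        undoStack.pop_back();
    }
    void redo(MapDoc& current) {
        if (redoStack.empty()) return;
        undoStack.push_back(current);
        current = redoStack.back();
        redoStack.pop_back();
    }
};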
 
metl - Which for some apps, is great. It's brain dead but it's 100% perfect and failsafe. :) 
 
The only trouble is, of course, memory. If ZBrush did that it would be problematic... 
Taking Snapshots After Every Action 
would consume way too much memory and also be quite slow on large maps. The command pattern uses the minimal amount of memory. For easily invertible operations such as transformations, you don't need to store anything but the transform matrix. For other operations such as vertex edits, you do have to store a snapshot, but only of the brushes you were editing.

Also, taking snapshots is far from being 100% perfect and failsafe. There are plenty of opportunities for bugs in that scenario, too. 
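
For the transform case, for example, a command might keep nothing but the matrix and its inverse (placeholder types below, not Microbrush's or TB's actual code):

#include <vector>

// Placeholder types standing in for the editor's real math and brush classes.
struct Mat4  { float m[16]; };
struct Brush { std::vector<float> verts; void transform(const Mat4&) { /* apply matrix to verts */ } };

// An easily invertible edit: nothing is snapshotted, only the forward matrix
// and its inverse (captured when the command is built) are kept.
struct TransformCommand {
    std::vector<Brush*> targets;
    Mat4 forward, inverse;
    void apply()  { for (Brush* b : targets) b->transform(forward); }
    void revert() { for (Brush* b : targets) b->transform(inverse); }
};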
Data Sharing Is Magic 
People seem to assume you need to copy the whole data structure if you want to make a copy of it. This isn't true if the data structure is designed in a particular way.

If the data structure is immutable and has a tree-like structure, you can easily share parts of it between multiple versions. Updating such a data structure requires only modifying the branch of the tree that contains the updated data.

These kinds of data structures that you can update while sharing data are called persistent data structures in the literature, as opposed to the ephemeral kind with which all slightly experienced programmers are familiar. Persistence here simply means that if you modify the data structure from state s_n to state s_(n+1), you continue to have access to both states in the future. Persistent data structures can of course be implemented in multiple ways, with immutability and sharing being just one of them.

Don't confuse this use of "persistent" with the one in "persistent data", which elsewhere means saving data between program invocations.

I don't know if anyone has made a non-trivial 3D modeler using immutable persistent data structures. It would certainly be an interesting experiment.

For more information, I refer you to the literature. If you want a brief intro, this might work.

http://bartoszmilewski.com/2013/11/13/functional-data-structures-in-c-lists/

It uses C++ to work through a simple example of a persistent list. 
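
In the same spirit, here is a tiny persistent list in C++, where every "update" returns a new head that shares its tail with the older versions (a sketch only):

#include <memory>

// An immutable singly linked list: "modifying" it returns a new head that
// shares the unchanged tail with every older version.
template <typename T>
struct PList {
    struct Node { T value; std::shared_ptr<const Node> next; };
    std::shared_ptr<const Node> head;

    PList push_front(T value) const {
        return { std::make_shared<const Node>(Node{ std::move(value), head }) };
    }
};

// PList<int> s0;                     // []
// PList<int> s1 = s0.push_front(1);  // [1]
// PList<int> s2 = s1.push_front(2);  // [2, 1] -- the [1] node is shared with s1; both versions stay valid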
Long Post Which I Will Certainly Want To Edit After Pressing Submit 
Functional stuff! I've done a lot of stuff in Haskell and sometimes I wonder if I should remake it all in Haskell. :X

So far I've sketched out an approach that always creates and destroys brushes as whole objects, with extra memory required only for versions that would otherwise be deleted. I think I've already got the undo steps worked out cleanly on paper, but I still have to ponder the redo a bit.
Since I basically create everything from my plugin scripts, I'll make it so that you can define the borders of the undo steps in those.
One interesting realization I had so far is that if _within_ the borders of an undo step I create and then destroy a brush, it doesn't need to be saved, because duh, after the undo step is used, there's no brush either.

Microbrush uses a spatial data structure to store all of its geometry in clusters. If you edit neighboring brushes, only those will be rerendered. I designed it this way with good performance in mind for editing 100k brushes per scene.
Now when I want to do a brush transformation, for instance, I first remove the brush from that data structure, edit its data, and add it back in. While removing and re-adding stuff, the spatial data structure rebalances itself, merging or splitting nodes as necessary. It has lower and upper thresholds for the number of child nodes per inner node and objects per leaf node.
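
The remove / edit / re-insert dance, in outline (SpatialIndex is just a stand-in for the actual clustering structure):

#include <functional>
#include <vector>

struct Brush { std::vector<float> verts; };

// Stand-in for the clustering structure; the real one merges or splits nodes
// against its per-node thresholds as things are removed and inserted.
struct SpatialIndex {
    void remove(Brush*) { /* may merge underfull nodes */ }
    void insert(Brush*) { /* may split overfull nodes */ }
};

// Editing a brush in place would invalidate the node it is filed under,
// so it is taken out, changed, and re-inserted.
void editBrush(SpatialIndex& index, Brush* b, const std::function<void(Brush&)>& edit)
{
    index.remove(b);
    edit(*b);
    index.insert(b);
}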

For an undo step, I'll store the old version of a brush (before a transformation, before deleting it, etc.), and the new version will be linked with the step so it can be tracked down and deleted again. It all comes down to a bit of fiddly bookkeeping of pointers back and forth and then a bunch of testing to make sure it doesn't explode. So far, I'm a bit proud of how stable Microbrush runs. If you manage to make one of its plugins crash (e.g. the grid or the camera), the plugin is unloaded and you can usually still save your work. :D 
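
One possible shape for that bookkeeping, which also captures the cancellation mentioned above (illustrative only, not the actual plugin interface):

#include <set>

struct Brush;

// Per undo step: old brush versions the step removed (to restore on undo)
// and new versions it created (to track down and delete on undo).
struct UndoStep {
    std::set<Brush*> removedOldVersions;
    std::set<Brush*> createdNewVersions;

    void noteCreated(Brush* b) { createdNewVersions.insert(b); }

    void noteRemoved(Brush* b) {
        // A brush created and then destroyed within the same step cancels out:
        // after the step is undone there is no such brush either way.
        if (createdNewVersions.erase(b) == 0)
            removedOldVersions.insert(b);
    }
};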
 
"One interesting realization I had so far is that if _within_ the borders of an undo step I create and then destroy a brush, it doesn't need to be saved, because duh, after the undo step is used, there's no brush either."

Why would that happen though? I make something I like, accidentally delete it, hit undo, and instead of getting it back I undo whatever I last did before I made it. That's a table flip.


Snapshots of the entire document might be wasteful, but that seems easy to pare down. If the interface is requesting changes from the data, the interface already has (theoretically) perfect knowledge about what bits of the document have changed and therefore belong in or out of a snapshot.

Looks like this morning's coffee reading will be the command pattern. 