Quake Custom Engines
Discuss modified Quake engines here, I guess. What engines do you use? What are the pros/cons of existing engines? What features would you like to see implemented/removed?
Right, Thanks 
Although, I can't find anything that mentions iqm support in DirectQ other than a post by mh here saying he doesn't think DirectQ will get iqm support...

http://forums.inside3d.com/viewtopic.php?p=36908 
RMQEngine Had IQM Support. 
The RMQ Winter 11 demo is a working example. 
IQM 
Yeah, I coded up IQM a few times. I don't think it's a viable format for general replacement of MDLs for a few reasons.

Skeletal animation really needs to run on the GPU, otherwise it can get horrendously slow. That means that those working on software engines can't really join the party, whereas those using hardware acceleration may need to bump the entry level beyond a point they're comfortable with.

Personally I don't think that's a big deal - it's 2014 and everybody has good shader support these days - but for some people retaining the old "Quake will run on anything" philosophy might be important. (There's an irony here in that Quake actually needed quite high-end hardware on its original release, but that's probably a topic for a separate discussion.)

Also I favour taking a minimally invasive approach towards content replacement formats. Some of the thinking behind BSP2 was that it should be easy to add to any engine and/or toolset, that it should work in both software and hardware, and that it should use the very same map format as before. I think this really helps adoption - if someone can more easily add it to their engine and if it just works without compromise, then they're more likely to add it. On the other hand if it involves importing 1000s of lines of code they may not even understand well, if it doesn't play nice with existing code, and if half of it breaks in software, then it's not going to be as popular.

That's why BSP2 didn't get features like coloured light, 32-bit textures, surface flags, removal of fixed hulls, etc. All of these would have been cool to have, but they would also have been barriers to adoption. (The fact that it went from not even being on the radar to being fully specified and coded over the course of an evening or so also contributed, and explains some weirdness, like the fact that I missed switching node and leaf bboxes to 32-bit in the first version.)

A replacement format doesn't need to be perfect or have lots of extra features; it just needs to be good enough, and easy enough to implement, to drive adoption.

A 16-bit (or why not just go full 32-bit?) variant of MDL would meet that description. 
16 Bit Backwards Compatible 
I'm sure I read a suggestion once before to bung the 16 bit info at the end of a standard mdl file, so essentially it's all in a single backwards compatible file. Just literally after the standard frame data (which is already at the end of the file) have a second set of vertex coordinates which "refines" the 8 bit data with a second 8 bits of precision:

pos_16_x = orig_x * 256 + refine_x

That kind of thing. I'd be able to create a test mdl or two like this pretty easily with qmdl at the stage it is now.
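The refinement step above is tiny in code terms. Here's a minimal sketch in C of combining the original byte with the appended refinement byte and rescaling the result; the function names are invented for illustration, not from any existing engine:

```c
#include <assert.h>
#include <stdint.h>

/* Combine the original 8-bit MDL coordinate with the extra 8-bit
   refinement byte appended after the standard frame data:
   pos_16 = orig * 256 + refine */
uint16_t refine_coord(uint8_t orig, uint8_t refine)
{
    return (uint16_t)orig * 256 + refine;
}

/* The engine then rescales exactly as stock MDL does, but with the
   header scale divided by 256 to account for the extra precision,
   so the model occupies the same bounding box as before. */
float decode_coord(uint16_t pos16, float scale, float translate)
{
    return pos16 * (scale / 256.0f) + translate;
}
```

An engine without the refinement data just uses `orig * scale + translate` as it always has, which is what makes the file backwards compatible.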

QME might choke on it though, as once upon a time that's where it stored extra metadata for editing models. On the other hand, the existence of QME-created models carrying that kind of added payload does suggest most engines let you get away with extra data appended to model files... 
Mh 
yeah that's totally what I was thinking. Just taking the existing model format and going from 8 to 32bit for storing vertices and skin coords. (like you said, why not!)

It means getting existing tools to work with the new format would be very easy too because it'd just be a matter of changing the header being used to read the files and changing the storage data types to 32 bit.
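As a rough illustration of how small that change is: stock MDL packs each vertex into four bytes (this is the trivertx_t layout from the original Quake source), and a widened variant only needs bigger storage types. The 16-bit struct below is hypothetical, not any published spec:

```c
#include <assert.h>
#include <stdint.h>

/* Stock Quake MDL vertex: three packed 8-bit coordinates plus an
   index into the precalculated normal table. */
typedef struct {
    uint8_t v[3];
    uint8_t lightnormalindex;
} trivertx_t;

/* Hypothetical widened variant: same shape, bigger storage types.
   Tools would read the same structures, just with different sizes
   (and a new header version/ident to tell the formats apart). */
typedef struct {
    uint16_t v[3];
    uint8_t  lightnormalindex;
} trivertx16_t;
```

Skin coordinates could be widened the same way, which is essentially all the "new format" would consist of.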

I think skeletal animation can be really cool, but straight up vertex deformations are no longer possible on the mesh and now need to be driven by bones which increases model and animation complexity by quite a bit.

For example, say you wanted to animate something weird like a tentacle that can stretch. You might go with splines if you were doing a normal .mdl, but the engine would not recognize that kind of skeletal structure, so you'd have to rig some crazy deforming bone hierarchy thing which, while possible, is nuts...

And under normal circumstances, the only real boon to be had by it is so you can get the feet to line up with slopes via IK.
Any mod that's going to do something really wild with the skeleton, like some kind of procedurally generated stuff, would be a really heavy-duty mod that'd be better off on a crazy modder engine anyway. 
Preach 
that is very clever, and I definitely see how it can be a good thing in that you only have one model that will automatically use better precision if available. 
Thanks For The Useful Info Guys 
I agree - my preferred choice would simply be a .mdl with more bits for vertices. Sounds like it's easy to implement and stands the highest chance of getting into as many engines as possible.

I'm not too into skeletal formats in quake, for the same reason necros describes.

Ok, basically I'm making monsters for a mobile game. I own the assets and can do whatever I want with them, so it occurred to me one day that I'm building these very quakey looking monsters with a very quakey polycount (about 400 pollies for most), and it might be nice to one day turn 'em into quake monsters, but then the quake 8-bit vertex thing rears its misshapen, wobbly head and kind of deflates my enthusiasm a bit. 
 
Preach, that's a cool idea, tacking the lower 8 bits of each coordinate onto the end of the file. My only concern is that the code to find that refinement block could be ugly; you'd have to parse all of the structures in the mdl, keeping track of the largest byte offset from the start, then if that offset + 1 is still within bounds, try to parse what follows as the refinement data.

Perhaps a cleaner way would be to just leave the mdl file alone and dump the refinement data into another file (.bsp + .lit style), so you'd have shambler.mdl + shambler.16b (or whatever). Whenever loading a .mdl, the engine could check for a .16b file and, if present, use it to enhance the precision.

It feels a little wrong to create yet another model format, but the beauty of this would be it would work on all quake engines from the beginning, just without the enhanced precision. 
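The sidecar lookup is about as simple as engine code gets. A sketch of building the companion filename, assuming the ".16b" extension suggested above (the function name is made up, and a real engine would route the probe through its own filesystem layer rather than fopen):

```c
#include <assert.h>
#include <string.h>

/* Given "progs/shambler.mdl", produce "progs/shambler.16b" in out.
   Returns 1 on success, 0 if the buffer is too small.  The engine
   would then try to open that file and, if it's missing, silently
   fall back to plain 8-bit MDL precision. */
int sidecar_name(const char *mdlname, char *out, size_t outsize)
{
    const char *dot = strrchr(mdlname, '.');
    size_t base = dot ? (size_t)(dot - mdlname) : strlen(mdlname);

    if (base + sizeof(".16b") > outsize)
        return 0;

    memcpy(out, mdlname, base);
    memcpy(out + base, ".16b", sizeof(".16b"));  /* copies the NUL too */
    return 1;
}
```

This mirrors how .lit loading works: the original file is never touched, so every existing engine keeps working unmodified.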
Ericw 
I dig that, sounds like a nice way of keeping the fallback .mdls "pure".

best of both worlds! 
 
it would also avoid potential problems if the model was loaded into QME. you wouldn't have to worry about the old editor clobbering the new data. 
 
Actually, qme already supports 16bit mdls. If you use the 'save' option instead of the export option, it'll write out an mdl format with a load of qme-specific data on the end of the file.
figure out the extensions and get some engine authors to support it, as well as a blender exporter.

regarding iqm, there's no reason that an engine couldn't just transform verts on load and then interpolate from there the same as they do with mdl (or just skip interpolation completely, if you really like vanilla quake).
Even geforce2 drivers will emulate support for vertex shaders.
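Transforming verts on load amounts to baking each skeletal pose into a plain vertex frame, after which the runtime lerps between frames exactly like MDL. A minimal sketch of the per-vertex bake, with deliberately simplified stand-in structures rather than the real IQM layout:

```c
#include <assert.h>

/* Simplified stand-ins for skeletal data (not the actual IQM structs). */
typedef struct { float m[3][4]; } bonemat_t;   /* 3x4 bone pose matrix */
typedef struct {
    float pos[3];
    int   bone[4];      /* up to four influencing bones */
    float weight[4];    /* blend weights, summing to 1 */
} skinvert_t;

/* Blend the bone matrices for one vertex and write the posed
   position to out.  Run once per vertex per frame at load time and
   you have ordinary MDL-style vertex animation. */
void bake_vertex(const skinvert_t *v, const bonemat_t *bones, float out[3])
{
    for (int r = 0; r < 3; r++)
    {
        out[r] = 0;
        for (int i = 0; i < 4; i++)
        {
            if (v->weight[i] == 0)
                continue;
            const float *row = bones[v->bone[i]].m[r];
            out[r] += v->weight[i] * (row[0] * v->pos[0] +
                                      row[1] * v->pos[1] +
                                      row[2] * v->pos[2] + row[3]);
        }
    }
}
```

The cost is memory (one vertex set per frame instead of one skeleton pose), which is the same trade MDL already makes.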

iqm omits skingroups, so cannot be used as a pure superset (ignoring animation differences). luckily these are rarely used, and where they are used, they're often better replaced with a shader instead anyway. assuming your engine supports shaders...

mdl has other issues, like texture coord positions precise only to a texel, a single surface, implied flood fill requirements, onseam stuff is ugly as hell (and unsupported in any tool but qme (which breaks when onseam stuff isn't used)), palette limitations.
but worst, many of the limitations of mdl are not with the format itself, but with the engine.
by the time you have 1000 verts, all of those glVertex calls will be a significant time waster.
frankly, if you're going to rewrite enough of the engine to remove those limitations, you might as well add support for md3 too, which is typically easier to export to anyway.
ultimately, I don't see any point in mdl extensions unless qme can write them. backwards compatibility is pointless if it's going to just look terrible anyway. 
 
really? i wouldn't really let QME be a deciding factor at all. it's shit...
Best results with models always involves avoiding the use of QME in the workflow. 
 
backwards compatibility is pointless if it's going to just look terrible anyway

I tend to agree. The only reason to use 16bit or more accuracy is for large models with a large range of movement that *would* look terrible in 8-bit. If most users are going to see an awful mangled .mdl version of it anyway, then I would say the idea has failed. 
things like the vermis come to mind. or monsters that move around their model space a lot. 
Well 
It could be that using md3 makes more sense than a MDL + new 16bit refinement format.

I could have a look at how difficult a minimal md3 implementation for fitzquake/qs/glquake would be. 
Md3 
would be just awesome. 
Kinn We Get A Preview Of Your Models? 
Pwease ^_^ 
Ericw 
maybe, except with preach's idea, a 16 bit model would be fully compatible with even winquake/glquake. 
To Be Honest 
If you're gonna have a separate file, you may as well make it a MD3... 
 
True dat. Add MD3 support to the engine and boo-ya... 
 
If nothing else, md3 support saves creating a new standard. :P
Plus its already supported by a load of engines.


Software renderers might get something out of: http://sourceforge.net/p/fteqw/code/2002/tree/trunk/engine/sw/sw_model.c#l2908 but its too long ago for me to really remember much about it.

I'm tempted to port fte's mdl/md2/md3/iqm/zym/dpm/psk/md5 loader+renderer to glquake, but I doubt any engines would actually use any resulting patch which kinda makes it pointless, plus I'm feeling lazy. 
@Spike 
I'm tempted to port fte's mdl/md2/md3/iqm/zym/dpm/psk/md5 loader+renderer to glquake, but I doubt any engines would actually use any resulting patch which kinda makes it pointless

I touched on this above, but this is actually a great example of why certain features don't make it to widespread adoption.

You're right in that nobody would use the resulting patch, and the reason why is that the resulting patch can only be overly complex. 8 model formats and a renderer (I assume it's a single common renderer for all 8) is just too much. It touches too many parts of the code, and adoption would involve too much surgery, particularly if an engine author has already done work on these parts of the code themselves.

In order to drive adoption, features need to be easy to adopt. By way of contrast, an MD2 loader that brutalizes it into the same in-memory format as MDL (and they're not too different, so it wouldn't be too difficult) would be just a simple matter of a drop-'n'-go loader, some simple struct modifications and another line or two here and there. That's the kind of feature that gets adopted. 
@mh 
1) but this is actually a great example of why certain features don't make it to widespread adoption

2) particularly if an engine author has already done work on these parts of the code themselves

In very recent times, I have become increasingly convinced that many programming languages, especially C, are broken by design.

At first, this sounds crazy, because C is incredibly powerful and amazingly well constructed. And the assertion that C is "bad" implies virtually every programming language is bad, because most of them mimic its behaviors.

But I am convinced C is a terrible programming language (as are almost all the ones we use):

Let's assume there is problem X and solution Y.

1. In an "ideal programming language" (which probably does not exist at this time), the abstract solution would get reduced directly to code. In real languages, the abstract solution instead becomes a very specific implementation that drifts away from it.

This is a characteristic of most programming languages -- and it is terrible.

2. C offers too many ways of doing things. Many of these ways are ridiculous. And coding towards a specific implementation vs. abstract solution is actually rewarded by the language. So you get stuff like this:

g = (byte)((((k & 0x03) << 3) + ((j & 0xE0) >> 5)) << 3);

3. A good example of C encouraging drift away from the abstract solution is short-circuit evaluation. In a specific implementation this might make sense, but it denies even a highly intelligent or advanced compiler the chance to derive intent. The order of terms in a statement using short-circuit evaluation doesn't just obscure the logic; it removes the information permanently, so a highly advanced compiler could never know whether the order was essential or just the programmer's best guess at the right order for speed.
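A concrete example of what short-circuit evaluation encodes (function and names are mine, purely for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* The order of the && terms here is load-bearing: counts is tested
   against NULL before it is dereferenced, and C's short-circuit
   rules guarantee that ordering.  Nothing in the source records
   whether the order is essential (as it is here) or merely a guess
   at the fastest arrangement. */
int has_items(const int *counts, size_t n)
{
    return counts != NULL && n > 0 && counts[0] > 0;
}
```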

As a result the language doesn't offer:
1) Collective works
2) Additive gain
Instead, people spend their time reinventing the wheel.

And because the language offers so many coding styles and ways of doing things, you could draw an entire org chart of different styles.

This results in the kind of failures seen in Quake, where engines struggle to adopt features that have been commonplace in other engines for a decade, because the abstract solution never gets reduced to code, only an implementation-specific one.

[Of course, if a language is ever written that does this right, it will have to be written in C ;-)] 
@Baker 
You've more or less restated the problem that OOP was meant to solve.

The nirvana here was supposed to be that you could have components expressed as reusable objects, and then all that you need to know is the public interface of that object and you can link to it and use it without having to worry about its internal implementation.

Likewise if you're writing such a component you specify and document the public interface and the same magic happens for anyone using it.

At a sufficiently high level of abstraction using these objects becomes broadly analogous to putting Lego bricks together. Join a "model" object to a "renderer" object and you have support for a model format. Want to support another model format? Just drop in another object and if it understands the interface to the "renderer" (and vice-versa) you have the awesome.
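The Lego-brick idea can even be faked in plain C, the way Quake-family engines traditionally do it, with a table of function pointers standing in for the object's public interface. All names here are invented for illustration:

```c
#include <assert.h>

/* A "model format" object: the renderer only ever sees this
   interface, never the format internals. */
typedef struct modelformat_s {
    const char *extension;
    int (*load)(const void *filedata, int filesize);  /* 1 = ok */
} modelformat_t;

/* Stub loaders standing in for real format parsers. */
int load_mdl(const void *data, int size) { (void)data; return size > 0; }
int load_md3(const void *data, int size) { (void)data; return size > 0; }

/* Supporting another format is just one more row in the table. */
const modelformat_t formats[] = {
    { ".mdl", load_mdl },
    { ".md3", load_md3 },
};
```

The dispatch-by-extension loop that walks this table is the "join the bricks" step; the renderer never needs to change when a row is added.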

True, the person writing such an object still has a lot more work to do, but that's a one-time-only job and then everyone gets to join the party.

Of course the end result we have today is somewhat different to this ideal world, but hey-ho, you can't win them all. 
 
http://www.cs.yale.edu/homes/perlis-alan/quotes.html -> " 93. When someone says "I want a programming language in which I need only say what I wish done," give him a lollipop. " 
Website copyright © 2002-2024 John Fitzgibbons. All posts are copyright their respective authors.