Quake Custom Engines
Discuss modified Quake engines here, I guess. What engines do you use? What are the pros/cons of existing engines? What features would you like to see implemented/removed?
Thought 
is it possible glquake/fitzquake/etc. are missing a gamma correction step somewhere? The second shot "pops" more, I'm not sure it's just the banding doing it 
 
those are both from FTE

i think what you're noticing is the first shot has blurry floors and ceilings from the trilinear filtering (plus the lack of anisotropic.) Second shot appears to use GL_NEAREST filtering. 
 
You need to compare those images on a fairly bright monitor to really appreciate the difference, and when you do it's crystal clear that the colours are totally different, especially in the shadows, because the first image is just combining the textures with the lightmap using multiply or whatever, whilst the second image is using quake's colormap shenanigans. You can see it especially in the wood texture, which darkens to a rich, almost purpley shade in the software version. The colours have a lot more life in them, because the colormap bojangles shades and highlights the textures in a way that changes the hue in subtle ways, and your usual Fitz-esque lighting doesn't. 
 
Hmm, here is fitz085 vs winquake at 1024x768, gamma 1 on both, and gl_overbrights is turned on in fitz:
http://i.imgur.com/16ekm0r.png
http://i.imgur.com/FDd880C.png

The difference I'm looking at is the metal around the Q vs the A. In the winquake shot it's brighter around the A than the Q, in fitz they're about the same. I guess it's probably just an artifact of the lightmap * texture being snapped to a fixed set of values in WQ, but i was wondering if there was some gamma curve in the colormap table that WQ uses. 
Software Lighting Appreciation 
You should be thankful that the colormap in Quake is not nearly as crude and abysmal as the one in Doom. 
 
There's no special magic in the colormap.

It's just a 2D table which takes a palette index and a lighting intensity and returns another palette index as the lookup result.

Because Quake is an 8-bit renderer the returned palette index isn't a straightforward multiplication but instead a nearest colour match (I assume; I haven't done any deep analysis of this beyond a rough comparison with what the result would be if it had been a multiplication, and determined that they match well enough).

So sometimes what would be a brown, or a green, if it had been a multiplication comes out as a yellow or an orange instead and hence software Quake can seem to display a greater variety of colour; but you shouldn't fool yourself into thinking that it's anything other than a flawed approximation that looks good by accident rather than by design.
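
The hue-shift effect described here is easy to sketch with a toy example. The palette below is invented purely for illustration (the real one lives in gfx/palette.lmp), and the match is a plain squared-distance search:

```python
# Toy illustration of the nearest-match effect described above; the
# palette here is made up for the example, not the real Quake palette.

TOY_PALETTE = [
    (0, 0, 0),       # 0: black
    (96, 64, 32),    # 1: brown
    (200, 140, 40),  # 2: orange
    (120, 120, 40),  # 3: olive
]

def nearest(rgb, palette):
    """Index of the palette entry closest to rgb (squared distance)."""
    r, g, b = rgb
    return min(range(len(palette)),
               key=lambda i: (r - palette[i][0]) ** 2
                           + (g - palette[i][1]) ** 2
                           + (b - palette[i][2]) ** 2)

# A true multiply would darken the orange to (120, 84, 24), but an
# 8-bit renderer has to snap that to some palette entry instead:
darkened = tuple(c * 0.6 for c in TOY_PALETTE[2])
print(nearest(darkened, TOY_PALETTE))  # -> 1: the orange darkens into a brown
```

The darkened orange lands nearest the brown entry, so its hue shifts as it darkens, which is exactly the kind of accidental colour variety being described.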

The same applies to mipmap level reduction. In GLQuake it's a straightforward average of 4 texels to produce the reduced texel, but that can give results that aren't in the Quake palette, and so it's not possible in software Quake. Again, I assume that id did nothing more than nearest-colour-match this to the Quake palette, and again that means that you can get results that were never in the top-level image, which again can give the illusion of more detail, or more colours.

A GL-based engine should gamma-correct its mipmap reduction, but most don't. At the most basic level this means square each texel, then do the average, then take the square root of the result. That can give a huge improvement in the look of GLQuake right off the bat and without anything fancy needed. 
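
The square/average/square-root recipe can be sketched like this, assuming 0-255 texel values and treating squaring as a rough gamma-2.0 approximation, as the post suggests:

```python
def average_naive(texels):
    # A plain average of the texels, as in stock GLQuake's mip reduction.
    return sum(texels) / len(texels)

def average_gamma_correct(texels):
    # Square each texel into (approximately) linear light, average
    # there, then square-root back to gamma space.
    return (sum(t * t for t in texels) / len(texels)) ** 0.5

# On a bright/dark checker the naive average comes out noticeably
# darker on screen than the gamma-aware one:
quad = [255, 0, 255, 0]
print(round(average_naive(quad)))          # -> 128
print(round(average_gamma_correct(quad)))  # -> 180
```

The difference matters most on high-contrast textures, which is why distant checkered or grated surfaces look too dark in naive renderers.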
 
Most rows of the quake palette tend to gain saturation as they darken. It's an aesthetic choice (a very painterly one) that causes shadows to richen a little bit as the indexes are stepped down by shadows.

Very carefully tuned colored lighting could achieve a similar effect ... 
@Lunaran 
Software Quake lighting doesn't step down the palette indexes though. It's perfectly possible for a column in the colormap to jump from, say, palette index 89 to 21 as part of a single step in the lighting ramp.

For a real-world example, if we look at column 128 of the colormap, as the lighting darkens we step from palette index 254 to 15 to 14 to 161 to 128; at one point it goes from 4 to 36 to 3 over the space of 3 consecutive steps.

Now, obviously there's nothing in the Quake palette between indexes 4 and 3, so that 36 must have come from taking what it would have been and performing some colour matching to select a palette index to use.

So it's a good deal weirder than just a simple stepping-down of palette indexes. 
 
I can reproduce the same "painterly" effect in Photoshop in about 10 seconds by overlaying a banal black gradient over the Quake colors, and then quantizing the result into the Quake palette. 
 
for reference, here is the colormap converted to a PNG:
http://i.imgur.com/BGldFo6.png
(I used lordhavoc's lmp2pcx tool to convert to tga, then gimp to convert that to a png, but had to hardcode the width/height into lmp2pcx because the colormap doesn't have the lmp header) 
@dwere 
The thing is, whether the effect is painterly, or aesthetic, or deliberate, or some, all or none of the above, is totally irrelevant because it's not what software Quake uses for lighting. 
 
Now it should be noted that when you simply overlay black, the original colors tend to LOSE saturation, not gain it.

Ultimately, the amount of saturation you lose after quantization depends on the palette you quantize to. In Doom, for example, a lot of colors mutate into browns and greys, because the palette sucks for lighting. With a well-balanced palette (like Quake's) it's possible to preserve as much saturation as there was before quantization (but after darkening). Maybe even more in places, but not much more, and only by accident.

Unless you have a very deliberate algorithm for building the colormap, but that's not the case with Quake. 
Mh 
Software Quake uses the colormap. I was talking about reproducing said colormap (its lower part, at least) with a very simple and lazy method. Not sure what you're getting at. 
The Seldom Discussed Qlumpy Palette Tool 
The lightmap colortable generator seeks the indexed 8-bit color that is closest to the desired 24-bit color for each entry based on least distortion.
distortion = dr*dr + dg*dg + db*db
dr is the red delta, etc.

It runs through the whole palette (minus fullbrights) for each table entry. Slow on a DOS machine, which is why it was generated in advance. ToChris introduced an alphatable to the mix. Engoo embedded the table generation into the engine for colored lighting, fog, and effects. 
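
The search described above can be sketched as follows, assuming Quake's usual layout where the last 32 palette entries (indexes 224-255) are the fullbrights that get skipped:

```python
FULLBRIGHT_START = 224  # indexes 224-255 are Quake's fullbrights

def best_match(r, g, b, palette):
    """Nearest palette index by the distortion metric quoted above."""
    best_index, best_distortion = 0, float("inf")
    for i in range(FULLBRIGHT_START):  # whole palette minus fullbrights
        dr = r - palette[i][0]   # dr is the red delta, etc.
        dg = g - palette[i][1]
        db = b - palette[i][2]
        distortion = dr * dr + dg * dg + db * db
        if distortion < best_distortion:
            best_index, best_distortion = i, distortion
    return best_index

# With a toy greyscale palette, a near-white colour snaps to index 223
# rather than anything in the skipped fullbright range:
grey = [(i, i, i) for i in range(256)]
print(best_match(250, 250, 250, grey))  # -> 223
```

This is a sketch rather than the actual qlumpy code, but the brute-force loop over the palette for every table entry shows why the generation was slow on a DOS machine.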
 
but i was wondering if there was some gamma curve in the colormap table that WQ uses.
just to dismiss this idea, I took the 8th color in the first block of browns, plotted the red values from the top of the colormap to the bottom, and it follows a jagged but straight line. 
I Like It 
When clever people talk. 
 
I should probably point out that at this time, my code cheats and disables mipmaps for those surfaces.
It really only gets away with it because people tend to not run quake at 320*200 any more.
I suspect spirit's regular-gl screenshot used min=linear, mag=nearest, mip=nearest filtering (which is an impossible setting in vanilla glquake).

It's entirely feasible to upload the 4 mips to GL and use only those, but I didn't get around to doing this yet. This would allow it to avoid needless precision loss from mipmapping.
Most gl engines mipmap recursively, accumulating imprecision with every mip, so there are likely benefits to uploading the 4 mips even when not using colourmapping.
Considering this would improve loading times and reduce imprecision, it's a wonder gl engines don't already utilise all 4 mips, yet I don't know of any glquake engine that actually does this - GL_TEXTURE_MAX_LEVEL=3, combined with npot, and us engine devs have NO excuse for shoddy mipmaps.
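
The accumulating-imprecision point is easy to demonstrate with a toy 1-D example, with round-to-nearest integer standing in for the 8-bit quantisation that each stored mip level forces:

```python
def avg8(texels):
    # Average, then round half up -- each stored mip level must quantise.
    return int(sum(texels) / len(texels) + 0.5)

level0 = [1, 0, 0, 0]  # four texels of a 1-D "texture"

# Recursive: mip 1 quantises first, then mip 2 averages those values.
mip1 = [avg8(level0[0:2]), avg8(level0[2:4])]  # -> [1, 0]
mip2_recursive = avg8(mip1)                    # avg(1, 0) = 0.5 -> 1

# Direct: mip 2 built straight from the level-0 texels in one step.
mip2_direct = avg8(level0)                     # 0.25 -> 0

print(mip2_recursive, mip2_direct)  # -> 1 0
```

The recursive chain has already committed to a rounding error at mip 1 that the direct computation never makes, and each further level can only compound it.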

There'll still be precision differences from the lightmap, and where mipmap boundaries occur. I won't claim that it matches software rendering exactly, but it's close enough that you do have to stare quite hard at the walls. 
Well... 
that assumes the mips in the bsp aren't "shoddy" -- do we know what method was used to create them? (obviously might differ between id textures and user-made textures) 
@metl 
that assumes the mips in the bsp aren't "shoddy" -- do we know what method was used to create them? (obviously might differ between id textures and user-made textures)

qlumpy source code again; the GrabMip and AveragePixels functions (in quakegrb.c) show that it's just a simple averaging, followed by a nearest colour match, with some error diffusion mixed in at the end for non-fullbright texels.

An obvious risk with user-made textures is that the creator may have never bothered with making miplevels beyond 0. That's something that wouldn't show up in GLQuake and we all know that a lot of content was never even tested in software Quake back in the day (hence stray fullbright texels, bad overbrighting, etc). 
 
Software Quake lighting doesn't step down the palette indexes though.

Right, it uses the colormap ericw posted, which defines lighting per intensity and palette index as a lookup. And, in almost every case, the values present in that lookup are from the same palette row. That's all I meant.

Either way, shadows gain saturation because the darker values in the palette are more saturated.


Anybody know how Texmex generates mips? Since that's prooobably what everyone's using ... 
 
>> Either way, shadows gain saturation because the darker values in the palette are more saturated.

Not really.

But sometimes they have different hue (compared to the brighter colors of the same row). 
 
And, in almost every case, the values present in that lookup are from the same palette row.

Actually in most cases they're not.

Each palette row has 16 colours, whereas each palette entry in the colormap has 64, so straight away you can see that there's a huge amount that must come from other rows.

The darkest entries in each palette row also each have 64 lighting gradations, and because there's nothing darker in that entry's row the gradations absolutely must come from other rows.

By way of an example, here's a copy of the colormap with the 6th palette row entirely replaced with bright pink: http://imgur.com/5abrpMu

We can see that entries from this row are used in regions of the colormap that correspond to other rows, and we can also see how much of the region of the colormap corresponding to this row actually comes from elsewhere in the palette.

As ericw has demonstrated above, the colormap is pretty much a straight line as it steps down, which is expected as the code that generates it uses "frac = 1.0 - (float)l/(levels-1);" to calculate the fraction of the base colour for a given light intensity.

What I suspect you're really seeing is that eventually each column in the colormap goes to palette row 0, i.e. the greyscale row, so as shadows in software Quake darken they lose their colour too. 
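
The generation step quoted here can be sketched for a single colormap column like so. The nearest-match and palette are stand-ins and the real qlumpy code differs in detail, but the key point survives: each darkened colour is matched against the whole palette, so the chosen index is free to come from any row:

```python
LEVELS = 64  # lighting gradations per colormap column, brightest first

def nearest(rgb, palette):
    r, g, b = rgb
    return min(range(len(palette)),
               key=lambda i: (r - palette[i][0]) ** 2
                           + (g - palette[i][1]) ** 2
                           + (b - palette[i][2]) ** 2)

def colormap_column(base_rgb, palette):
    """The 64 lighting steps for one palette colour."""
    column = []
    for l in range(LEVELS):
        frac = 1.0 - l / (LEVELS - 1)  # the fraction quoted above
        darkened = tuple(c * frac for c in base_rgb)
        # Matched against the WHOLE palette, not just one row -- which
        # is where the cross-row jumps in the real colormap come from.
        column.append(nearest(darkened, palette))
    return column
```

Run over all 256 base colours this yields a 64x256 table of palette indexes, matching the shape of the colormap lump posted earlier in the thread.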
Although... 
i remember distinctly that the darkest non-black color is a deep red. You see it a lot on the edges of pitch black shadows in software mode. 
 
>> Each palette row has 16 colours, whereas each palette entry in the colormap has 64, so straight away you can see that there's a huge amount that must come from other rows.

Not necessarily from other rows. "Duplicates" (if you could call it that) are very common in colormaps. 
 
Not necessarily from other rows. "Duplicates" (if you could call it that) are very common in colormaps.

That falls down on two counts.

First of all, each palette row corresponds to (16*64 =) 1024 colormap entries, which would necessitate far too many duplicates.

Secondly, if you actually look at the colormap data itself you'll see that they mostly do come from other rows.

I've posted an example from row 6 above that you can look at and check; over two-thirds of the colormap entries for row 6 come from other rows. The same example can be repeated for every other row if necessary.

I'm not sure what part of this you're not understanding. Some of us have actually looked at the colormap, we've analyzed the data in it, we've read the code that generates it. We know what we're talking about and we're not talking theoretically. 
Website copyright © 2002-2024 John Fitzgibbons. All posts are copyright their respective authors.