Real-Time Vis = How?
Interpret byte-code. No problem. Unwind brushes. Ok. Point-polygon collision? At the end of the day, that one isn't so hard. Physics, manageable.

Real-time vis? I don't even know how to begin to develop a plan for that.

Yet somebody has to know how this can be done; Sauerbraten did it, right? What are the fundamental concepts in developing a strategy for visibility?
Noob Question About Overdraw 
I'm having some trouble understanding the technical bits - so I assume drawing X triangles that are all stacked behind each other (lots of overdraw) is slower than drawing the same number of triangles spread out on a sheet but still all in view (no overdraw) ? 
 
Carmack himself stated a while ago that you can ultimately throw brute force at every problem once the hardware is fast enough. Vis is a clever solution to a problem: precalculate instead of doing the work at runtime.

Afaik the main problem with overdraw is related to shading, i.e. too many fragment-shader invocations per pixel (scenes with many overlapping particles tend to show this; it makes older GPUs rev up nicely), but shouldn't early-Z take care of this for opaque surfaces?

But it probably still overdraws during the z-phase, as you end up trying to draw a lot of unnecessary polygons, so it depends on the player's hardware.

Any software-engine people want to chime in? I guess it might be quite bad for them at least. 
How Do I Show FPS In Quakespasm? 
Everything seems to run well for me, I guess the intel GPUs are finally good with Broadwell :-) 
 
"but shouldn't early-Z take care of this for opaque surfaces "

It does. If the depth buffer rejects a pixel, you won't eat the rendering.

Overdraw is really only a problem on systems without depth buffers, or with stacks of non-opaque surfaces - like particle systems or glass. 
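
To put the non-opaque case in concrete terms, here's a rough sketch of a typical translucent-particle pass (the function name is just illustrative, not engine code): depth writes are off and blending is on, so early-Z can't reject anything between the particles themselves, and every overlapping layer gets shaded and blended.

#include <GL/gl.h>

/* Sketch only: typical state for a translucent particle pass. */
static void R_DrawParticles_Sketch (void)
{
    glDepthMask (GL_FALSE);     /* translucent quads don't write depth */
    glEnable (GL_BLEND);
    glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    /* ... draw every particle quad here; each overlapping layer is
       shaded and blended, so the cost scales with the layer count ... */

    glDisable (GL_BLEND);
    glDepthMask (GL_TRUE);
}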
 
The main reason to always use vis, even if an engine would be faster disregarding vis entirely, is serverside culling and networking bandwidth - that benefit applies even if the client totally disregards it.
culling realtime lights via vis is also very useful, although I suppose you could also use occlusion queries for that.
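to sketch what that serverside test looks like (the helper name here is made up, but the bit layout is how quake stores a pvs row - bit N is leaf N+1, compare SV_WriteEntitiesToClient):

#include <stdbool.h>

typedef unsigned char byte;

/* sketch, not engine code: returns true if any of the leafs an entity
   touches is flagged in the decompressed pvs row for the client's leaf.
   bit N of the row corresponds to leaf N+1 (leaf 0 is the solid leaf),
   and the engine already stores an entity's leafnums with that -1 bias. */
static bool EntityInPVS (const byte *pvs, const short *leafnums, int num_leafs)
{
    int i;
    for (i = 0; i < num_leafs; i++)
        if (pvs[leafnums[i] >> 3] & (1 << (leafnums[i] & 7)))
            return true;    /* visible: worth networking */
    return false;           /* cull: don't send this entity at all */
}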

@ericw
use GL_ARB_bindless_texture
atlasing or texture arrays are also an option, but more fiddly (but also more likely to be supported by hardware).
water+skies could be done with subroutines.

@Kinn
overdraw is when you draw the same pixel multiple times. the earlier draws become redundant and are essentially a waste of memory bandwidth.
typically, graphics cards utilize an 'early z' optimisation which massively reduces the cost of overdraw, so if you draw only the world's depth first, then draw it normally (with depthfunc gl_equal), you're not wasting time calculating the colours+textures of geometry which will never be seen.
really the advantage depends on how expensive your fragment shaders are (including the cost of texture lookups+bandwidth).
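
a minimal sketch of that two-pass idea in plain gl calls (the R_DrawWorld_* names are hypothetical stand-ins, not real engine functions):

#include <GL/gl.h>

void R_DrawWorld_DepthOnly (void);      /* hypothetical: positions only, no textures */
void R_DrawWorld_Textured (void);       /* hypothetical: the normal texture-batched draw */

static void R_DrawWorld_Prepass_Sketch (void)
{
    /* pass 1: lay down depth only, no colour writes */
    glColorMask (GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthFunc (GL_LESS);
    R_DrawWorld_DepthOnly ();

    /* pass 2: full shading, but only where the depth matches exactly,
       so each visible pixel is shaded once regardless of draw order */
    glColorMask (GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask (GL_FALSE);             /* depth buffer is already correct */
    glDepthFunc (GL_EQUAL);
    R_DrawWorld_Textured ();

    glDepthMask (GL_TRUE);              /* restore the usual state */
    glDepthFunc (GL_LEQUAL);
}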

Quake's software renderer had a zero-overdraw strategy. vanilla glquake draws triangles as they come from the bsp tree (nearest first). all modern glquake ports instead batch by texture, which can result in excess overdraw.
it'd be nice to return to a single-draw-call nearest-first renderer. best of both worlds - assuming your hardware+drivers are recent enough... 
 
it'd be nice to return to a single-draw-call nearest-first renderer

I don't know much about current hardware capabilities, but is this even possible, given that there are typically dozens to 100 textures in a bsp, plus a bunch of lightmaps that, even with atlassing, probably can't fit in a single texture? Are there enough texture units on modern cards to accommodate all of this? 
@metlslime 
GL_ARB_bindless_texture
no binding = no texture unit limit.
pass the texture via a vertex attribute.

GL_ARB_shader_subroutine
efficient branching, based upon vertex attributes.

both together and you have some serious dependencies on modern hardware... but you should be able to draw the entire world in a single draw call - so long as your graphics card has enough memory (probably not an issue with vanilla textures, but it will undoubtedly be an issue with replacements).

I've not used either, so while I'm sure it's possible, I'm not sure of the actual practicalities, but hey... 
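
for what it's worth, the C side of the bindless part would look something like this - sketch only, assuming glew (or similar) has loaded GL_ARB_bindless_texture; the function name, array, and cap are made up:

#include <GL/glew.h>

#define MAX_WORLD_TEXTURES 1024                 /* made-up cap */

static GLuint64 worldtex_handles[MAX_WORLD_TEXTURES];

/* sketch: grab a 64-bit handle per world texture and make it resident
   once at load time - after this nothing needs binding at draw time. */
static void GL_MakeWorldTexturesResident (const GLuint *texnums, int count)
{
    int i;
    for (i = 0; i < count && i < MAX_WORLD_TEXTURES; i++)
    {
        worldtex_handles[i] = glGetTextureHandleARB (texnums[i]);
        glMakeTextureHandleResidentARB (worldtex_handles[i]);
    }
    /* when building the world's vertex buffer, copy the surface's handle
       into its vertices (e.g. two GL_UNSIGNED_INT components set up with
       glVertexAttribIPointer) and rebuild the sampler in glsl with
       sampler2D(uvec2(lo, hi)). */
}
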
New Vis Tool? 
Post #38, quoting here (from mh)

I've been thinking more about the idea I mentioned above (using occlusion queries) and it seems to me that something may be possible by using a combination of the world bounds and reducing to the same 16-texel scale as lightmaps.

So what I'm thinking here is to divide the world into a grid of 16x16x16 boxes, then for each box do a Mod_PointInLeaf on the center. If it's in solid don't bother, otherwise run a 6-view-draw with occlusion queries and merge the resulting leafs into visibility for this leaf (which will have been cleared to nothing visible at initialization).

Obviously there are probably edge cases that I haven't fully thought through, but overall this is an interesting enough approach that I might even code something up.


Ignoring the realtime thing (the subject of this thread) - if an offline vis tool was developed that used such a GPU-based occlusion approach to create the vis data - would this theoretically lead to higher quality visdata than the current portal-based method?

It would certainly allow for much more open maps, surely? 
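
For reference, here's the quoted idea sketched in code. Mod_PointInLeaf is the engine's own; everything prefixed Hyp_ is a hypothetical stand-in for the rendering and occlusion-query work a real tool would need (query setup, reading results back, etc.):

typedef float vec3_t[3];
typedef unsigned char byte;
typedef struct model_s model_t;
typedef struct mleaf_s mleaf_t;

mleaf_t *Mod_PointInLeaf (vec3_t p, model_t *model);            /* engine function */

int  Hyp_LeafIsSolid (const mleaf_t *leaf);                     /* hypothetical */
int  Hyp_LeafIndex (const model_t *mod, const mleaf_t *leaf);   /* hypothetical */
void Hyp_RenderCubeFace (const vec3_t origin, int face);        /* hypothetical: draw occluders for one of 6 views */
int  Hyp_LeafBoxPassedQuery (const model_t *mod, int leafnum);  /* hypothetical: occlusion query on leaf bounds */

#define GRID 16

/* vis holds one row of ((numleafs+7)/8) bytes per leaf, cleared beforehand */
void Vis_GridSample_Sketch (model_t *world, int numleafs,
                            const vec3_t mins, const vec3_t maxs, byte *vis)
{
    int rowbytes = (numleafs + 7) >> 3;
    vec3_t p;
    int face, target;

    for (p[0] = mins[0] + GRID/2; p[0] < maxs[0]; p[0] += GRID)
    for (p[1] = mins[1] + GRID/2; p[1] < maxs[1]; p[1] += GRID)
    for (p[2] = mins[2] + GRID/2; p[2] < maxs[2]; p[2] += GRID)
    {
        mleaf_t *leaf = Mod_PointInLeaf (p, world);
        if (!leaf || Hyp_LeafIsSolid (leaf))
            continue;                               /* centre is in solid: skip this cell */

        byte *row = vis + Hyp_LeafIndex (world, leaf) * rowbytes;

        for (face = 0; face < 6; face++)            /* +X -X +Y -Y +Z -Z */
        {
            Hyp_RenderCubeFace (p, face);
            for (target = 0; target < numleafs; target++)
                if (Hyp_LeafBoxPassedQuery (world, target))
                    row[target >> 3] |= 1 << (target & 7);  /* merge into this leaf's row */
        }
    }
}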
 
Kinn, it does sound like a tempting/cool idea.
I'm not sure if the vis quality difference would be noticeable.

The biggest advantage, I think, would be vis time being proportional only to the interior volume of the map. Also func_detail would be unnecessary.

The disadvantage is you'd be moving to a system that could, in corner cases, draw less than it should. I'm thinking of a hole in the wall where you have to stand in just the right place to see through, and the sample points used by vis never line up with that spot. Probably a 16x16x16 grid would be fine enough that it'd never happen in practice.

The other concern is, how fast will it be? An 8192x8192x8192 box is the worst case. That's 512^3 vis sample points using a 16x16x16 grid, and if each vis sample point can be computed in 1ms (rendering all 6 views and getting the occlusion query results) that gives you 37 hours. 
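(Spelling that out: 8192/16 = 512 cells per axis, 512^3 ≈ 134.2 million sample points, and 134.2 million ms ≈ 134,000 seconds ≈ 37 hours.)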
 
The problem with open maps is not how vis tests visibility, it's how the world it's testing is split up into a tree. If the splits in the tree don't correspond to occlude-able pockets of geometry, it doesn't matter what method you use to determine which ones can see which ones, you're always going to be 'seeing' geometry you think you shouldn't.

The solution you're looking for is careful construction and planning of your big open map, and hint brushes. 
Or 
Hint the lot and trust the player has 256 allocated... 
 
The biggest advantage, I think, would be vis time being proportional only to the interior volume of the map. Also func_detail would be unnecessary.

vis time was the main thing I was thinking of, and also that vis time would scale linearly with map size, meaning that complex "vis breaker" maps would no longer be an issue. 
Right 
I'd need to do some research to understand more how hint brushes work in order to apply them to a big open map.

Is there a way to visualise portals in quakespasm or another engine? 
Zendar Uses Hint Brushes 
 
Is There A Way To Visualise Portals In Quakespasm Or Another Engine? 
The portalization is discarded following the vis process; all that's stored in the BSP file is a list of which leafs are potentially visible from each leaf. 
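
Each leaf's row of those "potentially visible" bits is stored run-length compressed in the file; this is roughly what the engine's Mod_DecompressVis does when loading one (a sketch from memory, not a verbatim copy):

typedef unsigned char byte;

/* nonzero bytes pass through as literal visibility bits; a zero byte is
   followed by a count of how many zero bytes it stands for. */
static void DecompressVisRow (const byte *in, byte *out, int row_bytes)
{
    byte *end = out + row_bytes;

    while (out < end)
    {
        if (*in)                    /* literal byte: 8 leaf-visibility bits */
        {
            *out++ = *in++;
            continue;
        }
        int count = in[1];          /* zero byte + run length */
        in += 2;
        while (count-- && out < end)
            *out++ = 0;
    }
}
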
Right 
I have found Darkplaces has an r_drawportals 1 command. So what is that visualising? 
The Prt File, Presumably? 
 
Can't Be 
load up any map in darkplaces, and do r_drawportals 1. Doesn't need a prt file. 
 
Intersections between leafs or something? 
 
IIRC DarkPlaces does its own realtime portalization. That would be what it's visualizing. 
 
I'm afraid I'm gonna be asking some stupid questions for a while.

The only other portal-based rendering I'm familiar with is Doom 3, and the portalling there is hand-placed and coarser than Quake's, I think (it's done per room, more or less).

Is the portalling in quake on a per-leaf basis? i.e. (ignoring detail brushes for now), are visportals created for each bsp leaf? 
 
There's no such thing as a stupid question; particularly when it comes to something like this, where the knowledge actually isn't anywhere that's publicly accessible.

Anyway - don't know. Somebody else is going to have to chime in with that one; this part of tools work makes my head hurt. 
 
Is the portalling in quake on a per-leaf basis? i.e. (ignoring detail brushes for now), are visportals created for each bsp leaf?
afaik that's correct, same as what Spirit said, "Intersections between leafs". So, they're super fine-grained. With "r_drawportals 1" in DP you are seeing the portals, but it's also a visualization of the leafs at the same time.

Bringing detail brushes into the picture, the info on which faces/leaves were detail is not stored in the bsp file, so DP's "r_drawportals 1" will be showing portals as if all detail was converted to world first. 
 
Thanks guyz, so...if I compiled the map without detail brushes (I think the "jury-rigged" bjp compiler lets me do that)...then viewed it in DP with "r_drawportals 1", can I trust that I'd be seeing the actual portals that vis.exe will be using? 
Yes 
you can load the prt file in q3radiant as well; there is a plugin for that.
Quark also loads and displays the portals of a map.
Comparing those may help you. Idk what you are up to tho. 