Real-Time Vis = How?
Interpret byte-code. No problem. Unwind brushes. Ok. Point-polygon collision? At the end of the day, that one isn't so hard. Physics, manageable.

Real-time vis? I don't even know how to begin to develop a plan for that.

Yet somebody has to know how this can be done; Sauerbraten did it, right? What are the fundamental concepts in developing a strategy for visibility?
 
Thanks for that link and perhaps I can use it to find others like it. 
GPU-accelerated Vis 
I've been thinking more about the idea I mentioned above (using occlusion queries) and it seems to me that something may be possible by using a combination of the world bounds and reducing to the same 16-texel scale as lightmaps.

So what I'm thinking here is to divide the world into a grid of 16x16x16 boxes, then for each box do a Mod_PointInLeaf on the center. If it's in solid don't bother, otherwise run a 6-view-draw with occlusion queries and merge the resulting leafs into visibility for this leaf (which will have been cleared to nothing visible at initialization).

Obviously there are probably edge cases that I haven't fully thought through, but overall this is an interesting enough approach that I might even code something up. 
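Roughly, a sketch of the two building blocks (with hypothetical struct and function names; Quake's real mnode_t/mleaf_t and Mod_PointInLeaf differ): the BSP point-location walk, plus the 16-unit grid pass that throws away samples landing in solid.

```c
#include <assert.h>
#include <stddef.h>

#define CONTENTS_EMPTY -1
#define CONTENTS_SOLID -2

/* Hypothetical node/leaf union: contents == 0 marks an internal node,
   negative contents marks a leaf (the engine's actual structs differ). */
typedef struct { float normal[3], dist; } plane_t;
typedef struct node_s {
    int contents;
    plane_t *plane;                 /* valid for internal nodes only */
    struct node_s *children[2];     /* front, back */
} node_t;

/* Mod_PointInLeaf-style walk: classify the point against each node's
   plane and descend until we land in a leaf. */
static node_t *point_in_leaf(const float p[3], node_t *node)
{
    while (node->contents == 0) {
        const plane_t *pl = node->plane;
        float d = p[0]*pl->normal[0] + p[1]*pl->normal[1]
                + p[2]*pl->normal[2] - pl->dist;
        node = node->children[d >= 0.0f ? 0 : 1];
    }
    return node;
}

/* The grid pass described above: sample box centres on a 16-unit
   lattice and skip samples that land in solid; each surviving sample
   is where the real code would render 6 views with occlusion queries. */
static int count_sample_points(node_t *root,
                               const float mins[3], const float maxs[3])
{
    int n = 0;
    for (float z = mins[2] + 8; z < maxs[2]; z += 16)
        for (float y = mins[1] + 8; y < maxs[1]; y += 16)
            for (float x = mins[0] + 8; x < maxs[0]; x += 16) {
                float p[3] = { x, y, z };
                if (point_in_leaf(p, root)->contents != CONTENTS_SOLID)
                    n++;    /* would run the 6-view occlusion pass here */
            }
    return n;
}
```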
 
I guess your only concern there is memory usage but these days, who cares? 
 
what about the case where there is a point in between your chosen sample points that can see more than any of the neighboring sample points? That seems like a really common case. 
@metl 
Like I say, I haven't really thought through everything yet. It may be possible to use a finer grid, or you may be able to say things like "if leaf A can see leaf B then leaf B can also see leaf A", or whatever.

I'm not going to let "what ifs" detract from putting together a proof of concept; at the very least I'm interested in comparative performance on a known vis-breaker, and if it runs well enough then it'll be worthwhile putting in the extra effort to deal with this stuff. 
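The "if A sees B then B sees A" fix-up could look something like this sketch, operating on a decompressed vis matrix laid out as one bit-row per leaf (a hypothetical flat layout, not the engine's run-length-compressed visdata):

```c
#include <assert.h>

/* Enforce symmetry over a flat leaf-visibility bit matrix:
   vis[a*rowbytes ...] is leaf a's row, bit b means "a sees b". */
static void make_vis_symmetric(unsigned char *vis, int numleafs, int rowbytes)
{
    for (int a = 0; a < numleafs; a++)
        for (int b = a + 1; b < numleafs; b++) {
            int ab = (vis[a*rowbytes + (b >> 3)] >> (b & 7)) & 1;
            int ba = (vis[b*rowbytes + (a >> 3)] >> (a & 7)) & 1;
            if (ab | ba) {      /* seen in either direction: set both */
                vis[a*rowbytes + (b >> 3)] |= 1 << (b & 7);
                vis[b*rowbytes + (a >> 3)] |= 1 << (a & 7);
            }
        }
}
```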
 
Might I suggest a certain UnVISable Qonquer map from the last map jam. :P 
 
(I've long wondered if vis doesn't intentionally show a little bit behind walls for WinQuake's dynamic lighting. Like a lavaball on the other side showing through. Because I'm not aware of anywhere that a dynamic light won't show through a wall, which means the server sent the rocket or lavaball to the client or a player with Quad.) 
@WarrenM 
That's a good suggestion.

One other advantage I can think of to this approach is that it should no longer be necessary to seal a map.

@Baker: start.bsp in ID1 - if you go to the normal skill hall, stand near the left-hand wall and look towards the right-hand wall: no shine-through. It's easy enough to add an r_lockpvs cvar for testing purposes. Otherwise the answer is in SV_FatPVS. 
 
I think you'd still need a sealed BSP tho. Otherwise you'd have no way of knowing what is void and what is valid game space. 
 
technically you don't, you'll just have tons more leafs, faces, marksurfaces, lightmaps, clipnodes, etc. I think vis can be modified to accept leaky maps as well, it will just take a lot longer due to all the extra leafs it needs to process. 
 
I guess that's true ... there WOULD be void on the inside of brushes. But you'd have to basically place those vis sampler nodes he's talking about throughout the entire Quake world grid/cube since it would all be playable game space in a leaking map. 
 
yeah true. With this technique, the processing time scales with the total volume of non-solid space, rather than the total number of leafs. 
 
The point about the GPU-accelerated approach is that this shouldn't matter. It will still take longer than on a sealed map because a lot of formerly CONTENTS_SOLID leafs will now be CONTENTS_EMPTY, but it should take significantly less time than software vis because you're just rendering 6 views and reading back occlusion query results. The resulting visdata may also be much higher quality.

For release-quality maps sealing is of course a must, but as a development aid, faster vis times and a higher quality result should be a win. 
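The readback step could be as simple as this sketch: after drawing one of the 6 views with one occlusion query per candidate leaf, fold each query result into the sample point's vis row. In real code samples_passed would come from glGetQueryObjectuiv(query, GL_QUERY_RESULT, ...); here it's just a parameter so the merge logic stands alone.

```c
#include <assert.h>

/* If any fragment of the leaf's bounding box survived the depth test,
   mark that leaf visible in this sample point's bit-row. */
static void merge_query_result(unsigned char *visrow, int leafnum,
                               unsigned int samples_passed)
{
    if (samples_passed > 0)
        visrow[leafnum >> 3] |= 1 << (leafnum & 7);
}
```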
 
Why not just make a realtime raytracing engine? It's been done before (albeit with hardcore CPUs):

http://www.q3rt.de/
http://www.q4rt.de/ 
Q3RT 
 
If I Have Time 
I want to try making a version of light.exe that runs on the gpu (OpenCL).
I'm pretty sure it's possible, only question is whether it will be much faster than the cpu version. 
Source Code 
Where can I find source code for modern vis tools? I'd like to learn how some of this stuff is implemented. 
 
You probably don't. :) Modern tools are stapled on top of the old code. And the old code will drive you to drink, trust me. 
 
There's no hope for this project if the code is THAT bad :-) 
 
He's talking about a whole new methodology ... something done at runtime. I haven't seen the game code itself so I don't know how hard it is to mod but I suspect it's been improved in the various engine code bases.

The tools ... not so much. :) 
It's A Hefty Task 
and shit, if you're re-writing engine code to vis during runtime, then you might as well make a whole new map format to boot. 
The Problem With Vis... 
...is that it really operates on too fine-grained a level for a modern renderer. Like a lot of things in Quake, it made sense for a software renderer on a lower-specced PC, where every polygon you could avoid drawing was a performance win, but with even halfway decent hardware acceleration that just goes out the window.

Some relevant notes about the XBox 360 port of Quake 2: http://www.eurogamer.net/articles/digitalfoundry-2015-quake-2-on-xbox-360-the-first-console-hd-remaster - it just didn't bother using vis at all and still managed 60 fps with 4x MSAA at HD resolutions.

Culling of unseen polygons is also eliminated in the Xbox 360 version, deemed unnecessary due to the paltry number of triangles used per map - meaning that the entire world is drawn each and every frame.

That's fine for original content but is obviously going to fall down (badly) on some of the more brutal modern maps. But it does highlight that the really fine-grained per-leaf visibility is essentially disposable when dealing with more modern PCs than the original engines targeted.

if you're re-writing engine code to vis during runtime, then you might as well make a whole new map format to boot

This can seem to make sense on the surface, but you need to dig a little deeper. One of the reasons why BSP2 was successful is that it changed as little as possible in the format. There were discussions about what features it should have while it was being specced (and I did the original spec and implementation, so I can be 100% certain about this), and it kept coming back to making it as easy as possible for other engine authors to implement. So while it could have had features like built-in RGB lightmaps, 32-bit textures (or even a separate palette per texture), or others, it didn't. It didn't even change the .map format, so mappers could continue using their favourite editors, and all that was required in the engine and tools was a few #defines, some new structs and a bunch of copy-and-paste code.

What's really required to make vis more efficient is to change its granularity from per-leaf to something like per-room. I have no idea what that would entail in terms of tool-work, but engine-side it could lead to better efficiency from less BSP tracing while drawing, and from being able to build static batches from world geometry. 
Although 
we have per-room vis already, in a way, if the mapper makes heavy use of func_detail.

crazy idea, maybe you can recover the "leaf clusters" in the engine if you want coarser granularity vis data for rendering. Just group all leafs together that have the same visdata? 
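That grouping idea could be sketched like this (again assuming a hypothetical flat, decompressed visdata layout with one row per leaf; an O(n^2) compare for clarity, where a hash table would be the obvious upgrade):

```c
#include <assert.h>
#include <string.h>

/* Group leafs whose decompressed vis rows are byte-identical into
   "clusters"; returns the cluster count, writes one id per leaf. */
static int build_clusters(const unsigned char *vis, int numleafs,
                          int rowbytes, int *cluster_of)
{
    int numclusters = 0;
    for (int i = 0; i < numleafs; i++) {
        cluster_of[i] = -1;
        for (int j = 0; j < i; j++)
            if (memcmp(vis + i*rowbytes, vis + j*rowbytes, rowbytes) == 0) {
                cluster_of[i] = cluster_of[j];  /* same visdata: same cluster */
                break;
            }
        if (cluster_of[i] < 0)
            cluster_of[i] = numclusters++;      /* first of a new cluster */
    }
    return numclusters;
}
```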
Stuff 
BSP2 was limited by the requirement that it needed to work with Worldcraft and a Fitzquake-derived engine, because switching editors proved to be an unpopular idea, and dropping our sympathetic newborn engine for Darkplaces or FTE seemed too heartless even to me at the time, although it would have been the right thing to do in retrospect (and it was the first thing I did afterwards).

In the big picture, BSP2 is a foul compromise but a nice thing if you want to keep the Q1BSP pipeline.

About reducing the VIS detail:

Add a compiler switch that lets the mapper disable automatic vising. Then add a new custom texture (like "trigger") that lets the mapper create portals manually.

I did a similar thing in my single-player maps (which are both very large and very detailed) when I still used FBSP and it resulted in a HUGE performance boost. Despite already using detail brushes. I inserted just enough portals to cull far away areas of the map, instead of going overboard with it like the Quake compilers do by default. Performance is then mostly limited by batching.

The performance increase was comparable to the improvement in Vis time after using func_detail.

I got the idea from looking at how Call of Duty (1) does it, since that's a Quake 3 based game with relatively large outdoor maps. Turns out they changed it completely and yes, the mapper has to manually portal the map in that game.

Quake 1 (and Quake 3) vising was developed for corridor shooters running on 90s consumer hardware. No wonder they tried to cull every little bit whenever possible. But hardware and Quake mapping have changed so much that this formerly very effective method has turned into an obstacle, and a massive obstacle at that.

It is probably less noticeable with deathmatch maps, and thus Quake 3 maps. But single player maps are bogged down by this massive amount of unnecessary info. 
Naive Musings... 
So just for laughs I flew around jam6_ericwtronyn - which is about the heaviest thing I can throw at the quake engine right now (I think) - and noted that in the most epic view I could find, I was getting around 30,000 wpoly and 70,000 epoly. I think this map is unvised, but looking at the structure of it, I can't imagine vising it would bring those polycounts down much.

Now, unless I'm missing something, those kinds of polycounts shouldn't trouble any sort of even vaguely modern hardware (didn't Doom 3 have like 150,000 polys in a typical scene in 2004?)

So...questions...

I get a solid 60fps with jam6_ericwtronyn on a reasonably modern laptop running Quakespasm. Does anyone here get bad performance in this map, and if so, what hardware/engine are you running it on?

Are there other factors at work that cause unvised quake maps to perform slowly that are not to do with polycount? Things like 400 monsters running LOS checks? 
Website copyright © 2002-2024 John Fitzgibbons. All posts are copyright their respective authors.