General Abuse
Talk about anything in here. If you've got something newsworthy, please submit it as news. If it seems borderline, submit it anyway and a mod will either approve it or move the post back to this thread.

News submissions: https://celephais.net/board/submit_news.php
Yeah 
but with regular zip/tar, demos compress surprisingly poorly. They're just resistant to the usual LZ/deflate techniques. 
 
But nobody wants massive downloads when a quick compression can save anywhere up to 10 minutes on upload and download, even more for dialup users.

7zip isn't too arbitrary, and neither is dzip - both are pretty standard q1 formats (ok, 7zip isn't as well known, but it's a lot better).

You can get away with standard zip or . . . tar? But every now and again it corrupts the compression or something. I've never once had a corrupt file in 7zip - touch wood.

Cue wood-touching jokes in 5 . . . 
Hehe 
"Arbitrary compressors" ... There's nothing arbitrary about the extremely well-established and stable WinRAR or 7-Zip; they just provide superior compression and a wide range of options to strike a good compromise between speed and ratio.

Using the 20-year-old zip tech is only interesting when you don't care much about compression ratio, want the widest accessibility, or are mainly targeting users with low computer knowledge.

And even in the latter case, you can just create a self-extracting archive, just like many installers already do. 
Dzip 
usually used for Quake demos because Quake can access the archive without it needing to be extracted.

I think the reason it's usually used for demos is that it was designed specifically to compress quake demos really well. Quake can't open them. If some of the custom quake engines can, that is a recent development and people have been using dzip long before that :) 
DZip 
can be seen as a multimedia filter wrapped around std zip and usually offers major compression gains while still keeping the speed of zip.

I actually made a brief attempt to see if DZip could be used in conjunction with 7-Zip for even more compression, but that didn't seem to yield much. The DZip filter already removes the redundancy, leaving little for any other compressor.

And several engines have been able to load dzips directly for several years now: my NehQuake, JoeQuake and any of its descendants. 
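To illustrate the "multimedia filter" idea in general terms (this is not dzip's actual format — the frame layout and the filter here are invented for the sake of the example): delta-encoding smoothly varying values before deflate turns the byte stream into mostly repeated patterns, which is where the gains come from:

```python
import struct
import zlib

# Hypothetical demo data: a player position drifting smoothly each frame,
# stored as 32-bit floats (similar in spirit to Quake demo coordinates).
frames = [(100.0 + 0.5 * i, 200.0 + 0.25 * i, 24.0) for i in range(2000)]
raw = b"".join(struct.pack("<3f", *f) for f in frames)

# Delta filter: keep the first frame, then store per-component differences.
deltas = [frames[0]] + [tuple(c - p for p, c in zip(prev, cur))
                        for prev, cur in zip(frames, frames[1:])]
filtered = b"".join(struct.pack("<3f", *d) for d in deltas)

# The delta stream is far more repetitive, so deflate does much better
# on it than on the raw coordinates, even though both are the same size.
print(len(zlib.compress(raw, 9)), len(zlib.compress(filtered, 9)))
```

A real demo filter would know the actual message structure, but the principle is the same: remove the predictable part before handing the bytes to a general-purpose compressor.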
Err 
1) It wasn't a demo
2) The file is 249,729 bytes. doubling the size would mean about one more second of download on my line (i can't even save the file that fast), and this is the smallest dsl you can get around here.
3) you can just create a self-extracting archive and alienate experienced/other os users in turn.

i just don't see a reason to use anything other than zip on such small files, besides the oss argument (7-zip?). :) 
Crazybump 
i found this on another board: http://www.crazybump.com/

it can take a typical shaded diffuse map (quake, quake3) and extrapolate a normal map from it with pretty good results...

on top of that, it can render out the regular greyscale map for regular bump mapping as well as make a fake ambient occlusion map based on the normals of the texture.
i've been playing around with it for a bit, and i've gotten really good results with some quake 3 textures. at lower res it has a little more trouble, but anything over 128x128 seems to be pretty good. 
Crazybump 
Yeah it's nice, good to use alongside the photoshop normal plugin to get best results. 
I Only Post On Func While Drunk 
So sup. 
Thanks For That Necros 
Looks like a viable alternative to the Nvidia toolset. 
It Appears 
to be based on difference-of-gaussian filtration, which I often do by hand when I'm trying to pull a more accurate normal out of a diffuse. (There's nothing I hate more than seeing a brick wall normal map with a ridge at the top of every brick and a trough at the bottom because someone just fed the raw diffuse map into the nvidia plugin.) 
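The difference-of-Gaussians trick itself is simple to sketch — a generic numpy version (not CrazyBump's actual code; the sigmas are arbitrary): subtracting a wide blur from a narrow one keeps mid-scale shading and discards both flat tones and pixel noise, which is roughly what you want from a height estimate:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur on a 2D float array (edges clamped)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    pad = np.pad(img, radius, mode="edge")
    # Blur along rows, then along columns.
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, rows)

def dog_height(gray, sigma_fine=1.0, sigma_coarse=4.0):
    """Difference of two blurs: a crude height estimate that responds to
    mid-scale structure while ignoring overall brightness and fine noise."""
    return gaussian_blur(gray, sigma_fine) - gaussian_blur(gray, sigma_coarse)
```

A flat (constant) input comes out as zero height, which is exactly the point: absolute brightness carries no shape information, only local differences do.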
Lun 
that still happens sometimes with this. it's far from perfect, but much better than just sticking the diffuse in there.

you can fiddle with different levels of detail to bring out (or hide) different aspects of the image. the 'very large' slider will accentuate (usually) the larger differences in height. the 'analyse for 3d' thing is hit and miss. sometimes it works, but other times, not so much.

you could probably get a good mix by doing the basic shape of the stuff yourself, drawing the greyscale in photoshop and then mixing in the greyscale this program creates, masking out the less good areas or something. 
Different Thing 
I've heard a lot of arguments about retouching normalmaps; hand painting, basically. The constant pompous response from programmers is that it's no longer a normalmap if retouched.

I'll be trying this out monday and through the week by which time I'll be (maybe) posting some intelligent feedback.

When putting the diffuse through the juggins of normalmapping are you putting the exact same diffuse or selected layers of it? Obvious question, I think, but I've been evolving methods for a while and how others have approached it is interesting.

Especially when some twat is trying to justify their paycheck by saying everything needs diffuse, normal, specular, etc. Even though on screen it's smaller than a little fingernail. 
Well 
The constant pompous response by programmers is that it's no longer a normalmap if retouched.

Artists are capable of understanding what a "normalized vector" is and how that translates to color because they're reading the normal map in the first place. Programmers don't trust artists to be smart. :) (and granted, it's usually because artists are putting 2048 textures on things like donuts)

Don't ever just feed the diffuse into the normal map filter. At the very least, pick the color channel that looks the most like a heightmap and retains the least color/dirt/lighting information. Invert it if that helps. But remember that the height map you feed the filter will not look like the diffuse map at all, and sometimes it takes a lot of work with overlays/airbrushing/dodging and burning/etc, but it's always worth it. 
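Once you do have a decent heightmap, the height-to-normal step itself is just finite differences plus normalization — a generic sketch, not the nvidia plugin's exact math (`strength` is an invented knob for slope exaggeration):

```python
import numpy as np

def height_to_normal(height, strength=2.0):
    """Convert a 2D heightmap (floats in [0,1]) into an RGB-encoded
    normal map: central differences give the slope, the per-texel normal
    (-dx, -dy, 1) is normalized, then packed into [0,255] the usual way."""
    dx = np.gradient(height, axis=1) * strength
    dy = np.gradient(height, axis=0) * strength
    nz = np.ones_like(height)
    length = np.sqrt(dx**2 + dy**2 + nz**2)
    n = np.stack([-dx / length, -dy / length, nz / length], axis=-1)
    return np.rint((n * 0.5 + 0.5) * 255).astype(np.uint8)
```

A flat heightmap yields the familiar uniform blue (128, 128, 255) map, i.e. every normal pointing straight out of the surface.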
 
The constant pompous response by programmers is that it's no longer a normalmap if retouched.

Well technically, to be a normal map all the vectors have to be unit length, which can easily stop being true if you retouch an RGB image. But since those vector components are pretty much passed directly into the dot3 equation, it probably doesn't matter, and none of the math really requires that the vectors are all normalized as far as I know. 
Uh... 
But any artist with half a clue will obviously normalise his map once he's messed around with it by hand, so the map is just as valid as it was to start with really.
I pretty much do what Lun says, and I split up my diffuse a lot. When I'm making the diffuse I think a lot about what different layers I should keep to be able to produce the best normal map, using different settings on each layer as I transform it, and then combining them together. When I have the time I do a quick object in max or zbrush though. 
Question: 
how do you normalize a normal map -- is there a plugin or filter in photoshop that does it? 
Metl... 
Bal: 
ah, I'd used that before but didn't know it could normalize an existing normalmap. 
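If you'd rather not rely on a plugin, renormalizing a hand-retouched map is easy enough to script — a minimal sketch, assuming the usual [0,255] to [-1,1] encoding:

```python
import numpy as np

def renormalize(normal_map):
    """Re-unitize an RGB-encoded normal map after hand editing.
    Decode [0,255] -> [-1,1], rescale each vector to unit length,
    then encode back to [0,255]."""
    v = normal_map.astype(np.float32) / 255.0 * 2.0 - 1.0
    length = np.linalg.norm(v, axis=-1, keepdims=True)
    length = np.maximum(length, 1e-6)   # guard against all-zero texels
    v /= length
    return np.rint((v * 0.5 + 0.5) * 255.0).astype(np.uint8)
```

After this pass every texel decodes back to (approximately, within 8-bit quantization) a unit vector, so the map is mathematically a normal map again regardless of how it was painted.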
Btw, Are 
asset creators already using that four-flash camera or some laser scanner or something to make the normalmaps from real-world stuff, along with the diffuse ones... specular might be harder? 
Hrm 
i wonder if you couldn't extract the 3d from two slightly shifted photographs (like eyes) 
Bambuz 
what are we, rich? 
Lun 
since it costs so much to make them by hand (artist salaries), I'd imagine some equipment that speeds up the creation by many times would quickly pay itself back.

But maybe the textures are already bought from third companies and this is their money making secret. 
Bambuz 
The use would be limited, as most of the time you just don't have what you want a texture of available as a real object. 
Interesting Bal 
Maybe that's the case. Or maybe it depends on the genre then. I remember the guys at Remedy making Max Payne going to New York and taking a huge amount of photographs of everything, and a lot of that ended up in the game. (The dev showed us where many textures had come from.)

On the other hand if you're doing some alien stuff maybe then original material is of limited use... But I even remember Jurassic Park guys using a laser 3d scanner so they could use elephant skin for the big dinosaur renders...

Everybody remembers that camera with four flash bulbs in the corners, which fire sequentially so that a lot of depth information can be extracted from the image automatically and quite easily. Was some project, maybe at MIT? 
Website copyright © 2002-2025 John Fitzgibbons. All posts are copyright their respective authors.