make darkplaces look as crappy as you can get it
Moderator: InsideQC Admins
29 posts
• Page 2 of 2 • 1, 2
LordHavoc wrote:If/when I add shadowmaps they will work in 16bit, but I expect them to be slower (and hence optional).
My own efforts with shadowmaps have stalled, as it's seemingly difficult to make them work decently with accuracy in a game env. I have them working nicely in a test map that is a single room, though the shadow resolution kind of sucks. The larger problem is how to deal with multiple light sources without killing performance, and keeping it reasonably accurate. I suppose the depth buffer could be copied off and accumulated in another buffer for each light source/caster group, but that seems kinda slow.
http://red.planetarena.org - Alien Arena and the CRX engine
- Irritant
- Posts: 250
- Joined: Mon May 19, 2008 2:54 pm
- Location: Maryland
Irritant wrote:My own efforts with shadowmaps have stalled, as it's seemingly difficult to make them work decently with accuracy in a game env. I have them working nicely in a test map that is a single room, though the shadow resolution kind of sucks. The larger problem is how to deal with multiple light sources without killing performance, and keeping it reasonably accurate. I suppose the depth buffer could be copied off and accumulated in another buffer for each light source/caster group, but that seems kinda slow.
The way I intend to approach them is in a "film strip" texture, 6 cubemap sides encoded as a strip layout, each one with slightly more than 90 fov so that filtering won't go off the edge of a given side, the lighting shader would pick one of the 6 projections for each pixel's direction from the light source.
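The per-pixel face pick described above boils down to taking the component of the light-to-pixel direction with the largest absolute value. A minimal sketch (the face numbering here follows the usual GL cubemap convention; an engine's "film strip" layout may order them differently):

```c
#include <math.h>

/* Pick which of the 6 cube faces a direction vector falls on:
 * the face is the axis with the largest absolute component.
 * Face order assumed here: 0=+X 1=-X 2=+Y 3=-Y 4=+Z 5=-Z. */
int cubeface_for_direction(float x, float y, float z)
{
    float ax = fabsf(x), ay = fabsf(y), az = fabsf(z);
    if (ax >= ay && ax >= az)
        return x >= 0 ? 0 : 1;
    if (ay >= az)
        return y >= 0 ? 2 : 3;
    return z >= 0 ? 4 : 5;
}
```

In the lighting shader the same selection would run per pixel on the light-to-surface vector, then index into the corresponding strip section.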
The real trick is in having your meshes in a form that you can deliver QUICKLY 1-6 times per light (and that's a lot of times per scene) depending on the side you're rendering at the time.
It's worth noting that it's possible to use the instanced drawing capabilities in OpenGL 3.0 to project 6 copies of a vertex to different output locations in the vertex shader based on the instance ID, thus rendering all 6 images in one call, but I'm not sure how efficient this really is. (There are some articles on it however)
Honestly the only things shadowmaps are really good at are spotlights/flashlights (which they were designed for), and sun shadows, however sun shadows have such a resolution issue that you have to use parallel split shadowmapping (3 or more shadowmaps of the same image size but mapped to larger and larger portions of the world, where geometry nearby uses the high quality one, and further away another shadowmap, and so on).
Omnidirectional lights are really the heart of a game's lighting, however, and shadowmaps are not very good at them; but people keep claiming they're faster than stencil shadows done on the CPU, so we'll see.
The other aspect is that shadowmaps suffer badly from jaggies, so you have to implement your own GL_LINEAR filtering (facilitated somewhat by GL_ARB_texture_rectangle whose coords are pixels) and use float textures (you can however use a depth texture and use the builtin shadowmap sampling, but it varies by vendor - NVIDIA supports GL_NEAREST and GL_LINEAR, ATI only supports GL_NEAREST, Intel uses some horribly slow high quality filter that makes the builtin shadowmap functionality completely unusable).
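The hand-rolled GL_LINEAR mentioned above amounts to percentage-closer filtering: do the depth comparison on each of the four nearest texels separately, then bilinearly blend the 0/1 results. A simplified CPU-side sketch (texel-unit coordinates as with GL_ARB_texture_rectangle; no depth bias for brevity):

```c
/* 2x2 percentage-closer filtering: compare the fragment depth
 * against the four nearest shadowmap texels individually, then
 * bilinearly weight the 0/1 results. 'map' is a w*h depth
 * texture; u,v are in texel units. */
float shadow_pcf_2x2(const float *map, int w, int h,
                     float u, float v, float fragdepth)
{
    int x = (int)u, y = (int)v;       /* integer texel */
    if (x > w - 2) x = w - 2;         /* clamp to edge */
    if (y > h - 2) y = h - 2;
    float fx = u - x, fy = v - y;     /* bilinear weights */
    float s00 = map[y * w + x]           > fragdepth ? 1.0f : 0.0f;
    float s10 = map[y * w + x + 1]       > fragdepth ? 1.0f : 0.0f;
    float s01 = map[(y + 1) * w + x]     > fragdepth ? 1.0f : 0.0f;
    float s11 = map[(y + 1) * w + x + 1] > fragdepth ? 1.0f : 0.0f;
    return (s00 * (1 - fx) + s10 * fx) * (1 - fy)
         + (s01 * (1 - fx) + s11 * fx) * fy;
}
```

The crucial detail is that the comparison happens before the blend; blending the depths first and comparing once gives a meaningless "average depth" and hard edges again.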
But even filtering doesn't entirely cure the jaggies, so usually people implement Variance Shadowmapping to smooth the edges, which isn't the same as true soft shadowing (for one thing it becomes blurrier the further from the light, not the further from the occluder), and that eats GPU power like nothing else (a couple 1D gaussian blur shader passes per shadowmap texture, before even rendering the lighting in the main framebuffer, yay).
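For reference, the lookup side of variance shadowmapping is cheap; the cost is in the blur passes. The map stores depth and depth squared (the blurred "moments"), and Chebyshev's inequality bounds the lit fraction. A sketch; the minimum-variance clamp is a common fudge and its value here is arbitrary:

```c
/* Variance shadowmap lookup: moment1 = blurred depth,
 * moment2 = blurred depth^2. Chebyshev's inequality gives an
 * upper bound on the fraction of the filter region that is lit. */
float vsm_visibility(float moment1, float moment2, float fragdepth)
{
    if (fragdepth <= moment1)
        return 1.0f;                      /* fully lit */
    float variance = moment2 - moment1 * moment1;
    if (variance < 1e-5f)
        variance = 1e-5f;                 /* avoid divide by zero */
    float d = fragdepth - moment1;
    return variance / (variance + d * d); /* Chebyshev upper bound */
}
```

The blur happens in the light's view, which is exactly why the penumbra widens with distance from the light rather than distance from the occluder.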
So I'm far from enthused about shadowmaps... But the CPU cost of stencil shadowing is becoming a problem with high poly skeletal models (which would render considerably faster with a shader doing the skeletal processing).
- LordHavoc
- Posts: 322
- Joined: Fri Nov 05, 2004 3:12 am
- Location: western Oregon, USA
Basically it's a way to simulate real world shadows.
There are lots of ways to do it: shadowmaps, shadowvolumes, etc.
Because they try to cast from every light source they're pretty heavy on CPU time though, so it's pretty much needed to cull sources you can't see or the engine would slow to a crawl.
Sorry, not good at explaining.
- revelator
- Posts: 2567
- Joined: Thu Jan 24, 2008 12:04 pm
- Location: inside tha debugger
a shadow map is an image (or 6) that is projected upon the world.
It's drawn with some sort of funky blending such that where its value is less than the screen's depth, a 0 is produced, and where the screen's depth is less than the shadowmap's depth value, a 1 is produced.
Thus due to the projection, anything closer to the light than the closest surface is lit, and anything behind that surface is dark.
And the surface itself? Well, it's probably flickering randomly on random bits of the surface due to depth buffer precision. Thus offsets and depths need to be biased a little so that the limited precision of the screen's depth buffer and the shadowmap's depth never make a surface shadow itself when it should only be casting a shadow.
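The 0/1 compare and the bias described above can be sketched as one line of C. The bias value is illustrative; real engines tune it, often adding a slope-scaled term:

```c
/* Basic shadowmap test: 'stored' is the depth the light saw for
 * this direction; 'frag' is the current pixel's distance from the
 * light. Depth-buffer quantization means the two differ by a small
 * random amount on the caster's own surface, so without a bias the
 * result flickers per pixel ("acne"). Returns 1 = lit, 0 = shadowed. */
int shadowmap_test(float stored, float frag, float bias)
{
    return (frag - bias <= stored) ? 1 : 0;
}
```

With zero bias a fragment exactly on the stored surface lands on either side of the compare at random; the bias pushes the whole compare plane slightly toward the light, which is also what detaches the shadow from its caster.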
The real advantage to shadow maps is that you can push a million triangles in one without caring about side facing triangles, without extruding anything off to infinity.
The disadvantage to shadow maps is that the shadows start behind the surface casting the shadow (and sometimes in front). In order to generate a full spherical light, you need 6 shadow maps arranged as a cube.
Due to the projection and radial stuff etc, the shadowmap will likely have nice crisp edges that really show the pixelation in the expanded shadow map image.
So yeah, stencil shadows take two passes to draw the shadow volume (two sided stencils can do it in one pass), and one extra pass to blend in the light.
Shadow maps take 6 passes to draw the shadow "volume", although these are individually cheaper than the stencil passes, and the results can be cached. When you want to draw them on the screen you need to render the shadow stuff to a 7th texture and blend that a few times to get rid of the pixelation in the shadow buffer (or probably just use a fragment program to do that). And *then* you can start applying the light's actual light values to the lit surfaces.
They might be the future, but they're frikkin 'orrible to play with.
- Spike
- Posts: 2892
- Joined: Fri Nov 05, 2004 3:12 am
- Location: UK
I'd like to point out that both shadowvolumes and shadowmaps are friendly to caching techniques at the geometry level.
Shadowmaps can be cached as geometry (a set of 6 meshes to draw into the textures each frame) or as texture - store one shadowmap for world shadows and another for entity shadows, clear and update the entity shadows each frame, it's technically possible to do this as one texture (using different color channels for static and dynamic) but it's not clear to me whether it's faster to use one or two textures, due to the glColorMask involved for one.
Besides the need to store two shadowmaps for caching at the texture level, you also run into the problem with infinity in a Variance Shadow-Mapping method (the world shadowmap is not likely to contain infinity in the depth buffer, but the entity shadowmap definitely will).
So if you're using variance shadowmapping I think you always end up replaying the geometry data, since infinity wrecks the edge smoothing.
The chief difference between shadowvolumes and shadowmaps is whether the depth comparisons are being done in the main view (stencil shadows are essentially marking pixels as within shadow or not) or the light view (shadowmaps are marking pixels as within shadow or not based on the distance from the light source encoded in a texture).
Obviously shadowvolumes will always be razor sharp (no jagged edges) because they are done in screenspace, and shadowmaps will always suffer from jagged edges or over-blurring, as well as "detachment from the caster" - the shadow ends up separated by a slight distance from the object casting the shadow, because of the bias needed to avoid shadowing the object itself (without it you get "shadow acne", where some pixels are dark and some light, randomly).
John Carmack briefly touched on his technique to solve "shadow acne" in the QuakeCon 2004 keynote, apparently it involves rendering two shadowmaps, one using frontfaces and LESS compares, the other using backfaces and GREATER compares, and then checking if the pixel is within the range specified - I don't understand how this cures shadow acne without using any bias but he had said it worked great in testing, at a higher rendering cost.
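One common reading of that dual-shadowmap trick is "midpoint shadow maps": with a front-face depth from LESS compares and a back-face depth from GREATER compares, you compare the fragment against the midpoint of the two. The compare plane then sits inside the occluder, well away from either real surface, so quantization noise can no longer flip the result and no bias is needed. A sketch of that interpretation only, not Carmack's actual code:

```c
/* Midpoint shadow test: front_depth = nearest front face the light
 * saw, back_depth = farthest back face. Comparing against their
 * midpoint puts the decision boundary inside the occluder, far
 * from both rendered surfaces. Returns 1 = lit, 0 = in shadow. */
int midpoint_shadow_test(float front_depth, float back_depth, float frag)
{
    float mid = 0.5f * (front_depth + back_depth);
    return frag <= mid ? 1 : 0;
}
```

A fragment on the caster's front surface sits a whole half-thickness away from the compare plane, which is why tiny depth errors stop mattering; the cost is the second shadowmap render.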
- LordHavoc
- Posts: 322
- Joined: Fri Nov 05, 2004 3:12 am
- Location: western Oregon, USA
@Lordhavoc:
I have a question regarding shadowvolumes... In my engine, I have a problem that if I draw my shadows after drawing the objects casting them, there are artifacts on the surface of the mesh, presumably because the shadow volume starts right on the surface. Is there some trick (well, I know there is) to make it so this doesn't occur?
http://red.planetarena.org - Alien Arena and the CRX engine
- Irritant
- Posts: 250
- Joined: Mon May 19, 2008 2:54 pm
- Location: Maryland
i agree thermoscopic picture is great, the top one makes me think TRON for some reason....
...but i'm always thinking of TRON for some reason..
bah
- MeTcHsteekle
- Posts: 399
- Joined: Thu May 15, 2008 10:46 pm
- Location: its a secret