Exploring Rendering
Moderator: InsideQC Admins
6 posts
• Page 1 of 1
Just a general interest thread. Now that I've taken it upon myself to learn OpenGL, certain things are starting to make sense to me very rapidly.
Tomaz wrote: This will work fine for things like a window, but for objects with more complex geometry it might not be enough to sort the object back to front; one might have to sort every triangle from back to front (which I think DP does, or at least did at some point).
Remembering Tomaz's comments, I wanted to check this out, so I compared DarkPlaces and FitzQuake screenshots, and sure enough it looks like DarkPlaces is sorting all the triangles.
DarkPlaces: [screenshot]

FitzQuake: [screenshot]
In the DarkPlaces screenshot, you will notice that none of the below-ground triangles are showing.
Tomaz wrote: TomazQuake didn't go that far. One thing I DID do, though, was fix it so the particle system works with water (which isn't covered here either), which does a similar thing. There I sort all the particles as being either in front of or behind the water plane. So the render order is: particles behind water, water, particles in front of water.
It's really annoying seeing particles render under water when none of the rest of the entities do.
-

Baker - Posts: 3666
- Joined: Tue Mar 14, 2006 5:15 am
Actually, it looks more to me like DP is not changing the Z-buffer writing mode, so the below-ground triangles are simply being z-buffered away. Fitz is drawing very clean lines, so it wouldn't surprise me if Fitz is clearing the z-buffer before doing the triangle-outline rendering pass.
Or maybe I'm totally off base. At first blush, though, that's my take. I don't think DP is doing anything special other than not going to the extra work that Fitz is.
- Willem
- Posts: 73
- Joined: Wed Jan 23, 2008 10:58 am
the original glquake source had two world rendering modes.
gl_texsort 0 and gl_texsort 1.
in gl_texsort 0 mode, it just renders each surface in the order it pulls it out of the bsp. This means the entire world is drawn fully depth sorted: the nearest surfaces are drawn first, and farther surfaces are depth-clipped away and thus need less work from the gpu.
However, the code just switches textures every single surface.
and changing state can be expensive (particularly with textures).
thus sorting by texture (building a list of all the visible surfaces on the map then drawing them all in one go) is often faster simply because it cuts down on gl state changes. Even though the gfx card now has to actually draw half the pixels twice.
(for reference, the software renderer clips walls before rendering, thus has zero overdraw and doesn't even need a depth buffer to draw the world).
But yeah, changing state is bad. Not as bad as it is in D3D. But its bad. What you want to do, really, is to do everything in one go.
Blending makes this a pain, however, and requires you to start sorting things. And it all goes downhill from there. :)
If you have 200 particles flying around the map, don't draw them as 200 separate objects; merge them into one huge triangle soup and draw them in a single call, preferably using vertex arrays.
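A minimal sketch of that batching, assuming a bare x,y,z vertex layout and axis-aligned quads (a real renderer would billboard toward the camera and interleave colors/texcoords; all names here are illustrative):

```c
#include <stddef.h>

#define FLOATS_PER_VERT 3
#define VERTS_PER_PARTICLE 6   /* two triangles forming a quad */

/* Append one quad for a particle at (x,y,z) with half-size s.
   Returns the number of floats written. */
static size_t emit_particle(float *out, float x, float y, float z, float s)
{
    const float quad[VERTS_PER_PARTICLE][FLOATS_PER_VERT] = {
        { x - s, y - s, z }, { x + s, y - s, z }, { x + s, y + s, z },
        { x - s, y - s, z }, { x + s, y + s, z }, { x - s, y + s, z },
    };
    size_t n = 0;
    for (int v = 0; v < VERTS_PER_PARTICLE; v++)
        for (int c = 0; c < FLOATS_PER_VERT; c++)
            out[n++] = quad[v][c];
    return n;
}

/* Build one big vertex array for 'count' particles. A single
   glDrawArrays(GL_TRIANGLES, 0, count * 6) would then draw them all,
   instead of 'count' separate draw calls. */
static size_t build_particle_soup(float *out, const float (*pos)[3],
                                  int count, float size)
{
    size_t n = 0;
    for (int i = 0; i < count; i++)
        n += emit_particle(out + n, pos[i][0], pos[i][1], pos[i][2], size);
    return n;
}
```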
ideally your bsp would be loaded such that all vertices in the bsp are loaded into a single array (which is locked at the start of rendering), with each surface in the world owning a consecutive chunk of that array. This single vertex array is best stored inside a vertex buffer, obviously. Your index array is then generated on a per-frame basis, building all the triangles of the world and drawing 30+ surfaces at once. State changes are bad.
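The per-frame index generation could be sketched like this, assuming each surface is a convex polygon triangulated as a fan (how Quake surfaces are usually drawn); the struct and field names are illustrative, not actual engine source:

```c
#include <stddef.h>

typedef struct {
    unsigned first_vert;  /* offset of this surface's verts in the big array */
    unsigned num_verts;   /* polygon vertex count (>= 3) */
} surface_t;

/* Triangulate every visible surface as a fan into one index list, so many
   surfaces can be drawn with a single glDrawElements call against the
   shared vertex array. Returns the number of indices written, which is
   (num_verts - 2) * 3 per surface. */
static size_t build_indices(unsigned *out, const surface_t *surfs, int count)
{
    size_t n = 0;
    for (int i = 0; i < count; i++) {
        const surface_t *s = &surfs[i];
        for (unsigned v = 2; v < s->num_verts; v++) {
            out[n++] = s->first_vert;          /* fan pivot */
            out[n++] = s->first_vert + v - 1;
            out[n++] = s->first_vert + v;
        }
    }
    return n;
}
```

Because the vertex data never changes per frame, only this small index list is rebuilt, which is exactly the cheap part.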
- Spike
- Posts: 2892
- Joined: Fri Nov 05, 2004 3:12 am
- Location: UK
without any context, i must assume that Tomaz was talking about transparent/alpha-blended polygons in your quote.
If so, what he meant is that for alpha blending to work correctly, objects must be drawn back to front so that the closest alpha-blended object is drawn last. Otherwise, you'll get the wrong contribution from each object onto your final framebuffer image, on pixels where multiple alpha-blended polygons were drawn. (similar to how re-arranging the layers in a photoshop file will change the results)
for objects that are convex (e.g. a simple window brush), depth-sorting objects is sufficient. For objects with concavity (e.g. a shambler), to be correct you'd have to draw each triangle in the right order.
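The per-object back-to-front sort described above can be sketched with qsort; the struct and function names are illustrative, and squared distance is used since the square root doesn't change the ordering:

```c
#include <stdlib.h>

typedef struct {
    float x, y, z;   /* object origin */
    float dist;      /* squared distance from the viewer, filled per frame */
} blend_obj_t;

static int cmp_back_to_front(const void *a, const void *b)
{
    const blend_obj_t *oa = a, *ob = b;
    if (oa->dist > ob->dist) return -1;  /* farther objects draw earlier */
    if (oa->dist < ob->dist) return  1;
    return 0;
}

/* Sort alpha-blended objects so the farthest is drawn first and the
   nearest last, giving correct blending into the framebuffer. */
static void sort_blended(blend_obj_t *objs, int count,
                         float vx, float vy, float vz)
{
    for (int i = 0; i < count; i++) {
        float dx = objs[i].x - vx, dy = objs[i].y - vy, dz = objs[i].z - vz;
        objs[i].dist = dx * dx + dy * dy + dz * dz;
    }
    qsort(objs, count, sizeof *objs, cmp_back_to_front);
}
```

As the post notes, this is only correct per object when each object is convex; a concave mesh like a shambler would need the same treatment per triangle.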
Why is this not necessary for opaque objects? You can use the z-buffer for them, and objects can be drawn in any order (if the behind object is drawn second, it will fail the z-test on those pixels with closer depth values)
---
The screenshots you posted are unrelated to this, the only difference here is that darkplaces renders its wireframe edges with depth testing on, and fitzquake does it with depth testing off (so you can see wireframes through walls.)
- metlslime
- Posts: 316
- Joined: Tue Feb 05, 2008 11:03 pm
in fitzquake you can see the world lines through the axe. this implies it draws the entire scene twice: once properly, and once wireframe with depth testing disabled.
- Spike
- Posts: 2892
- Joined: Fri Nov 05, 2004 3:12 am
- Location: UK
metlslime wrote:without any context, i must assume that Tomaz was talking about transparent/alpha-blended polygons in your quote.
If so, what he meant is that for alpha blending to work correctly, objects must be drawn back to front so that the closest alpha-blended object is drawn last. Otherwise, you'll get the wrong contribution from each object onto your final framebuffer image, on pixels where multiple alpha-blended polygons were drawn. (similar to how re-arranging the layers in a photoshop file will change the results)
for objects that are convex (e.g. a simple window brush), depth-sorting objects is sufficient. For objects with concavity (e.g. a shambler), to be correct you'd have to draw each triangle in the right order.
Why is this not necessary for opaque objects? You can use the z-buffer for them, and objects can be drawn in any order (if the behind object is drawn second, it will fail the z-test on those pixels with closer depth values)
Yeah, that's exactly what I was talking about.
- Tomaz
- Posts: 67
- Joined: Fri Nov 05, 2004 8:21 pm