[GLQuake] Depth Buffer Precision and Clearing Fix
Posted: Sat Jul 03, 2010 12:43 pm
This one is depressingly common, I'm afraid. What I'm going to do here is give the Windows API version of what needs to be done; for other operating systems you should be able to figure it out.
First of all a bit of an introduction. OpenGL is great in that it shelters you from having to deal with some of the more down and dirty aspects of the hardware. Unfortunately there are places where the abstraction leaks, and when that happens you can be bitten quite hard. So you do need to roll up your sleeves and get your hands dirty after all.
Most GLQuake-based engines request a 32-bit depth buffer, and leave it at that. However, these engines are most likely actually running with 16-bit depth on everyone's machines. The reason is that there is no such thing as a 32-bit depth buffer on most consumer hardware. Your ChoosePixelFormat call is selecting a 16-bit depth buffer instead, and unless you check what you actually get, you'll never know.
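If you want to confirm what you actually got, a quick sanity check is to query the depth precision of the live context. This is just a sketch, and Con_Printf here stands in for the engine's usual console print; drop it anywhere after the context has been created and made current:
Code: Select all
int depthbits;

// GL_DEPTH_BITS reports the precision of the depth buffer we actually
// received, not the one we asked for (a legacy GL query, which is fine
// for a GL 1.x engine like GLQuake)
glGetIntegerv (GL_DEPTH_BITS, &depthbits);
Con_Printf ("%i bits of depth\n", depthbits);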
I've seen this in the current versions of 3 major engines. So let's fix it.
Assuming you agree that 16-bit depth isn't enough, the first thing to do is pick a better format. Our available formats are going to be 16-bit depth, 24-bit depth (with 8 bits unused) and 24-bit depth (with 8 bits of stencil). So open gl_vidnt.c, find the bSetupPixelFormat function, and change this line:
Code: Select all
32, // 32-bit z-buffer
to this:
Code: Select all
24, // 24-bit z-buffer
The next step (same function) is needed because the PIXELFORMATDESCRIPTOR you pass doesn't provide an absolute ruling on what you get; GDI may decide to give you something different (we already saw that when we asked for 32 but got 16), so there is a possibility that we got a stencil buffer too. Just before the "return TRUE" line, add this:
Code: Select all
DescribePixelFormat (hDC, pixelformat, sizeof (PIXELFORMATDESCRIPTOR), &pfd);
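For context, here is a rough sketch of what bSetupPixelFormat looks like with both changes applied. The descriptor fields and error handling below approximate stock GLQuake and may differ slightly in your engine, so treat it as a guide rather than a paste-in:
Code: Select all
BOOL bSetupPixelFormat (HDC hDC)
{
	static PIXELFORMATDESCRIPTOR pfd = {
		sizeof (PIXELFORMATDESCRIPTOR),	// size of this pfd
		1,				// version number
		PFD_DRAW_TO_WINDOW |		// support window
		PFD_SUPPORT_OPENGL |		// support OpenGL
		PFD_DOUBLEBUFFER,		// double buffered
		PFD_TYPE_RGBA,			// RGBA type
		24,				// 24-bit color depth
		0, 0, 0, 0, 0, 0,		// color bits ignored
		0,				// no alpha buffer
		0,				// shift bit ignored
		0,				// no accumulation buffer
		0, 0, 0, 0,			// accum bits ignored
		24,				// 24-bit z-buffer (was 32)
		0,				// no stencil requested (we may get one anyway)
		0,				// no auxiliary buffer
		PFD_MAIN_PLANE,			// main layer
		0,				// reserved
		0, 0, 0				// layer masks ignored
	};
	int pixelformat;

	if ((pixelformat = ChoosePixelFormat (hDC, &pfd)) == 0)
	{
		MessageBox (NULL, "ChoosePixelFormat failed", "Error", MB_OK);
		return FALSE;
	}

	if (SetPixelFormat (hDC, pixelformat, &pfd) == FALSE)
	{
		MessageBox (NULL, "SetPixelFormat failed", "Error", MB_OK);
		return FALSE;
	}

	// ask GDI what we were really given; pfd is overwritten with the actual format
	DescribePixelFormat (hDC, pixelformat, sizeof (PIXELFORMATDESCRIPTOR), &pfd);

	return TRUE;
}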
Every engine I've tested this on also got 8 bits of stencil. Why is this important? Simply put, if you have a stencil buffer, even if you don't actually use it, you should always clear it at the same time as you clear your depth buffer. On most hardware the depth and stencil bits live together in one packed buffer, and a depth-only clear can't take the fast clear path, so your performance will take quite a hit otherwise.
So add a global qboolean called something like gl_havestencil to your gl_vidnt.c, set it to true if pfd.cStencilBits is greater than 0 (after calling DescribePixelFormat), and then extern it so that it's accessible to gl_rmain.c.
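On the gl_vidnt.c side that could look something like the sketch below (gl_havestencil is just the name suggested above, not an existing GLQuake variable; the assignment goes in bSetupPixelFormat right after the DescribePixelFormat call):
Code: Select all
qboolean gl_havestencil = false;

// ... then, inside bSetupPixelFormat, after DescribePixelFormat has
// filled pfd in with the format we really got:
gl_havestencil = (pfd.cStencilBits > 0) ? true : false;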
Then, when clearing, check if gl_havestencil is true, and if so, add GL_STENCIL_BUFFER_BIT to your glClear call. Easy.
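A sketch of the gl_rmain.c side is below. In stock GLQuake the clearing happens in R_Clear, and the exact structure depends on how your engine handles gl_clear and gl_ztrick, so this shows the shape of the change rather than a drop-in replacement:
Code: Select all
extern qboolean gl_havestencil;	// set in gl_vidnt.c

// wherever the depth buffer gets cleared (R_Clear in stock GLQuake),
// build the bit mask instead of hard-coding it:
int clearbits = GL_DEPTH_BUFFER_BIT;

if (gl_clear.value)
	clearbits |= GL_COLOR_BUFFER_BIT;

// clear stencil alongside depth even though we never use it;
// leaving it dirty defeats the driver's fast depth clear
if (gl_havestencil)
	clearbits |= GL_STENCIL_BUFFER_BIT;

glClear (clearbits);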