First of all a bit of an introduction. OpenGL is great in that it shelters you from having to deal with some of the more down and dirty aspects of the hardware. Unfortunately there are places where the abstraction leaks, and when that happens you can be bitten quite hard. So you do need to roll up your sleeves and get your hands dirty after all.
Most GLQuake-based engines request a 32-bit depth buffer and leave it at that. In practice, though, these engines are almost certainly running with 16-bit depth on everyone's machines. The reason is that there is no such thing as a 32-bit depth buffer on most consumer hardware: your ChoosePixelFormat call silently falls back to a 16-bit depth buffer, and unless you check what you actually got you'll never know.
I've seen this in the current versions of 3 major engines. So let's fix it.
Assuming you agree that 16-bit depth isn't enough, the first thing to do is pick a better format. Our available formats are going to be 16-bit depth, 24-bit depth (with 8 unused) and 24-bit depth (with 8 stencil). So open gl_vidnt.c, find the bSetupPixelFormat function, and change this line:
Code: Select all
32, // 32-bit z-buffer
to this:
Code: Select all
24, // 24-bit z-buffer
Then, after the SetPixelFormat call, query the format you were actually given:
Code: Select all
DescribePixelFormat (hDC, pixelformat, sizeof (PIXELFORMATDESCRIPTOR), &pfd);
So add a global qboolean called something like gl_havestencil to gl_vidnt.c, set it to true if pfd.cStencilBits is greater than 0 (after calling DescribePixelFormat), and extern it so that it's accessible from gl_rmain.c.
Then, when clearing, check if gl_havestencil is true, and if so, add GL_STENCIL_BUFFER_BIT to your glClear call. Easy.