It looks like I've figured out how to implement kernel-filtered dithering in the MDL drawing code. This week has been crazy busy, but I'll test my theory this weekend.
However, I'm not really excited about this, since I still haven't figured out how to implement the thing I actually wanted.
This part of the code is so much trickier to understand than the rest of the MDL rendering (setup/interpolation/transformation/clipping) code that it almost feels like it was written by a different person. I'll keep reading and studying it.
Makaqu
Re: Makaqu
mankrip wrote:This part of the code is so much trickier to understand than the rest of the MDL rendering (setup/interpolation/transformation/clipping) code that it almost feels like it was written by a different person. I'll keep reading and studying it.
Possibly a difference between Carmack and Abrash style?
I copy-pasted D_PolysetCalcGradients and it worked well. Haven't benchmarked it.
Re: Makaqu
I don't know. Some functions in the clipping code also have optimized x86 ASM versions, but their style still feels different.
Re: Makaqu
I think it's more ordered-dither than kernel-filter in this case. Shading isn't texels.
Re: Makaqu
leileilol wrote:Shading isn't texels.
Which is why I've filtered it unidimensionally.
[edit] To be more precise, I'm filtering it during the color shading map lookup. Textures are bidimensional, and the color shading is unidimensional.
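To sketch the idea (names, the dither matrix, and the fixed-point scale are illustrative assumptions, not the actual Makaqu code): the dither offset is applied only along the one-dimensional shading axis of the colormap, before the lookup, and clamped so it can't index past the brightest shade row.

```c
#include <assert.h>

/* Hypothetical sketch of dithering the light value during the 1-D
 * color shading map lookup.  Quake's colormap has 64 light levels
 * x 256 palette colors; the dither offset moves the lookup only
 * along the shading axis, never across texels. */

#define COLORMAP_LEVELS 64

/* 2x2 ordered-dither offsets in 8.8 fixed-point light units (assumed) */
static const int dither2x2[2][2] = {
    { 0, 2 },
    { 3, 1 },
};

/* light is 8.8 fixed point, as in d_polyse.c; returns the shade row */
static int shade_index_dithered (int light, int x, int y)
{
    light += dither2x2[y & 1][x & 1] << 6;   /* offset along shading axis */
    if (light > (COLORMAP_LEVELS - 1) << 8)  /* clamp so the lookup never */
        light = (COLORMAP_LEVELS - 1) << 8;  /* exceeds the brightest row */
    return light >> 8;
}
```

The per-pixel result would then come from something like colormap[(shade << 8) + texel], exactly as in the undithered path.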
Re: Makaqu
I couldn't manage to port Engoo's lightmap dithering, but figured out another solution.
Kernel-filter dithering of lightmaps on BSP surfaces is fully implemented. It was a bit trickier than I thought, because the lightmap brightness had to be clamped with extra headroom; padding the color shading map wasn't enough.
Padding the color shading map did fix a problem with wrong colors on very dark pixels of MDL models when chiaroscuro is enabled, though.
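A minimal sketch of why the extra clamp is needed (the constants and function name are assumptions for illustration, not the real source): once a dither offset can be added to the surface light, the plain clamp to the brightest grade is no longer tight enough, so the clamp has to reserve headroom for the largest possible offset.

```c
#include <assert.h>

#define VID_CBITS   6                  /* as in Quake's vid.h */
#define VID_GRADES  (1 << VID_CBITS)   /* 64 light grades */
#define MAX_DITHER  (3 << 10)          /* largest assumed dither offset */

/* Light values here are 8.8-style fixed point.  Without subtracting
 * the dither headroom, light + dither could still index past the end
 * of the shading map, which padding the map alone couldn't fix. */
static int clamp_blocklight (int light)
{
    int max = ((VID_GRADES - 1) << 8) - MAX_DITHER;
    if (light > max)
        light = max;
    if (light < 0)
        light = 0;
    return light;
}
```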
There's still a lot of work to do on the MDL model renderer.
Re: Makaqu
Okay, so here's the reason why I'm studying the MDL renderer so much: I want to implement perspective correction in it.
That is the only real advantage GLQuake still has over Makaqu, and I don't want to implement additional features before getting that out of the way. The plan is to make it optional, with options for on, off, and viewmodel-only.
But the code for affine rendering is heavily optimized, and I'm having my ass handed to me.
I'm reading this perspective-correct texture mapping document, as well as GameDev.net's Perspective Corrected Texture Mapping document, to try to figure out how to do it.
So far, what I've found out is:
- a_sstepxfrac, a_tstepxfrac and a_ststepxwhole, which are set in D_PolysetCalcGradients_C, are only used for optimizing the affine drawing algorithm, and can be safely discarded;
- According to the perspective correction documents linked above, fv->v[2] and fv->v[3] should also be projected in R_AliasProjectFinalVert, either by doing fv->v[2] = (int) ( ( (float) (fv->v[2]) / zi) ); or by doing fv->v[2] /= fv->v[5];
- However, the finalvert_t structure has a leftover field called "reserved", so we can do fv->reserved = zi; to store the 1/Z value into it, and use it to calculate the UV projection in D_PolysetCalcGradients_C instead, I guess.
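For reference, here is a float-only sketch of the textbook form of the technique (function names are made up; note that the standard formulation multiplies s and t by 1/z, since s/z, t/z and 1/z are the quantities that interpolate linearly in screen space): project the texture coordinates per vertex, interpolate the projected values across the triangle, then recover s and t with one divide per pixel or per short span.

```c
#include <assert.h>

/* Per vertex: convert affine attributes to perspective-ready form.
 * zi (1/z) is what the spare finalvert_t "reserved" field would hold. */
static void project_attribs (float s, float t, float z,
                             float *s_z, float *t_z, float *zi)
{
    *zi  = 1.0f / z;
    *s_z = s * *zi;   /* s/z: linear in screen space */
    *t_z = t * *zi;   /* t/z: linear in screen space */
}

/* Per pixel: the interpolated s/z, t/z, 1/z come back to s, t
 * with a single divide. */
static void unproject_attribs (float s_z, float t_z, float zi,
                               float *s, float *t)
{
    float z = 1.0f / zi;
    *s = s_z * z;
    *t = t_z * z;
}
```

Fitting this into the fixed-point gradients of D_PolysetCalcGradients_C is the hard part; the sketch only shows the math the documents describe.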
Other than that, I'm wasting a lot of time on failed experiments to get perspective correction working. I believe that at least D_RasterizeAliasPolySmooth and D_PolysetScanLeftEdge_C will also need significant code changes. The right approach is probably the same one I used for the underwater screen warp: figure out how to undo absolutely all of the original optimizations, implement the needed corrections in the unoptimized code, and then re-optimize it all.
I'm facing some complicated situations IRL though, so that may take a really long time to be done.