The "Z going up" thing is in the modelview matrix so it's not relevant if we're talking projection.
Using a znear of 0.01f is a really, really bad idea. znear should be as high as you can tolerate going, otherwise you'll get really bad precision falloff in your depth calculations, because the final projected depth is non-linear and most of the depth buffer's precision gets concentrated right up against the eye. Adjusting depth to linear in your vertex shader might look tempting (it costs a few instructions, but nothing earth-shattering so far as Quake is concerned), but then you lose precision for nearby objects instead, which looks equally bad, if not worse.
So, I'm not 100% familiar with glm, but there are some other things I can see going wrong here.
You're not actually adjusting for r_refdef.viewangles[2] (roll) anywhere here. OK, it's 0 or near 0 most of the time, but you need it for completeness.
Does glm::mat4(1.0f) do what you think it's doing? (It does construct an identity matrix - the scalar argument fills the diagonal - but it's worth confirming rather than assuming.)
Your translation has origin[0] and origin[2] swapped.
Anyway, concepts are the same and what's learned/used in one API can transfer to another so here we go.
Generally rather than load each of translate/rotate as separate matrixes, I would multiply them on top of each other. So you'd have:
Code: Select all
projectionMatrix = glm::perspective(glm::radians(r_refdef.fov_y), (float)vid.width / (float)vid.height, 4.0f, 8192.0f); // note the higher znear; glm expects the fov in radians, and glm::perspective returns a complete matrix so there's no need to set identity first
worldMatrix = glm::mat4(1.0f); // the scalar constructor fills the diagonal, so this is identity
viewMatrix = glm::mat4(1.0f); // the scalar constructor fills the diagonal, so this is identity
worldMatrix = glm::rotate(worldMatrix, glm::radians(-90.0f), glm::vec3(1.0f, 0.0f, 0.0f)); // put Z going up
worldMatrix = glm::rotate(worldMatrix, glm::radians(90.0f), glm::vec3(0.0f, 0.0f, 1.0f)); // put Z going up
viewMatrix = glm::rotate(viewMatrix, glm::radians(-r_refdef.viewangles[2]), glm::vec3(1.0f, 0.0f, 0.0f)); // roll
viewMatrix = glm::rotate(viewMatrix, glm::radians(-r_refdef.viewangles[0]), glm::vec3(0.0f, 1.0f, 0.0f)); // pitch
viewMatrix = glm::rotate(viewMatrix, glm::radians(-r_refdef.viewangles[1]), glm::vec3(0.0f, 0.0f, 1.0f)); // yaw - the axis is (0, 0, 1), not (0, 0, 0.1); Quake's angles are degrees, hence glm::radians throughout
viewMatrix = glm::translate(viewMatrix, glm::vec3(-r_origin[0], -r_origin[1], -r_origin[2]));
Then just multiply the 3 for your final MVP (projectionMatrix * viewMatrix * worldMatrix, in that order with column vectors) which you'll send to your shader. That should work.
Note that I've put the player position into view but the Z going up into world; I just find this approach seems a little clearer as view is then exclusively used for player positioning; all other transforms (including entity transforms) can either go on world or can use their own matrixes. It also enables you to use the view matrix based solely on player position for fog calculations, which is neat.
You don't have to do it that way; in fact it generally doesn't matter which way (load them all onto projection if you wish, even) so long as you get the order correct and are consistent throughout your code.