
Screen coordinates of BSP polygon vertexes

PostPosted: Sun May 20, 2018 7:53 am
by mankrip
I'm having a hell of a hard time trying to do this on a BSP polygon:
- Get the on-screen coordinates of the vertexes nearest to and farthest from the screen plane;
- Calculate their midpoint (mid = near + 0.5 * (far - near), for each axis);
- Display the crosshair over it (setting cl_crossx to mid[0] and cl_crossy to mid[1], properly offset from the screen center).
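Step 2 in isolation is just a per-axis lerp at t = 0.5. A minimal sketch (a hypothetical helper, with the nearest and farthest vertexes given as (u, v, zi) triples as described in the steps above):

```c
#include <assert.h>
#include <math.h>

typedef float vec3_t[3];

/* Per-axis midpoint between the nearest and farthest on-screen vertexes:
 * mid = near + 0.5 * (far - near), applied to u, v and zi alike. */
void MidPoint (vec3_t v_near, vec3_t v_far, vec3_t mid)
{
	int i;

	for (i = 0; i < 3; i++)
		mid[i] = v_near[i] + 0.5f * (v_far[i] - v_near[i]);
}
```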

The thing is, it gets screwed up right at the first step; the other two are fine. Sometimes it does work, but if I rotate the camera or walk a bit, it gets screwed up again. It only works properly at a very few unpredictable angles & positions.

The x86 Assembly code is already removed from my engine, so there's no conflict there.

What I've done is replace the float nearzi with a vec3_t vertexnear and add a vec3_t vertexaway in both edges and surfaces, as well as replace the float r_nearzi global with a vec3_t r_vertexnear and vec3_t r_vertexaway pair. So, every time nearzi is calculated, the on-screen x and y coordinates (which the engine calls u and v) are stored in vertexnear along with the zi component, and the coordinates of the farthest vertex are stored in the same way.
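A sketch of that change (field names are from the description above; the rest of WinQuake's surf_t is elided, and UpdateNearFar is a hypothetical helper standing in for wherever the engine previously updated nearzi):

```c
#include <assert.h>

typedef float vec3_t[3];

/* vertexnear[2] should be initialized to 0 and vertexaway[2] to something
 * huge before the surface's vertexes are scanned. */
typedef struct surf_s
{
	/* ... other surf_t fields elided ... */
	vec3_t	vertexnear;	/* u, v, zi of the vertex nearest the screen plane */
	vec3_t	vertexaway;	/* u, v, zi of the farthest vertex */
} surf_t;

/* Wherever the engine used to do "if (zi > surf->nearzi) surf->nearzi = zi;",
 * store u and v along with zi. */
void UpdateNearFar (surf_t *surf, float u, float v, float zi)
{
	if (zi > surf->vertexnear[2])	/* zi = 1/z, so larger means nearer */
	{
		surf->vertexnear[0] = u;
		surf->vertexnear[1] = v;
		surf->vertexnear[2] = zi;
	}
	if (zi < surf->vertexaway[2])
	{
		surf->vertexaway[0] = u;
		surf->vertexaway[1] = v;
		surf->vertexaway[2] = zi;
	}
}
```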

And the problem is that despite the zi value being consistent at all times (for both the near and far vertexes), the x and y coordinates of both vertexes jump around randomly when I try to read them. It's driving me fucking nuts. I've double-checked the code multiple times and there's nothing outside of my code that could change the x and y coordinates.

Re: Screen coordinates of BSP polygon vertexes

PostPosted: Sat May 26, 2018 4:19 pm
by mankrip
Here's a video showing the problem.

The polygon midpoint algorithm should point the crosshair (cl_crossx.value, cl_crossy.value) at the midpoint (between the nearest and farthest vertexes) of the polygon of the wall in front of the slipgate, but it keeps pointing to random places on the floor when the floor comes into view.

Re: Screen coordinates of BSP polygon vertexes

PostPosted: Sun May 27, 2018 1:41 am
by Spike
Can you not just do:
vec3_t mid;
VectorMA(vec3_origin, surf->texturemins[0]+surf->extents[0]*0.5-surf->texinfo->vecs[0][3], surf->texinfo->vecs[0], mid);
VectorMA(mid, surf->texturemins[1]+surf->extents[1]*0.5-surf->texinfo->vecs[1][3], surf->texinfo->vecs[1], mid);
float t = surf->plane->dist-DotProduct(mid, surf->plane->normal);
VectorMA(mid, t, surf->plane->normal, mid);
and then project mid?
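The "project mid" step could look roughly like what the software renderer does in R_EmitEdge. A hedged, self-contained sketch, with the engine's globals (view origin, view axes vright/vup/vpn, xcenter/ycenter, xscale/yscale) passed in as parameters:

```c
#include <assert.h>
#include <math.h>

typedef float vec3_t[3];

/* Project a world-space point to screen coordinates the way WinQuake's
 * R_EmitEdge does: translate to view-relative, rotate into view space
 * (what TransformVector does), then perspective-divide and scale. */
void ProjectPoint (vec3_t point, vec3_t vieworg,
	vec3_t vright, vec3_t vup, vec3_t vpn,
	float xcenter, float ycenter, float xscale, float yscale,
	float *u, float *v)
{
	vec3_t local, transformed;
	float zi;
	int i;

	for (i = 0; i < 3; i++)
		local[i] = point[i] - vieworg[i];

	/* world space -> view space */
	transformed[0] = local[0]*vright[0] + local[1]*vright[1] + local[2]*vright[2];
	transformed[1] = local[0]*vup[0] + local[1]*vup[1] + local[2]*vup[2];
	transformed[2] = local[0]*vpn[0] + local[1]*vpn[1] + local[2]*vpn[2];

	if (transformed[2] < 0.01f)	/* NEAR_CLIP */
		transformed[2] = 0.01f;

	zi = 1.0f / transformed[2];
	*u = xcenter + xscale * zi * transformed[0];
	*v = ycenter - yscale * zi * transformed[1];	/* screen y grows downward */
}
```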
Or, decompose the polygon into triangles, find the midpoint of each, sum those weighted by their areas, and divide by the sum of the areas to get a more usable midpoint.

Either one should give more robust results than something that depends upon the camera's position.
If you're trying to detect what the crosshair is targeting, then it generally helps to stay in 3d the whole way, which is quite easy if you can guarantee the crosshair to be in the middle of the view...

You should then be able to rip the logic from e.g. R_EmitEdge (the u0/v0 coords it calculates appear to be screen coords). Note that you'll need R_RotateBmodel or so in order to calculate the correct modelview matrix (at least for non-world entities) - remember to preserve some of the globals that it overwrites, in case you want to restore some previous settings.
But it sounds like you already have that part sussed.

Re: Screen coordinates of BSP polygon vertexes

PostPosted: Sun May 27, 2018 6:22 am
by mankrip
The thing is, the polygon rasterizer already does all the projection calculations, so having to project the polygon again is a waste of processing.
Plus, the vertex data from the rasterizer is already clipped to the frustum and to the other bmodels, so it should give a more accurate result which won't produce off-screen coordinates.

Yes, the u0 and v0 in R_EmitEdge are screen coords, although there are also screen uv coordinates for another vertex, and things get complicated pretty fast during the edge sorting (clipping the bmodels to the world works in worldspace only; the edge sorting clips the polygons in screenspace).

The crosshair is only being used to check the accuracy of the midpoint algorithm, the video was just to show what I was talking about. What I actually need those coordinates for is something else (no need to complicate the thread). But I already have some ideas for alternative approaches. The only thing I'm lacking is time, but I'll slowly finish learning all this stuff.

I usually study things until I start having some guesses, and then I try to make those guesses work until I realize that they were wrong or missing something critical. After I'm finally fully frustrated, it's time to start studying things more carefully and in greater detail, to either come up with better guesses or finally understand things in full. If the better guesses work, I finish learning things in full during the final cleanup, refactoring & optimization steps.