The thing is, the polygon rasterizer already does all of the projection calculations, so projecting the polygon again is wasted processing.
Plus, the vertex data from the rasterizer is already clipped to the frustum and to the other bmodels, so it should give a more accurate result and won't produce off-screen coordinates.
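To illustrate the duplication: this is roughly the per-vertex projection the software rasterizer already performs (a minimal sketch in WinQuake style; the names xcenter/xscale/ProjectVertex are my own placeholders here, not the actual source):

```c
/* Minimal sketch of the per-vertex projection the software rasterizer
 * already does.  Names (xcenter, xscale, ProjectVertex) are assumptions
 * in the WinQuake convention, not copied from the real code. */
static float xcenter, ycenter;   /* screen center, in pixels            */
static float xscale, yscale;     /* projection scales derived from FOV  */

/* Project a view-space vertex (x, y, z) to screen coordinates (u, v). */
static void ProjectVertex (const float view[3], float *u, float *v)
{
    float zi = 1.0f / view[2];   /* 1/z, which the rasterizer also keeps
                                    around for span/depth calculations  */

    *u = xcenter + xscale * view[0] * zi;
    *v = ycenter - yscale * view[1] * zi;
}
```

Doing that a second time for vertices the rasterizer has already projected is exactly the redundant work I'd like to avoid.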
Yes, the u0 and v0 in R_EmitEdge are screen coordinates, although there are also screen UV coordinates from another vertex, and things get complicated pretty fast during the edge sorting (clipping the bmodels to the world happens in worldspace only, while the edge sorting clips the polygons in screenspace).
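For reference, grabbing those coordinates at the point where R_EmitEdge computes them could look roughly like this; R_CaptureScreenVert, r_captured_u/r_captured_v and MAX_CAPTURED_VERTS are hypothetical names I'm making up for the sketch, not anything in the actual source:

```c
#define MAX_CAPTURED_VERTS 64   /* hypothetical limit, pick whatever fits */

/* Hypothetical capture buffer; none of these names exist in WinQuake.
 * The idea is to stash the screen coordinates R_EmitEdge already has,
 * instead of re-projecting the same vertices somewhere else. */
static float r_captured_u[MAX_CAPTURED_VERTS];
static float r_captured_v[MAX_CAPTURED_VERTS];
static int   r_numcaptured;

static void R_CaptureScreenVert (float u, float v)
{
    if (r_numcaptured < MAX_CAPTURED_VERTS)
    {
        r_captured_u[r_numcaptured] = u;
        r_captured_v[r_numcaptured] = v;
        r_numcaptured++;
    }
}

/* Inside R_EmitEdge, right after u0 and v0 are computed, a single call
 * would be enough:
 *
 *     R_CaptureScreenVert (u0, v0);
 *
 * Keep in mind the edge sorter can still clip those spans in screenspace
 * afterwards, so the captured values only reflect the frustum/bmodel
 * clipping done up to that point. */
```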
The crosshair is only being used to check the accuracy of the midpoint algorithm; the video was just to show what I was talking about. What I actually need those coordinates for is something else (no need to complicate the thread), but I already have some ideas for alternative approaches. The only thing I'm lacking is time, but I'll slowly finish learning all this stuff.
I usually study things until I start having some guesses, and then I try to make those guesses work until I realize they were wrong or missing something critical. After I get fully frustrated, it's time to start studying things more carefully and in greater detail, either to come up with better guesses or to finally understand things in full. If the better guesses work, I finish learning things in full during the final cleanup, refactoring & optimization steps.
Ph'nglui mglw'nafh mankrip Hell's end wgah'nagl fhtagn.