In vague terms, is anyone able to explain in a rough manner how, say, WinQuake renders a totally unclipped surface?
I've looked through the source of WinQuake and tried to follow the code and it is rather unclear to me at this point.
Software Rendering Face (of a Cube)
The night is young. How else can I annoy the world before sunrise?
Inquisitive minds want to know! And if they don't -- well, like that ever stopped me before...
Re: Software Rendering Face (of a Cube)
I never understood it fully, but from what I know the BSP renderer generates a list of spans based on the surface's edges, UV-maps the vertices of the surface's edges to the surface cache, and draws the spans using perspective correction.
There's always some clipping, because the surfaces aren't unlimited. Surfaces are always clipped at their edges, and the engine also checks their edges against other surfaces' edges to see if they need to be clipped further, eliminating any overdraw.
The MDL renderer works a bit differently. It only draws triangles, so all surfaces always have three edges, making it unable to compare those edges against other triangles' edges, which is why it can't avoid overdraw and must use a Z buffer for depth testing. It also calculates Gouraud shading (which takes the neighboring triangles into account), reads the texture unmodified from memory instead of using a lightmapped surface cache, and draws it without perspective correction.
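The perspective-correct span drawing described above can be sketched roughly like this (illustrative only, not WinQuake's actual code; the function name, the surface-cache layout, and the parameter set are all assumptions). The trick is that s/z, t/z and 1/z vary linearly across a screen-space span, so you interpolate those and divide per pixel to recover the texel coordinates:

```c
#define CACHE_W 64   /* assumed surface-cache width, for illustration */

typedef unsigned char byte;

/* Draw pixels x0..x1-1 of one screen row, sampling a lightmapped
 * surface cache.  soz/toz/ooz are s/z, t/z and 1/z at x0; the *_step
 * values are their per-pixel increments. */
static void draw_span(byte *dest, int x0, int x1,
                      const byte *cache,
                      float soz, float toz, float ooz,
                      float soz_step, float toz_step, float ooz_step)
{
    int x;
    for (x = x0; x < x1; x++) {
        float z = 1.0f / ooz;     /* recover depth from 1/z */
        int s = (int)(soz * z);   /* texel column in the surface cache */
        int t = (int)(toz * z);   /* texel row */
        dest[x] = cache[t * CACHE_W + s];
        soz += soz_step;
        toz += toz_step;
        ooz += ooz_step;
    }
}
```

The real engine doesn't divide at every pixel; it amortizes the divide over fixed-size subspans, but the per-pixel version shows the idea.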
Re: Software Rendering Face (of a Cube)
Maybe some of the stuff that Mike Abrash wrote at the time would be helpful? http://www.bluesnews.com/abrash/contents.shtml
We had the power, we had the space, we had a sense of time and place
We knew the words, we knew the score, we knew what we were fighting for
Re: Software Rendering Face (of a Cube)
mankrip wrote: The MDL renderer works a bit differently. It only draws triangles,
You are forgetting the mode where it's pretty much a point cloud (the distant rendering of models).
i should not be here
Re: Software Rendering Face (of a Cube)
I remember listening to Michael Abrash talking about it in a video, but I've never noticed it in the code. It must be in some of those parts of the code I don't understand yet.
There's also triangle subdivision for the MDLs, but I haven't mentioned it because afaik it's only used for clipping triangles against the edges of the screen.
Re: Software Rendering Face (of a Cube)
mh wrote: Maybe some of the stuff that Mike Abrash wrote at the time would be helpful? http://www.bluesnews.com/abrash/contents.shtml
I'll have to read that. I was looking for something like that, but could not find it.
mankrip wrote: I never understood it fully, but from what I know the BSP renderer generates a list of spans based on the surface's edges, UV-maps the vertices of the surface's edges to the surface cache, and draws the spans using perspective correction.
Well, I see this is a bit more complex than I thought.
Maybe I should start by taking a look at gluProject (some open-source implementation of it) to see where a pixel in XYZ space ends up in XY space, and then read the Abrash stuff. I saw that edges stuff in the code and started getting confused. And frankly, at this point I'm not sure what a span is. But I do want to know.
The night is young. How else can I annoy the world before sunrise?
Inquisitive minds want to know! And if they don't -- well, like that ever stopped me before...
Re: Software Rendering Face (of a Cube)
A span is a horizontal row of pixels.
Edges are the edges of faces, and the edges get projected onto the screen. If you ignore clipping and overlapping for the moment, rendering the polygon is just a matter of iterating the Y coordinate from the highest Y to the lowest, and at each Y (screen row) drawing a span from the current left edge to the current right edge.
Clipping vertically is just a matter of limiting the Y range to the screen.
Clipping horizontally is just limiting the current span to the screen (skipping the span if it is completely off the screen).
The edge list is a way of sorting the polygons into drawing order (closest ones overriding further ones), using the projected edges of the polygons. I couldn't say exactly how it works though.
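A minimal sketch of that flow for a convex polygon, with the vertical and horizontal clipping folded in (illustrative only; the function name, the framebuffer layout, and passing the edges as x-position plus per-row slope are all assumptions, not how the WinQuake edge code is structured):

```c
#define SCREEN_W 320
#define SCREEN_H 200

/* Fill rows y0..y1-1 of the framebuffer.  lx/rx are the left/right
 * edge x positions at row y0; lstep/rstep are their per-row slopes. */
static void scan_edges(unsigned char *fb, int y0, int y1,
                       float lx, float lstep, float rx, float rstep,
                       unsigned char color)
{
    int y;
    /* vertical clipping: limit the Y range to the screen */
    if (y0 < 0) { lx += -y0 * lstep; rx += -y0 * rstep; y0 = 0; }
    if (y1 > SCREEN_H) y1 = SCREEN_H;

    for (y = y0; y < y1; y++) {
        /* horizontal clipping: limit the current span to the screen
         * (the inner loop naturally skips spans completely off-screen) */
        int x0 = (int)lx, x1 = (int)rx, x;
        if (x0 < 0) x0 = 0;
        if (x1 > SCREEN_W) x1 = SCREEN_W;
        for (x = x0; x < x1; x++)
            fb[y * SCREEN_W + x] = color;
        lx += lstep;
        rx += rstep;
    }
}
```

The real engine gets lx/rx from the global sorted edge list rather than from per-polygon slopes, which is what lets it resolve overlap with zero overdraw.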
Re: Software Rendering Face (of a Cube)
gluProject - projects a 3d world coord into a 4d screenspace coord...
yeah, that 2d screen is best represented with 4 dimensions.
to go from model space to screen space, you first need to transform by the modelview matrix (aka the world and view matrices) to put it in view space. these are the matrices that specify the position of the view and the model. d3d keeps them separate while gl combines them.
after that, you need to transform by the projection matrix. this takes the viewspace coord into clip space, which is your basic 4d coord system as output by a vertex program. its coords are actually 'xw yw zw w', so divide the first three by w, scale the first two by the viewport size, and you get your xyz pixel/depth coords (homogeneous coordinates allow all the matrices to be multiplied together, which keeps things fast and gives proper perspective).
once you have your 3 points, you can interpolate between them: find the highest and lowest vertices (by y coord), find which side the middle vertex is on, and interpolate down the triangle to the middle and then to the bottom vertex. for each y coord, find the interpolated position of the top-bottom edge and iterate over every pixel across the horizontal span until you reach the other side.
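The transform chain above can be sketched as a bare-bones gluProject (illustrative only; the struct, function names, row-major matrix layout, and the simple NDC-to-pixel viewport mapping are assumptions, not GL's actual internals):

```c
typedef struct { float x, y, z, w; } vec4;

/* row-major 4x4 matrix times column vector */
static vec4 mat_mul(const float m[16], vec4 v)
{
    vec4 r;
    r.x = m[0]*v.x  + m[1]*v.y  + m[2]*v.z  + m[3]*v.w;
    r.y = m[4]*v.x  + m[5]*v.y  + m[6]*v.z  + m[7]*v.w;
    r.z = m[8]*v.x  + m[9]*v.y  + m[10]*v.z + m[11]*v.w;
    r.w = m[12]*v.x + m[13]*v.y + m[14]*v.z + m[15]*v.w;
    return r;
}

/* model-space point -> pixel x,y: modelview, then projection,
 * then the homogeneous divide and viewport scale */
static void project(const float modelview[16], const float proj[16],
                    int vp_w, int vp_h, vec4 p, float *sx, float *sy)
{
    vec4 eye  = mat_mul(modelview, p);   /* model -> view space */
    vec4 clip = mat_mul(proj, eye);      /* view -> clip: 'xw yw zw w' */
    float inv_w = 1.0f / clip.w;         /* perspective divide */
    *sx = (clip.x * inv_w * 0.5f + 0.5f) * vp_w;   /* NDC -> pixels */
    *sy = (clip.y * inv_w * 0.5f + 0.5f) * vp_h;
}
```

Run each triangle vertex through this and you have the three screen points to interpolate between.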