Makaqu
Re: Makaqu
Ogg support would not be hard with an external DLL (see Quakespasm's bgmusic code), but I understand the principle of keeping everything self-contained.
I didn't realize how many duplicate functions exist for interpolated vs. non-interpolated rendering until I searched for their occurrences in the code.
Re: Makaqu
I'm doing this to understand better how the MDL code works. After cleaning up and understanding everything, I'll see what else can be improved in it.
Re: Makaqu
Removed r_aliasuvscale and aliasvrect. Going to remove aliasvrectright and aliasvrectbottom now. Also replacing their x86 ASM calls and changing the indexes of refdef_t in asm_draw.h to compensate.
r_aliasuvscale was set to 1.0 and never changed, so it wasn't actually used. Definitely an experimental feature that Id Software dropped.
[edit] Removed a number of other unused variables and functions. It's amazing how much code there is that serves no other purpose than to confuse us.
A couple of optimizations also went away, but those had already been disabled ever since frame interpolation was added in the first versions of this engine.
Re: Makaqu
I just figured out that the position/rotation interpolations can be made smoother by interpolating each axis independently.
Also implementing .scale and .scalev interpolations. Alpha blending interpolation could also be implemented, but doesn't make much sense in 8-bit rendering.
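To illustrate the per-axis idea, here is a minimal sketch (my own names and code, not the engine's): each Euler angle axis is lerped independently, with shortest-arc wraparound so a 350° to 10° transition crosses 0° instead of sweeping the long way around.

```c
#include <assert.h>
#include <math.h>

// Hypothetical helper: interpolate one Euler angle (in degrees) along the
// shortest arc, so e.g. 350 -> 10 crosses 0 instead of sweeping 340 degrees.
float LerpAngle (float from, float to, float frac)
{
	float delta = to - from;
	if (delta > 180.0f)
		delta -= 360.0f;
	else if (delta < -180.0f)
		delta += 360.0f;
	return from + frac * delta;
}

// Interpolating each axis of an angle vector independently avoids the jumps
// that appear when the whole vector is switched at once between frames.
void LerpAngles (const float from[3], const float to[3], float frac, float out[3])
{
	int i;
	for (i = 0; i < 3; i++)
		out[i] = LerpAngle (from[i], to[i], frac);
}
```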
Re: Makaqu
Refactored a lot of code. The clipping code for MDL models is a lot cleaner now, some more unused variables and functions have been removed, probably most of the interpolation code was rewritten, and the outlines for the cel-shading effect have been reimplemented in a much cleaner and faster way (though their actual drawing code still needs a huge rewrite).
There's still a lot to do. I've looked at how the MDL vertex compression works, and it should be easy to convert its vertex coordinates into floats upon loading.
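For what it's worth, the decompression itself is tiny. Here is a sketch of expanding the packed byte coordinates to floats at load time (the struct layouts follow the MDL format's trivertx_t and header fields; the function itself is mine, not engine code):

```c
typedef unsigned char byte;

// Packed vertex as stored in the MDL file: each coordinate fits in one byte.
typedef struct { byte v[3]; byte lightnormalindex; } trivertx_t;

// The relevant fields of the MDL header: per-axis scale and translation
// used to reconstruct the original model-space coordinates.
typedef struct { float scale[3]; float scale_origin[3]; } mdlheader_t;

// Expand one packed vertex into floating-point model space.
void MDL_DecompressVertex (const mdlheader_t *hdr, const trivertx_t *in, float out[3])
{
	int i;
	for (i = 0; i < 3; i++)
		out[i] = (float)in->v[i] * hdr->scale[i] + hdr->scale_origin[i];
}
```

Doing this once at load time trades a little memory for removing the multiply-add from the per-frame vertex path.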
Re: Makaqu
Got to the first point where I've got no clue of what's happening:
I've added some more comments, but they're just guesses.
What does D_PolysetSetEdgeTable do, exactly? Why does it only use the v axis (index 1) of each vertex?
After that, pedgetable is used by D_RasterizeAliasPolySmooth a number of times, and that's it. I'm afraid I won't be able to understand D_RasterizeAliasPolySmooth if I don't understand D_PolysetSetEdgeTable first.
Code:
int
	r_p0[6]
	, r_p1[6]
	, r_p2[6]
;

typedef struct
{
	int
		isflattop
		, numleftedges
		// each edgevert can be either r_p0, r_p1 or r_p2
		, * pleftedgevert0
		, * pleftedgevert1
		, * pleftedgevert2
		, numrightedges
		, * prightedgevert0
		, * prightedgevert1
		, * prightedgevert2
	;
} edgetable;

static edgetable
	* pedgetable
	, edgetables[12] =
	{
		{0, 1, r_p0, r_p2, NULL, 2, r_p0, r_p1, r_p2}
		, {0, 2, r_p1, r_p0, r_p2, 1, r_p1, r_p2, NULL}
		, {1, 1, r_p0, r_p2, NULL, 1, r_p1, r_p2, NULL}
		, {0, 1, r_p1, r_p0, NULL, 2, r_p1, r_p2, r_p0}
		, {0, 2, r_p0, r_p2, r_p1, 1, r_p0, r_p1, NULL}
		, {0, 1, r_p2, r_p1, NULL, 1, r_p2, r_p0, NULL}
		, {0, 1, r_p2, r_p1, NULL, 2, r_p2, r_p0, r_p1}
		, {0, 2, r_p2, r_p1, r_p0, 1, r_p2, r_p0, NULL}
		, {0, 1, r_p1, r_p0, NULL, 1, r_p1, r_p2, NULL}
		, {1, 1, r_p2, r_p1, NULL, 1, r_p0, r_p1, NULL}
		, {1, 1, r_p1, r_p0, NULL, 1, r_p2, r_p0, NULL}
		, {0, 1, r_p0, r_p2, NULL, 1, r_p0, r_p1, NULL}
	}
;

[...]

void D_PolysetSetEdgeTable (void)
{
	int
		edgetableindex = 0 // assume the vertices are already in top to bottom order
	;

	//
	// determine which edges are right & left, and the order in which to rasterize them
	//
	if (r_p0[1] >= r_p1[1]) // vertex 0 is above or on the same height of vertex 1
	{
		if (r_p0[1] == r_p1[1]) // vertexes 0 and 1 are on the same height
		{
			pedgetable = &edgetables[ (r_p0[1] < r_p2[1]) ? 2 : 5]; // if both are lower than vertex 2, set the index to 2; otherwise, 5
			return;
		}
		edgetableindex = 1; // if vertex 0 is above vertex 1, set the index to 1
	}
	if (r_p0[1] == r_p2[1])
	{
		pedgetable = &edgetables[9 - edgetableindex];
		return;
	}
	if (r_p1[1] == r_p2[1])
	{
		pedgetable = &edgetables[11 - edgetableindex];
		return;
	}
	if (r_p0[1] > r_p2[1])
		edgetableindex += 2;
	if (r_p1[1] > r_p2[1])
		edgetableindex += 4;
	pedgetable = &edgetables[edgetableindex];
}

void D_PolysetDraw_C (void) // mankrip - edited
{
	[...]
	r_p0[0] = index0->v[0]; // u
	r_p0[1] = index0->v[1]; // v
	r_p0[2] = index0->v[2]; // s
	r_p0[3] = index0->v[3]; // t
	r_p0[4] = index0->v[4]; // light
	r_p0[5] = index0->v[5]; // iz
	[...]
	D_PolysetSetEdgeTable ();
	D_RasterizeAliasPolySmooth ();
}
Re: Makaqu
When rasterizing a triangle, first you transform+clip the vertices into 2D space.
Then you have to figure out which edges are which, so that you can walk across the span from one edge to the other.
That's what the edge table is about: the two vertical sides of the triangle.
A triangle can be rotated in all sorts of ways when it's actually drawn on the screen, which typically requires walking from the top, expanding outwards towards the middle, then going back in at the bottom or so.
Of course, if culling is used, the triangle is guaranteed to be in a (anti?)clockwise order, which means the layout of the triangle can be determined _purely_ by the vertical positions of the vertices.
The side that has the middle vertex needs to be drawn as two edges; the other side only needs one. If two vertices are on the same scanline then you can skip that part of the triangle entirely.
So that code is basically deciding which rules need to be followed to generate the span lists and interpolants properly.
It's not the sort of function that really needs extra features or anything. It's D_RasterizeAliasPolySmooth that calculates your interpolants, and it's that function that will be needed if you want to add things like fog (though chances are you can set up fog on a per-triangle basis rather than per-pixel).
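The "walk down two sides, fill the span between them" structure described above can be sketched in a few lines. This is a simplified illustration under my own names, not the engine's rasterizer: no texture or light interpolants, just coverage, with one side being a single top-to-bottom edge and the other side split in two at the middle vertex.

```c
#define GRID_W 16
#define GRID_H 16

unsigned char grid[GRID_H][GRID_W]; // zero-initialized toy "framebuffer"

// x coordinate of the edge (x0,y0)-(x1,y1) at scanline y.
// The edge is assumed non-horizontal (y0 != y1).
float EdgeX (int x0, int y0, int x1, int y1, int y)
{
	return x0 + (float)(y - y0) * (x1 - x0) / (y1 - y0);
}

// Minimal span rasterizer: one side is the single long edge from the top
// vertex to the bottom vertex; the side containing the middle vertex is
// made of two edges, switched at the middle vertex's scanline.
void FillTriangle (int x0, int y0, int x1, int y1, int x2, int y2)
{
	int xs[3] = {x0, x1, x2}, ys[3] = {y0, y1, y2}, i, j, x, y;

	// sort the vertices top to bottom; the edgetable index computation
	// achieves the same classification without actually swapping anything
	for (i = 0; i < 2; i++)
		for (j = i + 1; j < 3; j++)
			if (ys[j] < ys[i])
			{
				int t = ys[i]; ys[i] = ys[j]; ys[j] = t;
				t = xs[i]; xs[i] = xs[j]; xs[j] = t;
			}
	if (ys[0] == ys[2])
		return; // degenerate: all three vertices on one scanline

	for (y = ys[0]; y <= ys[2]; y++)
	{
		float xa = EdgeX (xs[0], ys[0], xs[2], ys[2], y); // the one-edge side
		float xb; // the two-edge side, split at the middle vertex
		if (y < ys[1])
			xb = EdgeX (xs[0], ys[0], xs[1], ys[1], y);
		else if (ys[1] != ys[2])
			xb = EdgeX (xs[1], ys[1], xs[2], ys[2], y);
		else
			xb = xs[1]; // flat bottom: last scanline ends at the middle vertex
		int left = (int)(xa < xb ? xa : xb);
		int right = (int)(xa > xb ? xa : xb);
		for (x = left; x <= right; x++)
			if (x >= 0 && x < GRID_W && y >= 0 && y < GRID_H)
				grid[y][x] = 1;
	}
}
```

Quake's version never swaps the vertices; the edge table precomputes which of r_p0/r_p1/r_p2 plays each role instead.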
Re: Makaqu
Thanks, I'm almost starting to understand it. It's just that I really want to understand how the whole MDL drawing works.
Fog isn't in my plans. But if I were to do it, I'd either calculate it per-entity (the fastest and least accurate way) or per-pixel, for maximum accuracy.
Re: Makaqu
But why ambient? I have per-entity fog that works quickly down the model spans, and it's more precise than the world-span fog, although my distance exponents are incorrect right now.
I did try ambient per-entity fog before, and I didn't like how models looked like ghosts at certain fog ranges.
Re: Makaqu
Because it should be the fastest method; get the entity's distance from the screen (which is already calculated by the depth sorting algorithm used for alpha blending), use the value to generate/select a tinted color shading map, and it's done. If multiple color shading maps are pre-generated for this, the impact on the rendering performance should be virtually zero. And unlike post-processed fog, this would also work properly on alpha blended entities.
I'd have to test it before deciding whether to use this solution, though.
Fog is a feature that requires an awful lot of testing and tweaking, doesn't help to speed up the content creation process (for mods or stand-alone games), and is also not a feature of the original Quake game, so it's something I don't feel inclined to work on. But I'm aware it's something that will be needed if I get serious about supporting community-made maps.
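The colormap-selection idea described above can be sketched like this (all names hypothetical, my own illustration): with a handful of pre-tinted shading maps, the only per-entity cost is turning the depth into an index.

```c
#define NUM_FOG_MAPS 8 // hypothetical number of pre-generated fog-tinted colormaps

// Pick which pre-generated fog colormap to use for an entity, given its
// depth (distance from the view plane, already available from the alpha
// sorting pass) and the fog range. The span drawers then use the selected
// map exactly like the normal colormap, at no extra per-pixel cost.
int FogMapForDepth (float depth, float fog_start, float fog_end)
{
	if (depth <= fog_start)
		return 0; // unfogged map
	if (depth >= fog_end)
		return NUM_FOG_MAPS - 1; // fully fogged map
	// linear ramp between the precomputed tint levels
	return (int) ((depth - fog_start) / (fog_end - fog_start) * (NUM_FOG_MAPS - 1));
}
```

Since the tinted maps are indexed-color tables like the regular colormap, this also composes naturally with alpha-blended entities, unlike a post-process fog pass.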
Re: Makaqu
Almost understanding the D_PolysetSetEdgeTable algorithm... Still got the "column" variants at the end to figure out, but for now I need to sleep, and won't have time tomorrow.
I've added a bunch of visual comments to help me in this. Some are still wrong, but I'll remove them after figuring the rest out:
Code:
void D_PolysetSetEdgeTable (void)
{
/*
possible relative positions:
? ?
? ?
? ?
? ?
? ?
? ?
column and left
?
?
?
column and right
?
?
?
row and top
?
? ?
row and bottom
? ?
?
maybe "row and top" and "row and bottom" can also work for the four corners, since the code does not distinguish the horizontal position between vertical variations:
? ? ? ?
? ?
? ?
? ? ? ?
"downward" diagonal (leftmost vertex on the center, rightmost vertex on the bottom)
?
?
?
"upward" diagonal (leftmost vertex on the center, rightmost vertex on the top)
?
?
?
"downward" diagonal (rightmost vertex on the center, leftmost vertex on the bottom)
?
?
?
"upward" diagonal (rightmost vertex on the center, leftmost vertex on the top)
?
?
?
PS: maybe the diagonals can be treated like "column and left" and "column and right", since the code does not distinguish the horizontal position between vertical variations
*/
int
edgetableindex = 0 // assume the vertices are already in top to bottom order
;
//
// determine which edges are right & left, and the order in which to rasterize them
//
if (r_p0[1] >= r_p1[1]) // vertex 0 is above or on the same height of vertex 1
{
if (r_p0[1] == r_p1[1]) // vertexes 0 and 1 are on the same height
{
pedgetable = &edgetables[ (r_p0[1] < r_p2[1]) ? 2 : 5]; // if both are lower than vertex 2, set the index to 2; otherwise, 5
/*
edgetableindex 5
0 1
2
edgetableindex 2
2
1 0
*/
return;
}
// if (r_p0[1] > r_p1[1])
edgetableindex = 1; // if vertex 0 is above vertex 1, set the index to 1
/*
relative position of vertex 2 is still unknown
? 0
?
? 1
?
*/
}
if (r_p0[1] == r_p2[1])
{
pedgetable = &edgetables[9 - edgetableindex];
/*
for edgetableindex 1
2 0
1
for edgetableindex 0
1
0 2
*/
return;
}
if (r_p1[1] == r_p2[1])
{
pedgetable = &edgetables[11 - edgetableindex];
/*
for edgetableindex 1
0
2 1
for edgetableindex 0
1 2
0
*/
return;
}
if (r_p0[1] > r_p2[1])
edgetableindex += 2;
/*
for edgetableindex 1
0
?
1
?
for edgetableindex 0
1
0
2
*/
if (r_p1[1] > r_p2[1])
edgetableindex += 4;
/*
for edgetableindex 1
0
1
2
for edgetableindex 0
1
2
0
for edgetableindex 3
0
1
2
for edgetableindex 2
1
0
2
*/
pedgetable = &edgetables[edgetableindex];
}
Re: Makaqu
So, there are real reasons for the D_PolysetSetEdgeTable algorithm to be hard to understand: There are two redundant outcomes AND two (unreachable) paradoxes in it!
Also, see this line, early in the function? Can you guess why it uses the values 2 and 5 as the possible outcomes for the index?
It's because those two values are the edgetableindex values of the two unreachable paradoxical outcomes! Id Software reused those indexes to avoid unnecessarily allocating more memory for the edgetables array.
See how every possible combination of clockwise vertex order and relative position is covered.
Code:
pedgetable = &edgetables[ (r_p0[1] < r_p2[1]) ? 2 : 5];
Here's my complete set of comments within the function:
Code:
void D_PolysetSetEdgeTable (void)
{
/*
possible relative positions:
? ?
? ?
? ?
? ?
? ?
? ?
column and left
?
?
?
column and right
?
?
?
row and top
?
? ?
row and bottom
? ?
?
"row and top" and "row and bottom" are also used for the four square corners,
since the code does not distinguish the horizontal position between vertical variations:
? ? ? ?
? ?
? ?
? ? ? ?
? ? ? ?
? ?
? ?
? ? ? ?
? ? ? ?
? ?
? ?
? ? ? ?
these diagonals are treated like "column and left" and "column and right",
since the code does not distinguish the horizontal position between vertical variations:
"downward" diagonal (leftmost vertex on the center, rightmost vertex on the bottom)
?
?
?
"upward" diagonal (leftmost vertex on the center, rightmost vertex on the top)
?
?
?
"downward" diagonal (rightmost vertex on the center, leftmost vertex on the bottom)
?
?
?
"upward" diagonal (rightmost vertex on the center, leftmost vertex on the top)
?
?
?
*/
int
edgetableindex = 0 // assume the vertices are already in top to bottom order
;
//
// determine which edges are right & left, and the order in which to rasterize them
//
if (r_p0[1] >= r_p1[1]) // vertex 0 is above or on the same height of vertex 1
{
if (r_p0[1] == r_p1[1]) // vertexes 0 and 1 are on the same height
{
pedgetable = &edgetables[ (r_p0[1] < r_p2[1]) ? 2 : 5]; // if both are lower than vertex 2, set the index to 2; otherwise, 5
/*
edgetableindex 5
0 1
2
edgetableindex 2
2
1 0
*/
return;
}
// if (r_p0[1] > r_p1[1])
edgetableindex = 1; // if vertex 0 is above vertex 1, set the index to 1
/*
relative position of vertex 2 is still unknown
? 0
?
? 1
?
*/
}
if (r_p0[1] == r_p2[1])
{
pedgetable = &edgetables[9 - edgetableindex];
/*
for edgetableindex 1, now 8
2 0
1
for edgetableindex 0, now 9
1
0 2
*/
return;
}
if (r_p1[1] == r_p2[1])
{
pedgetable = &edgetables[11 - edgetableindex];
/*
for edgetableindex 1, now 10
0
2 1
for edgetableindex 0, now 11
1 2
0
*/
return;
}
if (r_p0[1] > r_p2[1])
edgetableindex += 2;
/*
for edgetableindex 1, now 3
0
?
1
?
for edgetableindex 0, now 2
1
0
2
*/
// else // if (r_p0[1] < r_p2[1])
/*
for edgetableindex 1
2
0
1
for edgetableindex 0
?
1
?
0
*/
if (r_p1[1] > r_p2[1])
edgetableindex += 4;
/*
for edgetableindex 3, now 7
0
1
2
for edgetableindex 2, now 6
1
0
2
... nothing changed
for edgetableindex 1, now 5 (this actually never happens here, because
a) the if(r_p0[1] > r_p2[1]) statement turns edgetableindex 1 into 3, and
0
1
2
b) for (r_p0[1] < r_p2[1]), vertex 1 can't be above vertex 2, because vertex zero is below vertex 2 and vertex 1 is below vertex zero
2
0
1
... so it's an unreachable paradox)
for edgetableindex 0, now 4
1
2
0
*/
// else // if (r_p1[1] < r_p2[1])
/*
for edgetableindex 3
0
2
1
for edgetableindex 2
(another unreachable paradox)
2 // if this was reachable, 2 would be on the top
1
0
2 // but 2 is on the bottom, because (r_p0[1] > r_p2[1])
for edgetableindex 1
2
0
1
... nothing changed
for edgetableindex 0
2
1
0
*/
pedgetable = &edgetables[edgetableindex];
}
Now I can move on to studying the next functions.
Re: Makaqu
D_PolysetCalcGradients (for MDL models) looks completely different from both D_SpriteCalculateGradients (for SPR models) and D_CalcGradients (for BSP models).
I've done a small optimization to it, but now I'm trying to understand why they're so different. What I already know is that MDL models use 2D vertices to map their textures, while SPR and BSP models use axes (for position, rotation and scaling).
MDL models also have Gouraud shading, but this shouldn't interfere with understanding the texture mapping itself.
Here's a slightly optimized D_PolysetCalcGradients:
The gradient functions for the other model formats also have interesting differences. BSP surfaces take their mip size into consideration (a feature that could be adapted for supporting hi-res SPR texture replacements), and SPR models also calculate their z-buffer gradients in this function (BSP surfaces have their z-buffer gradients calculated in a separate step). BSP surfaces have their normals pre-calculated (if I recall correctly), but for SPR models, the normal is defined here:
By the way, I'm probably using the term "normal" incorrectly, because I still only have a vague idea of its meaning. Gonna read more about it now...
Code:
void D_PolysetCalcGradients_C (int skinwidth) // mankrip - transparencies - edited
{
	static float
		xstepdenominv
		, ystepdenominv
		, t0
		, t1
		, p01_minus_p21
		, p11_minus_p21
		, p00_minus_p20
		, p10_minus_p20
	;
	xstepdenominv = 1.0 / (float)d_xdenom;
	ystepdenominv = -xstepdenominv;

	// mankrip - optimization
	p00_minus_p20 = (r_p0[0] - r_p2[0]) * ystepdenominv;
	p01_minus_p21 = (r_p0[1] - r_p2[1]) * xstepdenominv;
	p10_minus_p20 = (r_p1[0] - r_p2[0]) * ystepdenominv;
	p11_minus_p21 = (r_p1[1] - r_p2[1]) * xstepdenominv;

	t0 = r_p0[2] - r_p2[2];
	t1 = r_p1[2] - r_p2[2];
	r_sstepx = (int) (t1 * p01_minus_p21 - t0 * p11_minus_p21);
	r_sstepy = (int) (t1 * p00_minus_p20 - t0 * p10_minus_p20);

	t0 = r_p0[3] - r_p2[3];
	t1 = r_p1[3] - r_p2[3];
	r_tstepx = (int) (t1 * p01_minus_p21 - t0 * p11_minus_p21);
	r_tstepy = (int) (t1 * p00_minus_p20 - t0 * p10_minus_p20);

	// ceil () for light so positive steps are exaggerated, negative steps diminished,
	// pushing us away from underflow toward overflow.
	// Underflow is very visible, overflow is very unlikely, because of ambient lighting
	t0 = r_p0[4] - r_p2[4];
	t1 = r_p1[4] - r_p2[4];
	r_lstepx = (int) ceil (t1 * p01_minus_p21 - t0 * p11_minus_p21);
	r_lstepy = (int) ceil (t1 * p00_minus_p20 - t0 * p10_minus_p20);

	t0 = r_p0[5] - r_p2[5];
	t1 = r_p1[5] - r_p2[5];
	r_zistepx = (int) (t1 * p01_minus_p21 - t0 * p11_minus_p21);
	r_zistepy = (int) (t1 * p00_minus_p20 - t0 * p10_minus_p20);

	a_sstepxfrac = r_sstepx & 0xFFFF;
	a_tstepxfrac = r_tstepx & 0xFFFF;
	a_ststepxwhole = skinwidth * (r_tstepx >> 16) + (r_sstepx >> 16);
}
Code:
void D_CalcGradients (msurface_t *pface)
{
	mplane_t *pplane;
	float mipscale;
	vec3_t p_temp1;
	vec3_t p_saxis, p_taxis;
	float t;

	pplane = pface->plane;
	mipscale = 1.0 / (float)(1 << miplevel);

	TransformVector (pface->texinfo->vecs[0], p_saxis);
	TransformVector (pface->texinfo->vecs[1], p_taxis);

	t = xscaleinv * mipscale;
	d_sdivzstepu = p_saxis[0] * t;
	d_tdivzstepu = p_taxis[0] * t;

	t = yscaleinv * mipscale;
	d_sdivzstepv = -p_saxis[1] * t;
	d_tdivzstepv = -p_taxis[1] * t;

	d_sdivzorigin = p_saxis[2] * mipscale - xcenter * d_sdivzstepu -
			ycenter * d_sdivzstepv;
	d_tdivzorigin = p_taxis[2] * mipscale - xcenter * d_tdivzstepu -
			ycenter * d_tdivzstepv;

	VectorScale (transformed_modelorg, mipscale, p_temp1);

	t = 0x10000*mipscale;
	sadjust = ((fixed16_t)(DotProduct (p_temp1, p_saxis) * 0x10000 + 0.5)) -
			((pface->texturemins[0] << 16) >> miplevel)
			+ pface->texinfo->vecs[0][3]*t;
	tadjust = ((fixed16_t)(DotProduct (p_temp1, p_taxis) * 0x10000 + 0.5)) -
			((pface->texturemins[1] << 16) >> miplevel)
			+ pface->texinfo->vecs[1][3]*t;

	//
	// -1 (-epsilon) so we never wander off the edge of the texture
	//
	bbextents = ((pface->extents[0] << 16) >> miplevel) - 1;
	bbextentt = ((pface->extents[1] << 16) >> miplevel) - 1;
}
Code:
void D_SpriteCalculateGradients (void)
{
	vec3_t
		p_normal
		, p_saxis
		, p_taxis
		, p_temp1
	;
	float
		distinv
	;

	TransformVector (r_spritedesc.vpn   , p_normal);
	TransformVector (r_spritedesc.vright, p_saxis);
	TransformVector (r_spritedesc.vup   , p_taxis);
	VectorInverse (p_taxis, p_taxis);

	distinv = 1.0f / (-DotProduct (modelorg, r_spritedesc.vpn));

	d_sdivzstepu = p_saxis[0] * xscaleinv;
	d_tdivzstepu = p_taxis[0] * xscaleinv;

	d_sdivzstepv = -p_saxis[1] * yscaleinv;
	d_tdivzstepv = -p_taxis[1] * yscaleinv;

	d_zistepu = p_normal[0] * xscaleinv * distinv;
	d_zistepv = -p_normal[1] * yscaleinv * distinv;

	d_sdivzorigin = p_saxis[2] - xcenter * d_sdivzstepu - ycenter * d_sdivzstepv;
	d_tdivzorigin = p_taxis[2] - xcenter * d_tdivzstepu - ycenter * d_tdivzstepv;
	d_ziorigin = p_normal[2] * distinv - xcenter * d_zistepu - ycenter * d_zistepv;

	TransformVector (modelorg, p_temp1);

	sadjust = ( (fixed16_t) (DotProduct (p_temp1, p_saxis) * 0x10000 + 0.5)) - (- (cachewidth >> 1) << 16);
	tadjust = ( (fixed16_t) (DotProduct (p_temp1, p_taxis) * 0x10000 + 0.5)) - (- (sprite_height >> 1) << 16);

	// -1 (-epsilon) so we never wander off the edge of the texture
	bbextents = (cachewidth << 16) - 1;
	bbextentt = (sprite_height << 16) - 1;
}