Vertex Array (Collecting info)

Discuss programming topics for the various GPL'd game engine sources.

Moderator: InsideQC Admins

Postby Baker » Fri May 20, 2011 1:29 am

With vertex arrays, do you have one vertex array per model or per frame? I'm referring here to alias models (Quake 1 .mdl), although I'd assume this applies to other simple animated model formats like Quake 2's .md2 (maybe even Quake 3's .md3).

I just want to collect some info in advance before I start to experiment with it.
The night is young. How else can I annoy the world before sunrise? 8) Inquisitive minds want to know! And if they don't -- well, like that has ever stopped me before ..
Baker
 
Posts: 3666
Joined: Tue Mar 14, 2006 5:15 am

Postby Spike » Fri May 20, 2011 3:24 am

as many as you specify.
the gpu doesn't care where your array starts.
all it cares about is that the indexes specified by the index list are correct.

gl has no built-in interpolation - that requires a vertex program. if you use one, you can specify two separate arrays, one for each frame, along with a blend weight, and do the blend in your vertex program. in such a situation you really want to include vertex normals too, so it can do lighting in the vertex program as well.

so either you rebuild the array each video frame with your interpolation code written in C (changing the data in the arrays), or you specify per-frame arrays and do the blending in a vertex program (changing the entire array itself).
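The first option (rebuilding the array in C each frame) can be sketched roughly like this; the struct and function names here are invented for illustration and aren't taken from any particular engine:

```c
#include <stddef.h>

/* Hypothetical vertex layout; a real engine would carry normals too. */
typedef struct { float x, y, z; } vec3_t;

/* Rebuild the drawing array by lerping between two stored poses.
   'lastpose' and 'currpose' are the per-frame vertex arrays;
   'blend' runs 0..1 (0 = lastpose, 1 = currpose). */
void R_LerpVerts (const vec3_t *lastpose, const vec3_t *currpose,
                  vec3_t *out, size_t numverts, float blend)
{
    for (size_t i = 0; i < numverts; i++)
    {
        out[i].x = lastpose[i].x + blend * (currpose[i].x - lastpose[i].x);
        out[i].y = lastpose[i].y + blend * (currpose[i].y - lastpose[i].y);
        out[i].z = lastpose[i].z + blend * (currpose[i].z - lastpose[i].z);
    }
}
```

The output array is then what you hand to glVertexPointer (or upload to a dynamic VBO) for that frame.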

if you don't have interpolation, then things get really easy! :P


FTE dynamically builds a new position+normal array for each frame drawn if it is using blending. If not using blending, it just gives the driver the buffered info directly, without extra copies.
the main advantage of arrays is that they reduce the overhead of calling glVertex multiple times for 500-vert models. Using glBegin requires you to specify the same vertex up to 3 times (depending on your fans/strips). If you're recalculating vertex normals 3 times per vertex, you can understand that using a vertex array would be faster, especially when you reach the vertex counts of more recent games.
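To make the sharing concrete, here is a hypothetical loader-side sketch that collapses a flat triangle soup into a unique vertex array plus index list, so each shared vertex is stored (and transformed) once instead of up to three times. All names are invented; a real loader would use a hash table rather than this O(n^2) scan:

```c
#include <stddef.h>
#include <string.h>

typedef struct { float x, y, z; } vec3_t;

/* Build an indexed mesh from a flat triangle soup (3 verts per tri).
   Duplicate positions collapse to a single array slot; 'indexes'
   receives one entry per soup vertex. Returns the unique vertex
   count. Caller must size 'verts' and 'indexes' for the worst case. */
size_t BuildIndexedMesh (const vec3_t *soup, size_t numsoup,
                         vec3_t *verts, unsigned short *indexes)
{
    size_t numverts = 0;

    for (size_t i = 0; i < numsoup; i++)
    {
        size_t j;

        /* linear search for an identical vertex we already emitted */
        for (j = 0; j < numverts; j++)
            if (memcmp (&verts[j], &soup[i], sizeof (vec3_t)) == 0)
                break;

        if (j == numverts)
            verts[numverts++] = soup[i];  /* new unique vertex */

        indexes[i] = (unsigned short) j;
    }

    return numverts;
}
```

Two triangles sharing an edge then cost 4 stored vertices instead of 6, and the savings grow with mesh density.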

but yeah, if you're using a vertex program, you can specify which vertex sets to blend from and just tell it the location of each frame's vertex array data, and do the blending entirely on the gpu, along with lighting calculations. The only reason FTE doesn't do that is because it creates a lot more pathways when you need to do blending with realtime lighting and things.
Spike
 
Posts: 2892
Joined: Fri Nov 05, 2004 3:12 am
Location: UK

Postby mh » Fri May 20, 2011 9:53 am

DirectQ never rebuilds its VBOs for drawing MDLs, and interpolates entirely on the GPU. The D3D code looks something like:
Code:
   SetStreamSource (0, mdl->Vertexes, Offsets[LERP_CURR], sizeof (aliasvertexstream_t));
   SetStreamSource (1, mdl->Vertexes, Offsets[LERP_LAST], sizeof (aliasvertexstream_t));
   SetStreamSource (2, mdl->TexCoords, 0, sizeof (aliastexcoordstream_t));

   SetIndices (mdl->Indexes);

So streams 0 and 1 point to different offsets in the same VBO. This is of course really only possible with a vertex shader. If you're stuck with the fixed pipeline you're SOL. There's a GL_ARB_vertex_blend extension but it was never really widely supported and doesn't seem to exist at all on modern hardware, probably because it emerged at about the same time as vertex shaders and so had no real reason to exist.
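For context, per-pose offsets like `Offsets[LERP_CURR]` above can be computed from the pose number alone when all poses of a model sit back to back in one buffer. The struct layout here is an assumption for illustration, not DirectQ's actual `aliasvertexstream_t`:

```c
#include <stddef.h>

/* Assumed per-vertex layout: position + normal, 24 bytes. */
typedef struct { float xyz[3]; float normal[3]; } aliasvertexstream_t;

/* Byte offset of pose 'posenum' within a buffer that stores every
   pose of a 'numverts'-vertex model contiguously. */
size_t PoseOffset (size_t posenum, size_t numverts)
{
    return posenum * numverts * sizeof (aliasvertexstream_t);
}
```

Binding stream 0 at `PoseOffset(currpose, numverts)` and stream 1 at `PoseOffset(lastpose, numverts)` gives the vertex shader both poses of the same vertex at once.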

Like Spike said, if you don't have interpolation then things are basically trivial. You don't need a vertex shader, you just glVertexPointer to the offset for the frame you want to draw, and draw it.

Sorting your MDLs by frame is helpful here, otherwise you'll be doing lots of VBO switching (which is more expensive in OpenGL than in D3D, but that doesn't mean it's free in D3D either). Otherwise you could just store all vertexes for all MDLs in a single VBO, but you still need to switch the vertex pointers for each frame (and it complicates loading a little).

Geometry instancing is also helpful here. The theory is that if two or more MDLs have the same lastpose and currpose then they can be drawn with one draw call, using blend weights, light data and a matrix as per-instance data. The pose order doesn't actually matter: if an MDL has the same poses but in the opposite order, just switch them and invert the blendweights. This is more important for D3D, as draw calls are more expensive there. They're not free in OpenGL, but you need much higher numbers before instancing becomes viable. It has its own overhead (you've got to build a VBO of per-instance data dynamically, for example) but it's all about balancing tradeoffs so that you come out on top.
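The pose-swap trick can be sketched as a small canonicalization step before batching; `instance_t` and the function name are invented for illustration:

```c
/* Per-instance animation state: which two poses to blend, and how far
   along the blend is (0 = lastpose, 1 = currpose). */
typedef struct { int lastpose, currpose; float blend; } instance_t;

/* Put the pose pair in a canonical order so that instances with the
   same poses - in either order - compare equal and can share a draw
   call. Swapping the poses means inverting the blend weight. */
void CanonicalizeInstance (instance_t *inst)
{
    if (inst->lastpose > inst->currpose)
    {
        int tmp = inst->lastpose;
        inst->lastpose = inst->currpose;
        inst->currpose = tmp;
        inst->blend = 1.0f - inst->blend;
    }
}
```

After this pass, sorting instances by (lastpose, currpose) groups everything that can be drawn in one instanced call.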

All of this makes me really really really wish that APIs allowed setting a different index buffer (or even just a different BaseVertexIndex param) per stream. :(
We had the power, we had the space, we had a sense of time and place
We knew the words, we knew the score, we knew what we were fighting for
mh
 
Posts: 2292
Joined: Sat Jan 12, 2008 1:38 am

Postby Spike » Fri May 20, 2011 6:04 pm

You get a basevertexindex by fiddling with the offset into the buffer/system memory and then using an actual base index of 0.

which is one thing worth noting.
q1 models have weird texturing. texture coordinates can be either front-facing or back-facing. triangles that are 'facing back' add half the texture width to the s coordinate of vertices flagged as onseam.
the easiest way to cope with this is to double the number of vertices, adding to the vertex index when importing back-facing triangles, then strip out the unused ones (figure out which ones are used, generate a mapping table from the old indexes to the new - one list for vertices, one for texcoords - and build your frame data from those mapping tables; there's no programming problem which can't be solved by an extra layer of indirection).
from what I remember, q2+ do not have this weirdness.
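The remapping Spike describes might look roughly like this in a loader. All names are illustrative (loosely modelled on Quake's `stvert_t`/`dtriangle_t`), and the "strip unused slots" pass is omitted for brevity:

```c
#include <stddef.h>

typedef struct { int onseam; int s, t; } stvert_t;
typedef struct { int facesfront; int vertindex[3]; } dtriangle_t;

/* Double the vertex space: index v means "front-facing texcoords",
   index v + numverts means "back-facing duplicate with s shifted by
   skinwidth/2". Emits one remapped index per triangle corner; a later
   pass would strip duplicates that no triangle ever references. */
void RemapSeamVerts (const dtriangle_t *tris, size_t numtris,
                     const stvert_t *stverts, size_t numverts,
                     int *out_indexes /* numtris * 3 entries */)
{
    for (size_t i = 0; i < numtris; i++)
        for (int j = 0; j < 3; j++)
        {
            int v = tris[i].vertindex[j];

            if (!tris[i].facesfront && stverts[v].onseam)
                v += (int) numverts;  /* use the shifted duplicate */

            out_indexes[i * 3 + j] = v;
        }
}
```

Once every triangle corner has an unambiguous index, frame data and texcoords can both be rebuilt through the same mapping tables.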
Spike

Postby mh » Sat May 21, 2011 2:29 am

Code:
         // check for back side and adjust texcoord s
         if (!triangles[i].facesfront && stverts[vertindex].onseam) s += hdr->skinwidth / 2;


;)

The main reason why I was thinking of moving to D3D11 some time ago was so that I could use a geometry shader for particles. That's a pretty pathetic reason, but I have a better one now: you can copy from one VBO to another entirely on the GPU with D3D10/11.
mh

