Matrix Calcs CPU vs. GPU?

Discuss programming topics for the various GPL'd game engine sources.
Baker
Posts: 3666
Joined: Tue Mar 14, 2006 5:15 am

Matrix Calcs CPU vs. GPU?

Post by Baker »

In, say, OpenGL 2.0 or higher, as I understand it ...

For every entity, effectively, you will be calculating your own matrices, either via a library or your own matrix code.

1. The projection and modelview matrices get multiplied together before the draw (which OpenGL 1.x did for you, i.e. GPU side).
2. This is shifting a ton of matrix calculations to the CPU.
3. Many of them don't have to be recalculated constantly, but since the projection matrix and the modelview matrix have to be multiplied, any camera change (i.e. you moved or turned) is going to require a ton of floating-point matrix calculations every frame.

Doesn't this to some degree fail to take advantage of what the GPU exists to do, namely handle a lot of calculations so the cpu doesn't have to?
The night is young. How else can I annoy the world before sunrise? 8) Inquisitive minds want to know ! And if they don't -- well like that ever has stopped me before ..
taniwha
Posts: 401
Joined: Thu Jan 14, 2010 7:11 am

Re: Matrix Calcs CPU vs. GPU?

Post by taniwha »

glLoadMatrix, glMultMatrix, etc. supposedly do the work on the hardware (well, depending on the hw and driver).

Certainly, caching matrices can help, but remember: if you cache them, calculating them on the CPU is probably cheaper in the long run than on the GPU, because getting a result back from the GPU means waiting for it. Also, you don't want to be doing your per-entity matrix calcs for every vertex, so keep that in mind too.
Leave others their otherness.
http://quakeforge.net/
Spike
Posts: 2914
Joined: Fri Nov 05, 2004 3:12 am
Location: UK

Re: Matrix Calcs CPU vs. GPU?

Post by Spike »

gpus are good at floating point stuff, but that's because they're good at batched operations rather than individual operations. the gpu doing your modelview calcs requires additional hardware, with additional synchronisation points/order dependence, which can leave the powerhouse parts of the card idle.
basically, sse is always better than taking the time to poke all the right registers to get the hardware to do your transformation between batches, and has no real issues with badly written programs that query the current matrix. simple is usually better.
mh
Posts: 2292
Joined: Sat Jan 12, 2008 1:38 am

Re: Matrix Calcs CPU vs. GPU?

Post by mh »

As a general rule, calcing a matrix once per x number of verts on the CPU is always going to be much more efficient. glTranslate/glRotate/etc are all calced on the CPU anyway, with the final matrix being uploaded to the GPU if needed, so there's no practical difference.
We had the power, we had the space, we had a sense of time and place
We knew the words, we knew the score, we knew what we were fighting for