Stock Quake uses a fixed tree of 32 areanodes; no more, no less, irrespective of the size of the map. What this means is that a bigger map generates bigger areas; those bigger areas each hold more entities, so the number of entity tests goes up and server performance goes down.
Instead of pulling areanodes from a fixed-size static array, we're going to allocate them from Hunk memory and terminate the recursion on node size rather than on depth. Testing all of the ID1 maps shows that in 80% to 90% of cases the stock code stops generating areanodes once the node size in x and y drops below about 1024 units. Cross-checking the final code confirms that the same percentage of ID1 maps still gets 32 areanodes with the size-based cutoff, so 1024 is a good and valid threshold: ID1-scale maps behave as before, while bigger maps get a deeper tree with smaller areas.
Let's go.
All code is in world.c; look for the "ENTITY AREA CHECKING" comment heading and replace the existing code under it (down to and including SV_ClearWorld) with this lot:
Code:
#define AREANODE_OPTIMIZATION

typedef struct areanode_s
{
	int		axis;		// -1 = leaf node
	float	dist;
	struct areanode_s	*children[2];
	link_t	trigger_edicts;
	link_t	solid_edicts;
} areanode_t;

#define AREA_DEPTH	4
#define AREA_NODES	32

#ifdef AREANODE_OPTIMIZATION
static areanode_t	*sv_areanodes = NULL;
#else
static areanode_t	sv_areanodes[AREA_NODES];
static int			sv_numareanodes;
#endif

/*
===============
SV_CreateAreaNode
===============
*/
areanode_t *SV_CreateAreaNode (int depth, vec3_t mins, vec3_t maxs)
{
	areanode_t	*anode;
	vec3_t		size;
	vec3_t		mins1, maxs1, mins2, maxs2;

#ifdef AREANODE_OPTIMIZATION
	// allocated from the hunk, so it's automatically freed on map change
	anode = (areanode_t *) Hunk_Alloc (sizeof (areanode_t));
#else
	anode = &sv_areanodes[sv_numareanodes];
	sv_numareanodes++;
#endif

	ClearLink (&anode->trigger_edicts);
	ClearLink (&anode->solid_edicts);

#ifdef AREANODE_OPTIMIZATION
	VectorSubtract (maxs, mins, size);

	// most id1 maps stop creating area nodes at this size
	if (size[0] < 1024 && size[1] < 1024)
	{
		anode->axis = -1;
		anode->children[0] = anode->children[1] = NULL;
		return anode;
	}
#else
	if (depth == AREA_DEPTH)
	{
		anode->axis = -1;
		anode->children[0] = anode->children[1] = NULL;
		return anode;
	}

	VectorSubtract (maxs, mins, size);
#endif

	// split along the longer of the two horizontal axes
	if (size[0] > size[1])
		anode->axis = 0;
	else
		anode->axis = 1;

	anode->dist = 0.5 * (maxs[anode->axis] + mins[anode->axis]);

	VectorCopy (mins, mins1);
	VectorCopy (mins, mins2);
	VectorCopy (maxs, maxs1);
	VectorCopy (maxs, maxs2);

	maxs1[anode->axis] = mins2[anode->axis] = anode->dist;

	anode->children[0] = SV_CreateAreaNode (depth+1, mins2, maxs2);
	anode->children[1] = SV_CreateAreaNode (depth+1, mins1, maxs1);

	return anode;
}

/*
===============
SV_ClearWorld
===============
*/
void SV_ClearWorld (void)
{
	SV_InitBoxHull ();

#ifdef AREANODE_OPTIMIZATION
	// the previous map's tree went away with the per-map hunk memory
	sv_areanodes = SV_CreateAreaNode (0, sv.worldmodel->mins, sv.worldmodel->maxs);
#else
	memset (sv_areanodes, 0, sizeof (sv_areanodes));
	sv_numareanodes = 0;
	SV_CreateAreaNode (0, sv.worldmodel->mins, sv.worldmodel->maxs);
#endif
}
In theory Quake servers will now run a little faster; in practice you might have other bottlenecks that prevent you from seeing a measurable performance increase.