Code: Select all
ontop
{
	sort nearest	//insert into the last group of things drawn (these will not be hit by rtlights)
	{
		map $whiteimage	//just a white image
		rgbgen const 1 0.5 0.5	//tinted red
		blendfunc gl_one gl_one	//simple addition with the current framebuffer
		nodepthtest	//disable depth tests (and writes)
	}
}
no glsl required...
alternatively you can just drawpic it after the 3d scene if you're trying to do post processing.
that's what you originally asked for anyway.
glsl has these outputs:
gl_FragColor/gl_FragData[]/outs (not to be confused with the actual pixel colour)
gl_FragDepth (overrides the fragment's depth value - using this WILL disable early-z optimisations, so only use it when the only other choice is more overdraw)
discard; (discards the fragment entirely - this can have performance implications, as the z values are then only known once the fragment shader completes, rather than before)
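as a quick sketch (plain glsl 1.x, nothing fte-specific - the sampler/varying names here are just made up for illustration):

Code: Select all
//fragment shader sketch showing the three outputs above
varying vec2 tc;
uniform sampler2D s_t0;
void main()
{
	vec4 col = texture2D(s_t0, tc);
	if (col.a < 0.5)
		discard;	//throw this fragment away entirely (can cost you early-z)
	gl_FragColor = col;	//the value handed to the blend unit, NOT the final pixel colour
	//gl_FragDepth = gl_FragCoord.z;	//uncomment to override depth (also kills early-z)
}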
the fragment vs pixel distinction is valid - the glsl does NOT write the colour that the pixel will become, rather it writes a value that will be passed to the blend unit of the gpu. the fragment shader doesn't have access to the depth buffer or the colour buffer - only the blend unit does.
gl_FragDepth contains the depth of the fragment, not that of the framebuffer. Reading it is explicitly disallowed (you should be able to calculate it regardless). This is why you're normally expected to use render-to-texture or whatever first, if you want access to the 'framebuffer' (ie: by making a copy of it first) - this avoids weird race conditions.
so yeah, use something like this:
Code: Select all
vector vsize = argument_to_updateview;	//the virtual screen size, as passed to CSQC_UpdateView
clearscene();
vector psize = (vector)getproperty(VF_SCREENPSIZE); //get the actual size in pixels, so we don't end up with any scaling
setproperty(VF_RT_DESTCOLOUR, "colourbuf", 1, psize); //rgba8
setproperty(VF_RT_DEPTH, "depthbuf", 6, psize); //depth32, for lots of precision (3d writes to it, 2d reads from it)
renderscene();
//colourbuf and depthbuf now contain the scene colour+depth.
setproperty(VF_RT_DESTCOLOUR, ""); //2d is now drawing to the screen again.
setproperty(VF_RT_SOURCECOLOUR, "colourbuf"); //$sourcecolour now refers to 'colourbuf'
drawpic([0,0], "mypostprocshader", vsize, '1 1 1', 1, 0); //do post processing
setproperty(VF_RT_DEPTH, "depthbuf", 6, psize); //done reading from that now.
then if you have a mypostprocshader shader whose maps are $sourcecolour followed by $sourcedepth, you can read the depth from s_t1 - the red channel holds the depth values. note that if you try to draw it directly then you'll find there's not much difference between any of the pixels - be prepared to rescale it by a lot before you can actually see any clear differences.
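something like this ought to do it (a sketch only - I'm assuming fte's usual glsl boilerplate here, ie ftetransform()/v_texcoord/s_tN, and the 256 exponent is just an arbitrary rescale to make near-1 depth values distinguishable):

Code: Select all
mypostprocshader
{
	program mypostproc
	{
		map $sourcecolour	//bound as s_t0
	}
	{
		map $sourcedepth	//bound as s_t1, depth in the red channel
	}
}

Code: Select all
//mypostproc glsl
varying vec2 tc;
#ifdef VERTEX_SHADER
attribute vec2 v_texcoord;
void main()
{
	tc = v_texcoord;
	gl_Position = ftetransform();
}
#endif
#ifdef FRAGMENT_SHADER
uniform sampler2D s_t0;	//$sourcecolour
uniform sampler2D s_t1;	//$sourcedepth
void main()
{
	float d = texture2D(s_t1, tc).r;
	d = pow(d, 256.0);	//rescale hard, otherwise everything looks uniformly white
	gl_FragColor = vec4(texture2D(s_t0, tc).rgb * d, 1.0);
}
#endif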
note that you may need to draw two scenes if you want to do weird depth compares - first time to generate the normal scene, second time you have a depth buffer that you can read to compare against.
alternatively if you're using fte's deferred lighting, you can use the $gbufferN image indicated by gl_deferred_pre_depth, in any shader with a sort key of unlitdecal, banner, underwater, blend, additive, or nearest, or you can use it freely in post-processing shaders. note that you can sample any of the gbuffer images after that point, so you can have different (opaque) entities writing into one of the channels that you can then read out later. but yeah, I'll probably end up breaking that method again at some point. I get bored, see...
using .forceshader, you can draw the ent into the gbuffers, and then add it to the scene using a shader with a different sort key.
note that a 'sort nearest' shader with 'depthfunc greater' will draw only where the thing you're trying to draw was obscured.
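ie, something like this (a sketch - the name and tint are arbitrary; note there's no nodepthtest here, the depth test has to stay enabled for depthfunc greater to mean anything):

Code: Select all
obscuredoverlay
{
	sort nearest
	{
		map $whiteimage
		rgbgen const 1 0.2 0.2	//reddish tint over the hidden parts
		blendfunc gl_one gl_one
		depthfunc greater	//only passes where something else is in front
	}
}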
using forceshader and two addentity calls you can get weird overlays working.
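roughly like this (untested sketch - I'm assuming .forceshader takes a shaderforname handle, and 'obscuredoverlay' is a hypothetical sort-nearest/depthfunc-greater shader like the one described above):

Code: Select all
//draw the ent normally first, then again with the overlay shader
self.forceshader = 0;	//normal materials
addentity(self);
self.forceshader = shaderforname("obscuredoverlay");	//sort nearest + depthfunc greater
addentity(self);
self.forceshader = 0;	//don't leak the override into later frames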
using the undocumented/untested VF_RT_DESTCOLOUR1 value, you can draw stuff to a different image (or you could try to figure out some way to keep the alpha channel usable, like using alphamask in all your other shaders). don't underestimate blend funcs using gl_dst_alpha either.
you can then run an edge finding post-process shader to draw outlines for obscured ents. no depth buffer reading needed.