…it’s the biggest jackshit of all? (or how people let you think they’re a lot more skilled than they actually are.)
Working through 3D Graphics with XNA by Sean James and its prelighting renderer (chapter 3), adding one thing at a time, starting with the depth buffer and the normal buffer. I just discovered the depth buffer is completely not what it is supposed to be (a picture of the scene with a nice gradient depending on the distance from the camera to the vertices); instead, I got a single-coloured texture!
Before going deeper, one big issue with Sean James‘ code: he uses an insanely huge scale to render the scene, maybe because he was unable to deal with his own mess (chapter 3 is messy as hell: incomplete code, classes appearing from nowhere (ex: PPPointLight); I challenge you to make the code in the book work as-is!)
Such a scale is a very bad start: as this article shows, precision is already an issue (depth stored as floating point), and others point to the same problem.
Another questionable choice is using a depth buffer with a single object, which doesn’t make much sense; AFAIK it is used to know which pixels of objects should be drawn or occluded in the scene.
Here is what I got so far:
Looking at this LPP implementation by J. Coluna (see the example picture with the lizard), the depth buffer I got looks close to the expected one, maybe even correct, but it needs more investigation to be sure.
Here is another example of good looking depth buffer (unfortunately, just a show-off).
This other tutorial (from Riemer Grootjans) shows a good-looking depth buffer, also built from the z/w vertex coordinates, and this one works.
Reading the shader code shows it is the same as Catalin Zima‘s deferred rendering tutorial in XNA 3.1 (and the XNA 4.0 conversion by Roy Triesscheijn).
Comparison of Catalin Zima and Sean James shader code
Got a clue in the HLSL Development Cookbook: render target indices are CRITICAL, as each is assigned a dedicated duty (0 = depth/stencil, 1 = diffuse/specular, 2 = normal and 3 = specpower). In the prelighting renderer, slots 1 & 2 (in fact 0 & 1, to be confirmed) are assigned in the wrong order (0 = normal, 1 = depth). By adding a dummy RT for slot 1 and ordering them depth, dummy, normal, I finally get a working depth buffer (gradient according to distance), but lose the normal buffer!
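To make the slot mapping concrete, here is a minimal sketch (my own, not Sean James‘ actual code, struct and variable names are mine) of how the pixel shader COLORn semantics must line up with the SetRenderTargets call on the XNA 4.0 side:

```hlsl
// Pixel shader MRT output: COLORn writes to the n-th render target
// passed to GraphicsDevice.SetRenderTargets(...) on the C# side.
struct PSOutput
{
    float4 Depth  : COLOR0; // slot 0 <- depthRT
    float4 Normal : COLOR1; // slot 1 <- normalRT
};

// C# side (XNA 4.0); the order must match the semantics above:
// device.SetRenderTargets(depthRT, normalRT);
```

If the C# call lists the targets in a different order than the COLORn indices, each buffer silently receives the wrong data, which is exactly the swapped depth/normal symptom described above.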
In fact, using only normal and depth in that order in the SetRenderTargets call seems to make depth work (it may also just be the normal buffer in 8 bits instead of 24 bits), and using dummy, normal, depth in that order makes everything “work”: I only get a flat depth buffer as before, a dummy diffuse colour buffer and the normal buffer.
Catalin Zima’s deferred rendering with the buffers displayed, still no good depth buffer???
To quote what he wrote on his tutorial:
“The depth might seem all-white, but it isn’t actually: the values are close to white because of how depth precision is distributed in the scene.”
I’m not sure that’s true; see Riemer Grootjans‘s HLSL tutorial with shadow mapping: his depth buffer looks more like what it should be (many, if not most, examples of depth/z-buffers I’ve seen show a gradient-coloured picture). If yours looks all white/black/red/whatever colour was picked, it most likely means something is wrong: either you have chosen the wrong kind of depth value (the typical z/w is not linear between the near and far planes, leading to ugly z-fighting artefacts in the distance), or your near/far plane values are bad for what you’re trying to do.
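For comparison, a depth written as a linear view-space distance does give the expected gradient. A minimal sketch (my own code, not from any of the tutorials quoted here; FarPlane and the TEXCOORD0 input are assumptions about how the vertex shader passes data along):

```hlsl
// FarPlane is an effect parameter set from the application.
float FarPlane;

// viewDepth = view-space distance from the camera, computed in the
// vertex shader and interpolated per pixel.
float4 PS_LinearDepth(float viewDepth : TEXCOORD0) : COLOR0
{
    // Linear depth in [0,1]: evenly distributed from camera to far
    // plane, unlike z/w which packs most precision near the camera.
    return viewDepth / FarPlane;
}
```

This is what most of the “nice gradient” depth buffer screenshots actually show: a linearized depth, not the raw z/w value.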
Addition: in fact, what I can see in the depth buffer render target is that the same colour is assigned to a given model, so they all look like they’re using a flat ambient lighting effect; but distance SHOULD BE pixel-wise, not model-wise (the ultimate goal of the depth buffer is to discard occluded pixels, not whole models).
Could it be my hardware that has issues? Or the driver?
Unless I’m wrong (if so, please tell me), I have evidence the depth buffer is not used for depth testing as it should be in this very popular deferred rendering “tutorial”: I set the depth output to 0 in one case, then to 0.99 in the other, instead of depth.x/depth.y (line 107 of the effect RenderGBuffer.fx), and… surprisingly, there were no depth issues at all (no polygon drawn in front instead of being occluded, no z-fighting), but the final lighting of the scene doesn’t look the same (with the depth buffer value set to 1, the scene is pitch black.)
depth buffer forced to 0 instead of x/y
depth buffer forced to 0.99 instead of x/y
WTF??? Depth should not be used as a diffuse colour, or at least should not give those results when forced as above!
Another example, “popular” rendering code from Jorge Adriano Luna (example from LPP game framework, under XNA):
LPP rendering from J. Coluna, still no better!
Again, same issue: the depth texture is “binary” (it looks more like a mask for opacity testing than what it is supposed to be.)
Here are two new pictures of what I got. As you may notice, something is wrong, maybe related to culling or to normals (but that would be weird): the inside of the objects shows the expected result, a nice gradient from close to far vertices, whereas the outside shows a single-coloured, flat surface. WTF? If you have any clue, please let me know.
view from inside the mesh
view from outside the meshes
In the meantime, this great article from Shawn Hargreaves’ blog gives lots of useful intel about render targets, depth buffers, and related stuff.
The renderers above suffer from depth buffer precision issues (explained here, for example): 50% of the precision is located in the [ nearplane, nearplane * 2 ] range, which is very, very bad. For NP = 0.1, it means the [ 0.1, 0.2 ] range holds 50% of the precision; if farplane = 1000, just do the maths! In his HLSL tutorial, Riemer Grootjans uses NP = 5, and the result is a lot more… useful, I guess. Another issue is that most (probably all, in fact) DirectX and OpenGL tutorials, documentation and talks use the hardware depth/stencil buffer, but with XNA/MonoGame we don’t: we use a simple texture, which is not the same thing and brings its own precision and performance issues.
Little update on the three tutorials quoted above: Catalin Zima (and therefore Sean James) made the mistake of believing that because some render target format is given, the data (more accurately, the pixel shader output) will magically get the same precision. That’s wrong! Render targets are just texture storage without any particular meaning for the shaders.
Catalin Zima uses in his shader:
half4 Depth : COLOR2;
output.Depth = input.Depth.x / input.Depth.y;
half4 is, as its name suggests, a vector of four half-precision floats, so 16 bits are used per component (the four components x,y,z,w or r,g,b,a; either set of subscripts can be used, but not mixed)
Riemer Grootjans uses:
float4 Color : COLOR0;
Output.Color = PSIn.Position2D.z/PSIn.Position2D.w;
A bit better.
But in both cases, the variable (output.Depth or Output.Color) gets the x/y or z/w value in all four components (the scalar-to-vector cast rule), so x,y,z,w (or r,g,b,a) all hold the same value, which just wastes (precious) memory. And they don’t care at all: these are just samples, not meant to be used as-is in actual games, especially in the case of the deferred rendering tutorial.
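In other words, writing a scalar into a float4 output just replicates it across all four channels. If only one channel is needed, a single-channel render target format says so explicitly. A sketch (my own, assuming XNA 4.0, where SurfaceFormat.Single is a 32-bit one-channel format):

```hlsl
// output.Depth = input.Depth.x / input.Depth.y;
// is a scalar-to-vector splat: r, g, b and a all receive the same
// value, so three of the four channels are pure waste.

// Better: declare the target as one channel, write one value.
// C# side: new RenderTarget2D(device, width, height, false,
//                             SurfaceFormat.Single, DepthFormat.Depth24);
float PS_Depth(float2 depth : TEXCOORD0) : COLOR0
{
    return depth.x / depth.y; // full 32-bit precision, one channel
}
```

Same memory cost as a plain Color target, but all the bits go to the one value we actually store.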
Not convinced? Good, try it yourself: allow the camera to move farther (especially in the DR tutorial, where the camera is, by a nice coincidence, clamped to very small moves, which hides the issue) and you may see weird artefacts in the distance, because the renderer doesn’t have enough precision to decide which pixel is in front of another. This is obvious in Sean James‘s sample (because of the incredibly ridiculous scale used).
I’ve started to think of a “depth split” idea: providing four near/far plane pairs to the vertex shader, then calculating four values, from near to far pixels, each of them using 4 to 8 bits of precision, maximizing the use of memory space. We could use something like this:
as the critical point is the near/far plane distance ratio, the smaller the better. The shader would then look like this:
output.Depth.r = input.depth1.x / input.depth1.y;
output.Depth.g = input.depth2.x / input.depth2.y;
...
The drawback is that depth sampling will be a little harder: we need to fetch the colour at pixel (u,v), then pick the relevant colour component, probably the one which best matches the currently processed pixel, i.e. a depth test < or > against a given threshold.
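A sketch of what that sampling could look like (my own untested idea; it assumes the four splits were written to r, g, b, a with matching near/far pairs, and that the view-space distance of the current pixel is available as a selector):

```hlsl
// Pick the channel whose depth range contains the current pixel.
// rangeEnds holds the far plane of each of the four splits.
float SampleSplitDepth(sampler2D depthMap, float2 uv,
                       float viewDist, float4 rangeEnds)
{
    float4 d = tex2D(depthMap, uv);
    if (viewDist < rangeEnds.x) return d.r;
    if (viewDist < rangeEnds.y) return d.g;
    if (viewDist < rangeEnds.z) return d.b;
    return d.a;
}
```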
To be continued…