Infinite Controls: simple UI for your games

This is a very much work-in-progress project aiming to provide simple and limited UI controls, more game-oriented than the usual desktop-oriented ones, as games don't need the same set of features as modern OSes and they don't even work the same way, at least for XNA/MonoGame-based games (which are not event-driven).

More details are available on the GitHub project page.

Here is a view of the sample:


InfiniteControls sample 1

The volume control currently doesn't allow dragging the cursor, and progress bars need a lot more work (especially considering how many features I would like to add: vertical bars, progress/regress bars with text and customisable justification, etc.).

Also, some kind of menu framework is planned.

Posted in Uncategorized | Leave a comment

Depth buffer with XNA: did I miss something or…

…is it the biggest jackshit of all? (Or how people let you think they're far more skilled than they actually are.)

Working through 3D Graphics with XNA by Sean James and its prelighting renderer (chapter 3), adding one thing at a time and starting with the depth buffer and the normal buffer, I just discovered the depth buffer is completely not what it is supposed to be (a picture of the scene with a nice gradient depending on the distance from the camera to the vertices); instead, I got a single-coloured texture!

Before going deeper, one big issue with Sean James' code: he uses an insanely huge scale to render the scene, maybe because he was unable to deal with his own mess (chapter 3 is messy as hell: incomplete code, classes appearing from nowhere (e.g. PPPointLight); I challenge you to make the code in the book work as-is!)

Such a scale is a very bad start: as this article shows, for instance, precision is already an issue (depth stored as floating point), and others point to the same problem.

Another questionable choice is to use a depth buffer with a single object, which doesn't make much sense: AFAIK it is used to know which pixels of which objects should be drawn or occluded in the scene.

Here is what I got so far:


Looking at this LPP implementation by J. Coluna (see the example picture with the lizard), the depth buffer I got looks close to the expected one, or may even be good, but it needs more investigation to be sure.

Here is another example of a good-looking depth buffer (unfortunately, it's just a show-off).

This other tutorial (from Riemer Grootjans) shows a good-looking depth buffer, also built from the z/w vertex coordinates, and this one works.

Reading the shader code shows it is the same as in Catalin Zima's deferred rendering tutorial for XNA 3.1 (and the XNA 4.0 conversion by Roy Triesscheijn).


Comparison of Catalin Zima and Sean James shader code

I got a clue from the HLSL Development Cookbook: render target indices look CRITICAL, as each one is assigned a dedicated duty (0 = depth/stencil, 1 = diffuse/specular, 2 = normal and 3 = specular power). In the prelighting renderer, slots 1 and 2 (in fact 0 and 1, to be confirmed) are assigned in the wrong order (0 = normal, 1 = depth). By adding a dummy render target for slot 1 and binding them in depth, dummy, normal order, I finally get a working depth buffer (a gradient according to distance), but I lose the normal buffer!

In fact, using only normal and depth, in that order, in the SetRenderTargets call seems to make depth work (it may also just be the normal buffer in 8 bits instead of 24 bits), while using dummy, normal, depth in that order makes everything "work": I only get a flat depth buffer as before, a dummy diffuse colour buffer and the normal buffer.
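The pairing between pixel shader outputs and SetRenderTargets slots can be summed up in a short C# sketch (the render target variable names here are hypothetical; only the ordering rule is the point):

```csharp
// Slot i of SetRenderTargets receives whatever the pixel shader writes to COLORi.
// If the shader declares depth on COLOR0 and normals on COLOR1, the C# call must
// bind the targets in that exact order (depthTarget/normalTarget are placeholder names):
GraphicsDevice.SetRenderTargets(depthTarget, normalTarget); // 0 = depth, 1 = normal

// Swapping them raises no error: depth silently ends up in the normal texture
// and vice versa, which is exactly the kind of bug described above.
// GraphicsDevice.SetRenderTargets(normalTarget, depthTarget);
```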


Catalin Zima deferred rendering with buffers displayed, still no good depth buffer ???

To quote what he wrote in his tutorial:

“The depth might seem all-white, but it isn’t actually: the values are close to white because of how depth precision is distributed in the scene.”

I'm not sure that's true; see Riemer Grootjans's HLSL tutorial with shadow mapping, where the depth buffer looks more like what it should be (many, if not most, of the depth/z-buffer examples I have seen show a gradient-coloured picture). If yours looks entirely white/black/red/whatever colour you picked, it most likely means something is wrong: either you have chosen the wrong depth encoding (the typical z/w is not linear along the near-to-far plane distance, leading to ugly artefacts caused by z-fighting in the distance), or your near/far plane values are bad for your needs or for what you are trying to do.

Addition: in fact, what I can see in the depth buffer render target is that the same colour is assigned to a given model, so they all look as if a flat ambient lighting effect were applied; but distance SHOULD BE per-pixel, not per-model (the ultimate goal of a depth buffer is to discard occluded pixels, not whole models).

Could it be my hardware that has issues? Or the driver?

Unless I'm wrong (if so, please tell me), I have evidence that the depth buffer was not used for depth testing, as it should be, in this very popular deferred rendering "tutorial". I set the depth output to 0 in one case and to 0.99 in the other, instead of depth.x/depth.y (line 107 of the RenderGBuffer.fx effect), and, surprisingly, there were no depth issues at all (no polygon drawn in front instead of being occluded, for example, and no z-fighting), but the final lighting of the scene doesn't look the same (with the depth buffer value set to 1, the scene is pitch black).

WTF??? Depth should not be used as a diffuse colour, or at least it should not give those results when set as above!

Another example, "popular" rendering code from Jorge Adriano Luna (example from the LPP game framework, under XNA):


LPP rendering from J. Coluna, still not better !

Again, the same issue: the depth texture is "binary" (it looks more like a mask for opacity testing than like what it is supposed to be).

Here are two new pictures of what I've got. As you may notice, something is wrong, maybe related to culling or to normals (but it would be weird if that were the case): the inside of the objects shows the expected result, a nice gradient from close to far vertices, whereas the outside of the objects shows a single-coloured, flat surface. WTF? If you have any clue, please let me know.

In the meantime, this great article from Shawn Hargreaves' blog gives lots of useful information about render targets, depth buffers, and related topics.

The renderers in the examples above suffer from depth buffer precision issues (explained here, for example): 50% of the precision is located in the [nearPlane, nearPlane * 2] range, which is very, very bad. For NP = 0.1, it means the [0.1, 0.2] range holds 50% of the precision; if the far plane is 1000, just do the maths! In his HLSL tutorial, Riemer Grootjans uses NP = 5, and the result is a lot more... useful, I guess. Another issue is that most (probably all, in fact) DirectX and OpenGL tutorials, documentation and talks use the hardware depth/stencil buffer, but with XNA/MonoGame we don't use it: we use a simple texture, which is not the same thing and can have issues regarding precision and performance.
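To see how bad the non-linearity is, the standard z/w depth formula can be evaluated directly; this is a self-contained C# sketch (the formula is the usual perspective-projection depth, not code taken from any of the tutorials):

```csharp
using System;

class DepthPrecisionDemo
{
    // Post-projection depth of a standard (non-linear) z/w depth buffer:
    // d(z) = (f / (f - n)) * (1 - n / z), with n = near plane, f = far plane.
    static double Depth(double z, double n, double f)
        => (f / (f - n)) * (1.0 - n / z);

    static void Main()
    {
        const double n = 0.1, f = 1000.0; // values similar to the tutorials discussed above
        foreach (double z in new[] { 0.1, 0.2, 1.0, 10.0, 100.0, 1000.0 })
            Console.WriteLine($"z = {z,7:F1}  ->  depth = {Depth(z, n, f):F5}");
        // z = 0.2 already maps to ~0.5: half of the [0, 1] depth range is spent on
        // the first 0.1 units of the scene; everything farther is crushed near 1,
        // which is why the buffer looks "all white".
    }
}
```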

Little update on the three usual tutorials quoted above: Catalin Zima (and therefore Sean James) made the mistake of believing that because a given render target format is specified, the data (more accurately, the pixel shader output) will magically offer the same precision. That's wrong! Render targets are mere texture space without any particular meaning for the shaders.

Explanations:

Catalin Zima uses in his shader:

half4 Depth : COLOR2;

output.Depth = input.Depth.x / input.Depth.y;

half4 is, as its name suggests, half the precision of a float4: each of its four components (x, y, z, w or r, g, b, a; either subscript set can be used, but not mixed) is stored as a 16-bit half-precision float instead of a 32-bit float.

Riemer Grootjans uses:

float4 Color : COLOR0;

Output.Color = PSIn.Position2D.z/PSIn.Position2D.w;

A bit better.

But in both cases, the variable (output.Depth or Output.Color) gets the x/y or z/w value replicated into all four of its components (the scalar-to-vector cast rule), so x, y, z and w (or r, g, b and a) all hold the same value, which just leads to a waste of (precious) memory. And they don't care at all; these are just samples, not supposed to be the same things used in actual games, especially in the case of the deferred rendering tutorial.

Not convinced? That's good, try it yourself: allow the camera to move farther away (especially in the deferred rendering tutorial, where the camera is, by a nice coincidence, clamped to very small moves, which hides the issue), and you may see weird artefacts in the distance because the renderer can't get enough precision to display one pixel in front of another, which is obvious in Sean James's sample (because of the incredibly ridiculous scale used).

I've started to think about a "depth split" idea: providing four near/far plane pairs to the vertex shader, then calculating four values, for near to far pixels, each of them using 4 to 8 bits of precision, and so maximising the use of the memory space. We could use something like this:

n1 = 0.1    f1 = 5.0
n2 = 5.0    f2 = 15.0
n3 = 15.0   f3 = 100.0
n4 = 100.0  f4 = 5000.0

as the critical point is the near/far plane distance ratio: the smaller, the better. The shader then looks like this:

output.Depth.r = input.depth1.x / input.depth1.y;
output.Depth.g = input.depth2.x / input.depth2.y;
...

The drawback is that depth sampling will be a little harder: we need to fetch the colour at pixel (u, v), then pick the relevant colour component, probably by choosing the one which best matches the currently processed pixel (i.e. a depth test against given thresholds).
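To check that the idea holds, the encode/decode logic can be simulated on the CPU; this is a hedged C# sketch of the scheme described above (the four ranges are the ones listed; the channel-picking rule, "first non-saturated channel", is my assumption, not existing shader code):

```csharp
using System;

class DepthSplitSketch
{
    // The four near/far ranges proposed above; each range gets its own 8-bit channel.
    static readonly (double n, double f)[] Ranges =
        { (0.1, 5.0), (5.0, 15.0), (15.0, 100.0), (100.0, 5000.0) };

    // Encode: one linear 0..1 value per range, clamped outside its own range.
    static double[] Encode(double z)
    {
        var channels = new double[4];
        for (int i = 0; i < 4; i++)
        {
            var (n, f) = Ranges[i];
            channels[i] = Math.Clamp((z - n) / (f - n), 0.0, 1.0);
        }
        return channels;
    }

    // Decode: the first channel that is not saturated (< 1.0) holds the depth;
    // this is the "threshold test" mentioned above.
    static double Decode(double[] channels)
    {
        for (int i = 0; i < 4; i++)
        {
            var (n, f) = Ranges[i];
            if (channels[i] < 1.0)
                return n + channels[i] * (f - n);
        }
        return Ranges[3].f; // beyond the last far plane
    }

    static void Main()
    {
        // Round trip: each sample distance decodes back to itself.
        foreach (double z in new[] { 2.0, 12.0, 60.0, 2500.0 })
            Console.WriteLine($"z = {z,6:F1}  decoded = {Decode(Encode(z)):F2}");
    }
}
```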

To be continued…

Posted in 3D, MonoGame, XNA | Tagged , , , | Leave a comment

How to fix shader warnings (with MonoGame)

When compiling shaders with the MonoGame tools, it is unfortunately quite common to get this kind of warning:

warning X3206: 'mul': implicit truncation of vector type
warning X3206: implicit truncation of vector type
warning X3206: implicit truncation of vector type
warning X3206: 'mul': implicit truncation of vector type
warning X3206: implicit truncation of vector type
warning X3206: implicit truncation of vector type

Such warnings are related to this kind of code:

output.Normal = mul(input.Normal, World);

float3 lightDir = normalize(LightPosition - input.WorldPosition);

where the data are declared this way:

float4x4 World;
float3 Normal : NORMAL0;
float3 LightPosition = float3(0, 5000, 0);
float4 WorldPosition : TEXCOORD2;

(these lines come from Sean James's 3D Graphics with XNA example code, chapter 3)

This nice article written by Jeremiah van Oosten on his blog contains a lot of useful information about HLSL shaders for DirectX, especially types, padding and the proper way of mixing types with operators (padding is quite a lost piece of knowledge from the old days of C/C++ programming: misaligned structures can lead to unnecessary memory consumption, because what looks nice may not be efficient!)

Reading some HLSL code offers a simple fix for those warnings (changes in bold):

output.Normal = mul(input.Normal, (float3x3)World);

float3 lightDir = normalize(float4(LightPosition, 0) - input.WorldPosition).xyz;

And here is a flawed fix (the warning is no longer reported by the compiler) which creates an issue instead of fixing one:

output.Normal = mul(float4(input.Normal, 1.0), World).xyz;

In this case, the model appears duller; the specular light from the previous example looks missing or very low. (With w = 1.0 the normal is transformed as a position, so the world translation gets added to it; a direction should use w = 0.0.)

The moral of the story is that some people are less skilled than you think they are, and that tutorials have been copied from one source to another over the years, including tiny mistakes which remain unexplained until they are forgotten as "acceptable".

A good source of well written shaders is also dhpoware.

Trust no one ! NEVER !

Bonus: MonoGame uses a tight and strict shader compiler, so what was working before, such as:

//in HLSL code
float somevalue;
//in MonoGame code
int avalue;
Effect.Parameters["somevalue"].SetValue(avalue);

will raise an exception when running your game/app, because the parameter is not of the expected type.

A simple way to "fix" such an issue is to use matching types, i.e. an int in the shader as in the C# code. But as I am neither a C#, a MonoGame nor an HLSL expert, I can't explain it all thoroughly; I'll let you get the right answers from people who really know about this.
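A minimal sketch of the two matching options (the parameter name somevalue comes from the example above; everything else is illustrative):

```csharp
// Option 1: keep "float somevalue;" in the HLSL code and cast on the C# side:
int avalue = 3;
effect.Parameters["somevalue"].SetValue((float)avalue); // types now match

// Option 2: declare "int somevalue;" in the HLSL code and keep passing the int:
// effect.Parameters["somevalue"].SetValue(avalue);
```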

Look at the MonoGame.Framework/Graphics/Effect/EffectParameter.cs source for more details
(the GetValueInt32 method, for the example above).

Note: after working on some shaders designed to be used with XNA, the use of POSITION0/SV_POSITION with MonoGame doesn't look so straightforward; the SV_* semantics are quite specific (System-Value; see especially the note about migration from DX9 to DX10).

Contrary to what is written here: "On DirectX platforms use the SV_Position semantic instead of POSITION in vertex shader inputs.", the SV_Position semantic should not always be used blindly, as Jeremiah van Oosten explains here: "When used in a pixel shader, the value of the parameter bound to the SV_Position semantic will be the screen space position of the current pixel being rendered."

AFAIK this means your vertex shader will be all messed up, or the author may have meant pixel shader instead, which makes a little more sense.

Posted in 3D, MonoGame, tips | Tagged , , , , | 4 Comments

Adding transparency to a SharpNoise map

Using the alpha parameter of gradients is not enough to make some values transparent in a texture generated from a noise map with SharpNoise (and probably libnoise too), so doing:

renderer.AddGradientPoint(-1.0, new Color(255, 0, 0, 0));

won't work (i.e. not as expected): the red colour for the -1.0 height value will be fully opaque.

=> Do not take what I write for granted, try it yourself !

Why doesn't this work? Because of this (ImageRenderer.cs, lines 573-577):

Color newColor = new Color(
                (byte)((int)(red * 255) & 0xff),
                (byte)((int)(green * 255) & 0xff),
                (byte)((int)(blue * 255) & 0xff),
                Math.Max(sourceColor.Alpha, backgroundColor.Alpha));

Whatever you have set as your gradient colour, backgroundColor.Alpha is ALWAYS equal to 255 by default!

Why (again)?

backgroundColor is the second parameter of the CalcDestinationColor method, which is set like this (ImageRenderer.cs, lines 515-518):

if (BackgroundImage != null)
    backgroundColor = BackgroundImage[x, y];
else
    backgroundColor = new Color(255, 255, 255, 255);

If you don't provide a background image (of the exact same size as the source noise map), BackgroundImage is null and an alpha of 255 is used instead, so the Max call above is always a fight against 255, which will always win (0xFF is a real badass!).

So the way to do it is to create a dummy background picture like this:

private Image createAlphaLayer(Byte Red = 255, Byte Green = 255, Byte Blue = 255, Byte Alpha = 0)
{
    Image dummyBackground = new Image(width, height);

    for (int i = 0; i < width; i++)
    {
       for (int j = 0; j < height; j++)
       {
          dummyBackground.Data[i + j * width] = new SharpNoise.Utilities.Imaging.Color(Red, Green, Blue, Alpha);
       }
    }

    return dummyBackground;
}

The above method is meant to be called only once (in case you call Render multiple times), as it may take a little time, even more if you use a noise map generated from one or more modules as a mask. But you may need to call it several times if you have many maps of various sizes.

Also, by default, the alpha is set to zero, so only the alpha values defined in the gradient (interpolated if they are not equal all along the gradient's steps) will matter.
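Putting it all together, the usage looks like this (BackgroundImage, AddGradientPoint and Render are the SharpNoise ImageRenderer members seen above; the rest is an illustrative sketch):

```csharp
// Give the renderer a fully transparent background once, before rendering:
renderer.BackgroundImage = createAlphaLayer(); // Alpha defaults to 0

// The gradient alpha values are now honoured:
renderer.AddGradientPoint(-1.0, new Color(255, 0, 0, 0));       // fully transparent red
renderer.AddGradientPoint(1.0, new Color(255, 255, 255, 255));  // fully opaque white
renderer.Render();
```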

One little piece of advice to conclude: don't forget to use a picture format that supports an alpha channel when saving your maps (avoid JPEG, for example).

I will eventually add a sample code later, stay tuned.

Posted in SharpNoise, tips | Tagged , , | Leave a comment

2D art: add a super easy 3D effect to your sci-fi floor tiles

In this brief tutorial, we'll learn a simple technique in Gimp to achieve a nice and easy "3D effect" for cracks or other details on a floor tile, a hull or anything else, like this one:


a SCI-FI style (or whatever) floor tile with cracks

You may need to know that I am not, by far, a 2D artist, not even a hobbyist one; I just sometimes like to do things like this, mostly because I don't know anyone who could make textures for me.

We start with a plain colour layer (colour used: #989872):


The base picture used in this short tutorial

IMPORTANT: always take care to create a new transparent layer; NEVER paint on an already coloured/painted layer! (Otherwise what you have added would be impossible to "extract" from the background.)

Select the paintbrush tool with the Acrylic04 brush, set the size to 15 and the dynamics to Dynamic Random, and choose a colour darker than the floor, #2F2E10 in our example. Now paint something that may look like a crack:


step 1: “cracks” added

It doesn't look so good for now; don't worry, it will become amazing soon.

Now use the smudge tool, set the size to 25 (in any case bigger than the previous size), and smudge the previous piece of art:


step2: smudge tool does its magic

Tip: make small circular moves with your mouse (if you use one); sometimes also make straight moves from "inside" to "outside" to "stretch" the colour away, so it forms quite a gradient from many to very few coloured pixels.

Tip 2: use different zoom levels to see how it looks; it often looks good at close range but awkward when zooming out, usually because the gradient is not smooth enough.

Then copy the layer and move it aside a bit (right/down, for example); the direction in which we move the layer will create the light effect in the direction we want.


The copy of the layer has been moved a little bit down to the right

Finally, change the layer mode from Normal to Grain Extract, adjust the opacity (~40%), move it below the "source" layer (the one you made the copy from), adjust the position to get the desired effect, and voilà:


layer copy mode changed to grain extract and opacity reduced to 40%

Below is the final result (here the differences are hard to notice; check the download package below. The lighter part is slightly offset, and it may look uglier or not depending on the result. I prefer the first one: the "adjusted position" version looks fuzzier and less nice):


Here you'll find the Gimp file, with all the successive steps, as well as the actual project picture created.

==> download <==

In order to get a result close to the picture at the top, you just need to add some plasma noise layers with various mode and opacity settings (I don't think there is any kind of absolute recipe here; just play around and keep the good results you get).

Happy “cracking” ! 😉

Another floor tile created using the same technique, with some variations:


An old and rusty/dirty floor tile (for sewer, sci-fi, etc)

The 3D effect of the holes was created using the same technique as above for the cracks. The rusty effect is a bit different: the layer is moved just 1 pixel away in both directions (any more and it looks fuzzy, like "paper 3D" seen through red/blue glasses), the layer is desaturated (luminosity only), then the bucket fill tool is used to colour it back to green/brown tones (as Overlay mode is used on a Grain Extract mode layer, the colour used is a light blue/violet: #d09ae1).

The full Gimp file will be available in my 2D tiles library on this blog (coming soon).

Posted in art | Tagged , , , , , | Leave a comment

Farseer 3.5: creating a body from a texture

Can you believe there is actually no documentation at all about creating a physics object from a sprite with Farseer 3.5 (the latest release, which has very unfortunately NOT been documented)?

The nice samples do the opposite: they use part of a texture to dress up the bodies built in the demos. This kind of trick is only useful for scenery and a few dummy objects, not for main objects and characters. This makes the samples not very useful after all, a mere show-off of what the engine can do. Even the game sample is "faked", as Farseer's bodies are created the right way to perfectly fit the textures, but not FROM the textures.

There are also some questions we can find on gamedev.stackexchange, for example, but always about complex, concave shapes.

So the one-million-dollars/euros/credits/whatever-your-currency-is question is: HOW CAN I SIMPLY CREATE A BODY FROM A SIMPLE (square) TEXTURE?

Variants:

Do we really have to use a very complex decomposition algorithm for a damned mere box/circle?

And one very important point: as Farseer is a C# port of the Box2D engine by Erin Catto, all the documentation and samples from Box2D can also still be applied to it. But the bad news is that Box2D is stalled (not many forum posts for years now, the last release, 2.3.1, dates from April 5th, 2014, and the news sub-forum has not even been updated to mention it).

So below you'll find a simple example using a square sprite and a circle sprite to create bodies and see them in action. The ball body uses the simple fact that the texture width/height always match the diameter of the shape (don't forget Farseer uses a radius to define a circle, so the diameter needs to be halved). Also, you may have noticed the crate was initially made to be used with another engine ;).
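A minimal sketch of the idea, assuming the usual Farseer 3.5 factory methods and the ConvertUnits helper from the official samples (the world, texture and density values here are illustrative, not the exact code from the download):

```csharp
// Crate: a rectangle body sized from the sprite itself.
Body crateBody = BodyFactory.CreateRectangle(
    world,
    ConvertUnits.ToSimUnits(crateTexture.Width),  // pixels -> simulation units
    ConvertUnits.ToSimUnits(crateTexture.Height),
    1f);                                          // density
crateBody.BodyType = BodyType.Dynamic;

// Ball: the sprite width is the diameter, and Farseer wants a radius,
// so the diameter needs to be halved.
Body ballBody = BodyFactory.CreateCircle(
    world,
    ConvertUnits.ToSimUnits(ballTexture.Width / 2f),
    1f);
ballBody.BodyType = BodyType.Dynamic;
```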

==> download <==

In the example, the F1 key can be used to toggle the Farseer debug drawer and F2 to toggle sprite drawing (if both are disabled, nothing is drawn, obviously).

 

Final note: I have to say the things I learnt with BEPU helped me a lot in understanding the basics of physics engines: how they work, how to create "things", and so on. If you also work, or want to work, with another physics engine, read its docs and try the samples/demos; it may be better than Farseer/Box2D and will help you anyway.


Posted in Farseer, MonoGame, physic, sample | Tagged , , , | Leave a comment

New design !

Hi readers,

I finally got rid of the ugly Intergalactic theme and switched to a better, more intuitive theme ("Twenty Ten"), with menus for easier browsing.

Hope you’ll like it :).


 

Another, completely unrelated, piece of news (not so new, I guess) is Shoecake Games releasing its old games for free, giving us the opportunity to discover some (as yet) uncloned game concepts like BombDunk, a nice mix of Minesweeper and Sudoku. And don't miss Peg It! If you like solitaire games, this one will drive you crazy ;).

Posted in Uncategorized | Leave a comment