GPU Shader

Friday I was composing some object shaders and integrating them into the project. At some point we realized that under-water objects were reflected as if they were above the water, although I was using clip planes. I thought I had screwed something up, but an idea made me check everything without the shaders, and I found the troublemaker: the default shader of Virtools sets ClipPlaneEnable to zero!

:-(

After the initial shock, I thought: interesting, can it be set up and used entirely inside the HLSL technique definitions? Soon I dropped that idea, because I would have to do technique switching, and that requires manual bookkeeping as Virtools offers no help with it. So how could I get my correct reflections back?
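For reference, render states like that can be set directly inside an FX pass. A minimal sketch of what re-enabling the first user clip plane in a technique could look like (the technique, pass and entry-point names are mine, and the VS/PS functions are assumed to exist elsewhere in the file):

```hlsl
technique ReflectionWithClipPlane
{
    pass P0
    {
        // Bitmask: bit 0 enables user clip plane 0, so under-water
        // geometry is clipped before reaching the reflection target.
        ClipPlaneEnable = 0x1;

        VertexShader = compile vs_1_1 ReflectionVS();
        PixelShader  = compile ps_2_0 ReflectionPS();
    }
}
```

The catch, as described above, is that this only helps when the technique actually gets applied, which is exactly where the manual bookkeeping would come in.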

I thought about an old workaround I did sometime in late 2006 or early 2007, but for this scenario it wouldn't work. This time I would really need some kind of "texkill" instruction, which I did not find back then. And you know what? The HLSL counterpart is

clip(x)

Probably you all already know about it, but for me it's my new best "HLSL friend"! :)
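In case it helps anyone else: the intrinsic discards the current pixel whenever its argument is negative, so it can emulate a clip plane right in the pixel shader. A minimal sketch, assuming the water plane is at a fixed world-space height and the vertex shader passes the world-space position through a TEXCOORD interpolator (all names here are mine):

```hlsl
// Hypothetical uniform: world-space height of the water surface.
float WaterHeight = 0.0f;

float4 ReflectionPS(float4 worldPos : TEXCOORD1) : COLOR
{
    // Discard every fragment below the water plane, so the
    // reflected image doesn't include under-water geometry.
    clip(worldPos.y - WaterHeight);

    return float4(0.2f, 0.4f, 0.8f, 1.0f); // placeholder reflection color
}
```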

All fine now 8)

(Btw. regarding the default shader, it might be a better idea to omit that state overwrite. I don't see how it is helpful the way it is right now.)

Sometimes I get shader compiler errors that don't really help me spot the problem quickly. Sometimes I don't write shaders for months, so it always takes me some time to get back into the groove. Moreover, I "ain't no" shader guru, so shader issues are not always obvious to me.

So today it was time to rig up a new one, and I started with a basic structure. I have a couple of shaders from past projects that I use as reference, but they don't share a common terminology or structure; something I'd like to change. Therefore I set up two structs for the VS/PS input streams and, as I had no real pixel shader code ready, I thought I would simply output the vertex position as dummy data until things got more detailed.

Bad decision. The compiler error says "invalid input semantic 'POSITION'". I was like "what the …". I mean, if you don't have it as part of the struct/stream, the compiler complains that POSITION needs to be filled. Moreover, I have plenty of shaders where POSITION is part of the struct and therefore part of the input … well, the last part of that sentence is a bit wrong.

You can't access POSITION from within the pixel shader, and that's what generates this compiler error. It's OK to have it in the struct so it can be filled by the vertex shader, but don't read that field in the pixel shader. If you have to, write the value into another register inside the vertex shader, one that the pixel shader is allowed to read.
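The pattern above can be sketched like this; a minimal example where the transform matrix and function names are mine, with the position duplicated into a TEXCOORD so the pixel shader can legally read it:

```hlsl
float4x4 WorldViewProj; // assumed to be set by the host application

struct VS_OUTPUT
{
    float4 Pos     : POSITION;  // consumed by the rasterizer only
    float4 PosCopy : TEXCOORD0; // same value, readable in the PS
};

VS_OUTPUT MyVS(float4 inPos : POSITION)
{
    VS_OUTPUT o;
    o.Pos     = mul(inPos, WorldViewProj);
    o.PosCopy = o.Pos; // duplicate into a register the PS may read
    return o;
}

float4 MyPS(VS_OUTPUT i) : COLOR
{
    // Reading i.Pos here would trigger "invalid input semantic 'POSITION'";
    // i.PosCopy carries the same data legally.
    return i.PosCopy;
}
```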

Pretty obvious now … still, the compiler error message could be clearer … 😉

If you read Beyond3D or Tomf's blog, then you have probably already heard of Intel's project 'Larrabee'. If not: the basic idea is to mix CPU and GPU functionality into one 'multi mini-core' system. One could call it a "CGPU". The target for 2009 is up to 16 cores (maybe fewer), each capable of 4 hardware threads.

The source article ends with: 

Now do you understand why AMD had to buy ATI to survive? 

In the context of this mixed architecture, which might be more flexible to program, raytracing (as an alternative to rasterization) for real-time rendering is being discussed. Another quote from the above article is

Now do you see why Nvidia is dead?

Maybe you know Mental Images, the creator of the famous MentalRay offline renderer that ships with 3dsMax, Maya and XSI … they have been bought by NVIDIA! With Gelato, Nvidia already provides another rendering product for the DCC market, but maybe the acquisition was also interesting for them with regard to Intel's Larrabee project, where real-time raytracing (RTRT) is being researched.

So what will the future bring us? Maybe instead of two pairs of separate competitors, 'AMD<->INTEL' and 'NVIDIA<->ATI', we will have a triangle 'INTEL<->NVIDIA<->AMD/ATI'?

(P.s. Actually, as I am just now discovering, Larrabee details have already been public since April 2006.)

Why is Cg 2.0 of special interest?

Let's have a look at this Nvidia PDF from GDC 2007 that talks about GeForce 8 OpenGL extensions. You will see it's Shader 4.0 related … and Shader 4.0 is DirectX 10 stuff! Now, this PDF also says the following:

Cg 2.0 -> Not a GL extension -> Will support the capabilities 

So when will Cg 2.0 be available? Well … it's already released! The Cg 2.0 Toolkit is part of the Nvidia OpenGL SDK:

Please obtain the Cg 2.0 Toolkit from the full OpenGL SDK 10 installer.

I think this means DX10 features are already available on a GeForce 8 under Windows XP! Now … what if Virtools 4.x were to come with a Cg 2.0 rasterizer?

As Patrick said:

cool shit

:D

Do you know "Texture Shaders"? If not, don't worry, as probably very few Virtools users know about them (if any at all). Though they have been a Virtools "feature" since 3.x, I only discovered them recently, too. So, what are they? The docs say:

A texture shader uses a compiled high-level shader language (HLSL) function to fill each texel of each of its mipmap level.  It is commonly used to generate procedural 2D or volumetric texture such as wood, noise, gradients or precomputed lookup tables. Annotations on the texture parameter declaration allow you to specify how the texture must be created. It can be used to load a new texture from a resource, create a new texture that will be used locally by the shader, select an existing texture or fill an existing texture.

GPU shader code to create texture content … does it ring a bell? Create procedural texture content faster thanks to GPU power! Maybe you know "farbrausch" and their products for procedural content, like werkkzeugTE for texture generation. So this could help in getting smaller footprints for webplayer content, faster procedural level generation (i.e. height maps), or creating floating-point texture content at run-time (currently my intention).
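To make the docs quote above concrete, here is a sketch of what such a declaration could look like, following the D3DX texture-shader annotation convention; the texture name, fill function, and sizes are all mine:

```hlsl
// The annotations tell the runtime to fill this texture by running
// the named function once per texel (compiled as a texture shader).
texture GradientTex
<
    string function = "FillGradient";
    int width  = 256;
    int height = 256;
>;

// 'uv' is the texel's normalized position, 'psize' the texel size.
float4 FillGradient(float2 uv : POSITION, float2 psize : PSIZE) : COLOR
{
    return float4(uv.x, uv.y, 0.0f, 1.0f); // simple 2D gradient
}
```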

But the title of this article doesn't sound positive, so where is the problem? Well, it's an "at-compilation-time" only feature. Currently there's no way to (re-)trigger it at runtime. In addition, it does not consider manual parameters, so parameters have to be hard-coded, and after each change you need to recompile the shader.

:(

At least that's how it seems so far – I might be overlooking something.
Imagine Virtools had created a GUI for building shader code like ShaderFX … do you see what kind of ideas I got when I discovered this in the docs?

It's there … for everybody now! 8)

Grab your FX Composer 2 beta copy!

NVidia FXComposer 2 - beta 3

I already played a bit with Mental Mill some time ago … but only very briefly. The GUI seems a bit unusual (and a bit slow), but of course it's the first iteration, and of course it will take time to learn how to use this tool. It can export to HLSL and CgFX. FX Composer 2 also supports COLLADA FX (model data and shader data) … everything very interesting.

I am curious, let me know what you think about Mental Mill and FX Composer 2!