
Last week I dug into the topic of exporting a lightmapped, vertex-colored scene from 3ds max to Unity. Unity3D does not come with dedicated exporters but imports standard formats like FBX and COLLADA. When exporting to FBX, 3ds max Shell Materials do not get converted automatically to a lightmapped shader upon import. No wonder, as FBX (= Filmbox, which later became MotionBuilder) is not specifically targeted at game content. Therefore you have to assign the shader and the second texture manually. Of course it would be better if this could be automated!

Here are two pictures from a test scene with vertex-colored houses we once created for Virtools-based projects. The houses currently use only dummy lightmaps, but this is the result without any manual shader or texture assignments.

Raw 3ds max scene (left) and the result in Unity 3D (right)

So, here is how I do it.

Preparations inside 3ds max with Maxscripting (MXS)

Unity allows you to hook into the asset import pipeline via custom scripts. This is a very cool concept and similar to something we did for our Virtools assets & build pipeline (which we called BogBuilder, btw). The key concept is therefore to *somehow* pass hints to an asset postprocessor script. What I do is tag material names inside 3ds max, for example using __lm__ to indicate that the material is a lightmapped one. I use two underscores on each side because it reduces the probability that the original name accidentally contains such a sequence of letters.

I did not find a way to extract the name of the lightmap texture from FBX files inside a Unity postprocessor script. So I actually add the texture name to the material name itself too! Here is an example of how a material inside 3ds max can look after preprocessing:


Pretty long, hehe. But it helps!

The custom maxscript thus does the following for every Shell material:

  • take the texture from the baked material and put it into the original material's self-illumination slot
  • add the lightmap tag to the original material name (if it's not already there)
  • add the lightmap texture filename (including extension) to the material name
  • assign the original material back onto the geometry
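
Condensed, the naming convention from these steps is just string concatenation. Here is a minimal sketch of it in JavaScript (the exact tag placement and the check for an existing tag are my assumptions – the scheme only has to match whatever your postprocessor parses later):

```javascript
// Build the tagged material name that the Unity postprocessor will parse later.
// The "__lm__"/"__lmc__" tags and the appended texture filename are conventions
// that just need to match on both sides of the pipeline.
function tagMaterialName(originalName, lightmapFile, hasVertexColors) {
    var tag = hasVertexColors ? "__lmc__" : "__lm__";
    // don't tag twice if the script runs repeatedly over the same scene
    if (originalName.indexOf(tag) !== -1) return originalName;
    return originalName + tag + lightmapFile;
}

console.log(tagMaterialName("House01", "house01_lm.png", false));
// -> "House01__lm__house01_lm.png"
```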

Don't forget to check whether the original or baked material is of type multi-material and handle it accordingly. Another issue I *sometimes* have is with special German characters like öäü. Unity sometimes replaces those upon import with other symbols and may therefore break your postprocessor scripts when they look for the lightmap textures. I created two more custom maxscripts that check and replace those characters in material and texture names. (Doing it for object names would be good too, I guess.) As a little hint, to easily access all bitmap textures inside 3ds max you can use the following maxscript snippet:

local maps = getClassInstances bitmaptexture

Using enumerateFiles or usedMaps() only gives you strings and can make things more complicated. As some of our meshes use vertex colors, I check that too and then tag the material with __lmc__ instead of __lm__. To detect the use of vertex colors you can do the following:

local tmesh = snapshotAsMesh myObjectNode
if ( getNumCPVVerts tmesh > 0 ) then
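
The umlaut cleanup mentioned above is essentially a character-by-character substitution. A sketch of the logic (shown in JavaScript for brevity; the ae/oe/ue substitutions are the usual German transliterations and my choice, not something Unity mandates):

```javascript
// Replace German special characters in material/texture names before export,
// so that Unity's import can't mangle them into other symbols.
function sanitizeName(name) {
    var map = { "ä": "ae", "ö": "oe", "ü": "ue",
                "Ä": "Ae", "Ö": "Oe", "Ü": "Ue", "ß": "ss" };
    var result = "";
    for (var i = 0; i < name.length; i++) {
        var c = name.charAt(i);
        result += (map[c] !== undefined) ? map[c] : c;
    }
    return result;
}

console.log(sanitizeName("Dachziegel_grün"));  // -> "Dachziegel_gruen"
```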

Using AssetPostprocessor

There are several types of asset postprocessors. To create one, you have to place your script inside a project folder called "Editor". It's not created by default, so create it if you can't find it. Using Javascript you usually start like this:

class AssetPost_ShadersByPostfix extends AssetPostprocessor
{    …

and then you implement a static function depending on which kind of event you want to hook into.

OnAssignMaterialModel gets triggered each time a material has to be imported. In this callback you control if, where and how the new material is created. If you organize your project in a way that clusters all materials in specific directories, rather than keeping them close to the geometry assets, then this works fine. Otherwise this isn't the best callback to use, as you don't get any hint about where in the project directory hierarchy the imported FBX is located. Usually on FBX import a "Materials" folder is created on the same level, something you can't easily replicate with OnAssignMaterialModel. Alternatively you can use

OnPostprocessAllAssets: The benefit of this callback hook is that the asset creation is done automatically for you and you get the target directory paths as an array. To detect materials you can simply do something like this:

        for (var aFile: String in importedAssets)
            // identify materials by .mat extension
            if( aFile.ToLower().EndsWith(".mat") )

This works pretty well. But there is a scenario where this isn't the best fit either. If you use the FBX exporter option "embed Media", which includes all textures inside the FBX file, then Unity does not import the lightmap textures during the first import/refresh. They get imported when you do a refresh or switch to another app and back. As a result, your OnPostprocessAllAssets may not find the lightmap textures, because it's called during the first run, when the materials are created (and only diffuse textures get imported), while the lightmaps are only added to the project in the second run.

So what I do is manually call a custom ScriptableWizard inside Unity after import. It's therefore not fully automatic, but it's quite robust and only something like 3 clicks.

Somehow I miss built-in functionality for dealing with project files inside Unity, but you can iterate over all material assets in your project using standard .NET, like this:

import System;
import System.IO;

var matFiles : String[] = Directory.GetFiles(Application.dataPath, "*.mat", SearchOption.AllDirectories);

for(var aMatFile : String in matFiles)
{     …

The rest is quite straightforward: the wizard iterates through all project materials, checks if they contain any shader tags in their names, assigns the corresponding shader, extracts the lightmap texture name, finds the texture and assigns it as second texture to the shader.
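
The name-parsing part of such a wizard boils down to plain string handling. A sketch (the __lm__/__lmc__ tags mirror the maxscript convention described above; the exact split logic is my assumption):

```javascript
// Split a tagged material name like "House01__lm__house01_lm.png" back into
// its parts: base name, shader tag and lightmap texture filename.
function parseTaggedName(matName) {
    var tags = ["__lmc__", "__lm__"]; // check the more specific tag first
    for (var i = 0; i < tags.length; i++) {
        var pos = matName.indexOf(tags[i]);
        if (pos !== -1) {
            return {
                baseName: matName.substring(0, pos),
                tag: tags[i],
                lightmapFile: matName.substring(pos + tags[i].length)
            };
        }
    }
    return null; // no tag -> not a lightmapped material
}

console.log(parseTaggedName("House01__lm__house01_lm.png").lightmapFile);
// -> "house01_lm.png"
```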

Well, that's it. I hope this helps you to set up a better pipeline for importing assets with lightmaps from 3ds max. Of course the key concept can be used for other things too!

Last week I posted part 1, where I talked about shaders, Lua and blend shapes. I'd like to add to the Lua subject, as it seems to be a commonly asked question, that there are no additional tools in the SDK to deal with custom Lua bindings. The docs suggest using available solutions.

3D Compass

The 3D Compass is a translation handle/gizmo as many of us know it from DCC (Digital Content Creation) tools like 3ds max or Maya. The good thing is that it also includes handles for translation on planes (unlike Maya or Unity3D). I really miss those in tools that only provide translation along a single axis.


If you look at the edges of the plane-translation handles, you will notice an additional arc. These arcs can be used for rotations. I like this solution, a good enhancement! The yellow pivot does not do a 3-axis translation (I never liked those!) but uniform scaling. This is also a good approach. So overall it's a bit like a universal transformation handle. Consequently there is no variation for rotation-only or scale-only modes. Which means – unless I overlooked it – there is no way to use the 3D Compass to do non-uniform scaling.

Another good addition is the ability to easily clone the selection by holding SHIFT while translating. It does a duplication with few dependencies – the 3D entities get cloned with attributes, but not the mesh etc. Good for building content/levels.

The 3D Compass works with the available coordinate systems: local, global, view, parent. (About the ref. guide I am not sure.) Something that didn't work yet was the angle snapping when using the rotation handles, but maybe that was fixed for the final release. If not, use the old method. Also not yet possible is showing/hiding the 3D Compass, which would be useful as it can get in the way in some situations.

Enhanced Content Protection

VSL code and shader code can now be encrypted and therefore shared without knowledge transfer. This creates a more solid commercial environment for 3rd-party component developers and freelancers/consultants.

Protecting scripts

The password can be specified in the variable manager. Having the correct password allows you to decrypt (unprotect) the content. I don't know how solid this is, but at least it's one more level of obfuscation and should be good enough for most cases.

For some unknown reason, Lua scripts are excluded from this protection scheme. Another problem is that you can't select multiple items in the editors and un-/protect them all in one go. While the Shader Editor at least handles the first selected item, the VSL Editor just doesn't react. So going one by one is not very efficient when dealing with more complex content.

New Building Blocks

XML: When Dassault Systemes added XML BuildingBlocks to Virtools 4.0 but restricted them to deployments for VR/XE/Office players, I thought "Oh my!". I mean, if you want to create some interesting online content, XML as a document-format standard is just omnipresent – web services, for example. Looks like Dassault Systemes finally became aware of this, and now the XML BBs can be used for webplayer projects too. Personally, I've only tried to use them once under 4.0. They didn't seem to be very stable (which might have changed in the meantime) and I ended up dropping them in favor of a custom VSL solution. Anyway, it's still a good addition that should have come much earlier.

From now on they can be found under Narratives/XML Parser. The good thing is that they can be used via VSL too. Moreover there is a visual XML Debugger window. The documentation could be better, I think, especially as the usage is not always totally clear – for example "XML Load Document" may fail without any hints why.

XML building blocks

Content Processing: a couple of new, useful BBs for examining and processing content: Merge Materials and Merge Textures (to find duplicates/identicals inside a group), Hierarchy Parser Upwards, Dependency Parser, Is Child Of 2D/3D (checks the entire hierarchy upwards)

Camera Movement: Pick And Pan BB, Pick And Rotate, Camera Zoom Extend

The basic idea of BBs for common operations, especially for configurators and the like, is nice. The execution of that idea is "suboptimal". First of all, in the beta version the Pick BBs ignore a 2D hit. This means that while a user interacts with the GUI, he might unintentionally modify 3D content hidden behind the GUI. A developer would thus have to manually check for a 2D hit and deactivate the behaviour, making the "out-of-the-box" idea less effective (less fast 😉 ). The Pick And Rotate BB rotates a picked object around all 3 axes. No idea where this is useful, because reorienting an object towards a desired orientation is very hard to achieve this way, I think. It would be more universal (= fit more use cases) if it had options to constrain the rotation to one axis.

Is Key Down BB: LOL, finally! No more to say about that 😀 I guess everybody had their own custom solution for that (i.e. via VSL).

Set World Size BB: resizes a 3D entity by giving the desired world size.

Spherical/Cartesian coordinate converters: the spherical coordinate consists of a vertical angle, a horizontal angle and a distance value. It sounds interesting and I wonder for what user scenario these have been designed … geo-information systems? It also sounds like we could simulate the content in 2D and display it on a spherical surface, i.e. pathfinding and move-to using the built-in solutions, which might be difficult otherwise. Here are two images from the Virtools documentation showing the coordinate systems:
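
For the record, the conversion itself is simple trigonometry. A sketch in JavaScript, assuming a Y-up system where the horizontal angle rotates around the up axis and the vertical angle measures elevation from the horizon (the actual axis conventions used by the Virtools BBs may differ):

```javascript
// Spherical (horizontal angle, vertical angle, distance) -> Cartesian, Y-up.
// Angles in radians; (0, 0, d) maps to a point d units along the +Z axis.
function sphericalToCartesian(horizontal, vertical, distance) {
    return {
        x: distance * Math.cos(vertical) * Math.sin(horizontal),
        y: distance * Math.sin(vertical),
        z: distance * Math.cos(vertical) * Math.cos(horizontal)
    };
}

var p = sphericalToCartesian(0, 0, 5);
console.log(p.x, p.y, p.z);  // -> 0 0 5
```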

2D Texel = Screen Pixel BB: this BB adjusts the UV coords of 2D entities to match the texel/screen pixel ratio, a bit like screen mapping. The docs say: "This is particularly useful to ensure sharp 2D GUI display by avoiding resizing artefacts".

To be continued …

Ok, I think we covered the major new features so far. In the 3rd part there will be some more, plus a final conclusion.


If you have been writing PostFX shaders in Virtools, you may have waited a long time for this, but here it is: overriding techniques for rendering into RenderTargets!!! You can has Tron, yay! Here's a simple example of a masked glow PostFX:

Masked Glow

Overriding techniques allow you to specify a technique name that shall be used when rendering the scene via the Render Scene in RT View BuildingBlock. Previously one was only able to use a single shader/material for all objects. That's ok for some tasks, but if you need per-material masking, there was no easy way of doing so. I first expressed the need for a better way in 2005 – that's 4 years ago! In 2006 I tried manual technique switching and hit another barrier. Now, finally, it's a smooth process. Better late than never!

Some help with hiding or unhiding elements of the scene (i.e. something like render layers) has unfortunately not been added yet. Some other aspects, like include management, also still need improvement.

There are a few new shader semantics available:

AlphaTestEnable, AlphaBlendEnable, AlphaRef and for OpenGL only: SingleSided, DoubleSided

If you check the SDK, you will find traces of WIP (work in progress) for shader-based shadow maps: a new shader semantic, a new BB, a new render state and maybe a future built-in shadow shader. You will also notice that support for hardware shadow texture formats has been added.


Lua has been added as a scripting language. Lua is widely used in the games industry. In contrast to VSL (Virtools Scripting Language), Lua is not strongly typed and not JITed (just-in-time compiled). It should therefore be considered slower than VSL. So why add it? VSL is very focused on implementing new BuildingBlocks via scripting. It's not very strong with custom data types, and working in a global scope is very limited.

Moreover, not everybody likes the concept of schematic programming. By using the SDK it's possible to bypass it, but of course that makes development slower again. Using Lua one is now able to script a game without using the schematic a lot. This is possible because all Lua scripts share the same context. (A bit of schematic is still required, though.)

As Lua is known by a wider audience, new (script) developers can pick up 3DVIA Virtools much faster without worrying a lot about the schematic. A good example of this are the last two adventure games by City Interactive. Moreover, there is a new example game (a Boulder Dash clone) written entirely in Lua. I think it's also a good starting point for Virtools users who still need to learn Lua!

Lua in Virtools

As Lua is a dynamic language there is no help from a compiler, but a button for checking the syntax is available. Setting breakpoints and stepping through the code is possible! There is a small input field on the right side of the toolbar for calling Lua functions directly, but I think it's a bit small. From my maxscripting experience, a big input console that allows interactive prototyping is a big plus, and hopefully it will come in a later release.

You can't use Lua for action scripts yet, and therefore you don't have direct access to the selection. But using the run button, Lua scripts can be executed at any time in authoring mode too.

Personally I am not much attracted by the Lua syntax. Furthermore, I saw some articles on how to implement OO in Lua and they scare me off 😉 Currently I think using C# as an embedded scripting language is my favorite! AngelScript recently got simple inheritance and interfaces – as a strongly typed language with a JIT compiler, it might be an interesting alternative to Lua and co.

The Lua editor control is the same as in the shader editor and VSL editor and therefore suffers from the same problems, i.e. long scripts slow the entire editor down (due to its slow syntax highlighting, I guess).

Blend Shape Support

Blend shape support basically means morph targets mixable with skinning (bone-based deformation). So you can have facial animation or body deformations/customizations using morph targets (blend shapes in Maya) at the same time as your standard bone-based character animations.

Probably most people will think "finally facial animations!" … yeah, cool … but I think you can do much more with it…


What about muscle deformation? Here is a simple example of what I mean: an arm deformed by bones can be morphed simultaneously to bulge the muscles …

Above you see two arm entities, both referring to the same mesh, but each using the morph weights differently. The great thing is: it's still only ONE mesh. You don't need to copy the mesh for each entity! It even works independently for GPU-skinned characters – only one mesh and full individual weight control. I tested it and it seems to work!

Currently, when exporting from 3ds max, you need to do an "export selected" that excludes the morph targets, otherwise they will be added as additional body parts. Hopefully in the future those will be detected and skipped automatically.

I think this is probably one of the new features that really gives a solid advantage over other 3D authoring solutions! Flexible and simple workflow, nice!

To be continued …

There are more new features, but it's getting late (bedtime!). Please note that this is beta experience and some things might be different in the final release. I hope it gives some more insight into 3DVIA Virtools 5's new features, as the benefits are not always clear from a plain listing of "what's new?" items. Feel free to add information or ask questions. Till the next part, cheers!

Just a little hint for new Unity users on the PC platform. The SciTE/Scintilla editor that comes with the PC version is probably a lot better than the Smultron editor that ships with the Mac version. Still, it's a completely different level if you use Visual Studio.

Visual C# Express 2008 is a free development environment and it's very good. Besides standards like syntax highlighting, it has good refactoring tools and pretty good IntelliSense (intelligent auto-completion).

Visual C Sharp for Unity

In order to use Visual C# Express, you need to add references to the Unity DLLs inside a new project. On the PC you can grab them from the installation folder


The DLLs are


If you also add all the other DLLs your code depends on, you can even compile your code for verification. If you don't copy your .cs file but use the very file that is referenced by the Unity project, then each time you save it in Visual C# Express, Unity will notice the change and reload and compile it using Mono.

Visual Web Developer 2008 Express says that it has "JavaScript IntelliSense" – I haven't tried it, but if you prefer JS you should give it a try!

Recently I was asked how to use global variables inside global functions using VSL, one of 3DVIA Virtools' built-in scripting languages.

In VSL you can share a variable across different "Run VSL" BuildingBlocks by using the keyword "shared". For example:

Besides using VSL in BBs or action scripts, you can also create "global" VSL scripts. These are automatically included. The thing is that you cannot use the shared keyword inside global scripts. So if you want to modify shared variables inside global functions, one workaround is to use parameter objects.

Basically it's a struct containing the variables you'd like to share. Either one struct for all, or split your variables by context into different structs. Such a user-defined structure makes passing them to functions much easier. Also, fewer code changes are required when adding more data.

Above you see the definition and how it's used as input parameter of a global function. This works because it's passed by reference and not by value (/copy). This way you can modify shared variables even inside global functions. Here is how it could look inside a VSL BB that calls the global function:
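
Since the original screenshots are not reproduced here, a rough sketch written from memory in VSL-like syntax (all names are purely illustrative):

```
// global VSL script: struct definition and a global function using it
struct GameState
{
    int score;
    float timeLeft;
};

void AddScore( GameState state, int points )
{
    // works because the struct is passed by reference, not by value
    state.score = state.score + points;
}
```

And the calling side inside a "Run VSL" BB, where the shared keyword is allowed:

```
shared GameState g_state;

void main()
{
    AddScore( g_state, 10 );
}
```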

I hope this will help a few more people. If it's not clear, let me know.

CGGenie ran an interesting survey called "Upgrades09". After publishing the results (see previous link), they also added a very interesting article about how to interpret them!

Its title: "CG Survey: a question of cost and satisfaction".

To summarize the article and the survey, one could say: professional users are less satisfied with their tools than hobbyist or casual users.

Here are some extracts:

3ds Max's users have invested a large amount of money, are likely to be professionally pressured in their usage and timescales and also are likely to be pushing the software to its limits every day.

[…] in more challenging ways and those little niggles might become major blockers, those quirky crashes become fundamental cash burners – even though the actual reality of the event, the flaw or the software limitation wouldn't have changed, the user requirements would have.

This helps to understand why a diversified, professional user/customer base might always come across as unhappy grumblers. They use the tools under tight time frames and budgets. Working with their tool every day, they see what works efficiently and what doesn't. Moreover, they want to push it to its edges. They want to be fast(er). And if the tool improves, they move fast towards the new 'edge' … pushing it some more! And not everybody is necessarily pushing it in the same direction …

Obviously it's "a far greater challenge" to satisfy those people. 

A few years ago, when advergaming was slowly becoming an old concept, the topic of "serious games" started to gain some traction. I looked at some of our projects and said to myself: "Hey, we actually do serious games, too!"

We never liked the term "serious games". I read that there were discussions about it, but "serious games" is still the term mostly used. Our key idea for learning via gaming is that the learning process is built in, not obvious. You just play, and somehow, without noticing, you learn new things. That's actually a natural element of a game anyway: there is some kind of challenge – if it's too easy, it's no fun; if it's too difficult, it's no fun!

Moreover, game makers know: players don't like to read page after page on how to play a game (in general – there are always exceptions!). So there are many games out there with in-game tutorials; in many cases it's learning by doing already. Learning by doing, or learning by exploring …

So we prefer the expression "Game Based Learning" (GBL) or, to be more precise, "Digital Game Based Learning" (DGBL). It's very different from "traditional" eLearning. eLearning still contains a lot of the old spirit of non-interactive teaching methods.

I looked at some other projects and realized that some were different. They contained more "realism" and fit the idea of "training" better. That also requires tools to measure and report improvements in a formal way; a game score alone might not be enough. I like to use the term "Computer Based Training" (CBT) for this kind of application.

Game Based Learning and Computer Based Training are different, but you can gradually mix them. Nowadays I usually put them on an imaginary slider and mix between them based on the requirements and aims.

GBL <——0—-> CBT

For GBL, fun is the priority and realism is not important. For CBT, realism is more important and therefore fun is not the key element. Usually CBT applications are more VR-like (Virtual Reality). If you work in this domain, what terms do you use and/or prefer?

I like C#

In January I started a little fun project to continue learning C#. Using MOGRE (Managed Ogre3D) and WinForms I started a little editor. It's at a very, very early stage, and currently I am too busy to continue working on it, but I have to say I've enjoyed working on it so far.

C# and WinForms make things really a lot easier (I don't like MFC!). And I have only touched the tip of the iceberg so far. C++ is powerful and gives a lot of freedom. Lots of freedom = lots of choices = not always easy to make a choice. I enjoy having some "standard solutions" in C#, like delegates. Built-in introspection/reflection/RTTI is also really, really, really cool!

As everything is also of type "object", I believe that node-based editors/programming should be much easier/faster to realize using .NET/C#. Sure, if you need to be portable or have to squeeze the most out of the hardware, C++ is the choice. But if you do some PC-only sims or viz and don't mind wasting some CPU power, why not? There are not yet many 3D render engines available, but they are starting to appear (i.e. XNA-based ones).

At work, I decided to implement a (non-3D) project in C#. There are a bunch of asynchronous ways of doing things, and for me it's not so transparent yet (what happens in which thread, what is thread-safe, especially when things get layered). But I guess it's a matter of learning and understanding the framework.

Most of the time I am using Visual Studio 2008 Express, and for a free tool it's impressive. It now does lots of things that were previously only possible using 3rd-party add-ons like Visual Assist. Something I am missing is the ability to filter the IntelliSense drop-down list by type (property/event/method). Visual Assist has some filtering buttons at the bottom, and they are useful.


Last August I finally took some time to try an idea I had had for a while (years?): would it be possible to misuse the 3DVIA Virtools point-cloud system for rendering foliage objects (grass, flowers, plants etc.)? Basically, point clouds are used to visualize 3D scans or data. It's a feature that was slowly introduced with the 3.0 service packs and finalized in 3.5.

It comes with a feature called "Point Cloud Selection" which allows you to render a subset of the cloud separately. Interestingly, you can also render meshes instead of sprites. This, plus being able to use shaders with point clouds, gave me the idea that it might be useful for outdoor foliage rendering.

I'll make it short: it doesn't work the way I hoped.

If you try to render a selection with meshes using a shader, the whole system slows down drastically. I think it's a combination that was not considered or intended by the developers, so it's only compatible with the fixed-function GPU pipeline. Besides that, I am not sure what it takes to correctly calculate the vertex positions inside the shader, so that a mesh is rendered at each point position.

In addition, the bigger the cloud is, the slower the selection process itself becomes. Thus you cannot have one point cloud for your entire outdoor scene – you would have to manage multiple clouds via a grid, somewhat nullifying the benefits of this out-of-the-box selection system.

Here is a picture where on the left you see the whole point cloud and on the right you see a circular selection around the character:

Point cloud grass

Of course, there are other methods to achieve the same thing. I just wanted to see if it works this way too. 🙂