CGI & 3DS Max

The Animago Award is a German CGI award which has become more and more international over the past years. This time it took place at the Babelsberg Studio in Potsdam, near Berlin. This year there was also a conference, which was mostly organized by the reseller Lichtblick.

I have to say that I was very positively surprised by the conference, I enjoyed it a lot – therefore thumbs up! It also allowed me to see some people again, for example Hanno, who now works for Naughty Dog and was previously at Crytek. He gave a talk about Uncharted 2. I have a few notes, maybe I can remember something from them … hm … I can barely read 'em 😐 … sorry, this will be rough!

Creating a Character Driven Game – Hanno Hagedorn, Naughty Dog

  • 8000 triangles per hero head
  • no manual low-res creation, reduction automated for faster iteration
  • Mudbox and Photoshop used (for texturing?)
  • I think he said something about how to get a better look for the skin. I think they controlled the red tones via an extra channel that did the masking. Maybe that was mixed inside a shader via a Fresnel term or so.
  • extra ambient light map, but that was maybe not used in close-up shots (cinematics)
  • they used 2 normal maps
    • one rough
    • one for the skin shader to produce specular highlights

  • texture variations for wet/snowy clothes and skin that got blended in dynamically
  • dynamic wrinkles using ambient occlusion and normal maps that got dynamically blended in
  • hair had around 4000 polygons (I guess triangles)
  • per-frame light baking for the skin
    • (maybe at a resolution of 256)
    • which got blurred
    • my notes say "red blur edge", not sure what it means 🙁
  • faces were hand modelled
  • audio (voice) and body motion capture were recorded at the same time
    • better quality of voice acting and sync
    • extra cameras to record facial expressions
    • facial animation was manually animated using the reference material from the mo-cap sessions
  • all deformations done with bones, no blend shapes (no morphing)
  • 76 bones per face

Some more general info

  • level editor and cut scenes inside Maya
  • pre-process/pre-production: 6 months
  • production: 1.5 years
  • no producers, due to a very experienced and skilled team = flat hierarchy
  • PS3 only game
  • access to Sony's PS3 expert team

Crowd Simulation – Paul Kanyuk, Pixar

Another interesting talk was given by Paul, a TD at Pixar. He explained various crowd simulations done for films like Ratatouille, WALL·E and Up.

  • Secondary animations via signal processing
    • creating secondary animations procedurally can become quite processing-intensive
    • using signal processing on various inputs (transformation, animation etc.) creates nice effects at much lower cost
    • example: robots stopping suddenly with spring-like overcompensation
    • but phases (i.e. sine phases) should not shift
  • Massive was used with Maya and Marionette
    • flow fields on the terrain, as color maps, to help the agents' brains
    • 6 short animation clips
    • data transport between the different tools (Massive, Marionette etc.) via invisible bones
  • different rules combined via weighting
    • avoidance
    • leader following
    • helps to tweak and fine-tune behavior
    • 200 to 1000 rules for foreground actors (? hm, not sure about my notes here!)

Well, that's it for now. These were the highlights. As said, a bit rough, but maybe you got a thing or two out of it.

Wow, what an exciting time for real-time 3D enthusiasts!

UDK

Epic released the UDK for PC: the Unreal Development Kit, based on the famous Unreal Engine 3! For non-commercial/edu use it's totally free. And for commercial use there is a very reasonable and transparent pricing:

  • Internal use: US $2,500 per developer seat per year
  • Publishing: 25% royalties on revenues above US $5,000
  • Internal use may stack with publishing, thus: per seat per year plus royalties

You can read it directly on their site. Check out the features, here are some highlights:

  • SpeedTree included – with editor
  • global illumination solver
  • Bink Video Codec
  • Animation is driven by an AnimTree
  • animated facial normal maps using a visual scripting system
  • slicing of objects for physics-based destructions
  • distributed assets processing
  • automated creation of navigation meshes for pathfinding
  • visual shader creation system
  • and much more of the "usual" Unreal Engine stuff 😉

Another YAY 😀

One important note though: this is basically modding-level access. No source code access. So if you can't script what you need, you won't be able to implement it.

CPU GPU love

Last year I wrote about NVIDIA aiming to do more general computation and INTEL going for more specialized, parallel vector computation. Since then we have gotten tons more info about Larrabee, CUDA, the Stream SDK, OpenCL, DirectCompute etc. – all indicating that there might be a fusion one day.

According to a well-known German IT news site, INTEL revealed at the IDF09 that at the end of 2010 something called Advanced Vector Extensions (AVX) will be added to the die of the processor generation named "Sandy Bridge". At a later time, this will be the place for the Larrabee integration. Prior to that – in early 2010 – intermediate versions will be available where INTEL's CPU/GPU mix will sit in the same package.

Sounds like this won't be "one" unit but separate units very, very close to each other. Obviously this will increase the speed of data sharing.

"The CPU and the GPU create a co-processing environment where we have the CPU operating on very complicated sequential codes and the GPU operating on massively parallel applications."

Jen-Hsun Huang, NVIDIA’s President and CEO

When I recently read about something called "R-Screen" on the Dassault Systèmes blog "3D Perspectives", it reminded me of something I saw a couple of years ago.

[Image: R-Screen]

R-Screen is a VR installation created with 3DVIA Virtools (VR Pack). It is basically a rotating stereoscopic (rear-projection) screen that aligns itself towards the viewer based on head-tracking data. It allows you to move around a virtual object, which increases the feeling of immersion.

Something of similar purpose is Alias Research's "Boom Chameleon" from 1998/99 or so. You basically move a screen, attached to a boom, around its center like a window into the VR world. Nowadays, with camera-tracked markers, we have this even as Augmented Reality on portable devices! Interesting, though, how old these concepts are. Another demo shows a Mixed Reality example where physical objects are staged and affect a virtual scenario. This is called a Tangible Interface.

[Images: Alias Boom Chameleon | Alias staged Tangible Interface]

There are a few more (old but interesting) videos online from Alias Research (they take a while to load).

Last week I dug into the topic of exporting a lightmapped, vertex-colored scene from 3ds max to Unity. Unity3D does not come with special exporters but relies on standard formats like FBX and COLLADA. When exporting to FBX, 3ds max Shell materials do not get converted automatically to a lightmapped shader upon import. No wonder, as FBX (= Filmbox, which later became MotionBuilder) is not specially targeted at game content. Therefore you have to assign the shader and the second texture manually. Of course it would be better if this could be automated!

Here are two pictures from a test scene with vertex-colored houses that we once created for Virtools-based projects. The houses currently use only dummy lightmaps, but that's the result without any manual shader or texture assignments.

[Images: raw 3ds max scene | result in Unity 3D]

So, here is how I do it.

Preparations inside 3ds max with MaxScript (MXS)

Unity allows you to hook into the asset import pipeline via custom scripts. This is a very cool concept and is similar to something we did for our Virtools asset & build pipeline (which we called BogBuilder, btw). The key concept is therefore to *somehow* pass hints to an asset post-processor script. What I do is tag material names inside 3ds max, for example using __lm__ to indicate that the material is a lightmapped one. I use two underscores on each side because it reduces the probability that the original name accidentally contains such a sequence of letters.

I did not find a way to extract the names of the lightmap textures from FBX files inside a Unity post-processor script. So I actually add the texture name to the material name itself, too! Here is an example of how a material name inside 3ds max can look after preprocessing:

wandputz-tex__lm__S_4_WaendeCompleteMap_ambient-schlagschattenMulti100.tga

Pretty long, hehe. But it helps!
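Later, on the Unity side, such a name can be split apart again. A tiny Unity JavaScript sketch (matName and the variable names are made up for illustration):

// split a tagged material name into base name and lightmap filename
// (matName is assumed to hold the imported material's name)
var tag : String = "__lm__";
var idx : int = matName.IndexOf(tag);
if (idx >= 0)
{
    var baseName : String = matName.Substring(0, idx);               // e.g. "wandputz-tex"
    var lightmapFile : String = matName.Substring(idx + tag.Length); // e.g. the .tga name from above
}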

The custom MaxScript thus does the following for every Shell material (a short sketch follows below the list):

  • take the texture from the baked material and put it into the original material's self-illumination slot
  • add the lightmap tag to the original material name (if it's not already there)
  • add the lightmap texture filename (including extension) to the material name
  • assign the original material back onto the geometry
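Here is a minimal MaxScript sketch of those steps (property access from memory; it assumes Standard materials inside the Shell material, with the baked map sitting in the diffuse slot – so treat it as a starting point, not a finished tool):

fn processShellMaterial obj =
(
    if classOf obj.material == Shell_Material then
    (
        local orig  = obj.material.originalMaterial
        local baked = obj.material.bakedMaterial
        -- 1) put the baked (lightmap) texture into the self-illumination slot
        orig.selfIllumMap = baked.diffuseMap
        -- 2) + 3) tag the name and append the lightmap texture filename
        if (findString orig.name "__lm__") == undefined do
            orig.name = orig.name + "__lm__" + (filenameFromPath baked.diffuseMap.filename)
        -- 4) assign the original material back onto the geometry
        obj.material = orig
    )
)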

Don't forget to check whether the original or baked material is of type multi-material and handle it accordingly. Another issue I *sometimes* have is with special German characters like öäü: Unity sometimes replaces those upon import with other symbols and may therefore break your post-processor scripts when they look for the lightmap textures. I created two more custom MaxScripts that check for and replace those characters in material and texture names. (It would be good for object names, too, I guess.) As a little hint, in order to easily access all bitmap textures inside 3ds max you can use the following MaxScript snippet:

-- collect every BitmapTexture instance used in the scene
local maps = getClassInstances bitmaptexture

Using enumerateFiles or usedMaps() only gives you strings and might make things more complicated. As some of our meshes use vertex colors, I check for that too and then tag the material with __lmc__ instead of __lm__. To detect the use of vertex colors you can do the following:

local tmesh = snapshotAsMesh myObjectNode -- temporary TriMesh copy of the node
if ( getNumCPVVerts tmesh > 0 ) then ( /* has vertex colors: tag with __lmc__ */ )
free tmesh -- release the temporary mesh copy

Using AssetPostprocessor

There are several types of asset postprocessors. To create one, you have to place/create your script inside a project folder called "Editor". It's not created by default, so create one if you can't find it. Using JavaScript you usually start like this:

class AssetPost_ShadersByPostfix extends AssetPostprocessor
{
    // … callback functions go in here
}

and then you implement a callback function, depending on which kind of event you want to hook into.

OnAssignMaterialModel gets triggered each time a material has to be imported. In this callback you have control over whether, where and how the new material is created. If you organize your project in a way that clusters all materials in specific directories, rather than keeping them close to the geometry assets, this works fine. Otherwise it isn't the best callback to use, as you don't get any hint where inside the project directory hierarchy the imported FBX sits. Usually on FBX import a "Materials" folder is created on the same level, something you can't do easily with OnAssignMaterialModel.
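A minimal sketch of such a callback (it goes inside the post-processor class from above; the shared "Assets/Materials" folder and the reuse logic are my assumptions, adapt them to your project):

// create/reuse materials in one shared, assumed folder
function OnAssignMaterialModel (material : Material, renderer : Renderer) : Material
{
    var path : String = "Assets/Materials/" + material.name + ".mat";

    // reuse the material if an earlier import already created it
    var existing : Material = AssetDatabase.LoadAssetAtPath(path, Material) as Material;
    if (existing != null)
        return existing;

    AssetDatabase.CreateAsset(material, path);
    return material;
}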

Alternatively you can use OnPostprocessAllAssets. The benefit of this callback hook is that the asset creation is automatically done for you and you get the target directory paths as an array. To detect materials you can simply do something like this:

    static function OnPostprocessAllAssets (importedAssets : String[], deletedAssets : String[],
            movedAssets : String[], movedFromAssetPaths : String[])
    {
        for (var aFile : String in importedAssets)
        {
            // identify materials by their .mat extension
            if ( aFile.ToLower().EndsWith(".mat") )
            {
                // … handle the tagged material here
            }
        }
    }

This works pretty well. But there is also a scenario where this isn't the best fit. If you use the FBX exporter option "Embed Media", which includes all textures inside the FBX file, then the lightmap textures do not get imported during the first import/refresh. They get imported if you do a refresh or if you switch to another app and back. As a result, your OnPostprocessAllAssets may not find the lightmap textures, because it's called during the first run, when the materials are created (and only the diffuse textures get imported) – the lightmaps are only added to the project in the second run.

So what I do is manually call a custom ScriptableWizard inside Unity after the import. It's therefore not totally automatic, but it is quite robust and only takes something like 3 clicks.

Somehow I miss some built-in functionality for dealing with project assets inside Unity, but you can parse through all material assets in your project using standard .NET, like this:

import System;
import System.IO;

// collect every material asset file below the project's Assets folder
var matFiles : String[] = Directory.GetFiles(Application.dataPath, "*.mat", SearchOption.AllDirectories);

for (var aMatFile : String in matFiles)
{
    // … check each material file name for the shader tags here
}

The rest is quite straightforward: the wizard iterates through all project materials, checks if they contain any shader tags in their names, assigns the corresponding shader, extracts the lightmap texture name, finds the texture and assigns it as the second texture to the shader. A skeleton of such a wizard is sketched below.
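For completeness, a minimal skeleton of such a ScriptableWizard (class name and menu path are made up):

import UnityEditor;

class AssignLightmapShadersWizard extends ScriptableWizard
{
    @MenuItem ("Custom/Assign Lightmap Shaders")
    static function CreateWizard ()
    {
        ScriptableWizard.DisplayWizard("Assign Lightmap Shaders", AssignLightmapShadersWizard, "Go!");
    }

    function OnWizardCreate ()
    {
        // iterate all project materials (see the Directory.GetFiles snippet above),
        // look for the __lm__ / __lmc__ tags in their names, assign the matching
        // shader and set the lightmap texture as the second texture
    }
}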

Well, that's it. I hope this helps you to set up a better pipeline for importing assets with lightmaps from 3ds max. Of course the key concept can be used for anything else, too!

CGGenie did an interesting survey called "Upgrades09". After publishing the results (see previous link), they also added a very interesting article about how to interpret them!

Its title: "CG Survey: a question of cost and satisfaction".

To summarize the article and the survey, one could say: professional users are less satisfied with their tools than hobbyist or casual users.

Here are some extracts:

3ds Max's users have invested a large amount of money, are likely to be professionally pressured in their usage and timescales and also are likely to be pushing the software to its limits every day.

[…] in more challenging ways and those little niggles might become major blockers, those quirky crashes become fundamental cash burners – even though the actual reality of the event, the flaw or the software limitation wouldn't have changed, the user requirements would have.

This helps to understand why a diversified, professional user/customer base might always come across as unhappy grumblers. They use the tools under tight time frames and budgets. Working with their tool every day, they see what works efficiently and what doesn't. Moreover, they want to push it to its edges. They want to be fast(er). And if the tool improves, they move fast towards the new 'edge' … pushing it some more! And not everybody is necessarily pushing it in the same direction …

Obviously it's "a far greater challenge" to satisfy those people. 

Yesterday I came across this:

(…) Canvas 3D JavaScript library (C3DL), (…) aims to facilitate the development of browser-based interactive 3D graphics with JavaScript. It is highly experimental and is still at a relatively early stage in its evolution. 

(…)

Interesting perspective!!

Also interesting to me is "the effort to implement SMIL for Firefox". But 'interesting' in a different way, as I actually once did a SMIL project … many years ago, during my studies. Hmm … *thinking* … no idea when, maybe in 2000?? I don't know. What is SMIL, you may wonder? It's a W3C standard …

The Synchronized Multimedia Integration Language (SMIL, pronounced "smile") enables simple authoring of interactive audiovisual presentations. SMIL is typically used for "rich media"/multimedia presentations which integrate streaming audio and video with images, text or any other media type. SMIL is an easy-to-learn HTML-like language, and many SMIL presentations are written using a simple text-editor

Back then there were no freely available players that actually implemented the whole specification. We tried the QuickTime Player and the RealPlayer. I know that there was a third one, but I think you had to buy it. QuickTime supported only a really small subset and RealPlayer was actually not fully compatible. We had to use quite a few 'RealPlayer only' tags, but in the end we were actually able to do a little "rich media" presentation and watch it using the RealPlayer.

The project was something like a simple web-based SMIL editor. Very simple. We had an HTML front-end where you were able to add media, transitions etc. The data was then saved as SMIL, and the editor was able to load the SMIL back for further editing.

I am not sure if there is really a need for this nowadays. Flash or the combo "HTML, CSS, JS" does the trick.

Maybe I need to correct myself: I said 2009 was going to be an important but probably not a breakthrough year for 3D vision using stereoscopy.

This month I saw a lot of marketing in the online and offline media. Early this month a German tabloid had a front-page headline saying that the 'next-gen TV' is going to be 3D-capable and connected to the Internet.

Online I saw the marketing efforts of NVIDIA and iZ3D in a lot of places. iZ3D claims to have "the first 3D display for gamers".

It's a passive display that requires glasses. It's a bit of the usual PR blahblah, because X3D had a stereoscopic display aimed at gamers a couple of years ago! At ~1000 € it was affordable for enthusiasts but still too pricey for the mass market. The iZ3D 22" display is around ~$400 – thus more likely to become a mass product. Still, that doesn't make it the "world's first designed for gamers". A key difference is that the iZ3D uses passive stereo, so you are required to wear polarized glasses, which probably also cuts costs.

Maybe you remember the active shutter glasses that came with ELSA graphics cards at the end of the 90s. The problem with them is mainly the frequency – back then, and probably today too, they created headaches: screens ran at 60 Hz, which gets halved per eye, giving you a refresh rate of 30 Hz per eye – not enough for a smooth perception. Today there are some displays that run at 120 Hz.
Passive displays don't alternate pictures but show both at the same time. The polarized glasses act as filters, so each eye gets just one image. Usually, if you tilt your head too far to the side, the separation gets lost and you get some "ghosting" or even both pictures. But I think that's less of a problem than with auto-stereoscopic displays, where you usually need to be at a specific angle on the horizontal axis.

NVIDIA now has new glasses – they are active and they are able to sync to 120 Hz. They refer to 120 Hz displays from Samsung and ViewSonic. Sony is experimenting as well, same for Panasonic and plenty of other companies … !

We have two autostereoscopic displays in our offices: an old single-view one and a more recent one with 5 views.

Puppetshop, the rigging and animation toolkit from Kees Rijnen, is now available for free!!!

Aaaaand … ShaderFX is free for …

[…] individuals and companies smaller than 2 employees

:woot:

Wow ! Pretty cool !

[It's at the bottom of the page, where it says: download.]

In addition to that, as a result of the Softimage acquisition by Autodesk, CAT, another rigging, muscle and animation toolkit, is now available as an extension download for 3ds max subscription customers. Here is an old review of the 'Character Animation Toolkit'.

[Images: Puppetshop | Character Animation Toolkit | ShaderFX]

I never had a look at these animation packages… what about you?