ATI Mojo Day report

It's very simple: What's the best demo NVidia has? Wolfman? Undersea creature? Chameleon? Final Fantasy?

For a while, I thought these were much better than the supplied ATI demos. Not only were the effects better, but the overall presentation and artistic content were better.


But ATI's team has gone one step further. They have taken SIGGRAPH offline movies and converted them to run in real time on the R300. Not running at 5 fps and severely "chopped down" like Final Fantasy, but running at at least 30 fps and nearly indistinguishable from the original offline RenderMan movies.

The two major ones I'll cite are the Debevec "Rendering with Natural Light" demo and the Animusic demo. And it's not simply that they have fast hardware: a lot of care and thought went into the Animusic demo to make it run in real time but look the same.

It just proves ATI's demo team got Skillz.
 
Just an FYI:

#3 Microsoft has developed an AWESOME shader debugger for Visual Studio. You can set breakpoints in your shaders or set breakpoints on screen pixels! You can single-step your shaders and watch each line be executed or each pixel be rendered. It uses the reference rasterizer to do this, of course.

Mac OS X (10.2) came with a really nifty OpenGL shader builder (with debugger) that does pretty much the same thing. There's also a neato OpenGL profiler that spits out performance numbers, driver stats, OpenGL state, call traces with input parameters, etc...

Now if I could only get the Cg compiler to build without spitting out a ton of errors... o_O

#10 ATI devrel employees were all dressed like Austin Powers

Aside from the free hardware, I somehow find this the most entertaining!

DemoCoder, could you elaborate now on just how it is that ATI's demo team has surpassed Nvidia's? I never would have thought that ATI would ever surpass Nvidia in this realm.

It's not too surprising if you subscribe to various devlists and have been picking up a lot of books lately. ATi dudes have become *a lot* more proactive, accessible, and active in publishing their research and techniques... That, along with DemoCoder's point about ATi's demos (which have been getting more and more impressive)...
 
archie4oz said:
Now if I could only get the Cg compiler to build without spitting out a ton of errors... o_O

Hrm, this is the first time I have read that Cg was having issues. I am surprised there is so little said about this. Maybe there is ... but I am certainly not privy to it.

DemoCoder, could you elaborate now on just how it is that ATI's demo team has surpassed Nvidia's? I never would have thought that ATI would ever surpass Nvidia in this realm.

It's not too surprising if you subscribe to various devlists and have been picking up a lot of books lately. ATi dudes have become *a lot* more proactive, accessible, and active in publishing their research and techniques... That, along with DemoCoder's point about ATi's demos (which have been getting more and more impressive)...

No, I don't subscribe to devlists; probably I should, I suppose. But lately I have had to move away from IT for income, and I spend less and less time involving myself with its inner workings. I do, however, have an acute interest in graphics tech that seems to bring me back in line should I drift. Anyhow, thanks for the suggestion. It would be nice to see some more demos from ATI to solidify your points. BTW, I love the Animusic demo, but the old lady is getting tired of hearing it... ;)
 
Sabastian said:
archie4oz said:
Now if I could only get the Cg compiler to build without spitting out a ton of errors... o_O

Hrm, this is the first time I have read that Cg was having issues. I am surprised there is so little said about this. Maybe there is ... but I am certainly not privy to it.

Note that he's having trouble getting the Cg compiler itself to build on his Mac OS system (I think), i.e. he's recompiling the Cg compiler, not reporting problems with Cg's output.
 
My impression of RenderMonkey is that it is a nice prototyping tool for "previewing" vertex and pixel shader effects. That is, RenderMonkey is NVEffectsBrowser on steroids.

NVEffectsBrowser stores metadata, shaders, model references, etc. as C code, and thus it is not editable at runtime by the "IDE". You can't use NVEffectsBrowser to "develop" the shaders.


RenderMonkey, on the other hand, stores a graph of information (not a scene graph, but kind of a lightweight one for compositing) in an external text file (a declarative XML format), plus other data that can automagically set up models, textures, and other states, and then preview the rendering for you.

So, you can use RM to write procedural shaders and test other effects without having to write any DirectX or OpenGL code.

However, RM has no "runtime", so developers can't really "import" RM effects into their game engine. RM also doesn't do C/C++ code generation. It would be nice if it could autogenerate C++ classes to make all those texture stage state and other calls for you. After all, if I spent hours clicking through RM dialog boxes setting up all the 3D API state, you'd think I'd want to reuse that work. In reality, I expect most developers to still go through the "compile-edit-debug" cycle with Visual Studio to debug/prototype their own pixel shaders.
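
To make that concrete, here's roughly the kind of boilerplate I'd want an "export to C++" feature to emit. This is just a hypothetical sketch (the effect, texture names, and states are invented for illustration), replaying the fixed-function device state you'd otherwise click together in RM dialogs:

Code:
#include <d3d9.h>

// Hypothetical output of the "export to C++" feature RM lacks: replay the
// device state for one effect pass. Everything here is illustrative.
HRESULT ApplyRustEffect(IDirect3DDevice9* dev,
                        IDirect3DTexture9* baseMap,
                        IDirect3DTexture9* rustLayer)
{
    // Stage 0: base texture modulated with the vertex diffuse color.
    dev->SetTexture(0, baseMap);
    dev->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_MODULATE);
    dev->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    dev->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);

    // Stage 1: blend the rust layer over the result using its alpha channel.
    dev->SetTexture(1, rustLayer);
    dev->SetTextureStageState(1, D3DTSS_COLOROP,   D3DTOP_BLENDTEXTUREALPHA);
    dev->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    dev->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_CURRENT);

    // Render states this effect depends on.
    dev->SetRenderState(D3DRS_ZENABLE, D3DZB_TRUE);
    dev->SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW);
    return S_OK;
}

Even just that much would save retyping the state setup every time you move an effect from RM into the engine.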

I like RM, but it needs improvement to be really useful for developers. Right now, it's cool for artists and cool for playing around, but it could be substantially enhanced with code generation techniques. If I spent a lot of time prototyping and building up nice RM effects, I'd like to be able to "export" that work out of RM format and back into my engine, without having to rewrite all the API calls needed to achieve the effect.
 
DemoCoder said:
I attended ATI's developer event in San Francisco last week; here are some of my comments.

#1 ATI is aggressively going after developers.

...

#2 ATI has made a terrific comeback over the last year. Their developer support used to pale in comparison to NVidia's, and their demo team wasn't putting out stuff that was as good. There has been a complete turnaround now.

...

#6 Richard Huddy is still an awesome presenter.

Heh, sorry for being so slow, but wasn't it a scoop for ATI that they got Richard Huddy to join their ranks from nVidia's Developer Support (even though he was in 'transfer' at Codemafia)? It's a good sign, methinks.

Anyway, DemoCoder, what's your take on the state of their drivers from a developer's point of view?
 
Ilfirin said:
1. It isn't 'the Doom 3 method'; the process was proposed in a SIGGRAPH paper almost a decade ago. Doom simply uses the process, it didn't invent it :)

Could you be more specific about which SIGGRAPH paper you are talking about?

Thanks,

K-
 
"Appearance-Preserving Simplification" (Cohen, Olano, and Manocha, SIGGRAPH '98). I'm sure there are others, but this one perfectly describes the technique and was the first citation about it I found in the ACM Digital Library.
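
For anyone who hasn't read it, the core of the capture step is simple enough to sketch. Something in this spirit (not the paper's exact algorithm; the ray/mesh intersection is stubbed out and all the names below are mine): for each texel of the simplified model's map, back up along the low-poly normal, cast a ray at the detailed mesh, and store the normal you hit.

Code:
#include <cmath>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 Normalize(Vec3 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return Vec3{ v.x / len, v.y / len, v.z / len };
}

struct RayHit { bool hit; Vec3 normal; };

struct HiPolyMesh
{
    // Stub: a real version would intersect the ray with the detailed mesh
    // and return the interpolated surface normal at the hit point.
    RayHit CastRay(Vec3 /*origin*/, Vec3 /*dir*/) const { return { false, { 0, 0, 1 } }; }
};

// lowPos/lowNrm hold, per texel (u,v), the interpolated position and normal
// on the simplified surface (produced by rasterizing the low-poly triangles
// into UV space). The output is an RGB map of the detailed model's normals.
void BakeNormalMap(const HiPolyMesh& detail,
                   const std::vector<Vec3>& lowPos,
                   const std::vector<Vec3>& lowNrm,
                   int w, int h, std::vector<uint8_t>& rgbOut)
{
    rgbOut.assign((size_t)w * h * 3, 128);
    for (int i = 0; i < w * h; ++i) {
        Vec3 n0 = Normalize(lowNrm[i]);
        // Start a little behind the low-poly surface, cast along its normal.
        Vec3 origin{ lowPos[i].x - n0.x, lowPos[i].y - n0.y, lowPos[i].z - n0.z };
        RayHit r = detail.CastRay(origin, n0);
        if (!r.hit) continue;
        Vec3 n = Normalize(r.normal);
        // Pack [-1,1] into [0,255] RGB, the format DOT3 hardware expects.
        rgbOut[(size_t)i * 3 + 0] = (uint8_t)((n.x * 0.5f + 0.5f) * 255.0f);
        rgbOut[(size_t)i * 3 + 1] = (uint8_t)((n.y * 0.5f + 0.5f) * 255.0f);
        rgbOut[(size_t)i * 3 + 2] = (uint8_t)((n.z * 0.5f + 0.5f) * 255.0f);
    }
}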
 
Kristof said:
Ilfirin said:
1. It isn't 'the Doom 3 method'; the process was proposed in a SIGGRAPH paper almost a decade ago. Doom simply uses the process, it didn't invent it :)
Could you be more specific about which SIGGRAPH paper you are talking about? Thanks, K-

Hi Kristof; not the SIGGRAPH paper, but someone else doing the same thing:

http://www.flipcode.com/cgi-bin/msg.cgi?showThread=10-02-2002&forum=iotd&id=-1

He has improved the Quake 3 characters with this method. Maybe I misunderstand his method, but it seems to me that this should even work for existing games, because the information for the normal map is already there:

[Image: 10-02-2002.jpg — Klesk and Tank Jr. from Quake3, without (left) and with (right) the generated normal maps]


Text:

Inspired by all the recent poly reduction bump screenshots (hint: Doom3) I tried to achieve a similar look with existing geometry and this is the result. The image shows Klesk and Tank Jr. from Quake3 on the left without normal map and on the right with normal mapping applied. For calculating the normal map I convert the diffuse texture to greyscale when I load the model and use that as a heightmap for normal map generation. This works since artists tend to paint lighting information into the textures. Since they can't assume a particular light direction/position or the model would look wrong when placed into a scene, the luminance information (from the greyscale bitmap) is suitable for normal map generation. To apply the normal maps you have to duplicate some vertices (those where the texture is mirrored). The demo is part of my testbed that you can download at www.chengine.com. All you have to do is to set the q3a variable in the test.set file correctly to your Quake3 root directory. It should run on all cards with 2 texture units and DOT3 support although I've only tested it on my GeForce2MX.
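
For those curious, the heightmap-to-normal-map step he mentions is straightforward to do yourself. A minimal sketch of it (the bumpScale knob is my own invention; the post doesn't give one): take gradients of the greyscaled diffuse texture and turn them into packed normals.

Code:
#include <cmath>
#include <cstdint>
#include <vector>

// grey: w*h luminance values in [0,255]; out: w*h*3 bytes of packed normals.
void HeightToNormalMap(const std::vector<uint8_t>& grey, int w, int h,
                       float bumpScale, std::vector<uint8_t>& out)
{
    out.resize((size_t)w * h * 3);
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            // Central differences with wrap-around addressing.
            float hl = grey[y * w + (x + w - 1) % w] / 255.0f;
            float hr = grey[y * w + (x + 1) % w]     / 255.0f;
            float hd = grey[((y + h - 1) % h) * w + x] / 255.0f;
            float hu = grey[((y + 1) % h) * w + x]     / 255.0f;

            // Normal of the height field z = bumpScale * height(x, y).
            float nx = (hl - hr) * bumpScale;
            float ny = (hd - hu) * bumpScale;
            float nz = 1.0f;
            float len = std::sqrt(nx * nx + ny * ny + nz * nz);

            // Pack [-1,1] into [0,255] RGB for DOT3 blending.
            size_t i = (size_t)(y * w + x) * 3;
            out[i + 0] = (uint8_t)((nx / len * 0.5f + 0.5f) * 255.0f);
            out[i + 1] = (uint8_t)((ny / len * 0.5f + 0.5f) * 255.0f);
            out[i + 2] = (uint8_t)((nz / len * 0.5f + 0.5f) * 255.0f);
        }
    }
}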
 
Saem said:
Wow, that looks really good. Thanks for the linkage.

No, it doesn't IMO. :eek:

You have to generate the normal maps as a whole different piece of art when you're using them the Doom III way, to add surface detail together with per-pixel lighting (dot3).

This fast hack really doesn't cut it, IMO.
 
My only complaint with the posted method is that the result is just too "busy" and the model surface is just too "rough." I assume that if the normal map were done correctly, some of the greyscale image "shadows" that have created relief on the normal-mapped models would in fact just remain flat texturing. I therefore assume that the resulting model would have larger areas of "smooth" surface that are displaced/bumped/whatever-you-want-to-call-it as a whole.

But, subjectively, I guess I'd conclude that in a game the models on the right would look better than the originals.
 
I like the extra detail on the models in some places, but in others there could be less; for example, Klesk's face has a more realistic look with less detail, particularly on his forehead. Plus, the extra detail in combination with these low-poly models is funny in some ways. If the models had a higher poly count, these would look considerably better, IMO. So in some ways I like it and in others I don't.

EDIT: But the detail on the bottom character is almost undeniably better.
 
LeStoffer said:
Saem said:
Wow, that looks really good. Thanks for the linkage.

No, it doesn't IMO. :eek:

You have to generate the normal maps as a whole different piece of art when you're using them the Doom III way, to add surface detail together with per-pixel lighting (dot3).

This fast hack really doesn't cut it, IMO.

Looks way better than the original...
 
Looks better, sure, but using luminance to create a normal map generally doesn't yield very good results. It'll look more detailed, but the lighting will look very odd. So it is even in these pictures: looking at the legs in the lower image gives the impression that light comes from the floor, while the chest gives the opposite impression.
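
For context, the dot3 path being discussed boils down to a per-pixel dot product between the normal-map texel and a light vector, so whatever direction the normal points *is* the lighting. A minimal fixed-function sketch (D3D9 interfaces shown, though the same states exist in D3D8; it assumes the per-vertex light vector has been range-compressed into the vertex diffuse color):

Code:
#include <d3d9.h>

// Classic DOT3 per-pixel lighting: result = saturate(dot(normal, light))
// per pixel. Assumes the light vector was packed into the vertex diffuse
// color as biased RGB ([-1,1] -> [0,255]).
void SetupDot3Lighting(IDirect3DDevice9* dev, IDirect3DTexture9* normalMap)
{
    dev->SetTexture(0, normalMap);
    dev->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_DOTPRODUCT3);
    dev->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE); // normal map
    dev->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE); // light vector
    dev->SetTextureStageState(1, D3DTSS_COLOROP,   D3DTOP_DISABLE);
}

With normals derived from painted luminance instead of real geometry, that dot product follows the artist's brush strokes, which is exactly why the legs and the chest appear lit from different directions.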
 