Ruby Video - HERE! and 2 STUNNING pics :D

DemoCoder said:
Note the polygon budget used on the breastisis.

And with the geometry load on the video card, doesn't that mean we have more processor time for better breast physics à la "Dead or Alive" (the first one with the ridiculous floppage)?
 
Yosh, I don't mean GA for everything. I mean GA for selected layers/plates. Yes, it is too slow to use on everything. But I believe it was used in Harry Potter Chamber of Secrets, Matrix Revolutions, and for Gollum in LOTR. It was also recently used in RockFish.

Just like with RayTracing, I think you'll see people use scanline rendering except when a certain scene calls for something extra, in which case more expensive techniques come into play (ray tracing/radiosity/photon mapping/monte carlo/etc)
 
Nice work @ Rhino, but I wonder why the explosion in the pre-rendered production emerges from the door where the bomb was actually planted, while in ATI's demo video (MOV) it emerges from above the bridge. The transition in ATI's demo also seems wrong?
 
DaveBaumann said:
rhino said:
Just for you guys, a pre-rendered still of Ruby.

Mmmmm, swords... do I detect another episode of Ruby? Perhaps in about 6 months time...? ;)
Does it involve a guy called Gates? Some sort of corporate takeover by ATI?
 
DemoCoder said:
Yosh, I don't mean GA for everything. I mean GA for selected layers/plates. Yes, it is too slow to use on everything. But I believe it was used in Harry Potter Chamber of Secrets, Matrix Revolutions, and for Gollum in LOTR.

You're right, Revolutions might have had it in a few scenes, then again they did some pretty crazy things there :) But Gollum certainly had no GI - the subsurface scattering shader on his skin was slow enough on its own. Weta did render occlusion passes, but that's far from full GI. I don't know about Harry Potter, but I'll ask my friend at MPC who's been working on Azkaban recently as a lighter. I'm sure we're not going to use GI though :)

A better-suited area for GI is architectural visualisation, where it's good enough to render it once and bake it into the textures or vertex colors. Such solutions are also often used in VFX work for CG backgrounds and sets... but full, dynamic GI (calculated for every frame) is still too slow. The VFX market is in some trouble anyway: the lowest bid gets the work, so every second of render time counts more than ever...
 
Megadrive1988 said:
now then, Xbox 2 aka Xenon, N5 aka GCNext, and PS3, should be a significant step above the X800 8)

Didn'tcha hear? ATI won the Xbox 2 contract; they'll be running R500s.
 
I wonder what AA level was used for the real-time version. I cannot believe that it is 6x MSAA, because it looks much better in my opinion.
 
I cannot believe that it is 6x MSAA, because it looks much better in my opinion.
It most probably was 6x MSAA; it would just look "better" because of the natural blurring of the video compression. It's always hard to judge AA on anything that has been compressed.
 
I'm sure it's been downsized from something like 1280*960, which adds 4x supersampling on top of the 6x MSAA. So it's like 24xS or something :)
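The arithmetic behind that "24xS" figure can be sketched quickly. The resolutions below are the poster's guesses, not confirmed numbers; a 2x downscale in each axis averages a 2x2 block of rendered pixels per output pixel, which behaves like 4x ordered-grid supersampling on top of the MSAA:

```python
# Rough estimate of the "effective" AA sample count when a 6x MSAA render
# is downsized 2x in each axis (assumed resolutions, per the post above).
msaa_samples = 6
render_w, render_h = 1280, 960    # assumed render resolution
display_w, display_h = 640, 480   # assumed display resolution

# Each output pixel averages a block of rendered pixels,
# which acts like ordered-grid supersampling.
ssaa_factor = (render_w // display_w) * (render_h // display_h)  # 2 * 2 = 4

effective_samples = msaa_samples * ssaa_factor
print(effective_samples)  # 24
```

Note the caveat: MSAA only takes extra samples at geometry edges, while the downscale supersamples shading too, so "24x" is at best a rough edge-quality figure rather than true 24x supersampling.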
 
Goragoth said:
I cannot believe that it is 6x MSAA, because it looks much better in my opinion.
It most probably was 6x MSAA; it would just look "better" because of the natural blurring of the video compression. It's always hard to judge AA on anything that has been compressed.

I'm sure it's been downsized from something like 1280*960, which adds 4x supersampling on top of the 6x MSAA. So it's like 24xS or something

Makes sense.

Thanks
 