ATI's new Ruby?

I wonder if they've managed to work out how to get the boobie physics running on OpenCL yet? ;)
 
Part of the image seems to be obscured by a shadow of what is probably a bald man's head.

The multi-monitor support is the most exciting part, to me anyway.
 
This Ruby's another example of electronic entertainment still being stuck in a white people-only universe...

Roughly 9 out of every 10 people are colored on this planet, yet in games the stereotypical token black person is even more token than is typically the case on TV shows...

Sad.
 
I don't think there are many black-skinned redheads! :)

I suspect that game designers use the whitey faces for a couple of basic reasons which have nothing to do with racial preferences.

1. I believe it's easier to render a whitey face with more basic lighting/shader technology; with black faces and, say, older technology it's a lot harder to make a black face that doesn't look like an Ewok from Noddy.

2. They are designing games for a western audience.
 
Code:
                                  ----------------------------
                          /|  /|  |                          |
                          ||__||  |       Please don't       |
                         /   O O\__           feed           |
                        /          \       the trolls!       |
                       /      \     \                        |
                      /   _    \     \ ----------------------
                     /    |\____\     \     ||
                    /     | | | |\____/     ||
                   /       \|_|_|/   |    __||
                  /  /  \            |____| ||
                 /   |   | /|        |      --|
                 |   |   |//         |____  --|
          * _    |  |_|_|_|          |     \-/
       *-- _--\ _ \     //           |
         /  _     \\ _ //   |        /
       *  /   \_ /- | -     |       |
         *      ___ c_c_c_C/ \C_c_c_c____________
 
Ever played Crysis? More Asians than you could ever wish for, plus some blacks too!
 
Wow, this is the first 3D female demo that's got my attention. I mean... since the fairy from Nvidia whose name escapes me, no 'spokeswoman' from Nvidia/ATI has looked like a step up from the fairy lady. The new Ruby shading... looks so real! Now if ATI modelled the whole body... damn... they need to release the playable demo. Speaking of demos, it's been a while since N/A released one. Be a sport, Dave!
 
The underlying technique of the light stage is to record a zillion photographs of a real subject being lit from a zillion different angles. 1 light per photograph. Then, to show the subject being lit in a new environment, you convert the lighting environment to a (conceptual, not literal) cube map and modulate each of the lighting photos by 1 pixel of the cube map. Add all of those modulated lights together and you get a newly lit image of the subject. At that point it is "just" a matter of compressing the hell out of the lighting photos and smartly streaming and processing them fast enough to preview it in real time.
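
A minimal sketch of that modulate-and-sum relighting step, assuming the basis photos are already decoded into memory (the function name, array shapes, and toy data below are my own illustration, not OTOY's actual code):

Code:
import numpy as np

def relight(basis_images, env_weights):
    # basis_images: (N, H, W, 3) - one photograph per light-stage light,
    #               i.e. the subject lit by exactly one light at a time.
    # env_weights:  (N, 3) - the new lighting environment downsampled so
    #               that each entry is the colour/intensity of the
    #               environment in the direction of one capture light
    #               (conceptually one cube-map texel per basis image).
    # Returns an (H, W, 3) image of the subject relit by that environment.
    return np.einsum('nhwc,nc->hwc', basis_images, env_weights)

# Toy example: 8 basis photos of a 4x4 "subject", relit under a random environment.
basis = np.random.rand(8, 4, 4, 3)
env = np.random.rand(8, 3)
print(relight(basis, env).shape)  # (4, 4, 3)

The hard part, as described above, isn't this weighted sum - it's compressing and streaming the roughly 21,000 source photos fast enough to evaluate it in real time.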

Advantages:
*The result looks perfectly real (to the limit of source image resolution) because it is basically a photograph of a real person.

Disadvantages:
*The data set is huge, even after compression. The video mentions 21,000 images.
*The subject is static (Unless you can handle enough images to make a movie. Then you can have a pre-recorded video like the brunette OTOY showed off a while back.) Also, the camera is static. Basically, all that you can change is the lighting.
*No local lighting.
*No mesh to work with. Just photographs.


Now, OTOY is a bunch of smart guys and they have a lot of tech that I don't know much about. They might have some crazy GPU voxel raytracer that can animate the light stage data and make everything awesome. But so far, this is the best of my knowledge as to how it works.

EDIT:
Looking at the video again, they do rotate the view around 1 axis. I suspect this is similar to the QuickTimeVR trick of scrolling through a movie of the camera rotating around the subject.

Still... it's a neat technique and it's great for film, but I won't get excited until I see it being used on an arbitrarily animating model.
 
Let's say they have the lighting data from the lightstage scan and the derived mesh or voxel data. Can an artist then go in and animate the subject (after the scan)? Using the collected lighting data can you relight new synthesized/animated versions of the subject, or are you limited to the performance recorded on the lightstage? At the end of the clip he seems to imply that they will be able to animate the static model. If that's true, then this is the coolest thing since sliced bread.

Looking at the video again, they do rotate the view around 1 axis. I suspect this is similar to the QuickTimeVR trick of scrolling through a movie of the camera rotating around the subject.

Good catch :yes: With a real full 3D mesh or voxel dataset you'd have a completely free camera.
 