About Adrianne

I guess the demo will come out with the DX10 (Vista) drivers, as there aren't any for the 8800 cards yet, only XP ones.
Anyone know if the demo is DX9-compatible or DX10-only?
 
I think Adrianne was chosen because of her husband's connections to the 3D industry (in fact, her husband appears to know NVidia's manager of desktop products personally). I disagree that the demo is unimpressive, and I doubt previous-gen cards have the power to run it at the resolutions NVidia was demoing. It's just not as good as the smoke-box stuff.

I did, however, think the Asian-ish chick they showed on the G70, as well as the fairy-ish chick for the NV3x/NV4x, were "cuter" in the face. Adrianne's face, IMHO, isn't that beautiful.
 
Yes, but they... Actually, as a mild-mannered Canadian, I have nothing to say.

I have to apologise for that comment, I guess I was just a bit drunk and grumpy that night ;-)

EDIT: (to add some value to the post)

Apart from the hair, I don't think there is *that much* to separate Dawn and Adrianne. I was disappointed; I was really hoping for full flowing hair, and I perceive the hair as a downgrade since Nalu. Never mind, it's still clear that there is SOOOO much untapped performance to be had from this card, and it's really exciting times nonetheless...

681 million transistors... GEEEEEEES.

;-)
 
Apart from the hair, I don't think there is *that much* to separate Dawn and Adrianne

Dawn did not do real-time subsurface scattering; Adrianne does. In hi-res motion, zooming in on Dawn's face and Adrianne's face, Adrianne's skin looks way better. Actually, Adrianne's hair is an upgrade from Nalu too. Both Adrianne and Nalu model the hair with tons of geometry, but according to NV's launch presentation, Adrianne's hair uses SSS shaders as well (and probably shadowing), whereas Nalu's, IIRC, is just plain old anisotropic lighting shaders. Maybe for Nalu it doesn't matter so much: Asian women with deep black hair tend to exhibit the shiny anisotropic look, whereas SSS becomes more dominant with lighter hair colors, and it's hard to see the effect of hair shadowing on already-dark hair compared with the effect it has on light colors.
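For anyone curious, the "plain old anisotropic" hair look usually means Kajiya-Kay style strand lighting: both diffuse and specular are derived from the hair tangent instead of a surface normal. Here's a minimal CUDA sketch of that textbook model; to be clear, this is not NVidia's actual demo shader, and all the names and test values are mine:

Code:
// hair_aniso.cu -- Kajiya-Kay style anisotropic strand lighting sketch.
// Textbook model only; not NVIDIA's demo shader.
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

struct V3 { float x, y, z; };

__host__ __device__ float dot3(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
__host__ __device__ V3 norm3(V3 v) {
    float s = 1.0f / sqrtf(dot3(v, v));
    V3 r = { v.x*s, v.y*s, v.z*s };
    return r;
}

// Strands are lit from their tangent T, not a normal:
//   diffuse  = sin(T, L)            (Kajiya-Kay diffuse)
//   specular = sin(T, H)^shininess  (strand specular, H = half vector)
__host__ __device__ float strandLight(V3 T, V3 L, V3 V, float shininess) {
    V3 H = norm3({ L.x + V.x, L.y + V.y, L.z + V.z });
    float TdotL = dot3(T, L);
    float TdotH = dot3(T, H);
    float diffuse  = sqrtf(fmaxf(0.f, 1.f - TdotL * TdotL));
    float specular = powf(sqrtf(fmaxf(0.f, 1.f - TdotH * TdotH)), shininess);
    return diffuse + specular;
}

__global__ void shade(const V3* tangents, float* out, int n, V3 L, V3 V, float p) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = strandLight(norm3(tangents[i]), L, V, p);
}

int main() {
    // Shade a handful of strand tangents against a fixed light and eye.
    const int n = 4;
    V3 hostT[n] = { {1,0,0}, {0,1,0}, {0.7f,0.7f,0}, {0,0,1} };
    V3 L = norm3({0.5f, 1.f, 0.5f}), V = norm3({0.f, 0.f, 1.f});

    V3* dT; float* dOut; float hostOut[n];
    cudaMalloc(&dT, sizeof(hostT)); cudaMalloc(&dOut, n * sizeof(float));
    cudaMemcpy(dT, hostT, sizeof(hostT), cudaMemcpyHostToDevice);
    shade<<<1, n>>>(dT, dOut, n, L, V, 64.f);
    cudaMemcpy(hostOut, dOut, n * sizeof(float), cudaMemcpyDeviceToHost);

    for (int i = 0; i < n; ++i) printf("strand %d: %.3f\n", i, hostOut[i]);
    cudaFree(dT); cudaFree(dOut);
    return 0;
}

The point of the tangent formulation is that a hair fibre reflects light in a cone around its axis rather than a single mirror direction, which is what gives dark hair that stretched, shiny highlight.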
 
Hmmm... the infamous Fudo is saying that the World in Conflict video on nZone is rendered by an 8800GTX.

Correct me if I am wrong, but that is just a cinematic trailer. There is no way the actual gameplay could be that good. If it is, then I'm buying one of these 8800GTX monsters TODAY.

http://uk.theinquirer.net/?article=35655

EDIT: There are two gameplay clips; the Inq's link is for the trailer...
 
The trailer isn't real time. If it were, they would have demoed that fact during the launch; instead, they showed gameplay.
 
So, what's the estimated geometry count for Adrianne? Is it mentioned anywhere? It's very hard to see what the hair really looks like from the material that's been shown so far, but I assumed that since Adrianne's hair was 'up', it was using less geometry than Nalu's. I also assumed that the 'bun' at the back of her head, where the hair is in a ball, is some sort of solid and was not made from individual strands put up into that shape. That was where I was coming from when commenting.

And WRT SSS, just how accurate is their method? Is it possible that pre-calculated SSS using more exhaustive equations could give better visual results than a more basic estimation in real time?
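For reference, the most basic real-time estimations are things like "wrap lighting", which just softens the diffuse falloff past the terminator to fake light bleeding through skin, whereas offline methods integrate a proper diffusion profile. A toy sketch of the wrap idea (my own illustration, not NVidia's method; the wrap amount is an arbitrary constant):

Code:
// wrap_lighting.cu -- toy "wrap lighting" SSS approximation vs plain Lambert.
#include <cstdio>
#include <cmath>

// Plain Lambert diffuse: hard cutoff at the terminator (N.L = 0).
__host__ __device__ float lambert(float NdotL) {
    return fmaxf(0.f, NdotL);
}

// Wrap lighting: light "wraps" past the terminator by an amount w in [0,1],
// crudely mimicking light scattered through skin. w = 0 reduces to Lambert.
__host__ __device__ float wrapDiffuse(float NdotL, float w) {
    return fmaxf(0.f, (NdotL + w) / (1.f + w));
}

int main() {
    // Compare the two models across the terminator region.
    for (float NdotL = -0.5f; NdotL <= 0.5f; NdotL += 0.25f) {
        printf("N.L=%5.2f  lambert=%.3f  wrap(0.5)=%.3f\n",
               NdotL, lambert(NdotL), wrapDiffuse(NdotL, 0.5f));
    }
    return 0;
}

A trick this cheap obviously can't match an exhaustively pre-computed solution for a static pose, but it (and the fancier screen/texture-space blurs) responds to animation and lighting changes, which is presumably the trade-off the demo is making.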
 
Well, in the launch demo, they switched to wireframe mode, and there were so many individual hairs modeled that it still looked solid-shaded.
 
Coincidentally I'm just watching Two Towers on the telly... thinking perhaps Gollum would have been a better choice?
 
I have to apologise for that comment, I guess I was just a bit drunk and grumpy that night ;-)

Actually, the truth is that when you made that comment I tried to find something about Canadian women that makes them stand out, and I couldn't think of anything. The best I had was that they are on average more intelligent than American women, but I'm not so sure these days. :cry:
 
That's not really something outstanding... it's like saying a two-legged dog is better than a one-legged one; whilst true, it's not really a statement about its qualities.
 
Actually, the truth is that when you made that comment I tried to find something about Canadian women that makes them stand out, and I couldn't think of anything. The best I had was that they are on average more intelligent than American women, but I'm not so sure these days. :cry:

lol, I see! Well, that's not just a symptom of your country; that's truly a global (perhaps Western) problem. People are getting fatter, less educated, and have less common courtesy than generations gone by.

But I guess that's something for another thread!!
 
Coincidentally I'm just watching Two Towers on the telly... thinking perhaps Gollum would have been a better choice?

Funny you should say that; I thought a long time ago that Gollum would be brilliant tech demo material on G80. But something tells me they just wouldn't do it justice this generation.

I read somewhere that Gollum used 20 GB of image data alone. I think when we reach the realm of DX11 / 2 GB cards, Gollum could look much more like a real possibility, though I would love to be proved wrong. I'm sure there's a G80 zealot out there who will vouch that it could be done really well on the new hardware?
 
What's interesting is not only the fucking awesome goodness of real-time Navier-Stokes on an amazingly fine-grained grid (destroying the Novodex/Ageia fluid demos), but also that they are doing real-time raytracing at the same time on the same GPU. The G80 is stupidly fast, IMHO; a step-function change from the previous gen, like the PS1.1->R300 transition.
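For anyone who hasn't seen how these grid fluid solvers work: the usual GPU approach is Stam-style "stable fluids", where each frame traces every grid cell backwards along the velocity field and resamples (semi-Lagrangian advection), plus diffusion and a pressure projection. A bare-bones CUDA sketch of just the advection kernel (the grid size and names are my assumptions, the projection/diffusion steps are omitted, and the real demo is far more elaborate):

Code:
// advect.cu -- one semi-Lagrangian advection step from Stam's "stable fluids",
// the core of most real-time grid fluid solvers. 2D, single channel.
#include <cstdio>
#include <cuda_runtime.h>

#define N 256           // grid resolution (assumption; real demos go finer)

__device__ float sample(const float* f, float x, float y) {
    // Clamp to the grid and bilinearly interpolate the field at (x, y).
    x = fminf(fmaxf(x, 0.f), N - 1.001f);
    y = fminf(fmaxf(y, 0.f), N - 1.001f);
    int i = (int)x, j = (int)y;
    float fx = x - i, fy = y - j;
    float a = f[j*N + i],     b = f[j*N + i + 1];
    float c = f[(j+1)*N + i], d = f[(j+1)*N + i + 1];
    return (a*(1-fx) + b*fx) * (1-fy) + (c*(1-fx) + d*fx) * fy;
}

// Trace each cell backwards along the velocity field and pull the old value.
__global__ void advect(const float* src, float* dst,
                       const float* velU, const float* velV, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    if (i >= N || j >= N) return;
    float x = i - dt * velU[j*N + i];
    float y = j - dt * velV[j*N + i];
    dst[j*N + i] = sample(src, x, y);
}

int main() {
    size_t bytes = N * N * sizeof(float);
    float *src, *dst, *u, *v;
    cudaMalloc(&src, bytes); cudaMalloc(&dst, bytes);
    cudaMalloc(&u, bytes);   cudaMalloc(&v, bytes);
    cudaMemset(src, 0, bytes); cudaMemset(dst, 0, bytes);
    cudaMemset(u, 0, bytes);   cudaMemset(v, 0, bytes);

    dim3 block(16, 16), grid(N/16, N/16);
    advect<<<grid, block>>>(src, dst, u, v, 0.1f);
    cudaDeviceSynchronize();
    printf("advection step done: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(src); cudaFree(dst); cudaFree(u); cudaFree(v);
    return 0;
}

The nice property (and why it runs so fast on G80-class hardware) is that every cell is independent, so the whole step is one perfectly parallel pass over the grid.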
 
It's really breathtaking seeing that running on a GPU.

When I saw the bump-mapped torus demo (and the cube-mapped sphere bubble, also awesome) that went with the GeForce 256 back in 2000, I became interested in seeing how long it would take for the tech in these demos to start making its way into games on a bigger scale. Bump mapping (and later normal mapping) could be seen full-scale in games within 3 years of that incredible Blinn bump-mapped torus. (Remember?)

I wonder how long before we see incredible large-scale renderings of these complex smoke, water, and marching-cubes effects in a game?

I also wonder how difficult it would be to combine the calculation, rendering, and interaction of effects like these with traditional real-time gaming environments and dynamic geometry?
 
Thanks for the link, SugarCoat.

I was so sure they'd pull out the hair pins from virtual Adrianne's hair, but it didn't happen :cry:

Every demo was very impressive except Frog and Adrianne. Frog was funny though :)
 
It's really breathtaking seeing that running on a GPU.

When I saw the bump-mapped torus demo (and the cube-mapped sphere bubble, also awesome) that went with the GeForce 256 back in 2000, I became interested in seeing how long it would take for the tech in these demos to start making its way into games on a bigger scale. Bump mapping (and later normal mapping) could be seen full-scale in games within 3 years of that incredible Blinn bump-mapped torus. (Remember?)

I wonder how long before we see incredible large-scale renderings of these complex smoke, water, and marching-cubes effects in a game?

I also wonder how difficult it would be to combine the calculation, rendering, and interaction of effects like these with traditional real-time gaming environments and dynamic geometry?


I guess we'll be seeing it in Crysis 2 :D

I also think that any demo presented by ATI/NVidia takes about 3+ years to reach games... that's not much; I can wait!
 