GeforceFX & ATI demos?

I've been asking that same question since the FX was distributed for p/reviews. With the "lack" of DX9 software out there, I would have thought running ATI's DX9 demos would've been an obvious thing to try...
 
ATI's demos did not work on the GFFX.

It would start to load, and then right when it's supposed to start the demo it would freeze; I had to hard reboot.

That was with the 42.63 driver, btw.

And the GFFX demos did not work on the 9700 Pro.
 
Well, the GFFX demos use nVidia GL extensions, do they not?

The ATI demos, (at least AFAIK) on the other hand, are DirectX, and SHOULD work on the FX, assuming the FX has DX9 compliant drivers. (I'll have to check, but I believe it has been stated by ATI developers that their demos do not check for specific hardware, and "should" run on any card that meets the required DX level.)

In any case, I think the fact that the FX doesn't run ATI's demos is worth mentioning in any p/review.

Did you try any of the RADEON 8500 demos? (PS 1.4?) They should also work on the GeforceFX.
 
Joe DeFuria said:
Well, the GFFX demos use nVidia GL extensions, do they not?

The ATI demos, (at least AFAIK) on the other hand, are DirectX, and SHOULD work on the FX, assuming the FX has DX9 compliant drivers. (I'll have to check, but I believe it has been stated by ATI developers that their demos do not check for specific hardware, and "should" run on any card that meets the required DX level.)

In any case, I think the fact that the FX doesn't run ATI's demos is worth mentioning in any p/review.

Did you try any of the RADEON 8500 demos? (PS 1.4?) They should also work on the GeforceFX.

I didn't try the demos on the GFFX until after our preview.

I didn't try the 8500 demos, just the DX9 ones.

When we do a full review of a retail GFFX, I'll try the DX8 and DX9 ATI demos on it and see what happens.
 
When you do a full review (and I hope it's not JUST the "ultra" version....), and if the DX8/9 demos from ATI don't work, please try and get official clarification from ATI on whether or not the demos look for specific hardware, or just DX8.1 / DX9 compliance.
 
The NV30 demos live are VERY friggin' impressive. It's actually not too much hyperbole when they say 'the dawn of cinematic computing'. The aging car, seen live, looked great: paint peeled off, wood changed colour, rust accumulated, etc.

There's also a demo with a camera zooming around and toys being manipulated. Joe made fun before of how we aren't even at Toy Story level yet; well, I think we are. This scene with the toys and camera effects had the best post-processing (motion blur, depth of field) implementation I've ever seen. I think we are there.

And finally Dawn. A bit misleading, as the background is like 5 triangles, while the entire character, modelled by the same guy who did Aki in FF, had a mind-boggling 3 million polygons. You couldn't see any edges on her body, and the emotions were done extremely well.

Kirk said you can't do these on the R300; the skin shader exceeds the R300 instruction count, and the conditional branches exceed R300 capabilities.
 
To be clear, Kirk said (paraphrasing): "There's nothing that can be done on PS 1.4 hardware that our PS 1.1-1.3 hardware can't do as well. I don't see any benefit to PS 1.4."

He didn't say that PS 1.1-1.3 hardware could just run PS 1.4 code.

So the question is... if he sees no benefit to PS 1.4... then why all of a sudden is nVidia worried about benchmarks that have both PS 1.1 and PS 1.4 paths?! I mean, PS 1.4 doesn't provide any benefits, right?
 
JF_Aidan_Pryde said:
Kirk said you can't do these on the R300; the skin shader exceeds the R300 instruction count, and the conditional branches exceed R300 capabilities.

And that may very well be the case. But how critical is this for consumer software (entertainment or productivity) this year or next?
 
John Reynolds said:
JF_Aidan_Pryde said:
Kirk said you can't do these on the R300; the skin shader exceeds the R300 instruction count, and the conditional branches exceed R300 capabilities.

And that may very well be the case. But how critical is this for consumer software (entertainment or productivity) this year or next?

Considering even John Carmack has said he's bumped into the R300 shader length limitations, I wouldn't be surprised if this gives NVIDIA a better position when doing high end CGI work.
 
JF_Aidan_Pryde said:
Kirk said you can't do these on the R300; the skin shader exceeds the R300 instruction count, and the conditional branches exceed R300 capabilities.
Sounds like more marketing BS from this guy. The R300 should be able to do shaders of any length if a programmer or compiler breaks them down into multiple passes. The only difference would be additional memory bandwidth and vertex processing requirements, but if your shader were over 160 instructions long, performance would almost certainly be bound by fill rate anyway. Someone at SIGGRAPH last year showed a real-time ray-tracing demo running on the R300 that executed thousands of shader instructions per pixel.

Of course, I doubt that means we will see the NV30 demos running on a R300 any time soon. Especially if it was done in OGL using NVIDIA's extensions.
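To illustrate the multi-pass idea, here's a hypothetical sketch (not any real driver or ATI tooling): a compiler can partition a long shader's instruction list into chunks that each fit the hardware limit, with intermediate results written to a render target between passes. The 160-instruction figure and the `split_into_passes` helper are assumptions for illustration only.

```python
# Hypothetical sketch of splitting a long shader into multiple passes
# that each fit a hardware instruction limit (the ~160 figure for
# R300-class hardware is an assumption for illustration). A real
# compiler would also track register dependencies and spill live
# values to textures between passes; this only shows the count-based
# partitioning that makes "shaders of any length" possible.

R300_INSTRUCTION_LIMIT = 160

def split_into_passes(instructions, limit=R300_INSTRUCTION_LIMIT):
    """Partition a shader's instruction list into per-pass chunks."""
    if limit <= 0:
        raise ValueError("instruction limit must be positive")
    return [instructions[i:i + limit]
            for i in range(0, len(instructions), limit)]

# A 500-instruction shader fits in 4 passes on a 160-instruction part:
# three full passes of 160 instructions, plus a final pass of 20.
shader = [f"op{i}" for i in range(500)]
passes = split_into_passes(shader)
print(len(passes))      # 4
print(len(passes[0]))   # 160
print(len(passes[-1]))  # 20
```

The trade-off mentioned above falls out directly: each extra pass re-reads the intermediate render target and re-runs vertex processing, which costs bandwidth rather than making the shader impossible.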
 
GraphixViolence said:
JF_Aidan_Pryde said:
Kirk said you can't do these on the R300; the skin shader exceeds the R300 instruction count, and the conditional branches exceed R300 capabilities.
Sounds like more marketing BS from this guy. The R300 should be able to do shaders of any length if a programmer or compiler breaks them down into multiple passes. The only difference would be additional memory bandwidth and vertex processing requirements, but if your shader were over 160 instructions long, performance would almost certainly be bound by fill rate anyway. Someone at SIGGRAPH last year showed a real-time ray-tracing demo running on the R300 that executed thousands of shader instructions per pixel.

Of course, I doubt that means we will see the NV30 demos running on a R300 any time soon. Especially if it was done in OGL using NVIDIA's extensions.

My bad, I didn't quote him fully. His follow-up was that the R300 would probably need 5 passes to render 'Dawn', compared to one pass on the NV30.
 
JF_Aidan_Pryde said:
Considering even John Carmack has said he's bumped into the R300 shader length limitations, I wouldn't be surprised if this gives NVIDIA a better position when doing high end CGI work.

I think Carmack made that statement in regards to some in-house work he's been doing, not anything at all related to the Doom 3 engine.
 
JF_Aidan_Pryde said:
The NV30 demos live are VERY friggin' impressive. It's actually not too much hyperbole when they say 'the dawn of cinematic computing'. The aging car, seen live, looked great: paint peeled off, wood changed colour, rust accumulated, etc.

There's also a demo with a camera zooming around and toys being manipulated. Joe made fun before of how we aren't even at Toy Story level yet; well, I think we are. This scene with the toys and camera effects had the best post-processing (motion blur, depth of field) implementation I've ever seen. I think we are there.

And finally Dawn. A bit misleading, as the background is like 5 triangles, while the entire character, modelled by the same guy who did Aki in FF, had a mind-boggling 3 million polygons. You couldn't see any edges on her body, and the emotions were done extremely well.

Kirk said you can't do these on the R300; the skin shader exceeds the R300 instruction count, and the conditional branches exceed R300 capabilities.

Aging Car - it blue-screened when I tried it; I never could get it to run.

The toy one - it was cool but very aliased; the camera effects are neat.

Dawn - she looks OK, but not cinematic movie quality yet.
 
John:
You are correct. Looks like he's working on the next thing already. :)

Brent:
I didn't notice the aliasing in the toy demo; it was shown on a projector. But I think it's pretty much at Toy Story level after seeing the sum of the four demos. How about you?
 
John Reynolds said:
JF_Aidan_Pryde said:
Kirk said you can't do these on the R300; the skin shader exceeds the R300 instruction count, and the conditional branches exceed R300 capabilities.

And that may very well be the case. But how critical is this for consumer software (entertainment or productivity) this year or next?

Don't you think all this looks a lot like the old T&L (GeForce1 vs. Voodoo3) controversy? Sure, it took years for the industry to catch up and start taking advantage of it, but someone has to make the first step sometime... and it seems to me Nvidia has always tried to be a pioneer with new technologies/tendencies, as is the case with shaders now... of course, one can question whether they push the envelope a little too far ahead of what is ideal/practical...


Rodrigo
 