Pre-order X800 Pro - ?NDA? - 8 extreme / 12 normal PS pipes

jimmyjames123 said:
I think you are more likely to see substantial IQ gains for the NV40 (as it starts to run PS2 and PS3 code) than significant performance gains. Keep in mind the tests where the 9800XT runs Far Cry faster using the NV3x path.

I guess I just have a gut feeling that the NV40 will gain a significant amount of speed in Far Cry when all is said and done. The SM 3.0 additions are also supposed to help boost performance, but they cannot be exposed until DirectX 9.0c is out.

PS3 will help performance over PS2; the NV40 is currently running mostly PS1.1 in Far Cry. Crytek has stated that PS3 is being used very selectively; they can't and won't be replacing every shader.
 
Looking forward to seeing what the NV driver team and Crytek can do with the NV40's performance and IQ in this game.

Looking even more forward to the ATI reviews on Tuesday :D

MuFu mentioned X800 XT benchmark teaser...does that mean that we will be seeing both X800 Pro and XT cards being reviewed on Tuesday?
 
MuFu said:
fallguy said:
JBark said:
Psikotiko said:
For that price, it had better be faster than the 6800 Ultra.

"up to two times the performance of the acclaimed RADEONâ„¢ 9800 PRO"
Judging by that quote, I'd imagine it's actually the same or a bit slower, since the 6800U seems to be pretty much double the performance of a 9800XT.

Only in a select few games/resolutions. It is far from double in Far Cry.

JFYI, I got a couple of teaser benchmarks from somebody with an XT PE. It's pulling double the FPS of the 9800XT in Far Cry at 1280x1024 with 4xAA/8xAF. This is about 20% faster than a 6800U @ 400/550MHz on the same system, using the same settings (not sure about PS 2.0/3.0, though). They're about the same speed without AA/AF, but this is on a 3.2GHz P4 and the CPU limitation is extremely apparent. The AA/AF margins in Far Cry are reportedly even higher on more powerful, A64-based testbeds.

MuFu.

But is the XT PE going to be released on the 4th?
 
DemoCoder said:
If they're the same speed without AA/AF, then that implies shader/pixel throughput is about the same, and the 20% speed difference comes from either a) faster RAM, b) more efficient AA, c) more efficient AF, or d) some combination thereof.

Since Far Cry is CPU limited, however, I don't think you can actually trust the non-AA modes. I guess we'll have to wait for the ShaderMark figures.

Unless they did something funky, remember that Far Cry uses more PS1.1 shaders on NVIDIA boards than it does on ATI. ShaderMark also wouldn't run on the 60.72 drivers with the 6800.
 
CapsLock said:
Give me a giant freakin break !!!!!! :rolleyes: :rolleyes: :rolleyes:
Ok, arm or leg? :|

Cry us some more crocodile tears, Diggy. Yeah, like it never occurred to you that people at Ati would be unhappy that official specs leaked before the official NDA was up!!! I guess this is just like when you knew you should have contacted Ati first with the leaked slides story. :rolleyes:

You and EB knew exactly what you were doing. Face it, you're a shameless site pimper. You and EB will do whatever it takes for hits, regardless of integrity, ethics, and loyalty.

I also don't like EB for the little Alexa Dos spyware program I get when I visit EB. :devilish:

This BS hammy act is the last straw. I am now officially anti-Dig and anti-EB.

Caps
You misread. I don't care that ATi itself is upset; I'm concerned because I was a part of something that pissed off someone who I personally happen to like. :(

It's a subtle but important distinction.
 
DaveBaumann said:
DemoCoder said:
If they're the same speed without AA/AF, then that implies shader/pixel throughput is about the same, and the 20% speed difference comes from either a) faster RAM, b) more efficient AA, c) more efficient AF, or d) some combination thereof.

Since Far Cry is CPU limited, however, I don't think you can actually trust the non-AA modes. I guess we'll have to wait for the ShaderMark figures.

Unless they did something funky, remember that Far Cry uses more PS1.1 shaders on NVIDIA boards than it does on ATI. ShaderMark also wouldn't run on the 60.72 drivers with the 6800.

How did techreport get the test to work properly then?
http://techreport.com/reviews/2004q2/geforce-6800ultra/index.x?pg=13
 
jimmyjames123 said:
MuFu mentioned X800 XT benchmark teaser...does that mean that we will be seeing both X800 Pro and XT cards being reviewed on Tuesday?

Yes.
 
digitalwanderer said:
Really MuFu?!?!? :oops:

THANKS!!!!!! :D

(I'd only heard rumors, confirmation is SCHWEEEEEET!!!! 8) )

Dig, CapsLock hates you... Remember that...
I know capitalization is cool and all, but forget about it; you should have thought before giving him Alexa spyware...


:LOL:
 
Vysez said:
but forget about it; you should have thought before giving him Alexa spyware...


:LOL:
You really don't think they would give me any kind of server access or any say in how they're set up over there, do you? :|

I'm just the color commentator. Nothing more, nothing less.

You'll have to talk to our webmaster about it, I have no clue what "Alexa Spyware" even is! :LOL:
 
DaveBaumann said:
Unless they did something funky, remember that Far Cry uses more PS1.1 shaders on NVIDIA boards than it does on ATI.

I don't see how that changes anything, since the NV40 will run most shaders using FP16 at about the same speed as FP32. It can only gain a big benefit from FP16 in a smaller percentage of cases. All PS1.1 signals to the driver is that everything can be partial precision, and it lowers register pressure. It potentially makes it easier to optimize for mini-ALU ops (x2/x4/d2/d4/invert/etc.), but both ATI and NVidia benefit from that.

Most shaders in any game are going to be "PS1.1-like": even when using PS2.0, the majority will be short and look exactly like their PS1.1 equivalents, except for precision differences.

Most of the difference will come down to driver compiler differences. On early drivers, it is likely NVidia went for the low-hanging fruit; that is, it is easier to optimize PS1.1 than 2.0. PS1.1 has syntactic mini-ALU hints and is restricted to 2 temporary registers. With only 2 temporary registers, a PS1.1 driver compiler need not do any register shuffling or instruction inlining, and it need not revert to FP16, since 2 registers are below the threshold. With PS2.0, the driver has to be smarter about taking advantage of the mini-ALUs, doing dual issue, and reallocating registers, so I would expect performance differences between 1.1 and 2.0 until the drivers mature.
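
To make that concrete, here is a minimal C++ sketch, assuming the D3DX9 utility library, that compiles one short HLSL function for both the ps_1_1 and ps_2_0 targets. The shader source and entry point are made up for illustration, not taken from Far Cry.

// Minimal sketch (assumed D3DX9 library): compile the same HLSL function
// for the ps_1_1 and ps_2_0 profiles. Shader source is illustrative only.
#include <d3dx9.h>
#include <stdio.h>

static const char g_src[] =
    "sampler diffuseMap : register(s0);\n"
    "float4 main(float2 uv : TEXCOORD0,\n"
    "            float4 light : COLOR0) : COLOR\n"
    "{\n"
    "    // Short 'PS1.1-like' shader: one fetch, one modulate.\n"
    "    return tex2D(diffuseMap, uv) * light;\n"
    "}\n";

static void compileFor(const char *profile)
{
    LPD3DXBUFFER code = NULL, errors = NULL;
    HRESULT hr = D3DXCompileShader(g_src, sizeof(g_src) - 1,
                                   NULL, NULL,       // no macros, no includes
                                   "main", profile,  // entry point, target
                                   0, &code, &errors, NULL);
    if (SUCCEEDED(hr)) {
        // ps_1_1 output may use at most 2 temp registers; ps_2_0 output
        // has 12 temps available and defaults to full precision.
        printf("%s: %lu bytes of token stream\n",
               profile, (unsigned long)code->GetBufferSize());
        code->Release();
    } else if (errors) {
        printf("%s failed: %s\n", profile,
               (const char *)errors->GetBufferPointer());
        errors->Release();
    }
}

int main()
{
    compileFor("ps_1_1");
    compileFor("ps_2_0");
    return 0;
}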
 
digitalwanderer said:
You really don't think they would give me any kind of server access or any say in how they're set up over there, do you? :|

I'm just the color commentator. Nothing more, nothing less.

You'll have to talk to our webmaster about it, I have no clue what "Alexa Spyware" even is! :LOL:

I'm kidding Dig. ;)
 
I don't see how that changes anything, since the NV40 will run most shaders using FP16 at about the same speed as FP32.

Of course it's going to make a difference, especially since there are likely to be large variances in shader lengths between the PS2.0 and PS1.1 shaders. Given the PS2.0 issues they had with NV3x, I would suspect that the majority of their driver compiler efforts went into optimising PS2.0 rather than the relatively solved problems of PS1.x.

If the level used for demoing is one that uses lots of water shaders, that will probably cause large performance variances because of the different shaders used and the screen coverage of the shader.
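
As a rough back-of-the-envelope illustration of that screen-coverage point, the C++ sketch below models a shader's frame-time cost as pixels covered × cycles per pixel ÷ (clock × pipelines). Every number in it (clock, pipe count, cycle counts, coverage fractions) is invented for illustration, not measured from Far Cry or any particular card.

// Toy cost model: a shader's frame-time contribution scales with both its
// per-pixel cycle count and its screen coverage. All numbers are invented.
#include <cstdio>

int main()
{
    const double clockHz   = 475e6;  // hypothetical core clock
    const int    pipelines = 16;     // hypothetical pixel pipe count

    struct Shader { const char *name; double pixels; double cyclesPerPixel; };
    const Shader shaders[] = {
        // A long PS2.0 water shader covering half of a 1280x1024 frame
        // costs far more than a short PS1.1 shader covering a tenth of it.
        { "water (ps2.0)",   0.5 * 1280 * 1024, 20.0 },
        { "terrain (ps1.1)", 0.1 * 1280 * 1024,  4.0 },
    };

    for (const Shader &s : shaders) {
        double seconds = s.pixels * s.cyclesPerPixel / (clockHz * pipelines);
        printf("%-16s %.3f ms per frame\n", s.name, seconds * 1e3);
    }
    return 0;
}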
 
DaveBaumann said:
dan2097 said:
How did techreport get the test to work properly then?
http://techreport.com/reviews/2004q2/geforce-6800ultra/index.x?pg=13

No clue, actually. It failed to run at all for me, and I wasn't the only one with this issue either.

Yeah, I did hear of issues, so it was a pleasant surprise to see ShaderMark at techreport. The results seem to correlate with RightMark and xbitlabs' (?) own tests and aren't by any means bad, so I don't see any malicious reason for Nvidia to stop it working. Is there any chance Nvidia will make a newer driver available for benchmarking against the X800s? Well, they would actually have to have done so already, seeing as the reviews are probably half written.
 
DaveBaumann said:
Of course it's going to make a difference, especially since there are likely to be large variances in shader lengths between the PS2.0 and PS1.1 shaders.

It is not the shader length per se that would cause a difference in efficiency between executing 1.1 vs 2.0 on NVidia HW vs ATI; it is the number of simultaneously live registers. A very short shader could be much less efficient than a longer shader on NV3x, for example. Secondly, most PS2.0 shaders are also relatively short. They are longer than 1.1, but probably on par with PS1.4 limits. I expect few to ever hit 30-40 instructions.

Let me make this explicit: I am talking about an identical HLSL shader compiled for two different profiles: (PS1.1 and PS2.0). *NOT* a low-detail 1.1 shader vs a "high detail" 2.0 shader. Any algorithm that can be squeezed into both 2.0 and 1.x should run about the same.

You're suggesting Far Cry isn't valid because it may be running a degraded (low quality) path on an NV40 detected as an NV3x. Well, that's possible and even likely. I was arguing that a 12-instruction 1.1 or 28-instruction 1.4 shader should run at about the same speed as a 12-28 instruction 2.0 shader that uses predominantly the same operations, on a clock-by-clock basis.

Many PS2.0 shaders in fact are simple enough to be 1.x shaders, and that's the point.

If your point was the former, I agree: Far Cry isn't a good benchmark at this point and should only be used if both cards are running identical IQ shaders.
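
One way to check that clock-for-clock claim, as a hypothetical extension of the D3DX9 sketch earlier in the thread, is to disassemble the token stream each profile produces and compare the listings instruction by instruction. The helper below again assumes D3DX9; 'code' is the buffer D3DXCompileShader returned for one profile.

// Hypothetical helper (assumed D3DX9 library), extending the earlier sketch:
// disassemble the compiled token stream so the ps_1_1 and ps_2_0 outputs of
// the same HLSL function can be compared side by side.
#include <d3dx9.h>
#include <stdio.h>

void dumpAsm(LPD3DXBUFFER code, const char *profile)
{
    LPD3DXBUFFER disasm = NULL;
    if (SUCCEEDED(D3DXDisassembleShader(
            (const DWORD *)code->GetBufferPointer(),
            FALSE,   // no HTML colour codes
            NULL,    // no comment header
            &disasm))) {
        // For a short shader, the two listings should use predominantly the
        // same ops, differing mainly in precision and register conventions.
        printf("--- %s ---\n%s\n", profile,
               (const char *)disasm->GetBufferPointer());
        disasm->Release();
    }
}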
 
Many PS2.0 shaders in fact are simple enough to be 1.x shaders, and that's the point.

Given that they have gone to the extent of supplying differing numbers of PS1.1 and PS2.0 shaders for NV3x and R3x0, it's likely they have already compiled as PS1.1 any PS2.0 shaders that could fit, seeing as that most benefits NV3x because of the integer/FP16 usage.

Although it's not necessarily indicative solely of shader usage/performance, someone has tested the 6800U with ATI's PS2.0 path and it causes a reasonably large performance drop.
 
DaveBaumann said:
Many PS2.0 shaders in fact are simple enough to be 1.x shaders, and that's the point.

Given that they have gone to the extent of supplying differing numbers of PS1.1 and PS2.0 shaders for NV3x and R3x0, it's likely they have already compiled as PS1.1 any PS2.0 shaders that could fit, seeing as that most benefits NV3x because of the integer/FP16 usage.

Although it's not necessarily indicative solely of shader usage/performance, someone has tested the 6800U with ATI's PS2.0 path and it causes a reasonably large performance drop.

Basically the NV40 suffers from the same weakness as the NV3x?
 
K.I.L.E.R said:
Basically the NV40 suffers from the same weakness as the NV3x?

I don't think that's what he is saying; remember that the 9800XT also runs the NV3x path faster than the DX9 path.

Whether there is more to it than the NV3x path just being a less complex shader path, I don't know; there may be other optimizations there too, but clearly there is a fair performance benefit for the 9800XT running the NV path.
 