NV35 - already working, over twice as fast as NV30?

LeStoffer said:
<cough in>

Ask yourself this: If lack of bandwidth was the main reason why the NV30 Ultra didn't take the market by storm, then why did they go to such lengths to run such a high core speed? (Hint: It wasn't the main reason).

<cough out>

Then why increase costs to move to a 256-bit bus with the memory still running at 500MHz?
 
LeStoffer said:
<cough in>

Ask yourself this: If lack of bandwidth was the main reason why the NV30 Ultra didn't take the market by storm, then why did they go to such lengths to run such a high core speed? (Hint: It wasn't the main reason).

<cough out>

Poor per-clock VS performance compared to the R300, horrible per-clock arithmetic performance, and the memory controller actually needing a core clock equal to the memory clock to be efficient?
Just speculation.


Uttar
 
McElvis said:
LeStoffer said:
<cough in>

Ask yourself this: If lack of bandwidth was the main reason why the NV30 Ultra didn't take the market by storm, then why did they go to such lengths to run such a high core speed? (Hint: It wasn't the main reason).

<cough out>

Then why increase costs to move to a 256-bit bus with the memory still running at 500MHz?
I guess writing 8 color values per clock needs quite a lot of bandwidth...
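
A rough back-of-the-envelope, assuming 32-bit colour and (say) 250 MHz core and memory clocks, and ignoring Z, stencil and texture traffic:

colour writes: 8 pixels/clock x 4 bytes x 250 MHz = 8 GB/s
128-bit DDR bus: 16 bytes x 2 x 250 MHz = 8 GB/s
256-bit DDR bus: 32 bytes x 2 x 250 MHz = 16 GB/s

So the colour writes alone could already saturate a 128-bit bus at those clocks; for an 8-pipe part the wider bus looks less like a luxury and more like a necessity.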
 
While running a quick benchmark in Quake3 at 1600x1200 with 4xAA and 8xAF, it got 111 FPS, compared to a GeForce FX 5800 Ultra that got 48 FPS. However, both chips were clocked at 250MHz, because it was still a prototype and they wanted an exact comparison to the NV30; both were set to 250 in order to make a fair benchmark.
The translation @ Nvnews
 
This "250/500 MHz" terminology for DDR still causes no end of confusion. To me, it seems clear that it was 250 MHz DDR... aka "500 MHz" whenever the bigger number serves better (i.e., "marketspeak").

Anyways, I can easily think of situations where Quake III is bandwidth limited... namely, when bandwidth is lower. In such a situation, a true 8x1 fixed-function pixel pipeline would have no trouble outperforming a 4x2 design with half the bandwidth, given the characteristics exhibited. The details of the comparison seem to serve to obscure how mundane such an occurrence is.
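
To put rough numbers on that, assuming the rumoured configurations (an 8x1 NV35 on a 256-bit bus against a 4x2 NV30 on a 128-bit bus, both at 250 MHz core and memory):

8x1 @ 250 MHz: 8 x 250 = 2000 Mpixels/s single-textured fillrate, ~16 GB/s on 256-bit DDR
4x2 @ 250 MHz: 4 x 250 = 1000 Mpixels/s single-textured fillrate, ~8 GB/s on 128-bit DDR

111 vs 48 FPS is about a 2.3x gap, which is roughly what doubling both figures would suggest for a run that is mostly fillrate/bandwidth bound.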

I'm more interested to see comparisons of shader performance, but that is mostly because I considered such a fixed-function performance increase a given, not because it is insignificant to consumers just because it is unsurprising.

I'm not sure the NV35 is necessarily going to clock higher than the NV30, however... then again, I don't think it has to in order to be a much better part for consumers than the NV30, even if the shaders show no significant per-clock improvement, though I hope that is not the case.
 
Bigus Dickus said:
Were they running a map where only single texturing was used?

Hmm..... ;)
With Quake3, that wouldn't be a "map," it'd be a set of detail settings.

It seems to me that 47 fps, even for a 250MHz FX, is just too slow for it to be at low-detail settings.
 
What is beyond me is that no one has drawn THIS conclusion from the NV35 benchmarks yet.

I think that, for being clocked at 250/250, its performance is still disappointing.

I come to this conclusion based on a few facts.

1) They said 42.72 drivers in "performance" mode

- This means that it is running forced bilinear filtering rather than trilinear, as shown by [H]'s review of those drivers.

- Factoring in the above, I'd say that if they were to enable trilinear, then the 111 FPS would drop considerably, to anywhere in the 45-60 FPS range.

- According to Anandtech (http://www.anandtech.com/video/showdoc.html?i=1779&p=16), the Radeon 9700 (275/270, not clocked TOO much higher than the 250/250 of the NV35) gets 90.2 FPS with 4xAA/8xAF in quality mode.


Can someone tell me where my logic is off? Or is the NV35 just more smoke and mirrors that Nvidia is blowing in the eyes of consumers?

comments?
 
galperi1 said:
- Factoring in the above, I'd say that if they were to enable trilinear, then the 111 FPS would drop considerably, to anywhere in the 45-60 FPS range.

I don't think I've EVER seen trilinear eat up 50% of your framerate.
 
Johnny Rotten said:
I don't think I've EVER seen trilinear eat up 50% of your framerate.

Go look at [H]'s review of the 5600 Ultra, where they switch from Aggressive (bilinear) to Application (trilinear) with 4xAA and 8xAF enabled:

UT2003    | "Application" | "Aggressive"
1280x1024 | 12.6 FPS      | 39.6 FPS
1024x768  | 19.4 FPS      | 66.6 FPS

As you can see, with trilinear enabled, the 5600 Ultra (based off the NV30 core) gets roughly 30% of its original bilinear performance, making my 50% look very optimistic.
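
(For the record: 12.6/39.6 ≈ 0.32 and 19.4/66.6 ≈ 0.29, so "roughly 30%" checks out. Applied naively to the 111 FPS figure that would be about 33-35 FPS, though Quake3 is much lighter than UT2003, so the actual hit in Q3 would presumably be smaller.)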
 
Xmas said:
galperi1 said:
1) They said 42.72 drivers in "performance" mode
No, they didn't say that.
I think he reached that conclusion because they said:
Chip.de
Both systems ran Windows XP Pro SP1 and Detonator version 42.74 (a classic "Performance" driver).

Personally I don't see how you can have a "performance" driver which doesn't sacrifice something to achieve that performance, and with nVidia's tendency to ditch IQ at the drop of a hat recently, galperi1's conclusions are not unreasonable, IMO.

They could set whatever they want in-game (and I believe they say max settings), but if they've chosen "balanced" in the drivers then they're not doing trilinear.
 
grrrpoop said:
Personally I don't see how you can have a "performance" driver which doesn't sacrifice something to achieve that performance, and with nVidia's tendency to ditch IQ at the drop of a hat recently, galperi1's conclusions are not unreasonable, IMO.

They could set whatever they want in-game (and I believe they say max settings), but if they've chosen "balanced" in the drivers then they're not doing trilinear.
I don't know why they put that comment there. Maybe because it's one of those '3DMark optimized' drivers (it's really the only reason I can see). On the other hand, this driver supports 8xS and 16x antialiasing in OpenGL, which do not really fit the "performance" label.

Yes, it is very likely that they used the 'balanced' setting (which, btw, is renamed to 'quality' in those new drivers). It's the default, I think.
Maybe someone who has a GFFX card could test how much of a difference there is in Q3 between application and balanced?
 
Xmas said:
grrrpoop said:
Personally I don't see how you can have a "performance" driver which doesn't sacrifice something to achieve that performance, and with nVidia's tendency to ditch IQ at the drop of a hat recently, galperi1's conclusions are not unreasonable, IMO.

They could set whatever they want in-game (and I believe they say max settings), but if they've chosen "balanced" in the drivers then they're not doing trilinear.
I don't know why they put that comment there. Maybe because it's one of those '3DMark optimized' drivers (it's really the only reason I can see). On the other hand, this driver supports 8xS and 16x antialiasing in OpenGL, which do not really fit the "performance" label.

Yes, it is very likely that they used the 'balanced' setting (which, btw, is renamed to 'quality' in those new drivers). It's the default, I think.
Maybe someone who has a GFFX card could test how much of a difference there is in Q3 between application and balanced?


Let's look at some facts then:

1) Nvidia told review sites to use the 42.72 "performance" drivers for the FX family reviews

2) They also told review sites to benchmark AF situations using the "aggressive" setting, which forces bilinear filtering

Why would they stray from their strategy thus far? Obviously using those settings will give them the BEST numbers (a la 111 FPS).

That is how I came to that conclusion. They will use any means necessary to boost their FPS, as shown by the drastic difference between aggressive and balanced/application settings in [H]'s review of the FX family. This is just another example.

I feel that the current state of the NV35 is sub-standard. If I'm right about the driver settings and the 9700 NP can beat it clock for clock, you can almost guarantee that the 9800 will too.
 
Didn't Anand hint that the NV35 will not be able to beat the R350? I remember reading that.
 
Xmas said:
Yes, it is very likely that they used the 'balanced' setting (which, btw, is renamed to 'quality' in those new drivers). It's the default, I think.
"Quality", yeah right :rolleyes: Another attempt by nvidia is mislead the consumer and reviewers. The only "quality" mode nvidia has is the "Application" setting. Making "Quality" the default does a serious disservice to consumers: Fake trilinear and other shortcuts are the default!

Plus, nvidia has yet to release a WHQL certified driver for the GeForce FX series cards, yet you can buy the GeForce FX 5800 in stores! So much for nvidia's vaunted driver quality. :rolleyes: And of course nvidia insists that websites use a "benchmark driver" (42.76) for GeForce FX reviews...

nvidia has gone on and on about "cinematic rendering" yet they release drivers with inferior image quality by default and the products are extremely slow when doing anything remotely "cinematic".
Maybe someone who has a GFFX card could test how much of a difference there is in Q3 between application and balanced?
HardOCP already did this: http://www.hardocp.com/article.html?art=NDQ0LDg=. Granted, this wasn't the 5800, but the results are still relevant. Note the huge gains with AA and AF.

For image quality analysis check out http://www.hardocp.com/article.html?art=NDQ0LDEw.

From HardOCP's conclusion:
Without question, the term "cinematic graphics" has a very positive connotation. Upon hearing this term you think of words such as "power", "speed", and "quality". Even more you think about how all those CG movies look on the big screen. Unfortunately, the only word that should come to mind when hearing about GeForceFX cinematic graphics in the future is "marketing", because that is the situation we are faced with.

Placing all of the new features aside, how on earth can anyone look at a card whose image quality must be lowered to remain competitive and declare it "cinematic"? Furthermore, it is a bit alarming to see NVIDIA claim that gamers will be "experiencing cinematic graphics the way it’s meant to be played" with their $79 card.
 
Some of you guys have short memories.

16-bit Z vs. 32-bit Z
16-bit vs. 24-bit vs. 32-bit FP
AA differences

Now:
ATI's aniso not being as good on certain angles vs. whatever Nvidia is doing in balanced mode

Both companies are known to cut corners on quality where it makes sense for performance. IMHO, if you need to flip between screenshots 20 times to notice a difference, the extra performance you gain from this corner cutting is well worth it.
 
Hmmmz

Deflection said:
Some of you guys have short memories.

16-bit Z vs. 32-bit Z
16-bit vs. 24-bit vs. 32-bit FP
AA differences

Now:
ATI's aniso not being as good on certain angles vs. whatever Nvidia is doing in balanced mode

Both companies are known to cut corners on quality where it makes sense for performance. IMHO, if you need to flip between screenshots 20 times to notice a difference, the extra performance you gain from this corner cutting is well worth it.
:rolleyes: The MAIN thing currently is the morality behind it!
 
galperi1 said:
Johnny Rotten said:
I don't think I've EVER seen trilinear eat up 50% of your framerate.

Go look at [H]'s review of the 5600 Ultra, where they switch from Aggressive (bilinear) to Application (trilinear) with 4xAA and 8xAF enabled:

UT2003    | "Application" | "Aggressive"
1280x1024 | 12.6 FPS      | 39.6 FPS
1024x768  | 19.4 FPS      | 66.6 FPS

As you can see, with trilinear enabled, the 5600 Ultra (based off the NV30 core) gets roughly 30% of its original bilinear performance, making my 50% look very optimistic.

If the only difference between the two was trilinear vs. bilinear, then that would be true. But my guess is that there are other "optimizations" in the AF algorithm that help increase performance in the balanced and aggressive modes, not just the switch to bilinear.
 