Xbit on R350: they say 16 TMUs + DDRII

As it is, a nearly 200 MHz advantage only nets them a 30% speed advantage... and that's based on internal Nvidia tests with older ATI drivers.

And very early, probably unoptimised GeForce FX drivers.
 
Hellbinder[CE] said:
I am a little curious about your doubt in the R350, based on your comments on the main page. You seem to be in the same boat as a lot of people who fail to realize that the R300 is faster clock for clock than the NV30 in current games. I think that's going to become pretty damn clear in a few weeks' time. As it is, a nearly 200 MHz advantage only nets them a 30% speed advantage... and that's based on internal Nvidia tests with older ATI drivers.

Well... you need to realize that the NV30 is 200 MHz faster (500 MHz vs 300 MHz). That works out to 2/5, or 40% higher clock rate. That's only a little more than 30%, so I could easily see that 10% delta being due to CPU limitations. The thing is, the R300/R350 would hit those same limitations as its clock speed increased.

Another part of the performance difference could be unoptimized drivers. But then again, the NV30 may be released with unoptimized drivers like the R8500 was, in which case the effect on performance would be significant. Even so, at this point it's way too early to say for sure that the NV30 architecture is less efficient clock for clock.
 
Performance Delta

Complete guess but could it be the Vertex throughput he's talking about?

375M@500MHz for the GFFX vs 325M@325MHz for the 9700Pro...

From that the 9700Pro certainly appears to be faster clock for clock.
 
Well... you need to realize that the NV30 is 200 MHz faster (500 MHz vs 300 MHz). That works out to 2/5, or 40% higher clock rate. That's only a little more than 30%

325 + ~54% = ~500
no?

500 - 325 = 175
175 / 325 = .538 (53.8%)
 
Re: Performance Delta

Heathen said:
Complete guess but could it be the Vertex throughput he's talking about?

375M@500MHz for the GFFX vs 325M@325MHz for the 9700Pro...

From that the 9700Pro certainly appears to be faster clock for clock.

It could be, but that is dealing with theoretical numbers rather than real-world numbers. However, the 9700 Pro does appear to be faster on a clock-for-clock basis when it comes to triangle throughput. What this equates to in real-world data is anyone's guess.

I suppose what Dave or one of the other sites could do when they are benchmarking the cards is underclock the NV30 core back to 325 MHz and benchmark it against the 9700 to see what sort of comparison that gives.
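
Failing that, you could also just normalize whatever scores come out by core clock. A minimal Python sketch of what I mean is below; the fps figures are placeholders purely for illustration, not real benchmark results:

```python
# Rough per-clock normalisation as an alternative to physically underclocking.
# The fps figures are PLACEHOLDERS for illustration, not real benchmark results.
cards = {
    "NV30 (500 MHz)": {"clock_mhz": 500, "fps": 100.0},  # hypothetical score
    "R300 (325 MHz)": {"clock_mhz": 325, "fps": 77.0},   # hypothetical score
}

for name, card in cards.items():
    # Frames per second delivered per MHz of core clock.
    print(f"{name}: {card['fps'] / card['clock_mhz']:.3f} fps per MHz")
```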
 
In terms of vertex performance per clock, it seems the ATI R300 wins without question. The R300 has a straight four vertex shaders, correct? It achieves one vertex per cycle, whereas the NV30 has "a sea of vertex math engines". From what I read, this works out to roughly the same as having three vertex shader units, hence the assumption that the NV30 has three vertex shaders -- even though the NV30 doesn't have vertex shader units, it has an array of vertex processors not unlike 3D Labs' P10. Only by clocking the NV30 to 500 MHz does it *just* surpass the 325 MHz R300.
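
Taking the theoretical throughput figures quoted earlier in the thread at face value (375M vertices/s at 500 MHz for the NV30, 325M vertices/s at 325 MHz for the 9700 Pro), the per-clock comparison is just this:

```python
# Vertices per clock, using the theoretical figures quoted in this thread.
nv30_vps, nv30_clk = 375e6, 500e6   # 375M vertices/s at 500 MHz
r300_vps, r300_clk = 325e6, 325e6   # 325M vertices/s at 325 MHz

print(f"NV30: {nv30_vps / nv30_clk:.2f} vertices per clock")  # ~0.75
print(f"R300: {r300_vps / r300_clk:.2f} vertices per clock")  # ~1.00
```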
 
Yeah, my mistake, I was looking at the percentage as based on the NV30's clock, rather than the R300's. :oops:

Both percentages are technically correct, but you have to take the percentage relative to the part in question (NV30) since we're looking at the performance delta it has over the R300.
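
Just to make the arithmetic concrete, here's a quick sketch of how the same 175 MHz gap reads against either clock (purely illustrative, using the 500/325 MHz figures from this thread):

```python
# The same 175 MHz gap expressed against either card's clock.
nv30_clk, r300_clk = 500, 325            # core clocks in MHz
delta = nv30_clk - r300_clk              # 175 MHz

print(f"relative to the R300: +{delta / r300_clk:.1%}")  # ~+53.8%
print(f"relative to the NV30: -{delta / nv30_clk:.1%}")  # ~-35.0%
```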

Now looking at that it does seem that it might be less powerful clock for clock. If so, that doesn't look good for Nvidia. But it could still be unoptimized drivers and CPU limitations.
 
I think we are just shooting into the wind here; we really don't know the real performance of the GF FX until it is released, regardless of what Nvidia claims, which really isn't much and isn't very specific, since as far as I know no real benchmark numbers were released, nor the conditions they were run under. I also believe that if the GF FX were able to do 30% better, as Nvidia states, they would have had some sample cards out to reviewers to promote their upcoming product and get some momentum rolling for a great launch. They haven't as of yet.

I doubt that at high resolutions with 4x AA and 8x AF the GF FX will distinguish itself from the Radeon 9700 Pro; I believe it will be the opposite, with the Radeon 9700 Pro holding its own in that situation. In the 1024x768x32 3DMark2001 SE benchmark the GF FX will probably own the Radeon 9700 Pro due to its higher clock rate, but crank up the resolution and add 4x AA and 8x AF, and I believe the Radeon 9700 Pro will be faster.
 
Look at what the NV30 is accomplishing with a 128-bit bus, though. It might not be destroying the R300 like the R300 did the Ti4600, but the technology put into the NV30 to allow it to even be 30% faster than the R300 on a 128-bit bus is pretty impressive to me. Imagine what Nvidia could do with a 256-bit bus on the same GPU/RAM specs that the NV30 already has.
 
That's funny. Unless you have an NV30 right now, I'd say that comment is way off base... kind of like a Chalnoth comment... what a joke. What are you basing these numbers on???
Let's not get too far ahead of ourselves here... I personally doubt an NV30 will be able to outperform a Radeon 9700 in performance where it counts (eye candy: FSAA and anisotropic filtering enabled), especially at high resolutions above 1024x768... I mean, that's what we pay $400 for, correct? In fact, these new cards today should not have to run without it.
I don't care how much bandwidth savings they have, they can't overcome the hardware bottleneck when sampling is increased along with AF.

Unless it is some form of blur filter (Quincunx II)... then IQ must also come into play.
BTW, it's not easy implementing a 256-bit bus for a consumer-level card; it's funny to see people just overlook probably one of the most advanced consumer-level graphics memory interfaces in the world... and scalable at that (multi-chip). Parhelia too.
 
Parhelia too.
To save you from appearing as a real ATI fanb*y, nice addition there! :)
AA would benefit greatly from a 256-bit memory bus, but not aniso; aniso benefits from higher clock speeds and efficient algorithms, both of which the NV30 offers to the full extent.
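
To put a rough number on why AA leans on raw bandwidth, here's a back-of-the-envelope sketch. It ignores compression, Z/stencil traffic and texturing entirely, and the frame rate and overdraw figures are just assumptions, so treat it as illustration only:

```python
# Back-of-the-envelope color-buffer write traffic with and without 4x AA.
# Ignores compression, Z/stencil traffic and texture reads; frame rate and
# overdraw are assumed values, not measurements.
width, height = 1024, 768
bytes_per_pixel = 4      # 32-bit color
fps = 100                # assumed frame rate
overdraw = 2.0           # assumed average overdraw

def color_traffic_gb_s(samples):
    bytes_per_frame = width * height * bytes_per_pixel * samples * overdraw
    return bytes_per_frame * fps / 1e9

print(f"no AA: ~{color_traffic_gb_s(1):.1f} GB/s of color writes")
print(f"4x AA: ~{color_traffic_gb_s(4):.1f} GB/s of color writes")
```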

Anyway, the latest reports seem to suggest a 50% lead for the NV30 over the R300 (obviously, in the situations most suitable for the former), so I wouldn't kill the NV30 off yet if I were you, Doomtrooper...
 
Interesting speculation here; I'm enjoying it. For my part, I'll venture that the NV30 debuts with fairly mature drivers. I base that on guessing that the driver team has had something final for several months now, and even if the hardware was running at very low clock speeds or by way of simulation, it was fully featured. If so, I see no reason for them not to have been optimizing the drivers. Granted, hardware compatibility might be another issue.

Lol, the good news would be fast, game-compatible drivers right out of the box; the bad news might be that the leaked and rumored performance numbers won't be greatly improved on all that quickly or all that easily.

We seem to be living in "interesting times" without the negative connotations of that old Chinese saying (curse), "May you live in interesting times". :)
 
emotionstation said:
Look at what the NV30 is accomplishing with a 128-bit bus, though. It might not be destroying the R300 like the R300 did the Ti4600, but the technology put into the NV30 to allow it to even be 30% faster than the R300 on a 128-bit bus is pretty impressive to me. Imagine what Nvidia could do with a 256-bit bus on the same GPU/RAM specs that the NV30 already has.
Ummm, why don't you look at the facts here?
It's using a 128-bit bus, sure, with DDR-II running at 500 (1000) MHz.
Imagine what ATI could do with DDR-II running at those speeds...

See?
The comparison is kinda useless.
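
For reference, the raw theoretical numbers end up in the same ballpark anyway. A quick sketch, taking the 500 (1000 effective) MHz DDR-II figure from the post above for the NV30, and assuming the commonly quoted ~310 (620 effective) MHz DDR for the 9700 Pro (that last figure is my assumption):

```python
# Theoretical peak memory bandwidth: bus width in bytes * effective data rate.
def bandwidth_gb_s(bus_bits, effective_mhz):
    return (bus_bits / 8) * effective_mhz * 1e6 / 1e9

# NV30: 128-bit DDR-II at 500 (1000 effective) MHz, per the post above.
print(f"NV30: {bandwidth_gb_s(128, 1000):.1f} GB/s")  # ~16.0
# 9700 Pro: 256-bit DDR at ~310 (620 effective) MHz -- assumed figure.
print(f"R300: {bandwidth_gb_s(256, 620):.1f} GB/s")   # ~19.8
```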
 
alexsok said:
Parhelia too.
To save you from appearing as a real ATI fanb*y, nice addition there! :)
AA would benefit greatly from a 256-bit memory bus, but not aniso; aniso benefits from higher clock speeds and efficient algorithms, both of which the NV30 offers to the full extent.

Do we know that the NV30 offers efficient AF algorithms? I mean, we know that its drivers will offer quality and performance settings, but we don't know that its AF will be as fast or look as good as the 9700's (which, no, isn't without a few shortcomings IQ-wise either).

And please don't call someone else a fanb*y. It's hypocritical coming from you.
 
alexsok said:
(...) but not aniso; aniso benefits from higher clock speeds and efficient algorithms, both of which the NV30 offers to the full extent.

Yeah, just like on the GF-family, right? :rolleyes:
Going by its track record, NV aniso is typically very slow compared to ATI's.
 
Doomtrooper said:
I personally doubt an NV30 will be able to outperform a Radeon 9700 in performance where it counts (eye candy: FSAA and anisotropic filtering enabled), especially at high resolutions above 1024x768... I mean, that's what we pay $400 for, correct?

Just what I need, more framerate in counter-strike.

You never answered the question I asked a long time ago: would you buy a Radeon which had identical performance to the 9700 PRO and identical AA/aniso IQ, but at half the price because it is only DirectX 7 capable?

Seems to me that either you care about DX9 performance, or you don't. If you do, then the NV30's DX9 performance becomes a relevant benchmark measure. If you don't, then you're admitting that you're paying a lot of extra $$$ for a "token" feature which will never really be used.

Of course, I suspect that what you really care about, and what's really relevant, is whatever feature the 9700 does faster. If feature X is faster, you will claim it is a feature that matters. If feature Y is slower than on the NV30, you will claim it is a feature that doesn't matter.
 