NV35 - already working, over twice as fast as NV30?

Lezmaka said:
If the only difference between the two was trilinear vs bilinear, then that would be true. But my guess is that there are other "optimizations" in the AF algorithm that help increase performance in the balanced and aggressive modes, not just using bilinear.

It's not just a guess - you're right. The mipmap transitions are moved closer for the first couple of levels. This lets them use less texture bandwidth and take fewer aniso samples in those regions. The trouble is that those first few mipmap levels cover the majority of the scene, which is why they get such a performance boost. Since aggressive mode is essentially bilinear filtering already, these closer mipmap transitions don't make things any better there.
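A rough way to see why moving the transitions helps: hardware picks a mip level from the log2 of the texel-to-pixel scale factor plus an LOD bias, and a positive bias makes the smaller, cheaper levels kick in sooner. A minimal sketch, assuming the textbook LOD formula with a bias as the mechanism for shifting transitions (the function name is mine, and this is not a claim about NVidia's actual driver logic):

```python
import math

def mip_level(texels_per_pixel: float, lod_bias: float = 0.0) -> int:
    """Select a mip level from the texture-to-screen scale factor.

    lambda = log2(rho) + bias is the standard LOD formula; a positive
    bias pushes transitions closer to the viewer, so smaller (cheaper)
    mip levels get sampled over more of the scene.
    """
    lam = math.log2(max(texels_per_pixel, 1.0)) + lod_bias
    return max(0, int(lam))

# With no bias, 3.5 texels/pixel lands in mip 1; a +1 bias bumps it to mip 2.
print(mip_level(3.5, 0.0))  # 1
print(mip_level(3.5, 1.0))  # 2
```

Every region that gets bumped up a level samples a texture with a quarter the texels, which is where the bandwidth saving comes from.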

Deflection said:
Some of you guys have short memories.
Now it's ATI's aniso not being as good on certain angles vs. whatever Nvidia's doing in balanced mode.

Both companies are known to cut corners on quality where it makes sense for performance. IMHO, if you need to flip between screenshots 20 times to notice a difference, the extra performance you gain from this corner cutting is well worth it.

Well, NVidia's "aggressive" mode doesn't require you to flip 20 times to notice a difference. If you look at the 3DVelocity or HardOCP screenshots, the quality difference is immediately noticeable. Their "balanced" mode is a good compromise, however.

As for ATI's corner cutting, I don't think it helps performance very much. Very few surfaces in a typical scene are affected by those angle peculiarities, whereas NVidia's corner cutting affects much of the scene. I think they just didn't put much effort into their algorithm for detecting where aniso is needed. Nonetheless, I think we all would have preferred it if ATI had done it properly.
 
Lezmaka said:
If the only difference between the two was trilinear vs bilinear, then that would be true. But my guess is that there are other "optimizations" in the AF algorithm that help increase performance in the balanced and aggressive modes, not just using bilinear.

True, but why didn't they just do something to optimize the standard trilinear/anisotropic filtering implementation? Or give one the option to enable "full" quality aniso in drivers?
 
A NV30 at 250/500 was compared to a NV35 at 250/500

nVidia got NV35 samples working at 400 MHz, according to MuFu and other people on these forums.
The reason nVidia demoed it at 250/500 was to make people think it's going to be twice as fast when it hits retail, with 220 FPS in Q3 at 4x AA & 8x AF.

That's not the case. This is a memory-bandwidth-limited situation. We'd be lucky to get a 30% increase with final silicon / drivers.

I thought nVidia didn't want to hype the NV35 too much, and that the hype mostly came from rumors on the forums.
Sounds like I was wrong.
I would have sincerely preferred it if nVidia had just shut up about the NV35 and made it one big surprise when announced.
Sounds like they aren't :(


Uttar
 
Just to clear things up once and for all:

We talkin' 500MHz real or effective? Meaning, do all NV30 cards have synchronous core/mem clocks?
 
I'm reading, for example, Wavey's "250/500" as 250 MHz core, 250 MHz DDR memory.

My "250/500" was "250 MHz DDR aka 500 MHz DDR by marketing".

Perhaps that helps clarify the terms in use?
 
DaveBaumann said:
How do I normally represent DDR RAM speeds?
Dave's FX5800Ultra Preview said:
The 500MHz DDR-II RAM also runs hot which is why there are large heatsinks covering the RAM
Dave's 9800 Pro Review said:
Currently there are two products in the range, the Radeon 9600 with core/memory speeds of 325/400 (200MHz DDR) and the Radeon 9600 PRO at 400/600 (300MHz DDR)
:p


I'm sorry, just teasing.

Normally you use "1000MHz (500MHz DDRII)" or "500MHz (250MHz DDR)" which is IMO very clear.
So which speed did those NV35 samples run at?
250MHz (core) / 1000MHz (500MHz DDR), or 250MHz (core) / 500MHz (250MHz DDR)? :?:
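For what it's worth, the "effective" marketing number is just the real clock times the number of data transfers per clock, so converting between the two notations is mechanical. A quick sketch (the function name is mine):

```python
def real_clock(effective_mhz: float, pumps_per_clock: int = 2) -> float:
    """DDR marketing quotes transfers per second ("effective MHz");
    divide by the transfers per clock (2 for DDR) to get the real clock."""
    return effective_mhz / pumps_per_clock

print(real_clock(1000.0))  # 500.0 -> "1000MHz (500MHz DDRII)"
print(real_clock(500.0))   # 250.0 -> "500MHz (250MHz DDR)"
```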
 
DaveBaumann said:
How do I normally represent DDR RAM speeds?

Well, there are two distinctions I was making.

The first, in the prior post of mine, was that people represent 250 MHz DDR as 500 MHz DDR memory, due to the common marketing speak usage, and that people seemed to be confusing 250/500 MHz DDR with 500/1000 MHz DDR. This is the post that might be construed as addressing a fault in your usage, though that was not my focus.

After reading a bit, I realized that the wording I used in my discussion may have increased the opportunity for confusion instead of reducing it, through my use of "250/500": I used it as equivalent to (borrowing your 9800 Pro review usage) "500 (250 MHz DDR)", while you used the exact same wording to express core/mem speeds in the marketspeak notation.
So, to attempt to correct that error of mine, I tried to contrast what I meant in that post to your usage, and clarify the context of my own usage.

Your question seems to take this second post as a criticism of you. For my own understanding, it seems evident that you consider the clock speeds of the hardware being tested the same as I do. For the understanding of others, it seems to be getting confused with the 5800 Ultra's "500 MHz DDR II", and mr is illustrating how that can occur, I think.

His final quote illustrates what is required, thanks to marketspeak confusion, to communicate clearly nowadays, unfortunately. :-?
 
I don't think I've yet found a situation where the actual memory clock wasn't obvious from the context. This certainly is no different.
 
Well, it wasn't 250/250 for both cards; it seems it was 250/250 for the NV35 and 250/500 for the NV30. Now I could have misunderstood what marc @ HFR was saying (that it was 250/500 for both), but that's unlikely!
 
The fact that it's a puzzle to even figure out the memory/core speed of the NV35 sample, leads me to believe the performance is going to be crap. :rolleyes:
 
You know, I'm beginning to hesitate here.
http://www.anandtech.com/video/showdoc.html?i=1779&p=16

GeForce FX 5800 Ultra @ 500/500, 1600x1200 4x AA & 8x Balanced AF: 102.2 FPS

GeForce FX 5800 Ultra @ 250/? , 1600x1200 4x AA & 8x AF ( Balanced? Aggressive? ) : 48 FPS


This is seriously beginning to confuse me.
This would theoretically mean it's 250/250 (effective 250/500), because at 250/500 (effective 250/1000) you'd get much more than that.

So the NV35 is at either 250/500 (effective 250/1000) or 250/250 (effective 250/500).

nVidia *didn't* compare an 8GB/s card with a 32GB/s one, right? Right? Please tell me they didn't :(
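Those 8GB/s and 32GB/s figures fall straight out of real clock × transfers per clock × bus width in bytes. A sketch of the arithmetic (the 128-bit vs 256-bit bus widths for the two configurations are my assumption):

```python
def bandwidth_gbps(real_clock_mhz: float, bus_width_bits: int,
                   pumps_per_clock: int = 2) -> float:
    """Peak memory bandwidth in GB/s:
    real clock * transfers per clock * bus width in bytes."""
    return real_clock_mhz * 1e6 * pumps_per_clock * (bus_width_bits // 8) / 1e9

# 250 MHz DDR ("500 MHz effective") on an assumed 128-bit bus -> 8 GB/s
print(bandwidth_gbps(250, 128))  # 8.0
# 500 MHz DDR ("1000 MHz effective") on an assumed 256-bit bus -> 32 GB/s
print(bandwidth_gbps(500, 256))  # 32.0
```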


Uttar
 