NV35 - already working, over twice as fast as NV30?

I thought your conclusions were the only way to read it in the first place.

If we're talking about a 256-bit bus, then we're looking at both cores at 250 and both memories at 250 DDR, which would give the NV35 twice the bandwidth.

So going by Anand, cutting the speed in half on both memory and core gives you a linear 50% drop in performance on the NV30.
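Rough back-of-the-envelope with the numbers above (the little helper below is just illustrative, not vendor specs):

```python
# Peak theoretical bandwidth for the clock speeds discussed above.
# Assumes DDR moves data twice per memory clock; purely illustrative numbers.

def memory_bandwidth_gb_s(bus_width_bits, mem_clock_mhz, ddr_multiplier=2):
    bytes_per_transfer = bus_width_bits / 8
    transfers_per_sec = mem_clock_mhz * 1e6 * ddr_multiplier
    return bytes_per_transfer * transfers_per_sec / 1e9

print(memory_bandwidth_gb_s(128, 250))  # ~8 GB/s  - 128-bit bus (NV30-style)
print(memory_bandwidth_gb_s(256, 250))  # ~16 GB/s - 256-bit bus (rumoured NV35)
```

Same clocks, double the bus width, double the bandwidth.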

:?:
 
Want to know a nice thing about running at 'low' clock speeds... it can help to avoid CPU limited situations.
 
Colourless said:
Want to know a nice thing about running at 'low' clock speeds... it can help to avoid CPU limited situations.
Right, but Quake3 wasn't going to be very CPU-limited anyway below around 250-300 fps on current high-end systems.
 
It seems pretty clear to me that when you drop the speed of the NV30 by a factor of two and run it at 1600x1200, 32-bit, with 4xFSAA/8x aniso, it will most likely be memory bandwidth limited.

Now when you run the NV35 at the exact same core/memory speeds, lo and behold, it's roughly twice as fast with its twice-as-large memory bandwidth (due to a 256-bit wide memory bus).

What is so surprising or confusing about this? It is a nice confirmation that NV35 at least HAS a 256 bit wide bus.
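For a feel of why that setting is bandwidth-bound, here's a very crude framebuffer-traffic estimate; every number in it (bytes per sample, overdraw, no compression, no texture traffic) is an assumption for illustration only:

```python
# Very crude framebuffer-traffic estimate for 1600x1200 with 4x AA.
# Ignores colour/Z compression, texture reads and caching; the overdraw
# factor and per-sample costs are assumed values.

width, height, samples = 1600, 1200, 4
bytes_per_sample = 4 + 4            # 32-bit colour + 32-bit Z per sample
overdraw = 3                        # assumed average overdraw

traffic_per_frame = width * height * samples * bytes_per_sample * overdraw
for bandwidth_gb_s in (8, 16):      # 128-bit vs 256-bit bus at 250 MHz DDR
    fps_cap = bandwidth_gb_s * 1e9 / traffic_per_frame
    print(f"{bandwidth_gb_s} GB/s -> roughly {fps_cap:.0f} fps ceiling")
```

The absolute numbers are meaningless, but the ratio is the point: if the chip is stuck behind framebuffer traffic, doubling the bus width roughly doubles the frame rate.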
 
I would have sincerely preferred it if nVidia had just shut up about the NV35 and made it one big surprise when announced.

NVIDIA will only manage to surprise me this time around, if they improve AA/aniso algorithms.
 
Ailuros said:
NVIDIA will only manage to surprise me this time around, if they improve AA/aniso algorithms.

Not gonna happen...
 
Ailuros said:
NVIDIA will only manage to surprise me this time around, if they improve AA/aniso algorithms.

I'm perfectly happy with Nvidia's Application AF (it is better, IQ-wise, than ATi's Quality at equal sample settings), if it runs with a substantial decrease to its performance hit compared to the GF4. Nvidia's AA is a whole other issue. . .blech!
 
Josiah said:
NVIDIA will only manage to surprise me this time around, if they improve AA/aniso algorithms.

isn't this mostly a driver issue?

Only their aniso is a driver-controlled issue for the most part, in that "aggressive AF" is craptacular and "balanced AF" is decent but nothing to write home about. Their "application AF" is nice, but on current hardware it takes a heavy performance hit.

Their AA is a different story. It's a hardware issue. For the most part they use an ordered grid, which is suboptimal compared to a rotated or sparse grid. They also lack gamma correction. Toss massive performance hits into the mix and it's a bloody ugly mess.
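To show what the grid difference means (the offsets below are textbook-style examples, not the actual patterns any chip uses):

```python
# Representative 4x sample positions inside one pixel (x, y in [0, 1)).
# Example patterns only, not actual hardware sample positions.

ordered_grid_4x = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
rotated_grid_4x = [(0.375, 0.125), (0.875, 0.375), (0.125, 0.625), (0.625, 0.875)]

# A near-horizontal edge only distinguishes sample *rows*: the ordered grid
# has 2 distinct rows, the rotated/sparse grid 4, hence more edge gradations.
def distinct_rows(pattern):
    return len({y for _, y in pattern})

print(distinct_rows(ordered_grid_4x))   # 2
print(distinct_rows(rotated_grid_4x))   # 4
```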
 
BRiT said:
Their AA is a different story. It's a hardware issue. For the most part they use an ordered grid, which is suboptimal compared to a rotated or sparse grid. They also lack gamma correction. Toss massive performance hits into the mix and it's a bloody ugly mess.
Do you think the NV35 will bring some solutions? :?:
 
My opinion on all of this is that the NV35 will not offer any different AA solutions than the NV30. The NV35 is nothing more than a bug-fixed release of the NV30. If they were going to introduce new means of AA, it would have been in their new architecture (NV30), not a refresh part (NV35).
 
BRiT said:
My opinion on all of this is that the NV35 will not offer any different AA solutions than the NV30. The NV35 is nothing more than a bug-fixed release of the NV30. If they were going to introduce new means of AA, it would have been in their new architecture (NV30), not a refresh part (NV35).

Not sure. Historically, this hasn't been the case.
Sure, on the AA technology front, yes...

As for sampling patterns, however...

GF1->GF2: Nothing, but creation of AA in GF1/GF2 drivers in order to remain competitive.
GF2->GF3: Exact same patterns for 2x & 4x. Introduction of Quincunx.
GF3->GF4: Modified 2x & 4x patterns, so that samples are in the center instead of at the edge, introduction of 4xS
GF4->GFFX: Introduction of 6xS & 8x

As you see, nVidia changed sampling patterns more from GF3->GF4 than from GF2->GF3 or GF4->GFFX!
Sampling patterns are a lot easier to modify than most other things.
So it's logical to change them for a refresh if required.

I'm a great fan of speculation, but in this particular case we can't conclude anything by speculating. I believe we're better off waiting for some rumors.

MuFu... CMKRNL... Stop our suffering... Anyone... Please...
:D


Uttar
 
Uttar,

The same basic principles have been used throughout; these have been minor tweaks to the hardware and mostly software changes. To move to something comparable to Radeon they would need to implement a sparse sampling system, which would be a radical change and something I seriously doubt would be implemented in a refresh (although it would be nice if it was).

The 16X mode that turned up in the recent drivers is probably a good indication of what will be occurring with NV35 -- it will probably just contain bug fixes so there isn't the texture blurring/corruption that we saw on NV30 boards. Clearly, going by the pre-release documentation, NVIDIA at least intended to release an 8X mode that utilised mixed super/multisampling for both OpenGL and DX, but there were obviously some hardware bugs that caused the texture issues, which is why 8X got moved back to 8xS.
 
DaveBaumann said:
Uttar,

The same basic principles have been used throughout; these have been minor tweaks to the hardware and mostly software changes. To move to something comparable to Radeon they would need to implement a sparse sampling system, which would be a radical change and something I seriously doubt would be implemented in a refresh (although it would be nice if it was).

The 16X mode that turned up in the recent drivers is probably a good indication of what will be occurring with NV35 -- it will probably just contain bug fixes so there isn't the texture blurring/corruption that we saw on NV30 boards. Clearly, going by the pre-release documentation, NVIDIA at least intended to release an 8X mode that utilised mixed super/multisampling for both OpenGL and DX, but there were obviously some hardware bugs that caused the texture issues, which is why 8X got moved back to 8xS.

I've got to agree with you: we probably aren't going to get something like Radeon sampling patterns with the NV35. It's just too major an architectural change.

My bet, however, is on a traditional 4x mode using the current 4xS patterns.
It probably wouldn't be as good as the Radeon patterns, but it would certainly be an improvement!
And this would be a lot easier to implement.


Uttar
 
DaveBaumann said:
My bet, however, is on a traditional 4x mode using the current 4xS patterns.

That will halve the fill-rate.

I said "patterns" :)

I meant a *pure MSAA* 4x mode which uses the same grid as the current 4xS grid.
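Rough numbers to show the difference (clock and pipeline figures below are made-up examples):

```python
# Why a supersampling component halves pixel fill rate while pure
# multisampling (roughly) doesn't. Example clock/pipeline values only.

core_clock_mhz = 250
pipelines = 8
raw_fill_mpixels = core_clock_mhz * pipelines   # 2000 Mpixels/s

ss_factor_4xS  = 2   # 4xS = 1x2 supersampling on top of 2x multisampling
ss_factor_msaa = 1   # pure 4x MSAA shades each pixel only once

print(raw_fill_mpixels / ss_factor_4xS)    # 1000 Mpixels/s with 4xS
print(raw_fill_mpixels / ss_factor_msaa)   # 2000 Mpixels/s with pure 4x MSAA
```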

Sorry if I wasn't clear enough.


Uttar
 
Uttar said:
GF1->GF2: Nothing, but creation of AA in GF1/GF2 drivers in order to remain competitive.
GF2->GF3: Exact same patterns for 2x & 4x. Introduction of Quincunx.

No, the 2x pattern is changed from 1x2 ordered to rotated.
Quincunx has the same sample pattern as the 2x, with a different post filter.
Also having MSAA is a big change.
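To illustrate the "different post filter" bit (the 1/2 and 1/8 weights are the commonly quoted Quincunx values; treat them as an illustration):

```python
# Quincunx resolve sketch: the pixel's own sample gets weight 1/2 and four
# samples borrowed from neighbouring pixels get 1/8 each, so it filters over
# 5 taps without taking more samples per pixel (at the cost of some blur).

def quincunx_resolve(center, neighbours):
    # center: this pixel's sample; neighbours: 4 samples shared with neighbours
    return 0.5 * center + sum(0.125 * n for n in neighbours)

print(quincunx_resolve(1.0, [0.0, 0.0, 0.0, 0.0]))  # 0.5 on a hard edge
```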

GF3->GF4: Modified 2x & 4x patterns, so that samples are in the center instead of at the edge, introduction of 4xS

The pattern modification is only a fixed shift of post-transformed coordinates.
Not a big change.
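Something along these lines (positions are made up, just to show the idea):

```python
# Shifting a sample pattern is just adding a constant offset to the
# post-transform, screen-space sample positions; offsets are illustrative.

edge_aligned_2x = [(0.0, 0.0), (0.5, 0.5)]     # samples on the pixel edge
shift = (0.25, 0.25)                           # fixed shift towards the centre

centred_2x = [(x + shift[0], y + shift[1]) for x, y in edge_aligned_2x]
print(centred_2x)   # [(0.25, 0.25), (0.75, 0.75)]
```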

And 4xS is possible on the GF3 too, just not enabled.

GF4->GFFX: Introduction of 6xS & 8x

These are possible on the GF3 too, just not enabled.

And there's even one more GF3 post filter mode (9-sample gaussian) that hasn't been enabled in D3D yet.
It's the 4 sample "equivalent" of Quincunx.

Edit: Yes, I missed the 'f' from 'shift', sorry ;)
 
John Reynolds said:
Ailuros said:
NVIDIA will only manage to surprise me this time around, if they improve AA/aniso algorithms.

I'm perfectly happy with Nvidia's Application AF (it is better, IQ-wise, than ATi's Quality at equal sample settings), if it runs with a substantial decrease to its performance hit compared to the GF4. Nvidia's AA is a whole other issue. . .blech!

Where's the "surprise factor" in that? :D

I'm not sure, but I think I saw application AF benchmarks between NV25 and NV3x cards at Digit-Life (can't access it right now), and the gap between them was rather underwhelming for the latter.

If I look at ATI's past and current cards and their anisotropic implementations, the performance penalty for aniso is pretty much linear between generations, with the only other difference being that quality has been getting consistently better.

Between NV20 -> NV25 -> NV30 I can see more or less the same "application" aniso (where I'd guesstimate that the performance drop for all three cards is about the same in percentages), while quality has been decreasing. Otherwise, I'd like someone to give me a good reason why a user would prefer 8x "aggressive" over 2x "application", or 8x "balanced" over 4x "application", on a GFFX.

I'm not against performance optimisations at all; rather the contrary. The presupposition, though, is that IQ doesn't get hurt significantly.
 