nForce3 250Gb + GFX = Performance boost?

I would be interested in seeing feedback on this TechReport article. As WaltC said in the comments over there, some of it looks like just an improved OGL driver, but one would expect that to help the GFX on the VIA platform as well. It is odd that the benefits seem to focus on OGL rather than D3D. That makes my guess about some sort of dynamic software overclocking unlikely. So what do you all think is going on?

Any questions about the ethics of such optimizations? Certainly we need some IQ comparisons, but I meant more along the lines of only optimizing for your own products. I have a hard time seeing anything wrong with that, so long as you don't break your competitor's stuff. And since performance is all relative, it becomes a chicken-or-egg question: did NVIDIA optimize for their own card, or is the motherboard natively super fast and they just slowed the Radeon down? I think the first is far more likely, but who can say for sure?

Anyway, I thought it was interesting stuff. I'd like to see the article greatly expanded to include more recent games and IQ comparisons.
 
This is weird. I read about it before but didn't look at the graphs. At least I like that they used the GFX on both boards and the Radeon on both, so one can see what is changing even if the why isn't clear.
 
Interesting article. Can't say it matters much to me, since my next card will be based on either R420 or NV40 (I assume the optimizations work for the NV40 too). And from what I've seen of the nForce3 250 thus far, it seems promising.
 
It looks to me like what I posted in another thread a while back has come to pass - AGP 16x implemented on nForce3, allowing the NV40 to operate fully as an AGP 16x device.

As to why it only seems to affect OGL performance so far, take a look at LostCircuits' AGP 8x article. You'll notice D3D was hardly affected going from AGP 4x to 8x, but professional OGL apps showed a difference.

http://www.lostcircuits.com/video/asus_v9280s/
 
radar1200gs said:
It looks to me like what I posted in another thread a while back has come to pass - AGP 16x implemented on nForce3, allowing the NV40 to operate fully as an AGP 16x device.

How did they do that on a 5950U then, which is what the review used?

I think it's more likely to be some kind of non-standard fiddling on the chipset controller, just as they do with their customised IDE driver. I hope it's a lot more successful than that though, as the IDE driver has major compatibility problems with a lot of CD/DVD burners.
 
I think NV38 & NV36 are AGP 16x internally, in order to take advantage of the PCI Express bridge chip. I doubt NV34 has that capability, though it should be able to get to around AGP 12x.

Remember nForce1? Its integrated GPU operated at AGP 6x.

I think it's more likely to be some kind of non-standard fiddling on the chipset controller, just as they do with their customised IDE driver. I hope it's a lot more successful than that though, as the IDE driver has major compatibility problems with a lot of CD/DVD burners.
First of all, the products in question are both designed by nVidia, on PCBs based on nVidia reference designs, so I doubt compatibility will be an issue.

Re: the nForce IDE problems, I have never experienced any issues with burners in the systems I build around nForce boards. I do mostly use Liteon burners though.
 
radar1200gs said:
Re: the nForce IDE problems, I have never experienced any issues with burners in the systems I build around nForce boards. I do mostly use Liteon burners though.

They are only problematic when you install the customised Nvidia IDE driver. They work fine if you use the standard Microsoft driver.
 
radar1200gs said:
I think NV38 & NV36 are AGP 16x internally, in order to take advantage of the PCI Express bridge chip. I doubt NV34 has that capability, though it should be able to get to around AGP 12x.

Even if that (incredibly unlikely) situation were true, would you really expect to see such pronounced performance differences? Take a look at the performance delta between all the other AGP revisions. I think there's much more to it than that.
 
What is so unlikely about the situation?

NV38, NV36 & NV40 are all manufactured at IBM. NV38 & NV36 would have served as a testbed for AGP 16X in NV40, and will benefit from it themselves in their PEG incarnations.

nVidia know the exact capabilities of both products, and have taken the opportunity to exploit those capabilities to speed up and simplify motherboard-to-GPU communication (no reliance on third-party silicon or protocols).

They would have been silly not to do it.
 
radar1200gs said:
What is so unlikely about the situation?

NV38, NV36 & NV40 are all manufactured at IBM. NV38 & NV36 would have served as a testbed for AGP 16X in NV40, and will benefit from it themselves in their PEG incarnations.

nVidia know the exact capabilities of both products, and have taken the opportunity to exploit those capabilities to speed up and simplify motherboard-to-GPU communication (no reliance on third-party silicon or protocols).

They would have been silly not to do it.

The unlikeliness stems from the fact that (as Hanners already stated) the performance differences between e.g. AGP 4x and AGP 8x are usually negligible. Why should a "virtual" AGP 16x cause such a performance boost all of a sudden? I agree with you that nVidia quite possibly uses some methods to reduce overhead in chipset<->GPU communication (especially concerning latency, I guess), but I think an alleged internal AGP 16x solution is highly unlikely.
 
The performance boost likely has to do with the Nvidia graphics division knowing the internals of the nForce chipset, such as buffer depths, latencies, etc. That way they can let the driver issue transfers and commands optimally.
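
Purely as an illustration of what that kind of chipset-specific tuning could look like, here is a hypothetical sketch (not NVIDIA's actual driver code; every identifier and number below is invented):

/* Hypothetical sketch: a driver that knows the exact depth of the
 * chipset's write FIFO can batch GPU command writes into bursts that
 * fill it, instead of flushing conservatively after small transfers.
 * All names and sizes below are made up for illustration. */
#include <stddef.h>
#include <string.h>

#define GENERIC_BURST_BYTES  256          /* safe default for an unknown chipset    */
#define NFORCE3_FIFO_BYTES   (64 * 64)    /* made-up figure: 64 cache lines of 64 B */

static size_t pick_burst_size(int chipset_is_nforce3)
{
    /* On a known chipset, fill the whole FIFO per burst to cut the
       number of bus transactions and stalls. */
    return chipset_is_nforce3 ? NFORCE3_FIFO_BYTES : GENERIC_BURST_BYTES;
}

static void push_commands(unsigned char *agp_aperture,
                          const unsigned char *cmds, size_t len,
                          int chipset_is_nforce3)
{
    size_t burst = pick_burst_size(chipset_is_nforce3);

    for (size_t off = 0; off < len; off += burst) {
        size_t n = (len - off < burst) ? (len - off) : burst;
        memcpy(agp_aperture + off, cmds + off, n);
        /* a real driver would insert a write fence here to pace the FIFO */
    }
}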

Cheers
Gubbi
 
Remember there is no official AGP 16X protocol, and only nVidia will use their version of AGP 16X on their chips so they are free to ignore or change some of the cumbersome limitations AGP normally imposes.
 
radar1200gs said:
Remember there is no official AGP 16X protocol, and only nVidia will use their version of AGP 16X on their chips so they are free to ignore or change some of the cumbersome limitations AGP normally imposes.

Try reading Hanners' and Snyder's comments over and over again until you realize why it's VERY unlikely.
 
There isn't anything alleged about the AGP 16x. Most of the GPUs interfaced to the PCI-Express HSI will communicate with it at an effective AGP 16x rate, the exception being NV34, which I think will communicate at an effective AGP 12x.
 
radar1200gs said:
There isn't anything alleged about the AGP 16x. Most of the GPUs interfaced to the PCI-Express HSI will communicate with it at an effective AGP 16x rate, the exception being NV34, which I think will communicate at an effective AGP 12x.

Dude, you just don't get it. The odds of a virtual AGP 16X providing the performance boost seen (only in OGL apps) are slim to none.

That's why it is unlikely. No one cares if NVIDIA could have pulled it off; we're saying that even if they did, it can't explain the performance boost.
 
Go read the LostCircuits article I linked to.

See how much performance improved between AGP 4x & 8x, and compare that to the current situation.
 
radar1200gs said:
Go read the LostCircuits article I linked to.

See how much performance improved between AGP 4x & 8x, and compare that to the current situation.

Well, I did.
LostCircuits article: a maximum of 5.4% improvement from a 4x card to an 8x card on the same AGP controller.
TechReport: a maximum of 35% going from 8x to 16x with the same card on a different controller.

When there is only a gain of 5% (and then only in one measly SPEC ViewPerf test) going from 4x to 8x, why should another jump from 8x to 16x suddenly raise the performance delta so dramatically? (Besides the fact that this is comparing apples to... well... parsley or something.)
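
For what it's worth, here is the raw bandwidth arithmetic behind that point (a quick sketch; the 16x figure is purely hypothetical, since no official AGP 16x spec exists). Each step is just a doubling of the theoretical peak transfer rate, and the 4x-to-8x doubling bought almost nothing:

/* Back-of-the-envelope AGP bandwidth: a 66.67 MHz, 32-bit bus transferring
 * "mult" times per clock. The 16x entry is hypothetical; there is no
 * official AGP 16x specification. */
#include <stdio.h>

int main(void)
{
    const double base_hz   = 66.67e6;  /* AGP base clock */
    const int    bus_bytes = 4;        /* 32-bit data path */
    const int    mults[]   = {1, 2, 4, 8, 16};

    for (int i = 0; i < (int)(sizeof mults / sizeof mults[0]); i++) {
        double mb_s = base_hz * bus_bytes * mults[i] / 1e6;
        printf("AGP %2dx: ~%4.0f MB/s\n", mults[i], mb_s);  /* 1x ~267 ... 8x ~2133, "16x" ~4267 */
    }
    return 0;
}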
 
radar1200gs said:
Go read the LostCircuits article I linked to.

See how much performance improved between AGP 4x & 8x, and compare that to the current situation.

Well, that LC link is weak on the benchmarks (just like the TR link at issue here), so it is tough to make any absolute judgements about which things benefit from the increased AGP bandwidth (with LC) and from whatever NVIDIA did (with TR). But the LC numbers would seem to indicate the cleavage is between professional graphics and games, not between OGL and D3D rendering as the TR article suggests. And logically, why would OGL games uniquely benefit from increased bandwidth while D3D remains stagnant?
 