NVIDIA GF100 & Friends speculation

I've just seen that over at 3DCenter AnarchX posted a slide supposedly coming from the GTX580 reviewers guide:
AnarchX said:
The selected benchmark numbers from NV's reviewer guide?


http://itbbs.pconline.com.cn/diy/12066926.html
Those numbers equate to a ~18% average performance increase, quite in line with
16 SMs / 15 SMs × 775 MHz / 700 MHz = 118.1%
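The back-of-the-envelope scaling above can be checked directly (a quick sketch; the 16/15 SM and 775/700 MHz figures are the ones quoted in the post):

```python
# Estimate GTX 580 throughput relative to GTX 480 from unit count and
# clock scaling alone: 16 SMs vs 15, 775 MHz vs 700 MHz.
sm_scaling = 16 / 15        # one extra SM enabled
clock_scaling = 775 / 700   # core clock bump
total = sm_scaling * clock_scaling
print(f"{total:.1%}")       # ~118.1%, i.e. the ~18% average gain above
```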
 
Surely that graph can't really be from any GPU-maker. The bars start at 0 for crying out loud.
 
They probably recompiled the chart from the tables of numbers that are in NV's guides. ;)

btw.
Could GF110 benefit from GF104-like schedulers with two dispatch units each?
This would allow superscalar execution on the two Vec16 ALUs while the SFUs and LD/ST units are dispatched to at the same time.

In the "leaked" GTX 580 Vantage scores, graphics test 2 in particular shows a big gain, bigger than the unit and clock-rate increase:

Graphics Test 2: New Calico

The following features are specific to this scene:
* Almost entirely consists of moving objects
* No skinned objects
* Variance shadow mapping shadows
* Lots of instanced objects
* Local and global ray-tracing effects (Parallax Occlusion Mapping, True Impostors and volumetric fog)
http://www.futuremark.com/products/3dmarkvantage/tests/

Maybe this is a case where parallel execution of superscalar ALUs and LD/ST is an advantage?
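As a toy illustration of why dual dispatch could matter on a memory-heavy test, here is a minimal issue-rate model. This is my own sketch, not the actual GF100/GF104 scheduler (which also handles hazards, multiple warps, operand collection, etc.), and it only pairs instructions that target different pipes: a single-dispatch scheduler issues one instruction per cycle, while a GF104-style dual-dispatch scheduler can pair an ALU op with a LD/ST op in the same cycle.

```python
# Toy issue-rate model (illustrative only): count cycles needed to issue
# a mixed ALU / LD-ST instruction stream.

def cycles_single_dispatch(stream):
    # One instruction issued per cycle, regardless of type.
    return len(stream)

def cycles_dual_dispatch(stream):
    # Up to two instructions per cycle, but only if the pair targets
    # different pipes (one ALU op + one LD/ST op).
    cycles = 0
    i = 0
    while i < len(stream):
        if i + 1 < len(stream) and stream[i] != stream[i + 1]:
            i += 2  # co-issue an ALU op with a LD/ST op
        else:
            i += 1  # no pairable neighbour: issue alone
        cycles += 1
    return cycles

# A memory-heavy scene might interleave loads with math:
stream = ["ALU", "LDST"] * 8  # 16 instructions, perfectly pairable
print(cycles_single_dispatch(stream), cycles_dual_dispatch(stream))  # 16 8
```

For a perfectly interleaved stream the dual-dispatch model halves the issue time; for a pure-ALU stream it gains nothing in this model, which is why any benefit would show up mainly in mixed workloads like graphics test 2.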
 
Supposing these numbers are true: if Cayman is 30% faster than the HD 5870, then it will be a close call indeed.

However, if Cayman is even slightly faster than that, then Nvidia will have lost this round fair and square.
 
Conspiracy theory: The GTX580 uses the same chip as the GTX480. No respin.
I don't think it's possible. Not because of Nvidia's ethics, but because it would be a bad business decision. A tweaked and respun GF100 gives the company:

1. A tweaked chip with better performance and lower TDP.
2. Bug fixes.
3. New features, even if they're minor additions.
4. Better yields from finally getting the hang of the 40nm process, and probably a slightly smaller die.
5. Even if the changes are small, they can get away with calling it a new generation, the GTX 580. They would get too much bad press if it were exactly the same GF100. A new "generation" means better sales, much better than a GTX 490 would get.

Nvidia had a year for it; why on earth wouldn't they respin? I thought they would have a new generation by now, but even if it's only a GF100b, it's definitely a new respin.

"But it'd explain why it's pin-compatible." - The 6800 is also pin-compatible with Cypress boards AFAIK; it's a new chip nonetheless.
 
Surely that graph can't really be from any GPU-maker. The bars start at 0 for crying out loud.

All the graphs in the GF100 whitepaper start at zero too :) That chart would agree with rumours of the Vantage gains being higher than the gains in games, though I have no idea why that would be the case. It doesn't seem like enough to hold off Cayman, unless Cayman is also just a minor performance bump.
 
Supposing these numbers are true: if Cayman is 30% faster than the HD 5870, then it will be a close call indeed.

However, if Cayman is even slightly faster than that, then Nvidia will have lost this round fair and square.
Cayman will trade blows with the GTX 580, judging from all signs. But it's not the high end from AMD; that's Antilles. Nvidia has lost another round, unless the GTX 580 is far better than the rumors suggest.

Or they will send Cayman reviewers OC versions of GTX 580. :LOL:
I'm 100% sure of it ;) Nvidia is very adamant that reviewers test its OC cards against stock Radeons, and most reviewers succumb.
 
Could GF110 benefit from GF104-like schedulers with two dispatch units each?
This would allow superscalar execution on the two Vec16 ALUs while the SFUs and LD/ST units are dispatched to at the same time.

In the "leaked" GTX 580 Vantage scores, graphics test 2 in particular shows a big gain, bigger than the unit and clock-rate increase:

* Variance shadow mapping shadows

http://www.futuremark.com/products/3dmarkvantage/tests/

Maybe this is a case where parallel execution of superscalar ALUs and LD/ST is an advantage?

Single-cycle FP16 TMUs should probably help here (my bold).
WRT superscalar execution: given that they marketed GF104 as having 336 shaders instead of 224, you cannot expect superscalar execution to yield more benefit than upping the shader count by 50%, which they got without going the painful route of full-blown dependency-checking control logic. IOW, you will only ever get less than that peak out of it, not more.
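The 50% ceiling follows directly from the marketed core counts (simple arithmetic, using GF104's 7 SMs with 48 vs 32 cores each):

```python
# GF104 was marketed as 336 "CUDA cores" (7 SMs x 48) rather than
# 224 (7 SMs x 32); the superscalar-fed third ALU block is already
# included in that peak number, so superscalar issue can at best
# approach that peak, never exceed it.
base = 7 * 32      # 224: cores covered by plain dual-warp dispatch
marketed = 7 * 48  # 336: includes the third, superscalar-fed ALU block
print(marketed / base)  # 1.5 -> at most +50% from superscalar issue
```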
 
The GTX 465 and later all have FurMark consumption below or close to TDP, and gaming consumption significantly lower than TDP.
The GTX 480 was ~230W in games, 250W TDP, and ~320W in FurMark.

That's coincidental. Unless nVidia is also using FurMark to set their TDP, there is no guarantee a 580 won't hit ~300W in FurMark either.

Also, what source are you using for FurMark power consumption? According to Xbit, a full 480-based system pulls ~320W more at load than at idle, and the same system with a 5870 pulls ~225W more.
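One caveat with those Xbit-style numbers: a wall-socket system delta includes PSU losses and extra CPU load, so it overstates the card's own draw. A rough correction, assuming ~85% PSU efficiency (my assumption, not a measured figure):

```python
# Back-of-the-envelope conversion of a wall-socket system delta into an
# estimate of card draw. The ~85% PSU efficiency is an assumption; the
# delta also includes extra CPU load, so this still overstates the card.
def card_estimate(wall_delta_w, psu_efficiency=0.85):
    return wall_delta_w * psu_efficiency

print(round(card_estimate(320)))  # ~272 W for the 480-based system delta
print(round(card_estimate(225)))  # ~191 W for the 5870 system delta
```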
 
That's coincidental. Unless nVidia is also using FurMark to set their TDP, there is no guarantee a 580 won't hit ~300W in FurMark either.

Also, what source are you using for FurMark power consumption? According to Xbit, a full 480-based system pulls ~320W more at load than at idle, and the same system with a 5870 pulls ~225W more.

TPU measured 320W for the GTX 480 alone, but I think they got a really bad sample. Most reviewers got about 300W. If memory serves, someone (Damien?) measured 260W under 3DMark, which would indicate that the 230W figure for games is rather optimistic.

Edit: yep, that was Damien: http://www.hardware.fr/articles/787-5/dossier-nvidia-geforce-gtx-480-470.html
 
I'm fairly certain that, even with AMD's crappy driver support, the GTX580 can trade blows with Cayman Pro.
 