NVIDIA GF100 & Friends speculation

Is that really the case? From what I've seen the 5850 retains its leadership in perf/W over the 6800s: http://www.techpowerup.com/reviews/HIS/Radeon_HD_6850/29.html

For some reason, TPU's 6870 seems to be quite power hungry, drawing significantly more than the HD 5850: http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_580/25.html

That explains the perf/W results. But most reviews I've seen have the 6870 drawing about the same as the 5850, while being faster. That level of power-efficiency isn't bad at all for a 900MHz part.
 
Prior to the actual launch of the GTX 580, we were concerned about what the availability would be like. With NVIDIA engaging in such a quick development cycle for GF110 and being unwilling to discuss launch quantities, we didn’t think they could do it. We’re glad to report that we were wrong, and the GTX 580 has been in steady supply since the launch yesterday morning. Kudos to NVIDIA for proving us wrong here and hitting a hard launch – it’s the kind of action that helps to make up for the drawn out launch of the GTX 480 and GTX 470.

At this point Newegg is the 800lb gorilla of computer parts; they have the largest volume and as far as we can figure they get the bulk of the launch cards allocated to the United States. So what they’re doing is usually a good barometer of what pricing and availability is going to be like – except for this week. As it turns out Newegg is running a 10% sale on all video cards via a well-known promo code; and as best as we can tell rather than not including the GTX 580 in their sale, they simply hiked up the price on all of their GTX 580 cards so that prices were at or around MSRP after the promo code was applied. The end result being that the cards look like they’re going well over MSRP when they’re not. Judging from pricing at Newegg and elsewhere it looks like there is some slight gouging going on (we can only turn up a couple of cards that are actually at $499 instead of $509/$519), but ultimately GTX 580 prices aren’t astronomical like they appeared at first glance. After this stunt, this will probably go on the record as being one of the weirder launches.


http://www.anandtech.com/show/4012/nvidias-geforce-gtx-580-the-sli-update/1
 
Now that the 580 is out of the bag, would anyone agree that the GTX 560 actually has the potential to be more impressive (or less unimpressive, depending on how you view it)?

Compared to the GTX 460-1GB, we're talking about 8 vs. 7 SMs, so the theoretical increase in shader- and TMU-power per clock is twice as high compared to 480 -> 580.

Combined with clockspeeds of ~750/1500 for core/shader we'd be talking about ~27% more raw power. Add the improved Z culling, and we might very well look at performance gains of ~25-30% on average, and over 30% in games with lots of geometry/tessellation.
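To sanity-check that ~27% figure, here's a rough sketch. It assumes GTX 460 1GB reference clocks of 675/1350 MHz core/shader, and the speculated full-GF104 config of 8 SMs at ~750/1500 for the GTX 560:

```python
# Rough shader-throughput scaling: SM count x shader clock.
# GTX 460 1GB reference: 7 SMs @ 1350 MHz shader clock.
# Speculated GTX 560:    8 SMs @ 1500 MHz shader clock.
gtx460 = 7 * 1350
gtx560 = 8 * 1500

gain = gtx560 / gtx460 - 1
print(f"raw shader-power gain: {gain:.1%}")  # -> 27.0%
```

Of course that's only ALU/TMU throughput; bandwidth and ROP rates won't scale the same way, so real-world gains would land somewhere below that in bandwidth-limited cases.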

From what I've gathered, that would be enough to beat the 6870 by ~5-10% on average and by far more in geometry/tessellation-heavy scenarios. OC potential is probably going to be higher as well, AF quality still a bit better.
Perf./W and perf./mm² will most probably remain on AMD's side, but the gap will shrink.

What do you think?
 
Well I don't think the 6870 is going up against the 580, so it doesn't matter.
Let's wait and see what the 6970 delivers.
 
Combined with clockspeeds of ~750/1500 for core/shader we'd be talking about ~27% more raw power. Add the improved Z culling, and we might very well look at performance gains of ~25-30% on average, and over 30% in games with lots of geometry/tessellation.
Why would that increase perf over 30% in games with lots of geometry?
Also we don't really know enough about the Z Culling improvements - might not even apply to GF104.

From what I've gathered, that would be enough to beat the 6870 by ~5-10% on average and by far more in geometry/tessellation-heavy scenarios. OC potential is probably going to be higher as well, AF quality still a bit better.
Perf./W and perf./mm² will most probably remain on AMD's side, but the gap will shrink.
I've always said it looks to me like Nvidia could get near HD 5870 performance with a fully enabled, ~800MHz GF104, so what you suggest certainly sounds possible. Though I suspect 750MHz isn't quite enough to really beat the HD 6870 - that looks to me like it would be very close in performance to the GTX 460 FTW (with 7 SMs but 850MHz), dismissing the z-cull or other improvements, i.e. assuming the GTX 560 could still be based on GF104. But certainly 800MHz should be doable for a non-OC product too.
 
I assume (hopefully without making an ass out of u and me ;)) that the improved z-cull Nvidia is stating is only to be viewed as product-related, i.e. only due to one more SM being active and thus contributing to the cull rate, making it 16/15 the rate of the GTX 480 per clock. So, sadly, no magic involved there from where I'm standing.

At least, I have not seen any gains in fairly targeted tests that are not properly explainable by the above.
 
I assume (hopefully without making an ass out of u and me ;)) that the improved z-cull Nvidia is stating is only to be viewed as product-related, i.e. only due to one more SM being active and thus contributing to the cull rate, making it 16/15 the rate of the GTX 480 per clock. So, sadly, no magic involved there from where I'm standing.

At least, I have not seen any gains in fairly targeted tests that are not properly explainable by the above.
Ok... though I've seen at least a few games where the card showed larger gains than the additional SM+clocks alone should allow for, but of course it's possible that the upgraded TMUs and/or some other changes are the reason for that.


Why would that increase perf over 30% in games with lots of geometry?
I was under the impression that improved culling would help not only with tessellation. Some non-DX11 games show disproportionately large gains for the 580 (HAWX 1, iirc), and I didn't think the upgraded TMUs alone could result in such gains, but I might be wrong.
 
I assume (hopefully without making an ass out of u and me ;)) that the improved z-cull Nvidia is stating is only to be viewed as product-related, i.e. only due to one more SM being active and thus contributing to the cull rate, making it 16/15 the rate of the GTX 480 per clock. So, sadly, no magic involved there from where I'm standing.

At least, I have not seen any gains in fairly targeted tests that are not properly explainable by the above.

I've seen some differences (some gain, some loss), but I guess they come from rebalancing the GeForce geometry throughput limitation and from a different "global game profile" for the GTX 500 series. (Quadros come with more global profiles, and these give different results in triangle throughput tests.)
 
;)
[attachment: 14sm_gpu-z.png]


degenerate triangle culling-rate
16 SM(6FB): 2956 Mtri/s
15 SM(6FB): 2781 Mtri/s
14 SM(6FB): 2597 Mtri/s
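Those three data points scale almost perfectly with SM count; a quick check against the rates quoted above:

```python
# Degenerate-triangle cull rates (Mtri/s) per active SM count,
# taken from the measurements in the post above.
rates = {16: 2956, 15: 2781, 14: 2597}

# The per-SM rate comes out nearly constant (~185 Mtri/s per SM),
# consistent with the cull rate simply scaling with active SMs.
for sms, rate in rates.items():
    print(f"{sms} SM: {rate / sms:.1f} Mtri/s per SM")
```

The spread between the three per-SM figures is under 0.5%, which fits the "no magic, just one more SM" reading.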
 
I've seen some differences (some gain, some loss), but I guess they come from rebalancing the GeForce geometry throughput limitation and from a different "global game profile" for the GTX 500 series. (Quadros come with more global profiles, and these give different results in triangle throughput tests.)

Conspiracy theory ahead:

Is it possible that GF110's new Z-cull boost is not actually new, but it's in GF100 as well, activated only for Quadros? That's obviously done for other parts of the graphics pipeline (which is why Quadros are so much better for CAD/workstation display).

Perhaps the GTX 580 simply has that culling path unlocked to give it that much more differentiation from a simple GF100 respin.

It'd be easy to test this by finding a z-cull benchmark and running it on a Quadro 6000, a GTX480, and a GTX580 and adjusting for clock rate and SM count.

(In fact it'd be a terrific theory, except we know of the FP16 filtering boost too, so there HAS been at least one new feature.)
 
MSI announces the N460GTX-SE graphics cards
MSI today introduced new GF104-based graphics cards, the N460GTX-SE series, which utilize a Cyclone cooler with one 90mm PWM fan, dual heat pipes and a nickel-plated copper base, are equipped with Military Class components, and are claimed to have up to 30% overclocking potential.

Spec-wise, the cards feature 336 CUDA Cores, a 192/256-bit memory interface and 768MB/1GB of GDDR5 memory. No word on clocks.
http://www.tcmagazine.com/tcm/news/hardware/31726/msi-announces-n460gtx-se-graphics-cards
 
On TC Mag earlier

Nvidia to release crippled GeForce GTX 460 with 288 Cores
For reasons not yet revealed, Nvidia has apparently decided on releasing another graphics card powered by the GF104 GPU. No, it's not a 'full' GTX 460 with 384 CUDA Cores, but a GTX 460 'SE' that packs 288 CUDA Cores and has lower clocks.

Said to debut on November 15, the GeForce GTX 460 SE has a GPU clock of 650 MHz, a shader frequency of 1300 MHz, a 256-bit memory interface, 1GB of GDDR5 VRAM set to 3400 MHz, 2-way SLI support, a TDP of 150W and dual-DVI and HDMI output options. No word on pricing, but it could cost as much as the 768MB version of the 'regular' GTX 460.
http://www.tcmagazine.com/tcm/news/hardware/31493/nvidia-release-crippled-geforce-gtx-460-288-cores
 
EVGA's MSRP for the reference version is $179, the same as they charge for a 1GiB PFB 720MHz 336SP GTX 460.
It looks a bit like Nvidia and its partners are trying to attract customers with the 1GiB of VRAM while selling them a much slower GPU.
MSRPs should be ~$149, imo.
 
Hopefully reviewers don't completely ignore it, and somebody does a comparison to the 460 768MB, which I presume will be EOL'd now.
 