3200 MHz seems pretty low even for the salvage part, considering the 5870 runs it at 4800 MHz.
Is it just me or do those numbers almost look like they may just possibly be real?
That'd certainly be a great achievement if the performance delta stays the same or grows this generation. And yeah, AMD's PR has been using die-size comparisons for several years now.
The post to which you're referring was a guess on his part, an unlikely one at that. We haven't seen ROPs clocked @ only 475MHz since the days of NV40.
GTX 470, potentially a $500 part, will not be outperformed by GTX 280, a part launched 21 months ago. It is illogical.
The 2*6-pin connectors and the 225 W limit could be the reason.
But that would be a very lame reason.
My post was based on info I stood behind at the time. It is entirely possible they have since increased the clocks for the ROPs, but when I posted, that was the info I had been given.
As I said already, 475 MHz is not a logical clock speed for 2010-era ROPs on an IHV's flagship SKU's salvage part.
I think it's low too, but remember GF100 has more memory and a wider memory interface than Cypress, so I wouldn't be surprised if the memory clock is somewhat lower. 50% is a bit much, though.
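For context, peak memory bandwidth is just bus width times effective data rate, so a wider interface can partly offset a lower memory clock. A quick sketch using the figures floating around in this thread (Cypress's 256-bit bus is its published spec; the 320-bit GTX 470 salvage configuration and the 3200 MHz effective rate are the rumors being discussed here, not confirmed numbers):

```python
def bandwidth_gbps(bus_bits: int, effective_mhz: int) -> float:
    """Peak memory bandwidth in GB/s: (bus width in bytes) x effective data rate."""
    return bus_bits / 8 * effective_mhz * 1e6 / 1e9

# HD 5870: 256-bit bus, 4800 MHz effective GDDR5
print(bandwidth_gbps(256, 4800))  # 153.6 GB/s
# Rumored GTX 470 salvage config: 320-bit bus, 3200 MHz effective
print(bandwidth_gbps(320, 3200))  # 128.0 GB/s
```

So even with a 33% lower memory clock, the rumored wider bus would leave the deficit at roughly 17% rather than a third.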
Which is true, but again, I merely posted to clarify that I wasn't guessing but was going off info I had been given. Nice to see they did get the clocks up, though.
Depends on which way you look at it. It's a 50% increase going from 3200 to 4800, but only a 33% decrease going from 4800 to 3200.
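The asymmetry is just the change of reference base: the same 1600 MHz gap is divided by 3200 in one direction and by 4800 in the other. A quick check:

```python
def pct_change(old: float, new: float) -> float:
    """Percentage change relative to the starting value."""
    return (new - old) / old * 100

# Going up from 3200 to 4800 MHz: +1600 / 3200 = a 50% increase...
assert round(pct_change(3200, 4800)) == 50
# ...but coming down from 4800 to 3200 MHz: -1600 / 4800 = only a 33% decrease.
assert round(pct_change(4800, 3200)) == -33
```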
It's possible those were the clocks A1 was hitting, but I don't think anyone expected A1 to be shipping silicon, especially not after hitting those clocks.
Possibly A2 as well. The computer show in January was reported as having a few Fermi cards running the sled demo, but the frame rates got really low at times.
Seeing this article from Hexus, could it be possible that it wasn't running on GF100?
(text is below the video)
http://www.hexus.net/content/item.php?item=22702
Could be a means of creating a limit to help differentiate the two GF100 products. After all, who in their right mind would pay a premium for a top-tier card when a lesser (lower-cost) one could easily meet or match its performance at a substantial savings? Maybe by limiting available OCing (and thus memory bandwidth), the performance delta between the products will be well defined (enough to justify the cost increase).
You know better than to use a single metric such as memory bandwidth to compare products using different architectures... /slap

In all honesty though, you could use just about any single metric and the GTX 470 would do better in practice than what the theoretical figures suggest. As already mentioned, quite a bit less bandwidth (though I'm not convinced on the 800 MHz GDDR5 yet). The figures floating around for ROP clocks seem sketchy, but even assuming 600 MHz it's got less raw ROP throughput than an HD 5870 (OK, not by that much; in line with those performance numbers). Texturing? Only half the theoretical texture fill rate. ALUs? Less than half the raw throughput.
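Those claims can be sanity-checked with the usual back-of-envelope formulas. The HD 5870 numbers below are its published specs; the GTX 470 unit counts and clocks (40 ROPs at the assumed 600 MHz, 56 TMUs, 448 ALUs at a guessed ~1215 MHz hot clock) were rumors at the time, so treat them as assumptions:

```python
# Rough theoretical throughput comparison behind the post above.
specs = {
    "HD 5870": {"rops": 32, "rop_mhz": 850, "tmus": 80, "tmu_mhz": 850,
                "alus": 1600, "alu_mhz": 850},
    "GTX 470": {"rops": 40, "rop_mhz": 600, "tmus": 56, "tmu_mhz": 600,
                "alus": 448, "alu_mhz": 1215},  # rumored figures, not confirmed
}

for name, s in specs.items():
    gpix = s["rops"] * s["rop_mhz"] / 1000        # Gpixels/s pixel fill rate
    gtex = s["tmus"] * s["tmu_mhz"] / 1000        # Gtexels/s texture fill rate
    gflops = s["alus"] * 2 * s["alu_mhz"] / 1000  # GFLOPS (2 ops/clock for MAD)
    print(f"{name}: {gpix:.1f} Gpix/s, {gtex:.1f} Gtex/s, {gflops:.0f} GFLOPS")
```

Under these assumptions the claims in the post hold up: slightly less ROP throughput, roughly half the texture fill rate, and well under half the raw ALU throughput.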
It isn't surprising (GF100 really needs to achieve more of its theoretical potential compared to Cypress, otherwise it would be a horrendous disaster), plus Cypress doesn't fare well on that metric compared to Juniper either. Of course, in the end this metric isn't really relevant at all...
So you assume that Nvidia invested more than twice the transistors for less than 50% more speed. Even a GTX 295 would be faster, with fewer transistors...

I didn't assume that; that's just the theoretical figures. ALUs got the biggest increase there, and even that is "only" roughly a 100% increase (depending on clocks). (Small nitpick: it isn't really truly more than twice the transistors, since that count includes disabled units too. That hasn't really anything to do with the architecture itself; plus this is the GTX 470, which probably has a bit less than twice the "active" transistors of a GTX 285.) And clearly some of the transistors were invested in DX11 features, not performance (as was the case with Evergreen), and others may help performance a lot but only under limited circumstances (like the distributed geometry processing; I really wonder how expensive that was in terms of transistor count).
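The implied perf-per-transistor argument is easy to put in numbers. The transistor counts are published figures (~1.4B for GT200, ~3.0B for GF100); the 1.5x speedup is the hypothetical "less than 50% more speed" from the quote, not a measurement:

```python
# Back-of-envelope perf-per-transistor check for the argument above.
gt200_transistors = 1.4e9   # GTX 285 (GT200b), published figure
gf100_transistors = 3.0e9   # GF100, published figure (includes disabled units)
speedup = 1.5               # hypothetical "less than 50% more speed"

ratio = speedup / (gf100_transistors / gt200_transistors)
print(f"perf per transistor vs GT200: {ratio:.2f}x")  # 0.70x: a regression, if true
```

Which is exactly why the reply pushes back: counting disabled units inflates the denominator, and some of those transistors bought DX11 features rather than raw speed.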
BTW: How could they build smaller chips with this kind of perf/mm^2? They would be slower and bigger than a G92b...

In terms of theoretical specs? Absolutely. But they should achieve higher performance in practice. Plus, it shouldn't really be worse in terms of perf/mm^2 even in theoretical terms, thanks to 40nm vs 55nm manufacturing.
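The process argument rests on ideal optical scaling, where die area shrinks with the square of the feature size; real shrinks fall short of this, so treat it as an upper bound:

```python
# Ideal (optical) area scaling between process nodes: area ~ feature_size^2.
def area_scale(old_nm: float, new_nm: float) -> float:
    return (new_nm / old_nm) ** 2

# Moving a 55nm design to 40nm would ideally shrink it to ~53% of its area,
# i.e. nearly double the transistors per mm^2 at equal architecture efficiency.
print(f"{area_scale(55, 40):.2f}")
```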