The V3 document is from July 16th; when was that page last updated? 9 W for 3 GB of RAM sounds reasonable, though.
That's what Charlie seems to be saying too.
Now how do you reconcile that with the fact that Nvidia has no CPU business yet has higher gross margins than AMD (45.6% vs 45%) in the second quarter of this year with those huge economically unviable dies they keep making? Inconvenient facts?
Maybe you should look here:
http://www.realworldtech.com/page.cfm?ArticleID=RWT090909050230&p=2
Your views on perf/W of GPUs might change a bit.
Most other numbers are for graphics cards, so the 5870/1G (544 GFLOPS @ 188 W) would be more comparable than the 4 GB FireStream. On the other hand, the C2070 is (now) specified at 247 W, not 225 W (unfortunately there are no GF100 graphics cards to use here).
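For what it's worth, here's a quick recomputation of those points as a minimal Python sketch, taking the thread's own numbers at face value (544 DP GFLOPS @ 188 W for the 5870, 515 DP GFLOPS for the C2070) and assuming the 22 W fan can simply be subtracted for a fan-less figure:

[code]
# DP-GFLOPS-per-watt for the cards discussed above.
# Figures are the ones quoted in this thread, not independently verified.
cards = {
    "HD 5870 1GB":          (544, 188),  # DP GFLOPS, board power in W
    "Tesla C2070 (w/ fan)": (515, 247),  # 247 W per the v3 board spec
    "Tesla C2070 (no fan)": (515, 225),  # 247 W minus the 22 W fan
}

for name, (gflops, watts) in cards.items():
    print(f"{name}: {gflops / watts:.2f} DP GFLOPS/W")
[/code]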
So, here you have it with the consumer cards added:

And I strongly recommend not using Wikipedia as a source for anything.
I've linked the official specs twice (once via Wikipedia, once via the respective IHV's homepage). The specs you linked didn't even have the clock rates pinned down; that's enough for me to completely disregard them in favour of the obviously newer ones on the website when it comes to TDP. Apart from that, I was linking to the Tesla M2070 for a reason: the fan-cooled Tesla card comes with a 22-watt fan (and that's why its TDP is exactly that amount higher) - something which isn't factored into CPU TDPs (apart from the memory issue, which I tend to view as a working-set cache), and also not into the FireStream 9370, as you can see here. We don't want to spoil our data, do we?

That's easy: the C2070 runs its GF100 at 1.0 V and has a "board power" greater than 238 W (actually, I think it's still listed as "TBD" in the official documents). The GTX 480 default is 0.95 V, right?

The "C2070" documents from last year say 1.05 V? What are you getting at?
Tesla C2050/2070 Specs v1, November 2009
Tesla C2050/2070 Specs v3, July 2010
Thanks, I couldn't find any official reference to the stock voltages on the consumer boards. Besides, the GTX 480 isn't 0.95 V - that's the 470. The 480 is 1.0 V.
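For context on why those stock voltages matter for TDP, here's a minimal sketch of the standard first-order CMOS dynamic-power model (P ∝ C·V²·f); the percentage is just that model's output, leakage power is ignored, and nothing here is a measured figure:

[code]
# First-order dynamic power model: P_dyn ~ C * V^2 * f.
# Leakage is ignored, so treat the result as a rough lower bound on savings.
def relative_dynamic_power(v, v_ref, f=1.0, f_ref=1.0):
    """Dynamic power relative to a reference voltage/clock operating point."""
    return (v / v_ref) ** 2 * (f / f_ref)

# 0.95 V vs 1.0 V at the same clock:
print(relative_dynamic_power(0.95, 1.00))  # ~0.90, i.e. roughly 10% less
[/code]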
After having read you talking about Fermi exceeding its TDP under certain workloads, I couldn't help but wonder why you haven't hinted at this with regard to other chips which do the same (RV770). Coincidence?

Because we're not talking about two years ago; this is the R9xx topic. RV7x0 has nothing to prove against the upcoming architectures, Fermi does.
How about linking to the M2050/70 board specification document, as this is what the dot in Carsten's (and also my) graph referred to?

Here's Version 3 of the C2050/2070 Board Specification document, dated July 16th, 2010. How do you know your website numbers are newer/better/more real/less disregardable?
How about linking to the M2050/70 board specification document, as this is what the dot in Carsten's (and also my) graph referred to?
Sure, making up facts is always an easy way out.
What do you think is the difference? None?
http://www.nvidia.com/docs/IO/43395/BD-05238-001_v02_M2050_boardspec.pdf
225 W isn't the board power in that case; it's the "Board Power Dissipation", according to that document.
If you look closely, you will also find the GTX 480 in my chart.

I was confused, since people first place a GPU in that chart and later on start to specify which model of card they want in there to have the most desirable specifications. Like Cypress is now split into Radeon and FireStream, but GF100 is only in there as a specific Tesla part.
If you look closely, you will also find the GTX480 in my chart
That chart does not affect my argument, which was about adding GPU capabilities for co-processing in a general-purpose system.
It does, however, illustrate the state of things.
That chart has been constructed to show FLOPs/W and FLOPs/mm^2.
But those FLOPs are marketing numbers. No application generated the FLOPs data. No computational kernel, or even LINPACK. There is nothing there that connects to the real world.
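For reference, those headline numbers are pure paper figures: ALU count × shader clock × 2 ops per cycle, with a multiply-add counted as two FLOPs. A quick sketch of how they're derived; the die areas are the commonly cited approximate figures, not exact measurements:

[code]
# How the marketing peak-FLOPs numbers are derived: every ALU is assumed
# to retire one multiply-add (2 FLOPs) per cycle, every cycle.
def peak_gflops(alus, clock_ghz, flops_per_cycle=2):
    return alus * clock_ghz * flops_per_cycle

# name: (ALUs, shader clock in GHz, approx. die area in mm^2)
chips = {
    "Cypress (HD 5870)": (1600, 0.850, 334),
    "GF100 (GTX 480)":   (480,  1.401, 529),
}

for name, (alus, clk, area) in chips.items():
    gflops = peak_gflops(alus, clk)
    print(f"{name}: {gflops:.0f} GFLOPS SP, {gflops / area:.1f} GFLOPS/mm^2")
[/code]

No application will ever sustain those rates; that's exactly the point.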
And the article demonstrates no understanding of utilization, such as the distinct differences between serial, vector, parallel, and vector-parallel codes, or what limitations there are in data organization and usage for the different cases. Even then, we are still in the domain of theory; in the realm of physical devices we have to deal with the memory subsystem (which is typically what defines the performance you can wring out of a small computational kernel), communications, et cetera. To actually produce code, you need tools that allow you to optimally access the hardware, so now we have gotten to the software side of things, which is its very own can of worms. Of course, the application is rarely just a tight computational kernel, so we have Amdahl's law to deal with, and the further up you go in specialized capabilities, the harder it applies.
All of the above concerns new code, specifically targeted at the hardware. Legacy code gains nothing in this case, and old code, together with old code reused in new applications, constitutes just about 100% of what is run on general-purpose computers. I could go on and on.
Viewed in that light, a GPU is incredibly inefficient even if, by magic, all x86 software were rewritten from scratch, due to the very limited set of problems it can be applied to.
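To make the Amdahl's law point concrete, a small sketch; the fractions and speedup factor are arbitrary illustrations, not measurements of any real application:

[code]
# Amdahl's law: overall speedup when a fraction p of the runtime
# is accelerated by a factor s, and the rest runs unchanged.
def amdahl_speedup(p, s):
    return 1.0 / ((1.0 - p) + p / s)

# Even a 100x-faster kernel is capped by the unaccelerated remainder:
print(amdahl_speedup(0.50, 100.0))  # ~1.98x when only half the code benefits
print(amdahl_speedup(0.95, 100.0))  # ~16.8x even with 95% coverage
[/code]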
I saw a Q2-10 report on EETimes which mentioned a median 300 mm wafer price of $3200, up from $2900 in Q1. Are those numbers realistic? And are these price increases a factor when ordering the large volumes that AMD/NV do?
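To put that wafer price in perspective, here's a back-of-the-envelope die-cost sketch; the die sizes are roughly those cited for GF104-class and GF100-class chips, and the yields are pure assumptions for illustration:

[code]
import math

# Classic dies-per-wafer approximation with an edge-loss correction term.
def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

WAFER_PRICE = 3200.0  # USD, the Q2-10 median quoted above

# (label, die area in mm^2, assumed yield) -- the yields are made up.
for label, area, yld in [("~330 mm^2 die", 330, 0.6),
                         ("~530 mm^2 die", 530, 0.4)]:
    candidates = dies_per_wafer(300, area)
    cost = WAFER_PRICE / (candidates * yld)
    print(f"{label}: {candidates} candidates/wafer, ~${cost:.0f} per good die")
[/code]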
I think we can all agree that they are going to sell a boatload of GF104-based cards for ~$200. They're projecting a gross-margin improvement in the next quarter. So a couple thousand sales of professional cards are going to offset poor margins on millions of GTX 460s? Charlie didn't even go to those lengths trying to justify his misdirected nonsense.
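A quick blended-margin sanity check supports that: with made-up but directionally plausible volumes, prices, and per-segment margins (none of these are reported figures), a few thousand professional cards barely move the blend:

[code]
# Revenue-weighted gross margin across two segments.
# Every number below is an assumption for illustration only.
segments = [
    # (units sold, revenue per unit in USD, gross margin)
    (2_000_000, 200,  0.10),  # consumer, GTX 460-class
    (5_000,     2500, 0.70),  # professional, Tesla/Quadro-class
]

revenue = sum(units * price for units, price, _ in segments)
profit  = sum(units * price * margin for units, price, margin in segments)
print(f"blended gross margin: {profit / revenue:.1%}")  # ~11.8%
[/code]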
In fact, AMD apparently plans to introduce the Southern Islands chips between October 15th and 29th.
Even with poor margins, as long as the GTX 460 is a money maker (even if it's $10 a card), they will still be able to use the high-margin professional market and low-end GPU sales to help improve margins.
The real question comes with ATI. If they do decide to drop prices, is the GTX 460 still competitive when put up against the 5850? If their chip prices cascade downward and the 5830 becomes $175 and the 5770 goes further under $150, will Nvidia's new parts keep up?
I think the ball is fully in ATI's court, and it's up to ATI to decide if they are going to fumble or make a second touchdown out of the Cypress parts.
If I were Nvidia, I'd be happy if the GTX 460 could perform close to the GTX 470, because it's much cheaper to make, and I'd sell it for around the same price.

The answer to both is common sense. Nvidia would cannibalize the expensive GTX 470 with a fully blown GF104 clocked in the 750/1500 range, and AMD would (or should) really DIE to see their fully blown Cypress with a little higher clocks and 1920 ALUs best the GTX 480 in most games.
I'm not sure why you think it's based on R520, as it has no more in common with R520 than with anything else. It's more closely related to Xenos, which was 8 pixels per clock feeding a 64-wide vector. Even that's irrelevant, though, as there have been multiple generations since then, which is plenty of time to change the internals.

Bear in mind that ATI's rasteriser is still based on R520 (though the fixed-function interpolator unit has been deleted), which was only 16 fragments per hardware thread. Packing multiple triangles' fragments into a thread wasn't a priority back then.
AMD would (or should) really DIE to see their fully blown Cypress with a little higher clocks and 1920 ALUs best GTX 480 in most games.

That's the thing: not only is AMD not DYING over this, they don't even care. Why? I expected AMD to launch a GTX 480 killer as the 5890 (a newer stepping with higher clocks - a low-R&D, high-reward line that would spoil GTX 480 sales), yet AMD didn't bother. I guess they think it's enough to have the fastest card (even if it's a dual) and to sell all the chips they make. It's not what I would have done in AMD's place in this particular case, but it's their business, and until NV becomes competitive again, AMD is concentrating on Fusion, the mobile market, etc.
And I'll bet they're not. Based on a similar amount of information to what you provided.

I'm sorry, what are you talking about? When orders are huge from two long-term customers, the prices they get are very similar; any difference is negligible. That's common sense as well as common business practice; that's what they are talking about. Yet I have no clue what YOU are talking about. Maybe you want to say "we haven't seen their exact contracts", but common sense should remain, and if you argue against common sense, you had better come up with something more than... your opinion.