NVIDIA GF100 & Friends speculation

Their Quadro business is nearly pure profit. It has margins of 60-80%, which is twice that of the GeForce business. In the last quarter they made the same profit as the GeForce business on a quarter of its revenue. And that was without any growth in the Quadro segment.
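(A rough back-of-the-envelope sketch of that last claim, with made-up placeholder numbers since the post doesn't give the actual figures: equal profit on a quarter of the revenue works out to roughly four times the profit-per-revenue ratio, which is a different metric from the 60-80% gross margin quoted.)

# Illustrative only -- placeholder numbers, not NVIDIA's actual financials
geforce_revenue = 4.0                   # arbitrary units
geforce_profit = 1.0
quadro_revenue = geforce_revenue / 4    # "a quarter of the GeForce revenue"
quadro_profit = geforce_profit          # "the same profit"

print(geforce_profit / geforce_revenue)  # 0.25
print(quadro_profit / quadro_revenue)    # 1.0 -> ~4x the GeForce ratio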

Is this a statement about the profit of the professional line, or the tanking of the mainstream line?

-Charlie
 
Those 3DMark tests with pure bilinear filtering were bound by interpolation performance, and GT200 saw a gain there from the additional 8 SPs per SM.
Right, forgot about that. So, not sure where they get the big efficiency improvement compared to GT200.
Especially considering this:
Jawed said:
Fermi L1 texture cache is 12KB, just like GT200's.
Thought that was increased too. So maybe more internal bandwidth somewhere instead?

Nvidia said that the TMUs run at 1/2 hot clock but did not give more details.

On the other hand we have 256 L/S units @ hot clock, app state buckets with gains of up to 70%, and the statement that filtering quality will not be reduced.
Yeah, filtering running at hot clock would make things fit together very well indeed (though I wouldn't really call that an "efficiency improvement" in this case).
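(A minimal sketch of why the TMU clock matters so much here; the GF100 unit count and clocks below are assumptions for illustration, not confirmed specs.)

# Peak bilinear rate = TMU count x TMU clock, in gigatexels/s.
# GT200 numbers are the GTX 280's; the GF100 figures are assumed for illustration.
def bilinear_rate(tmus, clock_ghz):
    return tmus * clock_ghz

gt200 = bilinear_rate(80, 0.602)         # GTX 280: 80 TMUs @ 602 MHz
gf100_half_hot = bilinear_rate(64, 0.7)  # assumed: 64 TMUs at ~1/2 hot clock
gf100_hot = bilinear_rate(64, 1.4)       # the same TMUs at a full hot clock

print(gt200, gf100_half_hot, gf100_hot)  # ~48.2, ~44.8, ~89.6 GTex/s

On those assumed numbers, filtering at ~1/2 hot clock barely matches GT200 on paper, while filtering at hot clock would roughly double it, which is the kind of headroom the claimed texturing gains would need.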
 
If people here look at the Top 10 supercomputers around the world, ATI GPUs equip some of those machines.
Nvidia has none.

They had a big one for about a month, then they delivered real silicon, and it went away, or "The press release is still valid" in NV PR terminology.

-Charlie
 
They had a big one for about a month, then they delivered real silicon, and it went away, or "The press release is still valid" in NV PR terminology.
Wouldn't it be illegal (or at least financially irresponsible to shareholders) to claim that the Oak Ridge deal was still going when it isn't? It's a bottom-line affecting piece of information, after all. Do you have proof that it's really, truly, honestly not going ahead now?
 
I'll keep it short, forgive me.
• You keep ignoring the possibly different contracts at TSMC for AMD and Nvidia
• You forget the PCIe bridge
• A 70% yield would, in my book, mean much better stock availability than we are seeing right now. IOW: I doubt that number
• The HSF uses a vapor chamber, which should cost a bit more than your average heatpipe-plus-radial-blower combo
• The board design should use far more complex circuitry, because it has to switch two chips within microseconds

edit:
Probably add to that higher per-unit prices for even the same components, because of volume.

I have a fair idea of TSMC contract costs. Then again, if you are in a situation where there are massive shortages for ~2 years on 40nm wafer starts, you probably aren't going to give many discounts. ATI likely has the volume now on 40nm, so I really doubt that NV gets appreciably cheaper wafers.

You are right about the PCIe bridge, so add $15 or so, maybe 20.

I am aware of the cost of a vapor chamber, I know the two companies that make the OEM parts for ATI, and know many of the people involved at the cooler companies. ATI has almost the same wattage to cool as NV, and has a larger area to cool it over, both die and card. If you want to add $5 for ATI there, feel free. The official cost for the 4870X2 reference cooler was $15, and I was told at the time that some companies could do an internally designed vapor chamber for half that. The cost differential there is very low. Don't confuse consumer costs with OEM/ODM costs. You also have no idea what the production GF100 consumer card will use for a cooler, do you?

Board cost is mainly layer count. The power and signal pins to a single GF100 chip are likely much harder to route than those to a single Cypress. 384-bit memory vs 256 is a major factor, as is power PER GPU. The PCIe lane count will be equal either way. You have a slightly longer board for ATI, and a much more complex board for Nvidia. Given the two, the GF100 board is probably more expensive. Anyone want to count the layers on a Cypress vs Hemlock board? Same? I don't have a Hemlock here to do it with, but I would wager that Hemlock is a notably simpler board than GF100.

Anyone want to count the board layers on 3870 vs 3870x2, 4870 vs 4870x2 and 5870 vs 5970? Since they are all released products, can Dave tell us? Oh heavenly voice of ATI knowledge, answer our nattering technical minutiae.....

FWIW, the board layer count from my CES pics seems to indicate a 14-layer board, but it is hard to tell whether that is camera artifacting or a real count. It is a 14MP SLR, and I can clearly read the resistor numbers, so it is likely not artifacting, but it could be.

-Charlie
 
Wouldn't it be illegal (or at least financially irresponsible to shareholders) to claim that the Oak Ridge deal was still going when it isn't? It's a bottom-line affecting piece of information, after all. Do you have proof that it's really, truly, honestly not going ahead now?

Rys, do you honestly think Charlie has proof for any of the stuff he posts about Nvidia...

A little later we got on the phone with someone in Computing and Computational Sciences at Oak Ridge National Laboratory and they stated that the SemiAccurate article was inaccurate and had no further comment. Legit Reviews also contacted Andrew Humber over at NVIDIA who informed us that the original press release that was issued in September is still valid and that nothing has changed.

http://www.legitreviews.com/news/6999/
 
Nvidia used the term "up to" in a lot of their comments about performance, which explicitly indicates they're referring to maximums and not averages. If your point is that we shouldn't expect the numbers Nvidia is showing to represent the average case then...ummm...duh?

I agree with Sontin, it is getting a bit silly now. If you have specific opinions on the technical shortcomings of Fermi I'm sure we'd all love to hear them but we could do without the pointless fluff.

Thanks, so I guess that's the harmless quote that sparked the premise that Nvidia doesn't care about DX11? It's ironic really since they've shown that they care quite a lot and then some.

The point is that NV, AMD, ATI, and Intel routinely have press demos where they compare their product to the competition. This is common practice, and if you have a winning part, you allow the press to run wild, bring their own software, and generally do whatever the hell they want.

If you don't have a winning product, you control the comparisons, limit what can be done, and don't let people wander outside the guidelines if you let them do anything at all. Again, this is pretty common practice.

I have been at briefings where both were done (not the same briefing, obviously), and they invariably turn out to presage the performance of the product. You can also learn a lot from how the numbers are presented, and the way in which questions are answered, if at all.

Nvidia is doing one of these things.

-Charlie
 
The point is that NV, AMD, ATI, and Intel routinely have press demos where they compare their product to the competition. This is common practice, and if you have a winning part, you allow the press to run wild, bring their own software, and generally do whatever the hell they want.

If you don't have a winning product, you control the comparisons, limit what can be done, and don't let people wander outside the guidelines if you let them do anything at all. Again, this is pretty common practice.

I have been at briefings where both were done (not the same briefing, obviously), and they invariably turn out to presage the performance of the product. You can also learn a lot from how the numbers are presented, and the way in which questions are answered, if at all.

Nvidia is doing one of these things.

-Charlie

So controlled benchmarking of Conroe at Intel IDF 2006, 2-3 months prior to release, was just a myth then, was it?

They had a handful of benchmarks preloaded that we ran ourselves

Emphasis mine

http://www.anandtech.com/tradeshows/showdoc.aspx?i=2713

Please don't pretend that Intel were controlling info about Conroe because of fears about performance; it is standard practice to do so.

Don't get me wrong, it's quite clear Nvidia are having many difficulties getting Fermi out on time and at a reasonable price point, but given what they are trying to achieve, surely the delays will be worth it, as it gives Nvidia easy upgrade paths for quite a while without having to re-revolutionise, something which ATi will have to do at some point.
 
Are you saying you don't expect Fermi to even be competitive, i.e. less than ~30% over Cypress?

In most current benchmarks 30% would be good.

Ah, so all the praise for AMD being first to market with DX11 now turns into "the real DX11 design is coming" after leaks suggest that Fermi is better at DX11 features than Cypress... Now that is funny :)

I have always been sceptical about the way ATI worked on RV870. They made it wider, but they did not redo their design very much, only where needed. It was risky. If NV had not been late, ATI would have had a problem; with NV late and struggling to get the chip to an acceptable TDP, they were lucky.
 
He is wrong.

-Charlie

Haha, please don't treat me like an idiot Charlie. If you can come up with a credible source disproving his claims then please feel free to post it here, or write another hare-brained article about it.

You know, it could be that Nvidia have been developing a GF50/75(?) in tandem on a smaller die, to avoid harvesting expensive dies...
 
Wouldn't it be illegal (or at least financially irresponsible to shareholders) to claim that the Oak Ridge deal was still going when it isn't? It's a bottom-line affecting piece of information, after all. Do you have proof that it's really, truly, honestly not going ahead now?

Yes, but I won't go into any more detail. Feel free to call ORNL, or ask NV for a DIRECT answer about that. If you read Nate's piece here:

http://www.legitreviews.com/news/6999/

You can see some very curious wording, especially on Nvidia's part. On top of that, the wording of ORNL's response is a classic PR hair split of denying the whole thing based on one part, without being specific.

I did re-check with the source, and with others since, and my side was confirmed. Theo posted a rather curious piece on it (I won't link it, for the sake of your sanity), so you can be pretty sure NV PR was trying to plant seeds.

Call or write NV and ORNL, but ask SPECIFIC questions, anything else and they will dodge. I stand by what I wrote.

-Charlie
 
Seeing that I was there, and I was allowed to run a lot more stuff than that, I would say you are full of it.

-Charlie

Me, full of it? Give me a break. Could you expand on Fermi's software tessellator a bit?

The fact that Anandtech only reported on what they were allowed to report on, regardless of what they let you run, means they were withholding info from the public.
 
Surely you can't compare the variety of benches Intel offered to what we had yesterday?

Of course not. That would be ridiculous, I'm just saying Intel controlled info released to the public despite how successful Conroe would eventually become. The level of control is clearly different, but they still did it.

I give up. Honestly, if you guys want to keep eating the shit that Charlie feeds you then feel free...
 
I give up. Honestly, if you guys want to keep eating the shit that Charlie feeds you then feel free...

It's not about believing him or not. I take his info with a truckload of salt. But the level of information given yesterday does not allow you to draw any solid conclusions on the overall performance of the chip. The graphs provided are not supported by sufficient info (such as resolution) to draw any conclusions, really.

As I said earlier, it is strange to me that they didn't show any numbers from real games against Cypress if their product is supposedly much superior.
 