NVIDIA GF100 & Friends speculation

Take security, for example: lots of cameras plus a main/alarm screen - the number of displays adds up pretty quickly. The cost of the LCDs is quite low compared to a video card that has more than two outputs.

So the six-DisplayPort Eyefinity edition has a market in security? Wow, I had NOT thought about that one.

P.S. How many displays would a game developer *want to* run? I can see AMD trying very hard to corner the developer workstation market.
 
I wouldn't underestimate the CAD market: between Inventor and SolidWorks alone you have 2 million+ users, and then you have all the other CAD packages, Max, Maya, etc. This is a user base of many millions who seem to have not too much problem buying $2k-a-pop graphics cards every 2 years or so.

I've seen CAD departments at automotive companies buy truckloads of high-end Quadro cards ($4,000+ a pop) without hesitation, mainly because the CAD software in use is generally unsupported on anything but "professional" cards.

Exactly. Then you have a multi-million-dollar market for on-air live TV presentations and offline presentations that all require professional cards.
 
So, looking at the Radeons first, ATI got something like 1.8 times the performance of the 4870 out of the 5870 (is that about right?) in 15 months (4870 in June 08, 5870 in September 09). For NV to retain the advantage of the 280 (out in June 08), a GF100 coming out after 26 months (this March) would have to be ~2.77 times as fast as the 280 (1.8^(26 months / 15 months)). That would be *good* performance: keeping up with ATI's development speed and retaining their advantage from 2008.

Uh, why can't I edit... Got the 26 months mixed up. It should be 21 months, corresponding to 2.28 times the performance of 280. Sorry about that.
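Just to spell out the arithmetic, a minimal sketch; the 1.8x-in-15-months rate is the estimate assumed in the post, not a measured figure:

```python
# Extrapolate ATI's assumed improvement rate (~1.8x in 15 months,
# HD 4870 Jun '08 -> HD 5870 Sep '09) over other time spans.
def scaled_speedup(months, base_speedup=1.8, base_months=15):
    return base_speedup ** (months / base_months)

# "Good" case: GF100 keeps NV's 2008 lead over the GTX 280
# (Jun '08 -> Mar '10, i.e. the corrected 21 months).
print(round(scaled_speedup(21), 2))  # ~2.28x the GTX 280

# "Acceptable" case: just the 6 extra months of development beyond the HD 5870.
print(round(scaled_speedup(6), 2))   # ~1.27x the HD 5870
```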
 


And that's what I'm saying: it's stupid to do that. Do you know what drivers were used? What motherboard was used? Etc., etc. It's dumb to even look at that, because now if we use the "up to 2x" from the marketing slides it's slower; oh no, wait, if we use the hardware review that you used before

http://www.hardware.fr/articles/770-20/dossier-amd-radeon-hd-5870-5850.html

now it's faster by 10%.

How the hell can you estimate anything from these numbers?
 
For what it's worth, the architecture really looks to me a lot like 4 GPUs that just happen to have a common command processor, L2, ROPs, memory and general gubbins.
Now you sound just like NV PR (in fact, they say exactly the same thing, almost to the letter, in one of their PDFs).
 
You don't. You know it, I know it, Aaronspink knows it. But Quadro is the corporate standard and that's not going to change anytime soon.

Well, for the NVS that's true, but that market is much smaller than the advertising + TV world. And compared to the CAD world, well, yeah, it's very small.
 
So the six-DisplayPort Eyefinity edition has a market in security? Wow, I had NOT thought about that one.

P.S. How many displays would a game developer *want to* run? I can see AMD trying very hard to corner the developer workstation market.

I could easily see how having 3-4 monitors would be a benefit for a game developer:
1 3D display
1 debug/perf data display
1 IDE display
0/1 Office work display

Certainly I miss the 3 displays I had back in the day for IC design.

The reality is that until you've used a multi-display setup in a business environment, you don't realize how much it improves efficiency, especially in the technical industries. Certainly for anything that involves code and debugging, 2+ monitors make a world of difference.
 
Peddie had NV doing $214M worth of professional desktop graphics cards (Quadros) in Q2. Last quarter NV reported professional up 11% quarter on quarter, which would put NV's desktop professional revenue somewhere around $240M last quarter ($214M × 1.11 ≈ $238M), and margins here are 80%+; trust me, or take a look at the pricing and what you get for it yourself.
Meanwhile ATI (AMD's graphics division) did a total of $306M in revenue in Q3 for its whole business.

From nvidia's Q2 report, page 23:
Quadro+Tesla: revenue $116M, income $41M
GeForce: revenue $372M, loss $144M (the $120M bumpgate charge is probably included)

That's what compares to AMD's graphics division; the equivalent of the MCP business ($237M/$53M) sits in AMD's computing division.
 
You don't. You know it, I know it, Aaronspink knows it. But Quadro is the corporate standard and that's not going to change anytime soon.

You either need multiple display cards or a card that supports more than 2 displays. Until the 5xxx series, >2-display graphics cards have almost always been sold as part of the professional display lines. Someone pointed out Matrox, which has subsisted this entire time on basically selling 3+ display cards to markets like these, at prices they couldn't get in any other market, using 5-8 year old ICs. Nvidia and ATI have also had these solutions for quite some time.

Some of the sub-markets are fairly specialized, like transit/power monitoring stations, and some are more mainstream, like the financial sector and the like, but they all share basically the same requirement: lots of information displayed in a constant and understandable/readable presentation. In almost all cases graphics performance isn't an issue, as they aren't doing anything really taxing by today's standards graphically; they just have a LOT of information to display.
 
From nvidia's Q2 report, page 23:
Quadro+Tesla: revenue $116M, income $41M
GeForce: revenue $372M, loss $144M (the $120M bumpgate charge is probably included)

That's what compares to AMD's graphics division; the equivalent of the MCP business ($237M/$53M) sits in AMD's computing division.

"ATi" made a loss of $1 million in the second quarter.
 
According to the analyses I have seen, yes it does. If you go by raw silicon costs, Fermi is ~$150 (104 die candidates per $5000 wafer at 30% yield (I am being overly generous here; far less for a 512-shader part) gets you ~$160). Add in $10 for packaging and testing, $20 for a PCB (lots of routing for 384-bit memory, but cheaper than GT280's), $20 for board components, $25 for the HSF, $10 for assembly, and $39 for 12 GDDR5 chips ($3.25 each for low bins; if they up the bin, add $6 or so). That gets you to $274 raw cost: no FOB, no profit, no assembly failures, etc.

Then look at Hemlock. Silicon costs about $45 per Cypress die ($5000 wafer with 160 die candidates (it is actually higher) at 70% yield ~= $44.6; real numbers are better than that), so $90 for the pair. Add $7.50 per die, $15 total, for packaging and testing (no lid, much smaller dies, far fewer traces, better dot pitch, less power, much lower package board count, etc.), $25 for the PCB (probably cheaper than GF100's, but let's be negative), $30 for board components, $25 for the HSF, and $15 for assembly. Add 16 GDDR5 chips (at $3.50 each - one price tier up from Fermi's) for $56 and you are at $256.
I'll keep it short, forgive me.
• You keep ignoring the possibly different contracts AMD and Nvidia have at TSMC.
• You forget the PCIe bridge.
• 70% yield would, in my book, mean much better availability than we are seeing right now. IOW: I doubt that number.
• The HSF uses a vapor chamber, which should cost a bit more than your average heatpipe-and-radial-blower combo.
• The board design should need far more complex circuitry, because it has to switch two chips within microseconds.

edit:
Probably add to that higher per-unit prices for even the same components, because of volume.
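(For reference, a minimal sketch of the arithmetic in the quoted estimate; every dollar figure is that poster's assumption rather than a verified cost, and the objections above would shift several of them.)

```python
# Cost per good die from wafer price, gross die candidates per wafer, and yield.
def die_cost(wafer_price, candidates, yield_rate):
    return wafer_price / (candidates * yield_rate)

print(die_cost(5000, 104, 0.30))      # Fermi: ~$160 per die (the post rounds to ~$150)
print(2 * die_cost(5000, 160, 0.70))  # Hemlock: two Cypress dice, ~$89 (the post uses $90)

# Itemised card totals exactly as given in the post:
# silicon + packaging/test + PCB + board components + HSF + assembly + GDDR5
fermi   = 150 + 10 + 20 + 20 + 25 + 10 + 12 * 3.25   # -> 274.0
hemlock =  90 + 15 + 25 + 30 + 25 + 15 + 16 * 3.50   # -> 256.0
print(fermi, hemlock)
```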
 
You either need multiple display cards or a card that supports more than 2 displays. Until the 5xxx series, >2-display graphics cards have almost always been sold as part of the professional display lines. Someone pointed out Matrox, which has subsisted this entire time on basically selling 3+ display cards to markets like these, at prices they couldn't get in any other market, using 5-8 year old ICs. Nvidia and ATI have also had these solutions for quite some time.

Some of the sub-markets are fairly specialized, like transit/power monitoring stations, and some are more mainstream, like the financial sector and the like, but they all share basically the same requirement: lots of information displayed in a constant and understandable/readable presentation. In almost all cases graphics performance isn't an issue, as they aren't doing anything really taxing by today's standards graphically; they just have a LOT of information to display.

I know, and I agree completely, but when you claim that Eyefinity will kill these markets practically overnight, you're not taking into account the purchasing cycles, software compatibility validation, support overhead and, yes, even brand loyalty in these industries.

Eyefinity is a cost-effective and superior solution, but it's going to take (a lot of) time to make serious inroads. It's really the same situation as with OpenCL and DirectCompute vs. CUDA, and Bullet vs. PhysX. The emergence of a cool new technology doesn't mean the established one will just disappear instantly. Too many people have an investment of time, experience or money in the incumbent for that to happen as quickly as we might like from a purely technological perspective.
 
No need to use third-party data when you can get the numbers straight from nVidia's annual and quarterly reports, where they are broken out as the PSB (professional) segment.

http://phx.corporate-ir.net/phoenix.zhtml?c=116466&p=irol-reportsOther

For the last reported quarter nvidia had revenue of $116 million with operating income of $41 million, for a division operating margin of ~35%.

Thanks for that, I didn't realize they broke it down. No gross margins, but the operating results are quite interesting. They are here for last quarter too:

http://www.sec.gov/Archives/edgar/data/1045810/000104581009000036/q310form10q.htm

$129M and $48M for Q3, both still far below where they were in 2008 when professional was providing the large majority of Nv's operating income.
 
Hi,

I've read most of the thread, and while the performance speculation is interesting, I haven't seen anyone really take a stand on what kind of performance would be *acceptable* and what would be *good*. The way I see it, acceptable would be to put out something above the Radeon performance curve; good would be something that retains their advantage from the earlier generation. Of course, I'm not talking about raw performance here, but performance relative to release date (since a 5670 would have blown away the competition a few years ago).

I'm going to look at the 4870 and the 280, since the 4890 and 285 were minor updates.

So, looking at the Radeons first, ATI got something like 1.8 times the performance of the 4870 out of the 5870 (is that about right?) in 15 months (4870 in June 08, 5870 in September 09). For NV to retain the advantage of the 280 (out in June 08), a GF100 coming out after 26 months (this March) would have to be ~2.77 times as fast as the 280 (1.8^(26 months / 15 months)). That would be *good* performance: keeping up with ATI's development speed and retaining their advantage from 2008.

For *acceptable* they would only have to improve upon the 5870 by what the six extra months of development are worth. That is, GF100 should have at least ~1.27 times the performance of the 5870 (1.8^(6 months / 15 months)). That performance would keep them competitive, with performance between ATI's current and projected next generation. However, they would have lost the advantage they had in 2008.

Did I get some of my facts wrong? I took the dates from Wikipedia, so I'm not 100% on them. Also, I couldn't find a very good chart of average performance between the 4870 and 5870, so the 1.8 was an estimate based on what I could find.

Your thoughts: which of these (or neither) sounds more probable, and is this a reasonable way to look at things?

The problem is that ATI has yet to reveal its DX11-generation architecture; RV870 is an evolution of the previous generation. When R900 comes, it might make Fermi look rather lame.

Given that Fermi is hardly able to beat an evolutionary ATI chip, I have little confidence in the future of that line. Especially if you consider the production problems and the TDP, which mean one Fermi GPU is more expensive than two RV870s. More expensive for NV and more expensive for the user, while lacking Eyefinity.
 