Nvidia GT300 core: Speculation

What is it with all the armchair experts here? Nvidia has been operating a highly profitable business for years and years. The low volume high-end segment halo effect is just one aspect of this.

Are you guys saying they should be looking to AMD for advice on how to run a business, just because they managed to pull off one promising videocard generation now?
No, they should look to their own strategy of how to be a highly profitable business. With the exceptions of G80 and NV30, they've never gone for much more expensive GPUs than their competition in any generation, and they've been most successful when they had substantially better products than their competition at a given production cost (G7x, G8x, G9x before RV770).

Anyway, these specs look rather stupid given that they aren't self-consistent.
 
No mention of DP...

What do you guys think: will this generation (consumer cards) have double-precision FP? The current rumour is that GT200b will not have it.

I find the current Nvidia DP implementation with the 1:12 speed penalty quite useless anyway, as in most cases it can even be emulated faster.
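
For anyone wondering what "emulated" means here: the usual trick is double-single (float-float) arithmetic built from error-free transformations like Knuth's two-sum. A minimal illustrative sketch in Python, with numpy float32 standing in for the GPU's native single precision - nothing NVIDIA-specific, just the general idea:

```python
import numpy as np

F = np.float32  # single precision, standing in for the GPU's native float

def two_sum(a, b):
    """Knuth's error-free transformation: returns s, e with a + b == s + e exactly."""
    s = F(a + b)
    v = F(s - a)
    e = F(F(a - v) + F(b - F(s - v)))
    return s, e

def ds_add(a_hi, a_lo, b_hi, b_lo):
    """Add two double-single numbers, each stored as a (hi, lo) pair of float32."""
    s, e = two_sum(a_hi, b_hi)
    e = F(e + F(a_lo + b_lo))
    return two_sum(s, e)  # renormalize so |lo| is tiny relative to hi

# 1/3 split into a float32 pair keeps roughly 48 bits of precision
x = 1.0 / 3.0
x_hi = F(x)
x_lo = F(x - float(x_hi))
y_hi, y_lo = ds_add(x_hi, x_lo, x_hi, x_lo)
print(float(y_hi) + float(y_lo), "vs", 2.0 / 3.0)
```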
 
Yes, but didn't people like PdM (was he really banned from RWT?!?) and others report in the past that those 400+ mm^2 IPF chips cost Intel something like $150 (more or less) to actually make?

(I remember an old RWT discussion about manufacturing costs of Itanium 2 chips)

Paul chose to absent himself from discussions at RWT; he was not banned. He is welcome back at any time as long as he can be polite.

Large IPF chips cost Intel relatively little to produce because:
1. They have a lot of cache with redundancy, so point defects don't reduce ASP
2. They are produced on older process technology that is fully depreciated

In contrast, GPUs are mostly logic where point defects will cause a chip to be binned down and lower the ASP.

GPUs are also produced on leading edge process technology which TSMC is depreciating through the prices charged to AMD/NV (so they have to pay indirectly).
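
To put toy numbers on the redundancy point: with a simple Poisson yield model only the defect-sensitive (unrepairable) area really counts, and that fraction is small for a cache-heavy IPF die but close to 1 for a logic-heavy GPU. Everything in this sketch is an illustrative guess, not real foundry data:

```python
import math

# Toy Poisson yield model for the redundancy argument above.
defect_density = 0.004  # defects per mm^2 (assumed)

def fully_good_fraction(die_area_mm2, defect_sensitive_fraction):
    """Fraction of dies with zero defects in the area that can't be repaired away."""
    return math.exp(-defect_density * die_area_mm2 * defect_sensitive_fraction)

# ~400 mm^2 IPF die: mostly cache with spare rows/columns, so only a small
# fraction of the area is really hurt by a point defect (fraction assumed).
print("IPF-like die:", round(fully_good_fraction(400, 0.3), 2))  # ~0.62

# ~576 mm^2 GPU: mostly logic, so nearly every point defect means the die
# gets binned down and sold at a lower ASP rather than at full price.
print("GPU-like die:", round(fully_good_fraction(576, 0.9), 2))  # ~0.13
```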

But let's suppose that a 500mm^2 chip costs around $150 to produce. That's fine when you sell the chip for $600; you've still got nice margins.

But when you sell a whole system (board, cooler, chip, DRAM) for $450, the margins are much, much worse. The DRAM is probably $100 alone, the cooler is going to cost a lot simply due to weight (say $20), the PCB has >10 layers and probably costs about $30 to $50, so right there your margins have been cut down tremendously.
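
Spelling that out with the ballpark figures above (every number is a rough assumption, not actual BOM data):

```python
# Gross-margin comparison using the ballpark figures in this post.

# Selling the bare chip at the high end
chip_cost, chip_price = 150, 600
print(f"chip only  : {(chip_price - chip_cost) / chip_price:.0%} gross margin")  # 75%

# Selling a whole board for $450
board_price = 450
bom = {
    "GPU die": 150,
    "DRAM": 100,
    "cooler": 20,
    "PCB (>10 layers)": 40,  # midpoint of the $30-$50 guess
}
board_cost = sum(bom.values())
print(f"whole board: {(board_price - board_cost) / board_price:.0%} gross margin")  # ~31%
```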

DK
 
Always good to hear from you DK, thanks for stopping by! Can you comment any further on Paul's decision to leave? As stubborn as the guy was, I probably learned more from reading his articles on RWT than just about anywhere else (Johan de Gelas' work on Chip-Architect being the other).

Sorry for the OT, and thanks for the IPF analysis! Without Paul around to do it for us it's good to have someone else step up to the plate ;)

Just a quick note on graphics card costs though:
You missed a couple steps of the chain, however - NV buys GT200 "kits" (DRAM, GPU, PCB, cooler) for perhaps $200, then sells them to AIB partners for perhaps $300, who then sell to resellers/distributors/OEMs for perhaps $400, who then resell for $450-$600. Note: all numbers are my rough estimates. I could be off by +/- $50 at any point of the chain.
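
Laid out as a chain, with the same caveat that every figure is just my rough guess:

```python
# The chain of hand-wavy estimates from above, step by step.
chain = [
    ("kit cost to NV (GPU + DRAM + PCB + cooler)", 200),
    ("NV -> AIB partner",                          300),
    ("AIB -> reseller/distributor/OEM",            400),
    ("distributor/OEM -> end price (low end)",     450),
]
for (_, prev_price), (step, price) in zip(chain, chain[1:]):
    markup = price - prev_price
    print(f"{step:40s} ${prev_price} -> ${price}  (+${markup}, {markup / prev_price:.0%})")
```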
 
That'd be laughable if true.

Maybe, but at this point I don't think DX10.1 is expected from them. If it's indeed the case that the base G8x architecture doesn't lend itself well to certain DX10.1 functionality, there really isn't a compelling reason for them to make substantial changes until DX11.
 
Small note, but the die is 600mm^2, give or take, not 576.
 
Small note, but the die is 600mm^2, give or take, not 576.

I'd say 576 is within the margin of error from 600, wouldn't you? Remember the speculation about RV770, a much smaller chip, the rumored size of which ranged by as much as 40mm^2.
 
If it's indeed the case that the base G8x architecture doesn't lend itself well to certain DX10.1 functionality, there really isn't a compelling reason for them to make substantial changes until DX11.
I've always been curious as to what functions those really are. Nvidia - no wonder - is very tight-lipped about it, and no one has ever mentioned which features, and more importantly why, DX10.1 should be giving them such a hard time.
 
If you ask me, the 'GT300' (if indeed that is how it will be named) will simply be a 55nm die shrink of GT200 with no major differences other than the obvious: power consumption, clock speeds and YIELDS.

There is a good chance they might include GDDR5, however, allowing them to lower the number of memory controllers and vastly increase the memory clock speed. I'd sooner expect this on an 8800GTX -> 9800GTX style makeover though.
 
If you ask me, the 'GT300' (if indeed that is how it will be named) will simply be a 55nm die shrink of GT200 with no major differences other than the obvious: power consumption, clock speeds and YIELDS.

There is a good chance they might include GDDR5, however, allowing them to lower the number of memory controllers and vastly increase the memory clock speed. I'd sooner expect this on an 8800GTX -> 9800GTX style makeover though.

Reducing the number of crossbar MC channels would also require a reduction in the number of ROP partitions. NV would have to re-architect their ROP partitions to keep the same fillrates.
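
Back of the envelope, since the two are tied together: the GTX 280 figures below are the public specs, while the half-width GDDR5 case is purely the speculation above.

```python
# Rough numbers for the MC-channel / ROP-partition coupling discussed above.

def bandwidth_gb_s(channels, bits_per_channel, data_rate_gbps):
    """Memory bandwidth in GB/s for a crossbar of 64-bit channels."""
    return channels * bits_per_channel * data_rate_gbps / 8

def pixel_fill_gpix_s(rop_partitions, rops_per_partition, core_clock_ghz):
    """Peak pixel fillrate, one pixel per ROP per clock."""
    return rop_partitions * rops_per_partition * core_clock_ghz

# GTX 280: 8 x 64-bit channels of ~2.2 Gbps GDDR3, 8 ROP partitions of 4 ROPs, 602 MHz
print("GT200 bandwidth:", round(bandwidth_gb_s(8, 64, 2.214), 1), "GB/s")      # ~141.7
print("GT200 fillrate :", round(pixel_fill_gpix_s(8, 4, 0.602), 1), "Gpix/s")  # ~19.3

# Hypothetical half-width GDDR5 setup: 4 channels at ~4.5 Gbps matches the bandwidth,
# but with one ROP partition per channel the pixel fillrate halves unless the
# partitions themselves are beefed up.
print("GDDR5 bandwidth:", round(bandwidth_gb_s(4, 64, 4.5), 1), "GB/s")        # 144.0
print("GDDR5 fillrate :", round(pixel_fill_gpix_s(4, 4, 0.602), 1), "Gpix/s")  # ~9.6
```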
 
Reducing the number of crossbar MC channels would also require a reduction in the number of ROP partitions. NV would have to re-architect their ROP partitions to keep the same fillrates.

Assuming the abundant fillrate of GT200 is of any use to begin with.......
 
Assuming the abundant fillrate of GT200 is of any use to begin with.......

True, but the implication is to reduce to 4 or 6 ROP partitions (G92 or G80 level, respectively), and I would think that would be quite the step backward...

edited because eye kant spel
 