Nvidia GT300 core: Speculation

What part of my argument falls to pieces? I note that you didn't respond to anything else. I will assume that you therefore agreed with the point I was making.

Actually I didn't respond to the rest because I thought your rant was too ridiculous to warrant any serious attention.
The part that falls to pieces is exactly this:
Performance per unit area has a direct effect on the end costs seen by the consumer. To argue against this is pointless - in the market in question here, performance largely dictates worth, and area largely dictates cost. Performance per unit area is therefore the most direct metric that dictates the consumer's final costs, unless GPU manufacturers are to operate as a charity and donate all their cards to the consumers.

You got it backwards.
Performance per unit area only has a direct effect on the manufacturing costs.
It *only* has an effect on the end costs seen by the consumer *if* the GPU manufacturers choose to use the same profit margin.
Since this condition is not met, the end-user does not see a direct effect of performance per unit area on the price. It is indirect by default, and virtually non-existent in the current context of nVidia vs ATi hardware.
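To put some completely made-up numbers on it (a rough sketch only - the wafer price, yield, BOM and margins below are pure assumptions, not real figures):

#include <stdio.h>

/* Illustrative only: none of these figures are real TSMC or board costs. */
int main(void)
{
    const double wafer_cost = 5000.0;   /* assumed price of one 300mm wafer         */
    const double wafer_area = 70000.0;  /* rough usable area of a 300mm wafer, mm^2 */
    const double yield      = 0.6;      /* assumed fraction of good dies            */
    const double other_bom  = 80.0;     /* PCB, memory, cooler, etc. (assumed)      */

    const double die_area[2] = { 300.0, 500.0 };  /* "small" vs "big" GPU, mm^2          */
    const double margin[2]   = { 0.40, 0.15 };    /* gross margin each vendor *chooses*  */

    for (int i = 0; i < 2; i++) {
        double dies_per_wafer = wafer_area / die_area[i];
        double die_cost       = wafer_cost / (dies_per_wafer * yield);
        double board_cost     = die_cost + other_bom;
        double street_price   = board_cost / (1.0 - margin[i]);

        printf("die %.0f mm^2: die cost $%.0f, board cost $%.0f, street price $%.0f\n",
               die_area[i], die_cost, board_cost, street_price);
    }
    return 0;
}

With those (invented) inputs the bigger, less area-efficient die actually ends up at the lower street price, simply because its vendor chose a thinner margin. That's the whole point: die area drives manufacturing cost directly, but the price the consumer sees only follows it if the margins are held constant.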

So despite your lengthy post, your premise was false. Hence your argument falls to pieces.

We could also argue that performance per unit area is only relevant under certain circumstances... Namely, if the cost of the other components (PCB, cooling solution, memory chips etc) outweigh the costs of the GPU, then the performance per unit area of the GPU won't be the most significant metric the consumer sees.

They might choose to operate with lower margins for a period while market forces drive that necessity.

Exactly, so you contradict yourself.

Did it happen because of the release of large parts with low performance/mm2, or was it driven by small parts with high performance per mm2?

Ah, there's the hidden agenda at last!
We all need to bow down to ATi for bringing down nVidia's prices!
Be that as it may, it still doesn't validate the claim that performance per mm2 is something that actually affects the end-user at this point.
Let's not confuse issues in our endless love for ATi and their wonderful effect on the GPU market, shall we?

Now, excuse me while I trace your post history, to see your joyful posts about nVidia's first G92 bringing prices of high-end DX10 parts into the mainstream for everyone.
 
There's a difference between GPGPU support and OpenCL though.
ATi already supported an early GPGPU API back in the X1000-days.
But I don't think anyone expects the X1000 to get OpenCL support, because the hardware just isn't up to it.

But what of the HD 2000 and HD 3000? They're older than the HD 4000 but newer than the X1000.
 
But what of the HD 2000 and HD 3000? They're older than the HD 4000 but newer than the X1000.

Exactly... I wonder if they will get support.
So far, the signs indicate that they don't (the GPGPU stuff coming from ATi recently is for HD4000 only).
If I'm right about ATi adding local storage to the HD4000 for OpenCL (can't remember where I read it, must be out there somewhere), then it would be impossible for ATi to support it on anything older.
I'm quite sure that people expect it though, since these cards are the ATi-equivalents of nVidia's 8000/9000 series, which WILL support OpenCL.
 
Exactly... I wonder if they will get support.
So far, the signs indicate that they don't (the GPGPU stuff coming from ATi recently is for HD4000 only).
If I'm right about ATi adding local storage to the HD4000 for OpenCL (can't remember where I read it, must be out there somewhere), then it would be impossible for ATi to support it on anything older.
I'm quite sure that people expect it though, since these cards are the ATi-equivalents of nVidia's 8000/9000 series, which WILL support OpenCL.

Local storage was added in R7xx; see the ISA for details:

http://developer.amd.com/gpu_assets/R700-Family_Instruction_Set_Architecture.pdf
 
Performance per unit area only has a direct effect on the manufacturing costs.
It *only* has an effect on the end costs seen by the consumer *if* the GPU manufacturers choose to use the same profit margin.
You can't cut costs indefinitely as you can't operate without making your money back. If you could make parts for free then your point is valid, but you can't. In a competitive market if you want to stay in business then at some point costs must get passed on. Margins are not something that you can just choose to cut indefinitely - they are a finite resource.

Since this condition is not met, the end-user does not see a direct effect of performance per unit area on the price. It is indirect by default, and virtually non-existent in the current context of nVidia vs ATi hardware.
I won't debate this with you anymore. Market pricing is driven by multiple considerations. The consumer is directly affected by the prices they see in the market. Similarly performing products are likely to sell for similar prices, regardless of relative manufacturing cost, if that is what the market will sustain. The final price point is driven by some factor, and that factor clearly affects the consumer.
We could also argue that performance per unit area is only relevant under certain circumstances... Namely, if the cost of the other components (PCB, cooling solution, memory chips etc) outweigh the costs of the GPU, then the performance per unit area of the GPU won't be the most significant metric the consumer sees.
And if you had chosen to present that as your position, things would have been much simpler, because I could just have agreed with you that all of these cost factors are relevant to the consumer (since they all contribute to performance per unit cost, which the customer sees) and we could have moved on. Unfortunately, that was not the original argument you were making.
 
You can't cut costs indefinitely as you can't operate without making your money back. If you could make parts for free then your point is valid, but you can't. In a competitive market if you want to stay in business then at some point costs must get passed on. Margins are not something that you can just choose to cut indefinitely - they are a finite resource.

This is obvious, but why do you keep bringing it up?
We'll just have to wait and see what nVidia does with GT300, won't we?

Namely, you currently classify the G80 as a low performance/mm2 part.
That may well be true today... but upon its introduction it completely redefined both performance and features. It took ATi ages to catch up. During that time, G80 was the absolute king, and nVidia's margins were under no threat whatsoever.
So big chips can very well be successful and have high profit margins. They just need to be the performance leader.

I don't know enough about GT300's manufacturing cost or performance levels to get a good idea of where it is going to fall, but I'm not going to write it off just because it may be a large die.

And if you had chosen to present that as your position, things would have been much simpler, because I could just have agreed with you that all of these cost factors are relevant to the consumer (since they all contribute to performance per unit cost, which the customer sees) and we could have moved on. Unfortunately, that was not the original argument you were making.

I wasn't making any argument, really. I was asking why performance per mm2 was significant to the end-user, because someone else brought up that point, but failed to specify any reasons why.
We both seem to agree now that it indeed is not significant to the end-user in today's market.
I didn't change my position, you did. You originally tried to argue that die size is pretty much the only factor in the cost of a videocard for the end-user. I never claimed that. And this statement is simply another example of why GPU cost isn't as important as you originally made it out to be.
 
Yes, a big die with relatively low perf/mm² works for GT200 or G80. It may also fail: R600, and the canned GT212, due to heavy current leakage.
Maybe a big GT300 will be possible because of TSMC's 40nm process maturation?
 
Since those old GPUs (well RV6xx and later, i.e. specifically excluding R600) can do scatter and gather, AMD could use video memory to emulate local memory.

Obviously that'd be S-L-O-W...
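Just to illustrate the idea (a hypothetical OpenCL C sketch, not anything AMD has actually shipped): the emulated version simply carves its "local" array out of a global scratch buffer, so every access that should stay on-chip goes out to video memory instead, and the barriers have to fence global memory too.

/* Normal work-group reduction: tmp lives in on-chip local memory (LDS).
   Assumes the work-group size is a power of two. */
__kernel void reduce_local(__global const float *in,
                           __global float *out,
                           __local float *tmp)
{
    const size_t lid = get_local_id(0);
    const size_t lsz = get_local_size(0);

    tmp[lid] = in[get_global_id(0)];
    barrier(CLK_LOCAL_MEM_FENCE);

    for (size_t s = lsz / 2; s > 0; s >>= 1) {
        if (lid < s)
            tmp[lid] += tmp[lid + s];
        barrier(CLK_LOCAL_MEM_FENCE);
    }
    if (lid == 0)
        out[get_group_id(0)] = tmp[0];
}

/* Same reduction with "local" storage emulated in video memory:
   scratch must hold (number of work-groups * work-group size) floats,
   and every tmp[] access is now a round trip to DRAM. */
__kernel void reduce_emulated(__global const float *in,
                              __global float *out,
                              __global float *scratch)
{
    const size_t lid = get_local_id(0);
    const size_t lsz = get_local_size(0);
    __global float *tmp = scratch + get_group_id(0) * lsz;

    tmp[lid] = in[get_global_id(0)];
    barrier(CLK_GLOBAL_MEM_FENCE);

    for (size_t s = lsz / 2; s > 0; s >>= 1) {
        if (lid < s)
            tmp[lid] += tmp[lid + s];
        barrier(CLK_GLOBAL_MEM_FENCE);
    }
    if (lid == 0)
        out[get_group_id(0)] = tmp[0];
}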

Jawed
 
6 months from tapeout to launch?
If all goes well with A2... If not... then 212 cancellation will start to look like a mistake.

I guess now the Charlie-hate-bandwagon will flip. ;)
Charlie is perfectly fine as long as he simply passes on what rumours he hears; the problems begin when he tries to analyse them. ;)

How many respins (if any) did GT200 have? I can't remember...
They released the usual A2 version.

June tape-out for a Q4 release is indeed a tad aggressive; as a reference point, GT200 taped out in December and was released in June.
However, it's a known fact that NV have been sitting on it for some time and released it only when AMD was ready with RV770...

Yes, a big die with relatively low perf/mm² works for GT200 or G80. It may also fail: R600, and the canned GT212, due to heavy current leakage.
Maybe a big GT300 will be possible because of TSMC's 40nm process maturation?
Both R600 and GT212 hardly 'failed' because of current leakage.
 
I'd personally guess that both IHVs chose not to release any 40nm performance parts of the current generation on purpose, and not necessarily because of any rumoured problems with TSMC's 40nm manufacturing process. If the latter theory were true, it raises the question of why both IHVs seem to be aiming for final tape-outs of higher-complexity chips on 40nm in Q2.
 
Since those old GPUs (well RV6xx and later, i.e. specifically excluding R600) can do scatter and gather, AMD could use video memory to emulate local memory.

Obviously that'd be S-L-O-W...

VERY slow. I doubt that ATi will even bother to try such an implementation.
I guess it's better to not support OpenCL at all on 'old' hardware than to support it and get a reputation for craptastic GPGPU performance.

But it reaches back to the point I made, that nVidia has focused a lot on GPGPU since G80.
So, is GT300 going to just use the same basic GPGPU architecture as GT200 (which is 'good enough' for OpenCL and DXCS as it is), and add DX11 features... or will they develop their GPGPU architecture even further?
 
Both R600 and GT212 hardly 'failed' because of current leakage.

I thought that's what held back the R600 (along with its other weaknesses), or am I mixing it up a bit with the Phenom (which could have fared better against the Q6600)?

As for the GT212, was it cancelled because of its lateness and because it was unneeded?
 
I doubt anyone really knows why the GT212 was canned, if it really was. The theory that both IHVs have been concentrating mostly on next-generation parts lately (partially also due to the financial crisis) makes sense.
 
I thought that's what held back the R600 (along with its other weaknesses), or am I mixing it up a bit with the Phenom (which could have fared better against the Q6600)?
R600 was badly designed right from the start and was too slow even when it came out. So the failure here isn't in its late arrival, but more in the architecture itself.

As for the GT212, was it cancelled because of its lateness and because it was unneeded?
No one knows. But I'd bet that even if any 40G problems were on the list of reasons for the 212 cancellation, they were somewhere at the end.
If you go with the theory of chips being cancelled because of problems with the new process, you should eventually expect all future AMD and NV chips to be cancelled and both companies to shut down at some point. I mean, why would they cancel GT212 but go ahead with the much more complex GT300 on the same process? This makes no sense at all if GT212 was cancelled because of fatal 40G problems.

I personally see several possible reasons for 212 cancellation:
1. They may have brought some GT30x chip forward instead of GT212.
2. They may try to avoid a GTX280 vs 9800GX2 scenario this time.
3. They may have found out that AMD won't bring anything faster than GT200b until RV870, which would make a GT212 release pointless if GT214 turns out to be more or less equal to GT200b in performance (although if there's nothing to fill GT212's place by the time RV870 arrives, they'll be in a difficult situation).
Whatever their logic was, I'm sure they thought about product lines first and any technical problems later.
 
But it reaches back to the point I made, that nVidia has focused a lot on GPGPU since G80.
So, is GT300 going to just use the same basic GPGPU architecture as GT200 (which is 'good enough' for OpenCL and DXCS as it is), and add DX11 features... or will they develop their GPGPU architecture even further?
The performance per fixed-function unit of GT200 is appalling. 80 TMUs and 32 ROPs can barely beat RV770's halved configuration for both these things. NVidia's math is also considerably slower per transistor, particularly double-precision.
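Rough back-of-envelope with the commonly quoted specs (ballpark only - raw unit count times clock ignores efficiency, and the die sizes are the usual published figures, so treat the exact numbers with suspicion):

#include <stdio.h>

/* Commonly quoted specs for GTX 280 (GT200) and HD 4870 (RV770). */
struct gpu { const char *name; int tmus, rops; double core_mhz, die_mm2; };

int main(void)
{
    const struct gpu gpus[2] = {
        { "GT200 (GTX 280)", 80, 32, 602.0, 576.0 },
        { "RV770 (HD 4870)", 40, 16, 750.0, 256.0 },
    };

    for (int i = 0; i < 2; i++) {
        double gtex = gpus[i].tmus * gpus[i].core_mhz / 1000.0; /* GTexels/s */
        double gpix = gpus[i].rops * gpus[i].core_mhz / 1000.0; /* GPixels/s */
        printf("%s: %.1f GTex/s (%.3f/mm^2), %.1f GPix/s (%.3f/mm^2)\n",
               gpus[i].name, gtex, gtex / gpus[i].die_mm2,
               gpix, gpix / gpus[i].die_mm2);
    }
    return 0;
}

Even on that crude measure RV770 comes out roughly 40% denser per mm² on both raw texel and pixel rate, despite having half the units.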

Moore's law is about using your transistors smartly as well as stamping out more of them. NVidia put so much effort into CUDA they forgot the graphics. Or maybe they just assumed that AMD and Intel graphics were irrelevant?

So, NVidia's got plenty of work to do on the graphics fundamentals. Particularly if they have to fit a competitive GPU into ~600mm2. It seems GT200 only just fit within that limit, which is why it wasn't fast enough at launch and got laughed out of court for daring to come in with a $650 price tag. Though you could argue that being so huge meant it couldn't be clocked high enough.

For what it's worth I suspect GDDR5 and 40nm will enable NVidia to make GT300 fairly spectacular.

Jawed
 
A chip being ready for some time has nothing to do with when the production ramp began.
No, it means their production ramp was brought forward. We saw the same again with the paper launch of the GTX275.

Still, you offer absolutely zero evidence for your "fact". What is "some time", anyway? A week?

Jawed
 