Nvidia GT300 core: Speculation

It's amazing how Charlie can take interesting raw info and turn it into incoherent gibberish. First of all, has he still not learned that A11 is the first tape-out for both NV's and AMD's GPU divisions, and not A0?

Secondly, he clearly doesn't understand that you can start mass production before the samples are back. So here's a more correct timeline if they're willing to pay the big bucks to get it done:
- Week 0: Tape-out, initiate hot batch production for a few wafers + mass production for many wafers stopping at a certain stage of the process, allowing for a metal respin.
- Week 8: Hot batch samples received (A11).
- Week 10: Testing completed, bugs found.
- Week 12: Bugs fixed via metal layer respin, finish mass production for already started wafers. Also hot batch a few for driver devs etc.
- Week 18: Hot batch samples received (A12), do testing & final driver R&D.
- Week 20: Mass production samples received, ready to ship to AIBs.

Shock horror, that's exactly mid-October and I didn't even have to fudge any of my numbers to get there; they are far from accurate, but I'm pretty confident they're not too far off either. Of course, there's a risk with this: if there's a major bug and a silicon-layer respin is required to fix it, they're effectively toast and they've just got potentially tens of millions of dollars' worth of wafers to send to the trashcan. That's one of the reasons why this kind of thing isn't usually done, but clearly NV seems to think GT300 is so important to the company's future that they need to get it out ASAP.

Oh BTW, here's a hint to Charlie: 8 weeks from June 1 is not August 1. It's July 27th. Sigh... (of course, you could argue that's compensated by the fact he assumed June 1 instead of, say, June 15)
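For what it's worth, here's a quick sanity check of both dates in Python. The June 1, 2009 tape-out date and the week offsets are just the assumptions from my speculative timeline above, not confirmed figures.

Code:
from datetime import date, timedelta

# Assumed tape-out date: June 1 is the rumoured figure, the year is inferred from context.
tape_out = date(2009, 6, 1)

# Charlie's "8 weeks after June 1": that's July 27, not August 1.
print(tape_out + timedelta(weeks=8))  # 2009-07-27

# Milestones from the speculative schedule above (week offsets are rough guesses).
milestones = [
    (8, "A11 hot batch samples back"),
    (10, "testing completed, bugs found"),
    (12, "metal respin, finish the held wafers"),
    (18, "A12 hot batch samples back"),
    (20, "mass production samples, ready to ship to AIBs"),
]
for week, event in milestones:
    print(f"Week {week:2d} ({tape_out + timedelta(weeks=week)}): {event}")
# Week 20 lands on 2009-10-19, i.e. mid-October.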
 
I spoke to an nVidia representative just today... said that GT300 will launch around the same time as G80 did, so October or November sounds believable...
 
I spoke to an nVidia representative just today... said that GT300 will launch around the same time as G80 did, so October or November sounds believable...
Yes, but G80 had first samples *back from the fab* in June, AFAIK. So this is definitely more aggressive.
 
That's one of the reasons why this kind of thing isn't usually done, but clearly NV seems to think GT300 is so important to the company's future that they need to get it out ASAP.
Maybe they've heard that AMD really is a quarter ahead of them?

The thing is, ~early November really is when we should be expecting GT300, regardless of tape-out rumours etc. So does this June tape-out rumour have legs? It doesn't make sense for NVidia to have planned for tape-out to be so late. GT300 has been too long in design, surely.

Jawed
 
Maybe they've heard that AMD really is a quarter ahead of them?
Oh, sure, given how fast info goes around in this industry I'm sure they heard that a long time ago. It certainly should be a key reason why they'd be willing to do what I described, I agree (although I'd argue not the only one).

So does this June tape-out rumour have legs? It doesn't make sense for NVidia to have planned for tape-out to be so late. GT300 has been too long in design, surely.
Heh, new generations with massive changes are always late. NV implied in a CC about a year ago, I think, that they wanted GT300 out in Q3 and some derivatives out in Q4, but then again they also revealed back in 2006 that they had wanted G80 out in 2005, sooo :p
 
How much of a factor would it be that both AMD's and Nvidia's simulator farms now have vastly more TFLOPS of computing power available on the same budget than the last time each developed a completely new architecture (i.e. G80 and R600 respectively, though that might be arguable)?

Of course, the floorplans they have to simulate are much more complex now, at least from a mass-of-transistors point of view. But then, many blocks are just copy and paste: if you've got one SIMD right, you're basically set to go for the others too.

Would this make a shorter ramp time feasible, just by getting their simulators to a new level of complexity?


edit:
On another note: the well-respected German website heise.de (behind c't magazine) just posted that AMD's first DX11 offerings might be out as early as this summer. That's apparently what they've learned from one of the board partners (industry sources, they say).
http://www.heise.de/newsticker/Graf...ps-von-AMD-bereits-Ende-Juli--/meldung/136350
 
Of course, the floorplans they have to simulate are much more complex now, at least from a mass-of-transistors point of view. But then, many blocks are just copy and paste: if you've got one SIMD right, you're basically set to go for the others too.
The rumour is that NVidia's making a big chip again, much like GT200-big (i.e. G80-big or even bigger). It seems reasonable to assume that bigger chips simply take longer.

In contrast, we're expecting RV870 to be substantially smaller...

Then there's a question of architectural change. I'm not wholly convinced NVidia needs a "massive" architectural change, but I do think NVidia will be making much more of a change than AMD. RV770 just seems closer, not least because of its D3D10.1 capabilities.

Still don't have a good feel for whether the LDS/GDS architecture of RV770 is capable of fully supporting thread local data share in D3D11-CS. I think it's technically capable, but...

There's still the generally unanswered question of why NVidia's had such a miserable time of all process nodes from 65nm onwards.

Jawed
 
Maybe they've heard that AMD really is a quarter ahead of them?

That would be a curious thing to react to. That would imply their original timeline was based on some assumption of AMD getting to DX11 around the same time as they do. Silly thing to rely on.
 
Well, the only thing I think is credible is a ~November launch of GT300. The stuff about tape-out/rush may be true, but that requires something to have gone wrong and/or a significant design change.

Jawed
 
It's amazing how Charlie can take interesting raw info and turn it into incoherent gibberish. First of all, has he still not learned that A11 is the first tape-out for both NV's and AMD's GPU divisions, and not A0?

A0 is the generally recognized term for first silicon. ATI/NV might use A11, everyone else starts at A0.

Secondly, he clearly doesn't understand that you can start mass production before the samples are back. So here's a more correct timeline if they're willing to pay the big bucks to get it done:
- Week 0: Tape-out, initiate hot batch production for a few wafers + mass production for many wafers stopping at a certain stage of the process, allowing for a metal respin.
- Week 8: Hot batch samples received (A11).
- Week 10: Testing completed, bugs found.
- Week 12: Bugs fixed via metal layer respin, finish mass production for already started wafers. Also hot batch a few for driver devs etc.
- Week 18: Hot batch samples received (A12), do testing & final driver R&D.
- Week 20: Mass production samples received, ready to ship to AIBs.

You have to have incredibly high confidence to order production volume of wafers off of A0 Tape-in. Generally companies will only do this with derivative parts.

2 weeks for baseline testing before A1 is incredibly short unless you are fixing a DoA issue.
 
Yeah, you crash a project when it's falling behind schedule. You don't just go crazy because of some external event that you have no control over. But this is Nvidia, after all. When's the last time they did something that made sense...
 
How much of a factor would it be that both AMD's and Nvidia's simulator farms now have vastly more TFLOPS of computing power available on the same budget than the last time each developed a completely new architecture (i.e. G80 and R600 respectively, though that might be arguable)?

None. More compute cycles are eaten up by slower, more complicated models with more cases to cover. Generally, validation is doing the best it can just to complete in the same time as the prior design, even given an order-of-magnitude increase in raw computing throughput.
 
Still don't have a good feel for whether the LDS/GDS architecture of RV770 is capable of fully supporting thread local data share in D3D11-CS. I think it's technically capable, but...

It is not, AFAIK. In one of the GDC docs there's a specific downlevel version of compute shader mentioned: 4.1, as opposed to 4.0 for Nvidia. D3D11 requires 32 KiB of TLS (I'm sure that's what they mean, although they all write 32 kB...); for the downlevel versions, 16 KiB will suffice.
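To put those sizes into per-thread terms, here's a rough back-of-the-envelope sketch; the 256-thread group size is an arbitrary example, not something taken from the GDC docs.

Code:
# Per-thread share of thread local storage (TLS) for a compute shader thread group.
# The 256-thread group size is an arbitrary illustrative choice.
KIB = 1024
THREADS_PER_GROUP = 256

for level, tls_kib in [("full D3D11 CS", 32), ("downlevel CS 4.x", 16)]:
    bytes_per_thread = tls_kib * KIB // THREADS_PER_GROUP
    print(f"{level}: {tls_kib} KiB TLS -> {bytes_per_thread} bytes per thread")
# 32 KiB -> 128 bytes/thread, 16 KiB -> 64 bytes/thread at 256 threads per group.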
 
Might not be. At least that's what the nVidia guy suggested - of course he couldn't say anything important...
:LOL: well maybe that's the significant design-change we're looking for, eh?

Is this NVidia guy being more chatty with you than you've experienced with previous major NVidia GPUs before their launch?

Jawed
 
It is not, AFAIK. In one of the GDC docs there's a specific downlevel version of compute shader mentioned: 4.1, as opposed to 4.0 for Nvidia. D3D11 requires 32 KiB of TLS (I'm sure that's what they mean, although they all write 32 kB...); for the downlevel versions, 16 KiB will suffice.
I should have been more precise: I meant in terms of functionality, not in size.

Jawed
 
None. More compute cycles are eaten up by slower, more complicated models with more cases to cover. Generally, validation is doing the best it can just to complete in the same time as the prior design, even given an order-of-magnitude increase in raw computing throughput.
(my bold)
That's exactly what I meant: better/slower/more thorough models for simulation and validation. Maybe one or both are confident enough in their new simulation models to rush from tape-out to production a bit more than usual.
 
(my bold)
That's exactly what I meant: better/slower/more thorough models for simulation and validation. Maybe one or both are confident enough in their new simulation models to rush from tape-out to production a bit more than usual.

No, the models are just as "good" as they were before. It's just that they are modeling more logic, with more overhead in the validation environment because of that extra logic. Think of it this way: back in the 40s it took weeks to months to do the calculations for the first atom bomb. With many, many orders of magnitude more calculation performance available, it actually takes longer to do those calculations today, because they are calculating a lot more things on much more complicated designs.

The confidence level of validation hasn't really changed much over the past decade or so. The only real improvement has come from taking some things that used to have to be exhaustively tested and validating them with formal methods instead. But formal validation is fairly limited in its scope and doesn't really solve the connectivity/interaction side of validation.
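As a toy illustration of the "extra compute gets eaten" point, here's a sketch with numbers that are entirely made up for the sake of argument; none of the factors come from real projects.

Code:
# Toy model: why a big jump in simulation-farm throughput doesn't shrink
# validation wall-clock time. All factors below are invented for illustration.
farm_speedup = 10.0     # assumed raw compute increase vs. the previous project
model_slowdown = 4.0    # assumed cost of simulating a bigger, more detailed design
extra_coverage = 2.5    # assumed growth in cases/interactions to cover

relative_runtime = (model_slowdown * extra_coverage) / farm_speedup
print(f"Validation wall-clock vs. previous project: {relative_runtime:.1f}x")
# With these made-up factors the result is 1.0x -- no faster than before,
# because the bigger model and wider coverage eat the extra throughput.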
 
:LOL: well maybe that's the significant design-change we're looking for, eh?
Well, they expect the architecture to last them another three to four years, just like G80. It seems to me that even nVidia now understands that in the high end multi-GPU is the only way to go, and whoever makes their GPUs run better in SLI/CF (ATI has failed to impress me yet :nope: ) will win.
Jawed said:
Is this NVidia guy being more chatty with you than you've experienced with previous major NVidia GPUs before their launch?
He's from PR, so the only thing he is sure about and can tell freely is that their products will kick ass.
 