Nvidia GT300 core: Speculation

A0 is the generally recognized term for first silicon. ATI/NV might use A11, everyone else starts at A0.
Well, they aren't the only ones in my experience, but you're almost certainly right that most companies start with a 0. Doesn't mean Charlie is excusable for still not getting it right, though...

You have to have incredibly high confidence to order production volumes of wafers off of an A0 tape-in. Generally, companies will only do this with derivative parts.
I said you only started the process and stopped it in time so that you can still send the metal layers to the fab later (i.e. a respin). The chips that'll actually come out of the fab will be A1 (or A12 in NV's case), but that's not really the point from a schedule POV.

2 weeks for baseline testing before A1 is incredibly short unless you are fixing a DoA issue.
I included 4 weeks for testing+fixing, but yeah, that might be too short. It's hard to get real-world data on that kind of thing, sadly...
 
(my bold)
That's exactly what I meant: better/slower/more thorough models for simulation and validation. Maybe one or both are confident enough in their new simulation models to rush from tape-out to production a bit more than usual.

In my experience, the pre-tape out verification time for a chip has been going up instead of down. Larger farms make life easier, but they also make designers more wasteful.

E.g. when you had 2 machines, you only reran block-level simulations for the blocks that changed. With 20 machines, you run full regression suites every weekend. With 200, you run them daily, no matter what happened.

In addition, you may throw more random tests at the problem, or write a few more directed tests. But in practice, manpower is the limiting factor, not farm capacity.
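
Just to put toy numbers on that "selective vs. blanket regression" point - a hypothetical Python sketch with made-up block names and runtimes, not any vendor's actual flow:

# Hypothetical sketch: selective vs. blanket regression on a simulation farm.
# Block names and per-block runtimes are invented for illustration only.
BLOCK_SIM_HOURS = {"shader_core": 6, "rop": 2, "mem_ctrl": 3, "pcie": 1, "display": 1}

def regression_hours(changed_blocks, machines, selective=True):
    """Wall-clock hours to rerun block simulations, assuming perfect parallelism."""
    blocks = changed_blocks if selective else BLOCK_SIM_HOURS.keys()
    return sum(BLOCK_SIM_HOURS[b] for b in blocks) / machines

print(regression_hours({"rop"}, machines=2))                   # 1.0 h: targeted rerun
print(regression_hours(set(), machines=200, selective=False))  # 0.065 h: full daily sweep

With a small farm the targeted rerun is all you can afford; with a big one the full sweep is so cheap that nobody bothers being selective - which is exactly the wastefulness described above.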
 
Well, they expect the architecture to last them for another three to four years, just like G80. Seems to me that even nVidia now understands that in the high-end, multi-GPU is the only way to go, and whoever makes their GPUs run better in SLI/CF (ATI has failed to impress me yet :nope: ) will win.
Is it? I don't think we should make such claims based on the GT200 vs. RV770 X2 scenario when, in every previous single- vs. multi-GPU matchup, the single GPU came out victorious.
If anything, it says a lot about how bad the GT200 design is, not that multi-GPU is the only way to go in the high-end.

I've been thinking about this for some time, and it's not that multi-GPU in the high-end is the only way to go, it's that you have to have an answer to what your competition is selling.
So the best way to answer AMD's mGPU high-end strategy, while still maintaining your own big-single-core high-end strategy, would be to... launch a GT30x middle-class GPU first!
And then you can decide where you're going from there - you may build your own mGPU card to counter AMD's high-end, you may finish that big single-GPU card in time to counter AMD's mGPU solution, or you may even do both (like right now, where we have GTX 285 and GTX 295 at the same time).
But in any case it just makes so much sense to launch your new middle-class GPU against your competition's middle-class GPU first that I wouldn't be surprised at all if that's exactly what NV is doing with GT3xx -- and with a GT30x middle-class GPU coming out first you really don't need that GT212 anymore...
 
Is it? I don't think we should make such claims based on the GT200 vs. RV770 X2 scenario when, in every previous single- vs. multi-GPU matchup, the single GPU came out victorious.
It is my opinion that from now on, multi-GPU solutions will always win over monolithic ones (assuming similar manufacturing costs for both competing products). The current situation has a lot to do with GT200 not being very good. But generally, performance scales roughly linearly with transistor count, while yields fall off non-linearly as die size grows, so the cost per good die rises faster than linearly. The only problem of today's multi-GPU solutions is software. A monolithic GPU means performance is guaranteed in any game; multi-GPU means living in uncertainty.
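
To illustrate that yield argument with a toy model (Poisson yield with a made-up defect density, not real foundry numbers), here's a minimal Python sketch comparing one big die against two dies of half the area:

import math

def poisson_yield(area_mm2, defects_per_mm2=0.002):
    """Toy Poisson yield model: probability that a die of the given area is defect-free."""
    return math.exp(-defects_per_mm2 * area_mm2)

# Silicon cost per unit of 'performance', assuming performance ~ total die area.
monolithic = 500 / poisson_yield(500)       # one ~500 mm^2 die
two_die    = 2 * 250 / poisson_yield(250)   # two ~250 mm^2 dies
print(monolithic, two_die)                  # ~1359 vs. ~824 mm^2 of wafer per good unit

Under these assumptions the two-die configuration needs roughly 40 percent less wafer area per good "performance unit" - which is the non-linearity argument in a nutshell, before the software/AFR problem is accounted for.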

But maybe this problem can be solved by adding special multi-GPU logic that would make the whole solution work more like a single-GPU system. Yeah, that sounds like Hydra... perhaps future solutions will use this "dispatch processor" model? I don't know, but I feel these are areas worth exploring.
So the best way to answer AMD's mGPU high-end strategy, while still maintaining your own big-single-core high-end strategy, would be to... launch a GT30x middle-class GPU first!
And then you can decide where you're going from there - you may build your own mGPU card to counter AMD's high-end, you may finish that big single-GPU card in time to counter AMD's mGPU solution, or you may even do both (like right now, where we have GTX 285 and GTX 295 at the same time).
But in any case it just makes so much sense to launch your new middle-class GPU against your competition's middle-class GPU first that I wouldn't be surprised at all if that's exactly what NV is doing with GT3xx -- and with a GT30x middle-class GPU coming out first you really don't need that GT212 anymore...
Hmm, that's an interesting point. For AMD, this strategy proved useful. On the other hand, people kind of expect a performance bomb, not a middle-class part.
Maybe they'll launch a middle-class GPU alongside an SLI-on-a-stick version that would claim the performance crown and create positive publicity to bolster single-GPU card sales.
 
And then you can decide where you're going from there - you may build your own mGPU card to counter AMD's high-end, you may finish that big single-GPU card in time to counter AMD's mGPU solution, or you may even do both (like right now, where we have GTX 285 and GTX 295 at the same time).

Well, for that you'd need to keep power in check. It's kind of hard to see NV doing that while making an uber-GPU in the first place - why would they put constraints on themselves? GTX 285/295 is kind of a freak thing, since it's not often that you can launch a chip and then its shrink six months later.
 
If we're going to be relying on SLI and CF for high-end boards from now on, this one will be the last I'll ever buy.
 
and with a GT30x middle-class GPU coming out first you really don't need that GT212 anymore...
Shouldn't GT212 have arrived by now?

How big is GT200b if shrunk to 40nm? Does that count as a mid-range GPU then? Surely, by Q4 2009, GTX 285 performance is what we'll be calling "mid-range", so that would give us a good idea of a "mid-range" GT3xx.
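
As a back-of-the-envelope answer (assuming a roughly 470 mm² GT200b at 55 nm - an assumed figure - and ideal optical scaling, which real shrinks never achieve since I/O and analogue parts don't shrink):

# Naive optical-shrink estimate; real layouts scale worse than this.
gt200b_area_55nm = 470.0           # mm^2 -- assumed figure for GT200b
scale = (40.0 / 55.0) ** 2         # ideal area scaling from 55 nm to 40 nm
print(gt200b_area_55nm * scale)    # ~249 mm^2

Something around 250 mm² would indeed look like a classic mid-range die.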

Jawed
 
With a 200-225 watt chip, you're going to have problems going mGPU-on-a-stick if you want to stay inside the currently accepted power budget of 300 watts (I don't know if that's really the maximum of the PCIe spec). You either have to clock it down, bin your chips really well for lower voltage, etc.
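
A rough sketch of how far clocks and voltage would have to drop to fit two such chips under 300 watts, using the usual P ∝ f·V² approximation for dynamic power (leakage ignored, numbers purely illustrative):

# P ~ f * V^2 for dynamic power; leakage ignored, numbers purely illustrative.
chip_tdp     = 225.0                          # W, single-chip card as discussed above
board_budget = 300.0                          # W, accepted single-board limit
power_target = (board_budget / 2) / chip_tdp  # each chip must come down to ~0.67x power
v_scale = 0.90                                # assume binning buys a ~10% lower voltage
f_scale = power_target / v_scale ** 2         # the rest has to come from the clocks
print(power_target, f_scale)                  # ~0.67x power -> ~0.82x clock at 0.9x voltage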

40nm apparently has made HD 4850 performance levels about 30 percent cheaper power-wise (given the rumored 80-watt TDP for the HD 4770 and the proposed 110 watts of the HD 4850).

If the real next-gen GPUs aren't going to deliver significantly more bang per watt (which has been AMD's primary goal for about 1.5 years now, mind you), I really doubt that mGPU single-connector cards are the way to go for the future.

Take the HD 4870 X2 with a TDP of 289 watts and a maximum measured power consumption of about 370 watts: make this 70 percent and re-invest the spare power back into performance... you do the math.
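
Doing that math explicitly with the numbers from this post (treating the 4770/4850 TDP ratio as the 40nm power saving, which is of course a simplification):

x2_measured_watts = 370.0                    # measured max of the HD 4870 X2 (from this post)
power_ratio_40nm  = 80.0 / 110.0             # HD 4770 vs. HD 4850 TDP, ~0.73
print(x2_measured_watts * power_ratio_40nm)  # ~269 W for today's X2 performance

That's roughly 270 watts for today's X2 performance, leaving on the order of 100 watts of headroom within the same measured envelope to re-invest in performance.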

Now, that doesn't apply to real multi-card multi-GPU, apparently, but IMO the el-cheapo solutions for "we have the longest bars, err, fastest card" could be over soon.
 
40nm apparently has made HD 4850 performance levels about 30 percent cheaper power-wise (given the rumored 80-watt TDP for the HD 4770 and the proposed 110 watts of the HD 4850).
Though even as a 650 MHz preview model the 4770 was pretty much matching the 4850, and the retail models are apparently clocked at 750 MHz, so they can apparently pack a bit more than just 4850-level performance into that 80 W.
 
40nm apparently has made HD 4850 performance levels about 30 percent cheaper power-wise (given the rumored 80-watt TDP for the HD 4770 and the proposed 110 watts of the HD 4850).
How much of that gain is due to GDDR5, which is, additionally, clocked really low?

Jawed
 
Though even as a 650 MHz preview model the 4770 was pretty much matching the 4850, and the retail models are apparently clocked at 750 MHz, so they can apparently pack a bit more than just 4850-level performance into that 80 W.
If you're referring to the test at Guru3D, it was most of the time closer to the HD 4830 than to the HD 4850. I factored in a guesstimate of the clock increase of the final retail model, but refused to simply apply it one hundred percent, due to my suspicion that the HD 4770 is going to be a bit bandwidth-starved with only 60 percent more bandwidth than the 4670.

How much of that gain is due to GDDR5, which is, additionally, clocked really low?
Jawed
Frankly, I don't know, since I have yet to see a convincing GDDR5 implementation really delivering on the promise of low-power operation. The HD 4670 proved that AMD's PowerPlay 2.0 can be quite effective, and later partner models of the HD 4850 also proved to have quite appealing idle modes. But the HD 4870 as well as the HD 4890 are not promising indicators of GDDR5 being very power efficient.

=>CarstenS: Who says the (hypothetical future) chips ought to have a 200-watt TDP by themselves?
Nobody - but if they aren't in the range of 150-200 watts, they'll hardly be besting the performance levels of current offerings, IMO.
 
Frankly, I don't know, since I have yet to see a convincing GDDR5 implementation really delivering on the promise of low-power operation. The HD 4670 proved that AMD's PowerPlay 2.0 can be quite effective, and later partner models of the HD 4850 also proved to have quite appealing idle modes. But the HD 4870 as well as the HD 4890 are not promising indicators of GDDR5 being very power efficient.
I thought you were talking about maximum power, not idle power, i.e. 80W versus 110W.

Jawed
 
If we're going to be relying on SLI and CF for high-end boards from now on, this one will be the last I'll ever buy.

Too true. Well, unless they can make multi-GPU gaming actually work just as well, instead of super-patchy performance that's great in some games and crappy in others.
 
I'm secretly hoping that D3D11 makes AFR-multi-GPU break :LOL:

For good.

Now, hurry up and get them D3D11 games out.

Jawed
 
I thought you were talking about maximum power, not idle power, i.e. 80W versus 110W.

Jawed
I am/was. But you seem to be under the impression that GDDR5 is more power efficient in its current usage. That I doubt - and the high power draw of current implementations, even in idle mode, which is apparently due to the GDDR5 usage, is what made me say that I'd like to see a convincing implementation of GDDR5 where it delivers on its theoretical low-power promises.
 