Nvidia GT200b rumours and speculation thread

Question about ROPs/MSAA/Bandwidth

That story seems like little more than baseless and uneducated speculation.
With a 512bit bus, a straight shrink while adding GDDR5 is completely redundant, since GT200's primary performance limitation doesn't lie in its native memory bandwidth.
Not entirely true... It's already been demonstrated that bandwidth is the limiting factor on MSAA performance (a limitation of the ROP design).
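
To put some very rough numbers on that (purely my own napkin math with assumed figures -- nothing measured): with n-sample MSAA the ROPs have to read and write up to n colour and n depth/stencil samples per pixel during blending and Z-testing, so framebuffer traffic scales roughly with the sample count. Something like:

Code:
# Back-of-the-envelope MSAA bandwidth estimate -- illustrative assumptions only.
def framebuffer_traffic_gb_per_s(width, height, fps, samples,
                                 bytes_colour=4, bytes_depth=4, overdraw=3.0):
    pixels_per_s = width * height * overdraw * fps
    # read + write of both colour and depth for every stored sample
    bytes_per_pixel = 2 * samples * (bytes_colour + bytes_depth)
    return pixels_per_s * bytes_per_pixel / 1e9

for aa in (1, 4, 8):
    print(f"{aa}x: ~{framebuffer_traffic_gb_per_s(2560, 1600, 60, aa):.0f} GB/s of raw ROP traffic")

Colour/Z compression claws a lot of that back, but at 2560x1600 with 8xAA it's easy to see how the ROPs end up waiting on memory rather than on their own throughput.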

Could you explain this a little?

I'm really interested in the relation between bandwidth & AA.
 
They could have positioned the G200 at any point in the market they wanted before the RV770... after there was competition, their hands were tied. Intentionally delaying a product in a competitive and innovative market is hubris.
 
Was it 'intentionally delayed'? I mean, do we have any facts showing that it was ready to release earlier than it actually was?
 
If they intentionally delayed it, their decision-making was fucked.
Why? You can make more money by selling G92-based cards for $500 instead of selling GT200-based cards for the same price. From a short-term business perspective this was the right decision while AMD wasn't able to compete with G92.
In the long term they've underestimated RV770, but let's suppose that GT200 was released instead of G92GX2 -- what would it change now, when RV770 is on the market? It would still come down to the same situation, but they would have lost all the margins they got on high-priced G92 cards.
The thing is -- NV doesn't have an answer to RV770. And it doesn't matter when GT200 would've been released -- it's a bad answer anyway. And since they've underestimated RV770, the real problem is that they still have no answer to it -- they thought that the G92b/GT200 combination would be the answer, but it's not working out and I doubt that GT200b will change this situation. So the best thing they can do right now is try to make a good competitor to RV870 -- it likely won't be as hard for them to predict its price/performance as it was with RV770. And that means that they have no reason to put much work into getting GT200b to market right now.
 
True, but while an earlier launch would have improved GT200 sales, it would likely have cut into their 9800GTX/GX2 sales in the meantime and thus would have barely helped NVidia overall.
$650 GTX280 + $500 GTX260 + $400 9800GTX (G92) + $300 9800GT (G92) all before Christmas, with G80 EOL.

9600GT just after at $200-250.

8800GTX owners mostly had no upgrade path. I suspect the delay in GT200 has meant that NVidia lost sales to 8800GTX upgraders - some have definitely gone with ATI. They were a captive market.

NVidia's margins fell the instant RV770 came out. Early GT200 sales were gimped in large part by the huge price cuts on G92-based products.
Yep, with all that stuff released before Christmas there'd have been no gimping - and no 9800GX2 to muck things up either.

Jawed
 
8800GTX owners mostly had no upgrade path. I suspect the delay in GT200 has meant that NVidia lost sales to 8800GTX upgraders - some have definitely gone with ATI. They were a captive market.

Jawed
Yep, money to spend and no upgrade had me waiting until the GTX280s were released.
 
That's almost certainly GT200a, yes. As for GT200b, I thought a little about what we could expect if NVIDIA actually delivered for once -- which I'll admit I'm semi-skeptical about, since they haven't really met expectations for a *single* chip since G80 (where they obviously beat them by a lot). So here goes:

GT200a 65nm: ~595mm²
GT200a 55nm: ~485mm² [Does not exist; for comparison's sake]
GT200b 55nm: ~450mm²

GTX 290 X2: 216 SPs, 72 TMUs, 24 ROPs, 384-bit 2.0GHz GDDR5, 700MHz Core, 1750MHz Shader, $599
GTX 290: 240 SPs, 80 TMUs, 24 ROPs, 384-bit 2.3GHz GDDR5, 700MHz Core, 1750MHz Shader, $399
GTX 270: 216 SPs, 72 TMUs, 20 ROPs, 320-bit 2.0GHz GDDR5, 600MHz Core, 1500MHz Shader, $299
GTX 250: 168 SPs, 56 TMUs, 16 ROPs, 256-bit 1.8GHz GDDR5, 560MHz Core, 1400MHz Shader, [OEM-Only]

Honestly, that's nothing that AMD couldn't beat easily in terms of perf/dollar, but at least it'd be competitive and would take the performance crown back. As for it being GDDR5 versus GT200's GDDR3: I would definitely expect 768MiB of GDDR5 to be both cheaper and faster than 1024MiB of GDDR3, especially when you consider it'd also reduce the PCB/cooling costs a bit.
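
For what it's worth, those memory configs work out to roughly the following peak bandwidths -- assuming the GDDR5 figures above follow the usual double-pumped naming (so "2.3GHz GDDR5" = 4.6Gbps per pin, the same way RV770's "1.8GHz" GDDR5 is 3.6Gbps); if they were meant as effective rates instead, halve the GDDR5 numbers:

Code:
def peak_bw_gb_s(bus_bits, gbps_per_pin):
    # peak bandwidth = bus width in bytes * per-pin data rate
    return bus_bits / 8 * gbps_per_pin

print(peak_bw_gb_s(512, 2.2))  # GTX 280, 1.1GHz GDDR3          -> ~141 GB/s
print(peak_bw_gb_s(384, 4.6))  # speculated GTX 290             -> ~221 GB/s
print(peak_bw_gb_s(384, 4.0))  # speculated GTX 290 X2, per GPU -> ~192 GB/s
print(peak_bw_gb_s(320, 4.0))  # speculated GTX 270             -> ~160 GB/s
print(peak_bw_gb_s(256, 3.6))  # speculated GTX 250             -> ~115 GB/s

So everything down to the GTX 270 would sit comfortably above GT200's ~141 GB/s despite the narrower buses.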
 
I doubt a "b" variant chip would have a radical change in memory configuration and GDDR type. GDDR5 seems extremely unlikely to me.

This was a GPU planned for summer 2008 release, if GT200 had released in 2007Q4 - purely as a 55nm refresh, much like G92->G92b.

Also, if its original plan was for summer 2008 launch then that would be too early on the GDDR5 curve. GDDR5 isn't ratified as a standard till September, so AMD has taken a fairly big risk - although arguably less than with GDDR4.

I don't think NVidia's planning on a GDDR5 GPU until the end of this year or even later. The sensible thing would be to pull that forward if at all possible - which might be why there's a vague hint of GT200b being skipped entirely in favour of something else. Something, because of GDDR5, that should be higher performance and with a much smaller die.

Jawed
 
I doubt a "b" variant chip would have a radical change in memory configuration and GDDR type. GDDR5 seems extremely unlikely to me.

This was a GPU planned for summer 2008 release, if GT200 had released in 2007Q4 - purely as a 55nm refresh, much like G92->G92b.
But the original GDDR5 timing already made clear that 4Q07/1Q08 would be too early for GDDR5 (or at least not at sane pricing), while 3Q08 would be just fine. At the same time, GT200a was always going to be used for Tesla and GT200b likely never was, and 4GB of GDDR5 would have been very expensive even for a $1.5-2K GPU...

Also, if its original plan was for summer 2008 launch then that would be too early on the GDDR5 curve. GDDR5 isn't ratified as a standard till September, so AMD has taken a fairly big risk - although arguably less than with GDDR4.
It is very important to remember that it's not just GT200 that was delayed; GDDR5 was too! ;)

I don't think NVidia's planning on a GDDR5 GPU until the end of this year or even later. The sensible thing would be to pull that forward if at all possible - which might be why there's a vague hint of GT200b being skipped entirely in favour of something else. Something, because of GDDR5, that should be higher performance and with a much smaller die.
Uhhh, well either they're planning it on GT200b or they'll have to wait until 40nm in late 1Q09/early 2Q09. There is nothing between GT200b and 40nm; along with GT206 (ultra-low-end chip), it is the last NVIDIA chip on 55nm AFAIK.

Also, under what sane roadmap would you have had GT200 in 4Q07, GT200b in 2Q08 and GT20x with GDDR5 in 4Q08, all replacing one another and on very similar or identical processes? There is no way in hell to justify such a fast schedule in the ultra-high-end given the low volumes and the design+mask costs. BTW, I don't think it's realistic to expect NV to go from 512-bit of GDDR3 to 256-bit of GDDR5 for GT200b as I pointed out even excluding the memory/PCB cost factors. That would mean either 16 ROPs (too few) or 32 (too many), and given the lower effective bandwidth of a given bitrate of GDDR5 you couldn't achieve the same effective bandwidth despite likely increasing core clocks slightly. We'll see what happens though...
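
Just to spell out the ROP-count point: on G80/G92/GT200 each ROP partition of 4 ROPs hangs off a 64-bit memory channel, so (assuming GT200b keeps that coupling) the bus width more or less dictates the ROP count:

Code:
# G80/G92/GT200 couple one ROP partition (4 ROPs) to each 64-bit memory
# channel; assumed to carry over to GT200b here.
ROPS_PER_PARTITION = 4
CHANNEL_BITS = 64

def rops_for_bus(bus_bits):
    return bus_bits // CHANNEL_BITS * ROPS_PER_PARTITION

for bus in (256, 320, 384, 512):
    print(f"{bus}-bit -> {rops_for_bus(bus)} ROPs")
# 256-bit -> 16 (too few), 384-bit -> 24, 512-bit -> 32 (GT200 as shipped)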
 
But the original GDDR5 timing already made clear that 4Q07/1Q08 would be too early for GDDR5 (or at least not at sane pricing), while 3Q08 would be just fine.
My main point is simply that NVidia would tend to be conservative in adopting it - 1 or 2 quarters behind ATI. If they're pushing other things then do they want to increase risk by pushing GDDR5 too?

Though it's notable that NVidia was so aggressive with GDDR3 that it appeared on a low-end part first (5700U wasn't it?).

It is very important to remember that it's not just GT200 that was delayed; GDDR5 was too! ;)
And I expect that all the architectural features of GDDR5, such as the longer bursts, are a non-trivial change for the GPU. "b", so far with NVidia GPUs, is far from being able to encompass the kind of radical changes you're proposing. "b" is nothing more than indicative of a "cost-cutting 55nm refresh".

So something doesn't add-up. A refresh part doesn't change memory architecture significantly - unless the original part was planned to have that memory system too.

So I'd only be willing to accept the idea of GT200b being GDDR5 if GT200 was originally planned to be GDDR5 too.

Uhhh, well either they're planning it on GT200b or they'll have to wait until 40nm in late 1Q09/early 2Q09. There is nothing between GT200b and 40nm; along with GT206 (ultra-low-end chip), it is the last NVIDIA chip on 55nm AFAIK.
40nm is late too. Now I have to admit the idea that NVidia would go for both 40nm and GDDR5 on their winter 2008 enthusiast part seems pretty unlikely.

Also, under what sane roadmap would you have had GT200 in 4Q07, GT200b in 2Q08 and GT20x with GDDR5 in 4Q08, all replacing one another and on very similar or identical processes?
GT200 and GT200b are no different from G92 and G92b in this regard - just a refresh, and in the CC they put quite some weight on the money they can save by going to 55nm. I said "summer 2008" deliberately, to imply a gap that's more like 9 months than 6, by the way.

GT20x with GDDR5 would be following their schedule to introduce an enthusiast part in Q4 of each year. That was their promise back in 2006, wasn't it? Tick-tock, enthusiast-refresh?

One thing that is suspicious is the timing of some 55nm parts - G96 appeared in July as 65nm. Now G96b in 55nm form has just turned up? What the hell's going on there?

Overall, as you say, NVidia seems to be very wobbly these days.

There is no way in hell to justify such a fast schedule in the ultra-high-end given the low volumes and the design+mask costs.
Hmm, when GT200 is "yielding badly" and there's only about 80 of them per wafer, I expect there was a strong incentive to go with 55nm if it was at all practicable. Repeated re-spins of GT200b plus a large inventory of GT200 could well have put the kibosh on 55nm though.

Also, don't forget that the 3/4 part, GTX260, is not meant to be ultra-high-end in price. And as Bob explained it, you shouldn't think of the 3/4 part as the "salvage" but instead think of the fully functional GPU as "bonus" to sell at even higher margin.

GTX260 was originally only going to be $50 more than 8800GTS-640's launch price. Then it launched at $400.

BTW, I don't think it's realistic to expect NV to go from 512-bit of GDDR3 to 256-bit of GDDR5 for GT200b as I pointed out even excluding the memory/PCB cost factors. That would mean either 16 ROPs (too few) or 32 (too many), and given the lower effective bandwidth of a given bitrate of GDDR5 you couldn't achieve the same effective bandwidth despite likely increasing core clocks slightly.
For what it's worth I'd tend to agree with the 384-bit configuration.

Jawed
 
The 384-bit interface with GDDR5 is definitely a keeper option for the 55nm shrink of G200. From a ROP performance PoV -- with full-speed blending ops in G200 and still top-of-the-line depth/stencil rates -- the new memory would push fill-rates to new peaks. GDDR5, with its insane transfer rates, is all about maximum bandwidth per pin -- deliver big and fast! Latency is another issue, but there has been much improvement over GDDR4 in that area, so let's not dig into it too much, considering modern GPUs are well equipped to hide it.

In a nutshell: ~1650MHz shader clock, 650-700MHz base clock, 384-bit GDDR5 (4GHz+ effective) and a smaller die footprint might put the new product line on the right track.
 
So something doesn't add-up. A refresh part doesn't change memory architecture significantly - unless the original part was planned to have that memory system too.
Or, unless, it was planned from the very start that GT200 would be 65nm/GDDR3 and GT200b would be 55nm/GDDR5. Remember also that NVIDIA copy-pastes the SMs and TMUs, but the ROPs and MCs are all individually synthesized. This is both to minimize space wastage and to allow for different MC capabilities in different members of the family (obviously in the mid-term you'll want some chips to support GDDR5 and some not!)

So I'd only be willing to accept the idea of GT200b being GDDR5 if GT200 was originally planned to be GDDR5 too.
Or, as I just said, if GT200b was always planned to be GDDR5 (i.e. the GT200's less conservative little brother). This also makes sense in the context that GDDR3 is preferable for Tesla.
 
Well, I also think that a 55nm GT200b with 24 ROPs, a 384-bit bus and GDDR5 could shrink the chip, allowing faster clock frequencies and better heat dissipation. But even with the GDDR5 boost -- which should be around 172 GB/s (900MHz base GDDR5), compared to the 140 GB/s in the GTX280 with GDDR3 -- could this new chip perform better at high resolution with SSAA enabled? 'Cause I dunno if the ROPs are working at 100% on the GTX280, and if they reduce their number, the AA performance could be lower too, unless they clock them faster to compensate for the difference in ROP units.

I would be really pissed if I had to wait for the refresh and then ended up buying the old GTX280 because of its higher AA performance.

Also, moving from 512-bit GDDR3 to 384-bit GDDR5 doesn't look that great :???: 172/140 -> only ~22% faster, and you need to change a lot of parts of the design.
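
Checking that arithmetic (GDDR5 moves data at 4x its base clock, GDDR3 at 2x -- the same assumptions behind the 172 and 140 figures above):

Code:
def bw_gb_s(bus_bits, base_mhz, pumping):
    # bus width in bytes * per-pin transfer rate in GT/s
    return bus_bits / 8 * base_mhz * pumping / 1000

gt200b = bw_gb_s(384, 900, 4)   # ~172.8 GB/s, the ~172 GB/s figure above
gtx280 = bw_gb_s(512, 1107, 2)  # ~141.7 GB/s, the "140 GB/s" GTX280 figure
print(f"{gt200b / gtx280 - 1:.0%} more peak bandwidth")  # ~22%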
 
I doubt the GT200b will use GDDR5. A rumour like that would have been confirmed by some insiders by now (as the original GT200 specs were).
 
It's all about availability, and as it looks right now, ATi has booked virtually all of the production output -- which on top of that is still short of demand -- leaving any possible GDDR5 aspirations from NV high and dry, at least for the fall season.
 
It's all about availability, and as it looks right now, ATi has booked virtually all of the production output -- which on top of that is still short of demand -- leaving any possible GDDR5 aspirations from NV high and dry, at least for the fall season.
I'm not sure what that is based on. Also, it could be >1.8GHz GDDR5, which the manufacturers would be able to sell for more money instead of having to sell chips capable of 2-2.3GHz as 1.8GHz ones because those are the only ones ATI needs/uses. Then it's pretty obvious who would have the priority for chips with that level of performance.
 