NVIDIA GT200 Rumours & Speculation Thread

I think the story repeats itself again - like in the days of ATI Radeon 8500 w/ DX8.1 and Nvidia Geforce 3 & 4 w/ DX8.0 only.
 

GF4 actually supported DX8.1 too, just not up to PS 1.4 like the 8500, only up to 1.3; DX8.0 only supports up to PS 1.1.
 
Regarding the use of the term "next-gen": did you guys consider the following Nvidia chips/cards to be next-gen?

NV5 / TNT2
NV15 / GeForce 2 GTS
NV25 / GeForce 4 Ti 4600
NV35 / GeForce FX 5900 Ultra
NV47 / G70 / GeForce 7800 GTX

If Nvidia's G100 / GT200 / GeForce 10 / whatever is not a clean-sheet new architecture (NV60), then it's not truly next-gen, but merely the long-awaited high-end refresh (thus "NV55") of G80 / GeForce 8800.

Recall that Nvidia heavily pushed G70 / GeForce 7800 GTX as "next generation", even though it was really a refresh of NV40 / GeForce 6800 just like all their previous refreshes.
 
LordEC911, it's DDR -- double-data-rate. That 2GHz is an effective rate of 4GHz.
 
I'm pretty sure that 128GB/s is the right number...

.25 * 512 = 128 (0.25GB/s per data pin on a 512-bit bus)
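Spelling that arithmetic out, a quick sketch (the 512-bit bus and 2000MHz effective rate are the rumored figures from this thread, nothing confirmed):

Code:
# Rumored figures only: 512-bit bus, 2.0Gbps effective (DDR) per pin
bus_width_bits = 512
effective_rate_gbps = 2.0          # 1.0GHz real clock, doubled for DDR

# GB/s = pins * Gbps-per-pin / 8 bits-per-byte
bandwidth_gb_s = bus_width_bits * effective_rate_gbps / 8
print(bandwidth_gb_s)              # 128.0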

As noted above, it states GDDR4; there are many aspects of that detail that raise an eyebrow. For one, I'm not so sure that 3x the bandwidth compared to G8x would be necessary; historically, transistor count (amongst other things) has scaled much more than memory bandwidth on GPUs in recent years. As a close second, if an IHV is going to pick anything outside GDDR3, I wonder why he'd go for a more complex and wider MC; if simulations have shown that bandwidth X is sufficient, then it sounds more reasonable to go for the cheapest all-around solution. Last, I'd believe that GDDR5 might be more easily available in late '08 than GDDR4 @ 2.0GHz / 4.0GHz effective DDR.
 

I think that quote was meant to mean GDDR4 running at 1000MHz, or 2000MHz DDR, à la what we saw with the X1950 XTX. There still isn't 1600 (3200) rated GDDR4, let alone 2000 (4000) rated GDDR4. This means the product would likely use 1100MHz (2200MHz) rated GDDR4, if true. That's going by Samsung's product list, which is composed of 1100, 1200, and 1400 at the moment. 1600 was sampled eons ago, but still isn't on the product list...

EDIT: Hynix does show 1600MHz GDDR4 in their catalog PDF.

GDDR3 or 4 makes the most sense if Nvidia is opting to go with the 512-bit bus. GDDR3 would be capped at 128GB/s, which is probably not enough to remove the bandwidth limitation from the architecture, especially with an upgrade to the rest of the spec, but it would be an easy way to refresh the lineup down the line with GDDR4. GDDR4 could (theoretically) allow them up to ~180GB/s (1400MHz) or ~205GB/s (1600MHz)... probably granting a substantial performance increase with or without a new chip (55nm dumb shrink?).

I wouldn't be surprised to see the G100 GTX with 2000MHz (effective) GDDR3 and the 55nm refresh have a "GT/GTS" part that uses 2000MHz (effective) GDDR4, while the new king of the hill replacing G100 uses higher-rated stuff.

On the total flipside, it looks like ATi is shooting for 256-bit and GDDR5. IIRC that puts them at somewhere between 160-192GB/s eventually, if they use 5000-6000 rated stuff, which Hynix (1Gbit, 5000 rated) and Samsung (512Mbit, 6000 rated) have both showcased. They'll probably release slower-specced stuff first and use that, but even so, the low-ball guess would be 4000 effective, or roughly 128GB/s... the exact same number we've come to expect from Nvidia and their 512-bit bus using 2000 (effective) GDDR3 or GDDR4. Again, just like Nvidia and GDDR4, they too would have room to grow to 192GB/s (6000 effective) and beyond with GDDR5 on their 256-bit bus.
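Putting those speculated configurations side by side, a rough sketch (every bus width and data rate below is a rumor from this thread, not an announced spec):

Code:
# Speculated memory configs from the discussion above; none are confirmed.
# (bus width in bits, effective data rate in Gbps per pin)
configs = {
    "512-bit + 2000 effective GDDR3/4": (512, 2.0),
    "512-bit + 2800 effective GDDR4":   (512, 2.8),
    "512-bit + 3200 effective GDDR4":   (512, 3.2),
    "256-bit + 4000 effective GDDR5":   (256, 4.0),
    "256-bit + 5000 effective GDDR5":   (256, 5.0),
    "256-bit + 6000 effective GDDR5":   (256, 6.0),
}

for name, (bus_bits, rate_gbps) in configs.items():
    print(f"{name}: {bus_bits * rate_gbps / 8:.0f} GB/s")
# -> 128, 179, 205, 128, 160, 192 GB/s respectively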

Like someone else said... we'll probably end up with similar numbers, both with room to grow, but we'll get there in dramatically different ways. Does it matter how we get there? I would venture yes. ATi's method sure looks better when you think about using multiple GPUs on a PCB, while Nvidia could be poised to do well with just a change of RAM and/or a switch to a smaller process, each fitting their respective choices for the future, each with their obvious pros and cons. 256-bit requires fewer transistors and doesn't need a large die to fit the pins, although you have the disadvantages of multi-GPU (unless R700 truly fixes those issues). The reverse is true for a 512-bit large single chip.
 
If ATI has a revised edition of R7xx / RV7xx on 45nm tech for their next generation, a 512-bit memory bus would not eat up as much die size and transistor count as it did on R600.
 

LoL, when was the last time specs were listed with the actual MHz and not the effective rate?
2000MHz effective makes the most sense...
#1 It is readily available in large quantities.
#2 It correlates with the 128GB/s number given.

I am not saying that these specs go with GT200 or what-have-you.

And as noted above, you misunderstood the specs and what I posted.
 
No doubt I misunderstood the supposed "specs"; yet there's also no doubt that the effective double data rate of DDR RAM has absolutely nothing to do with the real frequency of a RAM module. At best it's 1.0GHz real / 2.0GHz effective DDR.
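Putting the two readings of that "2GHz" next to each other (the number is taken from the rumored spec; which reading is right is exactly what's being argued here):

Code:
# "2GHz" read as the effective DDR rate, as LordEC911 suggests:
effective_gbps = 2.0
real_clock_ghz = effective_gbps / 2     # 1.0GHz actual memory clock

# "2GHz" read as the real clock would mean 4.0Gbps effective, i.e. GDDR
# faster than anything on Samsung's or Hynix's current lists.
misread_effective_gbps = 2.0 * 2        # 4.0

print(real_clock_ghz, misread_effective_gbps)  # 1.0 4.0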
 
[attached slide image, dated 2008-02-15]


http://www.crhc.uiuc.edu/IMPACT/ftp/talks/toronto-11-29-2007.pdf
 
Yup, very nice catch. 1TFlop & Spring 2008 clearly implies it's not GT200 though, IMO. G90? G92-v2? I wonder if that's just outdated or if NVIDIA has successfully been hiding something...
 
Maybe 128-GPU is not quite right and they mean 128 cards -> a 9800GX2-based Tesla with 2×128 SPs @ 1.35GHz.

Rumors talk about 550MHz for the 9800GX2 -> with the 2.5 ratio you get this ~1.35GHz.

And since the Tesla launch they have talked about dual-GPU cards.
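If that 2.5× ratio and the rumored 550MHz core clock hold, a back-of-the-envelope sketch of how a dual-G92 card could reach the slide's ~1 TFLOP (the clocks, SP count and the 3-flops-per-SP-per-clock MADD+MUL issue of G8x/G9x are all assumptions here, not confirmed GT200 specs):

Code:
# Rumored/assumed figures only
core_clock_mhz = 550                 # rumored 9800GX2 core clock
shader_ratio = 2.5                   # assumed shader:core clock ratio
sps_per_gpu = 128                    # G92 stream processors
gpus = 2                             # dual-GPU card
flops_per_sp_clock = 3               # MADD (2 flops) + MUL (1 flop) on G8x/G9x

shader_clock_ghz = core_clock_mhz * shader_ratio / 1000
print(shader_clock_ghz)              # 1.375 -> the "~1.35GHz" above

sp_gflops = gpus * sps_per_gpu * shader_clock_ghz * flops_per_sp_clock
print(sp_gflops)                     # 1056.0 GFLOPS, i.e. ~1 TFLOP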
 
Indeed, but that doesn't explain the DP numbers. It has always been claimed by everyone at NVIDIA that it would be 1/4th SP. I think the only ways to explain this slide, if it is indeed correct (which is a big if), are to assume either that:
- The MADD unit can only do MUL *or* ADD, both at 1/4th speed, which is very possible.
- There are two units per multiprocessor, and only one of the two can do FP64 processing. Also possible, I think, and that'd make this single-GPU.
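For what it's worth, a rough sketch of how differently those two readings come out in theoretical DP throughput (the 128 SPs and 1.35GHz shader clock are assumed purely for illustration; none of this comes from the slide):

Code:
# Illustrative assumptions only: 128 SPs, 1.35GHz shader clock
sps = 128
shader_clock_ghz = 1.35

# "1/4th SP" with a full FP64 MADD (2 flops) per issue:
dp_full_madd_gflops = sps * shader_clock_ghz * 2 * 0.25    # ~86.4

# MADD unit doing only MUL *or* ADD (1 flop), also at 1/4 rate:
dp_mul_or_add_gflops = sps * shader_clock_ghz * 1 * 0.25   # ~43.2

print(dp_full_madd_gflops, dp_mul_or_add_gflops)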
 
Eh? Spring is the launch timeframe for this GPU. Remember, they've promised DP for 2008H1.
Okay, I'll try to be clearer... :) They actually promised DP for Q108 (delayed from Q407 originally). GT200 definitely never was a Q108 product in the timeframe where they indicated that would be their goal for DP (and in fact I don't think it was ever aimed at Q108). Thus, as far as I can tell, this slide does not refer to GT200, but rather to their first DP-compliant product which will thus likely be a G9x.
 
Well, if AMD's RV770 is scheduled for around June and NVIDIA's GF9800 cards will still be G92-based, then they have no chance of beating AMD's new next-gen GPU. We should remember that NVIDIA's G80 (we can say G9x too) is almost 1.5 years old.
 