NVIDIA GT200 Rumours & Speculation Thread

There's not really much point in putting GDDR5 on it in its current form, but it might be useful for smaller future iterations.

Yeah, probably with their 55nm refresh of the GT200. A 256-bit memory interface + GDDR5 sounds a lot more economical than 512-bit + GDDR3.
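Rough numbers to back that up, using a quick throwaway helper; the GDDR5 per-pin rates are assumptions, since nothing is confirmed for any 55nm part:

Code:
# Hypothetical comparison: 512-bit GDDR3 (GTX 280-style) vs a 256-bit GDDR5 refresh.
# The GDDR5 data rates below are guesses, roughly what early parts are sampling at.

def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    # Peak bandwidth in GB/s = bus width (bits) * per-pin data rate (Gbps) / 8
    return bus_width_bits * data_rate_gbps / 8

print(bandwidth_gbs(512, 2.2))  # 512-bit GDDR3 @ 2.2 Gbps/pin  -> ~141 GB/s (GTX 280)
print(bandwidth_gbs(256, 3.6))  # 256-bit GDDR5 @ ~3.6 Gbps/pin -> ~115 GB/s
print(bandwidth_gbs(256, 4.5))  # 256-bit GDDR5 @ ~4.5 Gbps/pin -> ~144 GB/s

So a 256-bit GDDR5 board only matches GTX 280 bandwidth if the memory clocks get high enough, but it needs half the memory chips and half the pins, which is where the savings would come from.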
 
Yeah, probably with their 55nm refresh of the GT200. A 256-bit memory interface + GDDR5 sounds a lot more economical than 512-bit + GDDR3.

The only problem with that is they'd have to cut the ROPs back down to 16 after pumping them up to 32. So I don't see the 512-bit bus going away for Nvidia's enthusiast-class card until the successor to GT200(b).

Such a move, IMO, would be a far larger change than the one from G80/92 to GT200.

Regards,
SB
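For reference, this is the coupling being described, assuming the G80-style layout of one 64-bit memory channel per quad-ROP partition carries over; the snippet is just an illustration:

Code:
# Assumed G80/GT200-style organisation: one ROP partition = 4 ROPs on a 64-bit channel.
ROPS_PER_PARTITION = 4
CHANNEL_WIDTH_BITS = 64

for bus_width in (512, 384, 256):
    partitions = bus_width // CHANNEL_WIDTH_BITS
    print(bus_width, "-bit bus ->", partitions, "partitions ->",
          partitions * ROPS_PER_PARTITION, "ROPs")
# 512-bit -> 8 partitions -> 32 ROPs
# 384-bit -> 6 partitions -> 24 ROPs
# 256-bit -> 4 partitions -> 16 ROPs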
 
I guess there's some economic loss associated with early leaks for those sites depending on high traffic during release season.
I don't think so. With a few exceptions, there are usually two types of sites. The first is like the Inq or Fudzilla: they don't write reviews, so they don't sign the NDA and they can leak everything that comes past them; unfortunately they either make things up or someone's feeding them crap, so you can't take their word seriously. The second type is the classic hardware review site with articles *and* news (that's the one I work for, btw). Of course we have an NDA and we will be publishing a review at launch. But at the same time, we can write news about the leaks, be it from CJ or anyone else. That way it doesn't hurt us even if we don't break the NDA - and we have something to write news about.
There's still a lot of meat left on the bone especially for the B3D crowd. We still know nothing of architectural enhancements to either R700 or GT200 so there'll be plenty more to chew over.
Exactly, and even those of us under NDA don't know a lot of details about GT200 (is there even an NDA programme for RV770?). Which is not exactly good.
Fudo says GT200 supports GDDR5
I don't believe that. nVidia clearly counted on GDDR5 not being available soon enough for GT200, so they went for a wider interface with slower (GDDR3) chips. It makes no sense for the GT200 memory controller to support GDDR5. Perhaps some of its mainstream derivatives could use it, but they're still far away, so nVidia has a lot of time to redesign the MC.
 
The extra logic needed for GDDR5 support should be minimal. If G200b is just supposed to be a shrink, would they really have considered going from 512-bit to 256-bit?

Right now GDDR5 seems pointless, and if they're having clock issues they might have thought it would be useful down the road. If the core clock on the 280 got up above 800MHz it might make sense.

As large as the chips are, why even worry about the number of external pins? Might as well use a bunch of cheap memory and have the option for a large memory pool.
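A quick sanity check on that 800MHz point, pure back-of-the-envelope and assuming bandwidth demand scales linearly with core clock:

Code:
# GTX 280 ships at ~602 MHz core with ~142 GB/s. If the bandwidth/clock balance
# should stay roughly constant, a higher core clock wants proportionally more bandwidth.
base_core_mhz, base_bw_gbs = 602, 142
for target_core_mhz in (700, 800, 900):
    print(target_core_mhz, "MHz ->", round(base_bw_gbs * target_core_mhz / base_core_mhz), "GB/s")
# 700 MHz -> ~165 GB/s, 800 MHz -> ~189 GB/s, 900 MHz -> ~212 GB/s
# Beyond ~800 MHz you're past what 512-bit GDDR3 delivers today, which is where GDDR5 would help.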
 
Why don't Nvidia just say "here ya go world, these are the specs of the card, enjoy"
instead of all this "here are the specs, don't tell anyone for 3 weeks" nonsense?

Stretches out the free publicity and prepares the consumer for the message during launch. The NDA at these big press events seems to be used as a way of massaging the journalists - not a serious attempt to actually avoid publicity.
 
=>Anarchist4000: Well, according to the leaked info we have, GT200 even on 55nm will be large enough to carry a 512-bit bus. Besides, going to 256-bit would require either cutting the number of ROPs in half or redesigning that part of the chip so that there are 32 ROPs but only 4 memory channels instead of 8. Either way the chip would have to be modified, and that defeats the purpose of a linear shrink.
 
I probably should have worded that post better. Using an old 1024x768 monitor didn't help things. That is exactly what I was getting at though.
 
http://www.fudzilla.com/index.php?option=com_content&task=view&id=7721&Itemid=1
We've learned that GT200, the one we call Geforce GTX 200, does support GDDR5 memory, but you won't see a card based on it anytime soon.

http://www.bit-tech.net/news/2008/06/05/d12u-will-be-first-nvidia-gpu-with-gddr5-support/1
Several memory manufacturers have confirmed that the first Nvidia GPU to feature support for GDDR5 memory will be the D12U part, which is said to be currently scheduled for a late 2009 launch.

hrm...
 
I don't believe that. nVidia clearly counted on GDDR5 not being available soon enough for GT200, so they went for a wider interface with slower (GDDR3) chips. It makes no sense for the GT200 memory controller to support GDDR5. Perhaps some of its mainstream derivatives could use it, but they're still far away, so nVidia has a lot of time to redesign the MC.

Maybe a GTX 290 or Ultra 280 in the fall with low-end GDDR5 at ~1.6GHz, to offer the first >200GB/s card. :LOL:
There is not always sense in what NV does - remember the 8800 Ultra.
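For what it's worth, the >200GB/s figure checks out if "1.6GHz GDDR5" means roughly 3.2 Gbps effective per pin; GDDR5 clock naming is ambiguous, so treat that as an assumption:

Code:
# 512-bit bus with low-end GDDR5 at an assumed ~3.2 Gbps effective per pin.
bus_width_bits, gbps_per_pin = 512, 3.2
print(bus_width_bits * gbps_per_pin / 8)  # 204.8 GB/s -- just over the 200 GB/s mark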
 
Code:
                                  GTX280/   GTX280/
GPU  8800GTX    GTX280  9800GTX    G80GTX    G92GTX
ALU      128       240      128      +88%      +88%
clock  ~1350     ~1300    ~1700       --       -31%
TA        64        80?      64      +25%      +25%
TF        32        80       64     +150%      +25%
ROP       24        32       16      +33%     +100%
clock    575      ~600     ~675       --        --
BW        86       142      ~70      +65%     +100%
G80's TAs should be 32, with 64 TFs.

GT200's TA should be 40, with 80 TFs.

Jawed
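For anyone checking the table above: the delta columns are just the GTX280 value over the reference part, and the BW row follows from bus width times memory data rate (the clocks used are the commonly quoted ones, so the numbers are approximate):

Code:
# Percentage columns: (GTX280 value - reference value) / reference value.
print(round((240 - 128) / 128 * 100))  # ALU vs G80GTX/G92GTX: +88%
print(round((32 - 16) / 16 * 100))     # ROP vs G92GTX: +100%

# BW row: bus width (bits) * effective data rate (Gbps) / 8 = GB/s.
print(384 * 1.8 / 8)    # 8800GTX: 384-bit, 900 MHz GDDR3   -> ~86 GB/s
print(512 * 2.214 / 8)  # GTX280:  512-bit, ~1107 MHz GDDR3 -> ~142 GB/s
print(256 * 2.2 / 8)    # 9800GTX: 256-bit, 1100 MHz GDDR3  -> ~70 GB/s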
 
The only problem with that is they'd have to cut the ROPs back down to 16 after pumping them up to 32. So I don't see the 512-bit bus going away for Nvidia's enthusiast-class card until the successor to GT200(b).

Such a move, IMO, would be a far larger change than the one from G80/92 to GT200.

Regards,
SB

Maybe I should have mentioned it, since we all know that 4 ROPs are tied to two 32-bit memory channels, i.e. cutting the memory bus in half would result in 16 ROPs.

I'm suggesting it could be a possibility, seeing as the transition from G80 to G92 was similar in that respect (although it was far simpler to do), maybe for the mid-to-high-end derivative of GT200. But I'm guessing that either the rumour about a simple "die shrink" of GT200 to 55nm around late '08 is wrong, or alternatively there's a refresh in the works that could incorporate NVIO back into the main die with its memory interface cut in half, to create a more economical replacement for GT200 sometime next year. (Not sure how the ROP count would work out, but a higher number of ROPs has never indicated any performance advantage, e.g. G80's 24 ROPs vs G92's 16 ROPs.)

A 512-bit memory bus paired with GDDR5 sounds very illogical though, so I don't think any GT200 will use GDDR5 (or else that would mean >200GB/s!).
 
=>Anarchist4000: Well, according to the leaked info we have, GT200 even on 55nm will be large enough to carry a 512-bit bus. Besides, going to 256-bit would require either cutting the number of ROPs in half or redesigning that part of the chip so that there are 32 ROPs but only 4 memory channels instead of 8. Either way the chip would have to be modified, and that defeats the purpose of a linear shrink.

I did suspect half the units with double the fillrate before, though.

But the memory controller probably has to be reworked to support longer bursts, and the implications of that for the whole architecture could be bigger than whether the "b" revision ends up being a linear shrink or not (anyone care to drop a clue?).
 
=>Anarchist4000: Well, according to the leaked info we have, GT200 even on 55nm will be large enough to carry a 512-bit bus. Besides, going to 256-bit would require either cutting the number of ROPs in half or redesigning that part of the chip so that there are 32 ROPs but only 4 memory channels instead of 8. Either way the chip would have to be modified, and that defeats the purpose of a linear shrink.
It would also be possible to still have 8 memory channels, but each just 32 bits wide (so each of the 8 quad-ROP partitions still connects to its own memory channel). I think this should be a relatively easy modification.
Another possibility would be something like a 384-bit memory interface with 6 ROP partitions; coupled with GDDR5 it would still offer more memory bandwidth, while the reduced ROP count probably wouldn't really hurt performance that much (though yes, I agree that cutting the ROPs in half might hurt this beast indeed).
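Putting those options side by side - all three layouts are hypothetical, and the GDDR5 rate is a guess:

Code:
# Three ways to organise the memory interface, assuming 4 ROPs per partition as on G80/GT200.
# Tuples: (description, channels, channel_width_bits, assumed_gbps_per_pin)
options = [
    ("8 x 64-bit GDDR3 (GT200 as-is)",       8, 64, 2.2),
    ("8 x 32-bit GDDR5 (keep 8 partitions)", 8, 32, 3.6),
    ("6 x 64-bit GDDR5 (384-bit, 24 ROPs)",  6, 64, 3.6),
]
for name, channels, width_bits, gbps in options:
    bus_bits = channels * width_bits
    print(name, "->", bus_bits, "bit,", round(bus_bits * gbps / 8), "GB/s")
# ~141 GB/s, ~115 GB/s and ~173 GB/s respectively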
 
GT200's TA should be 40, with 80 TFs.
Whoa. An interesting move if true - that monster chip would have less bilinear fillrate than the old G92 (not that the increased TA count on G92 actually helped, with the chip not even able to realise its theoretical fillrate in the simplest situations). Still, with vastly more ALU units, vastly more bandwidth available and only a very modest increase in TF units, it seems odd it would again have only half as many TAs as TFs.
 
Still, with vastly more ALU units, vastly more bandwidth available and only a very modest increase in TF units, it seems odd it would again have only half as many TAs as TFs.

Well, like I said earlier, we know nothing about the architectural improvements. For all we know each TF does single-cycle FP16 filtering now like R6xx, and we'll get free 4xAF INT8 or free 2xAF FP16 due to the 2:1 ratio.

In that scenario your theoretical INT8 bilinear performance gets cut in half, but FP16 and AF performance get a shot in the arm relative to G92. Of course it could just be 80 plain old INT8 TFs, in which case RV770 could potentially have an FP16 texturing advantage over GT200. I would be surprised if that were the case.

I'm not sure if a setup like that would be worth it though. Anybody have a general idea about the ratio of FP16 to INT8 surfaces in recent titles? And how many of those FP16 surfaces are subject to AF?
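Rough fillrate numbers for that scenario - the single-cycle FP16 filtering and the 40/80 TA/TF split are both speculation, and the clocks are the rumoured ones from the table up-thread:

Code:
# Assumption: G92 TFs need 2 cycles per FP16 bilerp, the speculated GT200 TFs need 1.
def bilinear_rate_gtex(tas, tfs, clk_ghz, tf_cycles_per_bilerp):
    # One texture address per TA per clock; filtering throughput limited by the TFs.
    return min(tas, tfs / tf_cycles_per_bilerp) * clk_ghz

print(bilinear_rate_gtex(64, 64, 0.675, 1))  # G92 INT8:   43.2 Gtex/s
print(bilinear_rate_gtex(64, 64, 0.675, 2))  # G92 FP16:   21.6 Gtex/s
print(bilinear_rate_gtex(40, 80, 0.600, 1))  # GT200 INT8: 24.0 Gtex/s -- below G92, as mczak notes
print(bilinear_rate_gtex(40, 80, 0.600, 1))  # GT200 FP16: 24.0 Gtex/s -- ahead of G92; spare TF cycles go to "free" AF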
 
Or maybe register bandwidth has been a drag on TEX performance in G80 and has been solved in GT200 - therefore performance is better?

Although there's been no architectural explanation for the oddities of G80/G92 TEX performance, perhaps NVidia will reveal all with a description of the new/improved TMU architecture of GT200?

Jawed
 