ELSA hints GT206 and GT212

... GT200 isn't nearly bandwidth starved even with GDDR3 ...

If you own a GTX 280, I can provide a sample where the frame rate scales almost linearly with the bandwidth of the card.

I'm waiting for a 512-bit GDDR5 monster. When that day comes, I'll say goodbye to my performance problems.
 
Oh, yeah, I heard that GT200 has GDDR5 support in the MCs, but NV doesn't see any reason to use GDDR5 on GT200 boards (which is understandable considering that, AFAIK, GDDR5 costs 3-3.5 times more than GDDR3 right now, while GT200 isn't nearly bandwidth starved even with GDDR3).
In the MCs? I don't know, but I'm pretty damn sure the PHYs don't support GDDR5, so that would seem strange, although not impossible. Also, 3-3.5 times? I more than doubt that, to say the least, but heh :)
 
I think:


(my emphasis) is pretty interesting there :D

Jawed

He again mentions G96 and GT214 together in this:

• Did loadline analysis and core power transient simulations for G96 and GT214.

A 9600 GT/G94b at higher core/shader clocks, coupled with high-speed GDDR5 on a 256-bit bus, sounds plausible, especially if it's intended to replace the G92-based 9800 GT/9800 GTX/GTX+ boards using a smaller, simpler core.
It could get close to RV770 LE/HD 4830, if not HD 4850, levels.


BTW, did I mention the awesomeness of the name "Zhenggang"? :D
 
I swear all of you guys are just trying to figure out what roadmap would guarantee NVIDIA's bankruptcy despite their huge amounts of cash in the bank :p I know G92 and GT200 deflated expectations, but come on...
 
I took that to mean that G96 will never see GDDR5 (since it's out there). It seems to me more likely that NVidia was evaluating the timing/pricing of GDDR5 and considering whether G96 or GT214 would be the chip that introduces it.

But, well, we're unlikely ever to find out about G96-specific stuff. Any revision of G96 in the same performance bracket is surely going to be GT21x, perhaps GT214, not G9x based.

But I dare say the future's looking bright for GT214...

Jawed
 
So... can we settle on the "GT214" vs "RV740" codename war scheme for Q1'09 in the performance midrange segment?

Arun, it's not Lehman Brothers, is it? :D
 
Meh, the GT212/GT214/GT216/GT218 codenames were leaked, what, in 3Q07? If he gets into any real trouble for putting that on his resume, his boss should seriously reconsider his value to humanity.

Nice :)
 
So... can we settle on the "GT214" vs "RV740" codename war scheme for Q1'09 in the performance midrange segment?

Looks like it. But I suspect Nvidia will be pissed if AMD tries to push RV740 for ~$100. The 9600 GT debuted at closer to $200, IIRC.
 
I'm still thinking more along the lines of...
- 256-bit GDDR5 40nm GT212 @ ~150% GT200 performance, two of those for the AFR top end
- 384-bit GDDR5 40nm GT300 @ ~300% GT200 performance
A 512-bit bus probably won't come back until the second generation of DX11 line-ups.

Hmmm, I don't know. I think we're at the point where the flagship needs to have at least 1GB RAM (see GTX 295 tanking at high resolutions in modern games). A 384-bit bus would imply 1.5GB of GDDR5 - not a cheap proposition.
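The 1.5 GB figure follows directly from the bus width. A throwaway sketch of that arithmetic (my own illustration, assuming the usual one 32-bit GDDR5 chip per 32-bit channel and 1 Gbit = 128 MB per chip):

```python
# Hypothetical sketch: VRAM capacity implied by bus width, assuming one
# 32-bit GDDR5 device per 32-bit channel and 1 Gbit (1024 Mbit) per device.
def vram_mb(bus_width_bits: int, chip_mbit: int = 1024) -> int:
    chips = bus_width_bits // 32   # one chip per 32-bit channel
    return chips * chip_mbit // 8  # Mbit per chip -> MB total

print(vram_mb(256))  # 8 chips  -> 1024 MB
print(vram_mb(384))  # 12 chips -> 1536 MB, i.e. 1.5 GB
print(vram_mb(512))  # 16 chips -> 2048 MB
```

So a 384-bit bus with 1 Gbit chips lands on 1.5 GB exactly; the only way around it would be the smaller 512 Mbit parts discussed further down.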
 
I think the chip NVIDIA needs most is a worthy successor to G92 as a performance part, and a chip like NV43 or G73.
A mainstream chip with 192 SP/48 TMU/16 ROP on a 256-bit memory bus could have a chance to be "the second GF6600GT". The die size on 40nm wouldn't be bigger than 200mm^2.

I don't think any of the new 40nm GT2xx parts will be the "next REAL high-end" chip. Why? Because they don't need another high-end chip when GT300 is going to be released within the next 10-12 months.
 
If you own a GTX 280, I can provide a sample where the frame rate scales almost linearly with the bandwidth of the card.
Anybody can create a sample that will show you that any card is limited by anything. I'm talking about average usage situations.
According to what I heard, NV's prototypes of GT200 with GDDR5 showed a +10% performance increase on average, which kinda confirms my point.

In the MCs? I don't know, but I'm pretty damn sure the PHYs don't support GDDR5 so that'd seem strange although not impossible. Also, 3-3.5 times? I more than a little bit doubt that to say the least but heh :)
Well, that's what I heard. Whether it's true or not, I don't know myself.

Hmmm, I don't know. I think we're at the point where the flagship needs to have at least 1GB RAM (see GTX 295 tanking at high resolutions in modern games). A 384-bit bus would imply 1.5GB of GDDR5 - not a cheap proposition.
GT300 will probably show up in big quantities at MSRP in the beginning of 2010. The 4870 X2 has 2 GB of GDDR5 right now and is selling (if I'm not mistaken) for $499. I don't see what's so expensive about having 1.5 GB of GDDR5 more than a year from now.
And as for the GTX 295 at high resolutions -- I'm not really sure that the dip in performance is related to having only 896 MB of on-board memory. A similar dip can occur on a regular GTX 280 with 1 GB of VRAM. I think there's something different going on.
 
Anybody can create a sample that will show you that any card is limited by anything. I'm talking about average usage situations.
According to what I heard, NV's prototypes of GT200 with GDDR5 showed a +10% performance increase on average, which kinda confirms my point.

Yes, I see your point, but try to understand mine too. I'm trying to run old titles with SSAA @ 1920x1200, and there is still no way to run them, even after several generations of graphics cards. Bandwidth has not grown linearly with shading power, and yes, new games do run much better, but damn it, some old titles that I want to run do not scale properly. As a reference, my own BR2 patch runs at 50 fps on the 100 GB/s 8800 GTX (overclocked), but at 70 fps on the 140 GB/s GTX 280: 140/100 = +40% -> 70/50 = +40%. Now you can understand why I wish for a revolution in the memory bandwidth department. And SLI/CF do not run properly with my code, because there are too many dependencies between frames. :cry:
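For what it's worth, the scaling claim checks out on the numbers quoted (a quick sketch with the figures from the post, not fresh measurements):

```python
# Bandwidth vs frame-rate scaling using the BR2 patch figures quoted above.
bw_8800gtx, bw_gtx280 = 100.0, 140.0   # GB/s: overclocked 8800 GTX vs GTX 280
fps_8800gtx, fps_gtx280 = 50.0, 70.0   # reported frame rates

bw_gain = bw_gtx280 / bw_8800gtx - 1.0     # relative bandwidth increase
fps_gain = fps_gtx280 / fps_8800gtx - 1.0  # relative frame-rate increase

print(f"bandwidth +{bw_gain:.0%}, frame rate +{fps_gain:.0%}")
# -> bandwidth +40%, frame rate +40%: frame rate tracking bandwidth 1:1
```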
 
Hmmm, I don't know. I think we're at the point where the flagship needs to have at least 1GB RAM (see GTX 295 tanking at high resolutions in modern games). A 384-bit bus would imply 1.5GB of GDDR5 - not a cheap proposition.
Just thought I'd quickly comment on that: just like 256Mbit GDDR3 probably wasn't such a great idea in the R600 timeframe, 512Mbit GDDR5 in 2H2009 likely wouldn't be a good idea either; it would be substantially more expensive than just half the price of a 1Gbit chip.

So if you need that amount of bandwidth, I don't think you can get away from such monstrous amounts of memory. At the same time, the performance level of these cards will hopefully be astonishing, so unless 1H09's games are much more performance-intensive than I expect, they will in many cases be tested at very high resolutions with massive textures and tons of AA. Also, remember that GT212 might be used for CUDA even if GT300 hits its ETA, so being able to easily support massive amounts of memory (3-6 GB) makes some sense there too.
 
Hmm, if GT216 has these specs, then I wonder what specs GT214 (the performance-mainstream chip) will get, since it's supposed to be positioned between GT212 (the 40nm high-end chip) and GT216 (the 40nm mainstream chip).
 