NVIDIA GT200 Rumours & Speculation Thread

Thanks for answering and not deleting my question (I have a bad habit of going off topic). I'm really interested in what the GT200 will offer price/performance-wise.
 
Could also mean 240, in clusters of 24. At least, that's what seems to have the most consensus right now.
 
The number of SPs in a cluster is connected in a certain way with the number of TUs. See my previous post on page 41.
 
I think a lot of people are expecting Nvidia to increase the amount of ALU power available per TMU (more efficient use of die space on chips designed for high resolutions). I'm curious whether it will have much of an impact on dynamic branching, though. The 24 SPs per cluster also raises some questions WRT double precision in my mind....
 
According to the CUDA docs, branching granularity remains at 32 pixels. So each cluster probably just gets another 8-way SIMD. That should give a healthy boost to shading while making better use of the already abundant texturing capability introduced with G80.
 
According to the CUDA docs, branching granularity remains at 32 pixels. So each cluster probably just gets another 8-way SIMD. That should give a healthy boost to shading while making better use of the already abundant texturing capability introduced with G80.

IOW: 24 SPs per cluster, as suspected.
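For what it's worth, here's a quick back-of-the-envelope sketch (Python, purely illustrative) of how the rumoured numbers would hang together if G80-style clusters simply grow from two to three 8-wide SIMDs; none of these figures are confirmed:

```python
# Rough sketch of the rumoured GT200 shader layout discussed in this thread.
# Everything here is thread speculation, not a confirmed spec.
SIMD_WIDTH = 8         # one instruction issued across 8 SPs per clock, as on G80
SIMDS_PER_CLUSTER = 3  # the rumoured extra 8-way SIMD -> 24 SPs per cluster
CLUSTERS = 10          # would give the rumoured 240 SPs in total
WARP_SIZE = 32         # branching granularity per the CUDA docs

sps_per_cluster = SIMD_WIDTH * SIMDS_PER_CLUSTER  # 24
total_sps = sps_per_cluster * CLUSTERS            # 240
clocks_per_warp = WARP_SIZE // SIMD_WIDTH         # a 32-wide warp still takes 4 clocks on one SIMD

print(f"{sps_per_cluster} SPs/cluster, {total_sps} SPs total, "
      f"{clocks_per_warp} clocks per warp per SIMD")
# Versus G80/G92's 16 SPs per cluster, that's 1.5x the ALU capacity per cluster,
# i.e. more ALU power per TMU if the per-cluster texture units stay the same.
```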
 
Am I the only one who wonders about all the stuff GT200 does not deliver?

No DX10.1? Come on, will we be stuck with bad DX10 AA for another 1.5 years?

For some reason I believe the GT200 is just a huge monster chip, enabling all the brute force of the G80/G92 architecture. I don't think it will be the technical foundation for the next 1.5 years.

Actually, I expect Nvidia to roll out a next-gen chip with future-proof technologies in Q1/09.
 
According to the CUDA docs, branching granularity remains at 32 pixels. So each cluster probably just gets another 8-way SIMD. That should give a healthy boost to shading while making better use of the already abundant texturing capability introduced with G80.


Is it possible that, from the beginning, G80 was really a compute GPU powered by SIMD, with a high-speed hardware layer for fast data transmission?
 
No DX10.1? Come on, will we be stuck with bad DX10 AA for another 1.5 years?

Lol bad AA? Please explain this mystical advance in AA that DX10.1 brings.

For some reason I believe the GT200 is just a huge monster chip, enabling all the brute force of the G80/G92 architecture. I don't think it will be the technical foundation for the next 1.5 years.

Brute force compared to what?
 
Lol bad AA? Please explain this mystical advance in AA that DX10.1 brings.
Oh, come on...

As for something new from NV in 1Q09 -- don't count on it. G100 is the new one; it's shrinks and G1xx derivatives after that.
I'd say that if G100 isn't DX10.1 compatible (which seems more and more likely =( ), then we'll have to wait for NV's DX11 GPU for any new features.
 
I think a lot of people are expecting Nvidia to increase the amount of ALU power available per TMU (more efficient use of die space on chips designed for high resolutions). I'm curious whether it will have much of an impact on dynamic branching, though. The 24 SPs per cluster also raises some questions WRT double precision in my mind....
I like your way of thinking.
 
Did NVIDIA make a G80 "X2" card? No. The G80 core is too big and too hot to stuff two of them onto a 'single' card. With G92's reduced die size and power consumption, it became much more feasible to put two cores so close together. But even with G92's higher efficiency, the 9800GX2 is only just adequately cooled.

Not only that, but unlike single G80 / single G92, which have 24 ROPs, the G92s in the dual-GPU 9800GX2 card have only 16 ROPs each.

If two GT200 cores (rumored TDP >200W each) were to be put in the same place, a dual-slot aircooler wouldn't be able to effectively dissipate that kind of heat. A triple-slot design would be cumbersome and perhaps too heavy, and there aren't enough enthusiasts with watercooling to justify having a 'watercooling only' card.

If/when GT200 is shrunk to 55nm, it might be possible, but not likely before then.
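Just to put rough numbers on the heat problem (a minimal sketch; the ~197 W figure is the commonly quoted 9800 GX2 board TDP, not something from this thread):

```python
# Very rough heat-budget comparison for a hypothetical dual-GT200 board,
# using the thread's rumoured ">200 W each" TDP as a lower bound.
GX2_BOARD_TDP_W = 197       # 9800 GX2, described above as only just adequately cooled
GT200_RUMOURED_TDP_W = 200  # rumoured per-GPU TDP, ">200 W each"

dual_gt200_w = 2 * GT200_RUMOURED_TDP_W
print(f"dual GT200: >{dual_gt200_w} W vs. 9800 GX2: ~{GX2_BOARD_TDP_W} W "
      f"({dual_gt200_w / GX2_BOARD_TDP_W:.1f}x the heat in the same space)")
```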

I also really doubt a monster like GT200 has "awesome margins." Yields won't be great with such a big core, and at a time when $200 cards can run most games at high settings on 1920x1200 monitors, it's going to be a hard sell. It's not like average gamers are really clamoring for more performance right now.

Now I am thinking that when a GT200 GX2 card arrives, it'll be after a shrink to 45nm, and each GT200 will be cut down in a similar fashion to how the G92s in the 9800GX2 were (fewer ROPs or something).

If the high-end GT200 has 32 ROPs, a GX2 version might only have 24 ROPs per GPU. That way Nvidia gets a high-end offering without actually giving you two full GT200s.

The 8800 Ultra is said to outperform the 9800 GX2 in some areas.
 
Oh, come on...

As for something new from NV in 1Q09 -- don't count on it. G100 is the new one; it's shrinks and G1xx derivatives after that.
I'd say that if G100 isn't DX10.1 compatible (which seems more and more likely =( ), then we'll have to wait for NV's DX11 GPU for any new features.



I don't expect Nvidia's next clean-sheet architecture (NV60) / DX11 GPU until late 2009 at the soonest. Maybe not even until 2010, with the GT200 being "NV55": a new GPU, but not the totally new clean-sheet architecture that G80 (2006), NV40 (2004), NV30 (2003), NV20 (2001), NV10 (1999), TNT (1998) and Riva 128 (1997) were.

The transition from G80/G92 to GT200 can probably be thought of much like the transition from NV40 to NV47/G70. GT200 is not a respun/tweaked G80/G92; it's a major upgrade/overhaul of that basic design, yet not a fresh new architecture altogether, which is what many people don't really understand. Of course I could be wrong and GT200 could be a totally new design that breaks with the work done on G80, but I highly doubt it.

In years past, something like the GT200 would've been out 6 months to a year after the G80 instead of almost 2 years.
 
Could also mean 240, in clusters of 24. At least, that's what seems to have the most consensus right now.



240 SP sounds much more reassuring than 200 SP.

With only 200 SP, wouldn't it take quite a leap in clock speed to reach the TeraFlop barrier? With 240 SP, it shouldn't take much of a clock increase. Of course, that's an oversimplified way of thinking about it. There's more to it than that, but I don't know/understand the finer details.
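To put rough numbers on that (a minimal sketch assuming G80-style SPs that can in theory co-issue a MAD and a MUL, i.e. 3 FLOPs per SP per clock, and using the rumoured SP counts from this thread):

```python
# Shader clock needed to reach 1 TFLOP for the two rumoured SP counts,
# assuming the optimistic G80-style 3 FLOPs per SP per clock (MAD + MUL).
FLOPS_PER_SP_PER_CLOCK = 3
TARGET_GFLOPS = 1000  # the "TeraFlop barrier"

for sp_count in (200, 240):
    required_mhz = TARGET_GFLOPS * 1000 / (sp_count * FLOPS_PER_SP_PER_CLOCK)
    print(f"{sp_count} SPs -> ~{required_mhz:.0f} MHz shader clock needed")

# ~1667 MHz with 200 SPs vs. ~1389 MHz with 240 SPs; for reference, the
# 8800 Ultra's shader domain already runs at 1512 MHz.
```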
 
Not only that, but unlike single G80 / single G92, which have 24 ROPs, the G92s in the dual-GPU 9800GX2 card have only 16 ROPs each.

The 8800 GT, 8800 GTS 512MB and 9800 GTX only have 16 ROPs.
In fact, there is no G92-based product (single or dual-GPU) with 24 ROPs, not even 20.
 
In years past, something like the GT200 would've been out 6 months to a year after the G80 instead of almost 2 years.

Shall we write down a timeline with new chip releases, manufacturing processes, transistor counts and corresponding die sizes over the past few years? Something tells me that you'll be completely wrong.
 
I for one would be really surprised if DX11 (and Nvidia's next chip for it) was as close as 2009, not to mention Q1 2009!!

On the one hand, we don't even have ONE full DX10 game out right now. On the other hand, we started hearing about DX10 a long time before it was out, and who's heard of DX11's details yet? On the third hand, DX9 has been around for quite some time too (I'll contend it's not gone yet), and it seems that as the API matures, its updates come more slowly.

On the fourth hand (who's counting?), the cross-platform nature of most games nowadays means that most game developers would like the PC API to remain as close as possible to that of the Xbox 360, which leads me to think that DX11 might be tied to the next-gen Xbox. While I got that last argument mostly out of my lower backside, there's been scarce information about that next-gen Xbox architecture so far, so I don't think it's around the corner.

And the fifth hand is here reminding me that ATI and Nvidia have slowed down their release cycles lately, and I would totally expect them to adopt Intel's tick-tock methodology, something one could argue Nvidia and ATI have already been following with G80-G92-GT200 and R600-RV670-RV770.

So I think that Nvidia's next product line will be a shrunk GT200 in 9 months, finally followed by a brand-new architecture 9 months after that (early 2010?), which I would more readily believe will coincide with DX11 (and Windows 7?).
 