If it's 240, does that mean 256 SPs in 16 clusters of 16, with one cluster disabled to improve yields?
According to the CUDA docs, branching granularity remains at 32 pixels. So each cluster probably just gets another 8-way SIMD unit. That should give a healthy boost to shading while making better use of the already abundant texturing capability introduced with G80.
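To make the granularity point concrete, here's a minimal CUDA sketch (the kernel name and sizes are mine, purely illustrative): any branch that splits threads within one 32-wide warp forces the hardware to run both paths serially, no matter how narrow the underlying SIMD units are.

#include <cuda_runtime.h>

__global__ void divergent(float *out)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    // Threads 0-15 of each 32-thread warp take one path and threads
    // 16-31 the other, so every warp executes BOTH paths back to back.
    // Branching on a multiple-of-32 boundary instead would avoid that.
    if ((threadIdx.x & 31) < 16)
        out[tid] = sinf((float)tid);
    else
        out[tid] = cosf((float)tid);
}

int main()
{
    float *d_out;
    cudaMalloc(&d_out, 256 * sizeof(float));
    divergent<<<1, 256>>>(d_out);  // one block = 8 warps of 32 threads
    cudaDeviceSynchronize();
    cudaFree(d_out);
    return 0;
}

Whether the ALUs are 8-wide or 16-wide underneath, the warp is still the unit you pay divergence costs on.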
No DX10.1? Come on, will we be stuck with bad DX10 AA for another 1.5 years?
For some reason I believe GT200 is just a huge monster chip, squeezing all the brute force out of the G80/G92 architecture. I don't think it will be the technical foundation for the next 1.5 years.
Oh, come on... lol, bad AA? Please explain this mystical advance in AA that DX10.1 brings.
I like your way of thinking. I think a lot of people are expecting Nvidia to increase the amount of ALU power available per TMU (more efficient use of die space on chips designed for high resolutions). I'm curious whether it will have much of an impact with dynamic branching, though. The 24 SPs per cluster also raise some questions WRT double precision in my mind...
Did NVIDIA make a G80 "X2" card? No. The G80 core is too big and too hot to stuff two of them onto a single card. With G92's reduced die size and power consumption, it became much more feasible to put two cores so close together. But even with G92's higher efficiency, the 9800GX2 is only just adequately cooled.
If two GT200 cores (rumored TDP >200W each) were put on the same card, you'd be looking at upwards of 400W from the GPUs alone, and a dual-slot air cooler wouldn't be able to dissipate that kind of heat effectively. A triple-slot design would be cumbersome and perhaps too heavy, and there aren't enough enthusiasts with watercooling to justify a 'watercooling only' card.
If/when GT200 is shrunk to 55nm, it might be possible, but not likely before then.
I also really doubt a monster like GT200 has "awesome margins." Yields won't be great with such a big core, and at a time when $200 cards can run most games at high settings on 1920x1200 monitors, it's going to be a hard sell. It's not like average gamers are really clamoring for more performance right now.
Oh, come on...
As for something new from NV in 1Q09 -- don't count on it. G100 is the new one; it's shrinks and G1xx derivatives after that.
I'd say that if G100 isn't DX10.1 compatible (which seems more and more likely =( ), then we'll have to wait for NV's DX11 GPU for any new features.
Could also mean 240 in clusters of 24, i.e. 10 clusters of 24. At least, that's what seems to have the most consensus right now.
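For what it's worth, both rumored layouts land on the same total, so the SP count alone can't settle it. A quick back-of-the-envelope check (plain host code; all the numbers here are the rumored ones, nothing confirmed):

#include <cstdio>

int main()
{
    int a = 16 * 16 - 16;  // hypothesis A: 16 clusters of 16 SPs, one fused off
    int b = 10 * 24;       // hypothesis B: 10 clusters of 24 SPs, all enabled
    printf("16x16 minus one cluster: %d SPs\n", a);  // 240
    printf("10x24, fully enabled:    %d SPs\n", b);  // 240
    return 0;
}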
Not only that: a single G80 has 24 ROPs, while each of the two G92 cores on the 9800GX2 has only 16.
In years past, something like the GT200 would've been out 6 months to a year after the G80 instead of almost 2 years.