NVIDIA GT200 Rumours & Speculation Thread

Oops!

Does that mean that the "missing MUL" is still missing when you use the 280GTX for gaming?
General shading ~gaming...

Then the Vantage scores would make sense!

280GTX: 240 x 2 x 1296 = 622 GFLOPS.
8800Ultra: 128 x 2 x 1512 = 387 GFLOPS

622 / 387 = 1.61

The 280GTX has a 1.66 times higher score in Vantage than the 8800Ultra.
Do not forget that the GTX 280 has, in comparison to the 8800 Ultra, only:
33% more ROPs @ ~ same clock
25% more TMUs @ ~ same clock
35% more BW

And we are talking about a benchmark at 1920x1200 with 4xAA and 16xAF enabled, so 66% more performance would not be a bad result.
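A quick sanity-check sketch of the arithmetic above, in Python. The shader figures count a MAD as 2 FLOPs per clock (i.e. with the "missing MUL" left out); the absolute ROP/TMU/bandwidth numbers are not in the post itself and are filled in here from the commonly quoted specs of the two cards, just to reproduce the percentages above:

```python
# Rough sanity check of the ratios above (MAD counted as 2 FLOPs/clock,
# i.e. deliberately ignoring the "missing MUL").

def mad_gflops(sps, shader_mhz):
    """Peak MAD-only shader throughput in GFLOPS."""
    return sps * 2 * shader_mhz / 1000.0

gtx280 = mad_gflops(240, 1296)    # ~622 GFLOPS
ultra = mad_gflops(128, 1512)     # ~387 GFLOPS
print(round(gtx280), round(ultra), round(gtx280 / ultra, 2))  # 622 387 1.61

# Fixed-function ratios quoted above (commonly listed specs):
print(round(32 / 24, 2))          # ROPs:      1.33 -> "33% more"
print(round(80 / 64, 2))          # TMUs:      1.25 -> "25% more"
print(round(141.7 / 103.7, 2))    # bandwidth: 1.37 -> "~35% more"
```

The point of the comparison: the ~1.61 theoretical shader ratio lines up with the ~1.66 Vantage ratio, while the ROP/TMU/bandwidth increases are all much smaller.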

But let's see what happens in real games... :D
 
I currently own a GF6800GT and I wonder whether it's worth waiting for the GT200b or just getting a GT200. If it's just a simple die shrink, the difference should be marginal (except for Nvidia's profit).
 
AFAIK, G80 duplicated the same instruction among the two SIMD arrays, somewhat limiting the threading flexibility -- what are the chances of GT200 breaking from this "tradition", or is there no need to do so?
Damn if it works that way it puts the cat amongst the pigeons. My mind boggles :p but I can't entirely rule it out...

Jawed
 
I'm claiming the prize for best guess of transistor count. :smile:

The 1.4 B could refer to all transistors including the unused redundant ones.




Quote:
Originally Posted by Voxilla
My guess is we will see another monster GPU for the 9800GTX:

55 nm
1.2 B transistors

256 SP
64 trilinear texture units (64 TA 128 TF)
2 GHz shader
750 MHz core
512 bit bus, 150 GB/s
1 GB

So basically twice a G80, 3x shader speed at 1.5 TFlop
 
Basically, you're saying that performance increases more than linearly with transistor count, and that higher-end chips are more efficient (because the fixed parts that always have to be there take up a relatively smaller portion of the total number of transistors) - of course, excluding the cases where the chips are bandwidth-limited, like G92, but I don't suppose this will be the case with RV770XT or GT200. So, theoretically, GT200 should have more performance per transistor than RV770.

Maybe, maybe not. We cannot state it now, AFAIK, as there are other factors to be counted in. That is, if G92 is bandwidth-limited, GT200 will be just the same: even if it had more than 2x the power of G92, it has "only" two times the bandwidth. Of course, if this is the case, it will fare more than 2 times higher only when the 512 MB framebuffer of most of the G92s sold is exhausted and the 896-1024 MB framebuffer is not. The only exception would be if Nvidia has come up with some new bandwidth-saving technique we are not yet aware of.
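A tiny numeric sketch of that argument; the 2.5x / 2.0x figures below are made up, chosen only to match the "more than 2x the power, but only 2x the bandwidth" premise:

```python
# Normalise G92 to 1.0 for both shading power and bandwidth, then apply the
# hypothetical premise: GT200 with 2.5x the power but only 2.0x the bandwidth.
g92_power, g92_bw = 1.0, 1.0
gt200_power, gt200_bw = 2.5, 2.0   # hypothetical figures, not real specs

# Bandwidth available per unit of shading power:
print(g92_bw / g92_power)      # 1.0
print(gt200_bw / gt200_power)  # 0.8 -> relatively *more* bandwidth-starved,
                               # so it hits the same wall in bandwidth-bound
                               # cases, until the 512 MB framebuffer of a
                               # typical G92 runs out and 896-1024 MB does not.
```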
 
Quote:
I'm claiming the prize for best guess of transistor count. :smile:

The 1.4 B could refer to all transistors including the unused redundant ones.




Quote:
Originally Posted by Voxilla
My guess is we will see another monster GPU for the 9800GTX:

55 nm
1.2 B transistors

256 SP
64 trilinear texture units (64 TA 128 TF)
2 GHz shader
750 MHz core
512 bit bus, 150 GB/s
1 GB

So basically twice a G80, 3x shader speed at 1.5 TFlop

The only thing you got right on that list was the bus width and the memory amount. Considering the number of mistakes, from the name to the manufacturing process and the rather long list of the rest, I wouldn't be too proud of those "laurels".
 
Still, if GT200 is truly such a massive, power-hungry beast, one wonders if they can pull an ATI and do something similar to the R600 -> RV670 conversion. Somehow I doubt it, as part of the problem with R600 was a very leaky process, which I don't think GT200 has to contend with.

If not, then it's quite possible that Nvidia might be leaning towards abandoning monolithic designs. After all, things are only going to keep getting larger with the need to add features for a future DX11.

They already couldn't get enough transistor budget to add both FP64 support and DX10.1 support to the current chip. I'd hate to see how monstrous this might become with DX11 in the mix at some point in the future.

However, with the resources available to NV, I'd surely expect they are already doing as much R&D on multi-GPU as ATI is. At least I would hope so.

Regards,
SB
 
Hell, it might even be able to do it ... actually supporting it in the drivers though is clearly a bad idea for them for the moment.
 
I'm still baffled by nVIDIA's decision to stick with DX10 instead of DX10.1. They've added roughly 50% more shaders, along with a doubling of ROPs (compared to G92), a doubling of the memory interface and so on, but at the end of the day they decided to leave DX10.1 out.

Does a GPU require a lot of changes/tweaks to the pre-existing DX10 hardware to satisfy the requirements of DX10.1?
 
Supporting DX10.1 only in their high-end range puts weight behind the whole rest of the range from their competitor ... it's a net loss. Until they are ready to support it at least in the mid-range, they won't support it at all, IMO.
 
Quote:
Supporting DX10.1 only in their high-end range puts weight behind the whole rest of the range from their competitor ... it's a net loss. Until they are ready to support it at least in the mid-range, they won't support it at all, IMO.

It still does not explain Nvidia's decision to not support DX10.1 - at all - in Tesla.
- Or do you think they will be able to add support in their refresh and shrink?
- Do they just not care - at all - that AMD will tout it and even flaunt it if they can? We are talking about at least 18 months of doing without it while every benchmarker and hardware site mentions it.
 
Don't they already meet some of the 10.1 spec? At least that's what I remember one of Nvidia's execs saying.
 
DX10.1 cost is probably very small (area wise).

If the rumour that they would have to revamp the G8x TMUs is true, then there might not be a significant cost in area, yet quite a bit in added R&D resources?

Quote:
Supporting DX10.1 only in their high-end range puts weight behind the whole rest of the range from their competitor ... it's a net loss. Until they are ready to support it at least in the mid-range, they won't support it at all, IMO.

Allowing the fact to leak out that they've just taped out a 55nm whatever GT200 variant isn't exactly a net win either. I realize it's not related, yet under that reasoning they shouldn't have combined HDR+MSAA since G80 either.

The real question is whether there are any games under development with deferred shading/shadowing that have a 10.1 path for MSAA.
 
The 1.2B transistors is what Nvidia talked about yesterday.

http://anandtech.com/weblog/showpost.aspx?i=453
See the text beside the die picture of the G100.

Were it not for spoiling the RV770 launch, the 65nm version would not be released.




Quote:
The only thing you got right on that list was the bus width and the memory amount. Considering the number of mistakes, from the name to the manufacturing process and the rather long list of the rest, I wouldn't be too proud of those "laurels".
 