Nvidia GT200b rumours and speculation thread

I am aware of that, but given Nvidia's current situation, I think a reassessment of old wisdom might be in order, no?

That would push back the release date of the 55nm chips, not to mention the release of the subsequent mid-range derivatives of GT200. So, perhaps, following conventional wisdom might actually be helpful.
 
Depends on when they started thinking about alternatives - if at all.
 
OK, 65nm could be, but what is that chip if it's not a GT2xx-series part?
Well, someone told me that it's actually the real G100 chip -- 384 SPs with a 384-bit bus -- which was scrapped in favor of GT200 because it wasn't fast enough (due to the 384-bit bus) and because they wanted to push CUDA (it doesn't have FP64 ALUs).
Now don't shoot the messenger =)
 
Woah :oops:

Jawed
 
This makes no sense. Unless it only had, say, 48 TMUs... I don't see how 384 SPs/24 ROPs/384-bit GDDR3 could fit in a 470-480mm² chip on 65nm, given NV's poor density this generation. Not very credible IMO; plus, why tape it out and then cancel it? If the problem really was that bandwidth limited performance too much, then obviously a lot of people on their performance-analysis team should have been fired on the spot.

It's not impossible, I'll just reserve myself the right to remain very skeptical for now.
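
For a rough sense of the bandwidth gap (assuming GTX 280-class GDDR3 at ~1.1 GHz, i.e. ~2.2 Gbps per pin): a 384-bit bus would give 48 bytes/transfer × ~2.2 GT/s ≈ 106 GB/s, versus 64 bytes × ~2.2 GT/s ≈ 142 GB/s for the 512-bit bus GT200 actually shipped with -- roughly a third more.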
 
12 cores each with 4 8-wide SIMDs? 96 TMUs?

We still don't know how much area the double-precision in GT200 costs, do we? Actually, has anyone worked out where in the die shot the double-precision ALUs are?...

As to bandwidth, maybe they were expecting to be able to use GDDR5?

Jawed
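
(For what it's worth, that guess is at least arithmetically consistent: 12 clusters × 4 SIMDs × 8 lanes = 384 SPs, and 8 TMUs per cluster gives the 96.)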
 
It's not impossible, I'll just reserve myself the right to remain very skeptical for now.
I'm skeptical too, but that's what I've been told.
NV's poor density with GT200 is probably intentional (maybe for cooling reasons?). A simpler chip with more SPs per cluster, without FP64 support and with fewer ROPs, might have 384 SPs and still be a little less complex than GT200, I guess. The part that really doesn't make sense is the 512-bit bus -- if they thought that 384-bit GDDR3 wasn't enough and went with GT200, then why didn't they use GDDR5? Unless the original plan for GT200 was to come out at the end of 2007, but that would mean this "G100" was supposed to come even earlier...
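
Rough numbers on that, assuming the data rates that were actually shipping in 2008: GDDR5 runs at roughly double the per-pin rate of GDDR3 (~3.6 Gbps vs ~2.2 Gbps), so a 384-bit GDDR5 bus would have been good for around 48 bytes × 3.6 GT/s ≈ 173 GB/s -- comfortably more than the ~142 GB/s GT200 gets from 512-bit GDDR3.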
 
We still don't know how much area the double-precision in GT200 costs, do we? Actually, has anyone worked out where in the die shot the double-precision ALUs are?...

I would expect them to use the SP FP ALUs to produce partial results for DP work, requiring double the cycle count for adds and four passes for muls (roughly along the lines of the sketch below).

Cheers
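
To make that concrete, here's a minimal sketch of the classic "double-single" trick that SP-based DP emulation usually boils down to, written as CUDA device code (the dsfloat/ds_add/ds_mul names are made up for illustration). It is not a claim about how GT200 actually implements DP; note also that the __fmaf_rn used for the product error term assumes a true single-precision FMA, which G8x/GT200-class SP ALUs don't expose, so hardware of that era would need Dekker-style operand splitting there instead.

// Illustrative only: the "double-single" representation -- a value stored as the
// unevaluated sum hi + lo of two floats, with DP-ish arithmetic done on SP units.
struct dsfloat { float hi, lo; };

// Addition: Knuth-style two-sum plus renormalization -- several dependent SP adds
// per emulated DP add, hence the extra cycles.
__device__ dsfloat ds_add(dsfloat a, dsfloat b)
{
    float s = a.hi + b.hi;
    float v = s - a.hi;
    float e = (a.hi - (s - v)) + (b.hi - v);   // exact rounding error of s
    float t = e + a.lo + b.lo;
    dsfloat r;
    r.hi = s + t;
    r.lo = t - (r.hi - s);                     // renormalize
    return r;
}

// Multiplication: the exact low part of hi*hi comes from an SP FMA (or Dekker
// splitting on hardware without FMA), plus the cross terms -- several passes
// per emulated DP mul.
__device__ dsfloat ds_mul(dsfloat a, dsfloat b)
{
    float p = a.hi * b.hi;
    float e = __fmaf_rn(a.hi, b.hi, -p);       // exact error of the product
    float t = e + a.hi * b.lo + a.lo * b.hi;
    dsfloat r;
    r.hi = p + t;
    r.lo = t - (r.hi - p);
    return r;
}

Even this naive version shows where the "double the cycles for adds, several passes for muls" intuition comes from: every emulated op expands into a chain of dependent SP ops (and the compiler must not be allowed to contract or fast-math away the error terms).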
 
Wouldn't that contradict their claim of 1/8th DP performance compared to SP? Plus, they've explicitly stated that they have dedicated HW units for DP which sit idle when SP is used.
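
(The 1/8 figure also drops straight out of the commonly quoted unit counts, assuming one DP unit per SM: 30 SMs × 8 SP ALUs = 240 SP lanes versus 30 SMs × 1 DP unit = 30, i.e. 240/30 = 8 at the same clock.)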
 
I'm skeptical too, but that's what I've been told.
NV's poor density with GT200 is probably intentional (maybe for cooling reasons?). A simpler chip with more SPs per cluster, without FP64 support and with fewer ROPs, might have 384 SPs and still be a little less complex than GT200, I guess. The part that really doesn't make sense is the 512-bit bus -- if they thought that 384-bit GDDR3 wasn't enough and went with GT200, then why didn't they use GDDR5? Unless the original plan for GT200 was to come out at the end of 2007, but that would mean this "G100" was supposed to come even earlier...

Maybe your "G100" was supposed to be G90/G91?
They obviously didn't need it; G92 was roughly even with the high-end G80, and R600/RV670 was already lagging behind.
 
if they thought that 384-bit GDDR3 wasn't enough and went with GT200 then why didn't they use GDDR5?
Well, apparently NV just got sued over some memory-controller stuff covering several different controllers, didn't they? A GDDR5 controller was nowhere to be seen in that, though, while every other flavor was. What if they really just haven't developed/bought/whatever a GDDR5 controller?
 
No idea about any legal action, but it would seem to me that a litigant might struggle to sue somebody for developing an infringing technology if said technology had never been used or released in any products?

The fact that NV haven't been sued for something doesn't indicate that they have never tried to develop such technology, just that it has never been released in a product. I can't see any way that NV haven't been working on a GDDR5 controller - it would be sheer madness if they hadn't!
 