Nvidia GT200b rumours and speculation thread

Official GTX285 specs and LOADS of pics

GTX285 uses G200-350 rev B3 chips
GTX295 uses G200-400 rev B3 chips
GTX260 uses G200-103 rev B2 chips

€370, no thanks - classic case of buy the refresh, really repent 6 months later. If that translates at €1 = £1, then the last 280s around at £275 now look a bargain (and I still don't want to pay over £200). Guess I'll have to wait for a faster sub-£200 card. It's struck me that at that price a 4850X2 is still ~15% faster and ~18% cheaper.
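Putting very rough numbers on that, here's a quick back-of-the-envelope sketch that just treats the ~15% faster / ~18% cheaper figures above as given and ignores multi-GPU scaling quirks entirely:

```python
# Rough perf-per-euro comparison, using the ~15% / ~18% figures above as assumptions.
# Performance is normalised to the GTX285; nothing here is a measured result.
gtx285_price = 370.0                        # EUR, the quoted price
gtx285_perf = 1.00                          # performance baseline

hd4850x2_price = gtx285_price * (1 - 0.18)  # ~18% cheaper
hd4850x2_perf = gtx285_perf * 1.15          # ~15% faster

advantage = (hd4850x2_perf / hd4850x2_price) / (gtx285_perf / gtx285_price)
print(f"HD4850X2 perf/EUR advantage: {advantage:.2f}x")  # ~1.40x, i.e. ~40% more perf per euro
```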
 
Your best bang for buck looks to be to get another HD4850.
PS: why did you go from a 3870X2 to an HD4850? Seems a strange choice.
 

Because I got fed up with dual-GPU issues, i.e. bugs/performance issues in new games like Stalker:CS and Crysis:Warhead while waiting for Crossfire profiles, and bugs in old games like Rome:Total War where the minimap flickers when CF is enabled. I sold my 3870X2 for about £90 and the 4850 was £115, so the 'sidegrade' didn't hurt too much, plus there are plenty of times when the 4850 performs better. That's one reason I'm not buying another 4850 and am waiting for the next well-priced-for-performance single-GPU option. As I'm on a P45 mobo it kind of makes sense for it to be an ATI card so I can Crossfire if I want to.
 
Seems NVidia's had as much trouble with 65 and 55nm as AMD had with R600.
Hmm, I missed this:

http://www.theinquirer.net/inquirer/news/801/1049801/nvidia-55nm-parts-update

Why is the GT200b such a clustered filesystem check? We heard the reason, and it took us a long time to actually believe it: they used the wrong DFM (Design For Manufacturing) tools for making the chip. DFM tools are basically a set of rules from a fab that tell you how to make things on a given process.

These rules can be specific to a single process node, say TSMC 55nm, or they can cover a bunch of them. In this case, the rules basically said what you can or can not do at 65nm in order to have a clean optical shrink to 55nm, and given the upcoming GT216, likely 40nm as well. If you follow them, going from 65nm to 55nm is as simple as flipping a switch.

Nvidia is going to be about 6 months late with flipping a switch, after three jiggles (GT200-B0, -B1 and -B2), it still isn't turning on the requested light, but given the impending 55nm 'launch', it is now at least making sparking sounds.
"Wrong DFM" does sound unlikely, I admit. But with the B3 revision of GT200b in GTX285, there's little arguing with the fact that NVidia's really struggled.

The real question is, with all the constraints and checks in place, how the heck did Nvidia do such a boneheaded thing? Sources told us that the answer is quite simple, arrogance. Nvidia 'knew better', and no one is going to tell them differently. It seems incredulous unless you know Nvidia, then it makes a lot of sense.
I wonder if the "wrong DFM" is really about the high shader clocks. Do any of TSMC's other customers run any chips or parts of chips at anything like 1.3-1.7GHz?

If it is indeed true, they will be chasing GT200 shrink bugs long after the supposed release of the 40nm/GT216. In fact, I doubt they will get it right without a full relayout, something that will not likely happen without severely impacting future product schedules. If you are thinking that this is a mess, you have the right idea.
I dare say in theory 65/55nm-specific problems shouldn't necessarily impact 40nm.

The funniest part is what is happening to the derivative parts. Normally you get a high end device, and shortly after, a mid-range variant comes out that is half of the previous part, and then a low end SKU that is 1/4 of the big boy. Anyone notice that there are all of zero GT200 spinoffs on the roadmap? The mess has now officially bled over into the humor column.
Ever since the shock and awe of discovering that G92 was road-mapped into 2009Q1, way back when, I don't think anyone's particularly surprised.

Jawed
 
Charlie has no idea what he's talking about in that article, period. In fact he has no idea whatsoever what he's talking about wrt shrinks; he still believes B3 is the 4th 55nm version even though everybody knows it's B1->B2->B3. The guy is just hopeless and should start redirecting more of his TheInq salary towards psychiatric help.

Yes, NV's entire 65/55nm line-up is one hell of a fiasco, but Charlie's FUD has little to do with the real problems.
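For what it's worth, the whole "3rd vs 4th 55nm spin" disagreement comes down to nothing more than where the revision numbering starts. A trivial, purely illustrative sketch:

```python
# Which 55nm spin is B3? It depends entirely on whether the B series starts at B0 or B1.
def spin_number(revision: str, first_rev: str) -> int:
    """1-based position of `revision` in a revision series that starts at `first_rev`."""
    return int(revision[1:]) - int(first_rev[1:]) + 1

print(spin_number("B3", "B0"))  # 4 -> Charlie's "4th 55nm version"
print(spin_number("B3", "B1"))  # 3 -> the B1->B2->B3 count above
```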
 

Probably because AMD and Intel and... heck, everyone else DOES use B0 revisions for their processors; it's just Nvidia that doesn't. But then again, assumption is....?
 
That is simply not true. Some companies do use A0/B0, but many don't. ATI, even now that it's part of AMD, certainly doesn't... (Remember RV670? A11.) And while I don't have the time to check every company that doesn't use A0/B0, Icera, for example, which prides itself on never needing a respin, is always A1 or e1... I'm not aware of anyone not using A0 but using B0, and I'm not sure that'd make much sense.

However, I see your point, and you're right that it does give him a good excuse... :) Although it's not an excuse to make the same mistake every time, or to keep distorting things in the same direction. I doubt I'm the only one who's slightly annoyed by how he uses his sources' tidbits; of course, I'm sure at least some of those sources are very happy about it. Heh, whatever - it's what he's paid for, and when it comes to sensationalism he's one hell of a scandal.
 
The same could be seen with the 9800GTX+: the process got cheaper, but somehow power consumption wasn't influenced in a good way because of the extra "performance".

http://en.hardspell.com/doc/showcont.asp?news_id=3628


xbitlabs only measured the PCIe power connectors while HC measured total system consumption. I don't think XB measured the power draw from the PCIe slot.
XB is also missing "Peak 2D" for some cards.
 
Can't edit my previous post?!

xbitlabs only measured the PCIe power connectors while HC measured total system consumption. I don't think XB measured the power draw from the PCIe slot.
Don't think so.
Check the above link. I believe the PCI-E slot power is labelled "+12V" and "+3.3V", while "+12V Ex.1" and "+12V Ex.2" are the external power connectors.

:?:
 
Greetings!

Check the numbers in the bar graph.

The numbers in the bar graph don't need to add up, as they're cumulative. There are three numbers indicated: Idle, Peak 2D, and Peak 3D.

E.g. EVGA GTX 260 (715/1541) = 45.1W Idle, 47.6W Peak 2D, 111.8W Peak 3D
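To make the methodology point concrete, here's a rough sketch of how per-rail readings (the "+12V"/"+3.3V" slot lines and the "+12V Ex.1"/"Ex.2" connectors mentioned above) roll up into per-state card power. The individual rail values are invented for illustration; only their sums match the GTX 260 figures quoted above.

```python
# Card-level power per state = sum of the rails discussed above: the PCIe slot
# (+12V and +3.3V) plus the external PCIe power connectors (+12V Ex.1/Ex.2).
# The per-rail breakdown here is made up; only the totals match the
# 45.1 / 47.6 / 111.8 W figures quoted for the EVGA GTX 260 (715/1541).
readings_w = {
    "Idle":    {"+12V slot": 15.1, "+3.3V slot": 5.0, "+12V Ex.1": 12.5, "+12V Ex.2": 12.5},
    "Peak 2D": {"+12V slot": 16.1, "+3.3V slot": 5.0, "+12V Ex.1": 13.2, "+12V Ex.2": 13.3},
    "Peak 3D": {"+12V slot": 35.8, "+3.3V slot": 6.0, "+12V Ex.1": 35.0, "+12V Ex.2": 35.0},
}

for state, rails in readings_w.items():
    print(f"{state}: {sum(rails.values()):.1f} W card power")

# A total-system measurement (the HC numbers) wraps CPU, motherboard and PSU losses
# around these figures, so the two sets of results aren't directly comparable.
```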
 
That is simply not true. Some companies do use A0/B0, but many don't. ATI, even now that it's part of AMD, certainly doesn't... (Remember RV670? A11?) - and while I don't have the time to check the list of all possible companies that don't use A0/B0, for example Icera which prides itself in never needing a respin is always A1 or e1... I'm not aware of anyone not using A0 but using B0, and I'm not sure that'd make much sense.

If I remember correctly, ATI have historically used A11 silicon. Wasn't the R600 A13?
 