Ahh, thanks. Do you know by how much?
Official GTX285 specs and LOADS of pics
GTX285 uses G200-350 rev B3 chips
GTX295 uses G200-400 rev B3 chips
GTX260 uses G200-103 rev B2 chips
Your best bang for buck looks to be getting another HD4850.
PS: why did you go from a 3870X2 to an HD4850? Seems a strange choice.
Hmm, I missed this: seems NVidia's had as much trouble with 65nm and 55nm as AMD had with R600.
"Wrong DFM" does sound unlikely, I admit. But with the B3 revision of GT200b in GTX285, there's little arguing with the fact that NVidia's really struggled.Why is the GT200b such a clustered filesystem check? We heard the reason, and it took us a long time to actually believe it, they used the wrong DFM (Design For Manufacturing) tools for making the chip. DFM tools are basically a set of rules from a fab that tell you how to make things on a given process.
These rules can be specific to a single process node, say TSMC 55nm, or they can cover a bunch of them. In this case, the rules basically said what you can or can not do at 65nm in order to have a clean optical shrink to 55nm, and given the upcoming GT216, likely 40nm as well. If you follow them, going from 65nm to 55nm is as simple as flipping a switch.
Nvidia is going to be about 6 months late with flipping a switch. After three jiggles (GT200-B0, -B1 and -B2), it still isn't turning on the requested light, but given the impending 55nm 'launch', it is now at least making sparking sounds.
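To make the quoted "rule deck" idea concrete, here's a minimal Python sketch. Every rule name and number below is invented for illustration; real DFM decks cover lithography, density, antenna rules and far more than this:

```python
# Toy illustration of per-node design rules and an optical shrink.
# All minimums are made up -- this is not real TSMC data.
RULES = {
    "65nm": {"metal_spacing": 90, "via_size": 80},  # minimum allowed, in nm
    "55nm": {"metal_spacing": 80, "via_size": 70},
}
SHRINK = 55 / 65  # an optical shrink scales every drawn dimension by this factor

def passes(node, layout):
    """True if every dimension in the layout meets that node's minimums."""
    return all(layout[k] >= v for k, v in RULES[node].items())

def shrink(layout):
    """Optically shrink a 65nm layout toward 55nm."""
    return {k: v * SHRINK for k, v in layout.items()}

# A layout that is legal at 65nm but was NOT drawn to the shrink-safe rules:
layout_65 = {"metal_spacing": 95, "via_size": 82}
print(passes("65nm", layout_65))          # True  -- fine at 65nm
print(passes("55nm", shrink(layout_65)))  # False -- the shrunk vias come out too small
```

The point of the article's claim, in these terms: design to the shrink-safe subset of the rules up front and the 65nm-to-55nm move really is "flipping a switch"; ignore it and the shrunk layout violates the smaller node's rules and needs respins.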
I wonder if the "wrong DFM" is really about the high shader clocks. Do any of TSMC's other customers run any chips or parts of chips at anything like 1.3-1.7GHz?

The real question is, with all the constraints and checks in place, how the heck did Nvidia do such a boneheaded thing? Sources told us that the answer is quite simple: arrogance. Nvidia 'knew better', and no one is going to tell them differently. It seems incredulous unless you know Nvidia, then it makes a lot of sense.
I dare say in theory 65/55nm-specific problems shouldn't necessarily impact 40nm.

If it is indeed true, they will be chasing GT200 shrink bugs long after the supposed release of the 40nm/GT216. In fact, I doubt they will get it right without a full relayout, something that will not likely happen without severely impacting future product schedules. If you are thinking that this is a mess, you have the right idea.
Ever since the shock and awe of discovering that G92 was road-mapped into 2009Q1, way back when, I don't think anyone's particularly surprised.

The funniest part is what is happening to the derivative parts. Normally you get a high-end device, and shortly after, a mid-range variant comes out that is half of the previous part, and then a low-end SKU that is 1/4 of the big boy. Anyone notice that there are all of zero GT200 spinoffs on the roadmap? The mess has now officially bled over into the humor column.
Charlie has no idea what he's talking about in that article, period. In fact he has no idea whatsoever what he's talking about wrt shrinks; he still believes B3 is the 4th 55nm version even though everybody knows it's B1->B2->B3. The guy is just hopeless and should start redirecting more of his TheInq salary towards psychiatric help.
Probably because both AMD and Intel and... heck, everyone else DO use B0 revisions for their processors; just Nvidia doesn't. But then again... assumption is....?

No, not everyone does. Some companies don't even use numbers...

That is simply not true. Some companies do use A0/B0, but many don't. ATI, even now that it's part of AMD, certainly doesn't (remember RV670? A11?) - and while I don't have the time to check the list of all possible companies that don't use A0/B0, Icera, for example, which prides itself on never needing a respin, is always A1 or e1... I'm not aware of anyone not using A0 but using B0, and I'm not sure that'd make much sense.
Don't think so. Xbitlabs only measured the PCIe power connectors while HC measured total system consumption. I don't think XB measured the power draw from the PCIe slot.
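To make that methodology gap concrete, here's a minimal Python sketch. All wattages and the PSU efficiency below are invented for illustration, not figures from either review:

```python
# Hypothetical figures only -- not from XbitLabs or HardwareCanucks.
PSU_EFFICIENCY = 0.82  # assumed efficiency of the test rig's PSU at this load

def card_power(connectors_w, slot_w):
    """Card-only draw: the 6/8-pin connectors plus the PCIe slot's share (up to ~75 W)."""
    return connectors_w + slot_w

def wall_power(card_w, rest_of_system_w, efficiency=PSU_EFFICIENCY):
    """Whole-system draw measured at the wall, inflated by PSU losses."""
    return (card_w + rest_of_system_w) / efficiency

connectors_only = card_power(110.0, 0.0)   # what probing only the connectors sees
full_card = card_power(110.0, 60.0)        # connectors plus the slot's share
system_at_wall = wall_power(full_card, 180.0)

print(f"connectors only:   {connectors_only:.0f} W")  # 110 W
print(f"connectors + slot: {full_card:.0f} W")        # 170 W
print(f"at the wall:       {system_at_wall:.0f} W")   # ~427 W
```

Under these made-up numbers a connectors-only figure and an at-the-wall figure differ by hundreds of watts, so two reviews can disagree noticeably without either measurement being wrong.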
Besides the fact that the numbers from XbitLabs don't add up, there is a noticeable difference between the two reviews.
What exactly do you mean - xbit's numbers don't add up?

Check the numbers in the bar graph.
Greetings!