NVIDIA GF100 & Friends speculation

Oh, thanks for the correction! So, rearranging what I know: stock GF114 competes with Cayman Pro; a 900MHz GF114 should come close to Cayman XT/GTX 570; a 1GHz GF114 edges the GTX 570 and beats Cayman XT. That is some heavy performance. Cayman is in a world of hurt if Nvidia manages to sell GF114 at an SRP of $259, since the average user should be able to OC GF114 to 900MHz. Btw:
 
http://www.xtremesystems.org/forums/showpost.php?p=4708609&postcount=154

More info on the GTX 560 1GB: at its 900MHz SOC speed, Gigabyte rates it as close or equal to the HD 6950 2GB. Chinese websites have it priced at ¥1900 while the HD 6950 2GB goes for ¥2250. I am predicting the stock GTX 560 SRP will be $259; can AMD make Cayman Pro 1GB as affordable?
Well, if the 900MHz version is competing with Cayman Pro, then at 820MHz it would really only compete with Barts XT, so Cayman Pro 1GB could remain a bit more expensive than the stock GTX 560.
Not sure if the super-duper version poses any real threat; it might be too expensive, too noisy, or available only in small quantities (e.g. a design pushed a bit too far), though I guess we'll see. But yes, if it competes with the 6950 at 900MHz, the 1GHz version (despite quite a bit lower memory bandwidth) will likely compete with the HD 6970 1GB (and the GTX 570, for that matter).
1.0V at 820MHz is quite amazing, OTOH. Until I see the reviews I'm guessing the average will be higher (these are likely multi-VID as well), though it would still be low in any case.
 
The power use is nearing that of a 570 too.
For fun I've computed 448 / 384: that's only a ratio of 1.17.

You know that the 570 has 480 cores?


Well, if the 900MHz version is competing with Cayman Pro, then at 820MHz it would really only compete with Barts XT, so Cayman Pro 1GB could remain a bit more expensive than the stock GTX 560.
Not sure if the super-duper version poses any real threat; it might be too expensive, too noisy, or available only in small quantities (e.g. a design pushed a bit too far), though I guess we'll see. But yes, if it competes with the 6950 at 900MHz, the 1GHz version (despite quite a bit lower memory bandwidth) will likely compete with the HD 6970 1GB (and the GTX 570, for that matter).
1.0V at 820MHz is quite amazing, OTOH. Until I see the reviews I'm guessing the average will be higher (these are likely multi-VID as well), though it would still be low in any case.

I doubt the 1GHz model will be available in significant quantities. Also, I would say most 570s should work pretty well at 1.0V and 820MHz too.
 
You know that the 570 has 480 cores?




I doubt the 1GHz model will be available in significant quantities. Also, I would say most 570s should work pretty well at 1.0V and 820MHz too.

Correct and correct!

Both my 570s run fine at 850MHz with 1.013V. They even completed 3DMark 11 at 900MHz, albeit at 1.075V, while they got halfway through at 950MHz. Other 570s may be more capable.

As I said, the 560 seems to be a very impressive card, but let's not forget that a 570 at a 1100MHz memory clock (very easy to do) has a stunning 176 GB/s of bandwidth, something that the 256-bit bus of the 560 cannot even dream of. That's why I specifically guessed it will be as fast as a 570 only in non-bandwidth-limited scenarios. If nothing else, the 570 vs. 560 OC comparison will be mighty interesting.
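For what it's worth, the 176 GB/s figure falls straight out of the bus width and the GDDR5 data rate. A quick sketch (the 560's memory clock below is an assumed round number for illustration, not a confirmed spec):

```python
# GDDR5 transfers 4 bits per pin per memory clock, so the effective
# data rate is 4x the quoted memory clock.
def gddr5_bandwidth_gbps(mem_clock_mhz, bus_width_bits):
    """Bandwidth in GB/s: MHz * 4 transfers * bus width (bits) / 8 bits-per-byte / 1000."""
    return mem_clock_mhz * 4 * bus_width_bits / 8 / 1000

# GTX 570 overclocked to 1100 MHz memory on its 320-bit bus:
print(gddr5_bandwidth_gbps(1100, 320))  # 176.0 GB/s, the figure quoted above

# A hypothetical GTX 560 at a round 1000 MHz memory on a 256-bit bus:
print(gddr5_bandwidth_gbps(1000, 256))  # 128.0 GB/s
```

Even with the 560's memory overclocked, the narrower bus keeps it well short of an overclocked 570's bandwidth.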

It's nice to see Nvidia back in the game just by tweaking some Fermi stuff! Let's hope they remain modest about it and focus on better products.
 
Things like that make me wonder if Nvidia is looking to unofficially phase out the GTX 570 for a variety of reasons.

The GTX 570 launching a month later than the GTX 580 suggests that yields on the GTX 580 are good enough that it can't support a healthy ecosystem of GTX 570s without using non-salvage parts, i.e. it took an extra month to gather enough chips that aren't fully enabled GF110s to support a launch.

I don't think that is going to happen any time soon. The 570's availability has been the more abundant of the two and there is a very good market position for it. With the size of the Fermi chip, it's hard to believe that yields would be so good that there wouldn't be a plentiful supply of 480-core chips.

I think the yields for 384-core GF114 chips that can reach 1GHz are lower than they would need to be for it to take the place of the GTX 570. 820-900MHz is a good range for that chip, and makes way more sense in the overall lineup.

Imo the nVidia lineup is looking pretty damn solid after the 560 enters the marketplace.
 
.......

Imo the nVidia lineup is looking pretty damn solid after the 560 enters the marketplace.


Indeed. And there's also GTS 550 in the making/coming.

I guess the only thing they can do about it is enable the disabled ROPs and the third 64-bit memory controller. So no real extra shading or texturing power per clock will be available, but there's still the possibility of higher clocks, which coupled with the extra bandwidth could lead to a 30% gain over the GTS 450. That would put it above the 5770 for sure.

I'd also guess that we will see 768MB and 1536MB versions of this card. The 1536MB will surely make a lot of people think that it actually matters for a card of this caliber, but still, it will sell.
 
Indeed. And there's also GTS 550 in the making/coming.
Which should be based on GF114?

After the introduction of the "Ti", other GF114 SKUs might be GTX 560 or GTX 560 MX.

GTS 550 could be based on the rumored GF117. Probably a GF106-like GPU with only a 128-bit MC, or 192-bit/12 ROPs/192KiB L2.

If they can reach the same clocks as on GF114, a GTS 550 with 192 SPs and a 128-bit bus @ 925/1850/1100MHz should be possible and ~20% faster than the GTS 450, while having a reduced die size.
 
Which should be based on GF114?

After the introduction of the "Ti", other GF114 SKUs might be GTX 560 or GTX 560 MX.

GTS 550 could be based on the rumored GF117. Probably a GF106-like GPU with only a 128-bit MC, or 192-bit/12 ROPs/192KiB L2.

If they can reach the same clocks as on GF114, a GTS 550 with 192 SPs and a 128-bit bus @ 925/1850/1100MHz should be possible and ~20% faster than the GTS 450, while having a reduced die size.


I am thinking of a fully spec'ed GF106, actually: 192 shaders, 24 ROPs, 192-bit bus. A core clock in the 900s seems possible.
 
Update (01/18): Gigabyte commented on this article. The company outright denied having anything to do with whatever is in those pictures, and alleged it to be some kind of a "malicious attack" on it. In a statement, it said: "the information is false and the data is simulated from our old card. The picture is incorrect and was obviously photoshopped from our previous GTX460 model. The GTX560 card looks nothing like pictured on the article. We have good reason to believe this is a malicious attack."

Now Gigabyte is calling those slides a malicious fake... the drama :LOL:

1GHz may sound too good, but 880MHz is still acceptable with a max power use of 180W. So it is known GF114 is capable of a 50-60MHz factory OC; not bad when the "OC" Cayman Pro cards I know of only carry a measly 10MHz above stock.
http://forums.techpowerup.com/showthread.php?t=138681
http://www.arlt.com/Hardware/PC-Kom...e/GTX560/MSI-N560GTX-Ti-Twin-Frozr-II-OC.html
 
I am thinking of a fully spec'ed GF106, actually: 192 shaders, 24 ROPs, 192-bit bus. A core clock in the 900s seems possible.
IMO, GF106 isn't good enough. I have compared the GTS 450 to the GTS 250 on several occasions, and the latter was generally better and faster at handling more pixels and AA, and even at running PhysX titles, whereas the GTS 450 was slow or inadequate.
 
Salvage parts will cost less due to the fact that you can recover/use a GPU that would otherwise have been thrown away.
GPU cost = cost of wafer / GPUs used from this wafer. All GPUs will cost the same no matter if they are salvage parts or not.

Wafer costs will be roughly equivalent for both.
Wafer costs for different customers can be different because it's up to TSMC to decide the pricing in each case. What's even more important is that there can be different pricing schemes for different customers, and it's not entirely true that the cost of a GPU made at TSMC is almost purely based on yields and die sizes. TSMC has to make some money too, while making sure that they won't lose their customers to the competition. Saying that GPU cost is based solely on yields and die sizes is like saying that oil prices are based solely on the cost of extraction and transportation.
 
GPU cost = cost of wafer / GPUs used from this wafer. All GPUs will cost the same no matter if they are salvage parts or not.

Yes, but the existence of a salvage part should increase usage per wafer, decreasing the unit cost. So while they don't necessarily cost less individually, it certainly is a cost saving for the IHV to use them.
 
GPU cost = cost of wafer / GPUs used from this wafer. All GPUs will cost the same no matter if they are salvage parts or not.

Just as an example, let's assume a wafer costs 50,000 USD.

Chip A yields 100 working GPUs, no salvage chips used: 500 USD per chip.

Chip A yields 100 working GPUs, plus 400 salvage chips used (Chip B): 100 USD per chip.

Chip A yields 100 working GPUs, plus 400 salvage chips (Chip B) and an additional 500 salvage chips (Chip C): 50 USD per chip.

Salvage chips decrease the cost of chips. If no salvage chips are used, the fully enabled chip in this example costs a whopping 500 USD each. By the time you get down to a second salvage SKU (a third chip SKU overall), it's 450 USD cheaper; and adding that second salvage SKU made the chips 50 USD cheaper than having only one salvage SKU.
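The arithmetic above can be sketched in a few lines; the wafer price and chip counts are the illustrative figures from this example, not real numbers:

```python
# Per-chip cost is simply the wafer cost spread over every chip that
# actually gets used in some SKU, salvage or fully enabled alike.
def cost_per_chip(wafer_cost, *sku_counts):
    """Wafer cost divided by the total number of chips used across all SKUs."""
    return wafer_cost / sum(sku_counts)

WAFER = 50_000  # USD, the example figure above

print(cost_per_chip(WAFER, 100))            # 500.0 - full chips only
print(cost_per_chip(WAFER, 100, 400))       # 100.0 - plus one salvage SKU (Chip B)
print(cost_per_chip(WAFER, 100, 400, 500))  # 50.0  - plus a second salvage SKU (Chip C)
```

Each extra salvage SKU enlarges the denominator, which is the whole point: the wafer cost is fixed, so using more of its dies makes every die cheaper.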

And in all those examples, unused chips still = 0 USD, and unused wafer space = 0 USD. Cost per chip is purely the wafer cost divided by the number of working chips. So if you end up with only one working chip, you just made a chip that cost you 50,000 USD.

So, in the end: smaller chip = better use of the available area on a circular wafer = better cost per mm², even if yields were 100% for both a smaller and a larger chip. Add in the fact that larger chips yield worse given equal levels of redundancy, etc., and smaller chips have a far smaller cost per mm².

Wafer costs for different customers can be different because it's up to TSMC to decide the pricing in each case. What's even more important is that there can be different pricing schemes for different customers, and it's not entirely true that the cost of a GPU made at TSMC is almost purely based on yields and die sizes. TSMC has to make some money too, while making sure that they won't lose their customers to the competition. Saying that GPU cost is based solely on yields and die sizes is like saying that oil prices are based solely on the cost of extraction and transportation.

Sure, company A might negotiate a cost of, say, 4500 USD per wafer, while company B might only manage to negotiate 4550 USD per wafer. It's not going to overly affect the cost per mm² of a large versus a small chip. And perhaps one company will pay less for less validation, but does a company really want to risk the potential of massive failure rates and a public recall of product?

Your oil example is particularly bad, as I explicitly mentioned that there are additional costs once you get to board manufacturing and packaging, i.e. the equivalent of refining and the other costs of transforming oil into petrol, diesel, plastic, lubricants, or whatever. But that just makes your analogy worse: refining oil into petrol inevitably leads to additional saleable products made from the byproducts. Although that could be considered similar to salvage chips, in which case it bolsters my example even more.

Regards,
SB
 
I can't believe people have problems with this. Reminds me of something like a 5th grade math question:
Bob has 10 apples and is planning to sell them on the farmers market. But alas, half of them are rotten, so he has to sell the good ones for $2 to make a profit. Even worse, apple munchers are hurt by the recession so the best price he can get is $1.50. Poor Bob, he’ll be ruined!

But wait! A passerby sees Bob's rotten apples and offers to buy them for $0.50 a pop to feed to his pet pig. This offer makes Bob break down and cry. "Even if I can sell all my apples, I still have to make a dollar on each one," he exclaims sobbingly, "and I've already lost $2.50 today. I can't afford to lose that much money a second time!"

Q: Explain why Bob is a moron.
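Checking Bob's books in a couple of lines (all numbers come straight from the story; the $10 total cost is implied by the $2 break-even price on 5 good apples):

```python
# Bob's mistake: he spreads the whole cost over only the "good" apples,
# just like pricing only the fully enabled dies on a wafer.
good, rotten = 5, 5
cost = good * 2.00                     # $10 total cost, implied by the $2/apple break-even price
revenue = good * 1.50 + rotten * 0.50  # $7.50 from good apples + $2.50 from "salvage" apples
print(revenue - cost)                  # 0.0 - Bob breaks even; he isn't losing $2.50 twice
```

Selling the rotten apples doesn't add a second loss, it recovers the first one, which is exactly the salvage-SKU argument.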
 
Haha, poor Bob! Seriously though, guys, he was obviously referring to all working chips having the same cost whether they are salvage or fully enabled; hence the "GPUs used" in the denominator.
 
WTF, the 1GHz GF114 is real now? What malicious attacks are you getting, Gigabyte... that P2200 Vantage score beats my Cayman Pro/XT by a good 300+ points.
The stock 560 Ti is 9 inches long, with an NV TDP of 175W. The PCB is longer than the 460's, but the components installed look far simpler than the Cayman 1GB's.
 
Any reason not to think that this is the same rumour resurfacing, which Gigabyte already denied? The slide looks fake. Would be a monster of a card though.

The slide itself could be fake indeed.

That does not mean that 1Ghz would be completely out of the question for the GF114 though.

My GTX 570s can do 3DMark 11 at 900/1800/1100 just fine, and that's on auto fan, which was quite silent actually. They even managed 950MHz halfway through.

So I guess a careful design of the card, along with the much smaller GF114, should be able to do 1Ghz for more than just a quick run through.
 