Nvidia BigK GK110 Kepler Speculation Thread

They are likely not being produced anymore, though, are they? Aside from that: Renaming sucks.
Why should one stockpile so many GPUs when you can easily continue to run 40nm wafers at TSMC? It's not like the 40nm process isn't available anymore, gets more expensive, or TSMC wants to phase it out anytime soon. If you don't have a replacement for a product you want to continue to sell, you don't stop production.
 
More like, why would AMD spend R&D on low-end GPUs that rival the performance of their APUs? They'd just get compared unfavourably to i3s or Pentiums.
 
They are likely not being produced anymore, though, are they? Aside from that: Renaming sucks.

Why assume that? Just because they were not renamed/refreshed in the channel does not mean they are dead or don't have good volume. Channel products in this segment have a different dynamic than products above ~$80. Volumes are not necessarily driven by reviews and flashy game marketing campaigns; in established markets they are primarily driven by distribution channels, and those like product stability. There was a case for NVIDIA (maybe the 8400 GS) where the markets were so stabilised on that brand that they had actually stopped producing the chip and started taking newer chips and back-branding them!

I thought the APUs were the replacement, or are supposed to be in the mid to long term?
How so? Look at the market share of AMD CPUs and think about the market that discrete is primarily serving - we'd all love the situation to be different, because the company would be earning billions more. However, even then we do also have the dual graphics strategy (pairing APU graphics with a discrete GPU via CrossFire technology), as discrete is still seen as a benefit on higher-end notebooks; dual graphics is best served by a discrete chip that is fairly close to the performance of the APU graphics.

More like, why would AMD spend R&D on low-end GPUs that rival the performance of their APUs? They'd just get compared unfavourably to i3s or Pentiums.
See above. Additionally, when the IP is done, GPUs are relatively cheap to productize in comparison to APUs.
 
See above. Additionally, when the IP is done, GPUs are relatively cheap to productize in comparison to APUs.

It's generally been a previous year's GPU, however - and I'm sure that would have continued, i.e. the 7750 goes with Kaveri... had it not been declared missing.
 
Dave Baumann said:
GPUs are relatively cheap to productize in comparison to APUs.
Eh, either CPU + GPU < APU or APU < CPU + GPU. Either way...

And you can't say HSA, because it isn't here yet.

Dave Baumann said:
However, even then we do also have the dual graphics strategy (pairing APU graphics with a discrete GPU via CrossFire technology), as discrete is still seen as a benefit on higher-end notebooks; dual graphics is best served by a discrete chip that is fairly close to the performance of the APU graphics.
Well, if AMD is looking for people to fire, anyone who genuinely believes this to be a good and viable strategy would be a good place to start...
 
Well, if AMD is looking for people to fire, anyone who genuinely believes this to be a good and viable strategy would be a good place to start...

That's very mean and rude of you. I hope no one wishes for you to lose your job.

I thought the APUs were the replacement, or are supposed to be in the mid to long term?

Integration always goes hand in hand with sacrificing performance. It's a waste of precious die space which otherwise would have been used for something more specialised.
But, I thought that putting shaders next to classic CPU was because of the need to accelerate all kind of programmes. ;)

why would AMD spend R&D on low-end GPUs that rival the performance of their APUs?

Because by doing so they give people like me the option to buy a fast processor from the competition and a graphics card from my favourite brand. ;)
 
They are likely not being produced anymore, though, are they? Aside from that: Renaming sucks.

Renaming only sucks when there are major feature-set differences. Process and VLIW4 vs. GCN are not major feature-set changes; different DirectX versions are.
 
I disagree. If I see an HD 7000 GPU in an OEM PC and have heard how fast my friend's 7000 GPU is, I would not be amused to find a relatively slow old part in there. Renaming was not okay when Nvidia did it in the GeForce 8000/9000 days and it is not okay now. The only mitigating circumstance is that it is OEM, but still...
 
I disagree. If I see an HD 7000 GPU in an OEM PC and have heard how fast my friend's 7000 GPU is, I would not be amused to find a relatively slow old part in there. Renaming was not okay when Nvidia did it in the GeForce 8000/9000 days and it is not okay now. The only mitigating circumstance is that it is OEM, but still...

You can add today's Nvidia cards too. Why do you only mention Nvidia's 8000/9000 series? Don't you count the laptop parts that are Fermi, not Kepler, but are called the 600 series? Oh yes, I forgot, they increased the clock and memory speeds.
 
I disagree. If I see an HD 7000 GPU in an OEM PC and have heard how fast my friend's 7000 GPU is, I would not be amused to find a relatively slow old part in there. Renaming was not okay when Nvidia did it in the GeForce 8000/9000 days and it is not okay now. The only mitigating circumstance is that it is OEM, but still...

Don't forget the Nvidia renaming in the 2xx, short lived 3xx, 4xx, 5xx, and 6xx days as well. :) I'll be looking forward to you blasting Nvidia on their currently renamed products.

Regards,
SB
 
You can add today's Nvidia cards too. Why do you only mention Nvidia's 8000/9000 series? Don't you count the laptop parts that are Fermi, not Kepler, but are called the 600 series? Oh yes, I forgot, they increased the clock and memory speeds.

It was an example, dude. I'm not going to count the renaming fails up against each other; what purpose would that serve?
 
Eh, either CPU + GPU < APU or APU < CPU + GPU. Either way...
And you can't say HSA, because it isn't here yet.

...productize for AMD, as they only sell GPUs to their partners, so a large portion of, say, managing inventory is no longer an in-house problem.
 
The K20 and K20X NDA has been lifted. Titan claims #1 on the Top500 supercomputer list.

http://www.anandtech.com/show/6446/nvidia-launches-tesla-k20-k20x-gk110-arrives-at-last

K20X has a TDP of 235 watts. GF110's Tesla had a TDP of 250 watts, and the GTX 580 was able to increase its core clocks by 19% while staying within the same power envelope. I know that comparing Kepler @ 28nm to Fermi @ 40nm is not apples to apples, but that bodes well for potential GeForce core clocks.
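As a rough back-of-the-envelope sketch of that headroom argument (assuming the GeForce part gets the same Tesla-to-GeForce clock uplift Fermi showed; the M2090 and GTX 580 clocks, and the K20X 732 MHz figure, are my own numbers, not from the post):

```python
# Hypothetical estimate of GeForce GK110 clocks, assuming the same
# Tesla-to-GeForce headroom that Fermi (GF110) demonstrated.
tesla_gf110_mhz = 650   # Tesla M2090 (GF110) core clock
gtx580_mhz = 772        # GTX 580 core clock

headroom = gtx580_mhz / tesla_gf110_mhz  # ~1.19, the 19% mentioned above

k20x_mhz = 732          # Tesla K20X core clock
estimated_geforce_mhz = k20x_mhz * headroom

print(f"Fermi Tesla-to-GeForce headroom: {headroom:.0%}")
print(f"Hypothetical GeForce GK110 clock: {estimated_geforce_mhz:.0f} MHz")
```

Purely speculative, of course; it ignores the different TDPs (235 W vs. 250 W) and how differently Kepler's boost behaviour may scale.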
 
The K20 and K20X NDA has been lifted. Titan claims #1 on the Top500 supercomputer list.

http://www.anandtech.com/show/6446/nvidia-launches-tesla-k20-k20x-gk110-arrives-at-last

K20X has a TDP of 235 watts. GF110's Tesla had a TDP of 250 watts, and the GTX 580 was able to increase its core clocks by 19% while staying within the same power envelope. I know that comparing Kepler @ 28nm to Fermi @ 40nm is not apples to apples, but that bodes well for potential GeForce core clocks.

I think we should rather be looking at K20 clocks and TDP, as this is apparently the higher-volume part, though still much lower-volume than a GeForce.
 