boxleitnerb:
They are likely not being produced anymore, though, are they? Aside from that: Renaming sucks.
Why should one stockpile so many GPUs when you can easily continue to run 40nm wafers at TSMC? It's not like the 40nm process isn't available anymore, gets more expensive, or TSMC wants to phase it out anytime soon. If you don't have a replacement for a product you want to continue to sell, you don't stop production.
I thought the APUs were the replacement, or are supposed to be in the mid- to long term?

How so? Look at the market share of AMD CPUs and think about the market that discrete is primarily serving - we'd all love the situation to be different, because the company would be earning billions more. However, even then we also have the dual graphics strategy (pairing an APU's graphics with a discrete GPU via CrossFire technology), as discrete is still seen as a benefit on higher-end notebooks; dual graphics is best served by a discrete chip that is fairly close to the performance of the APU graphics.
More like, why would AMD spend R&D on low-end GPUs that rival the performance of their APUs? They'd just get compared unfavourably vs i3s or Pentiums.

See above. Additionally, when the IP is done, GPUs are relatively cheap to productize in comparison to APUs.
Dave Baumann said: GPUs are relatively cheap to productize in comparison to APUs.

Eh, either CPU + GPU < APU or APU < CPU + GPU. Either way...
Dave Baumann said: However, even then we also have the dual graphics strategy (pairing an APU's graphics with a discrete GPU via CrossFire technology), as discrete is still seen as a benefit on higher-end notebooks; dual graphics is best served by a discrete chip that is fairly close to the performance of the APU graphics.

Well, if AMD is looking for people to fire, anyone who genuinely believes this to be a good and viable strategy would be a good place to start...
why would AMD spend R&D on low-end GPUs that rival the performance of their APUs?
Because doing so gives people like me the option to buy a fast processor from the competition and a graphics card from my favourite brand.
I disagree. If I see an HD 7000 GPU in an OEM PC and I've heard how fast my friend's 7000 GPU is, I would not be amused to find a relatively slow old part in there. Renaming was not okay when Nvidia did it in the GeForce 8000/9000 days, and it is not okay now. The only mitigating circumstance is that it is OEM, but still...
You can add today's Nvidia cards too; why only mention Nvidia's 8000/9000 series? Don't you count the Fermi laptop parts that are not Kepler but are sold as the 600 series? Oh yes, I forgot, they increased the clock and memory speeds.
Eh, either CPU + GPU < APU or APU < CPU + GPU. Either way...
And you can't say HSA, because it isn't here yet.
It was an example, dude. I'm not going to count the renaming fails up against each other; what purpose would that serve?
The NDA on the K20 and K20X has been lifted. Titan claims #1 on the Top500 supercomputer list.
http://www.anandtech.com/show/6446/nvidia-launches-tesla-k20-k20x-gk110-arrives-at-last
The K20X has a TDP of 235 watts. GF110's Tesla had a TDP of 250 watts, and the GTX 580 was able to increase its core clocks 19% while staying within the same power envelope. I know that comparing Kepler @ 28nm to Fermi @ 40nm is not apples to apples, but that bodes well for potential GeForce core clocks.
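To put rough numbers on that (a back-of-the-envelope sketch in Python, not a prediction: the M2090, GTX 580 and K20X clocks are public specs, but assuming the Fermi-era Tesla-to-GeForce ratio simply carries over to GK110 is pure speculation on my part):

# Speculative GK110 GeForce clock estimate from Tesla clocks.
# Assumption: the Tesla -> GeForce clock headroom seen on Fermi
# (M2090 -> GTX 580, the ~19% mentioned above) carries over to Kepler.
fermi_tesla_mhz = 650      # Tesla M2090 (GF110), 250 W TDP
fermi_geforce_mhz = 772    # GeForce GTX 580 (GF110)
kepler_tesla_mhz = 732     # Tesla K20X (GK110), 235 W TDP

headroom = fermi_geforce_mhz / fermi_tesla_mhz      # ~1.19x
estimate_mhz = kepler_tesla_mhz * headroom          # ~869 MHz

print(f"Fermi Tesla -> GeForce clock ratio: {headroom:.2f}x")
print(f"Speculative GK110 GeForce core clock: {estimate_mhz:.0f} MHz")

That ignores binning and voltage scaling, though the lower 235 W starting point arguably leaves even more room; treat ~870 MHz as a rough ceiling, not a forecast.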