NVIDIA Kepler speculation thread

He's not talking about whether it's a paper launch; he's talking about WHEN you can buy it.
OK, let me rephrase that for you: I'm pretty sure it'll be possible to buy a GK104-based card on the same day it's announced.
Tahiti's paper launch is already an established fact.
 
It doesn't seem that they're in such a rush to release anything.

EDIT: If this was a few years ago then things would be very different.
 
Lovebirds, please take it to PM. Thank you.


Awww...

Anywho... So we have Jan 31st and Feb 18th for supposed "announcement" dates.
The best launch rumor points to sometime around the end of Q1 '12, for an unspecified Kepler.
More common is a vaguer "sometime in 1H '12", again for an unspecified Kepler.
Charlie is still hassling his moles for a GK100/104/110 tapeout.

Basically we have nothing even somewhat double-confirmed about Kepler except 2012.
 
I'm really looking forward to Charlie's article on Kepler after it launches. It should be a fascinating read either way :LOL: I wonder why his moles are so quiet - we're not even getting the usual bad news and scathing "analysis". He's usually good with tape-outs and such.
 
If Xenos was "R500" then what was R5xx desktop?

It was obviously way too early for a USC on the desktop; a console is a completely different environment, and Xenos isn't fully DX10 if I recall correctly. Your point was that R400 anticipated what was going to be in DX10, while chances are high the original design wasn't even DX10 as it ended up being, and ATI canned any USC architecture from the desktop before DX10/R600. What am I missing here?
Dave's article might go into more detail, but ATI kept unified shaders out of the PC because R400 was late and needed changes, and there weren't enough resources to meet the Xenos schedule and still ship R500.
 
I think GK104's availability should be more or less the same as Tahiti's. If Tahiti is in healthy supply soon, there's no reason for GK104 not to have the same on launch day.

Does that also take into account that one of them -- in about the same timeframe -- may have to provide the GPUs for the next MacBooks, and seemingly a significant portion of Ivy Bridge laptops as well?
 
I'm really looking forward to Charlie's article on Kepler after it launches. It should be a fascinating read either way :LOL: I wonder why his moles are so quiet - we're not even getting the usual bad news and scathing "analysis". He's usually good with tape-outs and such.

Yeah he's unusually quiet. Maybe NVIDIA/TSMC finally plugged the leaks?
 
What if Nvidia is skipping Kepler and going straight to Maxwell!!!! :oops::oops:

Nvidia will never skip Kepler.

Nvidia will have to stay humble for at least six more months, as the GTX 580 is no longer the world's fastest single GPU, having been dethroned by the HD 7970...

Plus, we don't know how well Kepler will perform; but I think Kepler is going to be a good chip, unless it's another GTX 480.
 
Does that also take into account that one of them -- in about the same timeframe -- may have to provide the GPUs for the next MacBooks, and seemingly a significant portion of Ivy Bridge laptops as well?
I don't think GK104 will be the chip that goes into most MacBooks and Ivy Bridge laptops. GK104 should be the top-end offering in the notebook space, in the same way GF114 was. Most Kepler notebook wins are probably GK107 and GK106.
 
If a lack of die shots were a definitive criterion, AMD hasn't released a new GPU in years.
I think we'll see what GCN looks like when it makes its way into the next Fusion APU, just as was the case with Trinity and Cayman, and Llano before that. But indeed I have little hope that a die shot of a dedicated GPU from AMD will ever surface in public again.
 
What if Nvidia drops the separate INT + FP units in their CUDA cores and goes the simpler route: somewhat bigger combined FP/INT cores to improve performance/watt, with the hot clock dropped to make them smaller still? 1024 CCs @ 1.8GHz vs 1536 CCs @ 1.2GHz; the latter should still be smaller.

If they're not using the INT units for texture filtering etc., they shouldn't matter much for games. Better to get rid of them than to waste area on units sitting idle: make a single core able to do FP or INT ops and fit more of them, no?
[Image: diagram of a Fermi CUDA core, showing its separate FP and INT units]
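
Funny thing about those two configs: the raw math comes out dead even. A quick back-of-the-envelope sketch (my own numbers from the hypothetical above, assuming one FMA per core per clock; none of this is confirmed Kepler spec):

```python
# Back-of-the-envelope comparison of the two hypothetical configs above.
# Assumption: 2 flops per core per clock, i.e. one FMA (multiply-add) per cycle.

def peak_gflops(cuda_cores: int, clock_ghz: float, flops_per_clock: int = 2) -> float:
    """Theoretical peak single-precision throughput in GFLOPS."""
    return cuda_cores * clock_ghz * flops_per_clock

hotclocked = peak_gflops(1024, 1.8)  # Fermi-style: fewer cores on a hot clock
wide       = peak_gflops(1536, 1.2)  # wider design on a lower base clock

print(f"1024 CC @ 1.8 GHz: {hotclocked:.1f} GFLOPS")  # -> 3686.4
print(f"1536 CC @ 1.2 GHz: {wide:.1f} GFLOPS")        # -> 3686.4
```

Identical peak throughput either way, so the whole question really does come down to which layout is smaller and sips less power.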
 