NVIDIA Kepler speculation thread

seahawk,

Yes, if their original plan has always been to tape out the top dog last of all the bulk Kepler chips, which I'd say could be the case. If true, then there never was any GK100; if there was one and they canned it, then NV must have quite a bunch of resources to spare, or suddenly thinks it's Intel.

------------------------------------------------------------------------------------------------------------------

As for the other stuff above: just as FLOPs != FLOPs, the same goes for about every unit in any architecture. With a 4:1 ratio for the FLOPs and reduced clocks for HPC, you'll get more or less what's been rumored for Teslas.
 
It was in June last year that nVidia decided to postpone GTC from October 2011 to May 2012. We still don't know what triggered that move but it could be related to shifts in the Tesla roadmap.
 

On such short notice you don't cancel hypothetically one high end project and move straight ahead to its supposed successor. Unless you'd want me to believe that they're using pixie dust.
 

Not necessarily a cancellation. They obviously changed their minds about something though. Whether or not it's related to silicon is anybody's guess. It could be completely unrelated, like giving attendees more time to produce content. Or not having anything new to talk about in October.
 
That's quite a quick update and just for the record's sake here are the first exclusive GK110 specifications:

4*32SPs/SM
8 TMUs/SM
4 SMs/GPC
8 GPCs
64 ROPs
850MHz
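
For what it's worth, those numbers multiply out as follows (a quick sanity check of my own, assuming the usual convention of counting an FMA as 2 FLOPs per SP per clock):

```python
# Rumored GK110 configuration from the post above
sps_per_sm = 4 * 32   # 4*32 SPs/SM
sms_per_gpc = 4       # 4 SMs/GPC
gpcs = 8              # 8 GPCs
clock_ghz = 0.850     # 850MHz, single clock domain

total_sps = sps_per_sm * sms_per_gpc * gpcs
# Peak single-precision throughput, counting an FMA as 2 FLOPs
sp_tflops = total_sps * 2 * clock_ghz / 1000

print(total_sps)   # 4096
print(sp_tflops)   # ~6.96 TFLOPS SP
```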

One frequency domain; loss of the "double-pumped" hot clock (which accounts for the unexpected doubling of SPs?) -- does that make double precision easier?
 
If double precision is half rate (and assuming that this is not solely a reference to half rate for add combined with quarter rate for mul/mad/fma - I can't be bothered to dig out NVidia's exact wording), then it seems reasonable that the only chip that does this is the big chip, which is also the Tesla chip. The consumer chips would offer much lower double-precision performance.

If that's the case, then the ALU designs would be quite different for the two kinds of chips.

This might then imply that the ALU design for the top chip is having problems or is having knock-on effects. e.g. overall balance of the chip: too much area spent on ALUs? too little SP throughput for ROPs?
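
To put rough numbers on that: taking the rumored GK110 configuration above at face value, half-rate versus quarter-rate DP works out as follows (my own back-of-the-envelope, again counting an FMA as 2 FLOPs):

```python
total_sps = 4096    # rumored GK110 SP count (4*32 * 4 * 8)
clock_ghz = 0.850   # rumored core clock

# Peak single-precision throughput in TFLOPS
sp_peak = total_sps * 2 * clock_ghz / 1000

dp_half = sp_peak / 2      # half-rate DP:    ~3.48 TFLOPS
dp_quarter = sp_peak / 4   # quarter-rate DP: ~1.74 TFLOPS

print(dp_half, dp_quarter)
```

The gap between those two figures is large enough to matter for the Tesla line, which is why the exact wording of NVidia's half-rate claim matters.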
 
trinibwoy said:
It was in June last year that nVidia decided to postpone GTC from October 2011 to May 2012. We still don't know what triggered that move but it could be related to shifts in the Tesla roadmap.

GTC was moved primarily because Nvidia realized they had scheduled it too close to the Supercomputing conference, and didn't want to force people to choose between them. There may have been other reasons, too, but I doubt the rescheduling was due to a shift in the roadmap.
 
From 3DCenter (translated):

[Image: nVidia-GeForce-GTX-680-Specifications.gif]


The translation doesn't seem entirely clear to me, but it seems they don't consider this chart to be real?
 
Well, given that Kepler is rumoured to have lost the hot clock, that image doesn't make much sense with its separate "Gfx/Proc Clock" entries...
 
Only if the heat produced from your PC is magically sealed off from the rest of your house. You know, warm air generally spreads around :)
Except for the room with the PC getting hotter.
When I leave my PC on for the night, my room is about 1-1.5°C hotter in the morning than when it's turned off :)

It's more of a localized heating, as you stated: the room with your PC is warmer. Myself, I have never noticed any big increase in heat in the rooms I have kept my PC in. I just have never noticed the amount of heat that would be required to affect anything other than that room.
 
GTC was moved primarily because Nvidia realized they had scheduled it too close to the Supercomputing conference, and didn't want to force people to choose between them. There may have been other reasons, too, but I doubt the rescheduling was due to a shift in the roadmap.

Ok, that's a decent enough reason. Now the question is whether the May conference holds anything interesting in store. I suppose they can hold back on the CUDA/HPC news till then even if GK104 is out in the wild. Or maybe Jawed is right and GK104 doesn't have much in common with GK1x0.

4096 SPs? nVidia's gone Creative's StemCell route :oops:
j/k

I hope I'm not the only one that thinks Ailuros is just fucking around with his breaking news on GK110 :)
 
Let's fill in some blanks then:

GTX680 = GK104
3*32SPs/SM
8 TMUs/SM
4 SMs/GPC
4 GPCs
32 ROPs
256bit bus
2GB 5.0Gbps GDDR5
950MHz core
2*6pin
 


Well, if it manages to outperform a 7970 with only 60% of the bandwidth, color me impressed... I have a hard time believing that, though.
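
That 60% figure checks out against the HD 7970's known memory configuration (384-bit bus, 5.5Gbps GDDR5):

```python
# Bandwidth = bus width (bytes) * data rate, in GB/s
gk104_bw = 256 / 8 * 5.0    # rumored GTX680: 160 GB/s
hd7970_bw = 384 / 8 * 5.5   # HD 7970:        264 GB/s

ratio = gk104_bw / hd7970_bw
print(ratio)   # ~0.606, i.e. roughly 60%
```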
 