NVIDIA Kepler speculation thread

An OEM official was also quoted as saying that "GK104's power draw exceeds 250W." In fact, NVIDIA is still at the PCI Express Gen.3 development stage, and has been pushing the PCI-SIG to incorporate a 400W power limit into the standard.

Wait what? Last time I checked, PCIE 3.0 was still sitting at 300W
 
Feels like too many parts at the very high end.

More likely the high-end part is 512bit and its cut-down version has a 384bit bus. Though in that case the cut-down version would probably have a 448bit bus instead. Some confusion there.

So you would end up with 512, 448, 384, 320, 256, 192 (going on past history), 128bit and whatever is lower than that. Seems like overkill.

But if there is a 512bit Tesla part, I suppose it isn't too unlikely that they'd release it in some form as an enthusiast gaming GPU.

In any case I won't take the 512bit rumor seriously until it is at least double confirmed.
 
So the 384bit card is not the flagship? That seems very unlikely to me.

I thought the dual-GPU GK110 would be their flagship card. Of course, here one can assume that GK110 will be faster than GK112, exactly the opposite of what the graph shows. But it is also logical to expect a dual-GPU card to be faster than GK112. ;)
Something like:

GK110 = GTX 690
GK112 = GTX 685
GK104 = GTX 680/670
GK106 = GTS 650/660
GK107 = GT 630/640


So you would end up with 512, 448, 384, 320, 256, 192 (going on past history), 128bit and whatever is lower than that. Seems like overkill...

Yeah, because the situation now is so much less complicated. /sarcasm
Just count how many versions of the GTX 560 alone are out there at the moment. 4? 5? :D



Or...
There is another possibility. GK110 is the one with 512-bit MI, and GK112 is the dual-GPU. In this case the graph might not be accurate with the expected launch time frames. Also, in this case, the codenames may be wrong...
 
nVidia has never named a graphics card after its chip codename. Why do people keep believing these stupid rumours every generation?
 
...In this case the graph might not be accurate with the expected launch time frames. Also, in this case, the codenames may be wrong...
What's most important is in the text. The graph might give you a false impression, because GK110 is barely above the GTX580.
NVIDIA wants to avoid another GF100. However, since it's a completely new architecture on a new process and there are some further changes ;) this is still up in the air.
 
Yeah, because the situation now is so much less complicated. /sarcasm
Just count how many versions of the GTX 560 alone are out there at the moment. 4? 5? :D
6 if you count the ones being released now/shortly

GTX 560 (OEM)
GTX 560
GTX 560 192bit
GTX 560 Ti (OEM)
GTX 560 Ti
GTX 560 Ti 448
 
That depends on whether or not they can clock up their PHYs. At ~6GHz, it isn't pretty.

Plus, power-wise, a wider bus with relatively lower-clocked RAM tends to be more advantageous; unless it gets too wide and leakage starts to counteract the gains from the lower-clocked RAM, I guess.

But for a given bandwidth around 240~300GB/s, 512 bits is probably more power-efficient than 384.
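To make the trade-off concrete, here's a back-of-envelope sketch (my own arithmetic, not from any NVIDIA spec): GDDR5 bandwidth is just bus width in bytes times the effective data rate, so a wider bus hits the same bandwidth target at a lower memory clock.

```python
# Effective GDDR5 data rate (GT/s) needed to hit a target bandwidth (GB/s)
# on a given bus width. Assumes the usual bandwidth = bytes/transfer * rate.
def required_data_rate_gtps(target_gbps: float, bus_bits: int) -> float:
    return target_gbps / (bus_bits / 8)

for bw in (240, 300):
    for bus in (384, 512):
        rate = required_data_rate_gtps(bw, bus)
        print(f"{bw} GB/s on {bus}-bit needs {rate:.2f} GT/s")
# 240 GB/s: 5.00 GT/s on 384-bit vs only 3.75 GT/s on 512-bit
# 300 GB/s: 6.25 GT/s on 384-bit vs 4.69 GT/s on 512-bit
```

Which is why the ~6GHz PHY problem mostly bites the narrower bus.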
 
Plus, power-wise, a wider bus with relatively lower-clocked RAM tends to be more advantageous; unless it gets too wide and leakage starts to counteract the gains from the lower-clocked RAM, I guess.

But for a given bandwidth around 240~300GB/s, 512 bits is probably more power-efficient than 384.
Additionally, unless the uncore-arch changed drastically over Fermi, 512 bit would also mean 33% more L2 cache, 33% more ROPs, and 33% larger maximum frame-buffer.
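The 33% figure falls straight out of the partition count, assuming (as the post does) Fermi's layout where L2 slices and ROPs are tied 1:1 to the 64-bit memory controller partitions:

```python
# Fermi-style uncore scaling: one L2 slice + ROP group per 64-bit
# memory partition, so everything grows with the bus width.
def partitions(bus_bits: int) -> int:
    return bus_bits // 64

growth = partitions(512) / partitions(384) - 1
print(f"{partitions(512)} vs {partitions(384)} partitions: {growth:.0%} more")
# 8 vs 6 partitions: 33% more L2, ROPs, and maximum framebuffer
```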
 
They can already get 3GB on a 384bit bus. Will they need more than that?
You're probably thinking in gaming terms ;)

The high-end Teslas and Quadros have 6 GB, so there seems to be some demand for as-large-as-possible framebuffers in HPC/workstation space. A 512 bit interface would increase that to 8 GB.
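The capacity math is simple if you assume 32-bit GDDR5 channels populated clamshell-style with two 2-Gbit (256 MB) devices each, which is how those 6 GB boards get there (an assumption on my part, not a confirmed board layout):

```python
# Hypothetical framebuffer capacity: 32-bit channels, two 256 MB
# GDDR5 chips per channel (clamshell), values in GB.
def framebuffer_gb(bus_bits: int, chips_per_channel: int = 2,
                   chip_mb: int = 256) -> float:
    channels = bus_bits // 32
    return channels * chips_per_channel * chip_mb / 1024

print(framebuffer_gb(384))  # 6.0 - matches today's top Tesla/Quadro
print(framebuffer_gb(512))  # 8.0
```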
 