NVIDIA Kepler speculation thread

Lol, what? Why did they even bother hyping up that shit when they knew people would be expecting Kepler news and hence be disappointed?
 
I suspect in a week or two another announcement will be coming, so we should all wait before buying an AMD card. Then the cycle will repeat itself, until Kepler actually comes out.
 
Lol, what? Why did they even bother hyping up that shit when they knew people would be expecting Kepler news and hence be disappointed?

In my opinion, they did it precisely to get people to wait for Kepler a while longer. And since they're trying to turn Tegra into a core business asset (it could be argued that it already is), piggybacking a bit on desktop users' expectations can't hurt, au contraire!
 
Kepler and Maxwell are both cancelled, as Nvidia is moving directly to mass production of their new chip "Osborne". It is so fast that new physics needs to be invented in order to understand what is actually going on.

Rumours of Nvidia's 20nm yield issues due to a lack of unobtanium and wishalloy are thought to be no more than semi-accurate lies straight out of AMD HQ.
 
Not sure how much secret info he claims to have, but:
http://vr-zone.com/articles/nvidia-...sure-on-both-high-end-and-low-end-/14937.html
VR-Zone said:
The GK100 'Kepler' and its GK110 follow on were supposed to wrest that lead back to Nvidia, just a month or two from now. However, right now it looks a gone case, at least for the next half year. For whatever reasons, the GK104 upper mid-range part, a follow on to the GeForce 560 family, is now the top-end part to launch, and it can't - I mean can't - win over the HD7970 in any meaningful use. That role was reserved for the GK100 and follow ons.
 
OK, so they're focusing on getting their mid-range, and possibly lower-end, parts out sooner? Isn't that where the money is anyway? Maybe they're just no longer chasing the enthusiast crown and are focusing on the mainstream, tablet and HPC markets. If they've garnered a number of large contracts and perf/$/TDP wins in HPC already with Fermi, then they can ride that for quite a while longer?
 
Who knows, maybe Charlie was right and GK100 is cancelled while GK110 is basically to GK100 what GF110 was to GF100.
 
If true, this would make two nodes in a row where Nvidia's big-die strategy has been problematic when moving to a new node. Even if GK100 isn't cancelled, it'll still arrive far later than the competition, much like GF100 did compared to Cypress. Except this time there are no radical changes like Fermi's; if there were, white papers would already have been released to brief HPC companies on the compute changes, as happened with Fermi.

Regards,
SB
 
Wasn't that (GK104 launching first, GK110 later) what the rumors suggested for quite a while already (though the timeframe might not be right)?
I can't really see the news here. The only question seemed to be whether GK104 can beat (or equal) Tahiti or not. OK, so VR-Zone thinks it can't, but doesn't say why. I'm not so sure anymore that it really can't, given that it's most likely a chip of very similar size to Tahiti; factor in that (in contrast to Tahiti) it won't have any compute-only features, and it looks even better. Given the relatively conservative clocks/TDP AMD has chosen, that might enable Nvidia to compete quite easily with it (at least with the 7950).
 
Wasn't that (GK104 launching first, GK110 later) what the rumors suggested for quite a while already (though the timeframe might not be right)?
I can't really see the news here. The only question seemed to be whether GK104 can beat (or equal) Tahiti or not. OK, so VR-Zone thinks it can't, but doesn't say why. I'm not so sure anymore that it really can't, given that it's most likely a chip of very similar size to Tahiti; factor in that (in contrast to Tahiti) it won't have any compute-only features, and it looks even better. Given the relatively conservative clocks/TDP AMD has chosen, that might enable Nvidia to compete quite easily with it (at least with the 7950).

Though if you consider the past, AMD has had leaps-and-bounds better perf/mm^2 before - the real question is: can they hold it with the GCN architecture?
 
Who knows, maybe Charlie was right and GK100 is cancelled while GK110 is basically to GK100 what GF110 was to GF100.

There's always a small but finite chance that Charlie is right
- even a broken clock is right twice a day ...
- eventually a proton will decay ...

:LOL:
 
Wasn't that (GK104 launching first, GK110 later) what the rumors suggested for quite a while already (though the timeframe might not be right)?
I can't really see the news here. The only question seemed to be whether GK104 can beat (or equal) Tahiti or not. OK, so VR-Zone thinks it can't, but doesn't say why. I'm not so sure anymore that it really can't, given that it's most likely a chip of very similar size to Tahiti; factor in that (in contrast to Tahiti) it won't have any compute-only features, and it looks even better. Given the relatively conservative clocks/TDP AMD has chosen, that might enable Nvidia to compete quite easily with it (at least with the 7950).

I agree, it's just saying what has been rumored for several weeks now
- the GK104 will likely not beat the HD7970, but maybe the HD7950....
- and the GK110 is due in the Autumn

Here's my guess - the GK104 uses TSMC's 28HP process
- however, they canned the GK100 as it used too much power
- and the GK114 & GK110 will move to the 28HPL process, just like AMD are using for Tahiti...
 
Though if you consider the past, AMD has had leaps-and-bounds better perf/mm^2 before - the real question is: can they hold it with the GCN architecture?

AMD had better perf/mm^2 in the past, mainly because
1) They didn't have all the GPGPU gunk that nVidia chose to put in
2) They didn't have a hot clock

With this generation
1) AMD has now gone down the GPGPU route, much like Fermi
2) nVidia has allegedly dropped the hot clock

So, I would expect things to be much closer now
- though AMD would still have an advantage if they are on 28HPL, and NV is on 28HP...
Edit: Actually, that's a perf/W issue, not a perf/mm^2 issue, AFAIK
 
Though if you consider the past, AMD has had leaps-and-bounds better perf/mm^2 before - the real question is: can they hold it with the GCN architecture?
That depends on the exact chip you're looking at; recently it wasn't really all that much. Cayman is just a tiny bit better than GF114 in perf/mm². Barts, being only very slightly larger than GF116, is of course better there than all of Cayman, GF114 and GF116 - quite massively so compared to GF116, though that chip seems larger than it should be (if you compare how it scales down from GF114, I'd guess the mostly unnecessary 192-bit bus when coupled with GDDR5, plus the 8 excess ROPs, are at least partly to blame). Juniper is also much better in perf/mm² than GF116 (but again, GF116 just seems too big).
Turks (or Redwood) against GF108: Nvidia seems at a disadvantage again, but the die-size gap is small and performance is quite close when equipped with DDR3.
GF119 vs. Caicos, I don't know who wins this. Caicos is smaller and faster with GDDR5, but paired with DDR3 (which is the only version you can buy) GF119 easily wins the "best of the crap" title.
So Nvidia catching up with GCN in terms of perf/area seems quite doable. I think some perf/area "sacrifice" was expected for GCN in exchange for its much more predictable compute performance (and an easier compiler too...), whereas I don't see any reason why it would change a lot from Fermi to Kepler (apart from dropping the hot clock and doubling the ALUs, which should result in a larger area but potentially also higher achievable clocks).
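For fun, here's a minimal back-of-the-envelope sketch (Python) of that kind of perf/mm² comparison. The die sizes are just the approximate figures commonly floating around, and the performance index column is a pure placeholder - substitute your own aggregated benchmark numbers; the point is the method, not the values:

# Rough perf/mm^2 comparison sketch. Die sizes (mm^2) are approximate
# commonly-cited figures; 'perf' is a HYPOTHETICAL relative performance
# index -- replace it with real aggregated benchmark results.
chips = {
    "Cayman":  (389, 100),
    "GF114":   (360,  90),
    "Barts":   (255,  75),
    "GF116":   (238,  55),
    "Juniper": (166,  50),
}

for name, (area, perf) in sorted(chips.items(),
                                 key=lambda kv: kv[1][1] / kv[1][0],
                                 reverse=True):
    print(f"{name:8s} {area:4d} mm^2  perf/mm^2 index = {perf / area:.3f}")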
 
Though if you consider the past, AMD has had leaps-and-bounds better perf/mm^2 before - the real question is: can they hold it with the GCN architecture?

Part of the answer could be computed from gaming perf/mm² when comparing Cayman and Tahiti.
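Something like this, as a minimal sketch (the die sizes are the commonly-cited approximate figures, and the 1.4x performance ratio is a made-up placeholder you'd replace with a measured Tahiti-over-Cayman number from reviews):

# Cayman -> Tahiti perf/mm^2 scaling sketch (all inputs approximate/assumed).
cayman_mm2, tahiti_mm2 = 389.0, 365.0  # approximate die sizes in mm^2
perf_ratio = 1.4                       # HYPOTHETICAL Tahiti/Cayman gaming perf ratio
gain = perf_ratio * cayman_mm2 / tahiti_mm2
print(f"Tahiti perf/mm^2 is ~{gain:.2f}x Cayman's under these assumptions")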
 
AMD had better perf/mm^2 in the past, mainly because
1) They didn't have all the GPGPU gunk that nVidia chose to put in
2) They didn't have a hot clock

With this generation
1) AMD has now gone down the GPGPU route, much like Fermi
2) nVidia has allegedly dropped the hot clock

So, I would expect things to be much closer now
- though AMD would still have an advantage if they are on 28HPL, and NV is on 28HP...
Edit: Actually, that's a perf/W issue, not a perf/mm^2 issue, AFAIK

But that's just a massive generalization in itself.

First off, you have our beloved :LOL: saying:
This is the best desktop graphics architecture and physical implementation ever. Some rough edges, but that's the long and short of it.

2nd, if you look in the GCN thread, lots of people were borderline orgasmic about GCN's ALU architecture, specifically around scheduling and how it is far simpler than Fermi's but almost as functional.

3rd, when has more cache ever been a bad thing?

4th, you can't blame the hot clock; it's an assumption that NV can get better performance per mm² of shader ALU without one. As everyone says, ALUs are cheap; effectively moving data around is much more expensive.

5th, we don't really have a mature GCN driver yet, so let's hope that comes soon and we can see what we can really expect; it could stay at the current level or we could get a nice boost.

6th, ALUs take up what, 25-30% of the die on most GPUs? (A rough sketch of that area arithmetic follows below.)
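To put some illustrative numbers on points 4 and 6 (every input here is an assumption, not a measurement):

# Area cost of "drop the hot clock, double the ALUs" at roughly equal ALU
# throughput. Both the 25-30% ALU area share and the per-ALU area factor of
# a non-hot-clocked design are ILLUSTRATIVE guesses.
for alu_share in (0.25, 0.30):
    for per_alu_area in (1.0, 0.8):  # hypothetical slow-clock ALU size vs. hot-clock ALU
        new_die = (1 - alu_share) + 2 * alu_share * per_alu_area
        print(f"ALU share {alu_share:.0%}, per-ALU area x{per_alu_area}: "
              f"die grows to {new_die:.2f}x of original")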
 