Nvidia BigK GK110 Kepler Speculation Thread

That probably has some effect on the big improvements Nvidia made. Getting rid of the hotclocks probably had a larger effect, though. The "compute" parts of the chip don't do much of anything when gaming, hence those transistors aren't going to be burning through too much power when playing BF3 or Crysis 3 or whatever.

While I'd fully agree about the hotclocks, the "compute" parts, as you call them, are highly debatable. Do we know for sure that Kepler cores actually have physically independent logic for double precision? Beyond that, should I take it that the larger caches and register files on GK110, for example, will partially idle in 3D, or how should I understand it?
 
jimbo75 said:
They realise that no matter how much faster they are, people will still pay more for Nvidia cards.
Now if only they could figure out WHY.

Of course it took them over 5 years to figure out that shimmering AF, no matter how "correct", pissed end users off...
 
Why do you think AMD's high-end cards are faster?

Unless you're counting the ridiculous dual-gpu cards AMD has been faster for 8 out of 11 months this year, first with the 7970 and then with the 7970 GHz Edition.

If you do count the dual-gpu cards, AMD has been faster for the vast majority of the past 4 years.

What I've learned is that it doesn't even matter, because most people laying out $300+ on a graphics card don't even know that AMD exists. They don't look at benchmarks either - they simply buy whichever Nvidia card is in their price range and ends with 60, 70 or 80. Nvidia will be able to command $600+ for GK110 and it will still probably outsell the 7970.
 
Now if only they could figure out WHY.

Of course it took them over 5 years to figure out that shimmering AF, no matter how "correct", pissed end users off...

It's simple: brand. AMD's name is junk to the "elite". I argue with them constantly on gaming forums (this might not come as a surprise to many) and the vast majority of them are absolutely amazed to find out that AMD is actually faster. They simply did not even regard it as a possibility.

It's going to take a concerted effort for AMD to change that, and they won't do it by slashing prices on their cards - that just reinforces the belief that they must be slower. Take two cards, show them to a person who has no idea about the performance of either and they'll assume the most expensive one is faster.
 
Unless you're counting the ridiculous dual-gpu cards AMD has been faster for 8 out of 11 months this year, first with the 7970 and then with the 7970 GHz Edition.

If you do count the dual-gpu cards, AMD has been faster for the vast majority of the past 4 years.

I thought the Tahiti-based mGPUs were vendor-specific initiatives?

As far as the 7970GE goes, it always sounded to me like a reaction meant to end up a tiny bit ahead of the GTX 680, which is itself a tiny bit ahead of the 7970. Considering the GE's price point, I'm not sure why that one was needed exactly, especially since 7970s are quite good overclockers anyway.

Considering the perf/W ratio, NV could have reacted with a further pumped-up GK104 variant, but I'd also fail to see the purpose, let alone that it would bite into the future GK114-to-GK104 performance difference; a price AMD might have to somewhat "pay" with the Sea Islands top dog compared to the 7970GE.
 
Unless you're counting the ridiculous dual-gpu cards AMD has been faster for 8 out of 11 months this year, first with the 7970 and then with the 7970 GHz Edition.
The 7970GHz Edition isn't clearly faster than the GTX 680. They tend to trade positions in different benchmarks. Plus it's hotter, louder, and uses more power. These are things that consumers also care about, not just pure performance.
 
AMD needs to be credible on software. They seem mostly to make promises for Steamroller and its follow-ups, but Nvidia is still perceived to have better drivers and more features (because of CUDA and PhysX support) and is the go-to brand for the semi-pro and professional markets.

I'm still waiting for a loud and strong commitment to the Linux driver, i.e. top quality and long hardware support, especially as they make APUs and want them used for HPC. They need to be rock solid under Linux, and Steam for Linux is coming: an APU would be apt for running games in the vein of TF2, Counter-Strike etc., at least on the hardware side, but if I feel I'll be crippled, immediately or down the road when Catalyst drops support, then I won't buy it.

Even if desktop Linux market share is 1% (1% of 500 million computers), some people feel more comfortable knowing their hardware won't be left by the roadside.
 
The 7970GHz Edition isn't clearly faster than the GTX 680. They tend to trade positions in different benchmarks. Plus it's hotter, louder, and uses more power. These are things that consumers also care about, not just pure performance.

And yet when Nvidia is hotter, louder and using more power, they don't seem to care as much.
 
And yet when Nvidia is hotter, louder and using more power, they don't seem to care as much.

Because those rich milksops (or cissies) don't have a damn clue what they're buying. Perhaps GeForce just sounds better to them than Radeon, which is actually kind of true.
The truth is that people who are deep enough into the hardware field are more willing to buy Radeons.

Apple or Nvidia addiction, it's the same. People in the know would rather go for HTC or Samsung. :mrgreen:
 
The 7970GHz Edition isn't clearly faster than the GTX 680. They tend to trade positions in different benchmarks. Plus it's hotter, louder, and uses more power. These are things that consumers also care about, not just pure performance.

Depends what benchmark you're talking about; overall now, the 7970 GHz is faster in most benchmarks, with some exceptions where the Nvidia card is marginally faster or equal (I'm thinking of BF3, where they trade blows).

Was this card needed? In my view, they could have just pushed a 1050-1075 MHz 7970 v2 (like Nvidia has done in the past) and that would surely have been enough. But I'm sure it has had some effect on different levels (when the GHz Edition, and especially the new models tested, show up in the highest positions, that always has an impact).

Take a look at the Asus Platinum HD7970 GHz review. TDP is maybe a bit high, but damn, this card is impressive.

It's simple: brand. AMD's name is junk to the "elite". I argue with them constantly on gaming forums (this might not come as a surprise to many) and the vast majority of them are absolutely amazed to find out that AMD is actually faster. They simply did not even regard it as a possibility.

It's going to take a concerted effort for AMD to change that, and they won't do it by slashing prices on their cards - that just reinforces the belief that they must be slower. Take two cards, show them to a person who has no idea about the performance of either and they'll assume the most expensive one is faster.

You're painting the picture a bit darker than it is. First, you need to separate the "AMD CPU" brand image from the GPU one. On all the forums I read there are of course some Nvidia fanatics (and AMD fanatics) (sometimes, reading certain posts, you even wonder whether a few of them work for Nvidia marketing), but mostly I see plenty of people jumping from one brand to the other without a problem. Now, you will indeed see many gamers who just buy Nvidia, but they've been buying Nvidia cards since the GeForce 4. They'll buy a GT 650 just because they remember the 550, or because they believe it's as good as high-end Kepler. They just don't know what they're buying.

Nvidia has good marketing and some real arguments (3D Vision, and even PhysX, can matter to some gamers; I even see some who think PhysX runs in all games, as if it were a standard acceleration applied transparently by the GPU in every game, lol). And the stereoscopic 3D market is a niche anyway (you need a 3D Vision-compatible monitor, especially with 3D Vision 2, because of the luminosity enhancement), and it's definitely not suited to all games.

AMD needs some improvement in stereoscopic 3D and features, but above all they need to learn how to sell their features to the broad general public, not as if their marketing were addressed only to ultra-enthusiasts.

What's funny is that AMD does score some good points with enthusiasts and overclockers. The 7970 was already a damn good piece for overclocking when it was released.
Today, with the step back Nvidia is taking on voltage and overclocking control, the 7970s, GHz or not, are appreciated even more (though ATI/AMD GPUs have always been a lot of fun for overclocking, as far as I remember). Multi-monitor is also more convenient on the AMD side, though that will only interest some enthusiast gamers (maybe just a feeling, since they were the first to offer it). Even on the compute side, and again this will only interest a few enthusiasts, the 7970 is doing really well for AMD's image.
 
The 7970GHz Edition isn't clearly faster than the GTX 680. They tend to trade positions in different benchmarks. Plus it's hotter, louder, and uses more power. These are things that consumers also care about, not just pure performance.

Yes it is faster, by about 10%. And it uses more power under load, but less when idle.
 
While I'd fully agree with the hotclocks part the "compute" parts as you call them are highly debatable. Do we know for sure that Kepler cores have actually physically independent logic for double precision? Other than that larger caches and register files f.e. on GK110 will partially idle in 3D or how should I understand that?

I don't know the technical side of how it works, but it's how Fermi's architecture operated. In benchmarks that were not CPU-bound, GF110 has exactly the same efficiency (perf/watt) as GF114, despite all the extra compute transistors GF110 had. I don't see why Kepler will be any different.

If Nvidia can improve the perf/watt metric of GK114 over GK104 by at least 10%, then a 250-260 watt GK110 should be 40-55% faster than the GTX 680.
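
The arithmetic behind that projection can be sketched out. To be clear, the ~195 W board power for the GTX 680 and the 10-15% perf/W gain are my assumptions for illustration, not confirmed figures:

```python
# Back-of-the-envelope GK110 performance projection.
# Assumes performance scales as (board power ratio) x (perf/W ratio).
GTX680_POWER_W = 195.0  # assumed GTX 680 board power, approximate

def projected_speedup(gk110_power_w, perf_per_watt_gain):
    """Projected performance relative to the GTX 680 (1.0 = equal)."""
    return (gk110_power_w / GTX680_POWER_W) * (1.0 + perf_per_watt_gain)

low = projected_speedup(250.0, 0.10)   # 250 W board, +10% perf/W
high = projected_speedup(260.0, 0.15)  # 260 W board, +15% perf/W
print(f"~{(low - 1) * 100:.0f}% to ~{(high - 1) * 100:.0f}% faster")
```

With those inputs the range works out to roughly +41% to +53%, in line with the 40-55% speculation, though everything hinges on the assumed power figures.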
 
And yet when Nvidia is hotter, louder and using more power, they don't seem to care as much.
When nVidia's parts were hotter, louder, and using more power, they didn't sell nearly as well against AMD parts. The GTX 6xx parts are selling well because they are quite good parts.
 
Yes it is faster, by about 10%. And it uses more power under load, but less when idle.
That's really stretching things. And the idle power is only less on long idle. But if you care about power, you don't usually leave your computer on anyway.
 
Are you referring to the long idle standby state?, because there doesn't seem to be any difference in regular idle power consumption between these cards.

Both. Well, mostly long idle, but looking at The Tech Report and HardWare.fr (my most common references) the 7970 GE has a ~2W advantage in idle. Not huge but I think the typical user will spend a lot more time in 2D mode than 3D.

As for long idle, I care about power, but for various reasons my computer needs to remain on for long periods of time when I'm not actively using it, so it matters quite a bit to me. I'm sure I'm not the only one.
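
To put that ~2 W idle advantage in perspective, here is a quick sketch of the annual energy it amounts to; the 8 hours/day of idle time is a made-up figure for illustration:

```python
# Annual energy saved by a small idle-power advantage.
idle_advantage_w = 2.0   # ~2 W idle advantage cited above
idle_hours_per_day = 8.0  # assumed daily idle time (illustrative)

kwh_per_year = idle_advantage_w * idle_hours_per_day * 365 / 1000.0
print(f"~{kwh_per_year:.1f} kWh/year")
```

That comes to under 6 kWh a year, so the idle advantage matters more as a tiebreaker than as a real cost saving.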
 
When nVidia's parts were hotter, louder, and using more power, they didn't sell nearly as well against AMD parts. The GTX 6xx parts are selling well because they are quite good parts.
For what it's worth, the standard Nvidia coolers for the most recent generations seemed to be better-received than AMD's.
The noise factor came up as a frequent demerit in the GHz Edition reviews, so much so that AMD's stated position was to wait for cards with non-standard coolers. That's not particularly good marketing optics, and the chilly reception of the review samples was about the only thing cool about AMD's decision.
 
I don't know the technical side of how it works, but it's how Fermi's architecture operated. In benchmarks that were not CPU-bound, GF110 has exactly the same efficiency (perf/watt) as GF114, despite all the extra compute transistors GF110 had. I don't see why Kepler will be any different.

GF110 = 3.0b/530mm2
GF114 = 1.95b/355mm2

GK104 = 3.54b/294mm2
GK110 = 7.1b/550mm2

Notice any difference?
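
The difference shows up clearly as transistor density; a quick sketch using exactly the figures quoted above:

```python
# Implied transistor density (millions of transistors per mm^2),
# from the transistor counts and die sizes quoted above.
chips = {
    "GF110": (3.00e9, 530),  # (transistors, die area in mm^2)
    "GF114": (1.95e9, 355),
    "GK104": (3.54e9, 294),
    "GK110": (7.10e9, 550),
}

for name, (transistors, area_mm2) in chips.items():
    density = transistors / area_mm2 / 1e6  # Mtransistors/mm^2
    print(f"{name}: {density:.2f} M/mm^2")
```

The Kepler parts pack roughly twice as many transistors per mm² as the Fermi ones, which is why the GF110-vs-GF114 comparison doesn't carry over directly.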

If Nvidia can improve the perf/watt metric of GK114 over GK104 by at least 10%, then a 250-260 watt GK110 should be 40-55% faster than the GTX 680.

Probably yes. But until they announce anything themselves it's all speculation.
 