Nvidia BigK GK110 Kepler Speculation Thread

It would be equally silly to say that games are badly coded because there is a GPU bottleneck at 1600p with 4xAA. Bottlenecks are completely normal.

Except that the GPU will be maxed out (assuming it's not a framebuffer problem) while the CPU has two or more idle cores?
 
If your game maxes out six CPU cores / twelve threads, will you then complain that it runs like crap on a dual or quad core, and that it's therefore badly coded?
What we can conclude is that a 4.5 GHz 2500K is better for gaming than a stock 3960X or AMD FX. Well, too bad.

Will the game run like relative shit on a 16-core PC with quad SLI/CrossFire? Sure it will. The blame will be on you for buying useless hardware.
 
Except that the GPU will be maxed out (assuming it's not a framebuffer problem) while the CPU has two or more idle cores?

With a CPU, single-threaded and multithreaded performance are both important; there will always be code that cannot be parallelized. Graphics is multithreaded only, so this is a bad comparison. There is only one use case for GPUs, while there are at least three for CPUs (single-threaded, multithreaded, and something in between).
 
So a 3960X has bad single-threaded performance now?

This is a coding bottleneck, not a CPU bottleneck. The behaviour of BL2 suggests it's running on a single thread, which is pathetic in this day and age.

The other chart is better -

[Image: BL2_01.png]


Your 7970 GHz edition reduced to a 560 Ti at 1680x1050.
 
No, you're putting words into my mouth. If it's bottlenecked by single-threaded performance, then that's the way it is. I didn't say it was bad or good or anything else.

And no: because single-threaded (or lightly threaded) performance is a valid CPU performance use case, it IS a CPU bottleneck. And no, BL2's behaviour doesn't suggest that at all. You cannot tell from those results. Jeez.
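
If you actually want to know, watch per-core load while the game runs; that tells you immediately what FPS-vs-CPU charts can't. A minimal sketch of the idea (Linux-only, since it parses /proc/stat; on Windows the equivalent performance counters would do the same job):

Code:
#include <cctype>
#include <chrono>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <thread>
#include <vector>

struct CoreTimes { unsigned long long busy = 0, total = 0; };

// One snapshot of the per-core counters ("cpu0 ...", "cpu1 ...") in /proc/stat.
static std::vector<CoreTimes> sample() {
    std::vector<CoreTimes> cores;
    std::ifstream stat("/proc/stat");
    std::string line;
    while (std::getline(stat, line)) {
        // Skip the aggregate "cpu" line and everything that isn't a core line.
        if (line.size() < 4 || line.compare(0, 3, "cpu") != 0 ||
            !std::isdigit((unsigned char)line[3]))
            continue;
        std::istringstream ss(line);
        std::string label;
        unsigned long long v, idle = 0, total = 0;
        ss >> label;
        for (int field = 0; ss >> v; ++field) {
            total += v;
            if (field == 3 || field == 4) idle += v;  // idle + iowait fields
        }
        cores.push_back({total - idle, total});
    }
    return cores;
}

int main() {
    auto a = sample();
    std::this_thread::sleep_for(std::chrono::seconds(1));
    auto b = sample();  // second snapshot one second later
    for (std::size_t i = 0; i < a.size() && i < b.size(); ++i) {
        double dt = double(b[i].total - a[i].total);
        double pct = dt > 0 ? 100.0 * double(b[i].busy - a[i].busy) / dt : 0.0;
        std::cout << "core " << i << ": " << pct << "% busy\n";
    }
    // One core pegged near 100% while the rest idle => main-thread bound.
}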
 
I've seen enough games (especially MMOs) that perform disgustingly on high-end CPUs because of terrible single-threaded game engines. There's simply no excuse for it today. Guild Wars 2 is a great example of how a game can be killed by a lack of decent multithreading.
 
I've seen enough games (especially MMOs) that perform disgustingly on high-end CPUs because of terrible single-threaded game engines. There's simply no excuse for it today. Guild Wars 2 is a great example of how a game can be killed by a lack of decent multithreading.

So go do it better with the same allotted time, money, and expertise. I find it quite arrogant to complain about this when we don't know whether there are valid reasons for it, and what they are.

For example, I remember an exchange with a developer from Creative Assembly, the company making Rome 2. He said they would like to make the game more multithreaded, but that it is difficult if not impossible: the overhead of thread management, and of keeping the game state consistent across multiple threads, would effectively destroy all the gains from going multithreaded in the first place.

But you know better, right?
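
For what it's worth, the problem he's describing is easy to reproduce in miniature. The toy C++ sketch below (World and Entity are made-up types, purely for illustration) "parallelizes" an update loop, but because every entity tick needs a consistent view of shared game state, each one grabs a global lock, and the whole thing serializes again:

Code:
#include <mutex>
#include <thread>
#include <vector>

// Hypothetical stand-ins for a game's shared simulation state.
struct Entity { double x = 0, v = 1; };
struct World {
    std::vector<Entity> entities{10000};
    std::mutex lock;  // one lock guarding all game state
};

// Naive "multithreaded" update: each entity tick must see a consistent
// world (pathfinding, combat queries, ...), so it takes the global lock.
void update_range(World& w, std::size_t begin, std::size_t end, double dt) {
    for (std::size_t i = begin; i < end; ++i) {
        std::lock_guard<std::mutex> g(w.lock);    // serializes everything
        w.entities[i].x += w.entities[i].v * dt;  // imagine cross-entity reads here
    }
}

int main() {
    World w;
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 4;
    std::size_t chunk = w.entities.size() / n;
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < n; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t + 1 == n) ? w.entities.size() : begin + chunk;
        pool.emplace_back(update_range, std::ref(w), begin, end, 1.0 / 60.0);
    }
    for (auto& th : pool) th.join();
    // n threads, ~single-threaded throughput: the lock is the critical path.
    // Removing it means restructuring the data (partitioning, double-buffering),
    // which is exactly the hard, expensive part the developer was pointing at.
}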
 
It's very unlikely that Borderlands 2 maxes out the entire GPU given it's DX9.

You may well be right about Borderlands 2 specifically, but not for this reason. Sure, some parts of the GPU can't be used (tessellation units, to name just one kind), but the notion of "maxing out" mostly makes sense in terms of power these days.

Were all units of a GPU maxed out at the same time, it would far exceed its power budget, and throttle. After all, Furmark is only based on OpenGL 2.0, and sure as hell maxes out any modern GPU I know of. So it's perfectly possible, at least theoretically, for BL2 to take a GPU to its power limits, and in this sense (the only sense that matters, in my opinion) max it out.
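
If anyone wants to test that empirically rather than theorize, NVML reports the board's current draw against its enforced limit (assuming a recent enough driver exposes both). A rough sketch, with error handling mostly omitted:

Code:
#include <nvml.h>
#include <chrono>
#include <cstdio>
#include <thread>

// Sample board power once a second while a game runs. Sustained draw at
// the enforced limit means the card is, in the power sense, maxed out.
// Build (Linux): g++ power.cpp -lnvidia-ml
int main() {
    if (nvmlInit() != NVML_SUCCESS) return 1;
    nvmlDevice_t dev;
    if (nvmlDeviceGetHandleByIndex(0, &dev) != NVML_SUCCESS) return 1;

    unsigned int limit_mw = 0;
    nvmlDeviceGetEnforcedPowerLimit(dev, &limit_mw);
    for (int i = 0; i < 60; ++i) {
        unsigned int draw_mw = 0;
        if (nvmlDeviceGetPowerUsage(dev, &draw_mw) == NVML_SUCCESS && limit_mw)
            std::printf("%u / %u mW (%.0f%% of limit)\n", draw_mw, limit_mw,
                        100.0 * draw_mw / limit_mw);
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
    nvmlShutdown();
}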
 
[Image: rjMuqbi.jpg]

[Image: 1gJoIE4.jpg]


Core Count – 2688 CUDA cores
Memory – 6 GB GDDR5
Interface – 384-bit
Core Clock – 837 MHz
Boost Clock – 876 MHz
Power Interface – 6+8 Pin

40-45% faster than a single 680.
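
That figure at least lines up with a back-of-the-envelope FLOPS comparison (assuming the usual 2 FLOPs per CUDA core per clock, and the 680's ~1058 MHz boost clock):

2688 cores × 2 × 0.876 GHz ≈ 4.71 TFLOPS (this card)
1536 cores × 2 × 1.058 GHz ≈ 3.25 TFLOPS (GTX 680)
4.71 / 3.25 ≈ 1.45

So roughly +45% on paper, before memory bandwidth or anything else enters the picture.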

If those performance numbers hold true, I don't understand how Nvidia could possibly justify that 900-1000€ price tag on this thing.
 
How many do they need to sell to justify its existence? I doubt they want to impact 680 prices, and those are still $500. I recall them charging more for a less impressive jump in performance in the past.
 
In any case, such arrogant prices are themselves one of the reasons the market is shrinking. And it's a death spiral: the lower sales go, the more expensive everything becomes... until at some point both AMD and NV are out of the market.
 
45% more than a GTX 680, if true, would mean an MSRP of about $750... for $250 of materials and VRAM? :LOL:

There's no competition on single-GPU cards, and if it's really limited in quantity, that will also push the price up.

And I'm sure Nvidia's slides will look far better than +40%; we could see the return of compute-based benchmarks there, like specific CUDA benchmarks to show off its power.

But I agree: if those 40-45% hold true, the price tag will be really out of reason.
 
If they priced it "reasonably" they'd be unable to meet the demand, so it would always be sold out (boring, makes customers and retailers nervous) or the chip's production would have to be very significantly increased.

At 700 euros, demand could be very high (hundreds of thousands sold easily). Plus, this card is clocked quite a bit faster than the K20X, making it currently the absolute fastest GK110 variant (ignoring FP64 and maybe some disabled features).

I'd wager TSMC already sells all the 28nm capacity it can make.
 
~513mm² die area.

By the way, it's rather surprising to see a big high-end GPU from NV without a protective cap this time. Looking at the board itself and the bare die, it looks like this GeForce SKU was a quick convert of a Tesla model for the consumer market. I wonder if some HW or BIOS mods could be done here. ;)
 