Nvidia BigK GK110 Kepler Speculation Thread

Awesome to see a demonstration like this, but it's hardly as if it should take DX12 and a Titan Black to run a 60fps XB1 game at 60fps! Now if they were running it at 120fps, that would have been pretty cool. Or perhaps 4K at 60fps.
 
http://www.pcper.com/reviews/Graphics-Cards/NVIDIA-Talks-DX12-DX11-Efficiency-Improvements

[slides from the linked PC Perspective article on NVIDIA's DX11/DX12 efficiency improvements]
 
Extremely interesting, but we'll need independent benchmarks comparing the Mantle path to the DX11 path on NV hardware in supported games before we can come to any conclusions. My guess is NV are exaggerating what they can do with DX11, because if these huge efficiency increases were always available, why haven't we seen them before now?
I got 52 FPS with an i7 940, a GTX 780 and the 335.23 driver in Star Swarm just now (deferred contexts enabled, Extreme preset).
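
For reference, that "deferred contexts" toggle maps to D3D11's multithreaded command-list recording, where the per-call CPU work moves off the submission thread. A minimal sketch of the pattern, assuming a device and immediate context created elsewhere (e.g. via D3D11CreateDevice); the function names are illustrative and error handling is omitted:

#include <d3d11.h>

// Record on a worker thread: nothing reaches the GPU yet.
void RecordOnWorkerThread(ID3D11Device* device, ID3D11CommandList** outCmdList)
{
    ID3D11DeviceContext* deferredCtx = nullptr;
    device->CreateDeferredContext(0, &deferredCtx);

    // Record state changes and draws as usual:
    // deferredCtx->IASetVertexBuffers(...);
    // deferredCtx->DrawIndexed(...);

    // Bake the recorded work into a command list for later playback.
    deferredCtx->FinishCommandList(FALSE, outCmdList);
    deferredCtx->Release();
}

// Play back on the main thread: cheap, since validation happened on the worker.
void SubmitOnMainThread(ID3D11DeviceContext* immediateCtx, ID3D11CommandList* cmdList)
{
    immediateCtx->ExecuteCommandList(cmdList, FALSE);
    cmdList->Release();
}

Whether a driver actually parallelizes this or quietly serializes it internally is exactly what benchmark runs like the one above probe.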
 

I got this feeling too, but look at the actual numbers: the graph makes the increase look incredible, yet in reality we're talking about a 2 fps difference, and with their magic sauce a 7 fps difference. That's not a lot. Allow for "appropriate" settings and a 2 fps run-to-run margin and you end up with maybe 10% more performance (a ~5 fps net gain on a baseline around 50 fps), the sort of gap you can get just by changing resolution or settings.


I will run Star Swarm again with a single 7970 to check where the gains come from, but I find it funny that they don't show Thief's D3D results in the second benchmark.
 

It's unfortunate that, as a developer, you can't work yourself on improving the algorithms you are ultimately responsible for. You have to wait for the driver team to agree that your proprietary hotspot is a general hotspot and to improve on it, assuming you don't have a deadline breathing down your neck. Or you figure out that you might want to rewrite your engine to only use the five fastest and most optimized draw patterns (sketched below), which messes up your code's future and makes you ask yourself what the other 95% of the API is actually useful for. ;)

This is not a long-term solution, but a band-aid - admittedly a very welcome one.
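
To make the "fast draw patterns" point concrete, here is a hedged sketch of the usual rewrite under D3D11 (my own illustration, not anyone's actual engine code): collapsing thousands of per-object draws, each with its own constant-buffer update, into one instanced draw. The context, buffers and shaders are assumed to be set up elsewhere.

#include <d3d11.h>
#include <cstring>
#include <vector>

struct PerInstance { float world[16]; };  // per-object transform

// Slow pattern: one constant-buffer update plus one draw per object.
void DrawNaive(ID3D11DeviceContext* ctx, ID3D11Buffer* perObjectCB,
               const std::vector<PerInstance>& objects, UINT indexCount)
{
    for (const PerInstance& obj : objects) {
        ctx->UpdateSubresource(perObjectCB, 0, nullptr, &obj, 0, 0);
        ctx->DrawIndexed(indexCount, 0, 0);  // thousands of tiny calls
    }
}

// Fast pattern: upload all transforms once, then issue a single instanced draw.
// The vertex shader reads PerInstance data from a second vertex-buffer slot.
void DrawBatched(ID3D11DeviceContext* ctx, ID3D11Buffer* instanceVB,
                 const std::vector<PerInstance>& objects, UINT indexCount)
{
    D3D11_MAPPED_SUBRESOURCE mapped;
    ctx->Map(instanceVB, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
    std::memcpy(mapped.pData, objects.data(), objects.size() * sizeof(PerInstance));
    ctx->Unmap(instanceVB, 0);

    ctx->DrawIndexedInstanced(indexCount, (UINT)objects.size(), 0, 0, 0);
}

The catch is exactly what the post describes: every object now has to fit the same pattern, and the flexibility of the rest of the API goes unused.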
 
I'd be happy to know whether the scaling numbers for specific commands in StarSwarm are generic improvements to those commands, so that other apps using them heavily would see hundreds of percent in gains for those calls.
Hopefully StarSwar.exe gets the same benefit.

Star Swarm is an item of curiosity for me because the starting point for them when discussing Mantle was an extremely unoptimized case. This probably inflated Mantle's benefits versus an engine with at least some of the current best practices for optimizing for the PC.
Similarly, such an outlier in optimization would also leave room for other approaches within the standard API.
The desire to be very free with draw calls, so that they number in the tens of thousands, doesn't mean the calls are all that unique, and analysis of the application could yield the necessary profiling or detection logic for the common types.
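
As an illustration of what such detection logic might exploit (purely a sketch of the general idea, not NVIDIA's actual driver internals): when draw calls are plentiful but not very unique, a driver or an engine-side wrapper can cache the last-bound state and drop redundant binds before they hit the expensive validation path.

#include <d3d11.h>

class FilteringContext {
public:
    explicit FilteringContext(ID3D11DeviceContext* ctx) : ctx_(ctx) {}

    void SetPixelShader(ID3D11PixelShader* ps) {
        if (ps == lastPS_) return;            // same shader again: skip the bind
        lastPS_ = ps;
        ctx_->PSSetShader(ps, nullptr, 0);
    }

    void SetTexture(UINT slot, ID3D11ShaderResourceView* srv) {
        if (slot < kSlots && lastSRV_[slot] == srv) return;  // redundant rebind: skip
        if (slot < kSlots) lastSRV_[slot] = srv;
        ctx_->PSSetShaderResources(slot, 1, &srv);
    }

private:
    static const UINT kSlots = 16;
    ID3D11DeviceContext* ctx_;
    ID3D11PixelShader* lastPS_ = nullptr;
    ID3D11ShaderResourceView* lastSRV_[kSlots] = {};
};

Ten thousand draws that share a handful of shaders and textures then collapse into very little real state-change work, which is one plausible source of driver-side gains within the standard API.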
 
I opened this thread to post the same thing, you got there a minute before me. :)

I'm surprised by the price; I thought it would be more like $1999. Does anyone know the TDP of this GPU?
 