NVIDIA Kepler speculation thread

There'll be plenty of forum goers that (rightly or wrongly) see any 3GB GPU as being unable to remain competitive with the 8GB consoles over the long term.
I fail to see how this is any different from the FUD that was spread about the PS3's Cell processor, or the "DX10+" capabilities of XENOS. There's no limit to human stupidity, and 3GB vs 8GB is the exact same argument as it has always been.

Idiots will spew FUD, and no matter what AMD or NVIDIA put on their little marketing boxes, people who want to believe FUD are going to do it regardless. "Oh those PC fanboiizzz want to tell themselves that 3GB > 8GB, HAHAHAH!"

What good does adding your DDR3 system RAM to your GDDR5 pool of video RAM do? It's not the same, it's not flat access, and it's technically inferior anyway. Do we really need to have that specific discussion? No, because the people who understand that it won't matter in the end are the people who already know better.

This is the easiest way to say it:

The people who buy into the FUD will be those who want to validate their own preconceived bias about what platform they have already decided is best. If you have already decided it's PS4-fo-Lyfe, then you'll latch onto the 8GB > * "fact" and never let go.

It doesn't matter. The end.
 
I think he has a point

Thanks. It is exactly what I meant ;)

They obviously don't want to go into the details of memory management, but something to explain why the "3GB" they are so proudly plastering on the box of their highest-end $500 GPU is not worse than the "8GB" Sony and MS will be advertising for their new consoles (which will sell for a similar price) could be a good idea.

Absolutely. It is a fact that they use large memory sizes for marketing purposes on low-end cards, pushing 2 GB onto models that would never ever make use of it.

And the other important part: I hope no one is misled into thinking that today's cards with 3 GB of memory will be poor performers in 5 years, while a console with 8 GB of memory will be future-proof and capable of running those future titles.

I'm not saying they're right but if 3GB really is enough then that's something the GPU vendors may want to at least make some basic attempt to educate the masses over.

They should explain why it has ONLY 3 GB and not 4 GB or 5 GB, because otherwise many will either blame them or demand more.

Of course, it should not stop at an explanation; future titles should also be improved, so that PCs with this generation's graphics cards and the consoles can compete more fairly in 5 years.
 
I don't think there is really any need for explaining. Seeing is believing. PC versions will continue to look as good or better than their console counterparts, which will speak for itself.

I don't doubt that the console versions may very well be "good enough" for most people though, at least for a while...
 
Personally, I'm still not convinced 3GB will be enough to always retain parity, let alone superiority. Or 4GB for that matter. Until someone like ERP, 3dilittante, Gubbi, sebbbi etc... explains why it would be enough and specifically how they think system RAM will make up for the deficit in the real world (not just what is possible in theory with dedicated developers) then I'm going to remain dubious.
A console developer on a unified memory console splits the memory in a way that suits their game the best. It's definitely possible that some PS4 games could use 4-5 GB of memory for graphics resources. Killzone Shadow Fall presentation already shows around 3 GB of graphics resources, and it's a launch game. We should expect games to use every last bit of available memory when PS4 gets more mature.

PC drivers swap textures in/out of GPU memory. It's a requirement, since multiple applications (and Aero / Modern UI) are sharing the same GPU memory pool. There's however no application side mechanism to tell DirectX to preload ("cache hint") a texture just before it's used. Driver/DirectX must stall every time you use a resource that is not in the GPU memory. Textures tend to be quite large, and the GPU has no knowledge what areas of the textures a shader is going to access, so it must download the texture(s) completely to the GPU memory. This causes stuttering.
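To make the stall concrete, here is a minimal, purely illustrative sketch (my own toy code, not any real driver or the DirectX runtime) of the residency check a driver conceptually performs when a draw call references a texture. Because the application had no way to hint the upload ahead of time, the miss path is a full synchronous copy of the texture before the draw can proceed, and that copy is the hitch you see on screen.

```cpp
#include <cstdint>
#include <cstddef>
#include <unordered_map>

// Hypothetical driver-side residency bookkeeping -- illustrative only.
struct Texture {
    uint64_t    id;
    std::size_t sizeBytes;        // whole mip chain: the driver cannot know which
                                  // parts of the texture a shader will sample
    bool        residentInVram;
};

struct DriverSketch {
    std::unordered_map<uint64_t, Texture> textures;

    // Conceptually called when a draw call binds a texture.
    void ensureResident(Texture& tex) {
        if (tex.residentInVram)
            return;                                   // fast path: no stall

        // No application-side "cache hint" was possible earlier, so the whole
        // texture must be copied over the bus *now*, before the draw runs.
        uploadToVramBlocking(tex.id, tex.sizeBytes);  // synchronous -> visible hitch
        tex.residentInVram = true;
    }

    void uploadToVramBlocking(uint64_t /*id*/, std::size_t /*bytes*/) {
        // Placeholder for the blocking host-to-GPU transfer.
    }
};
```

If the application could flag the texture a few frames before first use, the same transfer could overlap with rendering instead of blocking it, which is exactly the missing "cache hint" described above.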

I wouldn't personally buy a new 2 GB graphics card anymore. I am sure there will be next gen games that require more memory to run smoothly (without stuttering) at maximum detail levels. Titan (or a 6 GB 7970) is actually not a bad bet if you want to be absolutely sure you can play all the next gen games without any hiccups at max details in 2560x1600.

Also if anyone here is still using a 32 bit OS (Windows XP), now is a good time to upgrade it. 32 bit software running on a 32 bit OS can only utilize 2 GB of memory (without hacks that cause other problems). That's not going to be enough for all next gen games. Unlike graphics memory usage (that can be easily scaled down by reducing resolution and detail level), the system memory usage of a game can be very hard to scale down without simplifying the game logic (or creating special smaller levels for low end computers). Some games will certainly require a 64 bit OS to run. I would also recommend a memory upgrade if you have 4 GB or less system memory installed.
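As an aside, the 2 GB per-process ceiling is easy to see for yourself. The toy program below (my own example, not taken from any game) keeps allocating and touching 64 MB blocks until the allocator gives up; built as a 32-bit binary on a 32-bit OS it typically stops somewhere under 2 GB, while a 64-bit build on a 64-bit OS goes far beyond that (until RAM and swap run out, so don't leave it running).

```cpp
#include <cstddef>
#include <cstdio>
#include <cstring>
#include <new>
#include <vector>

int main() {
    const std::size_t blockSize = 64u * 1024u * 1024u;   // 64 MB per allocation
    std::vector<char*> blocks;
    std::size_t totalMB = 0;

    for (;;) {
        char* p = new (std::nothrow) char[blockSize];
        if (!p)
            break;                          // address space (or RAM) exhausted
        std::memset(p, 1, blockSize);       // touch the pages so they are really committed
        blocks.push_back(p);
        totalMB += 64;
    }

    std::printf("Allocation failed after roughly %zu MB\n", totalMB);
    for (char* p : blocks) delete[] p;
    return 0;
}
```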
 
Thanks sebbbi and 3dilittante for your comments. I guess it's probably going to be a while yet before 3GB really shows its limitations (at least 6 months!). But it sounds like the lesson here is that if you don't want to upgrade for several years but still want to play next gen games at or above console settings, don't choose a 3GB GPU today.

I guess from NV's point of view though, most people willing to buy a 780 today will have switched to 880s or 980s by the time 3GB becomes a limitation. Why take away their incentive to do so by giving them enough memory today to skip over those generations?
 
A console developer on a unified memory console splits the memory in a way that suits their game the best. It's definitely possible that some PS4 games could use 4-5 GB of memory for graphics resources. Killzone Shadow Fall presentation already shows around 3 GB of graphics resources, and it's a launch game. We should expect games to use every last bit of available memory when PS4 gets more mature.
But you still have to wonder: how the heck did they manage to consume such a large amount of memory? That's a whole small game, or half a big one!

And that was with no MSAA. For god's sake, that's crazy! I could understand it if they used ultra-high texture resolutions or very large environments, draw distances or whatever, but none of that was present in their demo. Heck, an entire Crysis 3 level is miles bigger and of much higher quality than their demo, and it nowhere consumes even 2.5 GB @ 1080p!
 
How much of Crysis is hidden behind tall buildings etc.? I'm far from a console gamer, but I was very impressed by the Killzone demo when I saw it.
 
But you still have to wonder: how the heck did they manage to consume such a large amount of memory? That's a whole small game, or half a big one!

And that was with no MSAA. For god's sake, that's crazy! I could understand it if they used ultra-high texture resolutions or very large environments, draw distances or whatever, but none of that was present in their demo. Heck, an entire Crysis 3 level is miles bigger and of much higher quality than their demo, and it nowhere consumes even 2.5 GB @ 1080p!

Considering PC games have been extremely limited in what they can do for quite a few years now due to the 2 GB application-accessible memory limit of 32-bit Windows, I wouldn't say it was crazy.

Just look at how texture quality and resolution have been relatively stagnant for many years now. Not to even go into the whole repeating textures problem of a very limited memory pool.

You'll still have some of those limitations at 8 GB (less than that for video, obviously), but it'll take longer to run into them, which allows for better-looking textures, effects, levels, scene complexity, etc.

Regards,
SB
 
PC drivers swap textures in/out of GPU memory. It's a requirement, since multiple applications (and Aero / Modern UI) are sharing the same GPU memory pool. There's however no application side mechanism to tell DirectX to preload ("cache hint") a texture just before it's used. Driver/DirectX must stall every time you use a resource that is not in the GPU memory. Textures tend to be quite large, and the GPU has no knowledge what areas of the textures a shader is going to access, so it must download the texture(s) completely to the GPU memory. This causes stuttering

So, from what I understand, the nature of DirectX combined with small memory pools is the reason for the famous micro-stuttering problem? Very annoying on dual-chip cards, and sometimes observable even on single-chip ones.

And one more thing: it is true that we have 8 GB of system memory plus the local graphics memory, but I have never heard of a game that can consume the whole of system memory. And another question: this memory is quite slow in general, so will it help achieve higher image quality, or do we need large unified memory pools in PCs as in the consoles?
 
So, from what I understand, the nature of DirectX combined with small memory pools is the reason for the famous micro-stuttering problem? Very annoying on dual-chip cards, and sometimes observable even on single-chip ones.
I'm pretty sure he's talking about major stutters/hitches: the large stutters that happen only occasionally when you need to load a new texture into the GPU. If the driver is loading and unloading a texture each frame, there is something very wrong in its LRU algorithm.
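For illustration, a bare-bones LRU residency scheme looks something like the toy sketch below (my own code, not how any particular driver is actually written): textures touched this frame move to the front of the list, and eviction only ever takes victims from the back, so a texture used every frame can never be the one thrown out.

```cpp
#include <cstdint>
#include <cstddef>
#include <list>
#include <unordered_map>

// Toy LRU residency manager for GPU textures -- illustrative only.
class ResidencyLru {
public:
    explicit ResidencyLru(std::size_t budgetBytes) : budget_(budgetBytes) {}

    // Called whenever a draw uses a texture.
    void touch(uint64_t texId, std::size_t sizeBytes) {
        auto it = lookup_.find(texId);
        if (it != lookup_.end()) {
            // Already resident: just move it to the most-recently-used end.
            lru_.splice(lru_.begin(), lru_, it->second);
            return;
        }
        // Not resident: evict cold textures until it fits, then (conceptually) upload it.
        while (used_ + sizeBytes > budget_ && !lru_.empty())
            evictLeastRecentlyUsed();
        lru_.push_front({texId, sizeBytes});
        lookup_[texId] = lru_.begin();
        used_ += sizeBytes;
    }

private:
    struct Entry { uint64_t id; std::size_t size; };

    void evictLeastRecentlyUsed() {
        const Entry& victim = lru_.back();   // coldest texture loses its VRAM
        used_ -= victim.size;
        lookup_.erase(victim.id);
        lru_.pop_back();
    }

    std::list<Entry> lru_;   // front = most recently used
    std::unordered_map<uint64_t, std::list<Entry>::iterator> lookup_;
    std::size_t used_ = 0;
    std::size_t budget_;
};
```

With a policy like this, the occasional big hitch comes from genuinely new textures that have to be uploaded in full, not from hot textures bouncing in and out of memory every frame.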
 
You'll still have some of those limitations at 8 GB

...but you have PRT to deal with it, which also allows you (at the cost of some disk-swapping delay) to keep many more textures in memory than you otherwise could.

I think GPU/CPU memory is artificially divided for a very simple reason; after all, you HAVE GPU-specific memory (textures etc.) and CPU-specific memory. But this split is likely not fixed forever, otherwise you would not have zero-copy operations once you load a texture and want to move it to the GPU.
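For anyone who hasn't run into PRT (partially resident textures) before, the idea is roughly the one sketched below: the texture is divided into fixed-size pages, and only the pages the renderer actually asks for are backed by video memory, with everything else staying on disk until needed. This is a conceptual toy in plain C++ (the page size and data structures are my own simplifications), not the real GPU API for sparse textures.

```cpp
#include <cstdint>
#include <cstddef>
#include <unordered_set>

// Conceptual model of a partially resident texture: a huge virtual texture
// split into fixed-size pages, of which only a small "hot" subset is actually
// committed to video memory at any time. Illustrative only.
struct SparseTextureSketch {
    static constexpr std::size_t kPageBytes = 64 * 1024;  // assumed page size

    uint32_t pagesX = 0, pagesY = 0;          // virtual size, in pages
    std::unordered_set<uint64_t> committed;   // pages currently backed by VRAM

    static uint64_t key(uint32_t px, uint32_t py) {
        return (uint64_t(py) << 32) | px;
    }

    // Called when rendering feedback reports that a page was sampled.
    // Returns true if the page had to be streamed in (disk -> VRAM).
    bool request(uint32_t px, uint32_t py) {
        const uint64_t k = key(px, py);
        if (committed.count(k))
            return false;                     // already resident, no extra cost
        // Commit just this page and fill it from disk; the rest of the
        // (potentially multi-gigabyte) texture stays uncommitted.
        committed.insert(k);
        return true;
    }

    std::size_t residentBytes() const { return committed.size() * kPageBytes; }
};
```

The trade-off mentioned above is exactly the miss path of request(): the first touch of a page costs a trip to disk, but in exchange only the resident pages count against video memory.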
 
Something strange: I don't know if they tried to estimate the performance from the K20 spec, or if they actually had a 780 in testing. Chiphell doesn't even call it GTX 780 in the table...

Because here they have a 320-bit bus + 5 GB memory + 40 ROPs (K20 spec), while WCC still says 384-bit bus and 48 ROPs.

Edit: Yes, this table is dated 27 April; they were trying to estimate the performance of the Titan LE from a K20.

__________________________________________

Fudzilla: GTX 770 spec: http://www.fudzilla.com/home/item/31430-nvidia-gtx-770-detailed-as-well
 
An 18% increase in TDP (230 W) over the 680 for a 40 MHz (4%) core boost and a 1 Gbps (17%) memory speed boost? How much more power does that faster memory use?
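For what it's worth, those percentages line up if the reference point is the GTX 680's published figures (195 W TDP, 1006 MHz base clock, 6 Gbps memory), which is my assumption here:

$$\frac{230\,\text{W}}{195\,\text{W}} \approx 1.18, \qquad \frac{40\,\text{MHz}}{1006\,\text{MHz}} \approx 0.04, \qquad \frac{7\,\text{Gbps} - 6\,\text{Gbps}}{6\,\text{Gbps}} \approx 0.17$$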

The extra TDP headroom could be to allow for some decent overclocking. I'm pretty sure GK110 with only 12SMX units enabled can hit a higher clockspeed than 837MHz...
 
An 18% increase in TDP (230 W) over the 680 for a 40 MHz (4%) core boost and a 1 Gbps (17%) memory speed boost? How much more power does that faster memory use?


It's more because this is basically the minimum boost speed (the guaranteed boost speed), like with the 680; many cards run well above it. I wouldn't be surprised if these cards run at 1100+ MHz most of the time.

(The 680 has a 1058 MHz boost clock, but most cards run between 1084 and 1100 MHz.)

The extra TDP headroom could be to allow for some decent overclocking. I'm pretty sure GK110 with only 12SMX units enabled can hit a higher clockspeed than 837MHz...

He's talking about the 770 specs (so GK104).
 