KILLZONE Shadow Fall [PS4]

Speaking of bottlenecks.

Is this chart from Nvidia's 2003 GPU Gems still valid?
[Attached: GPU Gems bottleneck flowchart, e.g. "Vary texture size → FPS varies → texture bandwidth limited"]
 
Are you serious? How could more memory help you get more computational performance?
My laptop ran a lot more smoothly when I tripled the amount of memory in it. If you've got more memory, you don't have to access slower media as often, and you can keep things around longer. It indirectly affects your bandwidth usage, depending on what you're doing.
 
Has everyone gone completely crazy here?

Moving from 4GB to 8GB cannot make the KZ game run at 60 fps instead of 30 fps.

There are cases where more memory helps, but this is far beyond what's possible.


I can't believe this has to be debated on B3D.
 
My laptop ran a lot more smoothly when I tripled the amount of memory in it. If you've got more memory, you don't have to access slower media as often, and you can keep things around longer. It indirectly affects your bandwidth usage, depending on what you're doing.

A larger memory pool does help avert paging to slower media, although you may have experienced other issues as well.

Did adding RAM (extra modules) increase the bandwidth (single to dual channel)?

Did adding RAM replace poor-quality memory with modules that have better CAS latency or a higher frequency (more bandwidth)?

Did adding RAM address a paging issue (e.g. your OS had a large page file cache on the HDD and the extra RAM resulted in more resources stored in RAM)?

Without knowing more about your system it is hard to know what was at issue.


@ More generally, regarding the PS4 memory issues others were discussing: just adding memory (but not bandwidth) isn't going to help performance outside of cases of memory pressure. Yes, if you are paging to the HDD, performance will suffer, but with 8GB of RAM that should be a rare corner case. The extra memory PS4 developers have received should help games look better (if there is the computational performance to process the nicer assets), reduce load times, and allow larger, more varied worlds (again, if the GPU can keep up), but it won't make the GPU faster. Someone posted an old flow chart, but the idea pretty much stands:

* If the bottleneck is the GPU (shading/texturing/fill/vertex) then adding more memory won't help at all.

* If the bottleneck is memory (VRAM) bandwidth then adding more memory won't help.

* If the bottleneck is memory footprint, adding more memory will help.

There are situations where more RAM can improve performance, but there are a *lot* of GPUs with extra memory and no additional performance. This was an especially common tactic in mid-to-low-range discrete GPUs, where adding more memory was a great sales point that didn't cost much but could increase the perceived value, and thus the price point.

Assuming a 1GB vs. 2GB scenario, there are situations where the 1GB board may need to pull data from system memory, which slows it down. But typically the 2GB board would underwhelm in those scenarios as well (e.g. the extra memory can store more textures, but if you don't have the texturing performance to enable HQ textures it doesn't help much). Outside of that scenario, the 1GB and 2GB boards perform the same at the same quality levels, so the extra memory helps in a corner case which usually isn't a relevant one.

This isn't so much true on a console with a UMA. Assuming the same bandwidth, 2GB, 4GB, or 8GB consoles should perform the same, as long as the demo/game isn't caching from the HDD or optical drive (or doing funky decompression or such to work around the memory limit) and the assets were all tailored to the lowest common denominator (2GB).
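For what it's worth, the logic in that old chart (and in the bullets above) boils down to a simple decision tree. Here's a rough Python sketch of it; the experiment names, FPS numbers, and 5% threshold are all made up for illustration, not taken from any real profiling tool:

```python
# Rough sketch of the GPU Gems-style bottleneck isolation logic described
# above. Experiment names and the tolerance are illustrative assumptions.

def fps_changed(baseline_fps, test_fps, tolerance=0.05):
    """Treat a >5% swing in frame rate as 'this knob mattered'."""
    return abs(test_fps - baseline_fps) / baseline_fps > tolerance

def classify_bottleneck(baseline_fps, fps_with_smaller_textures,
                        fps_at_lower_resolution, fps_with_simpler_shaders):
    # Vary texture size: if FPS moves, texture/memory bandwidth is the limit.
    if fps_changed(baseline_fps, fps_with_smaller_textures):
        return "texture / memory bandwidth limited"
    # Vary render resolution: if FPS moves, fill rate or pixel shading is the limit.
    if fps_changed(baseline_fps, fps_at_lower_resolution):
        return "fill rate / pixel shader limited"
    # Vary shader cost: if FPS moves, the GPU's ALUs are the limit.
    if fps_changed(baseline_fps, fps_with_simpler_shaders):
        return "shader (compute) limited"
    # Nothing on the GPU side moved the needle: CPU, vertex work, or footprint.
    return "CPU / vertex / memory-footprint limited (more RAM may help here)"

# Example: smaller textures don't help, lower resolution does -> fill-rate bound.
print(classify_bottleneck(30.0, 30.5, 45.0, 31.0))
```

Note that extra memory only shows up in the last branch; none of the GPU-side cases get faster just because more RAM is present.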
 
Has everyone gone completely crazy here?

Moving from 4GB to 8GB cannot make the KZ game run at 60 fps instead of 30 fps.

There are cases where more memory helps, but this is far beyond what's possible.


I can't believe this has to be debated on B3D.

Liar.

You absolutely can believe it would be debated here. ;)

On a more positive note: *if* the PS4 dev kit had 1.5GB of VRAM, and *if* the lack of VRAM was the primary cause of the visual shortcomings (not enough memory for the desired geometry, texture resolution/variety, etc.), and *if* the GPU was running fairly idle because the bottleneck was available assets rather than the compute power to process them, then yes, KZ4 may look a LOT better at release.

But as Laa-Yosh said, it is NOT going to impact the target framerate (60Hz vs 30Hz). It may get rid of some of the stutter if that was caused by caching; on the other hand, maybe the GPU is already computationally limited and the extra memory won't make the game look much better (it will just allow better caching and streaming).
 
Again, to illustrate the case...

Let's say you have a PC, a video card with 1GB memory, and the framebuffer takes up about 200MB.

Now if the game wants to use 900MB of texture data for rendering each frame, you'll run into a serious problem. There isn't enough room for 100MB of that data, so the system won't be able to texture entirely from VRAM; it will have to access main system memory during the rendering of every frame. The problem is, of course, that system memory has greater latency, having to serve the CPU and other units as well, and the bandwidth to system RAM is much more limited than to the local VRAM, too.

And in practice it might be even worse than it sounds, because it won't just throw out 100MB of textures to make room for the rest; it'll keep unloading and reloading even more data. This is called thrashing.

In this one unique case, adding more VRAM to the system would significantly improve overall performance as all the texture access could be served from the GPU's local memory.

But, for various reasons, this situation cannot arise on a console, especially with a unified memory architecture. There shouldn't be any situations where slower memory has to be accessed, and the developer would have to be pretty damn stupid to try to access more data per frame than can fit into RAM.
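To put rough numbers on that 1GB-card example (the PCIe bandwidth figure below is my own illustrative assumption, not a measured value):

```python
# Back-of-the-envelope version of the 1GB-card example above. The PCIe
# bandwidth figure is an illustrative assumption, not a measured number.

vram_mb        = 1000.0   # 1GB card (treated loosely as 1000MB, as above)
framebuffer_mb = 200.0
textures_mb    = 900.0

overflow_mb = max(0.0, framebuffer_mb + textures_mb - vram_mb)
print(f"Texture data that does not fit in VRAM: {overflow_mb:.0f} MB")

# If that overflow has to come over the PCIe bus every frame (assume ~8 GB/s
# effective), just fetching it once costs:
pcie_gb_per_s = 8.0
fetch_ms = overflow_mb / 1000.0 / pcie_gb_per_s * 1000.0
print(f"Cost to stream the overflow each frame: ~{fetch_ms:.1f} ms")

# A 30fps frame budget is only ~33 ms, and with thrashing the card keeps
# evicting and re-fetching far more than the raw 100MB, so the real penalty
# is usually much worse than this lower bound.
```

Even this optimistic lower bound eats roughly a third of a 33ms frame, which is why the extra-VRAM case is the one place where more memory genuinely buys performance.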
 
Has everyone gone completely crazy here?

Moving from 4GB to 8GB can not make the KZ game run at 60 fps instead of 30 fps.

There are cases where more memory helps, but this is far beyond what's possible.


I can't believe this has to be debated on B3D.

Not to mention that Killzone Shadow Fall is a mere launch title, developed on console hardware that carries the disclaimer "Hardware specifications subject to change".

If GG can get full 1080p 30fps out of Killzone SF then there is no telling what they will be capable of two years from now.
 
correction:
"framerate is also affected when memory intensive elements take a huge chunk of the available memory bandwidth."

Using smaller textures uses less bandwidth; that is where your performance savings come in. It has nothing to do with the amount of RAM being used.
And this only applies when you are bandwidth limited. If anything, using more memory will typically mean reduced performance, as it implies you are reading more data, missing caches more often, etc. (and therefore taking a bigger bandwidth hit).
At the end of the day, KZ:SF has roughly 4-5GB of bandwidth per frame at 30fps. Use more, and the frame rate goes down. Remember that isn't ~4-5GB of unique memory either, as typically most data will be read and written multiple times per frame.
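Just to spell out the arithmetic behind that ~4-5GB/frame figure (the 176GB/s GDDR5 peak is from the published PS4 specs; the 70-85% achievable-efficiency range is my assumption, not a measured number):

```python
# Per-frame bandwidth budget, assuming the PS4's 176 GB/s GDDR5 peak and
# that real workloads reach roughly 70-85% of theoretical peak (an assumption).

peak_gb_per_s = 176.0

for fps in (30, 60):
    theoretical = peak_gb_per_s / fps
    low, high = theoretical * 0.70, theoretical * 0.85
    print(f"{fps} fps: {theoretical:.1f} GB/frame theoretical, "
          f"~{low:.1f}-{high:.1f} GB/frame realistically")

# 30 fps: ~5.9 GB/frame theoretical, ~4.1-5.0 GB/frame realistic -> the
# "~4-5 GB/frame" figure above. At 60 fps the budget halves to ~2-2.5 GB,
# and most of that is the same data read and written several times per frame.
```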

As for Crysis on PS3: if you hit the memory wall (as in, you don't have any left), there really isn't much you can do. If you have exhausted algorithmic memory reduction (e.g. streaming more aggressively, at the cost of pop-in), then you simply have to cut stuff out.
So the only way Killzone may see a framerate improvement, then, is under the assumption that it hit the memory wall on the kits available (1.5GB or 2GB in earlier kits?) or that the bandwidth was more limited. Is either of those two scenarios possible, taking into account the likely specs of the previous dev kits?
edit: If I got it right from what you said, the total bandwidth consumed per second by Killzone SF is around 120GB to 150GB, and that could be even higher due to multiple reads+writes, so that leaves extremely little room for any framerate improvement considering the GDDR5 memory's bandwidth is 176GB/s.
I apologize for asking since I don't have the knowledge you guys have and I am interested to learn. Some things may be self-evident for you but not for all of us :)
 
Guerrilla would have to be a bunch of bumbling idiots to have an engine and content optimized so badly that it suffers a 50% performance hit because of it. Do you guys believe this to be the case?
 
Guerrilla has already confirmed 1080p@30fps, and has given no indication this will go up even after they knew about the 8GB. So if anything we may see higher-res textures here and there and perhaps lingering smoke ;) but being a launch game I don't expect much more than that. It may not even support 3D, who knows, but if it does, probably at 720p.
 
So the only way Killzone may see a framerate improvement, then, is under the assumption that it hit the memory wall on the kits available (1.5GB or 2GB in earlier kits?) or that the bandwidth was more limited. Is either of those two scenarios possible, taking into account the likely specs of the previous dev kits?
edit: If I got it right from what you said, the total bandwidth consumed per second by Killzone SF is around 120GB to 150GB, and that could be even higher due to multiple reads+writes, so that leaves extremely little room for any framerate improvement considering the GDDR5 memory's bandwidth is 176GB/s.
I apologize for asking since I don't have the knowledge you guys have and I am interested to learn. Some things may be self-evident for you but not for all of us :)

Just to be clear, I was guessing the per-frame bandwidth figures based on the 176GB/s number, and also assuming you don't hit theoretical max (which you typically don't). They weren't details of the actual bandwidth the game demands; the point was to illustrate that, performance-wise, faster RAM is generally more important than having more of it, though there is obviously a balance between the two.

I should have said "KZ will have" instead of "KZ has". Apologies.
 
Not to mention ALL the other systems in the game that have to be fine-tuned for the target frame rate; you can't just adjust one parameter and expect everything else, from AI through physics to fill-rate requirements, to simply scale up. Especially if the engine was already optimized to some extent.
 
There's certainly enough bandwidth, memory, and fill rate for 1080p 30fps while still doing more than KZ2/3 did. And the engine isn't doing anything shockingly complex so far: large draw distances and lots of simple, single-textured geometry for the cityscape, but no sign of radically new rendering tech, and a few noticeable issues too (no character shadows in that airborne shot, for example).
 
Shouldn't they attempt other, more interesting ideas in destruction, AI, user content, and co-op/multiplayer? We always get a visually gimped MP game this gen.
 