PlayStation 4 (codename Orbis) technical hardware investigation (news and rumours)

http://www.examiner.com/article/geo...ch-ram-you-can-actually-touch?cid=db_articles



Makes sense as discussed here before.
I don't particularly mind if the PS4 only makes 5~6 GB of RAM available to games.

176 GB/s / 30 fps = 5.87 GB theoretical maximum data moved per frame, and we know it's hard to even come close to that number.

Bandwidth will still be the real limiter.
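
As a back-of-the-envelope check of that figure, here's a minimal Python sketch using the numbers above; it assumes the full 176 GB/s is available to the GPU, which it never is in practice:

Code:
# Most data the GPU could theoretically touch per frame, assuming the full
# 176 GB/s peak bandwidth were available to it (it isn't in practice).
peak_bandwidth_gb_s = 176.0   # PS4 GDDR5 peak bandwidth

for fps in (30, 60):
    max_gb_per_frame = peak_bandwidth_gb_s / fps
    print(f"{fps} fps: at most ~{max_gb_per_frame:.2f} GB touched per frame")

# 30 fps: at most ~5.87 GB touched per frame
# 60 fps: at most ~2.93 GB touched per frame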
So an ideal game will use the full 5~6 GB for a single frame, but when you turn around in a split second it will display the same 5~6 GB? Quite weird logic.

The hard part is actually finding the data you want to display, which means you regularly have several times more data that you just skip over with hierarchical searches.
 
There's a lot more in memory than just graphics, and not every texture in memory is displayed every frame anyway.

The fact that top-end GFX cards like the Titan have 6 GB with 288 GB/s has me pretty confident that the PS4 with 6 GB at 176 GB/s will do fine. That's before accounting for the chunk of bandwidth the CPU will use. If the PS4 is limited by something, at this point it's probably not going to be the bandwidth or the size of RAM, both of which far outperform top-of-the-line graphics cards "ratio-wise".

So 5.5-6GB is available for games?

That's the current story it seems.
 
The fact that top-end GFX cards like the Titan have 6 GB with 288 GB/s has me pretty confident that the PS4 with 6 GB at 176 GB/s will do fine. That's before accounting for the chunk of bandwidth the CPU will use. If the PS4 is limited by something, at this point it's probably not going to be the bandwidth or the size of RAM, both of which far outperform top-of-the-line graphics cards "ratio-wise".

Unless I'm missing something, your example shows the exact opposite of what you're claiming. The Titan clearly has far more bandwidth per GB than the PS4. PC cards with similar bandwidth and less RAM have even more still.

By comparison, the PS4 has a terrible bandwidth-to-memory-size ratio.
 
So an ideal game will use the full 5~6 GB for a single frame, but when you turn around in a split second it will display the same 5~6 GB? Quite weird logic.

The hard part is actually finding the data you want to display, which means you regularly have several times more data that you just skip over with hierarchical searches.


Not sure if this illustrates my point, but we do have the Killzone PDF, which comes in quite handy: it actually tells us how much time is spent on which types of jobs.
Full PDF here if you're interested.

(Image: WTGYsGx.jpg, Killzone: Shadow Fall frame-timing breakdown)


The blue portion is obviously the rendering part of the frame being processed, and it takes up ~1/3 of the 33 ms that is available for a 30 FPS game.

If I am extremely generous and give them 1/2 of the frame time for rendering, that's ~16.5 ms at 176 GB/s, which leaves ~3 GB of theoretical maximum memory traffic available for textures, maps, that sort of stuff, and you're still left with 2 GB or so acting purely as cache.

So I don't see how the GPU and the CPU could even come close to touching the full ~5.5 GB every frame, even if they tried.
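
Here's a minimal sketch of that estimate, assuming ~50% of a 33 ms frame goes to rendering and the full peak bandwidth during that window (both generous assumptions):

Code:
# Rough upper bound on the memory the renderer could even touch in one frame,
# given that only part of the 33 ms frame is actually spent rendering.
peak_bandwidth_gb_s = 176.0
frame_time_s = 1.0 / 30            # ~33 ms at 30 fps
render_share = 0.5                 # generous assumption; the profile suggests ~1/3

max_touched_gb = peak_bandwidth_gb_s * frame_time_s * render_share
print(f"At most ~{max_touched_gb:.1f} GB can be read during the rendering window")
# ~2.9 GB, leaving a couple of GB of the game allocation untouched that frame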


And yes, we do know how much Killzone SF uses for Video/System/Shared memory and what they use it for.

(Image: killzone-video-RAM-use.jpg, Killzone: Shadow Fall memory budget breakdown)
 
The fact that top-end GFX cards like the Titan have 6 GB with 288 GB/s has me pretty confident that the PS4 with 6 GB at 176 GB/s will do fine.
I think you're misunderstanding my point.
I'm not saying it won't be fine.

What I'm saying is that it doesn't make sense to measure bandwidth, take the maximum that can be accessed in one frame, and declare that to be the maximum amount of memory a system should need.
If there were some general equation that accounted for all the other assets, textures that are and aren't currently in use, audio, game data, game code, etc., and which happened to map onto bandwidth and FPS, then fair enough, but that's not what is being said.
 
Unless I'm missing something, your example shows the exact opposite of what you're claiming. The Titan clearly has far more bandwidth per GB than the PS4. PC cards with similar bandwidth and less RAM have even more still.

By comparison, the PS4 has a terrible bandwidth-to-memory-size ratio.

Sorry if I wasn't clear. The Titan clearly outperforms the ~7850-class GPU in the PS4 by a very wide margin, so the memory size and bandwidth need to be put in that context.

If we go by FLOPS, the Titan has 4.7 TFLOPS to process 6 GB @ 288 GB/s.
In comparison, the PS4 has 1.84 TFLOPS to process 6 GB @ 176 GB/s.
Scaling the Titan down to PS4 levels (the Titan has ~2.5 times the FLOPS) gives 2.4 GB @ 115.2 GB/s.

Clearly PS4 is not memory size constrained or bandwidth constrained.
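
A quick sketch of that scaling, using the rounded ~2.5x factor from the post above (the exact FLOPS ratio is ~2.55):

Code:
# Compare Titan and PS4 "ratio-wise": scale the Titan's memory and bandwidth
# down by the compute ratio used in the post (~2.5x, i.e. 4.7 / 1.84 TFLOPS).
titan_ram_gb, titan_bw = 6.0, 288.0
scale = 2.5   # rounded as in the post; exact ratio is ~2.55

scaled_ram = titan_ram_gb / scale
scaled_bw = titan_bw / scale
print(f"Titan scaled to PS4 compute: {scaled_ram:.1f} GB @ {scaled_bw:.1f} GB/s")
# -> 2.4 GB @ 115.2 GB/s, versus the PS4's 6 GB @ 176 GB/s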
 
Sorry if I wasn't clear. The Titan clearly outperforms the ~7850-class GPU in the PS4 by a very wide margin, so the memory size and bandwidth need to be put in that context.

If we go by FLOPS, the Titan has 4.7 TFLOPS to process 6 GB @ 288 GB/s.
In comparison, the PS4 has 1.84 TFLOPS to process 6 GB @ 176 GB/s.
Scaling the Titan down to PS4 levels (the Titan has ~2.5 times the FLOPS) gives 2.4 GB @ 115.2 GB/s.

Clearly PS4 is not memory size constrained or bandwidth constrained.

Ah, fair enough, yes I agree; the PS4 is pretty compute-light compared to the rest of its GPU specs.
 
The Titan can use all of its 6 GB and 288 GB/s for rendering/GPGPU. The PS4 shares RAM and bandwidth with the CPU and is also locked out of several GB by the OS.
 
Not sure if this illustrates my point, but we do have the Killzone PDF, which comes in quite handy: it actually tells us how much time is spent on which types of jobs.
Full PDF here if you're interested.

http://i.imgur.com/WTGYsGx.jpg

The blue portion is obviously the rendering part of the frame being processed, and it takes up ~1/3 of the 33 ms that is available for a 30 FPS game.
I don't think that shows what you think it shows. The time needed for the draw calls doesn't equate to the time the GPU is busy with the rendering (or do you think the GPU idles ~65% of the time in Killzone, i.e. uses only ~35% of its capacity?). There is a lot of queueing going on.
And as others have said already, one always needs to have a few more assets in memory (because one might need them) than one needs for rendering a single frame.
 
The Titan can use all of its 6 GB and 288 GB/s for rendering/GPGPU. The PS4 shares RAM and bandwidth with the CPU and is also locked out of several GB by the OS.

The 6 GB already excludes the portion locked out by the OS, and the CPU takes a maximum of 20 GB/s out of the 176 GB/s, leaving 156 GB/s for the GPU. That still leaves the bandwidth-to-memory-size ratio looking good compared to the Titan once you scale for compute, as above.
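
A minimal sketch of that adjustment, assuming the worst-case 20 GB/s CPU share quoted above and normalizing per TFLOP as in the earlier Titan comparison:

Code:
# Effective GPU bandwidth after a worst-case 20 GB/s CPU share, then normalized
# per TFLOP as in the earlier comparison (all figures from the posts above).
ps4_bw, ps4_tflops = 176.0 - 20.0, 1.84     # ~156 GB/s left for the GPU
titan_bw, titan_tflops = 288.0, 4.7

print(f"PS4:   {ps4_bw:.0f} GB/s -> {ps4_bw / ps4_tflops:.0f} GB/s per TFLOP")
print(f"Titan: {titan_bw:.0f} GB/s -> {titan_bw / titan_tflops:.0f} GB/s per TFLOP")
# PS4:   156 GB/s -> 85 GB/s per TFLOP
# Titan: 288 GB/s -> 61 GB/s per TFLOP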
 
I don't think that shows what you think it shows. The time needed for the draw calls doesn't equate to the time the GPU is busy with the rendering (or do you think the GPU idles ~65% of the time in Killzone, i.e. uses only ~35% of its capacity?). There is a lot of queueing going on.
And as others have said already, one always needs to have a few more assets in memory (because one might need them) than one needs for rendering a single frame.


I'm not saying you don't need a few extra assets in memory, but considering you're not touching much more than, let's say, 3 GB per frame, having 5 GB or more already allows for a lot of cache. There is a point where having more RAM gives you, well, no performance boost whatsoever.

And I forgot that we do actually have the GPU profile as well, so I was getting the wrong impression from the CPU timings alone.

(Image: zYXDKgh.jpg, Killzone: Shadow Fall GPU profile)


My point is that these timings don't map directly to the amount of RAM the assets occupy, so you won't ever be able to get full utilization.
 
So we analyze a game targeting a 4-6 GB system and draw the conclusion that any more RAM would be wasted or unnecessary? Interesting.

Whatever amount of RAM is present for developers to use, that's how much the game will take, regardless of how much can be accessed per frame. And the allocation of that memory to each system will vary greatly from title to title depending on design goals and performance needs. Pre-computed data structures and databases are extremely useful for accelerating a great number of things, as are megatextures and 3D textures, which also wouldn't be entirely accessed per frame.
 
So we analyze a game targeting a 4-6 GB system and draw the conclusion that any more RAM would be wasted or unnecessary? Interesting.

Whatever amount of RAM is present for developers to use, that's how much the game will take, regardless of how much can be accessed per frame. And the allocation of that memory to each system will vary greatly from title to title depending on design goals and performance needs. Pre-computed data structures and databases are extremely useful for accelerating a great number of things, as are megatextures and 3D textures, which also wouldn't be entirely accessed per frame.

No, that isn't what people are saying.

At any given moment, what is being rendered directly to screen will be using data that takes up significantly less than 5 or 6 GB. But you want to have 5 or 6 GB of data in memory at any given time, as it is far faster to access data from memory than from the HDD. The last thing you want to do (unless you are streaming in data) is to have to load something from the HDD when it is needed for rendering the next frame, or the 30th frame from now, or the 300th frame from now.

And considering how slow 2.5" HDDs are relative to the huge advances in rendering power (compared to the last generation of consoles), you want to stream as little as possible if you want to increase fidelity and variety as much as possible. 5-6 GB allows you to keep a large amount of level-specific or area-specific (if streaming) data in fast memory rather than having to retrieve it from the HDD, which is one of the things that will allow for significantly better visuals than the current generation of consoles.

So...

Yes, developers will likely use as much memory as they can.

No, developers won't be rendering 5-6 GB of stored data to the screen every frame, not even close to it.

Regards,
SB
 
No, that isn't what people are saying.

At any given moment, what is being rendered directly to screen will be using data that takes up significantly less than 5 or 6 GB. But you want to have 5 or 6 GB of data in memory at any given time, as it is far faster to access data from memory than from the HDD. The last thing you want to do (unless you are streaming in data) is to have to load something from the HDD when it is needed for rendering the next frame, or the 30th frame from now, or the 300th frame from now.

And considering how slow 2.5" HDDs are relative to the huge advances in rendering power (compared to the last generation of consoles), you want to stream as little as possible if you want to increase fidelity and variety as much as possible. 5-6 GB allows you to keep a large amount of level-specific or area-specific (if streaming) data in fast memory rather than having to retrieve it from the HDD, which is one of the things that will allow for significantly better visuals than the current generation of consoles.

So...

Yes, developers will likely use as much memory as they can.

No, developers won't be rendering 5-6 GB of stored data to the screen every frame, not even close to it.

Regards,
SB
I'm not sure why you said no to Rockster as it sounds like you're saying the same thing. It seems to me some people are still saying more memory isn't very useful since you can't access all memory each frame, but as you and others have said that line of thinking is incorrect.
 
I'm not sure why you said no to Rockster as it sounds like you're saying the same thing. It seems to me some people are still saying more memory isn't very useful since you can't access all memory each frame, but as you and others have said that line of thinking is incorrect.
The usefulness decreases after a certain amount, but I would bet it's quite good to have as much memory as possible for open worlds where you simulate the whole world, not just a small area around the player.
In that case bandwidth really isn't such a limiting factor, as you would never try to access all agents and their interactions in a single frame, but you do want them available at regular intervals.

Same goes for things like voxelizing the world for lighting: it's something you can use a lot of memory for, and it's nice to do beforehand.
 
I'm not sure why you said no to Rockster as it sounds like you're saying the same thing. It seems to me some people are still saying more memory isn't very useful since you can't access all memory each frame, but as you and others have said that line of thinking is incorrect.

Bleh, that's what I get for posting after a long, exhausting day. I completely blanked on his second paragraph. It still serves as a general response to people who think either all the game-reserved memory is being used every frame or it is being wasted.

Regards,
SB
 
And considering how slow 2.5" HDDs are relative to the huge advances in rendering power (compared to the last generation of consoles), you want to stream as little as possible if you want to increase fidelity and variety as much as possible. 5-6 GB allows you to keep a large amount of level-specific or area-specific (if streaming) data in fast memory
HDDs haven't improved much in the last 10 years, that's true... however, the question is: how are you going to fill main memory that is 10x+ larger if you give up on streaming? Loading screens are already borderline too long in most last-gen games; you can't just make your loading screens 10x longer and call it a day. If you can't load any more data during the loading screen, you need another way to fill the 10x+ larger memory, and I see no option other than streaming data during gameplay. If you want higher-fidelity content, you obviously need to stream more bytes (or dramatically improve data compression).
 
With regular HDDs we're expecting 120~150 MB/s for 3.5" 7200 rpm drives and 80~100 MB/s for 2.5" drives.

Put in perspective, filling the game's RAM (if copying directly from disk) would take a minimum of ~30 to ~60 seconds, which is probably an unacceptable loading time unless you have some way to mask it.

SSDs are the future, at least in conjunction with regular drives. :D
A relatively cheap one with over 100 GB can now easily hit 300+ MB/s and doesn't have seek-time issues.
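
A rough sketch of those load times at the sustained rates quoted above (seeks, decompression, and CPU work are ignored, so real numbers would be worse for the HDDs):

Code:
# Rough time to fill the game's RAM allocation from disk at sustained
# sequential rates (rates taken from the post above).
ram_to_fill_gb = 5.0
drives = [('2.5" HDD', 80), ('3.5" 7200 rpm HDD', 150), ('cheap SSD', 300)]

for label, mb_per_s in drives:
    seconds = ram_to_fill_gb * 1024 / mb_per_s
    print(f"{label:>18}: ~{seconds:.0f} s to load {ram_to_fill_gb:.0f} GB")

# 2.5" HDD ~64 s, 3.5" HDD ~34 s, SSD ~17 s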
 
Unless I'm missing something, your example shows the exact opposite of what you're claiming. The Titan clearly has far more bandwidth per GB than the PS4. PC cards with similar bandwidth and less RAM have even more still.

By comparison, the PS4 has a terrible bandwidth-to-memory-size ratio.

But the PS4 has only 3 GB of VRAM (at least KZ only uses that amount). Most of the 176 GB/s of bandwidth is consumed by the GPU, so the PS4 is similar to a PC card with 3 GB of VRAM and >150 GB/s of bandwidth. That is still pretty good bandwidth per GB (for VRAM).
 
But the PS4 has only 3 GB of VRAM
Huh? The PS4 has all of its game RAM available as 'VRAM'. I suppose you could say that with other aspects of the game taking up memory, that leaves 3 GB available for graphics, but there's nothing like a hard VRAM limit as you call it, and the same content can be used for non-graphics work.
 