PlayStation 4 (codename Orbis) technical hardware investigation (news and rumours)

I don't think decompression speed would pose any problems either, if that was your only concern. The security aspect is interesting: you could download the encrypted data as-is and decrypt / re-encrypt it (with a key specific to your hardware) after the fact, but that would require double the amount of storage space unless it's done in frequent small intervals. I have no idea how the PS4's security works though :)
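As a rough sketch of the "frequent intervals" idea (hypothetical keys and file layout, using AES-CTR from Python's cryptography package; this is not Sony's actual scheme): re-encrypting the download in place, one chunk at a time, keeps the transient storage overhead to a single chunk rather than a full second copy of the file.

Code:
# Sketch only: re-encrypt a downloaded file in place, one chunk at a time.
# Keys, nonces and the file format are made up; a real system would also
# need to handle interruption safely.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

CHUNK = 1 << 20  # 1 MiB at a time

def reencrypt_in_place(path, download_key, device_key, nonce_in, nonce_out):
    dec = Cipher(algorithms.AES(download_key), modes.CTR(nonce_in)).decryptor()
    enc = Cipher(algorithms.AES(device_key), modes.CTR(nonce_out)).encryptor()
    with open(path, "r+b") as f:
        offset = 0
        while True:
            f.seek(offset)
            chunk = f.read(CHUNK)
            if not chunk:
                break
            # CTR keeps the length unchanged, so the re-encrypted chunk can be
            # written back over the same byte range it was read from.
            plain = dec.update(chunk)
            f.seek(offset)
            f.write(enc.update(plain))
            offset += len(chunk)

# Usage (all values made up for the example):
# reencrypt_in_place("game.pkg", os.urandom(16), os.urandom(16),
#                    os.urandom(16), os.urandom(16))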
 
Noob question, if a game can run at 1080p60fps, could it also run at 2160p at 30fps? Same amount of pixels per second.
 
Noob question, if a game can run at 1080p60fps, could it also run at 3840x2160 at 30fps? Same amount of pixels per second.

It's actually 4x the number of pixels per frame. There are different bottlenecks on different systems, so it's a complex question and answer, actually.
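Quick back-of-the-envelope numbers for the question above (plain arithmetic, nothing platform-specific):

Code:
# 1080p60 vs 2160p30: pixels per frame and pixels per second.
per_frame_1080 = 1920 * 1080              # 2,073,600
per_frame_2160 = 3840 * 2160              # 8,294,400 -> 4x the pixels per frame

per_sec_1080p60 = per_frame_1080 * 60     # ~124 million pixels/s
per_sec_2160p30 = per_frame_2160 * 30     # ~249 million pixels/s -> 2x, not the same

print(per_frame_2160 / per_frame_1080)    # 4.0
print(per_sec_2160p30 / per_sec_1080p60)  # 2.0

So halving the frame rate does not cancel out the 4x pixel count; per second you are still pushing twice as many pixels.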

Let's take Trine 2 as an example:

The reason I pick Trine 2 is that it's the only game I know of where the developers stated they could run it at 2160p on either system.

It runs at native 1080p at 60 frames per second - but that's just the beginning of the story. Frozenbyte's impressive Trine 2 has migrated to PlayStation 4 in fine form - not only does it combine the optimal mix of resolution and frame-rate, but it's the only game to support stereoscopic 3D, running internally at an effective 1080p120 in the process. Indeed, according to the developer, Trine 2 could even hit 4K at 30fps should Sony ever unlock the output of the PS4 to support ultra-HD resolution.
http://www.eurogamer.net/articles/digitalfoundry-vs-trine-2-on-ps4

The Xbox One uses a cache-based memory architecture (a small 32 MB pool of ESRAM alongside DDR3), and that creates big problems when you want to render at an absolutely massive resolution like that. There may or may not be enough triangle and shader performance to render Trine 2 at 2160p (aka 4K). Even assuming that isn't an issue, another problem is that you also want to fit as much as possible of the frame you are assembling inside the ESRAM, because assembling each frame takes a massive amount of bandwidth at that resolution. They can put some of the frame in ESRAM and some in DDR3, but the more the frame overflows into DDR3, the more DDR3 bandwidth is used, which would probably lead to a domino effect of other memory bandwidth issues.

The X1 might also be Raster Operations Pipeline (ROP) limited when rendering Trine 2 at 2160p (4K).
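To put rough numbers on the ESRAM point: the 32 MB ESRAM size is public, but the buffer formats below are just an assumed example layout.

Code:
MiB = 1024 * 1024
ESRAM = 32 * MiB                  # Xbox One ESRAM capacity

w, h = 3840, 2160
color = w * h * 4                 # assumed 32-bit color target: ~31.6 MiB
depth = w * h * 4                 # assumed 32-bit depth/stencil: ~31.6 MiB

print(color / MiB)                # ~31.6
print((color + depth) / MiB)      # ~63.3
print(color + depth > ESRAM)      # True -> part of the frame must spill to DDR3

Under those assumptions a single 2160p color target alone nearly fills the ESRAM, so depth and any extra render targets spill into DDR3 and start eating its bandwidth, which is the domino effect described above.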
 

Noob question, if a game can run at 1080p60fps, could it also run at 2160p at 30fps? Same amount of pixels per second.

This is simplified, but roughly 16 ms is required for each frame to be completed at 60 fps. That doesn't really account for the fact that most games use buffering, but whatever.

That being said, going from 60 to 30 fps gives you breathing room of an additional ~16 ms. But you are quadrupling the pixels per frame, so everything is stressed roughly 4x harder and the bottlenecks would be in different areas than they are at 1080p. Because of this, there is no way to know how it would perform without actually testing it, and it would likely be significantly worse.

Edited: for accuracy/fact arrangement.

Both the CPU and GPU have ~16 ms to do their work in most titles!
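To put numbers on the budgets (plain arithmetic, using the resolutions from the question above):

Code:
budget_60 = 1000 / 60                         # ~16.7 ms per frame at 60 fps
budget_30 = 1000 / 30                         # ~33.3 ms per frame at 30 fps
pixel_ratio = (3840 * 2160) / (1920 * 1080)   # 4.0x the pixels per frame

# The time budget doubles but the pixel count quadruples,
# so the time available per pixel is halved.
print((budget_30 / budget_60) / pixel_ratio)  # 0.5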
 
If your CPU takes 4 ms to complete its code, that leaves 12 ms for the GPU to do its work. And so forth.
I think you'll find that games arrange their code so that while the GPU is busy rendering the previous frame the CPU is independently running game logic, reading inputs, rendering sound buffers (well, new consoles have DSPs doing that) and assembling display lists and so on for the next frame. You really don't want the GPU sitting on its ass doing nothing for 4ms (or whatever) every frame. That's a huge waste.
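A toy sketch of the overlap being described, assuming a made-up frame loop (not any console's actual API): a worker thread stands in for the GPU and consumes frame N's display list while the main thread, standing in for the CPU, builds frame N+1.

Code:
# Minimal pipelined frame loop; names and timings are invented for illustration.
import queue, threading, time

submitted = queue.Queue(maxsize=1)     # at most one prepared frame waiting

def gpu_worker():
    # Stand-in for the GPU: renders whatever display list the CPU submitted.
    while True:
        cmds = submitted.get()
        if cmds is None:
            break
        time.sleep(0.012)              # pretend rendering takes ~12 ms

gpu = threading.Thread(target=gpu_worker)
gpu.start()

for frame in range(5):
    # Stand-in for the CPU: game logic, input, audio and display-list building
    # for the *next* frame run while the GPU renders the previous one.
    time.sleep(0.004)                  # ~4 ms of CPU-side work
    display_list = f"commands for frame {frame}"
    submitted.put(display_list)        # blocks only if the GPU is a full frame behind

submitted.put(None)                    # tell the worker to finish
gpu.join()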
 
[Attached slides: SINFO_01.jpg, SINFO_02.jpg, SINFO_03.jpg]


Naughty Dog presentation.

Sorry if someone has already gone into depth on this; I hadn't seen it. It was linked in the Game Dev Presentation section but the link was broken.

The cache hierarchy part was pretty interesting.

 
How much RAM space does the os take up on the PS4? It can't be the 2gb+ unusable stuff right? No way does it cost that much. Isn't there some extra ddr3 ram coupled with that arm chip? How much does that actually do.

Seems like such a waste. Seems like they should take advantage of being a console and have a low os footprint and give devs them there extra ram. The Xbox One is the multimedia machine, not the PS4. Give us more o' that there ram plz. I'm talking out of my ass here, but you get the gist.

So devs get:
4.5gb GDDR5
0.5 GDDR5 managed by the os?
And 0.5 virtual memory right?
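Taking the split in the question at face value (these are the post's numbers plus the 8 GB total, not confirmed figures), the budget would add up roughly like this:

Code:
total_gddr5   = 8.0    # GB of GDDR5 in the console
direct_game   = 4.5    # GB guaranteed to the game (per the post)
flexible_phys = 0.5    # GB of OS-managed memory the game can borrow (per the post)
flexible_virt = 0.5    # GB of virtual/paged memory (per the post; not extra GDDR5)

os_reserve = total_gddr5 - direct_game - flexible_phys
print(os_reserve)      # 3.0 GB of physical GDDR5 left for the OS under these assumptions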
 
The large RAM allocation is reserved for future use. It seems rather substantial now, but it's easy to scale back in the future once Sony and Microsoft decide where they want their respective OSes to go. If they were to set the minimum now and there was any feature they wanted to add in the future they would be memory constrained.
 
If they were to set the minimum now and there was any feature they wanted to add in the future they would be memory constrained.
That's horse pucky. They could let the game software decide what features it wants to support and keep the vast majority of RAM open to the game. 7.5+ gigs could easily be made available. Things like networking, voice chat and other mission-critical resident features don't require gigabytes of RAM. Holding RAM in reserve for an underfeatured, crappy web browser, which the user can invoke at any time but which most people rarely ever use, is just stupid. I've tried the browser out exactly ONCE myself in the ~3/4 of a year since PS4 launch day. There's no need to have it near-instantly available at all times.

If a player wants to browse the web poorly with the built-in browser they can wait while the OS pages out a small part of the game to make room in RAM. The PS4 has a HDD as standard, so this is no biggie.

Same thing with camera support, Morpheus support and so on. None of this stuff needs to be resident all the time during gameplay. Load support dynamically when and as required, like any computer traditionally works. Windows PCs don't load every single fucking device driver and software utility included with the OS. Consoles really don't need to either.
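As a generic illustration of "load support dynamically when and as required" (plain Python, nothing PS4-specific; the module names are made up):

Code:
# On-demand feature loading: nothing is resident until the user actually asks
# for it, and it can be released again afterwards.
import importlib
import sys

_loaded = {}

def use_feature(name):
    """Import a feature module the first time it is requested."""
    if name not in _loaded:
        _loaded[name] = importlib.import_module(name)   # e.g. "browser", "camera"
    return _loaded[name]

def release_feature(name):
    """Let the feature's memory be reclaimed once the user is done with it."""
    _loaded.pop(name, None)
    sys.modules.pop(name, None)

# use_feature("browser")     # loaded only when invoked
# release_feature("browser") # freed when no longer needed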
 
Also, the browser doesn't allow many tabs while a game is running. I once played Ground Zeroes with two tabs open at the same time and it said there was no RAM left!
 
How much RAM space does the os take up on the PS4? It can't be the 2gb+ unusable stuff right? No way does it cost that much. Isn't there some extra ddr3 ram coupled with that arm chip? How much does that actually do.

Seems like such a waste. Seems like they should take advantage of being a console and have a low os footprint and give devs them there extra ram. The Xbox One is the multimedia machine, not the PS4. Give us more o' that there ram plz. I'm talking out of my ass here, but you get the gist.

So devs get:
4.5gb GDDR5
0.5 GDDR5 managed by the os?
And 0.5 virtual memory right?

I was just thinking about the extra 256MB of DDR3 and the secondary processor the other day, and wondering how much work they take off of the main chip and the 8GB of GDDR5. I always thought the PS4 vs Xbox One comparisons that didn't include the extra memory and secondary processor were a little unfair, because we don't know how much is being handled by them.

So far I haven't seen anything from the OS that the secondary chip and 256MB of DDR3 couldn't plausibly handle. That's half of the RAM that was in the whole PS3 or Xbox 360, and I'm sure it's at least 5x the RAM that was being used by the OS last generation.
 
They could....

Well they didn't, and until the powers that be working on the PS4's system software tell us why that much memory is being allocated to the OS, we can debate till the cows come home.

Maybe it's an intentional limiter for software progression. That isn't unheard of from past Sony consoles.

They can release the majority of the reserve once they figure out the direction they want to go with the OS and system features. It's a safe way to go, especially when the system software is so immature at launch.
 
[Attached slides: SINFO_01.jpg, SINFO_02.jpg, SINFO_03.jpg]


Naughty Dog presentation.

Sorry if someone has already gone into depth on this; I hadn't seen it. It was linked in the Game Dev Presentation section but the link was broken.

The cache hierarchy part was pretty interesting.


That main memory access seems to have very high latency, at least by Intel (nearly double) and even AMD FX standards.

Can't find figures for desktop Jaguar, but I assume they're similarly high?
 
That main memory access seems to have very high latency, at least by Intel (nearly double) and even AMD FX standards.

Can't find figures for desktop Jaguar, but I assume they're similarly high?
Intel mem latencies are around 150 cycles, while AMD latencies are around 200 cycles. I couldn't find Jaguar/Puma latency tests either.

Sisoft tests (Intel)
http://www.sisoftware.net/?d=qa&f=ben_mem_latency
Sandy Bridge (2500K) = 162.7 cycles
Sandy Bridge E (3960X) = 234.4 cycles

Anandtech:
http://www.anandtech.com/show/4955/the-bulldozer-review-amd-fx8150-tested/6
Bulldozer (8150) = 195 cycles
Sandy (2500K) = 148 cycles

In comparison, PPC cores (last gen) had 600+ cycles memory latencies.
 
Intel mem latencies are around 150 cycles, while AMD latencies are around 200 cycles. I couldn't find Jaguar/Puma latency tests either.

Sisoft tests (Intel)
http://www.sisoftware.net/?d=qa&f=ben_mem_latency
Sandy Bridge (2500K) = 162.7 cycles
Sandy Bridge E (3960X) = 234.4 cycles

Anandtech:
http://www.anandtech.com/show/4955/the-bulldozer-review-amd-fx8150-tested/6
Bulldozer (8150) = 195 cycles
Sandy (2500K) = 148 cycles

In comparison, PPC cores (last gen) had 600+ cycles memory latencies.

Measured in ns rather than cycles, those Jaguar figures look even higher, though perhaps cycles is a more relevant measure. Maybe a power-saving, low-frequency memory controller simply has to make that tradeoff?

Figures for Jaguar with the cores and NB at different clocks could be interesting ...

And wow at those last-gen PPC figures. Even taking account of the clock speeds, those figures are high! And with no dynamic branch prediction or OoOE either! :)
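For a rough conversion of the quoted figures into nanoseconds, assuming stock base clocks (2500K ≈ 3.3 GHz, FX-8150 ≈ 3.6 GHz, last-gen PPC ≈ 3.2 GHz, PS4 Jaguar ≈ 1.6 GHz):

Code:
def cycles_to_ns(cycles, ghz):
    return cycles / ghz

def ns_to_cycles(ns, ghz):
    return ns * ghz

print(cycles_to_ns(162.7, 3.3))   # Sandy Bridge 2500K: ~49 ns
print(cycles_to_ns(195,   3.6))   # Bulldozer FX-8150:  ~54 ns
print(cycles_to_ns(600,   3.2))   # last-gen PPC:       ~188 ns
print(ns_to_cycles(100,   1.6))   # 100 ns seen by a 1.6 GHz Jaguar is only 160 cycles

So the same absolute latency in nanoseconds costs far fewer cycles on a 1.6 GHz Jaguar than on a 3+ GHz desktop core, which is one reason raw cycle counts are awkward to compare across different clocks.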
 
Intel mem latencies are around 150 cycles, while AMD latencies are around 200 cycles. I couldn't find Jaguar/Puma latency tests either.

Sisoft tests (Intel)
http://www.sisoftware.net/?d=qa&f=ben_mem_latency
Sandy Bridge (2500K) = 162.7 cycles
Sandy Bridge E (3960X) = 234.4 cycles

Anandtech:
http://www.anandtech.com/show/4955/the-bulldozer-review-amd-fx8150-tested/6
Bulldozer (8150) = 195 cycles
Sandy (2500K) = 148 cycles

In comparison, PPC cores (last gen) had 600+ cycles memory latencies.
That is DDR3 vs GDDR5 cycles, so of course you need somewhat more. Measured in ns it may be almost the same, so you are just losing some extra cycles, at least if those are memory cycles. If they are CPU cycles needed to fetch the data, it may also have something to do with GDDR5; GDDR is not that good for CPU access patterns. But it's still way better than in the last console generation.
 