PlayStation 4 (codename Orbis) technical hardware investigation (news and rumours)

He's saying the user doesn't have to quit the game if you bring up the XMB to do other stuff (like PS3). Not so much on protected memory.
 
He's saying the user doesn't have to quit the game if you bring up the XMB to do other stuff (like PS3). Not so much on protected memory.

It is protected memory. The OS has reserved memory that cannot be touched by games. That memory cannot be repurposed for anything other than OS features. That is why the OS can always remain fully loaded.
 
PS3 and 360 would have had this too I think.

Just that in those cases there wasn't enough of it. The 32MB on 360 wasn't enough to hold the dash, only the guide.

Now presumably the consoles will have 1GB+ so they can hold the whole thing. Another advantage of lots of RAM: a much bigger-than-generational leap in OS RAM (likely >10x in both cases).
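
As a rough illustration of what "reserved memory that cannot be touched by games" means in practice, here is a minimal sketch, assuming a POSIX-style environment. The 1GB figure is just the rumoured number from this thread, and the user-space calls are only illustrative; on real hardware the fence would be enforced by the MMU/hypervisor page tables, not by anything the game calls.

```cpp
// Minimal sketch of "ring-fenced" OS memory. Sizes are rumoured figures from
// this thread, not confirmed, and on a real console the fence is enforced by
// the MMU/hypervisor rather than by user-space calls like these.
#include <sys/mman.h>
#include <cstddef>
#include <cstdio>

int main() {
    constexpr std::size_t kGiB       = 1024ull * 1024 * 1024;
    constexpr std::size_t kUnified   = 8 * kGiB;   // the unified GDDR5 pool
    constexpr std::size_t kOsReserve = 1 * kGiB;   // hypothetical OS slice

    // Carve the OS slice out of the pool. From the game's point of view the
    // range effectively does not exist: any access would fault.
    void* os_slice = mmap(nullptr, kOsReserve, PROT_NONE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (os_slice == MAP_FAILED) { std::perror("mmap"); return 1; }

    // Everything that is left is the game's budget; it can never grow into the
    // reserved slice, which is why the OS can stay fully resident at all times.
    std::printf("game budget: %zu bytes of %zu\n", kUnified - kOsReserve, kUnified);
    return 0;
}
```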
 
He is implying there is actually more than 8GB, and that games can access the full amount because there is additional memory for the OS. No idea why you think it's going to be a GB; that's just because MS are reserving more, when the rumours suggest 512MB.

Seems a bit much, but we have heard from DF that they are leaving all CPU/GPU resources available to use, so it's not impossible that the OS is separate, maybe running on the 'background' chip with its own pool of cheaper, more energy-efficient memory.

Or of course this guy might just not really know what is happening.
 
He is implying there is actually more than 8GB, and that games can access the full amount because there is additional memory for the OS. No idea why you think it's going to be a GB; that's just because MS are reserving more, when the rumours suggest 512MB.

Seems a bit much, but we have heard from DF that they are leaving all CPU/GPU resources available to use, so it's not impossible that the OS is separate, maybe running on the 'background' chip with its own pool of cheaper, more energy-efficient memory.

Or of course this guy might just not really know what is happening.

There was also the article that said compute could use all 8GB of RAM, so maybe they really did add memory for the OS.

http://www.gamesindustry.biz/articl...-simple-experience-for-developers-and-players

Compute has access to the system's full set of "very expensive, very exotic" 8GB 256-bit GDDR5 unified memory.
 
I'm going to go out on a little limb and say that this is yet another example of a highly simplified PR statement that is not vetted for accuracy and is not intended to educate the audience.

Some arithmetic will probably be enough for sanity-checking.
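
For example, a quick sketch of that arithmetic, using the OS-reservation figures floated in this thread (the rumoured 512MB, 1GB, or a 2GB budget); none of these are confirmed numbers, and the point is only that whatever the OS keeps, a game can never literally use the full 8GB:

```cpp
// Back-of-the-envelope check: the "all 8GB" line can only describe the
// hardware pool, not the game-visible budget, once any OS reservation exists.
// The reservation sizes below are rumoured/suggested figures, not confirmed.
#include <cstdio>

int main() {
    const double total_gb = 8.0;
    const double os_reserve_gb[] = {0.5, 1.0, 2.0};  // rumoured / suggested reservations

    for (double os : os_reserve_gb) {
        std::printf("OS keeps %.1f GB -> game sees %.1f GB of %.1f GB\n",
                    os, total_gb - os, total_gb);
    }
    return 0;
}
```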
 
There was also the article that said compute could use all 8GB of RAM, so maybe they really did add memory for the OS.

http://www.gamesindustry.biz/articl...-simple-experience-for-developers-and-players

I think you're reading too much into that statement. I personally wouldn't assume that PR would know exactly how much memory developers actually have access to, so the default statement is "all".

I hope that Sony and Microsoft do give themselves a decent budget of RAM to work with, like 2GB or so. If it turns out they don't need it, they can update their docs saying developers can use more; however, you can never give them less.
 
What are the pros of using a BSD OS? I have googled it and read a bit about it, but would rather have some of our more learned friends school me in layman's terms.
 
Not sure if this is the right thread or whether this was even posted already, but someone actually managed to implement a C++-compatible D3D11/DXGI/D3DCompiler API on top of PS4's low-level graphics API (supposedly a new version of libGCM):

http://n4g.com/news/1226625/paradox-and-yebis-running-on-ps4-with-direct3d11-layer
Valve's "togl" does the same for DirectX->OpenGL (https://developer.nvidia.com/sites/default/files/akamai/gamedev/docs/Porting Source to Linux.pdf). It allows the developer to write the rendering code on DirectX for Linux/Mac platform. The layer translates the calls to OpenGL equivalents (thin layer with fully inlined functions, so it has no overhead), and they also have a HLSL->GLSL shader converter. The presentation also has external links to open source HLSL->GLSL converters. It seems that most game developers want to write their games on top of DirectX instead of OpenGL, and are willing to spend extra time to create a layer / translator themselves, instead of just using OpenGL (that is mostly cross platform by default). So it doesn't surprise me that someone has done the same for PS4 (as the code reuse benefits are pretty big).

We have always been using our own thin low-level API, and hiding all the other device APIs (console APIs, DX, OpenGL) under it. This is also a pretty popular method among cross-platform game developers, and allows fast and easy porting from one platform to another. DX11 however complicates the situation, since the API is no longer just for graphics programming. Many future engines will use more compute shaders than they use graphics shaders (even for graphics rendering). The compute shaders require more complicated resource management (structured buffers, append buffers, indirect draw calls, data passing from one GPU kernel to another -> to graphics, etc.) that is harder to wrap nicely under a common interface (that hides all the gritty platform details).
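
To make the "thin, fully inlined wrapper" idea concrete, here is a minimal C++ sketch. The names, the backend-selection macros and the single draw call are all made up for illustration; this is not sebbbi's engine API, Valve's togl, or the PS4 D3D11 layer.

```cpp
// Minimal sketch of a thin, fully inlined platform wrapper. Everything here
// (namespace, Context, the backend macros) is invented for illustration.
#pragma once

#if defined(RENDER_BACKEND_D3D11)
  #include <d3d11.h>
#elif defined(RENDER_BACKEND_GL)
  #include <GL/gl.h>
#endif

namespace gfx {

struct Context {
#if defined(RENDER_BACKEND_D3D11)
    ID3D11DeviceContext* d3d;   // the real device context lives behind the wrapper
#endif
    // On GL the context is implicit (bound to the calling thread), so nothing
    // needs to be stored for that backend.
};

// Engine code only ever calls gfx::draw(). The forwarder is a one-liner, so an
// optimizing compiler inlines it and the wrapper adds no call overhead.
// Topology is assumed to be triangles (on D3D11 that would have been set
// earlier via IASetPrimitiveTopology).
inline void draw(Context& ctx, unsigned vertexCount, unsigned firstVertex) {
#if defined(RENDER_BACKEND_D3D11)
    ctx.d3d->Draw(vertexCount, firstVertex);
#elif defined(RENDER_BACKEND_GL)
    (void)ctx;
    glDrawArrays(GL_TRIANGLES, static_cast<GLint>(firstVertex),
                 static_cast<GLsizei>(vertexCount));
#endif
}

} // namespace gfx
```

The compute-shader resources sebbbi mentions (append buffers, indirect draws and so on) are exactly the things that don't collapse this neatly into one-line forwarders, which is his point about DX11 complicating the approach.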
 
He mentioned system memory being "ring-fenced". I'm interested in this, what does it mean?
Reserved, as others have mentioned. On PS3, OS functions had to be loaded/unloaded. PS4 will have OS in memory that the game cannot access. This might be hardware protected or not, but that's immaterial to the quote and feature.

That's talking about the GPU data movement, not the system RAM.

There was also the article that said compute could use all 8GB of RAM, so maybe they really did add memory for the OS.

http://www.gamesindustry.biz/article...rs-and-players
They were talking about system architecture, wherein compute (the whole APU) has direct access to a unified pool of 8GB. The actual system implementation then walls off 1GB or whatever for the OS. The OS can still use compute in that memory space.
 
PS3 and 360 would have had this too I think.

Just that in those cases there wasn't enough of it. The 32MB on 360 wasn't enough to hold the dash, only the guide.

Now presumably the consoles will have 1GB+ so they can hold the whole thing. Another advantage of lots of RAM: a much bigger-than-generational leap in OS RAM (likely >10x in both cases).

On the surface, yes. But I think it may have more implications than "abundant memory".

It doesn't really mean protected memory or more than 8GB memory though. Protected memory is probably there for security and stability reasons, but it doesn't facilitate app switching per se.


What are the pros of using a BSD OS? I have googled it and read a bit about it, but would rather have some of our more learned friends school me in layman's terms.

I am curious as well. e.g., In PS3, the app/game controls "everything". When they introduced the Move controller, I vaguely remember Sony filed a patent that "tricks" the game/app to pass the proper Move coordinates to the in-game XMB (or in-game web browser) when the latter becomes the foreground app. Perhaps the game needs to translate the cursor to the foreground widget's coordinate system.
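
As a toy illustration of the kind of coordinate hand-off being speculated about here (not from any Sony patent or SDK), the game would just remap the screen-space Move cursor into the foreground widget's local space:

```cpp
// Toy illustration only; these structs are invented, not from any Sony API.
struct Rect  { float x, y, w, h; };   // where the foreground widget sits on screen
struct Point { float x, y; };         // Move cursor position in screen space

// Remap a screen-space cursor into the widget's own normalised (0..1) space,
// which is roughly the translation step being speculated about above.
inline Point toWidgetSpace(Point screen, Rect widget) {
    return { (screen.x - widget.x) / widget.w,
             (screen.y - widget.y) / widget.h };
}
```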

In a more mature OS like BSD, I would think that the app multitasking model is fully developed and properly layered, such that they don't need special-case solutions like this.

Would be interesting to see how everything is laid out and works together. In particular, the paging system and pre-emptive multitasking may interfere with a running game. iOS and Android design their app models differently. Sony will have their own concerns (e.g., live streaming, Remote Play). They also learned from their new Vita OS experience (e.g., PSN integration between PS3 and the new model, game resume?).

If nothing else, the OS is another big area for techies to explore.

e.g., another thing: can you change an accessory's settings outside of the app/game that is using the accessory at that moment?
 
What are the pros of using a BSD OS? I have googled it and read a bit about it, but would rather have some of our more learned friends school me in layman's terms.
Well, it's a known kernel, so programmers already have a handle on the BSD API. But I would not base a console on the BSD kernel myself (and I say this with almost twenty years of experience with BSD). It's not a real time OS. The kernel is monolithic and very heavyweight.

Without massive architectural changes, it's not really suited to a primarily gaming device. It certainly indicates that Sony is aiming for a more general purpose box.
 
Reserved, as others have mentioned. On PS3, OS functions had to be loaded/unloaded. PS4 will have OS in memory that the game cannot access. This might be hardware protected or not, but that's immaterial to the quote and feature.

I'd say we inadvertently have a pretty close confirmation that perhaps all OS functions will be available in-game (unlike PS3, where only Game Data and Mic setup can run from the in-game XMB). Except maybe for those features that require additional compute/GPU resources (beyond what's already reserved or assumed reserved)? Along those same lines, I'm still wondering how robust the video encoder is. PS4 Eye video, Remote Play, and all of the Share functions will likely rely on it. Can the encoder handle these functions simultaneously? E.g. playing Sports Championship 3, with a live video chat over PSN, while uStreaming the whole thing? That's at least two completely separate encodes (the PS Eye RAW/YUV video feed and the game's frame buffer both being encoded).
 
Some info (and speculation) on the Secondary Custom Chip from Watch Impress.

The translation is a bit sketchy, but it would seem that the secondary custom chip and the video encode/decode hardware sit outside of the APU and it looks like it doesn't write the video to main memory.

"Encoder and video decoder hardware-based is on the tip of a separate. Therefore, resources needed for the game I do not use any and all bandwidth and CPU as well. Memories never (to capture the video play) using any" and SCEA explains.

Encoding of gameplay video is not performed in the APU; as far as they can tell, all of that work takes place on a completely different chip.

Which likely means it's writing to the HDD, or to some separate buffer dedicated to the video encode/decode hardware, or perhaps even one shared with the secondary custom chip. A smaller secondary storage location of currently undecided size would explain why we've not seen a firm confirmation of how much gameplay video is stored for upload. We've heard 15 minutes from rumour, but all official Sony statements seem to say "the last few minutes". Wherever it's stored, it appears that the video can be kept until the system goes into standby and then uploaded automatically (perhaps a good option while playing online games).
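
A sketch of the kind of fixed-size circular buffer such a dedicated store would imply: the encoder keeps appending, and once the buffer is full the oldest video falls off, so "the last few minutes" is simply whatever fits. The class and the sizes are invented for illustration, not PS4 specifics.

```cpp
// Hypothetical replay store: a fixed-size ring buffer of encoded video.
// Once full, new data overwrites the oldest, so the window of retained
// gameplay is bounded by the buffer's capacity.
#include <cstddef>
#include <cstdint>
#include <vector>

class ReplayRingBuffer {
public:
    // capacityBytes must be > 0.
    explicit ReplayRingBuffer(std::size_t capacityBytes)
        : data_(capacityBytes), head_(0), filled_(false) {}

    // Append one encoded chunk (e.g. a few frames of compressed video),
    // overwriting the oldest data when the buffer wraps around.
    void append(const std::uint8_t* chunk, std::size_t size) {
        for (std::size_t i = 0; i < size; ++i) {
            data_[head_] = chunk[i];
            head_ = (head_ + 1) % data_.size();
            if (head_ == 0) filled_ = true;   // we have wrapped at least once
        }
    }

    // How much of "the last few minutes" is currently held.
    std::size_t storedBytes() const { return filled_ ? data_.size() : head_; }

private:
    std::vector<std::uint8_t> data_;
    std::size_t head_;
    bool filled_;
};
```

For scale: 15 minutes of video at 8 Mbit/s is roughly 900MB (900 s at 1 MB/s), which is why the size of that buffer, and where it physically lives, matters.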

They also speculate that the display output goes through the secondary chip instead of through the APU itself (along with the supposition that the secondary custom chip includes the video encode/decode hardware as well):

This suggests the possibility that the screen output from the APU goes to the display via the secondary chip. That is, in the PS4 the display output interface may be mounted on the secondary chip rather than on the APU. If the screen output is routed through the secondary chip, it also becomes easy to encode the frame buffer inside the secondary chip. Of course, the output from the APU could also be sent to the secondary chip over a separate line, but output going through the secondary chip seems more natural.

As others have speculated, Watch Impress thinks there may be separate memory for the Custom Chip (and video hardware):

If it takes such a configuration, the secondary chip itself would need a certain amount of memory and a microcontroller or processor cores. Conversely, if CPU cores are included, it becomes easier to do various things on the secondary chip alone. But a chip with external memory piles up costs, and eDRAM limits the choice of foundry, so it's hard to guess what they've done.

They also go quite a bit deeper into why they think the custom chip sitting outside of the APU, with the video encode/decode hardware, is a Sony design and not AMD's. A good read overall; a solid translation would be very useful, though.
 
Well, it's a known kernel, so programmers already have a handle on the BSD API. But I would not base a console on the BSD kernel myself (and I say this with almost twenty years of experience with BSD). It's not a real time OS. The kernel is monolithic and very heavyweight.

Without massive architectural changes, it's not really suited to a primarily gaming device. It certainly indicates that Sony is aiming for a more general purpose box.

If they want, they can run BSD on top of a real-time kernel. Not saying PS4 has one, but if they need it, there should be a solution somewhere.

So far, both PS3 and Vita use bits and pieces from BSD (FreeBSD and NetBSD).
 
I'm playing with Juniper network stuff at work, and they are all on RTCore/BSD, so there certainly are options for slim RT kernels based on BSD.
 
Some info (and speculation) on the Secondary Custom Chip from Watch Impress.

The translation is a bit sketchy, but it would seem that the secondary custom chip and the video encode/decode hardware sit outside of the APU and it looks like it doesn't write the video to main memory.
That doesn't sound right. UVD and whatever they call the encoder block in GCN take up hardly any space on the chip, so what would be the point of moving them off-chip?
 