Scott_Arms is probably correct. Ring-fenced just means reserved and possibly out of bounds for games.
So you can switch between them without giving up memory.
He's saying the user doesn't have to quit the game to bring up the XMB and do other stuff (like on the PS3). It's not so much about protected memory.
He is implying there is actually more than 8GB, and that games can access the full amount, with additional memory on top for the OS. No idea why you think it's going to be a GB; is this just because MS are reserving more? The rumours suggest 512MB.
Seems a bit much, but we have heard from DF that they are leaving all CPU/GPU resources available to use, so it's not impossible that the OS is separate, maybe running on the 'background' chip with its own pool of cheaper, more energy-efficient memory.
Or of course this guy might just not really know what is happening.
Compute has access to the system's full set of "very expensive, very exotic" 8GB 256-bit GDDR5 unified memory.
There was also the article that said Compute could use all 8GB of RAM so maybe they really did add Memory for the OS.
http://www.gamesindustry.biz/articl...-simple-experience-for-developers-and-players
Not sure if this is the right thread or whether this was even posted already, but someone actually managed to do a C++ compatible D3D11/DXGI/D3DCompiler API on top of PS4's low-level graphics API (supposedly a new version of libGCM).
http://n4g.com/news/1226625/paradox-and-yebis-running-on-ps4-with-direct3d11-layer

Valve's "togl" does the same for DirectX->OpenGL (https://developer.nvidia.com/sites/default/files/akamai/gamedev/docs/Porting Source to Linux.pdf). It allows the developer to write the rendering code against DirectX for the Linux/Mac platforms. The layer translates the calls to OpenGL equivalents (a thin layer with fully inlined functions, so it has no overhead), and they also have an HLSL->GLSL shader converter. The presentation also has external links to open source HLSL->GLSL converters. It seems that most game developers want to write their games on top of DirectX instead of OpenGL, and are willing to spend the extra time to create a layer / translator themselves, instead of just using OpenGL (which is mostly cross-platform by default). So it doesn't surprise me that someone has done the same for PS4, as the code reuse benefits are pretty big.
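To make the "thin layer with fully inlined functions" idea concrete, here is a minimal C++ sketch of the pattern (not togl's actual code; all names here are made up for illustration): the game keeps calling a D3D11-shaped interface, and inline wrappers forward each call to a GL-style backend, so the extra layer adds no runtime cost.

// Minimal sketch of a "thin translation layer": a D3D11-style interface whose
// methods are fully inlined forwards to an OpenGL-style backend. Names are
// illustrative stand-ins, not real togl or driver code.
#include <cstdio>

// Stand-ins for the OpenGL entry points the layer would call
// (in a real build these would come from a GL header / loader).
inline void glClearColorStub(float r, float g, float b, float a) {
    std::printf("glClearColor(%.2f, %.2f, %.2f, %.2f)\n", r, g, b, a);
}
inline void glClearStub() { std::printf("glClear(GL_COLOR_BUFFER_BIT)\n"); }

// D3D11-flavoured front end: the same call shape the game code already uses.
struct FakeRenderTargetView { /* would wrap a GL framebuffer/texture id */ };

struct FakeDeviceContext {
    // Fully inlined: the compiler collapses this wrapper into the GL calls.
    inline void ClearRenderTargetView(FakeRenderTargetView* /*rtv*/,
                                      const float colour[4]) {
        glClearColorStub(colour[0], colour[1], colour[2], colour[3]);
        glClearStub();
    }
};

int main() {
    FakeDeviceContext ctx;
    FakeRenderTargetView backbuffer;
    const float clearColour[4] = {0.0f, 0.2f, 0.4f, 1.0f};
    // Game code keeps its D3D-style call; the layer turns it into GL work.
    ctx.ClearRenderTargetView(&backbuffer, clearColour);
    return 0;
}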
He mentioned system memory being "ring-fenced". I'm interested in this, what does it mean?

Reserved, as others have mentioned. On PS3, OS functions had to be loaded/unloaded. PS4 will have the OS in memory that the game cannot access. This might be hardware protected or not, but that's immaterial to the quote and feature.
That's talking about the GPU data movement, not the system RAM.
There was also the article that said Compute could use all 8GB of RAM so maybe they really did add memory for the OS.
http://www.gamesindustry.biz/article...rs-and-players

They were talking about system architecture, wherein compute (the whole APU) has direct access to a unified pool of 8 GB. The actual system implementation then walls off 1 GB or whatever for the OS. The OS can still use compute in that memory space.
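As a toy illustration of what "walling off" part of a unified pool means, here is a short C++ sketch (numbers and names are purely illustrative, assuming a hypothetical 1 GB OS reservation out of 8 GB): a game-side allocator that simply refuses to hand out anything above the fence.

// Toy model of "ring-fencing": one unified pool, with the top slice reserved
// for the OS and never handed out to game allocations. Illustrative only.
#include <cstdint>
#include <cstdio>

constexpr std::uint64_t kTotalPool   = 8ull * 1024 * 1024 * 1024; // 8 GB unified pool
constexpr std::uint64_t kOsReserved  = 1ull * 1024 * 1024 * 1024; // hypothetical 1 GB OS fence
constexpr std::uint64_t kGameVisible = kTotalPool - kOsReserved;  // what the game may touch

class GamePool {
public:
    // Returns an offset into the unified pool, or UINT64_MAX if the request
    // would cross into the OS-reserved region.
    std::uint64_t alloc(std::uint64_t bytes) {
        if (next_ + bytes > kGameVisible) return UINT64_MAX; // would hit the fence
        std::uint64_t off = next_;
        next_ += bytes;
        return off;
    }
private:
    std::uint64_t next_ = 0; // trivial bump allocator, no freeing
};

int main() {
    GamePool pool;
    std::uint64_t textures = pool.alloc(512ull * 1024 * 1024);      // 512 MB: fine
    std::uint64_t tooBig   = pool.alloc(7ull * 1024 * 1024 * 1024); // would cross the fence
    std::printf("textures at offset %llu, oversized request %s\n",
                (unsigned long long)textures,
                tooBig == UINT64_MAX ? "rejected by the fence" : "allowed");
    return 0;
}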
PS3 and 360 would have had this too I think.
Just in those cases, not enough. The 32MB on 360 wasn't enough to hold the dash, only the guide.
Now presumably the consoles will have 1GB+ so they can hold the whole thing. Another advantage of lots of RAM: a bigger-than-generational leap in OS RAM (>10X in both cases, likely).
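Quick arithmetic on that ">10X" claim, using the 32 MB figure above for the 360 guide, the commonly quoted ~50-120 MB range for the PS3's OS footprint, and the rumoured (not confirmed) ~1 GB next-gen reservation:

// Rough ratios for the OS-RAM jump. 32 MB is the 360 guide reservation
// mentioned above; ~50-120 MB is the commonly quoted PS3 OS footprint over
// its life; 1 GB is the rumoured next-gen reservation, not a confirmed number.
#include <cstdio>

int main() {
    const double nextGenOsMb  = 1024.0; // rumoured ~1 GB reservation
    const double x360GuideMb  = 32.0;   // 360 guide reservation
    const double ps3OsMbLate  = 50.0;   // PS3 OS footprint, late in its life
    const double ps3OsMbEarly = 120.0;  // PS3 OS footprint, around launch

    std::printf("vs 360 guide: %.0fx\n", nextGenOsMb / x360GuideMb);          // 32x
    std::printf("vs PS3 OS:    %.0fx to %.0fx\n",
                nextGenOsMb / ps3OsMbEarly, nextGenOsMb / ps3OsMbLate);        // ~9x to ~20x
    return 0;
}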
What are the pros of using a BSD OS? I have googled it and read a bit about it, but would rather have some of our more learned friends school me in layman's terms.
Well, it's a known kernel, so programmers already have a handle on the BSD API. But I would not base a console on the BSD kernel myself (and I say this with almost twenty years of experience with BSD). It's not a real-time OS. The kernel is monolithic and very heavyweight.
Without massive architectural changes, it's not really suited to a primarily gaming device. It certainly indicates that Sony is aiming for a more general purpose box.
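For a sense of why "not a real-time OS" matters for a game box: a 60 Hz game loop has a hard ~16.6 ms budget per frame, and a general-purpose scheduler makes no guarantee about when you get the CPU back. A minimal, console-agnostic C++ sketch that just counts missed frame deadlines:

// Illustrates the "real-time" concern: a 60 Hz loop has ~16.6 ms per frame,
// and on a non-real-time kernel scheduling jitter can blow the budget.
#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    const auto frameBudget = std::chrono::microseconds(16667); // ~60 Hz
    int missed = 0;

    for (int frame = 0; frame < 300; ++frame) {                 // ~5 seconds of frames
        auto start = clock::now();
        std::this_thread::sleep_for(std::chrono::milliseconds(10)); // pretend "work"
        auto elapsed = clock::now() - start;
        if (elapsed > frameBudget) ++missed;                     // we got the CPU back late
        if (elapsed < frameBudget)
            std::this_thread::sleep_for(frameBudget - elapsed);  // wait out the frame
    }
    std::printf("missed %d of 300 frame deadlines\n", missed);
    return 0;
}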
"Encoder and video decoder hardware-based is on the tip of a separate. Therefore, resources needed for the game I do not use any and all bandwidth and CPU as well. Memories never (to capture the video play) using any" and SCEA explains.
Encoding video of game play will not be performed in the APU. I can look at all the work, and takes place in a completely different chip.
Then, this shows that there is a possibility that the screen output from the APU is output to the display via the secondary tip once. That is, in the PS4, display output interface instead APU, may have been mounted on the secondary chip. I put out via the secondary chip screen output, you will also be easy to encode the screen buffer inside the secondary chip. If you were thinking of course, but can also be sent to the secondary chip on a separate line with output from the APU, and the output from the secondary chip I'm more natural.
If we take such a configuration, even the secondary chip, a certain amount or more of memory and a microcontroller or processor cores is required. I put the CPU cores along the reverse, can be performed only by a secondary chip various things you will be easier. Although costs piling up into chips and external memory, because it is difficult to limit the foundry also in eDRAM, hard to guess what's going on.
The translation is a bit sketchy, but it would seem that the secondary custom chip and the video encode/decode hardware sit outside of the APU, and it looks like it doesn't write the video to main memory.

That doesn't sound right; UVD and whatever they called the encoder block in GCN don't take up squat space on the chip, so what could be the point of moving it off-chip?
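For a rough sense of the bandwidth the SCEA quote is talking about: continuously reading back a 1080p framebuffer for encoding is a few hundred MB/s of extra traffic if it has to come out of the same pool the game is using, which is what a separate encode chip with its own memory would avoid. A back-of-the-envelope sketch (the 30 fps capture rate and RGBA8 buffer are assumptions):

// Back-of-the-envelope: bandwidth cost of capturing gameplay video from the
// shared framebuffer, which a separate encode chip with its own memory avoids.
#include <cstdio>

int main() {
    const double width  = 1920, height = 1080;
    const double bytesPerPixel = 4;   // RGBA8 framebuffer (assumed)
    const double captureFps    = 30;  // assumed capture rate

    double bytesPerSecond     = width * height * bytesPerPixel * captureFps;
    double megabytesPerSecond = bytesPerSecond / (1024.0 * 1024.0);

    // ~237 MB/s of extra reads out of the unified pool, before any encode work,
    // if the capture path shared the game's memory and bandwidth.
    std::printf("capture readback: %.0f MB/s\n", megabytesPerSecond);
    return 0;
}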