*spin-off* Ryse Trade-Offs

Why do you have to do this? :cry:
I (and many Xbox fans as well) was hoping this method could even the playing field.

I would say the playing field has already been evened out by the hardware dynamic scaler. Unless developers want to devote time to creating a software version of it for the PS4 versions of games, Xbox One versions might have a better framerate. For example, if both games are native 1080p and a lot of stuff is happening on screen, there might be framerate drops on the PS4 version where the Xbox version would hold steady and instead drop its resolution dynamically. The HUD stays 1080p while the rendering resolution drops momentarily.
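Roughly, that kind of scaler is just a feedback loop on GPU frame time. Here's a minimal C++ sketch of the idea, assuming a 33 ms target and 5% scale steps purely for illustration; neither value comes from either console's hardware:

```cpp
#include <algorithm>
#include <cstdio>

// Minimal sketch of a dynamic-resolution feedback loop: the HUD plane stays
// at native 1080p while the game plane's render size follows GPU load.
// The 33.3 ms target and 0.05 step are placeholder values, not console specs.
struct DynamicScaler {
    float scale = 1.0f;                          // fraction of native width/height
    void update(float gpuFrameMs, float targetMs = 33.3f) {
        if (gpuFrameMs > targetMs * 1.05f)       // over budget: drop resolution
            scale = std::max(0.5f, scale - 0.05f);
        else if (gpuFrameMs < targetMs * 0.90f)  // comfortably under: recover
            scale = std::min(1.0f, scale + 0.05f);
    }
    int width()  const { return static_cast<int>(1920 * scale); }
    int height() const { return static_cast<int>(1080 * scale); }
};

int main() {
    DynamicScaler game;                          // game plane follows load
    const int hudW = 1920, hudH = 1080;          // HUD plane never scales
    const float fakeGpuTimes[] = { 30.0f, 36.0f, 38.0f, 31.0f, 28.0f };
    for (float ms : fakeGpuTimes) {
        game.update(ms);
        std::printf("game %dx%d, HUD %dx%d\n", game.width(), game.height(), hudW, hudH);
    }
    return 0;
}
```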
 
PS3 has display planes too, although 2 instead of 3. Display planes with hardware scaling are a common feature, and mobile chipsets include far better functionality than either console. The unknown for PS4 is whether the UI can be on the second plane and preserved while the game layer dynamically scales, or whether the second plane is purely for OS UI overlays.
 
EDITED: I thought the primary benefit of the display planes was to always render the foreground HUD natively and then dynamically alter the resolution of the title in the background. Native-res instrumentation, guns, etc. in 1080p should minimize the distraction and degradation in IQ to the player, since those are the things that benefit from sharp edges and fine detail. With only 2 DP I think the purpose is clearly different, solely the user experience/interaction with the OS. Adding the 3rd seems to be more in service of the types of dynamic res/scaling techniques we've been talking about.
 
PS3 has display planes too, although 2 instead of 3. Display planes with hardware scaling are a common feature, and mobile chipsets include far better functionality than either console. The unknown for PS4 is whether the UI can be on the second plane and preserved while the game layer dynamically scales, or whether the second plane is purely for OS UI overlays.

But that link says one of the PS4's planes is used by the OS and the other one is given to the game. That confuses me, because I remember Cerny speaking about how simply rendering HUDs at 1080p can consume a lot of bandwidth and that it's something they worked to avoid with PS4. Or I imagined that, or maybe he meant the OS UI.
 
With only 2 DP I think the purpose is clearly different, solely the user experience/interaction with the OS.
That's possibly true, but, unless the OS is spamming content to the foreground, that'd be a terrible waste of a display plane. So it may be that the game HUD can share this layer with the OS, especially if handled in the system API. It's easy enough to arrange writing to the HUD and then letting the OS draw on top of that afterwards. With HUD and game draws handled separately, you don't need to render the HUD last on top, so you can draw that first, then draw the game while the OS is adding whatever on top. If handled in the API, one could even support dynamic HUDs that adapt to OS content, although that's wild speculation on my part.
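If the HUD, game and OS really do sit on separate planes, draw order and stacking order come apart, which is the point being made above. A small C++ sketch of that decoupling, with made-up plane names and z values rather than anything from the actual hardware:

```cpp
#include <cstdio>

// Sketch of the idea that submission order and stacking order are decoupled
// once HUD, game and OS content live on separate planes: a fixed z-order at
// composition decides what ends up on top, not which layer was rendered last.
// Plane names and z values here are illustrative, not platform definitions.
struct Plane { const char* name; int z; };

int main() {
    const Plane planes[] = { { "game", 0 }, { "hud", 1 }, { "os", 2 } };  // bottom to top

    // The title can write the HUD first and the game afterwards, while the OS
    // draws its overlay whenever it likes; this order is arbitrary.
    const char* drawOrder[] = { "hud", "game", "os" };
    for (const char* layer : drawOrder)
        std::printf("rendered into '%s' plane\n", layer);

    // At output the compositor walks the planes in z-order, ignoring the
    // order in which they were written.
    for (const Plane& p : planes)
        std::printf("composited layer z=%d: %s\n", p.z, p.name);
    return 0;
}
```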
 
I don't think the OS necessarily needs to be spamming the foreground in order for there to be a valid use case for the planes to exist. All they really need is a use case where there is a HUD and then a notification or something from the OS. If what you say is true, why have any DPs, why does the XBO have 3, and what are the use cases for them?
 
I would say the playing field has already been evened out by the hardware dynamic scaler. Unless developers want to devote time to creating a software version of it for the PS4 versions of games, Xbox One versions might have a better framerate. For example, if both games are native 1080p and a lot of stuff is happening on screen, there might be framerate drops on the PS4 version where the Xbox version would hold steady and instead drop its resolution dynamically. The HUD stays 1080p while the rendering resolution drops momentarily.

Not to mention X1 offloading audio from the CPU, its higher clock, and most games being CPU limited anyway... ;)
 
Any source on the relative CPU allocations besides the 2 Core/2 Core that's been rumored?

Just thinking about it, Kinect 2 is going to require resources regardless of whether you use it or not, and it needs to be running constantly. I can't give you hard numbers, but it should be obvious that it reserves more.
 
It's highly unlikely the display planes will be used like that. As described by MS, they are handy for rendering your game at an arbitrary or dynamic resolution and overlaying a native-resolution HUD on top for free, plus Snap windows and OS notifications. Trying to render distant objects at a lower resolution will just make them appear chunkier and more aliased.

This is done all the time in modern games as it is. Backgrounds are just that: areas players don't focus on at all. There is no loss in visual fidelity from lowering res when modern games have lower-res backdrops in the first place.

And any attempt to merge two unlike resolutions will likely produce ugly, hard transitions that will look like screen tears or mipmap transitions.

They are already doing precisely this on X1. You don't get screen tearing or awkward transitions. You're just making stuff up. :/

By virtue of having them in different planes there is no practical manner I can think of to blend them together, beyond something like a parallax background for 2D games.

They are processed independently and composited at the output. We fortunately have both the patents for the display planes and the VGLeaks info. It's not just res either; it can also adjust framerates and color depth between the planes.
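For what "composited at the output" could mean in practice, here is a toy software model in C++: each plane keeps its own resolution and is scaled and alpha-blended per output pixel. The nearest-neighbour sampling, two-plane setup and sizes are illustrative assumptions, not a description of the real scanout hardware:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Toy software model of output-time compositing: each plane keeps its own
// source resolution and is scaled (nearest-neighbour here) and alpha-blended
// per output pixel. Real hardware does this at scanout with proper filtering;
// the two-plane setup, sizes and colours below are illustrative only.
struct Pixel { uint8_t r, g, b, a; };

struct Plane {
    int w, h;
    std::vector<Pixel> buf;                              // w * h pixels
    Pixel sample(int ox, int oy, int outW, int outH) const {
        int sx = ox * w / outW, sy = oy * h / outH;      // map output to source
        return buf[sy * w + sx];
    }
};

// Standard "over" blend in integer math.
Pixel blend(Pixel dst, Pixel src) {
    auto mix = [&](uint8_t d, uint8_t s) {
        return static_cast<uint8_t>((s * src.a + d * (255 - src.a)) / 255);
    };
    return { mix(dst.r, src.r), mix(dst.g, src.g), mix(dst.b, src.b), 255 };
}

int main() {
    const int outW = 1920, outH = 1080;
    Plane game { 1280, 720,  std::vector<Pixel>(1280 * 720,  { 40, 80, 160, 255 }) };   // 720p game layer
    Plane hud  { 1920, 1080, std::vector<Pixel>(1920 * 1080, { 255, 255, 255, 64 }) };  // native-res HUD layer
    Pixel out = blend(game.sample(960, 540, outW, outH),
                      hud.sample(960, 540, outW, outH));
    std::printf("centre output pixel: %d %d %d\n", out.r, out.g, out.b);
    return 0;
}
```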
 
That's possibly true, but, unless the OS is spamming content to the foreground, that'd be a terrible waste of a display plane. So it may be that the game HUD can share this layer with the OS, especially if handled in the system API. It's easy enough to arrange writing to the HUD and then letting the OS draw on top of that afterwards. With HUD and game draws handled separately, you don't need to render the HUD last on top, so you can draw that first, then draw the game while the OS is adding whatever on top. If handled in the API, one could even support dynamic HUDs that adapt to OS content, although that's wild speculation on my part.

I had speculated about that a while back. That OS plane taking the place of the HUD would need to be able to update alongside the gameplay though. Not sure how doable that is. Here is the display planes article for anyone interested. There is also a patent that I don't have the link to anymore.

http://www.vgleaks.com/durango-display-planes/
 
I still don't understand how that works. How will the display planes know what's background and what's foreground? Where do the various 1/4 resolution renders go? If you're using a deferred renderer, does that mean some of the render targets have to be rendered twice, once on each display plane?
 
Sweet, can I get a link to a video or picture of a game using this, or are you assuming this?

I already linked to the VGLeaks info which outlines in some detail how the compositing works. It's not some video or game implementation since it's not a software implementation at all. It's all in hardware.

There is some really fascinating research MSR did in a very closely related area, though. They used a foveated fall-off in visual fidelity for a test-bed game environment. A high-speed eye tracker found the focal point users were looking at on screen, and independently processed display planes were used to diminish fidelity radially from that point. Once the compositing was tweaked with focus-group testing, they found a massive saving in rendering/processing requirements while users still couldn't tell the difference. It was something like a 5-6 fold decrease in processing requirements for producing imagery that users couldn't distinguish from a full-screen, high-fidelity image. You can dig into that if you want more info on the related MSR research.
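The radial fall-off part is easy to picture as code: pick a fidelity tier per screen region from its distance to the gaze point. A minimal C++ sketch, where the radii and the three tiers are invented numbers rather than figures from the MSR work:

```cpp
#include <cmath>
#include <cstdio>

// Sketch of the radial fall-off idea: choose a fidelity tier for a screen
// location from its distance to the tracked gaze point. The radii and the
// three tiers are invented numbers, not figures from the MSR research.
int tierFor(float px, float py, float gazeX, float gazeY) {
    float d = std::hypot(px - gazeX, py - gazeY);   // distance in pixels
    if (d < 200.0f) return 0;    // inner region: full resolution
    if (d < 500.0f) return 1;    // middle ring: half resolution
    return 2;                    // periphery: quarter resolution
}

int main() {
    const float gazeX = 960.0f, gazeY = 540.0f;     // pretend eye-tracker output
    const float samples[][2] = { { 970, 545 }, { 1300, 600 }, { 100, 100 } };
    for (const auto& s : samples)
        std::printf("pixel (%.0f, %.0f) -> tier %d\n", s[0], s[1],
                    tierFor(s[0], s[1], gazeX, gazeY));
    return 0;
}
```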
 
I already linked to the VGLeaks info which outlines in some detail how the compositing works. It's not some video or game implementation since it's not a software implementation at all. It's all in hardware.

I think you are reading too much into it. It says two game and one OS layer. If the UI is one plane, the other (bottom) is the actual rendered game buffer.
 
You can split them up (the 2 for apps/games I mean) however you like. It doesn't have to be HUD + game. It can also be foreground + background in theory.
 
Just thinking about it, Kinect 2 is going to require resources regardless of whether you use it or not, and it needs to be running constantly. I can't give you hard numbers, but it should be obvious that it reserves more.

For the sake of accuracy in this discussion, it's worth pointing out that Kinect 2 will have zero impact on X1 performance, as it has a lot of dedicated hardware for audio and video.
It is not just a simple HD camera + microphone (sold separately).

bkillian can tell you more, but in any case there is a lot of info about it around by now.
 
AFAIK that is not entirely true.
- there is dedicated hardware for the audio inside the SoC.
- there is dedicated hardware for the ToF camera inside Kinect2, which (I believe) also does 'registration' on the video and maybe clever bits with the microphones etc. (this chip is clearly a bit special).

However, AFAIK, the Kinect2 skeletal analysis occurs within the CPU/GPU (the Yukon document had a separate resource for this, but it is not present in any of the Durango documents). This almost certainly relates to the rumoured/leaked '10% GPU reservation'.
 
We already know that, though. It's not new info or anything. PS4 will also have a similar CPU/GPU reserve for the OS. We are pretty sure the CPU reserve is 2 CPU cores, as that's exactly what GG showed reserved in their KZ tech presentation outline. Without any additional info to discuss, it seems like a stretch to suddenly argue that the PS4 will have fewer OS resources dedicated to it in that manner.

System Allocations

On Durango, from the POV of allocations, the NUI architecture is split into two parts.
Core Kinect functionality that is frequently used by titles and the system itself are part of the allocation system, including color, depth, active IR, ST, identity, and speech. Using these features or not costs a game title the same memory, CPU time, and GPU time. These features also provide advantages. For example, the identity system will run across application switches because it is handled by the system, not individual applications, and avoids having to re-engage and sign-in repeatedly.
Functionality used less often has its allocation managed in a pay-per-play model. For example, registering color to depth and active IR (or the other way around) as an infrequently used operation will cost the title some small amount of CPU time.

http://www.vgleaks.com/durango-next-generation-kinect-sensor/
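Read literally, that split is just two accounting schemes: a fixed reservation the title pays regardless, plus per-use costs for infrequent operations. A tiny C++ sketch of the distinction, with entirely made-up millisecond figures:

```cpp
#include <cstdio>

// Toy model of the two accounting schemes described above: core NUI features
// sit inside a fixed reservation the title pays whether or not it uses them,
// while "pay-per-play" features only cost CPU time when actually invoked.
// Every millisecond figure below is invented purely for illustration.
struct FrameBudget {
    float fixedReserveMs = 1.0f;   // always charged (core Kinect features)
    float payPerPlayMs   = 0.0f;   // accumulated only when optional work runs

    void registerColorToDepth() { payPerPlayMs += 0.2f; }   // infrequent op
    float totalMs() const { return fixedReserveMs + payPerPlayMs; }
};

int main() {
    FrameBudget idleTitle;          // never touches the optional features
    FrameBudget mappingTitle;       // occasionally registers color to depth
    mappingTitle.registerColorToDepth();

    std::printf("idle title pays    %.1f ms\n", idleTitle.totalMs());
    std::printf("mapping title pays %.1f ms\n", mappingTitle.totalMs());
    return 0;
}
```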
 