*spin-off* Ryse Trade-Offs

We already know that though. It's not new info or anything. PS4 will also have a similar CPU/GPU reserve for the OS. We are pretty sure the CPU reserve is 2 CPU cores, as that's exactly what GG reserved in their KZ tech presentation outline. Without any additional info to discuss, it seems like a stretch to suddenly argue that the PS4 has fewer OS resources dedicated to it in that manner.

When the KZ2 document was discussed, someone suggested that reserving fewer than 2 cores might be a bad idea. (I think it was to do with 'shared L2 caches' or something?) Anyway, in terms of CPU the 2 consoles are believed to be identical/similar. In terms of GPU reserve, the XB1's is almost certainly "a bit higher". (Skeletal analysis is a very complex task; finding the middle of a green blob in a 1080p image isn't.)

But I'm not sure it makes much difference in the overall scheme of things - at this point "the proof of the pudding is in the eating" :).
 
I don't think the OS necessarily needs to be spamming the foreground in order for there to be a valid use case for the planes to exist. All they really need is a use case where there is a HUD and then a notification or something from the OS. If what you say is true, why have any DPs at all, why does the XBO have 3, and what are the use cases for them?
XB1 is, I understand, expected to have more on screen at once, such as PIP and docked apps. XB1 is intended to have more than just the game showing, so a display plane for that makes sense. That leaves two for the game, one for HUD and one for the dynamically scaled game. PS4, as I currently understand it, will only have sporadic OS elements drawn. Existing PS3 notifications aren't that common during a game, and if they are only as high resolution as the game (which'll be 720p minimum in all likelihood), that's good enough. I'm not sure pin-sharp 1080p notifications when a friend logs on are worth a display plane. It may be, but personally I'd look to expand use of that hardware into something more versatile. Hence my idea described later in this post...

This is done all the time in modern games as is. Backgrounds are just that: areas players don't focus on at all. There is no loss in visual fidelity from lowering res when modern games have lower-res backdrops in the first place.
I first read Brad's post as you seem to have, but he has some legitimate points that caused me to terminate my reply. If the background is decided by a depth test, and everything beyond a certain distance is drawn in the low-res buffer, then you'll have a blurry-to-sharp transition that'll be abrupt at the threshold, if implemented in that way. If the threshold is far enough, like a skybox or a mile away, then it won't matter, but if the distance is set very far then the savings will be minimal. Of course, one could render the foreground with a transparency mask covering a range of transition depths, eliminating the hard transition.
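
To illustrate the sort of mask I mean, here's a minimal sketch (all names and thresholds are made up, purely to show the blend): the foreground pass writes an alpha that ramps off across a depth band, so the composite fades into the low-res plane instead of cutting hard at the threshold.

Code:
// Hypothetical sketch: alpha for the sharp foreground plane as a function of
// view depth, fading out across a band so the composite against the low-res
// background plane has no hard edge. Names and thresholds are illustrative only.
#include <algorithm>

float foregroundAlpha(float viewDepth, float fadeStart, float fadeEnd)
{
    // 1.0 = fully foreground (sharp plane), 0.0 = fully background (low-res plane).
    float t = std::clamp((viewDepth - fadeStart) / (fadeEnd - fadeStart), 0.0f, 1.0f);
    float s = t * t * (3.0f - 2.0f * t);   // smoothstep for a soft ramp
    return 1.0f - s;
}

In practice you'd evaluate something like this per pixel in the foreground pass and let the compositing step do the alpha blend between the two planes.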

I had speculated about that a while back. That OS plane taking the place of the HUD would need to be able to update alongside the gameplay though. Not sure how doable that is.
If I were designing the system (and as such, whatever I suggest is not going to be how it is, because my clever ideas never match with reality ;)), I'd have a UI API with vector and bitmap based UI components. Rather than devs having to make their own, they'd just plug into the OSUI layer. The OS would update the UI layer 60 fps regardless, allowing for smooth animations and transitions. It'd draw all game HUD stuff first and OS stuff over the top, optimised with suitable culling of course. You could provide a standard OS experience alongside customised game UIs, save developers effort, and maintain a QoS independent of the game. e.g. game invites could have a common interface (avatar lists, notifications) that a game just skins, instead of the current PS3 version where devs fetch whatever data from PSN and present it however the devs choose, or the system splats obtrusively over the top.
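
Purely as a sketch of what I'm describing (nothing like this exists in any SDK as far as I know; every name here is hypothetical): the OS owns the layer and ticks it at its own rate, and games just register skinned widgets into it.

Code:
// Made-up sketch of an OS-owned UI layer that games plug HUD widgets into.
// The OS ticks and draws this at 60 fps regardless of the game's frame rate.
#include <memory>
#include <vector>

struct UIElement {
    virtual ~UIElement() = default;
    virtual void update(float dt) = 0;   // animations/transitions driven by the OS
    virtual void draw() const = 0;
};

class OSUILayer {
public:
    void add(std::shared_ptr<UIElement> e) { elements_.push_back(std::move(e)); }

    // Called by the OS every vsync. Elements draw in registration order, so a
    // game adds its HUD widgets first and OS notifications added later land on top.
    void tick(float dt) {
        for (auto& e : elements_) e->update(dt);
        for (auto& e : elements_) e->draw();
    }

private:
    std::vector<std::shared_ptr<UIElement>> elements_;
};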
 
I first read Brad's post as you seem to have, but he has some legitimate points that caused me to terminate my reply. If the background is decided by a depth test, and everything beyond a certain distance is drawn in the low-res buffer, then you'll have a blurry-to-sharp transition that'll be abrupt at the threshold, if implemented in that way.

You already have that in games anyways. Also, the backgrounds are still being scaled PRIOR to compositing them. See the link I posted.

Here's an example of what I mean when I say games do that anyhow:

http://s.pro-gmedia.com/videogamer/...hadow_fall/screens/killzone_shadow_fall_3.jpg

It's a bit awkward as the camera is at a tilt, but you can easily see the very definite demarcation between foreground/background via the notably differing image clarity just behind the debris there. We aren't talking about a scenario where you have nothing to lay on top of the background, thus exposing the demarcation line (that'd be a case of lacking a foreground, in which case that part of the screen would just be rendered in the foreground plane as is). If devs are already doing DoF stuff anyhow, then lowering the res shouldn't be problematic for aliasing, as those pixels, even if they were visible, would be blurred over anyhow.

If the threshold is far enough, like a skybox or a mile away, then it won't matter, but if the distance is set very far then the savings will be minimal. Of course, one could render the foreground with a transparency mask covering a range of transition depths, eliminating the hard transition.

This is kinda what I was getting at, depending on the scene. I'd actually be interested in knowing what kinda resolution drops one could get away with on backgrounds as a function of distance from the camera before it looks bad. I'd imagine you could REALLY cull some pixels/processing potentially.
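
Just as a rough, purely illustrative back-of-envelope: if the background plane covers a fraction f of the screen and is rendered at linear scale s, the pixels shaded relative to doing everything at full res come out to roughly

\text{cost} \approx (1 - f) + f\,s^2

so e.g. f = 0.5 and s = 0.5 gives 0.5 + 0.5 \times 0.25 = 0.625, i.e. about 37% fewer pixels shaded, ignoring overdraw and per-vertex work. The win grows quickly with how much of the frame you're willing to call "background".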

If I were designing the system (and as such, whatever I suggest is not going to be how it is, because my clever ideas never match with reality ;)), I'd have a UI API with vector and bitmap based UI components. Rather than devs having to make their own, they'd just plug into the OSUI layer. The OS would update the UI layer 60 fps regardless, allowing for smooth animations and transitions.

Eh, you'd be wasting those small resources on that kinda framerate most likely. HUDs can be low fps without issue. You could push a tad further to max out that 10% GPU OS reserve probably.

My own idealization that I'd like to see devs play with is for the gun, vehicle, arm/hand, etc in an FPS. Call that part of the "HUD" plane and see how that works. OR do the more distant stuff like you mentioned (for relatively minimal savings) OR do something less distant like the pic I posted of KZ. It can easily be on a case by case basis too (hell, even frame by frame basis if need be). Of course then it comes down to dev input.

My only concern with what you suggest by putting HUD and OS together is what happens when you need to use that OS overlay for something like, say, Skype? Or better yet, snapping the Twitch app? You still have the game running in 75% of the screen, but what happens to its HUD now that the app is running on the OS plane's GPU resources?

I do like the concept of customized game HUDs though, and maybe that can be a more common thing on X1 now as is, but it may go against the theme MS has put forward regarding running apps side by side with games. Then again, perhaps devs can run that as a companion app or something? Seems a bit convoluted to me.
 
I first read Brad's post as you seem to have, but he has some legitimate points that caused me to terminate my reply. If the background is decided by a depth test, and everything beyond a certain distance is drawn in the low-res buffer, then you'll have a blurry-to-sharp transition that'll be abrupt at the threshold, if implemented in that way. If the threshold is far enough, like a skybox or a mile away, then it won't matter, but if the distance is set very far then the savings will be minimal. Of course, one could render the foreground with a transparency mask covering a range of transition depths, eliminating the hard transition.

If all you intend to do is render an already low fidelity skybox at a lower resolution you'll just make it look bad and save almost no performance. It's a total pipedream.

You already have that in games anyways. Also, the backgrounds are still being scaled PRIOR to compositing them. See the link I posted.

Here's an example of what I mean when I say games do that anyhow:

http://s.pro-gmedia.com/videogamer/...hadow_fall/screens/killzone_shadow_fall_3.jpg

It's a bit awkward as the camera is at a tilt, but you can easily see the very definite demarcation between foreground/background via the notably differing image clarity just behind the debris there. We aren't talking about a scenario where you have nothing to lay on top of the background, thus exposing the demarcation line (that'd be a case of lacking a foreground, in which case that part of the screen would just be rendered in the foreground plane as is). If devs are already doing DoF stuff anyhow, then lowering the res shouldn't be problematic for aliasing, as those pixels, even if they were visible, would be blurred over anyhow.

The display planes don't interact. You can't do a high-quality depth of field effect at the higher resolution to blur out the lower-resolution plane. So your choice is a low-quality effect that will be pixellated compared to the sharper foreground objects, plus the enormous hassle of setting this absurd scheme up in the first place (and even then it breaks down any time your foreground objects need to interact with the background), or just doing it regular style and saving performance in a normal fashion.
 
If all you intend to do is render an already low fidelity skybox at a lower resolution you'll just make it look bad and save almost no performance. It's a total pipedream.



The display planes don't interact. You can't do a high-quality depth of field effect at the higher resolution to blur out the lower-resolution plane. So your choice is a low-quality effect that will be pixellated compared to the sharper foreground objects, plus the enormous hassle of setting this absurd scheme up in the first place (and even then it breaks down any time your foreground objects need to interact with the background), or just doing it regular style and saving performance in a normal fashion.

In regards to your previous remark about mipmap transitions: is that because you might select a lower mipmap when rendering at a lower resolution, thus causing a jarring effect at the seam where the two buffers (lower and upper) are composited, if different mipmap levels are chosen for each?

I'm sorry if this makes no sense; graphics aren't exactly my forte.
 
Shifty Geezer said:
Rather than devs having to make their own, they'd just plug into the OSUI layer.
The vast majority of game UIs are NOT an independent floating layer above the "game world".
UI is also an integral component of UX, which typically means it's heavily interleaved with application state (or even in direct control of app state), making it a poor fit for the async-refresh part too.
Finally, OS-level APIs are only a win for platform-exclusive developers, so short of TRC mandates this would never see widespread adoption even if it did work.
 
In regards to your previous remark about mipmap transitions: is that because you might select a lower mipmap when rendering at a lower resolution, thus causing a jarring effect at the seam where the two buffers (lower and upper) are composited, if different mipmap levels are chosen for each?

I'm sorry if this makes no sense; graphics aren't exactly my forte.

When I referenced mipmap transitions I was thinking of the old days before trilinear filtering where bilinear filtering would produce hard seams between each level. You would run down a hall and there would be like three different seams of demarcation that would move down the hall with you.

Here's the best image example I could find:

w0W9aLF.jpg
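
To tie that back to the mipmap question above (hand-wavy, but roughly how LOD selection works): the level comes from the screen-space UV derivatives,

\lambda = \log_2\!\left(\max\!\left(\left\|\tfrac{\partial(u,v)}{\partial x}\right\|,\;\left\|\tfrac{\partial(u,v)}{\partial y}\right\|\right) \cdot N_{\text{tex}}\right)

and halving the render resolution roughly doubles those derivatives, so \lambda goes up by about one. That means the low-res buffer could end up a full mip level blurrier than the full-res buffer it's composited against, which would show up right at the seam.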


Some of the theories I've seen floated about the display planes included ideas like rendering the center of the screen at a higher resolution than the periphery, but when you try to composite that together you'd just end up with a hard, visible transition between the center box (or diamond) and the rest of the frame. Even if you use the same quality assets and filtering, the clarity and detail difference would be apparent, and I think a distracting artifact. I can't think of a practical way to solve that.

Even if you just do a matte-style overlay of some foreground object I think it would probably result in a weird green-screen effect based on the differing resolutions.
 
I figured this fit the theme of this thread more than the game thread.

5sQrRwJ.jpg

crytek-ryse.jpg


What's going on here? There seems to be a 65k triangle-count change. Anyone got a good explanation for it?
 
I figured this fit the theme of this thread more than the game thread.

5sQrRwJ.jpg

crytek-ryse.jpg


What's going on here? There seems to be a 65k triangle-count change. Anyone got a good explanation for it?
Interesting... I wouldn't know the truth even if it slapped me in the face from a different dimension, but I wonder where the changes come from and the date of those changes. Who was first? Like... Which came first, the chicken or the egg? The picture at the top or the screenshot below?

Marius looks more real in the downgraded picture at least. Perhaps they found a more intelligent implementation to draw the same character with fewer polygons.
 
Maybe the top was running on PC/devkits and the bottom number is from running on actual hardware? I doubt people will notice the difference.

Interesting... I wouldn't know the truth even if it slapped me in the face from a different dimension, but I wonder where the changes come from and the date of those changes. Who was first? Like... Which came first, the chicken or the egg? The picture at the top or the screenshot below?

Marius looks more real in the downgraded picture at least. Perhaps they found a more intelligent implementation to draw the same character with fewer polygons.

It's from this: http://venturebeat.com/2013/09/25/c...etween-cinema-and-game-with-ryse-on-xbox-one/

so it's fairly recent.
 
My guess is someone pointed out that the first slide's tri count was wrong and it was thus updated in the second.

The question is, which one was the first slide?
 
My guess is someone pointed out that the first slide's tri count was wrong and it was thus updated in the second.

The question is, which one was the first slide?

It makes much more sense that the first slide's number was wrong and was then updated to the proper number on the second. As someone said, the move from dev kit to final hardware doesn't make any sense, because the other numbers would have scaled with it, not to mention the hardware spec has been final for a good while. In addition, devkits most likely had an AMD 7790 or better GPU, which all have dual geometry engines just like the Xbox One.
 
Theory on GAF is the "full armor" model uses that many triangles, whereas the one without it in the scene below uses fewer. Seems odd to have 2 slides for different outfits, but I'd guess armor could be very triangle heavy.
 
Theory on GAF is the "full armor" model uses that many triangles, whereas the one without it in the scene below uses fewer. Seems odd to have 2 slides for different outfits, but I'd guess armor could be very triangle heavy.

Wouldn't you mention that though? 150k is massively more and looks far more impressive.
 