Display planes use

Alucardx23

This might be a stupid question, but here it goes. Take this comment from the VGleaks article about Durango display planes:

“The GPU does not require that all three display planes be updated at the same frequency. For instance, the title might decide to render the world at 60 Hz and the UI at 30 Hz, or vice-versa. The hardware also does not require the display planes to be the same size from one frame to the next.”

Could this be used to decouple gameplay framerate from game framerate? Taking a first-person shooter as an example, could the first plane consist of the gun and HUD running at 60 frames while the rest of the game on the second plane runs at 30 frames, or as high as the game can display it? This would mean that the game response would not be affected by framerate slowdown, at least not as much as in a traditional game where everything is running on the same frame buffer. I use a first-person shooter as an example, but I think this could be used for any type of game.

http://www.vgleaks.com/durango-display-planes/
 
Could this be used to decouple gameplay framerate from game framerate?
For many games this is already the case, and the display planes don't really help with this.
 
Many games already have a fixed internal refresh rate for simulation, input handling, etc. The "display" refresh rate is like a rapid succession of snapshots of this internal state.
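
A minimal sketch of that pattern (my own illustration, not from any particular engine): the simulation and input tick at a fixed 60 Hz, and the renderer just draws a snapshot of whatever the last completed tick produced, at whatever rate it manages.

#include <chrono>

struct GameState { /* player, camera, entities ... */ };

void pollInput(GameState&) { /* read the pad */ }
void updateSim(GameState&, double /*dt*/) { /* gameplay logic */ }
void render(const GameState&) { /* draw the snapshot */ }

int main() {
    using clock = std::chrono::steady_clock;
    const double TICK = 1.0 / 60.0;          // fixed simulation step
    double accumulator = 0.0;
    auto previous = clock::now();
    GameState state;

    for (;;) {                               // main loop, runs until killed in this sketch
        auto current = clock::now();
        accumulator += std::chrono::duration<double>(current - previous).count();
        previous = current;

        while (accumulator >= TICK) {        // run 0..n fixed ticks per displayed frame
            pollInput(state);
            updateSim(state, TICK);
            accumulator -= TICK;
        }
        render(state);                       // display rate = how often we snapshot the sim
    }
}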

Further, if you want to draw the gun at 30fps and everything else at 60fps, you can already do that now. You just draw the gun to an offscreen texture and update it at a slower fps. Of course, without display planes this is not likely to be very beneficial. However, IMHO display planes are more likely to be used for specific effects, such as showing a video (sort of like the good old "overlay"), and allowing the system to show some system UI (e.g. if someone sends you a message through Xbox Live) without interfering with the game too much.
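
For what it's worth, a rough sketch of that "slower offscreen overlay" idea. All of the renderer calls here are hypothetical placeholders rather than a real graphics API; the point is just the update cadence and the composite.

struct Texture {};
Texture hudTexture, backBuffer;

void beginTarget(Texture&) { /* bind render target */ }
void endTarget() {}
void drawGunAndHud() { /* overlay pass */ }
void drawWorld() { /* main scene pass */ }
void drawFullScreenQuad(const Texture&) { /* alpha-blend the overlay on top */ }

void renderFrame(unsigned frameIndex) {
    if ((frameIndex & 1) == 0) {          // refresh the overlay only every other frame (~30 Hz)
        beginTarget(hudTexture);
        drawGunAndHud();
        endTarget();
    }
    beginTarget(backBuffer);
    drawWorld();                           // the world is still rendered every frame (~60 Hz)
    drawFullScreenQuad(hudTexture);        // composite the (possibly stale) overlay on top
    endTarget();
}

With Durango-style display planes, that last blend step would be handled by the display hardware at scan-out instead of by a draw call, which is essentially the only change.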
 
This sounds pretty interesting, to say the least; I'd never thought of this before.

I also know this will probably sound pretty bizarre, but what do you think about display planes being used to intelligently lower a game's resolution, in a not-too-drastic way, at the very moment an impressive explosion goes off? The developer may decide that, for that brief period of time, they want extra GPU horsepower committed to making that initial explosion as impressive as humanly possible.

As opposed to display planes being used as a jack-of-all-trades tool, where resolutions are dynamically jumping all over the place for all kinds of different reasons, is it possible for this to be done sparingly, in ONE specific instance, such as the more impressive explosion example I listed?

Regardless of how well it would work, is it, simply put, just impossible to do?
 
They don't provide any functionality that makes this easier.
Basically decoupling gameplay from rendering just requires you to double- or triple-buffer the application render state and to separate the rendering and game logic threads. PC games have been doing this since as far back as the original Quake. Hell, some even interpolate gameplay frames if gameplay is the bottleneck, much like a multiplayer client.
The downside is that it increases latency, and we already have lots of that.
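
A deliberately simplified sketch of that decoupling (illustrative names, and a lock plus a copy instead of true double/triple buffering, but the shape is the same): the game-logic thread ticks at its own rate and publishes snapshots, while the render thread draws the newest snapshot it can get.

#include <atomic>
#include <mutex>
#include <thread>

struct GameState { /* positions, camera, animation time ... */ };

std::atomic<bool> running{true};
GameState published;                     // latest completed gameplay state
std::mutex publishLock;

void stepSimulation(GameState&) { /* input + gameplay tick */ }
void renderFrame(const GameState&) { /* draw it */ }

void gameLogicThread() {
    GameState working{};
    while (running) {
        stepSimulation(working);                    // fixed-rate gameplay tick
        std::lock_guard<std::mutex> g(publishLock);
        published = working;                        // hand a snapshot to the renderer
    }
}

void renderThread() {
    while (running) {
        GameState snapshot;
        {
            std::lock_guard<std::mutex> g(publishLock);
            snapshot = published;                   // renderer may see the same state twice
        }
        renderFrame(snapshot);                      // or interpolate between two snapshots
    }
}

int main() {
    std::thread sim(gameLogicThread), render(renderThread);
    sim.join();                                     // a real game would clear 'running' on quit
    render.join();
}

A real engine would swap buffer indices rather than copying, and interpolating between two published snapshots is exactly what adds the extra frame of latency mentioned above.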

In your example of an FPS you still need to turn to aim, so rendering the gun separately probably isn't a win, though it might be if you used dynamic resolution for everything else.

The concept of rendering parts of the scene at different rates isn't a new one; many Amiga games used the trick. For example, Blood Money moves the player craft at 60fps regardless of the rendering rate of the rest of the game.

You could also look back at the Talisman work, where parts of the scene could be updated much less frequently without significant loss of visuals.

I'm sure some games will come up with inventive ways to use it, but the obvious one is HUD/game, with additional value if the game is using dynamic resolution. In fact, I'd guess it's enough of a win that lots of games will use dynamic resolution that probably wouldn't have otherwise.
 

Thanks for the explanation ERP, but in my experience I haven't seen or felt any game doing this, at least in console games, where I play the most.

"In your example of an FPS, you still need to turn to aim"

Didn't get this part :?:. Taking Call of Duty as an example, a game that tries to run at 60 frames but drops to 30-something fps when things get hectic, you can feel the controller response getting sluggish right away when the slowdown happens. Why would it not be a "win" to not feel that and to always have the controller response locked at 60 frames, so that no matter what is happening on the second display plane, you can always align the gun with the same stable controller response? It would be the same for a fighting game, for example, where you could have the background running at 30 frames with better graphics; why render the whole screen at 60 frames when you can only affect your character with the controller? I know I might be missing something. :smile:
 
Didn't get this part :?:. Taking Call of Duty as an example, a game that tries to run at 60 frames but drops to 30-something fps when things get hectic, you can feel the controller response getting sluggish right away when the slowdown happens. Why would it not be a "win" to not feel that and to always have the controller response locked at 60 frames, so that no matter what is happening on the second display plane, you can always align the gun with the same stable controller response? It would be the same for a fighting game, for example, where you could have the background running at 30 frames with better graphics; why render the whole screen at 60 frames when you can only affect your character with the controller? I know I might be missing something. :smile:

In an FPS you can move the whole environment, so that is not practical. A fighter might work better, but you would still have lots of limitations on moving the "camera".
 
"The use of multiple screen rectangles can reduce memory and bandwidth consumption when a layer contains blank or occluded areas."

This is the more interesting part.
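
As a rough back-of-the-envelope illustration (my numbers, not from the article): a full 1920x1080 plane at 32 bits per pixel is about 8 MB, and scanning it out 60 times a second costs roughly 0.5 GB/s. If a HUD layer only really occupies, say, a 1920x270 strip along the bottom of the screen, describing it as that smaller rectangle cuts the layer's footprint and scan-out bandwidth to about a quarter of that.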

The concept of rendering parts of the scene at different rates isn't a new one; many Amiga games used the trick. For example, Blood Money moves the player craft at 60fps regardless of the rendering rate of the rest of the game.

Blood Money, Shadow of the Beast, Impossible Mission, etc...
 
From what I remember, the parallax scroll was at 50fps and the character a lot lower, maybe 4-5 fps; the central background for me is about half the parallax, ~25...

It was all locked to the same frame; yes, there weren't enough animation frames for the character to update that frequently, but it was still "updated".
The central parallax may have been moving <1 pixel per frame, that I don't remember, but again it was updated at the same rate as everything else.

It was 50fps in PAL territories; the NTSC version actually ran at 60fps, but had some of the graphics changed to meet the restriction.
I actually did the Genesis port, and had access to all of the original source code; everything was updated every frame.

By comparison, Blood Money updated the player position in an interrupt; since it was a sprite, its position updated instantaneously while everything else was drawn/updated at whatever rate it could manage.

All off topic of course.
 
In an FPS you can move the whole environment, so that is not practical. A fighter might work better, but you would still have lots of limitations on moving the "camera".

When you refer to camera movement, do you mean some type of lag or disjointed look, with different planes running at different framerates? I'm trying to visualize what you're talking about.
 
Why would it not be a "win" to not feel that and to always have the controller response locked at 60 frames, so that no matter what is happening on the second display plane, you can always align the gun with the same stable controller response?
How do you update the gun at 60fps but not the view? If you turn to the right, the gun is effectively stationary while the view updates at 30 fps. Redrawing the gun twice as fast isn't going to improve responsiveness. The closest you'd get in an FPS might be rendering moving clouds at a lower speed, or, for something weird, drawing the scenery at 30 fps and characters at 60. I expect that'd look remarkably odd.

But to draw this thread to a close, the display planes don't enable anything new. There's nothing stopping any developer rendering and compositing at whatever framerates they want; the Display Plane Unit just frees the GPU/CPU from doing the composite. As a real-world example, I have seen 15 fps reflection updates in a last-gen racer. It might make things a little easier and so promote the use of decoupled rendering, so that, for example, Borderlands' HUD could have been a super-smooth 60 fps instead of juddering along with the rest of the game, but nothing new is enabled by this hardware.
 
I actually did the Genesis port, and had access to all of the original source code; everything was updated every frame.


Wow, unbelievable: you developed the first Genesis game I ever picked out on my own as a kid. I was 7 at the time. I picked it up thinking it had a connection to the game that came with my Genesis, Altered Beast.
 
How do you update the gun at 60fps but not the view? If you turn to the right, the gun is effectively stationary while the view updates at 30 fps. Redrawing the gun twice as fast isn't going to improve responsiveness. The closest you'd get in an FPS might be rendering moving clouds at a lower speed, or, for something weird, drawing the scenery at 30 fps and characters at 60. I expect that'd look remarkably odd.

Let's put it this way:

Example 1: you have a simple game where you have to move a crosshair to 5 different black dots on the screen and then press the RB button; the faster you do it, the more points you get. In this example the game struggles to maintain 60 frames and goes anywhere from 15 to 60 fps. Everyone who has played a game with an unstable framerate knows this can be disorienting and makes it difficult to know where you are on the screen, since you are not receiving enough visual information about where the crosshair is when the framerate is low.

In the second example you have the same game, but the crosshair is on a separate plane from the 5 dots; the plane with the crosshair runs at 60 frames, while the second plane with the 5 dots has the same erratic framerate. This is the basic idea: the first plane, where the crosshair is, ensures that you always get the same crisp controller response every time.

So with that example in mind, let's look at this image from Call of Duty. How much better would the gameplay experience be when you are trying to point at someone with the crosshair running at a solid 60 frames, versus the way the game runs now with an unstable framerate?
[Image: shot0071.jpg]
 
Durango actually has a little bit of oldskool 16-bit console/computer flavor, with all this wonky, semi-useful fixed-function stuff... Not that I really see the benefit of these planes; you can pretty much do all of what is mentioned in the OP link already with current hardware. It's like they're including this stuff for the hell of it, because it's probably taking up just a teensy-tiny bit of silicon on the die, so why not, right? If it can help in some corner case, all the better...

It was all locked to the same frame; yes, there weren't enough animation frames for the character to update that frequently, but it was still "updated".
So the central character wasn't a set of hardware sprites then? I always assumed it was, since it used so few colors.

Interesting side-note: whenever the player turned around from left to right, the game slowed down enormously, probably because there wasn't the RAM to hold mirror images of the sprites, and neither the sprite hardware nor the Amiga's blitter could flip imagery. So it seems they did the flipping on the main CPU instead, and it was NOT fast... :D

The central parallax may have been moving <1 pixel per frame, that I don't remember, but again it was updated at the same rate as everything else.
There was probably no other practical way to handle all the parallax layers than to draw everything regardless of whether a layer actually moved that frame. The Amiga had a version of these display planes (it called them dual playfields), and I was actually thinking of this old Amiga hardware feature as I read the thread title, but dual playfield use had some pretty major restrictions in color fidelity, since the bitplanes were split between the two playfields instead of doubled up. The resolution also had to be the same for both playfields, I believe.

I actually did the genesis port
W00t. I never owned a Mega Segadrive (sorry! ;)), but I did play Shadow of the Beast A LOT. It wasn't a very good game, but it sure was pretty. Fascinating to hear you were involved with it, even if only peripherally...

Fascinating side-note for 16-bit anoraks: there was actually also a sidescroller called Unreal on the Amiga, released maybe a year later, which claimed awesome graphics (like its PC namesake a decade later) AND better gameplay than the rather simplistic SotB. I only played a demo of that game; it didn't have super-parallaxed backgrounds, but it was still very pretty for its time.

All off topic of course.
Bah. Oldskool computers and games are never OT! Whippersnappers of today don't understand what they missed by not having been born yet! :LOL:
 
Neat, I like this!

I'm hoping this would remove the need for lame TCRs regarding safe frames for UIs. Allow developers to finally use the whole screen's real estate for UI (pushed to the edges), and put the UI scaling options in the OS. In fact, if you have developers split their UI into 4 quadrants, you can do some smarter shifting (rather than scaling) of those UI layers as they move out toward/in from the edges.
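
A tiny sketch of that "shift rather than scale" idea (hypothetical code, not any real API): each UI quadrant is anchored to a screen corner, and an OS-level safe-area inset just slides it toward the centre instead of rescaling the whole UI plane.

struct Rect { int x, y, w, h; };

Rect applySafeAreaInset(Rect quad, int insetX, int insetY, bool rightEdge, bool bottomEdge) {
    quad.x += rightEdge  ? -insetX : insetX;   // slide horizontally toward the centre
    quad.y += bottomEdge ? -insetY : insetY;   // slide vertically toward the centre
    return quad;                               // size is untouched, so no rescaling blur
}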

Oh, and of course 120Hz minimaps, finally (jk).
 