Xbox One (Durango) Technical hardware investigation

I just thought of something that would be fun to see in a game.

You know how whenever you run across a TV in game it always shows some looped, generally low resolution video or series of images?

Combined with the HDMI in on Durango, it'd be possible for those TVs to display a live video feed of the user's choosing. So, in theory, every time you ran across a TV set in a game, it could be displaying a real TV stream that is currently airing. That would go a long way toward making the game world that much more immersive.

Regards,
SB
 
I'd wait to see the final hardware before declaring MS engineers incapable of solving this issue...

Not incapable, no. It appears difficult to utilize all the features at the same time at full tilt; most engine developers will probably pick only a subset, in line with their strategic engine plans and guided by their budgets for "optimization vs. portability". That isn't bad per se; you're not forced at gunpoint to use them all. I believe 2D engines, for example, would be quite happy with some of the features.
It may turn out to be a successful jack-of-all-trades platform, and I'd hope it can be positively understood as that, instead of being downplayed as silicon wasteland in the upcoming vs. discussions and reviews.
 
I think we are starting to know what's inside the Durango. It seems to me that there are a lot of dedicated chips, just like the Super Nintendo. And you know how it turned out....
 
Doesn't the display planes function revolve around MS's version of Google Glass? A plane for each eye?

Well... The HUD plane could be "thrown" to a SmartGlass display for example.

Since the HUD layer is separate, presumably they could also tweak the theme or even combine them where the OS sees fit (for in and out of app/game use).
 
EDIT:
If my guess is correct, then people probably can't see the real benefits of these display planes until they see the Durango OS running for real. I think resource saving is secondary (and there are software substitutes); these display planes should be able to enable a new and consistent user experience in the final OS. ^_^

You should read the patent. It is pretty clear what the intended/expected uses of this tech are from that document I think.

http://www.faqs.org/patents/app/20110304713

[0001] It is generally thought that video content items, such as rendered graphics in video games, are of higher quality when displayed at relatively high resolutions with relatively high refresh rates. However, when device hardware is strained by complicated rendering, refresh rates may suffer. While resolution may be sacrificed in order to maintain a desirable refresh rate, rendering at lower resolutions may result in an unfavorable viewing experience if the content appears noticeably degraded (e.g., pixelated). By contrast, other video content items, such as text overlays and graphical user interface (GUI) elements, are known to suffer quality degradation when rendered at lower resolutions.

...

[0012] As introduced above, in some cases resolution may be sacrificed in order to maintain a desirable update rate for rendered content, yet maintaining a high display refresh rate so as to avoid flickering. As an example, video may be rendered at a slightly lower resolution to decrease the effective pixel rate, which allows the graphics processing unit (GPU) of a device to render at a faster refresh rate. However, different types of content may be noticeably more or less affected than other types of content when resolution is decreased. For example, a video game displaying fast moving game content (e.g., a battle scene) may not yield noticeable visual artifacts when the resolution is lowered to maintain a desired refresh rate, whereas a heads up display (HUD) having detailed and relatively static content, such as text, a chat window, etc., may become noticeably pixelated at a lower resolution.


It is pretty clear they are looking at dynamic res/fps/HDR/etc on the 2 application (games) planes. The goal, presumably, would be to have the HUD plane be 1080p always with dynamic fps. The game plane would be dynamic res with locked 30fps (or 60fps perhaps). By the sound of your theory there it would seem to make one of the frames wholly worthless, which is pretty clearly not the case.

My question is what exactly can devs choose to put in these planes? Could the HUD plane also include the hand/gun in an FPS? Could devs find ways to layer the game world so one plane displays the world's foreground and the other the background? The patent has images that seem to depict a racing game setup, which includes the steering wheel in the HUD plane. So what about the rest of the car's interior, for instance? I mean, if they seem to indicate a steering wheel...:?:

Here is the image I am referring to: http://www.faqs.org/patents/imgfull/20110304713_02

There are also applications for 3D gaming and streaming to other devices.
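
To make that split concrete, here is a minimal sketch, assuming a simple frame-time feedback rule: the HUD plane stays pinned at native 1080p while the game plane's render resolution chases a GPU budget. The struct, the controller and the numbers are hypothetical illustrations, not anything taken from the patent or the actual hardware.

```cpp
// Hypothetical sketch: the HUD plane is kept at native 1080p while the game
// plane's render resolution is scaled to chase a frame-time budget.
// Names and the feedback rule are illustrative, not Durango's actual API.
#include <algorithm>
#include <cstdio>

struct PlaneParams {
    int width, height;   // resolution the plane is rendered at
    bool dynamic;        // may the compositor/driver rescale it?
};

int main() {
    const double targetMs = 33.3;                 // 30 fps budget
    PlaneParams hud  {1920, 1080, false};         // text/HUD stays sharp
    PlaneParams game {1920, 1080, true};          // main scene may drop resolution
    double scale = 1.0;

    // Fake GPU frame times (ms) standing in for a real profiler query.
    const double frameTimes[] = {30.0, 36.0, 41.0, 35.0, 29.0, 27.0};

    for (double gpuMs : frameTimes) {
        if (game.dynamic) {
            // Simple proportional controller on the fill-rate-bound cost.
            scale *= targetMs / gpuMs;
            scale = std::clamp(scale, 0.5, 1.0);
            game.width  = int(1920 * scale) & ~1;  // keep dimensions even
            game.height = int(1080 * scale) & ~1;
        }
        std::printf("gpu %.1f ms -> game %dx%d, hud %dx%d\n",
                    gpuMs, game.width, game.height, hud.width, hud.height);
    }
}
```

Running it, the game plane's resolution falls and recovers with the fake frame times while the HUD line never changes, which is the behaviour the patent excerpts seem to be after.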
 
Oops! Ignore my comment about that image depicting the steering wheel as part of the HUD plane. It's actually part of the game plane. Still re-reading the patent's details. Sorry about that! :/
 
The merge hardware feature is a classic in the console world.
For example, GT5's 1280x1080 rendering is upscaled (in hardware) and merged (in hardware) with a 1920x1080 HUD; it's the hardware's job, not software's.

Another example: the merge circuit in the PS2 documentation:


[Attached images: merge1.jpg – merge4.jpg, PS2 merge circuit diagrams]

Durango's merge circuit is an enhancement of this.
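
As a rough software model of what such a merge stage does (purely illustrative, with a nearest-neighbour tap and made-up pixel values standing in for the real filter): a 1280-wide game scanline is upscaled to 1920 and the native-width HUD is alpha-blended over it in a single pass, which is essentially the job the scanout/merge block performs.

```cpp
// Illustrative sketch of the GT5-style merge: a 1280-wide game scanline is
// upscaled to 1920 and alpha-blended under a native 1920-wide HUD scanline,
// the way a scanout/merge block would. Pixel format and the nearest tap are
// simplifications for the example.
#include <cstdint>
#include <cstdio>
#include <vector>

struct Rgba { uint8_t r, g, b, a; };

int main() {
    const int gameW = 1280, outW = 1920;
    std::vector<Rgba> game(gameW, {40, 80, 160, 255});   // "scene" colour
    std::vector<Rgba> hud(outW, {0, 0, 0, 0});           // transparent HUD line
    for (int x = 800; x < 1120; ++x) hud[x] = {255, 255, 255, 200};  // HUD text box

    std::vector<Rgba> out(outW);
    for (int x = 0; x < outW; ++x) {
        Rgba s = game[x * gameW / outW];   // horizontal upscale (nearest tap)
        Rgba h = hud[x];                   // HUD is already at output width
        int a = h.a;                       // blend HUD over the scene
        out[x] = { uint8_t((h.r * a + s.r * (255 - a)) / 255),
                   uint8_t((h.g * a + s.g * (255 - a)) / 255),
                   uint8_t((h.b * a + s.b * (255 - a)) / 255),
                   255 };
    }
    std::printf("out[0]=(%d,%d,%d) out[960]=(%d,%d,%d)\n",
                out[0].r, out[0].g, out[0].b, out[960].r, out[960].g, out[960].b);
}
```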
 
Quaz51, are you talking about the vertical scaler in RSX for your GT5 example? For merging, do you mean GPU blend or something else (something in the scanout engine)?
 
More info from the patents...

[0029] Turning now to FIG. 2, FIG. 2 illustrates a method 50 of outputting a video stream. At 52, method 50 includes retrieving from memory a first plane of display data having a first set of display parameters. It can be appreciated that "plane" as used herein refers to a plane (e.g., layer) of a 2D memory buffer, and thus is distinct from a plane in the traditional sense with respect to a plane of a 2D image. Planes may correspond to (e.g., be sourced by) application-generated display data or system-generated display data, resulting from, for example, frame-buffer(s) produced by the graphics core and/or other system components. Further, planes may be associated with various sources such as main sources, HUD sources, etc. and thus, the first plane may be any such suitable plane.

[0030] For example, the first plane may be an application main plane comprising an application-generated primary-application display surface for displaying primary application content (e.g., main screen of a driving game). As another example, the first plane may be a system main plane comprising a system-generated primary-system display surface for a computing system (e.g., a window displaying system messages).

[0031] The first plane has an associated first set of display parameters. Such display parameters indicate how display data of the plane is to be displayed. For example, display parameters could include resolution, color space, gamma value, etc. as described in more detail hereafter.

[0032] Further, the first plane may be retrieved in any suitable manner, such as by direct memory access (DMA). As an example, the DMA may retrieve front buffer contents from a main memory. As such, a system-on-a-chip (SoC) may be designed to deliver a favorable latency response to display DMA read and write requests. The memory requests may be issued over a dedicated memory management unit (MMU), or they may be interleaved over a port that is shared with the System GPU block requesters. The overhead of the GPU and SoC memory controllers may then be taken into account in the latency calculations in order to design a suitable amount of DMA read buffering and related latency hiding mechanisms. Display DMA requests may be address-based to main memory. All cacheable writes intended for the front buffers may optionally be flushed, either via use of streaming writes or via explicit cache flush instructions.



[0038] The video scaler may further provide for dynamic resolution adjustment based on system loading for fill limited applications. As such, the resampler may be configured to support arbitrary scaling factors, so as to yield minimal artifacts when dynamically changing scaling factors. Further, resampling may be independent on each of the sources of the planes. In such a case, a high quality 2D filter such as a high quality non-separable, spatially adaptive 2D filter may be desirable for a main plane, whereas non-adaptive, separable filters may be used for HUDs.



[0050] By performing such post-processing on a per-plane basis, attributes of the sources (e.g. color space, size, location, etc.) can change on a frame by frame basis and therefore can be appropriately buffered to prevent bleeding/coherency/tearing issues. Thus, all display planes may be updated coherently.



[0058] At 76, method 50 includes outputting the blended display data. In some embodiments, the blended display data may be output to a video encoder. However, in some embodiments, content that is formatted and composited for output may be written back into memory for subsequent use, including possible video compression. The source may be taken from any blending stage, for example, to include or exclude system planes. Alternatively, for fuller flexibility, a separate set of blenders may be added. Such outputting to memory also provides a debug path for the display pipeline.

Read more: http://www.faqs.org/patents/app/20110304713
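
Reading [0029]–[0058] together, a toy model of the flow could look like the following: each plane carries its own attribute set, is converted independently, is blended back-to-front, and the composited result can also be written back to memory (e.g. for streaming or debug). Every struct field and value here is an assumption for illustration, not the actual pipeline.

```cpp
// A sketch of the per-plane parameter set and blend order described in
// [0029]-[0058]: each plane has its own attributes, is converted
// independently, blended back-to-front, and the composited result may also
// be written back to memory. All structures and values are hypothetical.
#include <cmath>
#include <cstdio>
#include <vector>

struct PlaneDesc {
    std::vector<float> src;  // single-channel frame buffer, values 0..1
    float gamma;             // per-plane gamma of the source
    float alpha;             // global plane opacity
};

int main() {
    const int pixels = 4;    // tiny "display" for illustration
    PlaneDesc gameMain { std::vector<float>(pixels, 0.25f), 2.2f, 1.0f };
    PlaneDesc gameHud  { std::vector<float>(pixels, 1.00f), 2.2f, 0.5f };
    PlaneDesc system   { std::vector<float>(pixels, 0.00f), 1.0f, 0.0f };  // hidden

    std::vector<float> out(pixels, 0.0f);
    for (const PlaneDesc* p : { &gameMain, &gameHud, &system }) {   // back-to-front
        for (int i = 0; i < pixels; ++i) {
            float linear = std::pow(p->src[i], p->gamma);           // per-plane conversion
            out[i] = p->alpha * linear + (1.0f - p->alpha) * out[i];
        }
    }
    std::vector<float> writeback = out;  // [0058]: composited frame kept in memory
    std::printf("composited pixel 0 = %.3f (writeback copy size %zu)\n",
                out[0], writeback.size());
}
```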


 
Just from reading and my quite modest understanding, Durango is starting to remind me of the Sega Saturn and its small army of processors. If the PS4 is released first and, with its more straightforward, potentially more capable design, becomes the lead platform, I can see all these little "custom" pieces becoming a lot of extra effort to maintain parity across the two machines. I wonder how hard devs will push to achieve it. Look at the difference in effort this gen between 1st and 3rd party devs in maximizing the capability of Cell's nuances.
 
Just from reading and my quite modest understanding, Durango is starting to remind me of the Sega Saturn and its small army of processors. If the PS4 is released first and, with its more straightforward, potentially more capable design, becomes the lead platform, I can see all these little "custom" pieces becoming a lot of extra effort to maintain parity across the two machines. I wonder how hard devs will push to achieve it. Look at the difference in effort this gen between 1st and 3rd party devs in maximizing the capability of Cell's nuances.

Sega Saturn will rise again

-Huge SS fan.
 
You should read the patent. It is pretty clear what the intended/expected uses of this tech are from that document I think.

http://www.faqs.org/patents/app/20110304713



It is pretty clear they are looking at dynamic res/fps/HDR/etc on the 2 application (games) planes. The goal, presumably, would be to have the HUD plane be 1080p always with dynamic fps. The game plane would be dynamic res with locked 30fps (or 60fps perhaps). By the sound of your theory there it would seem to make one of the frames wholly worthless, which is pretty clearly not the case.

My question is what exactly can devs choose to put in these planes? Could the HUD plane also include the hand/gun in an FPS? Could devs find ways to layer the game world so one plane displays the world's foreground and the other the background? The patent has images that seem to depict a racing game setup, which includes the steering wheel in the HUD plane. So what about the rest of the car's interior, for instance? I mean, if they seem to indicate a steering wheel...:?:

Here is the image I am referring to: http://www.faqs.org/patents/imgfull/20110304713_02

There are also applications for 3D gaming and streaming to other devices.

My gut feeling, which can of course be wrong, is: it's for enabling a new (and consistent) user experience.

Resource saving is secondary, because in the first patent quotes you highlighted we see the word "sacrificed". Dynamic resolution is activated when a visual attribute such as resolution has to be compromised to hit a higher framerate. This is different from optimization techniques that improve or do not impact visuals (e.g., culling).

The invention lets devs soften the blow by minimizing/hiding the impact on the HUD. Durango is certainly not designed with compromises as its first priority. They probably have other worthy goodies in mind that may take away some of the resources in certain scenarios (e.g., running something else alongside a game, or perhaps an OS animated-thumbnail mode, etc.).

Your steering wheel HUD example is OK. You don't have to follow the patent example to a tee. For games that are SmartGlass friendly, it may make sense to have the steering wheel and dashboard rendered to a separate plane (even together with the HUD).

Most of the other points you listed from the patents can be done using software on Durango or Orbis too.

I am more interested in the workflow, such as the virtual texture workflow, in Durango. If they do enough work during bake time, they may be able to load more details and pre-calculated data into the 8GB RAM.
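
For reference, the usual virtual-texturing scheme (a standard technique, nothing Durango-specific) boils down to an indirection table plus a streaming request queue against a baked page file; a sketch with made-up page identifiers and sizes:

```cpp
// Generic virtual-texturing sketch (not Durango-specific): an indirection
// table maps virtual texture pages to resident slots in a physical cache;
// a miss queues a load request that a streaming thread would service from
// the baked page file. Page identifiers and table shapes are made up.
#include <cstdint>
#include <cstdio>
#include <queue>
#include <unordered_map>

struct PageId { int x, y, mip; };

// Pack a page identity into one key for the indirection map.
static int64_t pageKey(PageId p) {
    return (int64_t(p.mip) << 40) | (int64_t(p.y) << 20) | p.x;
}

int main() {
    std::unordered_map<int64_t, int> indirection;  // virtual page -> cache slot
    std::queue<PageId> loadRequests;               // work for the streaming thread

    PageId wanted[] = { {5, 9, 0}, {5, 9, 0}, {6, 9, 0} };  // pages the frame touched
    int nextSlot = 0;
    for (PageId p : wanted) {
        auto it = indirection.find(pageKey(p));
        if (it == indirection.end()) {                 // page fault
            loadRequests.push(p);                      // ask the streamer for it
            indirection[pageKey(p)] = nextSlot++;      // pretend it arrived immediately
            std::printf("miss page (%d,%d,mip %d) -> slot %d\n", p.x, p.y, p.mip, nextSlot - 1);
        } else {
            std::printf("hit  page (%d,%d,mip %d) in slot %d\n", p.x, p.y, p.mip, it->second);
        }
    }
    std::printf("%zu page(s) queued against the baked page file\n", loadRequests.size());
}
```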
 
Nothing about this setup prevents you from doing culling or any other optimizations. It may or may not make dynamic resolution easier, but it by no means makes it automatic either. This hardware is there for QoS guarantees, and to provide some simple workarounds for visual issues, like HUD and overlay resolution in non-native-resolution games.
 
It should/would simplify development while safeguarding some basic user experience.

Developers will optimize their software for the target GPUs (Durango, Orbis, PS3, 360, Wii U) as usual. The display planes' scaling will happen in parallel, on demand. A real-time OS should support various prioritization schemes and policies.

For example, besides "protecting" the game HUD resolution, the OS may also reduce the resolution of DVR videos or drop frames when a game is playing.

Without the display planes, these adjustments are done in an app-specific manner (like on Mac/PC). For instance, PS3's torne DVR will drop frames "automatically" when there are insufficient resources.

With display planes, they are handled more uniformly and consistently, in line with OS policies and use cases.
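
A hypothetical sketch of what such an OS-side policy could look like, with invented priorities and thresholds: lower-priority planes (the DVR overlay) give up frames and resolution before higher-priority ones (the game HUD), which this policy never touches.

```cpp
// Hypothetical OS-side policy sketch: under load, lower-priority planes
// (e.g. a DVR overlay) give up resolution or frames before higher-priority
// planes (game HUD). Priorities, thresholds and the load metric are invented.
#include <cstdio>
#include <string>
#include <vector>

struct Plane { std::string name; int priority; float scale; int fps; };

int main() {
    std::vector<Plane> planes = {
        {"game HUD",  0, 1.0f, 60},   // priority 0 = most protected
        {"game main", 1, 1.0f, 60},
        {"DVR video", 2, 1.0f, 60},
    };
    float systemLoad = 1.3f;  // >1.0 means over budget (made-up metric)

    for (Plane& p : planes) {
        if (systemLoad > 1.0f && p.priority == 2)      { p.fps = 30; p.scale = 0.75f; }  // drop DVR first
        else if (systemLoad > 1.2f && p.priority == 1) { p.scale = 0.9f; }               // then scale the game
        // priority 0 (the HUD) is never touched by this policy
        std::printf("%-9s scale %.2f @ %d fps\n", p.name.c_str(), p.scale, p.fps);
    }
}
```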
 
On the proposition that the LZ-decoding DME would be used to decompress LZ-packed DXT: that does seem to be the most efficient way to use it. However, even LZ-packed, DXT is only suitable for characters and other small, repeatable objects, not for environments; the compression ratio of any JPEG-like method is much higher than that of LZ-DXT. It's strange that they did not implement a JPEG XR decoder, since it doesn't seem much more complex than plain JPEG and doesn't have blocking artifacts. It's also strange that the JPEG decoder doesn't output DXT, though that does leave the possibility of doing some deblocking post-processing and then packing to DXT using GPGPU.
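
For illustration, the LZ-then-DXT path amounts to inflating the packed block data before the GPU ever samples it. In this sketch zlib stands in for whichever LZ variant the move engine actually implements, and the "texture" is just a dummy buffer (build with -lz):

```cpp
// Sketch of the LZ-then-DXT idea: DXT/BC block data is stored LZ-compressed
// and inflated before the GPU samples it. zlib is a stand-in for the move
// engine's LZ variant; the texture is a dummy buffer.
#include <cstdio>
#include <cstring>
#include <vector>
#include <zlib.h>

int main() {
    // Dummy stand-in for a 64x64 BC1/DXT1 texture: 8 bytes per 4x4 block.
    std::vector<unsigned char> dxt(64 / 4 * 64 / 4 * 8, 0x5A);

    // "Bake time": LZ-pack the DXT payload for storage.
    uLongf packedLen = compressBound(dxt.size());
    std::vector<unsigned char> packed(packedLen);
    compress(packed.data(), &packedLen, dxt.data(), dxt.size());

    // "Load time": the move engine's job, modelled here with uncompress().
    std::vector<unsigned char> restored(dxt.size());
    uLongf restoredLen = restored.size();
    uncompress(restored.data(), &restoredLen, packed.data(), packedLen);

    std::printf("DXT %zu bytes -> packed %lu bytes -> restored ok: %d\n",
                dxt.size(), (unsigned long)packedLen,
                std::memcmp(dxt.data(), restored.data(), dxt.size()) == 0);
}
```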
 
I just thought of something that would be fun to see in a game.

You know how whenever you run across a TV in game it always shows some looped, generally low resolution video or series of images?

Combined with the HDMI in on Durango, it'd be possible for those TVs to display a live video feed of the user's choosing. So, in theory, every time you ran across a TV set in a game, it could be displaying a real TV stream that is currently airing. That would go a long way toward making the game world that much more immersive.
I suppose that using the HDMI in, they could render the TV signal to a texture. The composition planes won't help because they aren't 3D-mapped to geometry. But then why not fetch video over the internet? That way devs could maintain content style, which is important: you don't want an episode of 24 or Smallville in a game set in the 1960s or in a fantasy game.
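
A minimal sketch of the render-to-texture side of that idea: each game frame, the latest captured frame (whether from HDMI in or a network stream) is copied into the texture the in-game TV material samples. The capture source is faked with a counter; in a real engine the copy would target a GPU texture.

```cpp
// Sketch of the broadcast-on-an-in-game-TV idea: every game frame, the
// latest captured video frame is copied into the texture the TV prop's
// material samples. Capture is faked; real code would upload to the GPU.
#include <cstdio>
#include <vector>

struct Texture { int w, h; std::vector<unsigned char> pixels; };

// Pretend capture source: fills a frame with a value that changes over time.
void captureLatestFrame(std::vector<unsigned char>& frame, int frameIndex) {
    for (auto& px : frame) px = (unsigned char)(frameIndex * 16);
}

int main() {
    const int w = 320, h = 180;                        // a low-res feed is fine for a prop
    Texture tvScreen { w, h, std::vector<unsigned char>(w * h) };
    std::vector<unsigned char> captured(w * h);

    for (int frame = 0; frame < 3; ++frame) {          // game loop
        captureLatestFrame(captured, frame);           // new video frame arrives
        tvScreen.pixels = captured;                    // "upload" to the TV's texture
        std::printf("frame %d: TV texture top-left sample = %d\n",
                    frame, tvScreen.pixels[0]);
    }
}
```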
 