Xbox One (Durango) Technical hardware investigation

Yes, but look at all the other pools of memory that are on Durango. AMD's more general GPUs get by with two; it doesn't seem necessary that adding a pool of scratchpad memory to the GPU requires four more.

It's not 4 more, it's 2 more. And a small scratchpad is exactly where you'd want more of them, in comparison to a GPU with a large pool of VRAM where most of the assets and buffers can stay where they are.
 
It's not 4 more, it's 2 more. And a small scratchpad is exactly where you'd want more of them, in comparison to a GPU with a large pool of VRAM where most of the assets and buffers can stay where they are.

Where does it say that AMD stripped the DMAs out of the memory controller and mapped their functionality to the DMEs?
 
The DMAs are their own blocks, and the first two DMEs in Durango are described as doing what the default DMA engines do in GPUs, and they appear to sit in the same place in the architecture as the default GCN DMA engines.

The other two have specialized hardware. Even if the regular data loads might not need four DMEs, having them on additional units would allow for less contention if the other two are busy handling compression or JPEG decoding.
 
It's too bad both of the console companies couldn't have gotten AMD to redesign the tablet Jaguar cores.
For example, in the Xbox One you see macro-level redesign changes in introducing the JPEG and swizzle units, but no modifications to the cores themselves.
Wouldn't it have been feasible to increase the L2 cache from half rate to full clock speed?
 
The DMAs are their own blocks, and the first two DMEs in Durango are described as doing what the default DMA engines do in GPUs, and they appear to sit in the same place in the architecture as the default GCN DMA engines.

The other two have specialized hardware. Even if the regular data loads might not need four DMEs, having them on additional units would allow for less contention if the other two are busy handling compression or JPEG decoding.

I thought the GCN DMAs sit within the memory controller, which is what I read from CI documents.

Have the diagrams changed? Because I thought all the DMEs sat on an I/O bus. And wouldn't that expose normal GCN DMA operations to bus contention (I am assuming that only one DME can master the bus at a time)?
 
It's too bad both of the console companies couldn't have gotten AMD to redesign the tablet Jaguar cores.
Many things are possible, but not many are practical. Given the time frame of Jaguar's own introduction and the console launches, even that might not have been possible.
The cores themselves are complex OoO designs with dynamic voltage and clock management.
Complex high-end designs can take up to 5 years, and a big contributor is the implementation and validation of the design.
Jaguar might not be as complex as the big cores, but several years would be needed.

Measurably altering them would reset the core timeline to at least a later part of the design stage, and then the validation of the design would begin anew.
The Xbox architect interview put a 2-3 year time window on when they designed Durango, so resetting Jaguar might have left them with an SoC without a processor.
Microsoft went in-order with the Xbox 360 for similar timing reasons. They asked IBM about OoO, but were told designing and verifying would have taken too long.

It also would have cost them dearly in money as well as time, assuming AMD had the resources for a do-over.
Beema and Mullins have cores that are microarchitecturally the same as Jaguar, but they have turbo.
This functionality was likely always mostly already there, but AMD being stretched so thin, and the complexity involved, meant they couldn't bug-fix or validate all of the designed capabilities for the Jaguar core.

Wouldn't it have been feasible to increase the L2 cache from half rate to full clock speed?
This is further out from the core, but the Jaguar cache hierarchy is itself new and rather complex. There would have been expense, time, and bug risks involved.

The interface is full-speed, just the arrays are half-clocked for power reasons. The upside would be limited and the console makers were cognizant of power concerns.
 
For example, in the Xbox One you see macro-level redesign changes in introducing the JPEG and swizzle units, but no modifications to the cores themselves.

This is my biggest WTF for the Xbox One customisations. Why? I've never read a dev interview where anyone complained about the time taken to decode and display JPEGs :???:

What am I missing?
 
This is my biggest WTF for the Xbox One customisations. Why? I've never read a dev interview where anyone complained about the time taken to decode and display JPEGs :???:

What am I missing?

I think the goal is to use hw compression to save bandwidth when moving data from one pool to the other.
Your typical textures are DXT, which has had hardware support for a long time.
 
I thought the GCN DMAs sit within the memory controller, which is what I read from CI documents.

Have the diagrams changed? Because I thought all the DMEs sat on an I/O bus. And wouldn't that expose normal GCN DMA operations to bus contention (I am assuming that only one DME can master the bus at a time)?

My interpretation is that they hung off the same hub where the PCI Express interface is.
Perhaps I've misinterpreted the documentation. Do you have an excerpt of the description of their placement?

This is my biggest WTF for the Xbox One customisations. Why? I've never read a dev interview where anyone complained about the time taken to decode and display JPEGs :???:

What am I missing?

It wouldn't be for single images displayed as-is to the screen; rather it would be for assets kept compressed in RAM that could be unpacked for use as source data for GPU rendering, instead of having to stream more often from the hard drive.

I think the goal is to use hw compression to save bandwidth when moving data from one pool to the other.
Your typical textures are DXT, which has had hardware support for a long time.

The throughput for the decompression hardware is measured in the hundreds of MB/s, however.
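
To put rough numbers on that trade-off (the figures below are my own placeholder assumptions, not anything from documentation): even a decoder limited to a couple of hundred MB/s still compares well against streaming the same asset off the hard drive, which is the comparison that matters for the "keep it compressed in RAM" use case.

```
/* Back-of-envelope only: the throughput figures below are placeholder
 * assumptions, not documented Durango numbers. */
#include <stdio.h>

int main(void)
{
    const double asset_mb       = 64.0;   /* size of the unpacked asset   */
    const double hdd_mb_per_s   = 50.0;   /* assumed HDD streaming rate   */
    const double lz_mb_per_s    = 200.0;  /* assumed LZ decode rate       */
    const double compress_ratio = 2.0;    /* assumed LZ compression ratio */

    double stream_time = asset_mb / hdd_mb_per_s;  /* pull it off disk      */
    double decode_time = asset_mb / lz_mb_per_s;   /* unpack it from RAM    */

    printf("stream from HDD : %.2f s\n", stream_time);
    printf("decode from RAM : %.2f s (RAM cost %.1f MB compressed vs %.1f MB unpacked)\n",
           decode_time, asset_mb / compress_ratio, asset_mb);
    return 0;
}
```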
 
My intrepetation is that they hung off the same hub, where the PCIe express interface is.
Perhaps I've misinterpreted the documentation. Do you have an excerpt of the description of their placement?

http://www.behardware.com/articles/848-4/amd-radeon-hd-7970-crossfirex-review-28nm-and-gcn.html

Here is a diagram where the DMA engines sit on either side of the L2 block, with pathways between them and the L2, ACEs, graphics command processor and the memory controllers.

The CI ISA explains that it's the memory controller itself that acts as a DMA controller.

The ISA mentions two ways a CPU can write into GPU memory:

1. Request the GPU's DMA engine to write data by pointing to the location of the source data in CPU memory, then pointing at the offset in the GPU memory.

2. Upload a kernel to run on the shaders that accesses the memory through the PCIe link, then processes it and stores it in the GPU memory.
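
For anyone who finds option 1 abstract, here is roughly what it amounts to, as a purely illustrative C sketch. The descriptor layout, the function names and the memcpy "engine" are all invented for the example; none of this is a real GCN or Durango driver API.

```
/* Illustrative only: a toy model of "option 1" above. The descriptor
 * fields and function names are made up for this sketch; the "DMA
 * engine" is just memcpy standing in for the hardware. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

typedef struct {
    const void *src_cpu_ptr;    /* location of the source data in CPU memory */
    size_t      dst_gpu_offset; /* offset into the GPU-visible aperture      */
    size_t      bytes;          /* transfer size                             */
} dma_request_t;

/* Stand-in for GPU-visible memory. */
static uint8_t gpu_memory[1 << 20];

/* Pretend to be the DMA engine servicing a request; a real engine would
 * do this asynchronously while the CPU and shaders carry on working. */
static void dma_engine_execute(const dma_request_t *req)
{
    memcpy(gpu_memory + req->dst_gpu_offset, req->src_cpu_ptr, req->bytes);
}

int main(void)
{
    uint8_t vertex_data[256];
    memset(vertex_data, 0xAB, sizeof vertex_data);

    /* "Option 1": point the engine at the source and the GPU offset. */
    dma_request_t req = { vertex_data, 0x1000, sizeof vertex_data };
    dma_engine_execute(&req);

    printf("first byte at GPU offset 0x1000: 0x%02X\n", gpu_memory[0x1000]);
    return 0;
}
```

The point is just that the CPU hands over two addresses and a size, and the engine does the walking.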

I'm wondering whether the DMA engines are strictly for handling data transfers between the GPU and I/O devices. If the memory controllers interface with both DRAM and eSRAM, does the GPU need to use the DMA engines?
 
Reading tea leaves and reading patents aren't as far from one another as you appear to believe; most patent filings are defensive and usually premature, to ensure exclusivity (particularly in s/w). Your theory as to the display planes doing all sorts of funky stuff beyond what MS have repeatedly stated they were for (Game, UI, O/S) is interesting but far from 'proven'. Until a game dev comes out and says we did X, Y & Z using display planes, the preponderance of evidence suggests they are as dull as ever.

All I know is that (1) they described all the display planes' specifications/functionality in the patent, (2) the patent owner is Microsoft, (3) display planes are real and they are working in every XB1, (4) the display planes' initial use is for "Games, UI, OS" and they filed a patent just for this scenario (one of the inventors is Andrew Goossen, a technical fellow at Microsoft and one of the architects of the XB1) just a few days after the original patent.

So this patent isn't only for defensive reasons. You can say that it's possible they will never do more than UI overlays with display planes, and I can accept this because it is one of the possibilities. But I can't accept someone saying that display planes are only for UI overlays, or that they were not created to do more than that from the beginning. Microsoft never said that they are only for the "Games, UI, OS" scenario.

We have something for this on desktop computers (and no doubt the consoles too), it's called DMA; the number of DMA controllers in your desktop is probably around ten or in the tens, and the next-gen consoles probably have a similar number for stuff like audio chips, secondary processors and networking cards.

We are talking about the main SoC/APU, not secondary processors, right? I was talking about using compressed data on the main APU using the DMEs (or the 2 specialized DMAs).

Something like this, or textures:
The canonical use for the LZ decoder is decompression (or transcoding) of data loaded from off-chip from, for instance, the hard drive or the network.
http://www.vgleaks.com/world-exclusive-durangos-move-engines/


That's a standard, craptastic patent disclaimer, saying, "this patent not only covers what we've thought of, but also other things we haven't thought of that can be applied to other people's invention." It's meaningless in understanding the tech. The patent itself is just talking about blending two (specifically two) planes into a single display (something no doubt covered in dozens of other patents).

A really decent bit of hardware would have lots of definable rectangles you can scale, rotate, and translate across the screen for animated effects and full-screen effects. If the intention of the scaling hardware was to provide useful features beyond just UI+3D+OS, it's really weak HW. If the intention of the scaling hardware is to provide just UI+3D+OS, it's perfect hardware.

Clearly you're going to remain unconvinced and expect to see some funky (and highly implausible, even impossible) VR/AR/2D3D applications. The best, likely non-standard application of the display planes would be foveated rendering in a VR headset. Actually scrub that - you'd need four display planes (peripheral and main renderings for both eyes).

You didn't read the patent properly. Actually, the patent talks about four planes: two of them are for the system and the other two are for apps (games). Also, I'm not talking about dozens of other patents, I'm talking about the only patent that Microsoft filed regarding display planes, which describes what you can find on VGleaks (except for the number of planes for the system or the number of sub-rectangles for each plane, which is different from the XB1). I'd like to see foveated rendering on XB1/Kinect (if possible, without other add-ons like AR glasses with eye-tracking capabilities) but my main point wasn't this.

I was trying to say that multitasking isn't the reason for having 3 display planes on XB1 (as Brad stated). You could have multitasking with only 2 display planes, and there could be more capabilities in them than only overlaying UI on top of games. They aren't definitive hardware for all kinds of stuff (they are weak from your point of view) but at the same time they are more interesting than their equivalents on other gaming GPUs.


I didn't doubt their credibility, but they have never claimed the information they posted was a complete and exhaustive account of the hardware capabilities in each system. You can't use them to claim something is absent from a system, which is what you were trying to do. Let's not forget everyone used them to "prove" PS4 lacked any audio DSPs until the TrueAudio stuff came out confirming their presence. Oh, and companies file patents they never use ALL THE TIME.

And yeah, I use a lot of "ifs and maybes" because I recognize I'm speculating, whereas you seem to treat your own speculations as fact.



It's just the primary reason and I don't disagree they are there for a good reason. What I disagreed with originally was the insinuation that their addition conferred some kind of advantage relative to competitors' designs, when that is simply not the case. They are there to facilitate a function necessitated by larger architectural choices.

That's what I said at the beginning. Customization is not a virtue in isolation. Their value is determined by context. In the Xbox One's case that context was, for the most part, accommodating larger economic and strategic goals for the system. They thought first about how they would market the device, then about what components would best help them accomplish that, and the customizations done were all about making those disparate goals coexist as best as possible.

That is in contrast to the PS4 which drew all its philosophical design goals from an architect who writes game code for a living. Cerny wasn't thinking about how to solve bandwidth deficits, virtualized OSes, TV or Kinect overhead. The customizations in the PS4 are mostly about pushing forward what is possible in next gen game software using Compute in one of the first high performance HSA APUs.

I suggest you read pages 37 and 38 of the linked thread. Sony said that the ACP is a specialized DSP, and other users suggested that there are no true "TrueAudio DSPs" in the PS4.

I said that because I saw VGleaks copy-paste official documents in their articles several times. All of their XB1 articles and some of their PS4 (14 CUs) stuff were copy-pasted from the official documents. So I strongly suspect they copy-pasted the DisplayScanOut Engine (DCE) information on their site right from an official document, too (from the same doc that the ACP stuff came from). Also, you can see stuff that differs from Eyefinity/DisplayPlanes, like "Two cursor overlays (up to 64×64 each, 3-D support)", on the DisplayScanOut specification list, which is unique to it.

Some of the modifications that are on XB1 will help multitasking, like a separate plane just for the OS, 8GB of flash memory for apps, more DMAs (also useful for games) or 3 OSs. But there are other modifications that are for the gaming side.
 
We are talking about the main SoC/APU, not secondary processors, right? I was talking about using compressed data on the main APU using the DMEs (or the 2 specialized DMAs).

Something like this, or textures:

http://www.vgleaks.com/world-exclusive-durangos-move-engines/

Some of the modifications that are on XB1 will help multitasking, like a separate plane just for the OS, 8GB of flash memory for apps, more DMAs (also useful for games) or 3 OSs. But there are other modifications that are for the gaming side.

But you can achieve that easily if you don't have multiple incoherent memory pools, the way the PS4 does it, by having an LZ decompression unit that can read/write to main memory. The only reason it has to be part of the DME is that if it did it the single-pool way it would waste bandwidth by having to move stuff around twice.

The majority of the stuff that is 'useful for games' is only useful for games in the context of having non-unified memory; I doubt a lot of it would be that useful in the PS4.
 
http://www.behardware.com/articles/848-4/amd-radeon-hd-7970-crossfirex-review-28nm-and-gcn.html

Here is a diagram where the DMA engines sit on either side of the L2 block, with pathways between them and the L2, ACEs, graphics command processor and the memory controllers.
This seems to be simplifying things to focus on a compute shader's perspective.
It does so to the point of even putting the PCIe bus on the same arrows as the memory controllers.

A more comprehensive marketing diagram of the whole GPU shows the interface hanging off of a hub of lower-bandwidth clients that interfaces with the rest of the memory subsystem.
http://www.anandtech.com/show/5261/amd-radeon-hd-7970-review/4

This arrangement goes back as far as RV770.

That simpler diagram is also somewhat out of date with the Sea Islands family, as the DMA engines have been given their own command queues and no longer have to depend on the other front ends for all their control needs.


The CI ISA explains that it's the memory controller itself that acts as a DMA controller.

The ISA mentions two ways a CPU can write into GPU memory:

1. Request the GPU's DMA engine to write data by pointing to the location of the source data in CPU memory, then pointing at the offset in the GPU memory.

2. Upload a kernel to run on the shaders that accesses the memory through the PCIe link, then processes it and stores it in the GPU memory.
I don't think those two options require the DMA engines to be the memory controller. DMA offload works by giving the secondary hardware pointers to the source and destination, which is what option 1 does. The second option abstracts away what is involved in having a shader operate over the PCIe interface, since DMA is how almost all traffic goes over the expansion bus.

I'm wondering whether the DMA engines are strictly for handling data transfers between the GPU and I/O devices. If the memory controllers interface with both DRAM and eSRAM, does the GPU need to use the DMA engines?
DMA engines offload copy work between memory and ESRAM so the shader cores don't have to. It's a matter of being a low-cost alternative that is mostly good enough, especially since the DMEs have bandwidth that's on the order of a PCIe expansion bus, and can be beaten by a CU dedicated to moving memory around.
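
A quick back-of-envelope comparison of the two approaches (the bandwidth figures are assumptions chosen for illustration, not spec numbers):

```
/* Illustration only: bandwidth numbers are assumptions, not official specs. */
#include <stdio.h>

/* Milliseconds to move a buffer of the given size at the given bandwidth. */
static double copy_ms(double megabytes, double gb_per_s)
{
    return megabytes / (gb_per_s * 1024.0) * 1000.0;
}

int main(void)
{
    const double target_mb   = 16.0;  /* e.g. a render target plus depth           */
    const double dme_gb_s    = 25.0;  /* assumed move-engine-class bandwidth        */
    const double shader_gb_s = 100.0; /* assumed bandwidth a CU-driven copy sustains */

    printf("DME copy   : %.2f ms\n", copy_ms(target_mb, dme_gb_s));
    printf("shader copy: %.2f ms\n", copy_ms(target_mb, shader_gb_s));
    return 0;
}
```

The CU-driven copy finishes sooner, but it isn't free compute any more, which is the whole point of having the cheap fixed-function engines.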
 
I suggest you read pages 37 and 38 of the linked thread. Sony said that the ACP is a specialized DSP, and other users suggested that there are no true "TrueAudio DSPs" in the PS4.

What do you think you are disproving? Before TrueAudio was announced everyone bandied about the "fact" that PS4 lacked any audio DSPs based on a narrow reading of what VGLeaks had posted up to that point. We now know for certain the PS4 audio hardware does in fact include at least one DSP. Absence of Evidence is not Evidence of Absence. That was my point in the beginning.

Some of the modifications that are on XB1 will help multitasking, like a separate plane just for the OS, 8GB of flash memory for apps, more DMAs (also useful for games) or 3 OSs. But there are other modifications that are for the gaming side.

OK, why don't you identify them?
 
(3) Display planes are real and they are working in every XB1
And every PS4

(4) Display planes initial use
Intended use.

You didn't read the patent properly. Actually, the patent talks about four planes: two of them are for the system and the other two are for apps (games).
The claim of the patent, the thing it's trying to secure, is for two display planes. But that's neither here nor there.

I'd like to see foveated rendering on XB1/Kinect (if possible, without other add-ons like AR glasses with eye-tracking capabilities) but my main point wasn't this.
Foveated rendering can't happen without eye tracking, which requires a peripheral that doesn't exist yet. What are the chances of MS putting in hardware on the hopes they'll invent an eye-tracking camera and that devs will then opt to make two versions of their games on XB1, one with foveated rendering for those solo-only players who buy the peripheral?

I was trying to say that multitasking isn't the reason for having 3 display planes on XB1 (as Brad stated). You could have multitasking with only 2 display planes, and there could be more capabilities in them than only overlaying UI on top of games.
It is the reason! They can be repurposed, but they went in for the multitasking. The 3 planes provide 1 for 3D at lower resolution (as per MS's patent), 1 for UI at native resolution, and one for the OS, which is why you need three.

But at the same time they are more interesting than their equivalents on other gaming GPUs.
It has one more plane. We've no idea on the specifics of PS4's display planes. The ability to select rectangles in XB1's display planes is handy for drawing rectangular apps on the screen, or rectangular UI frame elements. You're seeing a lot more potential in XB1's display planes versus the rival than is actually materially there. They are slightly different and more capable, at least as regards one extra plane, but the purpose of that is pretty obvious. Only by disabling the OS overlays could a dev use 3 display planes in their game.
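
For what it's worth, the whole game/UI/OS arrangement boils down to something like the sketch below. The struct layout, the nearest-neighbour scaling and the single-channel blend are all mine, purely to illustrate back-to-front composition at scan-out; it's not how the actual display hardware is programmed.

```
/* Toy model of back-to-front plane composition at scan-out.
 * Everything here (struct layout, fixed 3-plane count, blend rule)
 * is illustrative, not real display hardware behaviour. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    int       src_w, src_h;  /* resolution the layer was rendered at */
    uint32_t *pixels;        /* RGBA8, src_w * src_h                 */
    int       enabled;
} plane_t;

/* Nearest-neighbour sample so a low-res game plane can be scaled to the
 * native output resolution; real hardware would filter properly. */
static uint32_t sample(const plane_t *p, int x, int y, int out_w, int out_h)
{
    int sx = x * p->src_w / out_w;
    int sy = y * p->src_h / out_h;
    return p->pixels[sy * p->src_w + sx];
}

/* Alpha-blend one channel only, to keep the example short. */
static uint32_t blend(uint32_t dst, uint32_t src)
{
    uint32_t a = src >> 24;
    uint32_t r = ((src & 0xFF) * a + (dst & 0xFF) * (255 - a)) / 255;
    return (dst & 0xFFFFFF00u) | r;
}

int main(void)
{
    enum { OUT_W = 8, OUT_H = 4 };                 /* tiny pretend display  */
    uint32_t game_px[4 * 2], ui_px[8 * 4], os_px[8 * 4];
    for (int i = 0; i < 4 * 2; i++) game_px[i] = 0xFF000080; /* opaque      */
    for (int i = 0; i < 8 * 4; i++) ui_px[i]   = 0x400000FF; /* translucent */
    for (int i = 0; i < 8 * 4; i++) os_px[i]   = 0x00000000; /* hidden      */

    plane_t planes[3] = {
        { 4, 2, game_px, 1 },   /* game, rendered below native resolution  */
        { 8, 4, ui_px,   1 },   /* UI / HUD at native resolution           */
        { 8, 4, os_px,   1 },   /* OS overlay                              */
    };

    uint32_t out = 0;
    for (int p = 0; p < 3; p++)                    /* back to front         */
        if (planes[p].enabled)
            out = blend(out, sample(&planes[p], 0, 0, OUT_W, OUT_H));
    printf("composited pixel (0,0): 0x%08X\n", (unsigned)out);
    return 0;
}
```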


What do you think you are disproving? Before TrueAudio was announced everyone bandied about the "fact" that PS4 lacked any audio DSPs based on a narrow reading of what VGLeaks had posted up to that point. We now know for certain the PS4 audio hardware does in fact include at least one DSP. Absence of Evidence is not Evidence of Absence. That was my point in the beginning.
Re-read the console audio threads. True Tensilica DSPs in the PS4 were categorically disproven, IIRC. They may be responsible for the audio decoding, but that's it. AMD TrueAudio in the PS4 doesn't exist.
 
But you can achieve that easily if you don't have multiple incoherent memory pools, the way the PS4 does it, by having an LZ decompression unit that can read/write to main memory. The only reason it has to be part of the DME is that if it did it the single-pool way it would waste bandwidth by having to move stuff around twice.
The majority of the stuff that is 'useful for games' is only useful for games in the context of having non-unified memory; I doubt a lot of it would be that useful in the PS4.
ESRAM is only visible to the GPU and can't be used by other clients, and all data in DRAM could be coherent with the CPU caches. So there is no difference in this respect between the PS4 and XB1. Do developers need to copy textures into eSRAM before using them on the GPU?
The canonical use for the LZ decoder is decompression (or transcoding) of data loaded from off-chip from, for instance, the hard drive or the network. The canonical use for the LZ encoder is compression of data destined for off-chip. Conceivably, LZ compression might also be appropriate for data that will remain in RAM but may not be used again for many frames—for instance, low latency audio clips.
Read more at: http://www.vgleaks.com/world-exclusive-durangos-move-engines/
OK, why don't you identify them?
As I said before:

  • XB1 has 30GB/s of coherent bandwidth between the CPU and GPU (and the 15 other additional engines).
  • The XB1 CPU has 30GB/s of bandwidth.
  • XB1 has 8GB of unified RAM, and any data on it could be coherent between the CPU and GPU.
  • XB1 has eSRAM, which could have some benefits for the XB1 in the long term.
  • XB1 has SHAPE.
  • XB1 has 4 DSPs, 2 of which are for speech recognition on Kinect; 1-2 years from now part of them could become accessible to developers. Games can use speech recognition for gameplay right now.
  • XB1 has two highly customized graphics command processors, which could help developers make better use of the GPU.
  • XB1 has 4 DMEs with higher bandwidth than the normal DMAs in AMD GPUs; 3 of them are always for games, 1 of them could be used for JPEG/LZ decompression and another one for LZ compression. The fourth DME/DMA could be used in a time slice for the system or DirectX.
  • XB1 has 2 display planes for games.
  • XB1 has a faster CPU than the competition (but it has some overhead compared to the competition right now).

And every PS4

Are you saying that display planes on PS4 and XB1 are the same? Did you read AMD Eyefinity specifications?

Intended use.
Ok, initial intended use. You can't say there won't be other intended uses for them.

The claim of the patent, the thing it's trying to secure, is for two display planes. But that's neither here nor there.
Read it again; at first it explains the two-plane functionality and later says:

12. The method of claim 11, where the first plane comprises an application-generated primary-application display surface, and where the second plane comprises an application-generated overlay for positioning over the first plane.

13. The method of claim 12, where the first plane comprises a system-generated primary-system display surface for a computing system, and where the second plane comprises a system-generated overlay for positioning over the first plane.

.
.
.

20. The method of claim 19, where each of the first plane, the second plane, the third plane, and the fourth plane includes a left eye perspective and ....

Read more: http://www.faqs.org/patents/app/20110304713#ixzz31mRZrPY2
Foveated rendering can't happen without eye tracking, which requires a peripheral that doesn't exist yet. What are the chances of MS putting in hardware on the hopes they'll invent an eye-tracking camera and that devs will then opt to make two versions of their games on XB1, one with foveated rendering for those solo-only players who buy the peripheral?
Isn't it possible to render far distances at lower resolution and near distances at higher resolution? Or, for games like Ryse, use a lower-resolution background while an execution is in progress? Or use this method for the pre-race camera in NFS Rivals? Or apply it according to object distance from the game's in-game camera instead of the human eye's field of view/eye-tracking functionality?

As I said before, there are some eye-trackers out there using the last-gen Kinect, so it should theoretically be possible with the XB1 Kinect. Also I think that if Microsoft wants foveated rendering to be used in games they have to put it at the system level for different uses, like what they did with the UI/game display planes. So there shouldn't be much trouble for developers.

I started this discussion before MS ditched Kinect, so I don't know where this discussion will go from now on, but there are some possibilities to explore.

It is the reason! They can be repurposed, but they went in for the multitasking. The 3 planes provide 1 for 3D at lower resolution (as per MS's patent), 1 for UI at native resolution, and one for the OS, which is why you need three.
There are some sub-Full-HD games on PS4 too, but Sony didn't use a second DP for higher-resolution UI/HUD. MS used it so that when games use dynamic resolution the HUD remains the same ----> more game/user friendly. This isn't about multitasking; MS could have done it like Sony. Also, if you take a look at launch games, many of them didn't use the second DP for UI (except exclusive games), since it wasn't that important for 3rd-party game developers.

It has one more plane. We've no idea on the specifics of PS4's display planes. The ability to select rectangles in XB1's display planes is handy for drawing rectangular apps on the screen, or rectangular UI frame elements. You're seeing a lot more potential in XB1's display planes versus the rival than is actually materially there. They are slightly different and more capable, at least as regards one extra plane, but the purpose of that is pretty obvious. Only by disabling the OS overlays could a dev use 3 display planes in their game.
MS Research used 3 layers for their research. They can use the resources on XB1 differently from their initial research. They did it on an Nvidia GPU (GTX 580), which I don't think has 2 display planes for games. Display planes aren't vital for foveated rendering, but they could be beneficial.
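
The attraction is easy to show with rough pixel counts (the layer resolutions below are my own guesses, not the ones MS Research used):

```
/* Rough pixel-count comparison for layered/foveated rendering.
 * Layer resolutions are invented for illustration. */
#include <stdio.h>

int main(void)
{
    const long native = 1920L * 1080L;   /* one full-resolution frame */

    /* Three nested layers: a small sharp inset, a medium mid layer, and a
     * coarse outer layer that gets upscaled to cover the whole screen. */
    const long inner   = 640L * 360L;
    const long middle  = 960L * 540L;
    const long outer   = 960L * 540L;
    const long layered = inner + middle + outer;

    printf("native : %ld pixels\n", native);
    printf("layered: %ld pixels (%.0f%% of native)\n",
           layered, 100.0 * layered / native);
    return 0;
}
```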
 
Are you saying that display planes on PS4 and XB1 are the same?
They're similar enough in operation to serve the same job (UI overlays).

Ok, initial intended use. You can't say there won't be other intended uses for them.
Realistically we can. There can be other uses, but these weren't intended. E.g. if Kinect is dropped and devs get access to the DSP and use it for audio, we can say that though the DSP can be used for audio, it was only intended for Kinect voice control. Similarly, regardless of what the display planes get used for, we can say with considerable confidence that they are intended for 3D+2D+UI, as that's how they were described by MS and the official documentation. They clearly aren't intended for foveated rendering or anything else, because they'd be talked of that way and would even need extra hardware that the console lacks, even if one day the display planes are used that way.

Besides which, what does it matter?? For rendering a low-res background and a high-res foreground, the display planes are hardly saving a tonne of resources. It's not like that extra display plane is worth a considerable amount of GPU power to help equalise the differential. None of the '15' engines are (many of which are standard components in PS4 and PC).

Read it again; at first it explains the two-plane functionality and later says:
Without going into details about patents, this still doesn't matter. MS's interest in using this patent as described in the XB1 is clearly nil because they didn't include four display planes. If they were interested in stereoscopic rendering in their console, they'd have included the hardware. This is a defensive patent covering stuff not all of which is applicable to the XB1.

Isn't it possible to render far distances at lower resolution and near distances at higher resolution?
Yes, as we've discussed when talking about this tech, but that's not foveated rendering. That's just rendering different resolution passes, which you can do on any GPU anyway. Display planes would just save a composition pass.

Or use this method for the pre-race camera in NFS Rivals? Or apply it according to object distance from the game's in-game camera instead of the human eye's field of view/eye-tracking functionality?
Yes, you can do that as long as you don't mind your UI being affected by the scaling, or you don't allow the OS layer and use all three planes for the game. But if such effects were the intended use of the display planes, why didn't MS put in something more substantial like a decent 2D image engine?

As I said before, there are some eye-trackers out there using the last-gen Kinect, so it should theoretically be possible with the XB1 Kinect. Also I think that if Microsoft wants foveated rendering to be used in games they have to put it at the system level for different uses, like what they did with the UI/game display planes. So there shouldn't be much trouble for developers.
You can't read eyes without either having a camera up close or using fancy tech like reflected IR*. There's nothing in the XB1 as it is and as sold that can enable foveated rendering. Ergo, it'd be stupid of MS to add hardware features with a view to enabling foveated rendering at a later date, unless they have a plan for it.

* Unless someone comes up with some amazing new tech using Kinect or something, which is a possibility, but one clearly MS hasn't got working yet or they'd have patented it and launched with it, or at least showcased it.
 
Realistically we can. There can be other uses, but these weren't intended. E.g. if Kinect is dropped and devs get access to the DSP and use it for audio, we can say that though the DSP can be used for audio, it was only intended for Kinect voice control. Similarly, regardless of what the display planes get used for, we can say with considerable confidence that they are intended for 3D+2D+UI, as that's how they were described by MS and the official documentation. They clearly aren't intended for foveated rendering or anything else, because they'd be talked of that way and would even need extra hardware that the console lacks, even if one day the display planes are used that way.

Besides which, what does it matter?? For rendering a low-res background and a high-res foreground, the display planes are hardly saving a tonne of resources. It's not like that extra display plane is worth a considerable amount of GPU power to help equalise the differential. None of the '15' engines are (many of which are standard components in PS4 and PC).

I get what you are trying to say, but the problem is that we don't know what their intents were while they were making the display planes, or what their schedule is for executing those intents.

I never said that they can save a tonne of resources or a considerable amount of GPU power with the display planes, but they are useful, and at least save some resources. As I said in my last post, they are not vital for everything (like foveated rendering), but if you have them you can probably do more. For example, on PS4 you have one display plane for games, which could reach 1080p (if we accept VGleaks) or 1920x2160 like other AMD scalers, and MS officially stated that XB1 supports both native 4K resolution and upscaling to 4K, and 3D. But Sony said that PS4 games won't support 4K gaming; this should be a hardware limitation, so the PS4 can't hit that (I mean 4K, not 3D).

The scalers in first-generation 4K panels had a peak resolution of 1920x2160, which is exactly half the resolution of a 4K LCD. At a typical gaming refresh rate of 60 Hz, these scalers simply did not offer enough bandwidth to display a 3840x2160 image 60 times per second.

To address a 4K panel’s full resolution at 60 Hz, two scalers were used simultaneously. Each scaler powered half (1920x2160) of the LCD at 60 Hz, and the signals from these scalers were interleaved within a single DisplayPort™ cable, where an AMD Radeon graphics card would interpret the signal as two separate “tiles” that could be automatically stitched together into a single large surface (figure 2).

http://community.amd.com/community/amd-blogs/amd-gaming/blog
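
To make the tiling concrete, here's a toy calculation; the only thing taken from the quote above is the split into two 1920x2160 tiles, the rest is plain arithmetic.

```
/* Toy illustration of the two-tile 4K arrangement described above. */
#include <stdio.h>

int main(void)
{
    const int full_w = 3840, full_h = 2160, tiles = 2;
    const int tile_w = full_w / tiles;         /* 1920 per scaler */

    /* Pixel rate each scaler has to sustain at 60 Hz vs. the full panel. */
    double per_tile_mpix_s = (double)tile_w * full_h * 60.0 / 1e6;
    double full_mpix_s     = (double)full_w * full_h * 60.0 / 1e6;

    printf("each 1920x2160 tile @ 60 Hz: %.0f Mpix/s\n", per_tile_mpix_s);
    printf("full 3840x2160 @ 60 Hz     : %.0f Mpix/s\n", full_mpix_s);
    return 0;
}
```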

So is it a good reason or intent to put 2 display planes for games?! The architectural optimizations to support 4K resolutions?

Without going into details about patents, this still doesn't matter. MS's interest in using this patent as described in the XB1 is clearly nil because they didn't include four display planes. If they were interested in stereoscopic rendering in their console, they'd have included the hardware. This is a defensive patent covering stuff not all of which is applicable to the XB1.

The patent talks about 2 DPs for stereoscopic rendering (for apps/games); they don't need 4 of them for this purpose.

Yes, you can do that as long as you don't mind your UI being affected by the scaling, or you don't allow the OS layer and use all three planes for the game. But if such effects were the intended use of the display planes, why didn't MS put in something more substantial like a decent 2D image engine?

It should have a native resolution for the second layer, and the 4 rectangles & cropping are still available, so it's possible to have a native-resolution UI (game HUD) and do more with the 3D rendering. I mean you can have both the HUD and a part of the 3D game content on different rectangles of the second plane at the same time. They put enough resources in the XB1 for developers to do these kinds of things. What's the point of putting something special in a GPU that no one is going to use? If a developer wants to use these kinds of techniques they can do it on other platforms/GPUs too, like foveated rendering, which could be done on current GPUs, but maybe the XB1 could do this kind of rendering more efficiently.
 

http://community.amd.com/community/amd-blogs/amd-gaming/blog

So is it a good reason or intent to put 2 display planes for games?! The architectural optimizations to support 4K resolutions?

I can't speak to the rest of your post, but I can categorically state that the display planes have nothing to do with 4K and the problem of nasty cheap monitor ASICs. That problem is going away over time; there are already high-end 4K screens with 4K @ 60Hz support, and it isn't relevant for the consumer market anyway. The consumer HDMI 1.4 standard only supports 3840×2160 (4K Ultra HD) at 24 Hz/25 Hz/30 Hz or 4096×2160 at 24 Hz (thanks Wikipedia!). So your PS4 and your TV are just not fit for 4K gaming right now.

For 4K @ 60Hz you need a good monitor with DisplayPort 1.2, and then you can have a single rendering surface that is your entire display. As of now we deal with the tiling by faking 2 displays using a dual-monitor setup that is streamed over DisplayPort MST, which is a nightmare, as no one remembered to add a 'hint bit' to MST that says 'this display is upper left', so on reboot an MST array is likely to spray your desktops everywhere. Very annoying.
 