Xbox One November SDK Leaked

There is a page on a split render target sample, but unfortunately it doesn't have much info. The picture shows about 70% of the render target in ESRAM and 30% in DRAM: the top 20% of the screen and the bottom 10% go to DRAM. They say there typically isn't a lot of overdraw in the sky and ground. It seems the z, g- and colour buffers are all split, and each can be split differently as needed. It mentions that HTile should reside in ESRAM, but I don't know what that is (assuming it has something to do with tiled resources; the tile buffer?).
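The split described above is just row arithmetic on the render target. A minimal sketch of the sizing math, assuming a 1080p target and a 4-byte RGBA format (both assumptions, not from the sample):

```python
# Hypothetical sizing for the split described in the sample: 70% of the
# render target in ESRAM, with the top 20% and bottom 10% of rows in DRAM.
# Resolution and pixel format are illustrative assumptions.
WIDTH, HEIGHT = 1920, 1080
BYTES_PER_PIXEL = 4                      # e.g. an R8G8B8A8 colour buffer
ROW_BYTES = WIDTH * BYTES_PER_PIXEL

top_dram_rows = int(HEIGHT * 0.20)       # sky: typically little overdraw
bottom_dram_rows = int(HEIGHT * 0.10)    # ground: typically little overdraw
esram_rows = HEIGHT - top_dram_rows - bottom_dram_rows

esram_bytes = esram_rows * ROW_BYTES
dram_bytes = (top_dram_rows + bottom_dram_rows) * ROW_BYTES

print(f"ESRAM slice: {esram_rows} rows, {esram_bytes / 2**20:.1f} MiB")
print(f"DRAM slices: {top_dram_rows + bottom_dram_rows} rows, "
      f"{dram_bytes / 2**20:.1f} MiB")
```

At these assumed numbers the ESRAM slice is roughly 5.5 MiB per 1080p colour target, which is why being able to split the z, g- and colour buffers independently matters: 32 MiB of ESRAM fills up fast.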

Well, from here:
http://amd-dev.wpengine.netdna-cdn....012/10/Inside-XBox-One-Martin-Fuller_new.ppsx
there's this mention of HTile:

ESRAM can handle concurrent read/writes:


  • Increasing effective bandwidth above 109 GiB/s

Operations that can take advantage of this:

  1. Read-modify-write operations
    1. Depth buffer / HTILE update
    2. Alpha blending
So it sounds like it has something to do with "Hardware Tiling", so it's probably talking about the current render-target tile being worked on...?
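The "read-modify-write" point in the slide is easy to put numbers on: alpha blending reads the destination pixel and writes it back, so if reads and writes can proceed concurrently, the time is bounded by the larger stream rather than the sum. A back-of-envelope sketch, with entirely illustrative numbers (1080p, 60 fps, 4 bytes per pixel, 4x average overdraw; none of these are from the SDK):

```python
# Why read-modify-write ops benefit from concurrent read/write:
# alpha blending moves roughly equal read and write traffic, so
# overlapping the two directions halves the bandwidth demand.
# All numbers below are illustrative assumptions.
PIXELS = 1920 * 1080
BPP = 4                      # bytes per pixel
OVERDRAW = 4                 # average blended layers per pixel
FPS = 60

reads_per_frame = PIXELS * OVERDRAW * BPP    # blend reads the destination
writes_per_frame = PIXELS * OVERDRAW * BPP   # ...then writes it back

serial_gib_s = (reads_per_frame + writes_per_frame) * FPS / 2**30
concurrent_gib_s = max(reads_per_frame, writes_per_frame) * FPS / 2**30

print(f"Demand if reads/writes serialize: {serial_gib_s:.1f} GiB/s")
print(f"Demand if they overlap:           {concurrent_gib_s:.1f} GiB/s")
```

That overlap is presumably how the effective bandwidth climbs above the 109 GiB/s one-direction figure.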

Interestingly, it looks like you could DMA completed tiles out while beginning work on the next with no performance hit, unlike on the 360.

From the above presentation I also liked this, simply because I like anything that makes everything better.

  • ESRAM makes everything better
    • Render targets
    • Textures
    • Geometry
    • Compute tasks
 
Do you mean that early-z might be far more effective if z is in the ESRAM?

It appears that the ESRAM is low latency after all, so it would make sense if that were the case. You could clear pixel shading jobs that weren't needed very quickly and keep the rendering pipeline busy...
 
Hierarchical Z?

Looks like you're right.

7.3 HTILE buffers

The HTile buffer is a separate surface that holds the meta-data for compression and hierarchical optimizations. An HTile is a 32-bit word that represents the compression and hierarchical information for an 8x8, 4x8, 8x4 or 4x4 region of the screen as specified by HTILE_WIDTH and HTILE_HEIGHT. Each DB has an 8k htile cache (8k htiles, not bytes). 8x8 mode with FULL_CACHE=1 and 4 DBs provides 2 million pixels (8*8 pixels * 4 DBs * 8192 htiles). By contrast 4x4 mode with FULL_CACHE=0 and 4 DBs provides 262144 pixels (4*4 pixels * 4 DBs * 8192 htiles * 1/2 cache). On evergreen/cayman only 8x8 mode is supported.

http://amd-dev.wpengine.netdna-cdn....013/10/evergreen_cayman_programming_guide.pdf


Appears to mean they recommend keeping the entire HTILE cache in ESRAM.
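The coverage figures in the quoted guide are straightforward to verify. A quick sanity check of the arithmetic (the constants are taken directly from the text above):

```python
# Sanity-check of the HTile cache arithmetic from the Evergreen/Cayman
# programming guide: each 32-bit HTile covers a tile_w x tile_h pixel
# region, and each DB caches 8192 htiles.
HTILES_PER_DB = 8192
NUM_DBS = 4

def htile_coverage(tile_w, tile_h, full_cache=True):
    """Pixels covered by the cached htiles across all DBs."""
    cache_fraction = 1.0 if full_cache else 0.5
    return int(tile_w * tile_h * NUM_DBS * HTILES_PER_DB * cache_fraction)

print(htile_coverage(8, 8, full_cache=True))    # 2097152 (~2 million pixels)
print(htile_coverage(4, 4, full_cache=False))   # 262144 pixels
```

The 8x8 full-cache figure comfortably covers a 1080p depth buffer (about 2.07 million pixels), which fits the recommendation to keep the whole HTILE buffer resident in ESRAM.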
 
So not only is the Xbone running the CPU at a higher clock, it also has one more core available for games, which can be utilized up to 70%.
Very neat job. I wonder if Microsoft dropped the yield gain from having 7 instead of 8 cores as a response to the PS4 reveal. Seems like a great compromise.
 
Not quite sure what you're asking, but the Bone has all 8 cores active. The 8th core is system reserved, but the 7th is now partially available for games, depending on developer choices.
 
Not quite sure what you're asking, but the Bone has all 8 cores active. The 8th core is system reserved, but the 7th is now partially available for games, depending on developer choices.
Ohh, well AFAIK the PS4 has 6 cores for games and 2 reserved for the OS. I was speculating (and not the only one doing that) whether it actually had one core as a spare for better yields. Now we know that's definitely not the case for the Xbone.

And the CPU advantage is even bigger than just the clock upgrade.
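Roughly quantified, using the core counts discussed in this thread plus the commonly reported clocks (Xbox One 1.75 GHz, PS4 1.6 GHz; treat the clocks and the simple core-GHz metric as assumptions):

```python
# Rough comparison of game-available CPU throughput, counting six full
# cores plus up to 70% of the 7th on Xbox One, versus six cores on PS4.
# Clock speeds are the commonly reported figures, not from the SDK.
XB1_CLOCK_GHZ = 1.75
PS4_CLOCK_GHZ = 1.60

xb1_cores = 6 + 0.7          # six full cores + up to 70% of the 7th
ps4_cores = 6.0              # per released developer material

xb1_ghz_total = xb1_cores * XB1_CLOCK_GHZ
ps4_ghz_total = ps4_cores * PS4_CLOCK_GHZ

print(f"Xbox One: {xb1_ghz_total:.2f} core-GHz for games")
print(f"PS4:      {ps4_ghz_total:.2f} core-GHz for games")
print(f"Advantage: {100 * (xb1_ghz_total / ps4_ghz_total - 1):.0f}%")
```

By this crude measure the gap is around 22%, versus about 9% from the clock difference alone, though real workloads won't scale that cleanly across a partially reserved core.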

What are the chances that older games get patches that take advantage of the new SDK?
 
Ohh, well AFAIK the PS4 has 6 cores for games and 2 reserved for the OS. I was speculating (and not the only one doing that) whether it actually had one core as a spare for better yields.

If the PS4 only had 7 operational cores, they would not advertise (note the press event and the press release) and spec it as 8 cores; that would be fraud. We can safely say the PS4 has 8 operational cores, and released developer material confirms that developers have access to 6 of those cores. Maybe devs have access to more, but there's nothing to support that.
 
Turning off CPU cores to boost yields is not a good choice. It is better to leave redundancy for yields in the largest object on the die, the GPU core, and Sony is already doing that.
 
How the heck is MS able to do all the Xbox stuff with only one and a half cores, while Sony needs two full cores for stuttery, sluggish and very barebones multitasking and OS?
 
I think the PS4 uses 1 core for the OS, with another reserved for 'future plans', which always seemed to suggest to me that if Kinect being central to everything had worked out, Sony would need resources available to make it work for them too.

Now that's been pretty much ruled out, I'm sure Sony will follow suit at some point.

The whole idea of restrictions for optional features seems stupid to me anyway; let devs use what they want and tell the user that voice commands won't work during the game. It might piss off 5% of the userbase, but the rest will be happy with better games.
 
Well, that certainly is an interesting read. I have voiced concerns in the past about potential contention issues when using a shared memory pool for both CPU and GPU, and how ESRAM was the solution. Borne out by the docs...

"
  • On Xbox One the effect of DRAM contention is far greater than it was on Xbox 360, with the result being that it is much easier for the CPU and GPU to slow each other down.
Optimizing to reduce memory bandwidth is a key strategy for Xbox One. We strongly recommend that, where applicable, titles consider adopting data-oriented design because this can have the single largest effect on title performance. ESRAM is the single most effective means of reducing DRAM contention, but it is available only to the GPU."

It seems that having (and using!) ESRAM is critical to allowing the CPU and GPU to run concurrently without stalling one another. With the December SDK adding improved functionality for the management of ESRAM, things are looking good for future Xbox One titles.
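For anyone unfamiliar with the "data-oriented design" the docs recommend, the core idea is laying out hot fields in contiguous arrays (struct-of-arrays) so a pass streams only the bytes it needs, instead of dragging whole objects through the memory bus. A minimal sketch, with an invented entity layout for illustration:

```python
# Data-oriented design sketch: struct-of-arrays (SoA) vs array-of-structs
# (AoS). The entity fields below are invented for illustration.
from array import array

N = 10000

# AoS style: a position update touches the whole object, including
# fields (name, hp) the update never reads.
entities_aos = [{"x": 0.0, "y": 0.0, "vx": 1.0, "vy": 1.0,
                 "name": "e%d" % i, "hp": 100} for i in range(N)]

# SoA style: the position update streams four flat float32 arrays.
xs = array("f", [0.0] * N)
ys = array("f", [0.0] * N)
vxs = array("f", [1.0] * N)
vys = array("f", [1.0] * N)

def update_soa(dt):
    """Integrate positions, touching only 16 bytes per entity."""
    for i in range(N):
        xs[i] += vxs[i] * dt
        ys[i] += vys[i] * dt

update_soa(1.0 / 60.0)
print(f"bytes touched per entity (SoA): {4 * xs.itemsize}")
```

Less data pulled through DRAM per pass means less contention for the GPU, which is exactly the effect the quoted recommendation is after.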
 
I think the PS4 uses 1 core for the OS, with another reserved for 'future plans', which always seemed to suggest to me that if Kinect being central to everything had worked out, Sony would need resources available to make it work for them too.

Now that's been pretty much ruled out, I'm sure Sony will follow suit at some point.

The whole idea of restrictions for optional features seems stupid to me anyway; let devs use what they want and tell the user that voice commands won't work during the game. It might piss off 5% of the userbase, but the rest will be happy with better games.


I don't know; playing Destiny on XBL, in chat I hear a lot of people using their voice commands, like "Xbox record that", "Xbox snap party", and so forth.

You can do that stuff with buttons now, but some people probably like the voice commands better.

30% of one core isn't enough to worry too much about...
 
If the game itself has awesome voice commands (imagine Dragon Age with voice commands like "xbox, inquisition quest", "xbox, teleport to Skyhold", "xbox, go to war-room") but the dev needs the extra resources, can they choose to disable the system voice commands?
 
If the game itself has awesome voice commands (imagine Dragon Age with voice commands like "xbox, inquisition quest", "xbox, teleport to Skyhold", "xbox, go to war-room") but the dev needs the extra resources, can they choose to disable the system voice commands?

Dragon Age did have awesome voice commands. I was using "summon mount" and "quest map" regularly. From my understanding, if a title wishes to offer that sort of NUI functionality, it doesn't get to use any of the 7th core.
 
The whole idea of restrictions for optional features seems stupid to me anyway; let devs use what they want and tell the user that voice commands won't work during the game. It might piss off 5% of the userbase, but the rest will be happy with better games.

PS3 universal voice chat says hello :D
 
IIRC, some Xbox functions would be impractical if you removed the system voice commands, like recording.
 