Xbox One November SDK Leaked

Same as on the Xbone: the OS uses them. The KZ:SF presentation around the PS4 launch showed that developers at the time had access to six full CPU cores.

As for yields, the PS4 GPU has 20 CUs [same as the 7870], but 2 are disabled for yield reasons. The GPU is very large, so the chance of a defect landing in the GPU is much higher than it hitting the CPU.
http://www.extremetech.com/wp-content/uploads/2013/11/ps4-reverse-engineered-apu.jpg

Yep, I know that games only have access to six cores on PS4, I was just wondering if, of the two cores not available for games, one was marked for redundancy.

Without snap and the NUI it's pretty clear that MS could make do with only one reserved CPU core. As Sony have neither snap nor Kinect, it seems that they should be able to make do with only one core, leading me to wonder about the possibility of a redundant core.
 
"ESRAM resource management. Different virtual address ranges for the color, Fmask and Cmask planes (or Depth, Stencil, and Htile buffers) now enables developers to cleanly separate those planes into ESRAM vs DRAM"

Neat.
 
"New Shader Semantic Hash"!!! YES!!!

I have no idea what this means, like most of the graphics section of the document.
 
"ESRAM resource management. Different virtual address ranges for the color, Fmask and Cmask planes (or Depth, Stencil, and Htile buffers) now enables developers to cleanly separate those planes into ESRAM vs DRAM"
Neat.

I don't know why, but I've just been assuming that developers could already do this. Otherwise you'd have little control over putting your most BW-heavy buffers (and sections of buffers) in the ESRAM. You'd end up wasting ESRAM BW and saturating main RAM, and consequently feeding the internets with tears and jeers...
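(For what it's worth, the kind of decision that feature enables boils down to something like the sketch below: rank the planes by traffic per byte and fill the 32 MiB of ESRAM with the worst offenders. Hypothetical types and names; obviously not the real XDK allocator API.)

#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

// One render-target plane (color, Fmask, Cmask, depth, ...), hypothetical description.
struct Plane {
    std::string name;
    uint64_t    sizeBytes;        // footprint of the plane
    double      trafficPerFrame;  // estimated read+write traffic (arbitrary units)
    bool        inEsram;
};

// Greedy placement: give the 32 MiB of ESRAM to the planes with the highest
// traffic per byte; everything else stays backed by DRAM.
void PlacePlanes(std::vector<Plane>& planes, uint64_t esramBudget = 32ull << 20) {
    std::sort(planes.begin(), planes.end(), [](const Plane& a, const Plane& b) {
        return a.trafficPerFrame / a.sizeBytes > b.trafficPerFrame / b.sizeBytes;
    });
    uint64_t used = 0;
    for (Plane& p : planes) {
        if (used + p.sizeBytes <= esramBudget) {
            p.inEsram = true;   // would map this plane's virtual range to ESRAM
            used += p.sizeBytes;
        }                       // otherwise the range stays in DRAM
    }
}

int main() {
    std::vector<Plane> planes = {
        {"color",   8ull << 20, 10.0, false},  // hammered every frame
        {"depth",   8ull << 20,  8.0, false},
        {"gbuffer", 16ull << 20, 6.0, false},
        {"shadow",  16ull << 20, 1.5, false},  // mostly write-once, read-once
    };
    PlacePlanes(planes);
    for (const Plane& p : planes)
        std::printf("%-8s -> %s\n", p.name.c_str(), p.inEsram ? "ESRAM" : "DRAM");
}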
 
3 display planes. 1 reserved for system, 2 for the Game OS. Game OS planes are HUD and background (3D view). Mystery solved. Default scaling is 6-tap Lanczos horizontal. Other scaling modes are Bilinear, 4-tap SINC, 4-tap Lanczos, 6-tap Lanczos, 6-tap Lanczos (soft, low-pass filter), 8-tap Lanczos, 10-tap Lanczos. Filter type can be changed on the fly.

Well, the HUD could be 3D rendered as well, but you know what I mean.
 
Anyone have an image comparison of those filters? :p It's been a while...

It doesn't seem like there is any performance difference between the filters. It's really an "artistic" choice. Seems like the higher the taps, the more high-frequency detail is preserved at risk of creating artifacts. Maybe that's the "sharpening" filter people talked about on release. Maybe devs were just picking high-tap filters. There doesn't seem to be anything in the document about other filters that can be applied. Whatever was going on in COD: Ghosts and KI was either post-processing or just artifacts from a high-tap filter.
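(No image handy either, but the kernel is easy enough to eyeball: an N-tap Lanczos is a sinc windowed by a wider sinc, with roughly a = N/2, so more taps means a wider window and more high frequencies kept, along with more ringing. Quick sketch of the math only; it assumes nothing about the actual scaler hardware.)

#include <cmath>
#include <cstdio>

// Lanczos-a kernel: sinc(x) * sinc(x / a) for |x| < a, else 0.
// An N-tap 1D filter corresponds roughly to a = N / 2 (e.g. 6-tap -> a = 3).
double Lanczos(double x, int a) {
    const double kPi = 3.14159265358979323846;
    if (x == 0.0) return 1.0;
    if (std::fabs(x) >= a) return 0.0;
    const double px = kPi * x;
    return a * std::sin(px) * std::sin(px / a) / (px * px);
}

int main() {
    // Raw weights a 6-tap (a = 3) filter would use when sampling halfway
    // between two source pixels (offsets -2.5 .. +2.5). A real scaler would
    // normalize them so they sum to 1.
    for (int i = -2; i <= 3; ++i) {
        double offset = i - 0.5;
        std::printf("offset %+.1f  weight %+.4f\n", offset, Lanczos(offset, 3));
    }
}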

There are some recommendations for a few of the filters, not much, but this shitty viewer I'm using won't let me copy text ... :(

You can also change the resolution of each plane and the update rate on the fly. You could have the HUD at 30Hz, the game at 60Hz and change the resolution of the game on the fly.
 
Game OS gets 91.5% of the GPU, NUI gets 4.5%, and System gets 4%. Opting out of NUI gains you the 4.5% (Kinect IR and depth), with the colour camera and speech still available. You can only disable NUI during gameplay, not in menus. So if you pause the game to go to a menu or lobby screen, the NUI turns back on. I guess they want to encourage devs to use NUI for menu navigation? Speech is always available, whether NUI is on or off.

The other 4% is the system reserve for snapped apps and basic system features. If I'm understanding this correctly, the amount of that you can pull into games is flexible, but you have to handle the situations where the system takes back the full 4% by snapping an app or something like that. They recommend adjusting the render resolution of your game in the case of a snapped app instead of downscaling to the smaller screen space.
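(A rough sketch of what that recommendation amounts to, with made-up names and taking the percentages above at face value: when the system claims its reserve, scale the game plane's render resolution down so the per-frame GPU cost stays flat, and let the display plane hardware handle the final scale to the output.)

#include <cmath>
#include <cstdio>

// GPU share available to the title, using the figures from the post above:
// 91.5% guaranteed, plus the flexible 4% when no app is snapped.
constexpr double kFullShare    = 0.955;
constexpr double kSnappedShare = 0.915;

// Pick a render size for the game plane so that pixel cost scales with the
// available GPU share. Cost ~ pixel count, so each axis scales by sqrt().
void OnSystemReserveChanged(bool appSnapped, int baseW, int baseH) {
    double share = appSnapped ? kSnappedShare : kFullShare;
    double axis  = std::sqrt(share / kFullShare);
    int w = static_cast<int>(baseW * axis) & ~1;   // keep dimensions even
    int h = static_cast<int>(baseH * axis) & ~1;
    std::printf("game plane: %dx%d (display plane scaler handles the rest)\n", w, h);
}

int main() {
    OnSystemReserveChanged(false, 1920, 1080);  // full budget
    OnSystemReserveChanged(true,  1920, 1080);  // app snapped, system takes its 4%
}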
 
I don't know why, but I've just been assuming that developers could already do this. Otherwise you'd have little control over putting your most BW-heavy buffers (and sections of buffers) in the ESRAM. You'd end up wasting ESRAM BW and saturating main RAM, and consequently feeding the internets with tears and jeers...

Well, actually this was neat:

If you’re running with 2× MSAA, consider asking ATG for sample code that shows how to support 4× MSAA with the first two fragments of every pixel in ESRAM and the last two fragments in main memory. The last two fragments are accessed infrequently due to compression, so the GPU overhead is typically quite low.
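Back-of-envelope on why that split matters, assuming a 32-bit colour format at 1080p and ignoring Fmask/CMask and depth: the full 4x surface barely fits in ESRAM on its own.

#include <cstdio>

int main() {
    const double mib            = 1024.0 * 1024.0;
    const double pixels         = 1920.0 * 1080.0;
    const double bytesPerSample = 4.0;   // 32-bit colour

    double all4x  = pixels * bytesPerSample * 4 / mib;  // every fragment in ESRAM
    double half4x = pixels * bytesPerSample * 2 / mib;  // fragments 0-1 only
    std::printf("4xMSAA colour, all in ESRAM : %.1f MiB of 32 MiB\n", all4x);
    std::printf("fragments 0-1 in ESRAM      : %.1f MiB\n", half4x);
    std::printf("fragments 2-3 in DRAM       : %.1f MiB (rarely touched thanks to compression)\n", half4x);
}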
 
Seems like the higher the taps, the more high-frequency detail is preserved at risk of creating artifacts. Maybe that's the "sharpening" filter people talked about on release. Maybe devs were just picking high-tap filters. There doesn't seem to be anything in the document about other filters that can be applied. Whatever was going on in COD: Ghosts and KI was either post-processing or just artifacts from a high-tap filter.

Everyone prefers the beer-tap anyway. :3
 
There has been a lot of discussion at B3D in the past about MS launching the system before the software side was ready. I guess now we know after all this time.
It also makes you wonder if all the resignations over the last year had more to do with things like this than with TV.
 
There has been a lot of discussion at B3D in the past about MS launching the system before the software side was ready. I guess now we know after all this time.
It also makes you wonder if all the resignations over the last year had more to do with things like this than with TV.
We're a bit off-topic, but could they really be blamed? They didn't want a reversal of what the 360 did to the PS3.

As I quietly mentioned in the past, MS pulled all their OS guys and all their engineers onto Xbox before launch. The OS was largely nowhere close to ready right up until nearly day 1 (I'm paraphrasing my source; his response was much more "engineer under stress", lol).

The real question is how much further this carries forward with respect to the TV and Kinect side of things. If it's all slash-and-burn on features to deliver quickly and deliver the rest later, MS might have a much more comprehensive all-in-one entertainment vision than we were originally led to believe.
 
We're a bit off-topic, but could they really be blamed? They didn't want a reversal of what the 360 did to the PS3.

As I quietly mentioned in the past, MS pulled all their OS guys and all their engineers onto Xbox before launch. The OS was largely nowhere close to ready right up until nearly day 1 (I'm paraphrasing my source; his response was much more "engineer under stress", lol).

The real question is how much further this carries forward with respect to the TV and Kinect side of things. If it's all slash-and-burn on features to deliver quickly and deliver the rest later, MS might have a much more comprehensive all-in-one entertainment vision than we were originally led to believe.
Don't take me the wrong way. I love the fact the system gets better every month.
I just wonder how it all went down. They needed a change in leadership in the Xbox division. They had to go back on some things they planned, but it really is a decent all-in-one system. If they keep the improvements up on both the developer side and the user side, they have the potential to make it a great all-in-one box. Thumbing through the linked documents from above, it seems they have a full-res screenshot feature on the dev side (which I'm sure is a common feature). Wonder when they will bring it user side?
 
That's particularly damning lol. The community has been waiting for that screen shot feature for some time.
 
3 display planes. 1 reserved for system, 2 for the Game OS. Game OS planes are HUD and background (3D view). Mystery solved. Default scaling is 6-tap Lanczos horizontal. Other scaling modes are Bilinear, 4-tap SINC, 4-tap Lanczos, 6-tap Lanczos, 6-tap Lanczos (soft, low-pass filter), 8-tap Lanczos, 10-tap Lanczos. Filter type can be changed on the fly.

Well, the HUD could be 3D rendered as well, but you know what I mean.
Was the mystery about what the 3 display planes did (as I thought we had narrowed it down to OS, HUD + Game?), or was it the 2 graphics command processors and their purpose in managing the 3 display planes? Is there anything in the SDK documentation about those GCPs?
 
There has been a lot of discussion at B3D in the past about MS launching the system before the software side was ready. I guess now we know after all this time.
It also makes you wonder if all the resignations over the last year had more to do with things like this than with TV.

For the current gen, they were all substantially incomplete: the Wii U massively so, and the PS4 still can't suspend to RAM.
The Xbox One had a very abrupt 180 on its always-on connectivity requirement, which had to be re-engineered at a time when the platform should have been solidifying.

One side note: the VMEM documentation indicates a GPU L1 miss can take 50+ clock cycles to serve, an L2 miss to DRAM can take 200+ cycles, while an L2 miss to ESRAM can take 75+ cycles. This is in the context of the GPU units, so it looks like these are GPU cycles.
That seems to put 250+ cycles for a miss to DRAM and 125+ for a miss to ESRAM, or roughly half the latency. In a different portion, there is an expectation of a texture access realistically taking more than 100 cycles, and possibly around 400 if there is a miss.
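(Stacking those figures, assuming they add the way the text reads and that the cycles are the 853 MHz GPU clock:)

#include <cstdio>

int main() {
    const double gpuClockGHz = 0.853;                  // Xbox One GPU clock
    const int l1Miss = 50, l2ToDram = 200, l2ToEsram = 75;

    int dramTotal  = l1Miss + l2ToDram;    // L1 miss + L2 miss to DRAM
    int esramTotal = l1Miss + l2ToEsram;   // L1 miss + L2 miss to ESRAM
    std::printf("miss to DRAM : %d+ GPU cycles (~%.0f ns)\n", dramTotal,  dramTotal  / gpuClockGHz);
    std::printf("miss to ESRAM: %d+ GPU cycles (~%.0f ns)\n", esramTotal, esramTotal / gpuClockGHz);
}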
 
hm... so what would the implication be for framebuffer ops etc?
Those would be on a separate path, since SIMDs contend for an export bus that leads to ROPs that work outside of the vector memory system.
In the Durango thread I tried cross-referencing these numbers for the latencies seen by the CPU subsystem, which seems to give 40-60 CPU cycles for off-die access (20-30 in GPU terms).

It's not so clear because of how complicated the last-level miss process may be for the CPU side, and how much preparation for DDR3 is hidden by the extra delays injected by the remote cache snoop.
40-60 Jaguar clock cycles seems to be decently in the range of the DRAM latencies that Sandra reports for other desktop CPUs. The worse contributors seem to be the long latencies piled on by everything else.
DRAM's worst cases are likely much worse than the ESRAM would experience, however.
The ESRAM would seemingly be longer-latency than DRAM (150 CPU cycles versus 60), but I am uncertain as to what else is being lumped into their latency contributions.
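(Converting between the clock domains with the shipping clocks, 1.75 GHz Jaguar and 853 MHz GPU, the numbers do line up:)

#include <cstdio>

int main() {
    const double cpuGHz = 1.75, gpuGHz = 0.853;

    // 40-60 CPU cycles for the off-die portion, expressed in ns and GPU cycles.
    std::printf("40-60 CPU cycles = %.0f-%.0f ns = %.0f-%.0f GPU cycles\n",
                40 / cpuGHz, 60 / cpuGHz, 40 / cpuGHz * gpuGHz, 60 / cpuGHz * gpuGHz);
    // The 150-vs-60 CPU-cycle comparison above, in ns.
    std::printf("150 CPU cycles = %.0f ns, 60 CPU cycles = %.0f ns\n",
                150 / cpuGHz, 60 / cpuGHz);
}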

Whatever the GPU memory pipeline would be doing if a big chunk of it weren't being skipped is where the biggest win seems to be coming from.
I'm not sure exactly where the ROPs would fit in this, unfortunately.
 
mm... Thanks. So for shaders & tex ops, does it come down to higher utilization (fewer gaps) with the lower latency?

Maybe I'll move this...
 