Xbox One November SDK Leaked

It's a lot of money, but long-term cost plays a bigger role in console parts than in PC parts. PCs are brute force; consoles try to do more with less. They could have gone the original Xbox route if brute force was the goal. I think the Xbox design will get some wins as time goes on. Microsoft is playing the long game.
 
Having read a ton of articles over the years, I have noticed that Microsoft has been designing chip architectures for some time now. I believe they started around 2004 with a small group and expanded to over 200 engineers. When they introduced the SoC for the 360 at Hot Chips, it seemed a lot of the work was done in-house. RRoD was caused by a change one of their people made. Even one of the AMD engineers said Microsoft came to them with a roadmap for die shrinks and the eventual combining of the CPU and GPU. I wouldn't underestimate what Microsoft does in-house. They spend close to ten billion a year on R&D, and a huge chunk of that is in the Devices division.


Microsoft have some very impressive silicon brains... even their ambitions of refitting their millions of datacentre servers with their own co-processors and SoCs have been publicly talked about for a long time.

They have these FPGAs that currently sit on top of Intel CPUs... and they plan on moving to FPGAs sitting on top of customized SoCs, probably utilizing Doug Burger's E2 cores...

MS have some great minds over there!
 
I've read most of the recent posts and was wondering about the feasibility of a few theories. I've read the talk of there seeming to be extra parts and their potential uses. There have been rumors ...
The law of rumours is that, most of the time, they are untrue. In this case, many of the rumours were started by an Xbox fanatic who'd post extreme theories to get the attention of other Xbox fans. They are founded on the clueless delusions of a nutter. Ergo, ignore the rumours and just go with the details that are revealed through trustworthy sources - the Architects' panel at launch, VGLeaks, and the SDK leak.

For this reason, new members posting this guy's theories are looked on with suspicion. They tend to get booted out after a few madness posts lacking reason, and then they sometimes try again with second accounts. Yes, that means I'm thinking you could be one of the recently axed troublemakers... ;)

Is a Hyper V cluster possible with a tablet containing a modified Beema APU?
To achieve what? So the game can seamlessly parallel process on tablet and console without the devs having to code for this?
 
The law of rumours is that, most of the time, they are untrue.

Unless you went to business school in the 90's and learned they are normally 95% true. ;)
Or was that the "grapevine"? I still have that book for some odd reason, I will go look at it next time I am in the attic. Outside of tech talk I am sure...

I think a certain site has caused some rumors/notions to stay out there longer than they normally would have. We already know they had several silicon bake-offs, and no doubt one or more of those could have been more in line with the leaked 720 documents, or that faked (or old?) email with specs (I think it had the Xbox Surface listed).

There are a few things I am curious about, but nothing like 50 hidden co-processors or stacked anything. For tech talk, I would love for there to be special sauce; it would be fun to pick apart and talk about. The good news is there are still some things that are not fully understood, and in time I am sure we will learn more about these optimizations.

Brad Wardell called out Azure in his blog for not making use of GPUs - "Right now, this isn’t doable because cloud services don’t even have video cards in them typically (I’m looking at you Azure. I can’t use you for offloading Metamaps!)"

Yeah I go to misterxmedia, so ban me.

But why is there no discussion of XDMA, or of the two different L2 caches: both 512k, with one 4 8-way module; the other 512k L2 is 16-way and is per GPU. Why no discussion of the 16 pixels per VSP per cycle, with each pixel being 32bpp, or 32 bits? THIS IS ALL FROM THE XDK.

XDMA could just be a copy-and-paste thing, as other parts of the XDK can be found verbatim in other MS documents (server, etc. - let me go check some of the XDKs that I have). I guess I missed the two L2 caches?
 
Yeah I go to misterxmedia, so ban me.

But why is there no discussion of XDMA, or of the two different L2 caches: both 512k, with one 4 8-way module; the other 512k L2 is 16-way and is per GPU. Why no discussion of the 16 pixels per VSP per cycle, with each pixel being 32bpp, or 32 bits? THIS IS ALL FROM THE XDK.

We did talk about the XDMA. Go back several pages. XDMA is the high-bandwidth PCI Express replacement for the external connector used to link GPUs in CrossFire mode. That's great: it's used to transfer completed frames larger than 1440p back to the master GPU for display. Except it's entirely pointless in this scenario, for the main reason that the X1 has unified RAM, so there is no reason for that copying or that bridge to exist.

You claim there are two GPUs; do you also now claim that there are 16 GB of RAM? Can you show me where that is? Because even with 2 GPUs, it's still going back to that 8GB of DDR3. If that's what's happening, then XDMA is useless. The SDK is also not up to date; did you mention that as well? There is tons of information in there that has no relevance to Xbox One. The summary still indicates that the Xbox GPU is running at 800MHz and the processor at 1.6GHz, but we already know that to be false.

So what's more likely here, documentation from older SDKs that was never culled and updated everywhere? Or a hidden GPU?

Do you know how the scientific method works? We build upon what we know. Correlation is not causation. The Xbox One is built upon countless amounts of technology and evidence before it. No magic. We can determine the composition of a star millions of light years away because of the light spectrum it produces. Are you telling me that we can't take a proper X-ray of a silicon chip? MS has provided all the information and all the architecture. They showed the dual GCP, they showed the dual render pipe. They've said flat out there is no dual GPU.

The prospect of discovery is clouding your ability to think critically. It's like me making the claim that giants, hobbits, and dragons exist, only you can't see them or feel them, but I have the bones of midgets, dinosaurs, and cavemen to prove it.

Proper discovery is about working with the evidence that is displayed before you, working with that, and replicating the results. That is discovery; sometimes it creates things that we didn't know. When people were playing with electricity, they didn't know it would lead to computers. We have a situation with dual GCPs and render pipes, but we aren't hoping and using wishful thinking that it becomes a dual GPU. We are using our knowledge of what we know about the chip - based on GCN, it looks like a Bonaire - its output performance levels, its power output, and the power output and performance of similar devices, and using that as a measuring stick for this new device. We can look at its competitor, which is based on the same processor and the same GPU family and has a similar memory setup to a PC, and we can look at its performance and make very direct comparisons in performance profile. You have no reason to believe that DX12 will unlock some mysterious silicon that doesn't exist. Why wait until DX12? They certainly didn't wait to allow the ESRAM and DMA engines to be used. Why the slowdown on dual render pipes and GCPs and the dual GPU?

I'll let you in on our secret: they aren't. They've likely always been in use. That dual GPU doesn't exist. DirectX 12 will help improve things for Xbox One, maybe make better use of it, but there is nothing behind the curtains.

Stop chasing something that doesn't exist.
 
It's not cache coherent
That doesn't mean it is not coherent. As far as I understood it, ONION goes through PCI, and that's coherent. If you are using the GPU over UC memory, it's fully coherent, whereas the GARLIC bus isn't.
ONION+ added the possibility to maintain coherency for cached content in the GPU by selectively flushing it, or something like that.


I think the hypervisor is the third OS.

VM Enter/Exit does not come cheap, and can kill your performance outright. Somewhere it was reported that the VM enter/exit from a masked interrupt was hindering performance as well.
From a security POV, too, it is highly unlikely anyone would have been so high as to even think of making an OS in the hypervisor.

Managing hardware in the hypervisor is quite expensive, so it doesn't look like a good idea.

(OT: even after one year we have to read about hidden silicon et cetera??)
 
Sorry, but VGLeaks has to be untrusted too.
Not on US numbers. They get the same NPD numbers as the rest of us and just record them. We know they change their speculations to real data when it's available, and US data is reliably (well, to GAF/Internet standards!) supplied.
 
Well, instead of mocking, be brave: check misterxmedia to see where mrc gets these things in the XDK, use your XDK to double-check the source, and then come back with a proper explanation. And do not forget the 16 pixels per cycle per VSP.

If all this is true, why is the XB1 being beaten by the PS4? Everything you mention is a hardware issue, requiring no software/drivers to work, and yet it will be 'enabled' by an update?
 
Well, instead of mocking, be brave: check misterxmedia to see where mrc gets these things in the XDK.
Instead of being 'brave', be intelligent. Look at MisterXMedia's track record. He's been talking unsubstantiated fanboy bollocks for years. With years of being proven wrong, why believe he has any insight now? Then apply his theories logically. "XB1 has this amazing super power from amazing custom hardware." Uh... so why's it playing the same games as PS4 at lower resolutions/framerates? "Because it's not yet available. Because devs need to learn the ESRAM. Because DX12!"

We had the same bollocks with PS3 and teh power of teh Cellzorz. At the end of the gen, most cross platform games were lucky to be on par with XB360. All the wonderful theories in the world couldn't make up for a comparatively poor GPU. The proof of the pudding is what's on screen, and that tallies precisely with what was described by the Architect's Panel and VGLeaks. A few uncertain specs can't change that, so must be interpreted within the scope of what's known. That's the difference between science and religion.

The last thing we need here is more frickin' MisterX crap. Go hassle some other place with your misguided faith.
Yeah I go to misterxmedia, so ban me.
Done!

NOTE - Visiting MXM is not a bannable offense, although it is damaging to one's health and completely discouraged. However, raising his views in a B3D discussion, especially when already discussed/ignored, and persisting with them, is.
 
There's a dev we can listen to now. He gives some information about DX12 that may help us understand a little bit more what we've got with the SDK. He is the CEO of Stardock and explains the non-NDA parts of DirectX 12: http://forums.littletinyfrogs.com/460524.
"Guess how many light sources most engines support right now? 20? 10? Try 4. Four. Which is fine for a relatively static scene. But it obviously means we’re a long long way from having true “photo realism”. "

This guy sounds pretty clueless too! Deferred rendering? Forward+? The limitations of Stardock's contribution to the DX12 conversation are already noted, and they are far from being a point of authority.
 
If you were to install Windows on a VMware server, does it really kill that much performance? We're not talking about running a VM using an emulator inside VirtualPC or something like that on a Windows installation. I'd thought that VMware and other systems like that had become very performance optimized because of hardware virtualization in CPUs and GPUs. The Xbox guys also mentioned their Host OS is highly optimized because they know the exact specifications of the two VMs it has to host. It's not like a traditional setup where you could have any number of VMs with vastly different specifications.

Right now it does not appear that Xbox One is at a massive disadvantage because of the VM setup. Now that the SDK is being sorted out for things like the API, we're starting to see performance that falls more in line with the disparity in GPU power for PS4 (900p vs 1080p) etc. At least, it does not appear that the VM setup is a significant performance hindrance, but it's pretty hard to tell without more specifics.
 
We are talking about hardware information here, not about sales numbers. From the beginning they had some good data about the PS4, but not about the Xbox One. They assumed the stuff inside the X1 might be similar, and they thought the GPU was something like a 77xx AMD card. But is there any real data anywhere (and not a rumor launched by no one) that proved it?

So no, not all and everything from VGLeaks should be trusted. It should be read because it's interesting, but without proof it may be wrong intel.

http://www.eurogamer.net/articles/digitalfoundry-the-complete-xbox-one-interview

"Every one of the Xbox One dev kits actually has 14 CUs on the silicon. Two of those CUs are reserved for redundancy in manufacturing."

We know it's at 853MHz, we know it uses 12 CUs. Just a basic GPU comparison between a 7770 GHz and the Xbox One:

Xbox One:
768:48:16 853MHz
1.31Tflops, 40.9GT/s, 13.6GP/s

7770GHz:
640:40:16 1000MHz
1.28Tflops, 40GT/s, 16GP/s

Would you say that's "something like a 77xx GPU"?
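
For anyone who wants to sanity-check those numbers, the arithmetic is just unit counts times clock (this is my own back-of-the-envelope sketch, not anything pulled from the SDK):

```cpp
// Back-of-the-envelope GPU throughput: ALUs * 2 FLOPs (FMA) * clock,
// TMUs * clock, ROPs * clock. Figures are the publicly known configs.
#include <cstdio>

static void rate(const char* name, int alus, int tmus, int rops, double ghz)
{
    std::printf("%s: %.2f TFLOPS, %.1f GT/s, %.1f GP/s\n",
                name,
                alus * 2 * ghz / 1000.0,  // 2 FLOPs per ALU per clock (FMA)
                tmus * ghz,               // texels filtered per clock
                rops * ghz);              // pixels written per clock
}

int main()
{
    rate("Xbox One   768:48:16 @ 0.853 GHz", 768, 48, 16, 0.853);
    rate("7770 GHz   640:40:16 @ 1.000 GHz", 640, 40, 16, 1.0);
}
```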
 
We are talking about hardware information here, not about sales numbers. From the beginning they had some good data about the PS4, but not about the Xbox One. They assumed the stuff inside the X1 might be similar, and they thought the GPU was something like a 77xx AMD card. But is there any real data anywhere (and not a rumor launched by no one) that proved it? Again, on that thread some information is taken for truth without physical evidence.

So no, not all and everything from VGLeaks should be trusted. It should be read because it's interesting, but without proof it may be wrong intel.

But let's go back to the real stuff. After the article written by Brad Wardell, I think we should look at what we have in that SDK in another way! He explains the differences between DX11 and DX12 and, in a light way, how CPUs are going to talk to the GPU.

For example, with DX11, only one core of an x-core CPU can send work to the GPU at a time; all the data the CPU sends to the GPU is stacked into one thread. With DX12, all the CPU cores will be able to send work to the GPU, so rather than having one thread you will have x threads, depending on the number of CPU cores or the way the devs build the game.
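
Purely as an illustration of that difference (a generic D3D12 sketch I wrote, not anything from the leaked SDK, and with error handling omitted): each CPU thread records its own command list, and all the lists are then handed to the command queue together.

```cpp
// Sketch: multi-threaded command recording in D3D12. Each worker thread gets
// its own allocator + command list; submission happens in one call at the end.
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    D3D12_COMMAND_QUEUE_DESC qdesc = {};   // defaults = direct queue
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&qdesc, IID_PPV_ARGS(&queue));

    const int workers = 4;
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocs(workers);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workers);

    std::vector<std::thread> threads;
    for (int i = 0; i < workers; ++i)
    {
        threads.emplace_back([&, i]
        {
            // Per-thread allocator and command list: recording is parallel.
            device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                           IID_PPV_ARGS(&allocs[i]));
            device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                      allocs[i].Get(), nullptr,
                                      IID_PPV_ARGS(&lists[i]));
            // ... record draws/dispatches here ...
            lists[i]->Close();
        });
    }
    for (auto& t : threads) t.join();

    // Submission: all independently recorded lists go to the queue at once.
    // (A real app would also wait on a fence before tearing anything down.)
    ID3D12CommandList* raw[workers];
    for (int i = 0; i < workers; ++i) raw[i] = lists[i].Get();
    queue->ExecuteCommandLists(workers, raw);
}
```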

So when MS said the Xbox One is balanced, and when you combine that with DX12, you start to understand what MS is doing.

The question I have, which I recognize is outside the SDK context, is: what about OpenGL? If MS did not do this before, why didn't OpenGL think about it before? Now they know, so they can also do it, and the PS4 (and OpenGL games, of course) will benefit from it too. That's why the CEO of Stardock said it's a win-win for everybody (PC, X1 and PS4).

No, let's do one better than listening to Brad Wardell. Let's go to amazon.com and purchase a book called Introduction to Game Programming with DirectX 11. Let's go to location 3315 on Kindle and read the excerpts about how deferred contexts, cores, and immediate contexts work in DX11. Then we can compare that to what Microsoft says about how it should work in DirectX 12.

We don't need to take his word for how he thinks it works. What you need to do is your own research. Period. There are tons of source books out about DirectX 11; there is _no secret here_. Just pick up a damn book! Go to a library! Go get an e-reader and e-pirate some books. Who cares! The fact is, the reference material is out there, just like how there will be reference material for DX12.
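
For what it's worth, the DX11 mechanism those books cover is completely public. A minimal sketch (mine, not the book's code and not from the XDK): worker threads can record on deferred contexts, but everything still has to be funnelled through the single immediate context for submission, which is exactly the bottleneck DX12 removes.

```cpp
// Sketch: DX11 deferred contexts. Recording can be spread across threads,
// but ExecuteCommandList still runs on the one immediate context.
#include <d3d11.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
#pragma comment(lib, "d3d11.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D11Device>        device;
    ComPtr<ID3D11DeviceContext> immediate;
    // WARP device so the sketch runs anywhere; error handling omitted.
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_WARP, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION,
                      &device, nullptr, &immediate);

    const int workers = 4;
    std::vector<ComPtr<ID3D11CommandList>> lists(workers);

    std::vector<std::thread> threads;
    for (int i = 0; i < workers; ++i)
    {
        threads.emplace_back([&, i]
        {
            ComPtr<ID3D11DeviceContext> deferred;
            device->CreateDeferredContext(0, &deferred);

            // Record some trivial state on this thread's deferred context.
            D3D11_VIEWPORT vp = { 0.0f, 0.0f, 1280.0f, 720.0f, 0.0f, 1.0f };
            deferred->RSSetViewports(1, &vp);

            // Bake the recorded commands into a command list.
            deferred->FinishCommandList(FALSE, &lists[i]);
        });
    }
    for (auto& t : threads) t.join();

    // The choke point: only the single immediate context can submit,
    // so every command list funnels through this one thread.
    for (auto& list : lists)
        immediate->ExecuteCommandList(list.Get(), FALSE);
}
```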

Extrapolating from what one person says and writes, instead of researching it yourself, is religion and faith, not science.

But let's get better than that. We have members here at B3D that are definitely developers. Some of them work on The Order: 1886 and have been DirectX/XNA MVPs since January 2009. [sorry, I just sorta pulled this ;P]

Hey look, he even wrote a BOOK right here: http://www.amazon.ca/Practical-Rend...=sr_1_2?s=books&ie=UTF8&qid=1421422778&sr=1-2

A book! On DirectX 11. You'd think he'd know some things about it, likely a lot more than Brad Wardell when it comes to graphics technologies. So let's just stop with the name-dropping already; these members will not engage in these stupid discussions because there are lower-class citizens like myself that will filter out the dumb before they really need to step in. That, and he's likely busy! Crunch time for him, since The Order is coming out very soon!

DirectX 12 is awesome. I'm sure there are a lot of positive subtle things about it. It's not going to reveal alien architecture hidden in the Xbox. Everyone could benefit from better-designed APIs, additional feature sets to support different types of techniques, and more flexibility for programmers to make things work the way they want them to work.

You know how to do research. If you don't, I suggest you learn.
 
Would you say that's "something like a 77xx GPU"?

Addendum: Durango has two geometry pipes (ala Bonaire, Pitcairn/Tahiti configs). Cape Verde has just the one (7750/7770)
 
Right! Virtualization is not the thing that makes the difference in games between the two consoles. But 900p/1080p is a bad choice for your example: can you find any open-world game like Forza Horizon 2 on PS4 (1080p/30fps, 4x MSAA)? Yes, you can find some 1080p/30fps games easily, but open world with 4x MSAA too?
We can take ACU as an example too, but it doesn't prove anything, only that for both consoles the best is yet to come. The question you might ask is, "Do I want a beautiful but empty game, or a nice game with life and AI?"

I don't really understand what you're trying to tell me. This really doesn't have anything to do with what I was posting. The GPUs are known in both consoles. All I'm saying is virtualization does not seem to have a significant performance impact, looking at the performance of games on both consoles. The difference seems to be in line with the differences in the GPUs.
 
That doesn't mean it is not coherent. As far as I understood it, ONION goes through PCI, and that's coherent. If you are using the GPU over UC memory, it's fully coherent,
The GPU can be treated as an IO device, but cache coherence does not work if it only applies in one direction. This case is basically saying that the bus is cache coherent when not using the cache.

ONION+ added the possibility to maintain coherency for cached content in the GPU by selectively flushing it, or something like that.
Onion+ prevents the GPU from keeping data from accesses to coherent memory pages in its caches. This means there is no need for selective flushing, because the data will not be resident.

Sony created a volatile flag (or possibly customized an existing volatile flag; the SDK mentioned cache lines labeled as volatile with different behavior) that allowed for selective flushing of compute data in the caches. I did not see it required that the data be of a CPU-coherent type, and bypassing the cache is going to bypass cache flags.

From a security POV, too, it is highly unlikely anyone would have been so high as to even think of making an OS in the hypervisor.
The OS maker said the third OS is what allocates resources to the other two.

Managing hardware in the hypervisor is quite expensive, so it doesn't look like a good idea.
The system has three customized operating systems that are designed with a paravirtualization scenario in mind.
The resource splits are pretty static (memory is fixed, processing resources shift at specific points), and the big performance concern is the GPU, which has done things like duplicate its graphics front end. The Xbox designers discussed rearchitecting the GPU driver model to cut down the number of interrupts, which were incurring a hefty performance cost for a reason.
 
This case is basically saying that the bus is cache coherent when not using the cache.
Yes, that is the point. The Garlic bus, by its nature, cannot be coherent, as it should go directly to the DRAM controllers, bypassing the memory controller.

...they had to cut down the number of interrupts because of the need for them to be processed first in the hypervisor - which incurs the VM Enter/Exit costs.
 