Xbox One November SDK Leaked

Yes, that is the point. The Garlic bus, by its nature, cannot be coherent, as it goes directly to the DRAM controllers, bypassing the coherent memory controller.

After thinking on it, I'm not entirely sure that Onion can properly behave in the case of UC, either.
From the CPU side UC still requires cache snooping, in the event that the attributes on a page are modified and cached lines need to be evicted.

The GPU L2 cannot be snooped, which could mean that while Onion could use UC memory, it wouldn't necessarily do it properly.
The workaround may very well be that, as a guest client in the IOMMU setup, such a change would either happen only when the page is not shared with the GPU, or through a very heavily synchronized update to the guest TLB (a global stall).
This sounds more like it's considered IO coherent.
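
The Onion/Garlic distinction being discussed can be reduced to a toy decision rule (illustrative only; the function and parameter names are made up, not AMD's actual terminology): coherent Onion traffic snoops CPU caches, while Garlic traffic does not, so memory the CPU may cache is only safe on the snooping path.

```python
def pick_bus(cpu_cacheable, needs_coherence):
    """Toy model: Onion snoops CPU caches (coherent, lower bandwidth);
    Garlic goes straight to the DRAM controllers (fast, no snooping)."""
    if needs_coherence or cpu_cacheable:
        return "onion"   # must see (or invalidate) CPU-cached lines
    return "garlic"      # safe only for memory the CPU never caches

assert pick_bus(cpu_cacheable=True, needs_coherence=False) == "onion"
assert pick_bus(cpu_cacheable=False, needs_coherence=False) == "garlic"
```

The UC wrinkle above is exactly the edge of this model: a page that is uncached today might have its attributes changed later, which is why even UC pages can still require snoop-like synchronization.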
 
Case study, guys! I want your knowledge brought to bear on this brainstorm, because I think you'll find something and get a better understanding of the MS vision. Maybe I'm wrong, but I think you're looking too closely at the existing stuff and not at the possibly customized stuff from both console builders.
What if everyone brainstorms and comes up with the same answers they already have? What if their findings and interpretations aren't the ones you want to hear? Will you accept that you're wrong, or just repeatedly ask everyone to stop thinking inside the box and try to reinterpret everything the way you want?
 
I found this interesting ..

Seems like MS changed the DX11 APIs themselves and gave us two new "contexts", specifically for DMA and COMPUTE work -

[image: B7dhcm_CUAABNS2.png]


The interfaces for these new contexts are stripped down to their specific purpose; for example, the compute context does not have any Draw-related APIs, purely Dispatch-related ones...

[image: B7digfECAAAKICW.png]


And another important piece of info is..

For the March 2014 XDK, all members of this structure must be 0. Only one compute context can be created at a time. Future XDKs will expose more than one compute context at a time, submitting to different hardware queues. The compute context workloads run at low priority compared to graphics immediate context workloads. In a future release, you will be able to adjust priority for each compute context independently, and change a context's priority dynamically.


I hope DX11.3 (or whatever comes with DX12) has these two new contexts, as from the sounds of it they will be standard going forward (Compute, DMA, Deferred (existing), Immediate (existing)).
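
As a toy illustration of that split (Python used purely as pseudocode; the class and method names here are made up, not the actual XDK interfaces): the compute context exposes only dispatch-style entry points, while draws stay on the graphics contexts.

```python
class GraphicsContext:
    """Toy model of the existing immediate/deferred contexts: Draw and Dispatch."""
    def draw(self, vertex_count):
        return f"draw {vertex_count} vertices"
    def dispatch(self, x, y, z):
        return f"dispatch {x}x{y}x{z} thread groups"

class ComputeContext:
    """Toy model of the new compute context: Dispatch only, no Draw entry points."""
    def dispatch(self, x, y, z):
        return f"dispatch {x}x{y}x{z} thread groups"

class DmaContext:
    """Toy model of the new DMA context: pure copy/move operations."""
    def copy_resource(self, src, dst):
        return f"copy {src} -> {dst}"

# The stripped-down compute context has no way to issue draws at all:
cc = ComputeContext()
assert not hasattr(cc, "draw")
assert cc.dispatch(8, 8, 1) == "dispatch 8x8x1 thread groups"
```

The design point is that a narrower interface maps to a narrower hardware queue: work submitted through the compute or DMA context can be scheduled independently of the graphics pipeline.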
 
This thread is getting trolled again? WTF. Can a mod also ban that Lud0 dude?
Yeah, I agree. This thread is a magnet for MisterXMedia disciples, it seems.

Getting back to the meat: what's the new stuff we are sure of, actually in the docs and not in someone's fanciful misinterpretation of them? We have two GCPs instead of one, with no clear function for the second, and some suggestions from debug flags that the chip uses the XFire bus, although that could be copy/pasted docs. Or have MS repurposed it for something?
 
If the design does use it, perhaps it's some way to hand off pixel data from HDMI-in to the compositor?
Perhaps some other source of frame output data could go through there, like replay recording?
 
Thinking completely orthogonally and out of the box... could it be that these SoCs will eventually find their way into the datacenter and, as such, sit XFired together...

MS have made it clear that they may well want to move to a SoC + FPGA future in their datacenters...
 
The XDMAs might well be used as the data move engines in some way or another; at least, that's how it looks purely from a layman's point of view, without any data to support the claim.
 
So I will apologize, but it's not the way I want it, it's the way it could be. But why does everybody who doesn't think like them have to be from MrX? Just because most people think one thing is true doesn't mean it has to be (not sure I'm clear; that expression from my country is hard to translate).

My account was banned and my IP blacklisted without giving me time to explain. Thanks for being so open-minded. I never wanted to spam, and I apologize if you think so. I will not post here anymore, but if you're willing to take a look at MY point of view, please PM me; I would be honored.



Thanks for having such open minds!
You have to understand that it isn't personal. It also isn't about having an open mind; a lot of members do have an open mind. It's just that we have an unbelievable amount of information about the Xbox One out in the open, probably more than we have had for any other console. To believe in dual GPUs goes against everything we have read from official MS docs and interviews. There are things that we as a forum don't completely understand about the Xbox One yet, but there have been so many people posting info from other sites that has been proven unreliable, and not just recently, mind you; it has been going on since early 2013. So you should understand the mods' reluctance to allow the discussion to be brought up over and over again.
 

Well, at least you managed to get past the IP blacklist. Do you know what duck typing is? In Python, variables are duck typed. That means: if it walks like a duck, swims like a duck, and sounds like a duck, that bird will be called a duck.
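
A minimal Python sketch of that duck-typing point: the function below never checks an object's class, only its behavior.

```python
class Duck:
    def walk(self): return "waddle"
    def sound(self): return "quack"

class RobotDuck:  # completely unrelated to Duck by inheritance
    def walk(self): return "waddle"
    def sound(self): return "quack"

def identify(bird):
    # No isinstance() check: if it walks and quacks like a duck, call it a duck.
    if bird.walk() == "waddle" and bird.sound() == "quack":
        return "duck"
    return "unknown"

print(identify(Duck()))       # duck
print(identify(RobotDuck()))  # duck
```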

So when we see a GCN GPU with the same power envelope as a 7790, the same power gating as a 7790, nearly the same number of CUs as a 7790, and the same features as a 7790, we're likely to call it a 7790. The same can be said of the PS4 and the 7870 GPU. And guess what: comparatively, both are within the ballpark of where we expect them to be.

There can be subtle differences and customization, we don't deny that, and Shifty has mentioned that. But these minor alterations will not amount to massive changes in performance, minor changes lead to minor differences.

And, lastly, you're still chasing. Stop chasing and just accept the evidence as it is. You don't have to fight real evidence and no one will think less of you for believing official sources.
 
....
Do you know how the scientific method works? We build upon what we know. Correlation is not causation. The Xbox One is built upon countless technologies and the evidence that came before it. No magic. We can determine the composition of a star millions of light years away from the light spectrum it produces; are you telling me that we can't take a proper X-ray of a silicon chip? MS has provided all the information and all the architecture. They showed the dual GCP, they showed the dual render pipe. They've said flat out there is no dual GPU.
....
Proper discovery is about working with the evidence displayed before you, working with that and replicating the results. That is discovery; sometimes it creates things that we didn't know. When people were playing with electricity, they didn't know it would lead to computers. We have a situation with dual GCPs and render pipes, but we aren't hoping and using wishful thinking that it becomes a dual GPU. We are using our knowledge of the chip: based on GCN, it looks like a Bonaire, and we use its output performance levels, its power output, and the power output and performance of similar devices as a measuring stick for this new device. We can look at its competitor, which is based on the same processor, based on the same GPU, and has a similar memory setup to a PC, and we can make very direct comparisons of their performance profiles. You have no reason to believe that DX12 will unlock some mysterious silicon that doesn't exist. Why wait until DX12? They certainly didn't wait to allow esram and DMA engines to be used. Why the slow down on dual render pipes and GCPs and the dual GPU?

1. Technically, what we have seen so far from Chipworks are photographs of the SoC die, most likely taken with a polarized microscope. This type of photograph gives us a top-down view of the die (not an X-ray): http://images.anandtech.com/doci/7546/diecomparison.jpg . We have not truly seen an X-ray of either SoC, unless you have seen ones that most of us haven't. A more appropriate picture would look something like what TechInsights did for the Xbox One 8GB NAND, which is stacked. Unfortunately, no one has an X-ray of the PS4 and XB1 SoCs at a similar resolution to the one below. Maybe you could get one if you bought it for $2500 from TechInsights, which none of us can afford.

Xray of Xbox 8GB NAND
[image: SKHynix-eNAND-Flash-XRay-ZView.jpg]


2. There is no evidence of a physical dual GPU, so I am inclined to believe that no such thing exists; we can see that from the Chipworks die photo. Is it, however, possible that the one physical GPU functions similarly to Intel's Hyper-Threading technology, where one physical core runs two logical threads concurrently? This goes back to MS's claim that the XB1 GPU has "two independent graphics contexts": one physical GPU hosting two logical graphics contexts. The SDK says very little about this second graphics context, so either it is used by the system only, or the SDK is still incomplete.

3. Quote: "They certainly didn't wait to allow esram and DMA engines to be used. Why the slow down on dual render pipes and GCPs and the dual GPU?" The SDK was a mess, and many features such as Descriptor Tables have only been enabled since the October 2014 SDK update. Likewise, Tiled Resources were only finalized in that same update. I am not saying the XB1 has a dual GPU, but the answer to this question may be rather simple from a software and tools perspective: the tools to use these "dual render pipes..." are simply not ready. Without the eSRAM, the XB1 would be even more crippled due to limited DDR3 bandwidth, so they would have to prioritize the eSRAM and DMA over other features. And most likely, if these "dual" features exist, they probably depend on eSRAM and DMA working first. You can't run before you crawl and walk.
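
The Hyper-Threading analogy in point 2 can be sketched as a toy model (purely illustrative; nothing here reflects the real hardware): two logical command streams share one physical front end, which interleaves whichever stream has work ready instead of making one wait for the other to finish.

```python
from collections import deque

def run_on_one_front_end(stream_a, stream_b):
    """Interleave two logical command streams on one physical executor,
    round-robin, skipping a stream when it has nothing queued (the SMT idea)."""
    a, b = deque(stream_a), deque(stream_b)
    executed = []
    while a or b:
        if a:
            executed.append(("ctx0", a.popleft()))
        if b:
            executed.append(("ctx1", b.popleft()))
    return executed

order = run_on_one_front_end(["draw1", "draw2"], ["dispatch1"])
# ctx1's work is slotted between ctx0's draws rather than queued behind them
```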
 

This is a good response and I thank you for it, after the insanity of what came earlier. I'll try to respond to each of your points.

I can't debate point 1, maybe someone else can, but I will concede that the Chipworks X-ray shows nothing (from the sides), and the rest are die shots provided by MS in this case (I assume); however, what we can see on them (at least what we can make out) lines up with their technical specs.

As for point 2: yes, we wrote about this earlier in the thread as possible scenarios for the operation of the second GCP, if you back-pedal a couple of pages.

As for point 3: you are correct that it's possible it just wasn't ready yet. But having no documentation about it, and it operating of its own accord, is something entirely different from letting the developer have control over the GCPs. The scenarios range from 'the feature isn't ready' all the way to 'it's not usable by the game title', and everything in between. Having said that, both command processors are identical, at least from our understanding. There is documentation about the GCP, just not about what both do. And that's okay too. Mosen believes that it enables the X1 to run two independent graphics contexts at the same time. I think it works like a dual clutch, which allows for instantaneous context switching. It could be anything in between, or something else entirely; with the limited information we have, at this point in time we can only sit and wait. Jan 21 might reveal more, or it may reveal nothing. We may have to wait until GDC for anything substantial. We may have to wait for DX12 documentation to be released, or the next Xbox SDK. And this forum is comfortable with that. There's no agenda; we're just waiting for the evidence to reveal itself. It will happen in due time. There's too much fuss about it to not eventually have an answer.
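
The dual-clutch idea can be sketched as a toy model (Python as pseudocode; all names are made up): one context slot feeds the GPU while the next one is recorded in the background, so the switch itself is just an O(1) swap rather than a pipeline flush.

```python
class DualClutch:
    """Toy model of dual-clutch context switching: one slot executes while
    the other is being prepared, and switching is a constant-time exchange."""
    def __init__(self):
        self.active = []   # context currently feeding the GPU
        self.staged = []   # context being recorded in the background

    def record(self, cmd):
        self.staged.append(cmd)  # build the next context while active runs

    def switch(self):
        # Instantaneous handover: the staged context becomes active.
        self.active, self.staged = self.staged, []
        return self.active

dc = DualClutch()
dc.record("bind state B")
dc.record("draw scene B")
assert dc.switch() == ["bind state B", "draw scene B"]
```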
 
Quoted for truth!
 
Can I ask why 4 lights?
GPUs have supported 8 lights in hardware for many years, or are these some different type of light?
 
I believe the reference is to shadow-casting lights; the number of draw calls for a high-geometry scene with lots of shadow-casting lights is taxing, if I understand correctly.
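
The arithmetic behind that concern, as a rough toy model (real engines batch and cull, so actual numbers differ): with classic shadow mapping, each shadow-casting light adds its own depth-only pass over the scene's geometry, so draw calls scale with lights multiplied by objects.

```python
def shadow_pass_draw_calls(num_objects, num_shadow_lights):
    # One depth-only pass of the whole scene per shadow-casting light,
    # plus the main color pass itself.
    return num_objects * num_shadow_lights + num_objects

# 1,000 objects: 4 shadow-casting lights vs 8
print(shadow_pass_draw_calls(1000, 4))  # 5000
print(shadow_pass_draw_calls(1000, 8))  # 9000
```

That linear blow-up in CPU-side draw-call submission, not any hardware light limit, is why engines cap the number of shadow-casting lights.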
 
This guy sounds pretty clueless too! Deferred rendering? Forward+? The limitations of Stardock's contribution to the DX12 conversation have already been noted, and they are far from being an authority.

Just pointing out that Brad Wardell may be the CEO of Stardock, BUT he's also part of the Oxide team working on the Nitrous Engine. And they (Oxide) have talked numerous times in the last year, alongside MS, about next-gen graphics APIs, Mantle/DX12.

Your statement is true, maybe Stardock isn't qualified to talk about DX12, BUT with his Oxide hat on I'm guessing he does have useful knowledge to share!
 