Xbox One November SDK Leaked

Out of curiosity, if Xbox One VGLeaks diagrams are not allowed, how come they're treated as gospel when showing the PS4?

Surely the same skepticism should be applied to both VGLeaks documents, or you would have to conclude the Xbox dual GPU is true, as it's from the same source?
 
That's the wrong way to look at it, mikee. It's not gospel; it's just that the PS4 having 2 GCPs no longer makes the XBO unique. The reality is that a simpler OS (the PS4's) dedicates a complete second GCP just to the OS. While that may not be how the XBO accomplishes it, we can lean on an established pattern, which has worked so far, for how each console operates. We never ruled out the possibility that the second GCP was system-related, and this PS4 diagram makes it more plausible.

There still isn't much to discuss; even without this diagram it's not like we could have said more. Lanek in the other thread already indicated that whatever that 2nd GCP does, it isn't about increasing GPU performance. That's hard to deny: both the PS4 and Xbox OS dashboards are required to pop into view and back into game mode near instantly, so there must be some way they accomplish that feat. This could be part of the answer.

They write the following, which sounds somewhat familiar:
PS4 GPU has a total of 2 rings and 64 queues on 10 pipelines

– Graphics (GFX) ring and pipeline

Same as R10xx
Graphics and compute
For game


– High Priority Graphics (HP3D) ring and pipeline

New for Liverpool
Same as GFX pipeline except no compute capabilities
For exclusive use by VShell


Read more at: http://www.vgleaks.com/orbis-gpu-compute-queues-and-pipelines
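As a rough way to picture the layout the leak describes (assuming the 64 queues split evenly across 8 compute pipelines, which is what the full article implies), here's an illustrative sketch; the names here are made up for illustration, not any real API:

```python
# Sketch of the Liverpool GPU front end per the leak: 2 rings (GFX and
# HP3D) plus 8 compute pipelines of 8 queues each, for 64 queues across
# 10 pipelines total. Names and fields are illustrative only.

pipelines = (
    [{"name": "GFX",  "type": "ring", "compute": True,  "user": "game"},
     {"name": "HP3D", "type": "ring", "compute": False, "user": "VShell"}]
    + [{"name": f"Compute{i}", "type": "compute", "queues": 8, "user": "game"}
       for i in range(8)]
)

total_compute_queues = sum(p.get("queues", 0) for p in pipelines)
print(len(pipelines), total_compute_queues)  # 10 pipelines, 64 queues
```

The notable bit for this discussion is the HP3D entry: a full extra graphics pipeline with no compute, reserved for VShell (the OS).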
 
Out of curiosity, if Xbox One VGLeaks diagrams are not allowed, how come they're treated as gospel when showing the PS4?

Are VGLeaks diagrams not allowed? As far as I'm aware there's never been an issue using them since they've been very accurate from the start about the technical specs of the consoles.

Surely the same skepticism should be applied to both VGLeaks documents, or you would have to conclude the Xbox dual GPU is true, as it's from the same source?

When have VGleaks ever suggested that the Xbox One has "dual GPU"? They've only ever shown that the Xbox One has a 12 CU GPU as discussed numerous times before.
 
Oh OK, cool, I stand corrected; in a previous post it was mentioned that VGLeaks was unproven and shouldn't be referred to.

The Xbox diagrams from VGLeaks show each CU (or SC, as they refer to it) having its own L1 cache, but the PS4 dev slides show it shared between 3 CUs. What advantages or disadvantages does this have, or could it point to those CUs coming from another range of cards? That might be where some of the R&D money went, rather than the dual threading. Just an idea.
 
That may also have been confusion between VGChartz (sales data) and VGLeaks.

I imagine at the very least there would be a cost differential in manufacturing. Could you link the documents for both X1 and PS4?
 
That the 2nd GCP is for the OS system makes sense. Occam's Razor and all . . . .

Sony stated in its (leaked) PS4 documents that the 2nd GCP is exclusive to the OS (VShell), and they never mentioned its existence publicly at all (read Cerny's comments and compare them with the VGLeaks articles about the PS4 GPU). On the other hand, Microsoft never said such a thing about the XB1 and its 2nd GCP. They only stated that Xbox One hardware supports two concurrent render pipes (two independent graphics contexts). Is that similar to the PS4, or just to GCN generally? If it's what vanilla GCN can do, then I can accept this idea.
 
It seems you guys have decided what you believe the purpose of the dual GCP is. I was curious whether it could be used by an OS other than the apps OS. If it's true that multiple cores will be able to talk to the GPU with DX12, could you run one game OS that handles objects like the player characters, collision detection and such, while a second runs a persistent world for those things to interact with?

Say the world OS is calculated and built by the cloud and loaded into RAM or the embedded memory on the chip, while the other runs everything else involved in creating the scene. One set of cores uses one GCP for its OS; the remaining cores use the other GCP.

When a collision occurs, the local OS does all of the immediate physics work and sends a report of what happened to the cloud. The cloud calculates the results and sends them back to the cloud OS on the console. Latency may cause a delay of a frame or two, but if the local OS creates enough smoke and particle effects to hide the impact, it might give the cloud enough time to work out the proper physics to make the explosion look more realistic. A magician's smoke and mirrors.
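Purely as a sketch of that idea (the function name, effect strings, and the two-frame delay are all made up for illustration), the per-frame logic might look like:

```python
# Speculative sketch of the latency-hiding idea: on impact, play a cheap
# local effect immediately, then swap in the cloud-computed result once
# the round trip (here assumed to be two frames) completes.

def effect_for_frame(frame, impact_frame, cloud_delay=2):
    if frame < impact_frame:
        return "none"
    if frame < impact_frame + cloud_delay:
        return "local smoke and particles"   # masks the wait
    return "cloud-simulated explosion"       # detailed physics arrives

print([effect_for_frame(f, impact_frame=3) for f in range(6)])
```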
 

We haven't fully decided, but it's worth noting that we can pull back a bit on the concept of GPU 'hyper-threading' and start to consider a system reservation as more of a reality. I certainly haven't put a stake in the ground yet; there are some subtle nuances in how both diagrams work. But despite their differences, they could be looking to achieve the same thing.

The game VM will handle everything it needs within itself; from my understanding it can never leave its own VM. That's how the Xbox protects itself and, by the looks of it, Windows 10 as well. The GCP is required to schedule and send work into the working parts of the GPU and ensure that those items are drawn onto the scene as they are meant to be; from a game-code perspective the GCP does nothing. You can view the GCP conceptually as a good scheduler: you want to draw items like an artist draws a painting, from background to foreground.
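That back-to-front ordering is essentially the painter's algorithm. A toy sketch of it (the names and depth values are made up for illustration):

```python
# Toy illustration of back-to-front ("painter's") ordering: submit draws
# farthest-first so that nearer items end up painted on top.

draw_calls = [("player", 1.0), ("skybox", 100.0), ("tree", 10.0)]

def submission_order(calls):
    # Sort by depth descending: background first, foreground last.
    return [name for name, _ in sorted(calls, key=lambda c: c[1], reverse=True)]

print(submission_order(draw_calls))  # ['skybox', 'tree', 'player']
```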

As for cloud processing, the easiest way to view the cloud is to let the server handle everything, or nothing at all. For instance, it's easier for the client to continually transmit the player's vector location at all times, while the server does the physics and processing for the entire level and sends back the vectors of the remaining objects on screen. This is your typical multiplayer dedicated-server game, like Battlefield.

You wouldn't do things on both the server and locally, if that makes sense, because that would create some really awkward inconsistencies AFAIK.

Also, with the last couple of posts we're getting off the topic of the SDK. If you want to ask questions not about the SDK, there are two other Xbox-related threads.
 
Thanks for the reply. I guess my lack of knowledge makes my questions seem off topic, but my question was about the SDK and how its parts and configuration could be used. I'll continue reading this thread, as it has been extremely informative, but I'll hold my questions for more speculative threads. Thanks again.
 
You wouldn't do things on both the server and locally, if that makes sense, because that would create some really awkward inconsistencies AFAIK.
You can absolutely do that. The first system you described is an authoritative server: you leave the server to handle all the game operation, and it tells the clients what's happening where for them to draw. The problem there is the input latency from pressing a button to contacting the server to getting a response. Alternatively, you have the clients compute locally and inform each other what they're doing. The server relays messages, but also needs to check on outcomes to make sure everything lines up and nobody's cheating.

The idea of running a cloud OS just adds crazy complexity to all this with zero benefit. ;) Writing two executables to run on two different OSes and synchronise their visuals, with one of those OSes having to handle latencies that might spike into the tenths of seconds, would be a miserable job for a developer!
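A minimal sketch of the authoritative-server model described above (the class and method names are illustrative, not real netcode): clients send only inputs, the server owns the simulation and broadcasts snapshots.

```python
# Authoritative server in miniature: clients submit intents (moves),
# the server applies them to the state it owns, and returns a snapshot
# for clients to draw. Input latency = one full round trip.

class AuthoritativeServer:
    def __init__(self):
        self.positions = {}  # player id -> (x, y)

    def handle_input(self, player, move):
        # The server, not the client, applies the movement.
        x, y = self.positions.get(player, (0, 0))
        dx, dy = move
        self.positions[player] = (x + dx, y + dy)
        return dict(self.positions)  # snapshot broadcast to all clients

server = AuthoritativeServer()
snapshot = server.handle_input("p1", (1, 0))
print(snapshot)  # {'p1': (1, 0)}
```

The alternative model swaps the roles: clients simulate locally for responsiveness, and the server only relays and sanity-checks the results.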
 

Yes, that input-latency point is often my concern about client-side computing. But I suppose for a single-player game it's an irrelevant point.
 
Shifty, my thinking with the separate OSes was freeing local resources for other things, with the hypervisor handling the communication between the two. Not fully understanding what a hypervisor can do may have misled me. I do understand it would add even more complexity, which hopefully could eventually be overcome with better tools. Part of the benefit I imagined was persistent worlds: when playing an RPG, it would be great if bodies didn't just disappear and damage was permanent. Saving the cloud OS's last state would allow for that, I assume. Or it could be as simple as saving the last state to the HDD. But ultimately it was mainly about freeing resources. That, and wondering why there's so much embedded memory for an OS.
 
It gives a far more responsive game though, essential for fast multiplayer.
Sounds like what Titanfall, Quake, and the Call of Duty models are based on: hitscan-style guns, with only a few weapon types having an actual velocity.

I'm hopeful about what the future holds for assisted cloud computing as it becomes more feasible and widespread. Right now we have very little to go on, but at least we know it's coming in at least one title in the near future (<24 months). We're probably off topic, though, unless this SDK has specific assisted remote-computing API calls in it.
 
I have the links to both CU diagrams at work and will post them tomorrow. The Xbox diagram is on VGLeaks; the PS4 one was from a developer conference, the one with the CPU and GPU bandwidth graph, I think.

I do thank you guys for putting up with some of us less technical guys. Sometimes the logic doesn't match up, which is why we need to learn and ask questions :)
 