Xbox One (Durango) Technical hardware investigation

Status
Not open for further replies.
It has to for OS-level Kinect functionality, doesn't it?
What OS level functionality...? There's no OS-level gesture support while running game software (it could clash with gameplay gestures for one thing), and why would you need 10% of GPU power to do voice recognition...?

Is there even ANY gesture support in xbone OS? I can't remember from the pre-E3 MS presentation, all I recall from that sleeping pill is voice commands (to control TVs... :p)
 
What OS level functionality...? There's no OS-level gesture support while running game software (it could clash with gameplay gestures for one thing), and why would you need 10% of GPU power to do voice recognition...?

Is there even ANY gesture support in xbone OS? I can't remember from the pre-E3 MS presentation, all I recall from that sleeping pill is voice commands (to control TVs... :p)


My off the top thoughts...

I don't think it is always running at full tilt; rather, the reservation is there for when you snap other apps while playing a game, or when the game itself is running in a window while you navigate the OS screen. Using Kinect for things like voice chat and video chat during gameplay is most likely covered by the OS requirements for deploying and managing display planes, etc., as the Kinect has its own hardware for the heavy lifting, with probably little CPU/GPU utilization.

The point was that as they mature the OS, they will find ways to free more of that reservation for mainline use, probably by figuring out more efficient resource-utilization methods in the OS via updates.
 
The Hot Chips presentation on the SoC had two graphics command processors noted in a slide.

The exact capabilities versus the Orbis VSHELL pipe aren't clear. Vgleaks indicated the secondary pipe couldn't issue compute commands, unlike the primary graphics one.

It's also not clear what goes into the compute command processors for Durango. If they follow Sea Islands, there are actually more hardware queues than just two blocks.
 
The Hot Chips presentation on the SoC had two graphics command processors noted in a slide.
You are right, forgot about this slide, so nothing new.
It's also not clear what goes into the compute command processors for Durango. If they follow Sea Islands, there are actually more hardware queues than just two blocks.
That's why I mentioned each one could have 8 queues (and some time ago I said it could potentially even be MECs, as Orbis has).
 
That's why I mentioned each one could have 8 queues (and before that I even said it could be MECs, as Orbis has).

It seems odd when trying to reconcile the MEC structure versus the supposedly Sony-driven 64 queue customization (Kaveri too?) and then trying to fit in Durango's two ill-defined blocks.
If it was a customization for Sony, having that particular tweak appear in Durango would be unpleasant.
 
The important takeaway is that Durango also features two graphics pipes (or was this official before?), same as the PS4. So obviously the XB1 just has a few fewer compute pipes (two instead of eight ACEs with 8 queues each?); otherwise the frontend looks to be pretty much identical.

Is this referring to two triangles per clock? If so yeah, I thought that was common knowledge for ages.

Are they saying one entire setup engine is just for non-game stuff though?
 
Is this referring to two triangles per clock? If so yeah, I thought that was common knowledge for ages.

Are they saying one entire setup engine is just for non-game stuff though?
No, it has nothing to do with the setup. It has a second graphics pipe, i.e. a second task queue in hardware for rendering (that's where the command buffers built from draw calls end up on the GPU to be executed). GPUs so far usually have only one (which can also take compute kernels with dependencies on rendering tasks); the exceptions are the XB1 and PS4.
In the case of the PS4, the OS-dedicated graphics pipe is slightly restricted in the sense that it can't accept compute kernels (maybe it reserves a few of its 64 compute queues for the OS?); it's unclear if the same restriction applies to the XB1.
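As a toy illustration of the arrangement described above, here is a minimal Python sketch (purely a software model, not any real console API; all names are invented) of two hardware graphics pipes where the OS-dedicated one refuses compute kernels, as reported for the PS4:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class GraphicsPipe:
    """Software stand-in for one hardware graphics command pipe."""
    name: str
    accepts_compute: bool
    commands: deque = field(default_factory=deque)

    def submit(self, kind: str, payload: str) -> None:
        # Model the reported restriction: the OS pipe takes draw
        # commands but rejects compute kernels.
        if kind == "compute" and not self.accepts_compute:
            raise ValueError(f"{self.name} pipe does not accept compute kernels")
        self.commands.append((kind, payload))

# Two pipes as described: title rendering plus a restricted OS pipe.
title_pipe = GraphicsPipe("title", accepts_compute=True)
os_pipe = GraphicsPipe("system", accepts_compute=False)

title_pipe.submit("draw", "game geometry")
title_pipe.submit("compute", "post-process kernel")  # fine on the title pipe
os_pipe.submit("draw", "notification overlay")       # draws are fine
# os_pipe.submit("compute", ...) would raise ValueError
```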
 
If it was a customization for Sony, having that particular tweak appear in Durango would be unpleasant.
I wouldn't be surprised to see all supposedly Sony-driven features mentioned in the Sea Islands ISA manual also in the XB1 (though the number of compute queues may be lower). As we already know that Kaveri will also come with two MECs (8 pipes, 64 queues), the feature doesn't look that custom to me. ;)
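For what it's worth, the queue arithmetic in these posts is easy to check: two MECs with four pipes each (the Sea Islands layout) and eight hardware queues per pipe gives exactly the 8-pipe / 64-queue figure quoted for Kaveri and Orbis:

```python
# Sea Islands-style layout as quoted above: 2 MECs x 4 pipes x 8 queues.
MECS = 2
PIPES_PER_MEC = 4
QUEUES_PER_PIPE = 8

pipes = MECS * PIPES_PER_MEC
queues = pipes * QUEUES_PER_PIPE
print(pipes, queues)  # → 8 64
```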
 
It seems odd when trying to reconcile the MEC structure versus the supposedly Sony-driven 64 queue customization (Kaveri too?) and then trying to fit in Durango's two ill-defined blocks.
If it was a customization for Sony, having that particular tweak appear in Durango would be unpleasant.

I seem to recall 3dcgi suggesting that was most certainly not something done by Sony, but was something AMD had been working on already for their future products.
 
I seem to recall 3dcgi suggesting that was most certainly not something done by Sony, but was something AMD had been working on already for their future products.

Mark Cerny cited the compute queue count as something Sony requested for Orbis.
I suppose it is possible that both of AMD's customers asked for the exact same count, which would be interesting in terms of appearances.
 
It's also worth pointing out that this thread has pretty much come to a close, seeing as we have specific details on the machine from the people who designed it. I think there's still a question of compute implementations and whether there are any major customisations; otherwise there's nothing more to say, and any pointers to more or different hardware are going to be bunkum.
I don't know what else is left to know. Aside from a new DF article (I don't think I can read it this weekend, though), there are some details about specific parts of the hardware whose NDA won't be lifted until Windows 8.1 is released... Maybe that belongs in the DirectX 11.2 thread anyway. Thanks to Javisoft for the pics.

Jdw39BT.png


E7Djj1i.jpg
 
I don't know what else is left to know. Aside from a new DF article (I don't think I can read it this weekend, though), there are some details about specific parts of the hardware whose NDA won't be lifted until Windows 8.1 is released... Maybe that belongs in the DirectX 11.2 thread anyway. Thanks to Javisoft for the pics.

Jdw39BT.png


E7Djj1i.jpg

Lol, so much for the NDA and embargo *facepalm*
Looks like the virtual texture does not need to reside in RAM then? I think that's how Carmack implemented the clipmapping.
 
Lol, so much for the NDA and embargo *facepalm*
Looks like the virtual texture does not need to reside in RAM then? I think that's how Carmack implemented the clipmapping.
:smile: There would've been mention of what is under NDA now and what won't be after October 15th. There are still good times ahead, learning how texture caching will work. Virtual textures not stored in RAM? I wonder how that's possible; I'm very curious about it.

We are learning about the hardware of each console little by little. I wonder what DF will come up with in their next article. Full details of certain aspects of the hardware take more than one article to cover.
 
Lol, so much for the NDA and embargo *facepalm*
Looks like the virtual texture does not need to reside in RAM then? I think that's how Carmack implemented the clipmapping.
Only the part you need/use.

Here are a couple of papers on the subject.
http://cesium.agi.com/massiveworlds/downloads/Graham/Hardware_Virtual_Textures.pptx
http://developer.amd.com/wordpress/...dent Textures on Next-Generation GPUs.v04.pps

Also, AMD has released quite nice documentation on their hardware, so we know quite a lot about GCN.
http://www.botchco.com/agd5f/?p=58
Really hope to see more details on the specifics of their console variants, though.
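The "only the part you need/use" idea can be sketched in a few lines of Python. This is a toy software model of page-based virtual texturing in the spirit of the linked papers; the page granularity and all names are invented for illustration:

```python
class VirtualTexture:
    """Toy model of virtual texturing: a huge virtual texture is split
    into pages, and a page is only brought into (simulated) RAM on
    first touch, megatexture/clipmap style."""

    def __init__(self, pages_wide: int, pages_high: int):
        self.size = (pages_wide, pages_high)
        self.resident = {}  # page coord -> page data actually "in RAM"

    def _load_page(self, px: int, py: int) -> str:
        # Stand-in for streaming the page from disk/optical media.
        return f"pixels for page ({px}, {py})"

    def sample(self, px: int, py: int) -> str:
        if not (0 <= px < self.size[0] and 0 <= py < self.size[1]):
            raise IndexError("page outside virtual texture")
        if (px, py) not in self.resident:  # "page fault": stream it in
            self.resident[(px, py)] = self._load_page(px, py)
        return self.resident[(px, py)]

vt = VirtualTexture(1024, 1024)  # ~1M virtual pages, none resident yet
vt.sample(3, 7)
vt.sample(3, 7)                  # second touch is a hit, no new load
print(len(vt.resident))          # → 1: only the touched page is resident
```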
 
Can someone help me understand if this is standard GCN architecture or a customization?


"In addition to asynchronous compute queues, the Xbox One hardware supports two concurrent render pipes," Goossen pointed out. "The two render pipes can allow the hardware to render title content at high priority while concurrently rendering system content at low priority. The GPU hardware scheduler is designed to maximise throughput and automatically fills 'holes' in the high-priority processing. This can allow the system rendering to make use of the ROPs for fill, for example, while the title is simultaneously doing synchronous compute operations on the compute units."
-eurogamer article


Basically it appears the GPU can execute CU and ROP work concurrently!

So the "Title" (game) is processing via the CUs while, at the same time, the "System" (app) is processing through the ROPs, all concurrently. That's how I'm interpreting the statement.



Also, the statement "two concurrent render pipes" is confusing me. In OpenGL terminology a "render pipe" is a GPU; from the OpenGL documentation: "... OpenGL context is created for each rendering pipe, i.e. for each GPU". So I interpret this as the Xbox One, with its two concurrent render pipes, behaving like a dual-GPU setup?

Can anyone shed light on what the Xbox One architect is saying here?
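One way to picture Goossen's "holes" remark is as a priority scheduler where low-priority system work only runs in slots the high-priority title leaves idle. This is a pure Python illustration of that scheduling idea, not how the hardware is actually implemented; all names are made up:

```python
def fill_holes(slot_count, title_busy, system_jobs):
    """title_busy[i] is True when the title occupies slot i (e.g. its
    CUs are doing synchronous compute); idle slots ("holes") go to the
    low-priority system renderer, e.g. for ROP fill work."""
    schedule = []
    pending = list(system_jobs)
    for i in range(slot_count):
        if title_busy[i]:
            schedule.append("title")          # high priority always wins
        elif pending:
            schedule.append(f"system:{pending.pop(0)}")  # fill the hole
        else:
            schedule.append("idle")
    return schedule

# Title busy in slots 0, 1 and 3; system ROP fill sneaks into the holes.
print(fill_holes(5, [True, True, False, True, False],
                 ["rop_fill_a", "rop_fill_b"]))
# → ['title', 'title', 'system:rop_fill_a', 'title', 'system:rop_fill_b']
```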
 
Basically it appears the GPU can execute CU and ROP work concurrently!

So the "Title" (game) is processing via the CUs while, at the same time, the "System" (app) is processing through the ROPs, all concurrently. That's how I'm interpreting the statement.
What you describe here isn't special; all GPUs can do this, even the Xbox 360.
 
Can someone help me understand if this is standard GCN architecture or customization!
It's not something documented for the original version of GCN. It seems to be something the coming consoles have brought in.

Basically it appears the GPU can execute CU and ROP work concurrently!
That doesn't need two front ends.

So i interpret this as the Xbox One with its two concurrent render pipes, behaves like a dual GPU setup???
There do seem to be two front ends, but they appear to be more isolated from each other.
 