Xbox One: DirectX 11.1+ AMD GPU, PS4: DirectX 11.2+ AMD GPU. What's the difference?

The ACEs (purple/blue) are not the Move Engines (green), and he is indeed correct. After all, ACE stands for Asynchronous Compute Engine. As a second note, both the PS4 and the XBONE come with 2 Graphics Command Processors; the only difference in this regard is that the PS4 has 8 Compute Command Processors whilst the XBONE has 2.

Any source for:
1) PS4 has 2 gfx cmd processor, and
2) ACEs are the same thing as compute cmd processors?
 
Yeah, at first everyone thought that the XB1 only had one compute command processor and one graphics command processor, due to the vgleaks diagram showing it that way.
 
What would be the thought process behind anyone believing Sony would choose a current or older AMD architecture for the PS4? Microsoft had an established relationship with AMD/ATI on the console side since XB360 development... so one would think the XB1 GPU was in development well before Sony and AMD came to any agreement on working together. I would think Sony's engineers would be smart enough to know about and use something within AMD's roadmap that would satisfy their needs beyond 2013. Sony is known for using exotic wares... except this time around they wanted something developer friendly, but robust enough to serve their purposes beyond 2013.

Considering that Microsoft has to work closely with ATI (and for that matter, Nvidia), is there any possible way that Microsoft would not have known about a feature AMD was implementing?

Microsoft is quite likely to know about any feature AMD wants to implement before AMD starts designing its chips, as AMD would want to have Microsoft in a position to expose any such feature through DirectX if it is something that will be visible to programmers.

Basically, Microsoft will have earlier access to knowledge about what AMD plans to implement than any company not named AMD. So the only thing that Microsoft wouldn't have access to is anything that Sony themselves designed or modified. And if that were the case, then it's highly unlikely that it would have made it into DirectX 11.2 without also being available early enough for Microsoft to include it in the Xbox One.

Basically, anything available in Bonaire will likely have been available to Durango, as Microsoft would likely have known about planned features before the final design of Bonaire was complete. Sony, on the other hand, likely would not have known about it prior to design feature lock-in, because without Microsoft showing interest in including it in DirectX at some point, it's unlikely to get implemented in a final design. Of course, things still slip through, but it's very rare that either IHV implements things that Microsoft hasn't shown an interest in with regard to DirectX. And most of the things that slip through are things that MS originally wanted for a certain level of DirectX, but that got cut from that version due to the inability of both IHVs to implement them. Hence we end up with things like DirectX 10.1, DirectX 11.2, etc.

The only real question is whether or not those things were also available to Sony during their SoC design phase. [edit] That is, with regard to either the timing, or whether it was a Microsoft-initiated hardware change (hence only available in one shipping retail product that coincidentally just happens to match the Durango GPU).

Regards,
SB
 
As for number 1. http://www.vgleaks.com/wp-content/uploads/2013/02/gpu_queues.jpg

The second one is obvious, so I'm not even going to cover that; AMD generally uses ACE to mean exactly that. People who don't want to see that just don't want to see it.

If you do a search on "compute command processor" you will find no mention of such a term prior to the Hot Chips conference, which seems contrary to what you had suggested, that AMD had used the terms interchangeably in the past.
 
If you do a search on "compute command processor" you will find no mention of such a term prior to the Hot Chips conference, which seems contrary to what you had suggested, that AMD had used the terms interchangeably in the past.

That's because Microsoft changed the name; it's an ACE. If you don't want to accept that, go for it, but it is. Tell me the sky is polka-dotted all you want, but it isn't.
 
I wouldn't go inferring anything like that; I'm merely pointing out the capabilities of AMD's discrete GPUs and how that plays into the IP sets that are available. I'll leave others to consider how they're implemented elsewhere.
That's half the fun! I'm sure I will be quite pleased with both consoles, whatever they may be. There will be plenty of fantastic games to occupy more of my time than is good for me. ;)
 
Tier 1 supports hardware-based virtual texturing where a unified address space is unavailable, so translation of the PRTs is done with GPU page tables. PRTs have to be copied over to the CPU.

Tier 2 supports hardware with a virtual unified address space, where translation takes place on shared tables and PRT pointers are passed between the CPU and GPU.

My best guess, anyway.
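
For what it's worth, on the PC side of DirectX 11.2 the tier distinction just surfaces as a caps query. A minimal sketch (the helper name is mine, and error handling is trimmed), assuming a valid ID3D11Device created elsewhere:

```cpp
#include <d3d11_2.h>

// Ask the device which Tiled Resources tier (if any) it exposes.
D3D11_TILED_RESOURCES_TIER QueryTiledResourcesTier(ID3D11Device* device)
{
    D3D11_FEATURE_DATA_D3D11_OPTIONS1 opts = {};
    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D11_FEATURE_D3D11_OPTIONS1, &opts, sizeof(opts))))
    {
        // NOT_SUPPORTED, TIER_1, or TIER_2; Tier 2 is the level that
        // would rely on the shared translation described above.
        return opts.TiledResourcesTier;
    }
    return D3D11_TILED_RESOURCES_NOT_SUPPORTED;
}
```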
 
What the ACE does:

[Attached image: AMD-ACE.jpg]
Any source for:
1) PS4 has 2 gfx cmd processor, and
2) ACEs are the same thing as compute cmd processors?
Did you look at that picture you just posted? Hint: "CP" usually stands for "command processor". And seriously, reading the text (in the picture you posted) describing what an ACE is and does could also help, as the text somehow appears to mention a thing called a "command processor". :rolleyes:
 
Did you look at that picture you just posted? Hint: "CP" usually stands for "command processor". And seriously, reading the text (in the picture you posted) describing what an ACE is and does could also help, as the text somehow appears to mention a thing called a "command processor". :rolleyes:

I know what the ACEs are and what they do; I am just not sure if they are the same thing and MSFT would call them differently just for the sake of it.

in fact, here's a ref:
http://pc.watch.impress.co.jp/video/pcw/docs/590/776/f04_p.pdf
 
Partner could mean that his bonus is tied in with the revenue sharing, like a partner at a law firm.

So I've read comments that somehow suggested Sony made a custom design for the GPU by bumping the ACEs from 2 to 8 (?), hence the compute command queue count was increased to 64. The ACE is still AMD's IP, not Sony's. On the other hand, the X1's GPU actually has 2 compute command processors + 2 graphics command processors, and I couldn't find a similar design in other Radeon products. This would suggest that this could be MSFT/AMD joint IP.


It's not a custom design per se; it falls in line with the "Sea Islands" IP level (I know AMD decoupled names from feature levels): up to 8 ACEs, up to 8 queues per ACE, hence 64 in total.

How many ACEs/queues did Pitcairn have, and Cape Verde for that matter?
 
I know what the ACEs are and what they do; I am just not sure if they are the same thing and MSFT would call them differently just for the sake of it.

in fact, here's a ref:
http://pc.watch.impress.co.jp/video/pcw/docs/590/776/f04_p.pdf
If you know what it is and does, then you should recognize a slightly mislabeled block diagram (there's at least one more error on it, btw.) made by a third party (Hiroshige Goto). I mean, anyone can draw such a diagram and put some words on some boxes, right? I would take a slide from the GCN architecture presentation by AMD itself over it any day (especially as the thing you asked for is spelled out there).

Btw., I think it's still not impossible that MS actually included two compute microengines (MECs). That would be the same as Sony (each MEC containing 4 "ACEs" [not exactly identical to the old ones] with 8 queues each). But as MS apparently labeled the two graphics command queues as two graphics command processors (while Sony appears to label it as just two queues in a single command processor), one could assume MS has less strict requirements for what it considers a command processor and states the number of queues (i.e. GCN1.0 style frontend with 2 ACEs [each having a single queue] augmented with a second graphics queue). Or the organizational differences between PS4 and XB1 frontends are larger than just the difference between GCN1.0 and 1.1 combined with variable counts of the components.
As these are (semi-)custom APUs (and the terminology used by MS is a bit fuzzy right now), all bets are off as to what Sony and MS have asked AMD to implement and whether MS deemed it worthwhile to integrate more compute queues or not. I mean, AMD has probably suggested some stuff and said what options in a certain range they could offer (or what the customer has to pay extra for to extend said range), but the decision was eventually made by MS and Sony themselves (of course based on the input from AMD).
How many ACEs/queues did Pitcairn have, and Cape Verde for that matter?
Two ACEs, each with a single queue, the same as Tahiti. All first generation GCN GPUs had two ACEs.
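
Just to collect the counts being discussed in one place, here's a hypothetical summary in code. The numbers are only what has been claimed in this thread (vgleaks, Hot Chips), not confirmed specs:

```cpp
// Hypothetical frontend configurations as claimed in this thread.
struct GpuFrontend {
    int graphicsQueues; // queues on the graphics command processor(s)
    int aces;           // asynchronous compute engines
    int queuesPerAce;   // hardware queues per ACE
    int computeQueues() const { return aces * queuesPerAce; }
};

// GCN 1.0 (Tahiti, Pitcairn, Cape Verde): 2 ACEs, 1 queue each.
const GpuFrontend gcn10 = {1, 2, 1};
// PS4 per the leaks: 8 ACEs x 8 queues = 64 compute queues.
const GpuFrontend ps4   = {2, 8, 8};
// XB1 per Hot Chips: 2 graphics + 2 compute command processors;
// the queue depth per compute command processor is unclear.
const GpuFrontend xb1   = {2, 2, 1};
```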
 
But as MS apparently labeled the two graphics command queues as two graphics command processors (while Sony appears to label it as just two queues in a single command processor), one could assume MS has less strict requirements for what it considers a command processor and states the number of queues (i.e. GCN1.0 style frontend with 2 ACEs [each having a single queue] augmented with a second graphics queue).

If one works from this premise, then I think the conclusion is pretty sound.
 
Actually, I think the more obvious reason for this new "hybrid" technology support in Win8.1 is to accommodate solutions like Nvidia Optimus, where any iGPU can be coupled with a dGPU. In Win32 desktop apps the Nvidia driver handles the switching between iGPU and dGPU, but this solution did not work in the Metro app space, hence this change.
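
For illustration, this is roughly what such a hybrid system looks like from the API side: a minimal sketch enumerating the adapters DXGI exposes (ListAdapters is just an illustrative helper; on an Optimus laptop you would typically see both the iGPU and the dGPU listed):

```cpp
#include <dxgi.h>
#include <cwchar>
#pragma comment(lib, "dxgi.lib")

// Print every adapter DXGI knows about; a hybrid iGPU + dGPU system
// shows up as two entries here.
void ListAdapters()
{
    IDXGIFactory1* factory = nullptr;
    if (FAILED(CreateDXGIFactory1(__uuidof(IDXGIFactory1),
                                  reinterpret_cast<void**>(&factory))))
        return;

    IDXGIAdapter1* adapter = nullptr;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        wprintf(L"Adapter %u: %s\n", i, desc.Description);
        adapter->Release();
    }
    factory->Release();
}
```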
 
I know the hardware isn't DX 11.2 hardware, but it has the DX 11.2 feature set.
I think both of them are DirectX 11.2 compatible, and also that both of them are Tier 2, just because they are from AMD and the PS4's GPU is a bit more modern.

However, if one of them had to be Tier 2 and the other couldn't be, then I think that the Tier 2 GPU would be Xbox One's.

It is made by Microsoft; nobody knew about the addition of Tiled Resources to the DirectX API except them until very recently, :eek: and it's very clear how useful it is when your console has a small, fast pool of eSRAM.

If anything, it is clear as day that a Bonaire-like GPU would benefit from something like it, I think, personally... Oddly enough, it was launched at about the same time the Xbox One was about to be unveiled.

Why wouldn't Xbox One's GPU (I miss how GPUs had names in previous generations) be Tier 2 and totally compatible with the most recent iteration of PRT and Tiled Resources, when Microsoft knew this was a most important feature in DirectX 11.2 and they were working on it?
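
As a rough illustration of what the feature looks like to a developer: a minimal sketch of creating a tiled (PRT-backed) texture under D3D11.2. The size and format here are arbitrary, and no physical memory is committed until individual tiles are mapped later:

```cpp
#include <d3d11_2.h>

// Create a large texture with no physical backing; 64 KiB tiles get
// mapped to a tile pool later via ID3D11DeviceContext2::UpdateTileMappings.
ID3D11Texture2D* CreateTiledTexture(ID3D11Device2* device)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = 16384;               // arbitrary for this sketch
    desc.Height           = 16384;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_BC1_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_DEFAULT;
    desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;
    desc.MiscFlags        = D3D11_RESOURCE_MISC_TILED;

    ID3D11Texture2D* tex = nullptr;
    device->CreateTexture2D(&desc, nullptr, &tex); // no initial data allowed
    return tex;
}
```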
 
I think both of them are DirectX 11.2 compatible, and also that both of them are Tier 2, just because they are from AMD and the PS4's GPU is a bit more modern.
Just curious, what do you actually mean by a more modern GPU?
 