Xbox One: DirectX 11.1+ AMD GPU, PS4: DirectX 11.2+ AMD GPU - what's the difference?

I think both of them are DirectX 11.2 compatible and both are Tier 2, simply because they are from AMD and the PS4's GPU is a bit more modern.

However, if only one of them could be Tier 2 and the other couldn't, then I think the Tier 2 GPU would be the Xbox One's.

The console is made by Microsoft, and nobody knew about the addition of Tiled Resources to the DirectX API except them until very recently, :eek: and it's very clear how useful that feature is when your console has a small, fast pool of eSRAM.

If anything, it seems clear as day that a Bonaire-like GPU would benefit from something like it, I think. Oddly enough, Bonaire launched at about the same time the Xbox One was about to be unveiled.

Why wouldn't the Xbox One's GPU (I miss how GPUs had proper names in previous generations) be Tier 2 and fully compatible with the most recent iteration of PRT and Tiled Resources, when Microsoft knew this was one of the most important features in DirectX 11.2 and they were working on it?
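
For context, the "Tier" being debated is just what a PC application sees when it asks the D3D11.2 runtime about tiled resources. A minimal sketch of that query on Windows 8.1 (standard D3D11.2 API; device-creation flags and error handling trimmed, and the console APIs obviously differ):

```cpp
// Query the Tiled Resources tier exposed by the D3D11.2 runtime.
#include <d3d11_2.h>
#include <cstdio>

int main()
{
    ID3D11Device* device = nullptr;
    D3D_FEATURE_LEVEL level;
    if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                 nullptr, 0, D3D11_SDK_VERSION,
                                 &device, &level, nullptr)))
        return 1;

    D3D11_FEATURE_DATA_D3D11_OPTIONS1 opts = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D11_FEATURE_D3D11_OPTIONS1,
                                              &opts, sizeof(opts))))
    {
        // 0 = not supported, 1 = Tier 1, 2 = Tier 2
        printf("Tiled resources tier: %d\n", (int)opts.TiledResourcesTier);
    }

    device->Release();
    return 0;
}
```

The practical difference is that Tier 2 additionally guarantees defined reads (zero) from unmapped tiles plus min/max filtering and shader residency feedback, which is exactly the sort of thing a streaming/eSRAM-centric design would want.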


But isn't that also a feature of OpenGL 4.4?
 
Fully hardware supported, or just exposed by the GL API even if some use of the CPU for paging is required? (i.e. both Tier 1 and Tier 2 in DX include hardware PRT support; to what extent does GL 4.4?)
 
This is gcn 1.1:

1. New Compute Unit instructions, including FLAT instructions for HSA memory model compliance
2. Full Address Translation Cache implementation, which allows both data and code to/from CUs to be addressed directly within the process address space (no pinning restriction), apart from the GPU's own virtual address spaces. This has to be backed by PCIe ATS 1.1, which is implemented by AMD in IOMMUv2.
3. Ability to access the coherent, system address space directly (see the sketch after the link below). In earlier architectures, that space can only be accessed after the particular region needed is mapped into the GPU's virtual address space, and with no coherency guarantees.


http://semiaccurate.com/forums/showpost.php?p=193664&postcount=2
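
Regarding point 3 above: at the API level, that kind of unpinned, coherent access is roughly what OpenCL 2.0 later exposed as fine-grained shared virtual memory. A hypothetical sketch, not taken from the quoted post (assumes `ctx`, `queue` and `kernel` already exist for a GCN 1.1-class device):

```cpp
#include <CL/cl.h>

void svm_example(cl_context ctx, cl_command_queue queue, cl_kernel kernel)
{
    const size_t n = 1 << 20;

    // One allocation visible to both CPU and GPU in the same address space;
    // no clEnqueueMapBuffer / pinning step is needed with fine-grained SVM.
    float* data = (float*)clSVMAlloc(ctx,
        CL_MEM_READ_WRITE | CL_MEM_SVM_FINE_GRAIN_BUFFER,
        n * sizeof(float), 0);

    for (size_t i = 0; i < n; ++i)              // CPU writes through the pointer...
        data[i] = (float)i;

    clSetKernelArgSVMPointer(kernel, 0, data);  // ...and the GPU reads the same pointer.
    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
    clFinish(queue);

    clSVMFree(ctx, data);
}
```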
 
But isn't that also a feature of OpenGL 4.4?

Only with extensions:

Extensions released alongside the OpenGL 4.4 specification include:

  • Bindless Texture Extension (GL_ARB_bindless_texture)
    Shaders can now access an effectively unlimited number of texture and image resources directly by virtual addresses. This bindless texture approach avoids the application overhead due to explicitly binding a small window of accessible textures. Ray tracing and global illumination algorithms are faster and simpler with unfettered access to a virtual world's entire texture set.
  • Sparse Texture Extension (GL_ARB_sparse_texture)
    Enables handling of huge textures that are much larger than the GPU's physical memory by allowing an application to select which regions of the texture are resident, for ‘mega-texture’ algorithms and very large data-set visualizations (see the sketch below).

http://www.opengl.org/documentation/current_version/
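
For what it's worth, here is roughly what the GL_ARB_sparse_texture path quoted above looks like in use. A minimal sketch assuming a GL 4.x context with the extension available; the texture size, format and committed region are illustrative, and real code should query the virtual page size with glGetInternalformativ and align commitments to it:

```cpp
#include <GL/glew.h>   // or any other loader that exposes the ARB entry points

GLuint make_sparse_texture()
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    // Mark the texture as sparse *before* allocating storage.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SPARSE_ARB, GL_TRUE);

    // Reserve virtual address space for a 16K x 16K texture;
    // no physical memory is committed yet.
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, 16384, 16384);

    // Commit physical pages only for the region actually needed right now
    // (offsets/sizes must be multiples of the virtual page size).
    glTexPageCommitmentARB(GL_TEXTURE_2D, 0,
                           0, 0, 0,       // x, y, z offset
                           512, 512, 1,   // width, height, depth
                           GL_TRUE);      // commit

    // ...upload texels for the committed region with glTexSubImage2D as usual...
    return tex;
}
```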
 
Just to confuse matters - what if Bonaire is AMD's design? It's quoted as being GCN 1.1 - logically that would only be developed by the 'core design team', not the MS/Sony teams...

In that reality:
- Sony forked the 7850/7970M and built their APU. (possible from early/mid 2012).
- MS forked the 7770 and built their APU. (possible from early 2012?). The 7770 was considered an underperforming GPU (I usually buy the x770, but took the 7870 for this generation), so they wanted an upgrade.

Bonaire is an early implementation of AMD's GCN 1.1, designed to replace the underperforming 7770 (released March 2013). If GCN 1.1 was only finished in Feb/March, might it have been optimistic for the advanced XB1 APU changes to be "ported" and rigorously tested before production started this year?

Is the crazy answer to this thread that the XB1 is "shipping with the wrong GPU"?
 
I'm not sure they would be forking off a specific PC GPU. It's more likely they're branching off specific points for the fundamental building blocks: CU, ROP, etc.

Both companies probably decided on a specific die area and power budget for the GPU, and the CU count and clock were derived from there.
 
I still don't get why people think Sony implemented the GPU in their SOC later than Microsoft. If Sony are going to manufacture more chips and hence more consoles than MS, it is far more likely that they finalized their SOC earlier than Microsoft, thus being able to start production on final silicon earlier.

But the interesting thing is that they both finalized silicon at roughly the same time with working final retail kits being available at roughly the same time. That indicates that the SOCs were also likely designed within the same timeframe.

Regards,
SB
 
If Sony are going to manufacture more chips and hence more consoles than MS...
You can achieve that even if you design later, as long as your silicon can be fabricated faster. And we don't know for sure who has the most HW available for launch, let alone by how much, so the sales side is no use in understanding the HW tech levels.
 
You can achieve that even if you design later, as long as your silicon can be fabricated faster. And we don't know for sure who has the most HW available for launch, let alone by how much, so the sales side is no use in understanding the HW tech levels.

That's part of my point. There isn't enough information available to the public to know who finalized the design of their SOC first. The only thing we have is that final retail kits appeared for both Sony and Microsoft at roughly the same time, which is somewhat to be expected if you want to have enough units to meet what appears to be a record console launch for both in November.

Regards,
SB
 
I still don't get why people think Sony implemented the GPU in their SOC later than Microsoft. If Sony are going to manufacture more chips and hence more consoles than MS, it is far more likely that they finalized their SOC earlier than Microsoft, thus being able to start production on final silicon earlier.

But the interesting thing is that they both finalized silicon at roughly the same time with working final retail kits being available at roughly the same time. That indicates that the SOCs were also likely designed within the same timeframe.
I think the line of reasoning rests on the assumption that MS probably needed more time to get everything, including the heavier modifications of the memory subsystem, implemented and verified. As Sony opted for an apparently simpler system (no eSRAM, less coherent bandwidth between the CPU clusters and the GPU), they could get this done faster. Even MS supposedly "complained" that Sony was using mainly standard building blocks in their SoC (which in my opinion is a bad argument, as one can put together a powerful system without much fuss; why should one customize more than necessary? But coming from MS' point of view, I can understand it).
 
I'm not sure they would be forking off a specific PC GPU. It's more likely they're branching off specific points for the fundamental building blocks: CU, ROP, etc.

I'd imagine you can't mix/match everything across product lines. So the first choice would be 'what family of Southern Islands do you want to start from?' Sony appears to have chosen Tahiti/Pitcairn whilst MS appears to have chosen Cape Verde.

Anyway, high-end 'Cape Verde' cards (especially the 7770) were considered uncompetitive. So, "Bonaire".

My question is whether Bonaire was created by AMD using tech from the XB1/PS4, or Bonaire was AMD designed... if AMD designed it, then that may have left MS with a very short window to integrate those changes into the XB1 APU.

If it's an AMD-design, then it's theoretically possible that "DX11.2 tier-2" was primarily created for a version of the XB1 that will never exist.
 
I'd imagine you can't mix/match everything across product lines. So the first choice would be 'what family of Southern Islands do you want to start from?' Sony appears to have chosen Tahiti/Pitcairn whilst MS appears to have chosen Cape Verde.

Anyway, high-end 'Cape Verde' cards (especially the 7770) were considered uncompetitive. So, "Bonaire".

My question is whether Bonaire was created by AMD using tech from the XB1/PS4, or Bonaire was AMD designed... if AMD designed it, then that may have left MS with a very short window to integrate those changes into the XB1 APU.

If it's an AMD-design, then it's theoretically possible that "DX11.2 tier-2" was primarily created for a version of the XB1 that will never exist.

Aren't they all AMD designs?
 
Aren't they all AMD designs?

If you picture AMD as 3 teams:
- AMD/Sony, modifying Tahiti for the PS4, then integrating with Jaguar into the PS4's APU.
- AMD/MS, modifying Cape Verde for the XB1, then integrating with Jaguar into the XB1's APU.
- AMD/AMD, working on AMD's GCN/HSA and working to the new 8/9-series.

After project completion, the 2 console teams get absorbed into the main team, or go on a well-deserved holiday or start designing chips for a crazy Arab Dictator.

Anyway, it's suggested that the AMD/MS team committed their changes to GCN, which was subsequently used by AMD/AMD as the basis for the Bonaire design.

But it may be more likely that AMD/AMD designed Bonaire, and AMD/MS had to try to integrate it into their APU at short notice. That would have created 2 competing APU designs for the XB1.

But I've got no knowledge of any of this, so it's probably nonsense :).
 
If you picture AMD as 3 teams:
- AMD/Sony, modifying Tahiti for the PS4, then integrating with Jaguar into the PS4's APU.
- AMD/MS, modifying Cape Verde for the XB1, then integrating with Jaguar into the XB1's APU.
- AMD/AMD, working on AMD's GCN/HSA and working to the new 8/9-series.

Do we know that information?
 
I'd imagine you can't mix/match everything across product lines. So the first choice would be 'what family of Southern Islands do you want to start from?' Sony appears to have chosen Tahiti/Pitcairn whilst MS appears to have chosen Cape Verde.

Anyway, high-end 'Cape Verde' cards (especially the 7770) were considered uncompetitive. So, "Bonaire".

My question is whether Bonaire was created by AMD using tech from the XB1/PS4, or Bonaire was AMD designed... if AMD designed it, then that may have left MS with a very short window to integrate those changes into the XB1 APU.

If it's an AMD-design, then it's theoretically possible that "DX11.2 tier-2" was primarily created for a version of the XB1 that will never exist.
When statements are made that MS and Sony customized the parts, it doesn't mean they designed and implemented the changes themselves. They hire AMD so they don't need to design everything from scratch, and AMD can integrate changes much faster than an external party could. Customers are very involved in validating designs, though.

Tahiti, Pitcairn and Cape Verde all come from the same code base, so based on your example a customer might logically start from a specific chip, but technically they would all come from the same place.
 
I'd imagine you can't mix/match everything across product lines. So the first choice would be 'what family of Southern Islands do you want to start from?'
AMD's stance is that there is a range of IP that can be picked from, so the choices may be more like "pick which revision of command processor IP / vector processor IP / special function layout you want to start with".
There are points in the architecture that serve to decouple parts of the design from the rest, which would help contain a cascade of interactions throughout the chip.
Even if not, there's a lot of reworking that is getting done anyway. Why should the specific command processor or rendering backend that appeared years ago be the basis? There might be internal tweaks and revisions that we never see mentioned, even within the same chip code name.

My question is whether Bonaire was created by AMD using tech from the XB1/PS4, or Bonaire was AMD designed... if AMD designed it, then that may have left MS with a very short window to integrate those changes into the XB1 APU.
Elements of the PS4's design influenced or directed IP that went into the IP pool that AMD's products are drawing from. The volatile flag was cited in a different thread as being a Sony-driven feature that we see is part of the main pool.
Perhaps there was some side benefit or arrangement where the console manufacturer can cede some of its feature set to common use--with the exception of the competing console.

Some driver comments for Sea Islands also talk about programmable DMA engines, which have functionality that overlaps well with the non-Zlib DMEs in Durango. It's not clear which came first for that one, however.
 
http://semiaccurate.com/2013/09/03/xbox-ones-sound-block-is-much-more-than-audio/

Then there are the GPUs themselves, the Achilles heel of the XBox One. While there is nothing wrong with them per se, they are a slightly older revision than used in the PS4 but the differences are small enough to be ignorable. What does matter is that the PS4 has about 50% more units at roughly the same clocks, 1152 at ~800MHz vs 768 at 853MHz, a massive difference. Couple this with the vastly more user-friendly 8GB GDDR5 memory design and you have a clean kill for Sony on performance.

Confirmation that the XB1 is using a slightly older GPU design... I guess that sheds more light on the subject at hand. ;)
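
For reference, the quoted unit counts and clocks work out to the following raw shader throughput. Back-of-the-envelope only, assuming the usual 2 FLOPs per ALU per clock (FMA) for GCN; these aren't figures from the article:

```cpp
#include <cstdio>

int main()
{
    const double ps4 = 1152 * 0.800 * 2;   // ALUs * GHz * FLOPs/clock ~= 1843 GFLOPS
    const double xb1 =  768 * 0.853 * 2;   //                          ~= 1310 GFLOPS
    printf("PS4 ~%.0f GFLOPS, XB1 ~%.0f GFLOPS, ratio %.2fx\n", ps4, xb1, ps4 / xb1);
    return 0;
}
```

So the 50% unit advantage, minus the ~6% clock deficit, nets out to roughly 40% more raw ALU throughput for the PS4.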
 