Xbox One (Durango) Technical hardware investigation

Why use more tiles than needed? I'd think you'd want to minimize the number of tiles while maximizing the cache usage.
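As a rough illustration of that trade-off, here is a minimal Python sketch of the arithmetic; the 32 MB fast-memory budget, the 1080p target, and the 32 bytes per pixel are my own assumptions for the example, not confirmed figures.

```python
# Minimal back-of-the-envelope sketch: how many tiles does a render target
# need so that each tile fits in a fixed fast-memory budget? All figures are
# assumptions for illustration (32 MB fast memory, 1920x1080 target,
# 32 bytes per pixel for a fat G-buffer), not confirmed specs.
import math

FAST_MEM_BYTES = 32 * 1024 * 1024   # assumed on-chip buffer size
WIDTH, HEIGHT = 1920, 1080          # assumed render target
BYTES_PER_PIXEL = 32                # assumed per-pixel footprint (multiple RTs)

target_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
min_tiles = math.ceil(target_bytes / FAST_MEM_BYTES)

print(f"Render target: {target_bytes / 2**20:.1f} MB")
print(f"Minimum tiles to fit in {FAST_MEM_BYTES / 2**20:.0f} MB: {min_tiles}")
# Using more tiles than this minimum means smaller tiles and spare room in
# the buffer, but extra per-tile overhead (resubmitting geometry, tile traffic).
```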
 
No way the alpha kits had a 7970... a 7790 is basically what is in the XB1.

The 7790 didn't exist when the alpha kits went out; only the 7970 did. There is no conspiracy, it is simply a timeline issue: when did the first GCN card go on sale, and when did the alpha kits go out?
 
Please, that post was a grammar disaster. Could you copy/paste the corrected one into your post (/edit)? (I guess it is still not perfect, but definitely better... :oops: )

What I was saying was that even with the disparity in the amount of RAM (4 GB versus 8 GB), the technical capabilities of the competition were still more advanced than the One's. So if the MS team was hinging the viability of the Xbox One on Kinect and having more RAM... they were being quite silly.

They had every resource to build the box they wanted and that is exactly what they did. They had foreknowledge about what the competition was doing. They just made the wrong bet so far.
 
The 7790 didn't exist when the alpha kits went out; only the 7970 did. There is no conspiracy, it is simply a timeline issue: when did the first GCN card go on sale, and when did the alpha kits go out?

I didn't know that. Which other cards in the 7000 series were available?
 
What I was saying was that even with the disparity in the amount of RAM (4 GB versus 8 GB), the technical capabilities of the competition were still more advanced than the One's. So if the MS team was hinging the viability of the Xbox One on Kinect and having more RAM... they were being quite silly.

They had every resource to build the box they wanted and that is exactly what they did. They had foreknowledge about what the competition was doing. They just made the wrong bet so far.

I would expect that the additional memory would probably have resulted in more obviously better visuals FWIW.
It's likely why 3rd party devs were so vocal about it.

I would also argue with "more advanced" even with RAM parity. It has fewer ALUs in the GPU and slower main memory, but a fast low-latency memory pool.

I would bet there are at least artificial cases that can be constructed where the XB1 GPU will perform better.

Then you also have the custom audio part, which could free up CPU resources.

I really don't think that the difference will be as large as many seem to be assuming.
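To put the "slower main memory but a fast low-latency pool" point in rough numbers, here is a naive weighted-average sketch in Python; the bandwidth figures are assumptions based on commonly quoted numbers, not anything established here, and the model ignores overlap between the pools.

```python
# Rough sketch of "effective" bandwidth when traffic is split between a
# slow main pool and a small fast pool. The numbers are assumptions
# (commonly quoted figures), not confirmed specs.

DDR3_BW  = 68.0    # GB/s, assumed main memory bandwidth
ESRAM_BW = 102.0   # GB/s, assumed fast-pool bandwidth
GDDR5_BW = 176.0   # GB/s, assumed competing unified pool, for comparison

def effective_bw(fraction_in_fast_pool: float) -> float:
    """Naive weighted average: the share of traffic hitting the fast pool
    sees the fast-pool bandwidth, the rest sees main-memory bandwidth."""
    f = fraction_in_fast_pool
    return f * ESRAM_BW + (1.0 - f) * DDR3_BW

for f in (0.0, 0.25, 0.5, 0.75):
    print(f"{f:>4.0%} of traffic in fast pool -> ~{effective_bw(f):.0f} GB/s "
          f"(vs {GDDR5_BW:.0f} GB/s unified)")
# The real picture depends on what actually fits in 32 MB and how well the
# two pools can be used concurrently, so treat this as intuition only.
```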
 
It's a native hypervisor; the Title and System OSs are both hosted by the Host OS, which controls all inter-OS communication and access to hardware.

Interesting. 'Title' and 'System' - are they the official developer designations of the two operating systems?
 
I would expect that the additional memory would probably have resulted in more obviously better visuals FWIW.
It's likely why 3rd party devs were so vocal about it.

I would also argue with "more advanced" even with RAM parity. It has fewer ALUs in the GPU and slower main memory, but a fast low-latency memory pool.

I would bet there are at least artificial cases that can be constructed where the XB1 GPU will perform better.

Then you also have the custom audio part, which could free up CPU resources.

I really don't think that the difference will be as large as many seem to be assuming.

I agree, which is why Panello went on his "specs are meaningless" talk with DF. The things that are possible with ESRAM as MS envisions them are probably new territory. The thing is, was it actually worth it, both architecturally and financially?

Would it have been easier to just reconfigure a 7790 with ESRAM, as opposed to essentially a 7770 with ESRAM? It just seems like a lot of work for very little gain.
 
Would it have been easier to just reconfigure a 7790 with ESRAM, as opposed to essentially a 7770 with ESRAM? It just seems like a lot of work for very little gain.

These are the types of statements I don't understand.
It uses components from AMD in a configuration similar to an existing AMD chip, but it isn't just a 7770.

Integrating embedded memory with the GPU was almost certainly a significant engineering undertaking, and it probably impacted several of the functional blocks.

And how are you measuring the "little gain"?
 
These are the types of statements I don't understand.
It uses components from AMD in a configuration similar to an existing AMD chip, but it isn't just a 7770.

Integrating embedded memory with the GPU was almost certainly a significant engineering undertaking, and it probably impacted several of the functional blocks.

And how are you measuring the "little gain"?

I am not underestimating the engineering challenge that MS was faced with. I was just wondering aloud whether they really needed to take it on. So when I look at the specifications for each card:

AMD Radeon™ HD 7770 GHz Edition Feature Summary
- 1000 MHz Engine Clock
- Up to 2 GB GDDR5 Memory
- 1125 MHz Memory Clock (4.5 Gbps GDDR5)
- 72 GB/s memory bandwidth (maximum)
- 1.28 TFLOPS Single Precision compute power
- GCN Architecture
  - 10 Compute Units (640 Stream Processors)
  - 40 Texture Units
  - 64 Z/Stencil ROP Units
  - 16 Color ROP Units
  - Dual Asynchronous Compute Engines (ACE)

versus

AMD Radeon™ HD 7790 Feature Summary
- 1000 MHz Engine Clock
- 1 GB GDDR5 Memory
- 1500 MHz Memory Clock (6.0 Gbps GDDR5)
- 96 GB/s memory bandwidth (maximum)
- 1.79 TFLOPS Single Precision compute power
- GCN Architecture
  - 14 Compute Units (896 Stream Processors)
  - 56 Texture Units
  - 64 Z/Stencil ROP Units
  - 16 Color ROP Units
  - Dual Geometry Engines
- 128-bit GDDR5 memory interface
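
For what it's worth, the single-precision numbers in both lists fall straight out of stream processor count × 2 ops per clock (FMA) × clock, as this quick Python check shows:

```python
# Sanity check: single-precision TFLOPS = stream processors x 2 ops/clock (FMA) x clock.
def sp_tflops(stream_processors: int, clock_ghz: float) -> float:
    return stream_processors * 2 * clock_ghz / 1000.0

print(f"HD 7770: {sp_tflops(640, 1.0):.2f} TFLOPS")  # -> 1.28
print(f"HD 7790: {sp_tflops(896, 1.0):.2f} TFLOPS")  # -> 1.79
```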


It feels like MS is aiming for 7790 performance but is really getting 7770 performance (1.28 TF, 72 GB/s bandwidth) plus ESRAM.

Would it have been simpler to just base your GPU part on the 7790?

Don't the development costs of ESRAM essentially match the costs of going straight with a 7790 in the first place?

Is it that they had to engineer it toward DDR3 instead of GDDR5, which seems to be the memory normally paired with the discrete card?

These are just questions I have. I have no real judgment per se, except that most of the technical conversation surrounding the XB1 seems aimed at extracting more performance in order to achieve console parity.
 
I would wager that when the ESRAM is used effectively, the performance of the Xbox One's graphics subsystem will far and away outstrip any of those discrete parts you mention.
 
But targeting more CUs and more ROPs means you need more bandwidth, which means you need GDDR5. Pretty sure the reason they went with DDR3 was that it was the much safer bet to get to 8 GB, and that meant ESRAM, which limited what they could fit for GPU and CPU. It's not like you pick your memory at the last minute; they would have picked DDR3 and ESRAM as their design a couple of years ago. Projecting the price and availability of GDDR5 in that time frame is most likely why they picked DDR3. In hindsight you could say it was the wrong choice, but they had to choose years ago. If their projections were 8 GB of DDR3 + ESRAM or 4 GB of GDDR5, then I can see how they ended up where they are now.
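As a rough sketch of that trade-off, here is the peak-bandwidth arithmetic in Python; the bus widths and data rates are illustrative assumptions, not known design points from the time.

```python
# Peak bandwidth = data rate (GT/s) x bus width (bytes).
# Data rates and bus widths below are illustrative assumptions.
def peak_bw_gbs(data_rate_gtps: float, bus_width_bits: int) -> float:
    return data_rate_gtps * bus_width_bits / 8

print(f"256-bit DDR3-2133 : {peak_bw_gbs(2.133, 256):.1f} GB/s")  # ~68 GB/s
print(f"256-bit GDDR5-5500: {peak_bw_gbs(5.5,   256):.1f} GB/s")  # ~176 GB/s
# Capacity was the other axis: high-density DDR3 made 8 GB the safe target,
# while 8 GB of GDDR5 was the riskier projection at design time.
```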
 
These are just questions I have. I have no real judgment per se, except that most of the technical conversation surrounding the XB1 seems aimed at extracting more performance in order to achieve console parity.

Because the only information people see is 18 vs 12.
What else are people going to discuss?

It would be interesting to get a real discussion about what the real differences are from a 3rd party who's used both extensively. It's unlikely to happen, and if it did, most people would only accept the parts that agreed with their existing viewpoints.

Now I personally think Sony made the better hardware choices if you are looking purely at how good the hardware can be at playing games, and how "easy" it is to get that performance. But I don't think there will be a large visual disparity.
But I'm just guessing as to how much the ESRAM will help.

There is the OS issue: how good are the "drivers" on both sides, and how much does the VM on the MS side impact performance?

Does the dedicated audio silicon on the XB1 free up CPU resources, giving it an advantage in CPU-limited scenarios?

There is also the CPU/memory issue: are the OS reserves similar, or does one have an edge?

Then you get into services etc etc etc.
 
I would wager that when the ESRAM is used effectively, the performance of the Xbox One's graphics subsystem will far and away outstrip any of those discrete parts you mention.

"Far and away" is a high bar. I defer to guys like you, ERP, gubbi, sebbbi etc on these points because they are definitely above my head.

I guess a developer could make any hardware choke and a great developer can make any hardware sing.
 
It reminds me of the story where MS told developers the final GPU would be similar in performance to a GTX 680.
We are talking about a console, and that makes all the difference in the world when you take into account that engineers have been working to customize every little detail.

I just hope that the specs and the final performance are good, especially for something that's supposed to stick around for a while. I think the PS4 is going to be generally a touch better in some games, while in some specific games/scenarios the Xbox One might have an advantage because of its design, as someone else put more eloquently than I did.

The 32 MB buffer/cache can cancel out some of the advantages of a, let's say, GTX 680, but it depends on specific details I don't know about.
 
Interesting. 'Title' and 'System' - are they the official developer designations of the two operating systems?

I know that's what they used to be called, but I think different names were mentioned in the XB1 architectural roundtable?

....

Can we figure out where the 5 billion transistors are 'hiding' in XB1?

6T SRAM = ~1.6 billion
SHAPE block = ~400 million
CPU = ?
GPU = ?
Move engines, display planes, video encoder/decoders = ?
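For the 6T SRAM line, the ~1.6 billion follows directly from 32 MB at six transistors per bit; a quick Python tally (using only the figures quoted above) shows how much of the 5 billion is left for the unknowns:

```python
# Where do ~5 billion transistors go? The SRAM line is simple arithmetic:
# 32 MB x 8 bits/byte x 6 transistors/bit (6T cell) ~= 1.61e9.
ESRAM_BYTES = 32 * 1024 * 1024
sram_transistors = ESRAM_BYTES * 8 * 6
print(f"32 MB of 6T SRAM: {sram_transistors / 1e9:.2f} billion transistors")

TOTAL = 5.0e9               # quoted die total
SHAPE = 0.4e9               # quoted estimate for the audio block
unaccounted = TOTAL - sram_transistors - SHAPE
print(f"Left for CPU, GPU, move engines, display planes, codecs, etc.: "
      f"{unaccounted / 1e9:.2f} billion")
```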
 