Next Generation Hardware Speculation with a Technical Spin [post E3 2019, pre GDC 2020] [XBSX, PS5]

Status
Not open for further replies.
And also make heavy use of GPGPU when it is idle.
I strongly disagree with the idea that a GPU is 'idle' just because it is not rasterizing at a given moment. :D

But I think your worries are baseless. Even if Navi can saturate CUs better with rendering, that does not mean there is less time left for pure compute.
It's still the developer's decision how to use the given resources. One part becoming more efficient can only help the other parts, indirectly.

I'm more worried that RDNA could end up less powerful at compute than GCN in general. Benchmark results vary wildly.
Unfortunately I have no experience myself yet, so I don't know. (I could buy a 5500 XT, but I'm not sure how close RDNA2 will be to what the consoles get. And since I have no time anyway, I'll wait until we know more...)


The problem is that none of those are consistent with the performance numbers we’re given. PS5 is supposedly 10 TF or greater, and XSX is over 10 TF, possibly 12 TF. They’re doing that in less area than Navi 10 + Zen 2.
Really stupid question: could it happen that they double the SIMDs per CU but not the TMUs, ROPs, etc.?
That makes no sense in the age of 4K, but maybe console devs really want TF? And it could explain the rumored high TF numbers?
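For reference, here is how those TF figures fall out of CU counts and clocks. A quick sketch; the CU counts and clocks below are illustrative guesses, not confirmed specs:

```python
# Peak FP32 throughput for a GCN/RDNA-style GPU:
# TFLOPS = CUs * lanes per CU * 2 FLOPs per FMA * clock (GHz) / 1000

def peak_tflops(cus: int, lanes_per_cu: int, clock_ghz: float) -> float:
    """Peak single-precision TFLOPS, counting an FMA as 2 FLOPs."""
    return cus * lanes_per_cu * 2 * clock_ghz / 1000.0

# A standard GCN/RDNA CU has 64 shader lanes. Hypothetical 40 CUs at 1.8 GHz:
print(peak_tflops(40, 64, 1.8))    # 9.216 TF
# Doubling the SIMDs per CU (the question above) doubles the peak:
print(peak_tflops(40, 128, 1.8))   # 18.432 TF
```

So doubled SIMDs per CU would indeed double the headline TF number without touching TMU or ROP counts, which is why the question isn't as stupid as it sounds.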
 
Nevertheless, it's not a comparison of Apples to Apples, as RX580 is the GPU and board alone, while the console includes CPU, Optical Drive, Hard Drive, etc...
Yes, that's why it's not logical to think that a RX 580 with 8GB GDDR5 will consume more watts than a full console with a CPU, HDD, 12GB GDDR5 and a similar GPU (albeit wider with less clocks). Testing with the same workload is paramount here before we make such a grand conclusion.
 
Yes, that's why it's not logical to think that a RX 580 with 8GB GDDR5 will consume more watts than a full console with a CPU, HDD, 12GB GDDR5 and a similar GPU (albeit wider with less clocks). Testing with the same workload is paramount here before we make such a grand conclusion.
Gears 5 is the most stressful case known to date for the XB1. That’s still less than this single RX 580 example. Requesting an exhaustive dataset is disingenuous, in my opinion. There is a fairly extensive technical disclosure of directed efforts on Microsoft’s part to limit power consumption through aggressive voltage tailoring. That’s exactly the result we see.

Descending into pedantry by moaning about a lack of matching usage conditions (across platforms where equalizing parameters is by design difficult and open to argument), or claiming gotchas because the actual usage TF is 5% higher than the XB1X’s, simply distracts from what MS has demonstrated they can accomplish compared to PC use cases.

I will also note in my power budget for XSX, I didn’t even claim the benefits of N7P over N7.

 
Gears 5 is the most stressing case for XB1 known to date. That’s still less than this single instance of 580 example.
The test was done on Gears 4, not 5.

Again, if you still don't understand how running a game in a console environment can limit power consumption (by virtue of V-Sync/fps caps and CPU limitations), then that's your problem. The 580 power consumption figures you mentioned were measured in a fully unlocked scenario: no V-Sync, no fps cap, no CPU limitation.
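To illustrate the V-Sync/fps-cap point: a capped render loop idles out the remainder of each frame interval, which bounds the work (and therefore power draw) per second. A minimal sketch, where the `render` callback and the cap are placeholders:

```python
# Why an fps cap limits power: the loop sleeps (low-power idle) for whatever
# is left of each frame interval instead of immediately rendering again.
import time

def run_capped(render, fps_cap: float, seconds: float) -> int:
    """Run `render` at most fps_cap times per second; return frames rendered."""
    frame_time = 1.0 / fps_cap
    frames = 0
    deadline = time.perf_counter() + seconds
    while time.perf_counter() < deadline:
        start = time.perf_counter()
        render()
        frames += 1
        # V-Sync-style wait: idle out the rest of the frame interval
        sleep_left = frame_time - (time.perf_counter() - start)
        if sleep_left > 0:
            time.sleep(sleep_left)
    return frames
```

An uncapped PC benchmark runs the equivalent of this loop with no sleep at all, so the hardware churns at 100% the entire time, which is the scenario those 580 figures come from.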

(across platforms where that equalization of parameters is by design difficult and open to argument)
There is nothing difficult about running Gears 4 at equivalent settings at 1080p60 and 4K30 on a 580, then measuring power consumption.

There is a somewhat extensive technical disclosure of directed efforts on Microsoft’s part to limit power consumption by aggressive voltage tailoring.
This fantasy that extensive voltage tailoring can reduce the consumption of a complete system (CPU + GPU) below that of a comparable GPU alone needs to stop. Before you make that claim, you need to provide adequate data for it. The fact is there is NONE yet.
 
"We cannot quantify which console is more powerful because of SSD speed"?

Say what?

That's not the point; he just says the PS5 SSD is very fast. I posted another comment earlier, but I wasn't supposed to post it. I deleted it because it came from a major industry source. And he said the PS5 SSD is faster than any existing PC SSD.
 
That's not the point; he just says the PS5 SSD is very fast. I posted another comment earlier, but I wasn't supposed to post it. I deleted it because it came from a major industry source. And he said the PS5 SSD is faster than any existing PC SSD.
Is the PS5 SSD's in-game speed faster than the "theoretical speed" of a PC SSD?
 
Microsoft having a lower-end SKU that exists simply as a result of binned chips only really makes sense to me if the Series X is using a "full chip" (i.e., no disabled CUs).

I personally don't believe this is the case; I think there are two separate chips and that Lockhart was primarily designed for xCloud.
 
Yes, that's why it's not logical to think that a RX 580 with 8GB GDDR5 will consume more watts than a full console with a CPU, HDD, 12GB GDDR5 and a similar GPU (albeit wider with less clocks). Testing with the same workload is paramount here before we make such a grand conclusion.
The way it's done on consoles is to find the section of any game that draws the most power. It's been a reliable method; after testing a dozen games, that peak rarely moves. That's the number being estimated here, measured from the wall.

How much do you think the rumored 12 TF Navi/Zen 2/GDDR6 console will peak at from the wall while gaming?

Your 280 W for the GPU alone brings that to around 430 W if we calculate the same way. Is that correct?
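The arithmetic behind that ~430 W guess can be made explicit. In this sketch, the rest-of-system budget and the PSU efficiency are assumptions of mine, not measurements:

```python
# Back-of-envelope wall-power estimate: DC power budget divided by PSU efficiency.
# All inputs here are guesses for illustration, not measured values.

def wall_watts(gpu_w: float, rest_w: float, psu_efficiency: float = 0.9) -> float:
    """Estimated draw at the wall for a given internal (DC) power budget."""
    return (gpu_w + rest_w) / psu_efficiency

# 280 W GPU + ~105 W assumed for CPU, GDDR6, storage, and fans,
# at an assumed ~90% PSU efficiency:
print(round(wall_watts(280, 105)))  # 428, in the ballpark of the 430 W above
```

Change the rest-of-system assumption or the efficiency figure and the estimate moves by tens of watts, which is why the same GPU number can support quite different wall-power claims.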
 
"We cannot quantify which console is more powerful because of SSD speed"?

Say what?

I read it as: the new tech in the consoles (SSD, RT, VRS) means raw number comparisons won't reflect on-screen performance/results.

I also inferred that the consoles may be diverging in their methods for performance improvement, rather than both simply being x86 PC-like consoles. The exotic may be returning.
 
The test was done on Gears 4, not 5.

Sure, but it’s a taxing game nonetheless and represents a relative maximum for the console. It’s not hard to find a comparable case for the 580, or to bound the problem. Insisting on a specific game is pedantic and overly constraining, as MrFox alludes.

Again, if you still don't understand how running a game in a console environment can limit power consumption (by virtue of V-Sync/ fps cap and CPU limitations), then that's your problem, the 580 power consumption figures you mentioned are done in a fully unlocked game scenario, no V-Sync, no fps cap, no CPU limitation.
A locked framerate isn’t an issue here; games are optimized for a fixed hardware configuration specifically to maximize the performance extracted from it. The console case would be more stressful than a PC with unconstrained variables. And of course, suggesting we try to de-embed the CPU from a console test is asinine.

This fantasy that an extensive voltage tailoring can reduce the consumption of a complete system (CPU +GPU) below that of a similar GPU to that system needs to stop, before you make that claim you need to provide the adequate data for it. Fact is there is NONE yet.

You constantly speak in hyperbole and refuse to concede any technical points. You clearly have no interest in actually debating the technical merits. You’re clearly an intelligent individual, so I assume it is just pride. I hope that when MS unveils a 12TF box that draws less than 300W at the wall, you’ll be able to appreciate the technical achievement for what it is. The usefulness of our dialogue has reached a conclusion.
 
No precise figure, just a faster real read speed than any PC SSD.
I agree that access to the SSD could be a bigger game changer than more compute power. When I think about the Hellblade 2 demo and the level of detail in there, I think having better access to the SSD will enable us to drive very high-fidelity assets, which is a separate discussion from GPU compute. There are a lot of tools, features, and optimization methods available to maximize your compute resources (especially looking at the next generation), but the hard wall in raising asset fidelity will be disk access.

But I think people (with respect to the general discussion out there on Sony's solution vs. NVMe) are also severely underestimating the speed of NVMe when properly utilized.
NVMe is no slouch, and if Sony has something that is faster, I think that's awesome. But I don't expect it to be an order of magnitude faster than NVMe.
 
If we're going to get serious about talking about SSDs and where developers need to go with this, we need to start by understanding the actual properties of NVMe vs. SATA SSDs:

https://panthema.net/2019/0322-nvme-batched-block-access-speed/

I think it's important to understand the different ways the term 'speed' is used.

The blog goes through different types of tests to see where the drives perform optimally and where they don't.

I think this is critical background for understanding 'customization' at the storage level, if this thread is going to move beyond the GPU.

People talk about theoretical maximums; those are the 'scan/read' results. But there are a great many other types of access requests too. Worth reading through, especially if you're looking to buy a drive in the near future.
 
NVMe is only one part of the problem. There are also the filesystem and cache organization. All of them contribute to the final speed.
The blog covers that.
Filesystem and cache organization are covered by the batch sizes and the different types of read requests.
Drivers and implementation will improve performance, but since we are talking about consoles, the expectation is that they will write the implementation for best performance.

i.e., a filesystem set up for sequential scans (reads) will maximize bandwidth at batch sizes >64, while random block reads halve performance on the blog's hardware.
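A toy version of the blog's sequential-vs-random comparison can be sketched in Python. Real NVMe benchmarks would bypass the page cache (O_DIRECT on a raw device); the file path and block counts here are placeholders:

```python
# Compare throughput for the same set of blocks read in order vs. shuffled.
import os
import random
import time

BLOCK = 4096  # 4 KiB, a typical filesystem/NVMe block granularity

def read_blocks(path: str, offsets: list[int]) -> float:
    """Read one BLOCK at each offset; return throughput in MB/s."""
    fd = os.open(path, os.O_RDONLY)
    try:
        start = time.perf_counter()
        for off in offsets:
            os.pread(fd, BLOCK, off)
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return len(offsets) * BLOCK / elapsed / 1e6

def benchmark(path: str, n_blocks: int = 4096) -> tuple[float, float]:
    """(sequential MB/s, random MB/s) for the same blocks of a file."""
    seq = [i * BLOCK for i in range(n_blocks)]
    rnd = seq[:]
    random.shuffle(rnd)  # same data, random access pattern
    return read_blocks(path, seq), read_blocks(path, rnd)
```

On a cold-cache spinning disk the two numbers diverge enormously; on NVMe the gap shrinks but, as the blog shows, access pattern and batch size still matter, which is exactly the knob a console filesystem layout can turn.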
 