anexanhume said:
> Flute benchmarks suggest that L3 is quartered in the PS5. That's ~22mm² freed up.

I thought that was later retracted?
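For context, the ~22mm² figure is easy to sanity-check. A minimal sketch, assuming a desktop Zen 2 chiplet's 32MB of L3 quartered down to 8MB and a ballpark ~0.9mm² per MB of L3 on N7 (both numbers are assumptions, not from the post):

```python
# Back-of-the-envelope check of the "~22mm² freed up" claim.
FULL_L3_MB = 32        # desktop Zen 2 (2x 16MB per CCX) -- assumption
QUARTERED_L3_MB = 8    # what Flute reportedly shows
MM2_PER_MB = 0.92      # rough N7 L3 density incl. tags/overhead -- assumption

saved_mm2 = (FULL_L3_MB - QUARTERED_L3_MB) * MM2_PER_MB
print(f"~{saved_mm2:.0f} mm² freed")  # -> ~22 mm²
```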
> And also make heavy use of GPGPU when it is idle.

I strongly disagree with this kind of thinking that a GPU is 'idle' just because it is not rasterizing at a certain moment.
> The problem is that none of those are consistent with the performance numbers we've been given. PS5 is supposedly 10TF or greater, and XSX is over 10TF, possibly 12TF. They're doing that in less area than Navi 10 + Zen 2.

Really stupid question: could they double the SIMDs per CU but not the TMUs, ROPs, etc.?
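For the curious, those TF figures are straightforward to sanity-check from CU count and clock. A minimal sketch, assuming the usual 64 FP32 lanes per CU and 2 FLOPs (one FMA) per lane per clock; the configurations below are hypothetical, not leaked specs:

```python
def tflops(cus: int, clock_ghz: float, lanes_per_cu: int = 64) -> float:
    """Peak FP32 TFLOPS: CUs * lanes * 2 FLOPs (FMA) per clock."""
    return cus * lanes_per_cu * 2 * clock_ghz / 1000

# Hypothetical configurations, for illustration only:
print(tflops(36, 2.0))  # ~9.2 TF
print(tflops(56, 1.7))  # ~12.2 TF
# Doubling the SIMDs per CU would double lanes_per_cu to 128 in this model.
```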
> I thought that was later retracted?

Flute CPU is definitely 8MB; see this comparison:
> Nevertheless, it's not a comparison of apples to apples, as the RX 580 is the GPU and board alone, while the console includes a CPU, optical drive, hard drive, etc.

Yes, that's why it's not logical to think that an RX 580 with 8GB of GDDR5 will consume more watts than a full console with a CPU, HDD, 12GB of GDDR5 and a similar GPU (albeit wider, with lower clocks). Testing with the same workload is paramount here before we draw such a grand conclusion.
> Yes, that's why it's not logical to think that an RX 580 with 8GB of GDDR5 will consume more watts than a full console [...]

Gears 5 is the most stressing case known to date for the XB1. That's still less than this single RX 580 example. Requesting an exhaustive dataset is disingenuous, in my opinion. There is a somewhat extensive technical disclosure of directed efforts on Microsoft's part to limit power consumption through aggressive voltage tailoring. Those are exactly the results we see.
> Gears 5 is the most stressing case known to date for the XB1. That's still less than this single RX 580 example.

The test was done on Gears 4, not 5.
> (across platforms where that equalization of parameters is by design difficult and open to argument)

There is nothing difficult about running Gears 4 at equivalent settings at 1080p60 and 4K30 on a 580 and then measuring power consumption.
> There is a somewhat extensive technical disclosure of directed efforts on Microsoft's part to limit power consumption through aggressive voltage tailoring.

This fantasy that extensive voltage tailoring can reduce the consumption of a complete system (CPU + GPU) below that of a GPU similar to the one in that system needs to stop. Before you make that claim, you need to provide adequate data for it. The fact is, there is NONE yet.
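Worth separating the physics from the claim here: dynamic power scales roughly as P ≈ C·V²·f, so voltage tailoring does buy real, quadratic savings; whether that is enough to pull a whole console below a comparable discrete card is exactly the part that needs data. A toy calculation (numbers are illustrative, not measurements):

```python
def scaled_dynamic_power(p_ref_w: float, v_ratio: float, f_ratio: float = 1.0) -> float:
    """Scale a reference dynamic power by (V/V_ref)^2 * (f/f_ref)."""
    return p_ref_w * v_ratio ** 2 * f_ratio

# A 10% undervolt at the same clock cuts dynamic power by ~19%:
print(scaled_dynamic_power(100.0, 0.90))  # -> 81.0 W from a 100 W reference
```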
And Jason Schreier
Shit, I was not supposed to post this.
"We cannot quantify which console is more powerful because of SSD speed"?
Say what?
> That's not the point; he just said the PS5 SSD is very fast. I posted another comment before, but I wasn't supposed to post it. I deleted it because it came from a major industry source. And the guy said the PS5 SSD is faster than any existing PC SSD.

Is the PS5 SSD's in-game speed faster than the "theoretical speed" of a PC SSD?
> Yes, that's why it's not logical to think that an RX 580 with 8GB of GDDR5 will consume more watts than a full console [...] Testing with the same workload is paramount here before we draw such a grand conclusion.

The way it's done on consoles is to find the section of a game that draws the most power. It's been a reliable method; after testing a dozen games, that peak rarely moves. That's the number being estimated here, measured from the wall.
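One caveat on comparing "from the wall" console numbers against GPU-only board power: PSU efficiency and the rest of the system both sit inside the wall figure. A minimal sketch of backing those out (every number below is an assumption for illustration):

```python
def wall_to_dc(wall_w: float, psu_efficiency: float = 0.85) -> float:
    """Estimate DC power delivered to the board from a wall reading."""
    return wall_w * psu_efficiency

# Hypothetical console peaking at 170 W at the wall, ~85% efficient PSU:
dc_w = wall_to_dc(170.0)               # ~144.5 W for the whole system
gpu_w = dc_w - (25.0 + 10.0 + 5.0)     # minus assumed CPU, RAM, drive/fan budgets
print(f"GPU budget ~ {gpu_w:.1f} W")   # -> 104.5 W under these assumptions
```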
"We cannot quantify which console is more powerful because of SSD speed"?
Say what?
> Again, if you still don't understand how running a game in a console environment can limit power consumption (by virtue of a V-Sync/fps cap and CPU limitations), then that's your problem. The 580 power consumption figures you mentioned come from a fully unlocked game scenario: no V-Sync, no fps cap, no CPU limitation.

A locked framerate isn't an issue here; console games are optimized against a fixed hardware configuration specifically to maximize the performance extracted from it. The console case would be more stressing than a PC with unconstrained variables. And of course, suggesting that you try to de-embed the CPU from a console test is asinine.
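To make the V-Sync point concrete: a cap only reduces load when the GPU could render faster than the cap; a heavily loaded console hits its frame budget anyway. A toy utilization model (purely illustrative):

```python
def capped_utilization(render_ms: float, cap_fps: float) -> float:
    """Fraction of the frame budget the GPU stays busy under an fps cap."""
    budget_ms = 1000.0 / cap_fps
    return min(1.0, render_ms / budget_ms)

# A GPU that finishes a frame in 12 ms:
print(capped_utilization(12.0, 60))   # ~0.72 -> the cap saves power
print(capped_utilization(12.0, 144))  # 1.0   -> the cap changes nothing
```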
> No specifics; just a faster real read speed than any PC SSD.

I agree that access to the SSD could be a bigger game changer than more compute power. When I think about the Hellblade 2 demo, and the level of detail in there, I think having more access to the SSD will enable us to drive very high-fidelity assets, which is a separate discussion from GPU compute. I think there are a lot of tools, features and optimization methods available to maximize your compute resources (especially looking into next generation), but the hard wall in getting the fidelity up in terms of assets will be reliance on that disk access.
> If we're going to get serious about talking about SSDs and NVMe and where developers need to go with this, we need to start here, with understanding the actual properties of NVMe vs. SSD vs. SATA:
> https://panthema.net/2019/0322-nvme-batched-block-access-speed/

NVMe is only one part of the problem. There are also the filesystem and the cache organisation. All of them contribute to the final speed.
> NVMe is only one part of the problem. There are also the filesystem and the cache organisation. All of them contribute to the final speed.

The blog covers that.
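If anyone wants to reproduce the flavor of that post's batched-access measurements at home, here is a minimal sketch: random block reads at a given batch depth, emulated with a thread pool. The file path, block size and depth are assumptions; for honest device numbers you'd bypass the page cache (O_DIRECT or io_uring) rather than read through it like this:

```python
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

PATH = "/mnt/nvme/testfile"  # hypothetical large file on the SSD under test
BLOCK = 64 * 1024            # 64 KiB per request
DEPTH = 32                   # in-flight requests, emulating queue depth
N_READS = 4096

fd = os.open(PATH, os.O_RDONLY)
size = os.fstat(fd).st_size
# Random block-aligned offsets across the file:
offsets = [random.randrange(size // BLOCK) * BLOCK for _ in range(N_READS)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=DEPTH) as pool:
    total = sum(len(buf) for buf in pool.map(lambda off: os.pread(fd, BLOCK, off), offsets))
elapsed = time.perf_counter() - start
os.close(fd)

print(f"{total / elapsed / 1e6:.0f} MB/s at queue depth {DEPTH}")
```

The interesting knob is DEPTH: the blog's point is that NVMe keeps scaling with deeper batches where SATA saturates early.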