Next Generation Hardware Speculation with a Technical Spin [post E3 2019, pre GDC 2020] [XBSX, PS5]

Zen APUs already exist. The power efficiency curves don't really change that much when the GPU and CPU are stuck together.

Yes, but with half the number of CPU cores and a very low-performing GPU (at least for what a next-gen console needs).

The power scaling will be more than linear as you increase the number of units in each and add something like a very wide bus.
 
Yes, but with half the number of CPU cores and a very low-performing GPU (at least for what a next-gen console needs).

The power scaling will be more than linear as you increase the number of units in each and add something like a very wide bus.
We have an example. The Xbox One X pushed the frequency on both the CPU and GPU components and drew less at the wall than an equivalent 6TF GPU card.

We also have examples of Intel parts lacking iGPUs not clocking as high, presumably because a smaller silicon die has less area to spread heat over. So unless you're in a corner case where both are at max load, there'd be a net benefit to a larger die for thermal properties.
 
PS4 launched with the equivalent of a mid-range GPU (7850/70). PS5 will launch with the equivalent of a mid-range GPU (RX 5700 is my bet). This is dictated by economics.

A 3700X, with a base clock of 3.6GHz, has a TDP of 65W. Assuming power scales quadratically with frequency, that yields ~45W @ 3GHz. In reality the power consumption will be lower because the CPU is tacked onto the shared memory system.

~40W for the CPU and 170W for the GPU + 15W for everything else is 225W, which fits into what can be packaged densely and still cooled without too much noise and cost.
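
As a rough back-of-the-envelope check of those numbers (a sketch only; all figures are the assumptions above, including the quadratic scaling assumption that gets corrected further down, not measurements):

# Sanity check of the proposed power budget; every figure is the post's own assumption.
TDP_3700X = 65.0                                # W at the 3.6GHz base clock
cpu_scaled = TDP_3700X * (3.0 / 3.6) ** 2       # quadratic-with-frequency assumption -> ~45W
cpu_budget = 40.0                               # assume further savings from the shared memory system
gpu_budget = 170.0
everything_else = 15.0
print(round(cpu_scaled), cpu_budget + gpu_budget + everything_else)   # ~45, 225.0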

Every gaming PC going forward will have 6 cores or more. To cater to developers, the console vendors had better align themselves with this development.

Cheers
 
PS4 launched with the equivalent of a mid-range GPU (7850/70). PS5 will launch with the equivalent of a mid-range GPU (RX 5700 is my bet). This is dictated by economics.

Xbox One X launched with a 580 equivalent, so clearly a willingness to launch at $500 invalidates that as the sole explanation. PS4 also launched with clamshell memory, necessitating 16 chips. That won't be necessary this time.

A 3700X, with a base clock of 3.6GHz, has a TDP of 65W. Assuming power scales quadratically with frequency, that yields ~45W @ 3GHz. In reality the power consumption will be lower because the CPU is tacked onto the shared memory system.

Power does not scale quadratically with frequency. It does so with voltage. Voltage often needs to be increased to increase frequency, but at iso voltage it's linear in frequency. The graph TT posted shows a slope of less than 1.
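
For reference, the first-order CMOS dynamic power model behind that correction looks like this (a sketch; the capacitance and voltage values are made up purely for illustration):

# P_dyn ≈ C_eff * V^2 * f: linear in frequency at a fixed voltage,
# but much steeper once the voltage has to rise along with the clock.
def dynamic_power(c_eff, volts, freq_ghz):
    return c_eff * volts ** 2 * freq_ghz

C = 10.0                              # effective switched capacitance, arbitrary units
print(dynamic_power(C, 1.2, 3.6))     # baseline
print(dynamic_power(C, 1.2, 3.0))     # lower clock, same voltage: drops linearly
print(dynamic_power(C, 1.0, 3.0))     # lower clock and lower voltage: drops much faster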
 
I don't think raytracing is memory-bandwidth limited; I think it is memory-transaction limited. The memory subsystem is the single most expensive component of a console (and consoles are all about costs).

If we see raytracing in consoles, it will be tacked on to existing hardware.
We have a bit of evidence to the contrary, with Sony patenting RT tech and employing a PVR veteran. As such, it's possible they have a different solution. Hypothetical scenario - PS5 has a specific large RT cache for low latency random access, and a system that keeps this refreshing very quickly from main RAM, so the transaction rate of the GDDR isn't a bottleneck as it services large batches of copies consuming massive BW.
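
Purely to illustrate that hypothetical (a sketch; nothing below reflects real PS5 hardware, and the cache size, node size and burst size are invented numbers), the idea is to turn lots of small random node reads into a few large streaming copies:

# Toy model of the hypothetical on-chip RT cache: random BVH-node fetches are
# served from fast local SRAM, which is refilled from GDDR in big sequential
# bursts, so main memory pays in bandwidth (few large copies) rather than in
# transaction rate (millions of tiny reads).
NODE_SIZE = 64                      # bytes per BVH node (illustrative)
RT_CACHE_BYTES = 8 * 1024 * 1024    # invented 8MB on-chip RT cache
BURST_BYTES = 256 * 1024            # invented refill granularity from main RAM

print(RT_CACHE_BYTES // NODE_SIZE, "nodes resident per refill")
print(RT_CACHE_BYTES // BURST_BYTES, "large GDDR transactions per refill")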
 
PS4 launched with the equivalent of a mid-range GPU (7850/70). PS5 will launch with the equivalent of a mid-range GPU (RX 5700 is my bet). This is dictated by economics.

The RX 5700 XT averages around 1850 MHz, which is 9.5 TFLOPs. A far cry from the 7 TFLOPs being proposed here.
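
For reference, that figure is just the usual FP32 arithmetic (a quick sketch; 2560 is the 5700 XT's shader count):

# FP32 TFLOPs = shaders * 2 ops per clock (FMA) * clock in MHz / 1e6
def tflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1e6

print(tflops(2560, 1850))   # RX 5700 XT at its average game clock: ~9.5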
 
From my perspective, working just on compute, it looks very different:
7950 five times faster than GTX670 (although the latter was a bit faster in games)
280X two times faster than Kepler Titan.
GTX1070 exactly in the middle between 7950 and Fury X (interesting: here, the real-world performance of 1 NV TF matches 1 GCN TF for me).
(no data for generations in between and more recent stuff)

But NV always won all the game benchmarks, so I expect AMD to move from strong compute towards strong rasterization (and now RT?) as well. That's a major reason I'm personally always nervous with new architectures and less excited about HW RT than others ;)
I see the TF value mainly as a measurement of general-purpose compute power. It works much better for this than for rasterization-related tasks.

Ok, true. The only perspective gamers had was gaming performance though, and the 2013 consoles were mid-range by then, even compared to NV stuff.

Xbox One X launched with a 580 equivalent.

Yes, which was a generation behind; RX Vega was out before the One X even, and it's a much faster GPU overall (Vega 64).

The RX 5700 XT averages around 1850 MHz, which is 9.5 TFLOPs. A far cry from the 7 TFLOPs being proposed here.

Maybe he meant the 5700 non-XT?
 
~40W for the CPU and 170W for the GPU + 15W for everything else is 225W, which fits into what can be packaged densely and still cooled without too much noise and cost.
Cheers

I think 170W for the GPU is on the very high side. I think you need to knock at least 50W off the total power consumption and likely increase the budget for the "everything else" category.

I can't see the SoC taking more than 100-120W: ~25-30W for the CPU and 75-90W for the GPU. I really hope that I'm wrong, but I think the whole thing will be under 175W.
 
We have a bit of evidence to the contrary, with Sony patenting RT tech and employing a PVR veteran. As such, it's possible they have a different solution. Hypothetical scenario - PS5 has a specific large RT cache for low latency random access, and a system that keeps this refreshing very quickly from main RAM, so the transaction rate of the GDDR isn't a bottleneck as it services large batches of copies consuming massive BW.

And now we come full circle back to the old Arcturus leak :)
 
Another thing about Fehu's leak: assuming it's real, it could very likely be about Scarlett instead. Soldering down the SSD is something MS would do.

The Oberon/Flute rumors indicated a ~300mm² SoC with a 2GHz GPU. If that's true, it does not have a 384-bit bus (the SoC is too small).
 
Another thing about Fehu's leak: assuming it's real, it could very likely be about Scarlett instead. Soldering down the SSD is something MS would do.

For efficiency/speed purposes though, I'm not sure Sony wouldn't be forced to do the same :?:
 
Another thing about Fehu's leak: assuming it's real, it could very likely be about Scarlett instead. Soldering down the SSD is something MS would do.
Microsoft has historically been less lenient about allowing people to swap the mass storage, yes.
 
Another thing about Fehu's leak: assuming it's real, it could very likely be about Scarlett instead. Soldering down the SSD is something MS would do.

The Oberon/Flute rumors indicated a ~300mm² SoC with a 2GHz GPU. If that's true, it does not have a 384-bit bus (the SoC is too small).
I don't think this is the case. As we see with the Surface Pro and Pro X, the demo boards for Scarlett look to have an external port very similar to the Surface Pro X's.
 
Another thing about Fehu's leak: assuming it's real, it could very likely be about Scarlett instead. Soldering down the SSD is something MS would do.

Given how vocal Sony has been about their storage being faster than anything on PC, it makes me think they might be trying to curb exactly that expectation on the PS5. If they won't let you upgrade, they need to make it seem like a really valid reason or they will take flak from their faithful.
 
The RX 5700 XT averages around 1850 MHz, which is 9.5 TFLOPs. A far cry from the 7 TFLOPs being proposed here.
But the non-XT has 7.9 TF. Lower the frequency a bit, as usual, and... it's not as if this would make no sense?
Scaling the bandwidth to a 384-bit bus would give ~672 GB/s, not 800. So maybe one more CU disabled, but a higher frequency.
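
A quick check of both numbers (a sketch based on the 5700's public specs; the 384-bit figure simply scales the 256-bit card's bandwidth linearly with bus width):

# RX 5700 (non-XT): 2304 shaders at roughly a 1725 MHz boost clock
print(2304 * 2 * 1725 / 1e6)    # ~7.9 TFLOPs

# 14Gbps GDDR6 on a 256-bit bus is 448 GB/s; widening the bus to 384-bit:
print(448 * 384 / 256)          # ~672 GB/s, well short of 800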
 
I think it's an educated guess that the PS5 will use at the very least 2304 shader cores (40 CUs for the dev kit, cut down for the consumer product, as usual), to make BC with PS4 (Pro) games more straightforward. Clock rate is the only question mark. I'd be shocked though if it's not at least 1.6GHz.
 
I think it's an educated guess that the PS5 will use at the very least 2304 shader cores (40 CUs for the dev kit, cut down for the consumer product, as usual), to make BC with PS4 (Pro) games more straightforward. Clock rate is the only question mark. I'd be shocked though if it's not at least 1.6GHz.
The just-announced mobile parts have a clock of 1.45GHz, so I'd call that the min.
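
Putting those guesses together (a sketch; 2304 shaders is 36 active CUs, and the clocks are the 1.45GHz mobile minimum and the 1.6GHz floor from the posts above, plus a higher value for comparison):

# Speculative FP32 throughput for 36 active CUs (36 * 64 = 2304 shaders)
shaders = 36 * 64
for clock_mhz in (1450, 1600, 1800):
    print(clock_mhz, "MHz ->", round(shaders * 2 * clock_mhz / 1e6, 2), "TFLOPs")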
 
Regarding storage, an external hard drive connected by USB to a smaller internal SSD that can hold 3 or 4 games makes sense to me. The internal storage can swap games in from the external drive, but realistically only 3 or 4 games can be accessed in real time. PS4 games can be played from external storage, but PS5 games in the library have to be loaded onto the internal SSD, which can happen while playing some other game already on the SSD.
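
A minimal sketch of how such a 3-4 game rotation could work (purely hypothetical; the class and behaviour below are invented and do not reflect Sony's actual design):

# Hypothetical installed-game cache: PS5 titles must run from the internal SSD,
# so the least recently played title is pushed back to the external USB drive
# when a new one needs to be pulled in.
from collections import OrderedDict

class SsdGameCache:
    def __init__(self, max_games=4):
        self.max_games = max_games
        self.resident = OrderedDict()                 # title -> size, ordered by last play

    def play(self, title, copy_from_external):
        if title not in self.resident:
            if len(self.resident) >= self.max_games:
                evicted, _ = self.resident.popitem(last=False)    # oldest title goes back out
                print("moving", evicted, "back to the external drive")
            self.resident[title] = copy_from_external(title)      # background copy onto the SSD
        self.resident.move_to_end(title)
        print("launching", title, "from the internal SSD")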
 