Next Generation Hardware Speculation with a Technical Spin [post E3 2019, pre GDC 2020] [XBSX, PS5]

If that's true then cutting CPU clock rates seems silly while at the same time running RDNA 7nm @2GHz (?).

Makes more sense to run the CPU @~3GHz and the GPU @~1.6-1.7GHz. That's where the sweet spot is.

A 2.2GHz CPU seems more believable than a 2GHz GPU, but ok, agreed. In general, if specs are low they are suspect; if they are high they're generally accepted as true. If I were a ''leaker'' I wouldn't say 2GHz CPU and 7TF GPU, at least.
 
A really cheap, early PS5 combined with ending PS4/Pro production (and a really nice, exclusive bundle game) may give the competition a really hard time...
 
I don’t believe we have. Just talking through the data points available. What specs seem glaringly suspect that we should investigate?
In the case of Microsoft, I don't think 8 Zen 2 cores at 2.2 GHz are 4x faster than the Jaguar cores in the Xbox One X.
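A rough sanity check on that, assuming roughly 2x per-clock IPC for Zen 2 over Jaguar (an assumption for illustration, not a measured figure) and ignoring SMT, which would add some on top but not close the gap to 4x:

Code:
# Back-of-the-envelope: 8 Jaguar cores @ 2.3 GHz (Xbox One X) vs
# 8 Zen 2 cores @ 2.2 GHz. The 2x per-clock IPC factor is an assumed
# illustrative value, and SMT is ignored.
jaguar_clock_ghz = 2.3   # Xbox One X CPU clock
zen2_clock_ghz = 2.2     # rumoured clock under discussion
ipc_ratio = 2.0          # assumed Zen 2 vs Jaguar per-clock advantage

speedup = (zen2_clock_ghz * ipc_ratio) / jaguar_clock_ghz
print(f"Estimated speedup: {speedup:.2f}x")  # ~1.9x, well short of 4x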
 
Compared to NV GPUs in some areas maybe, but compared to what was available the PS4 was kinda low end. The 7970 GHz Edition had been out for a year by the PS4 launch. NV GPUs weren't that bad either, mostly outperforming AMD in most games. This time around we're getting a real CPU, and a more advanced GPU for the time (I think).
From my perspective, working just on compute, it looks very different:
7950 five times faster than GTX670 (although the latter was a bit faster in games)
280X two times faster than Kepler Titan.
GTX1070 exactly in the middle between 7950 and FuryX (interesting: here, real-world performance of 1 NV TF matches 1 GCN TF for me)
(no data for generations in between and more recent stuff)

But NV always won all the game benchmarks, so I expect AMD to move from strong compute towards strong rasterization (and now RT?) as well. That's the major reason I'm personally always nervous with new architectures and less excited about HW RT than others ;)
I see the TF value mainly as a measurement of general purpose compute power. It works much better for this than for rasterization-related tasks.
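For reference, the theoretical FP32 numbers behind those cards work out roughly as below (2 FLOPs per ALU per clock; clocks are approximate reference/boost values). Comparing them with the compute results above shows how far per-TF efficiency differed between Kepler-era NV and GCN:

Code:
# Theoretical FP32 throughput: TFLOPS = 2 * ALUs * clock_GHz / 1000
# Clocks are approximate reference/boost values.
gpus = {
    "HD 7950":   (1792, 0.80),
    "GTX 670":   (1344, 0.98),
    "R9 280X":   (2048, 1.00),
    "GTX Titan": (2688, 0.88),
    "GTX 1070":  (1920, 1.68),
    "Fury X":    (4096, 1.05),
}
for name, (alus, clock) in gpus.items():
    tflops = 2 * alus * clock / 1000
    print(f"{name:10s} {tflops:4.1f} TF")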
 
Is there any possibility that the CPU & GPU run at the same frequency? Could that allow special RT tasks that need both operating in conjunction?!
 
Please answer: why does the One X still have Jaguars?

Simple: return on investment. Albert Penello went into that in one of his tweets, analyzing what's required for ROI when you're doing even minor design tweaks. I've pulled that into a few different threads before to show that even if something seems obvious to do, it may not make financial sense.
 
If that's true then cutting CPU clock rates seems silly while at the same time running RDNA 7nm @2GHz (?).

Makes more sense to run the CPU @~3GHz and the GPU @~1.6-1.7GHz. That's where the sweet spot is.

We have no idea what the actual sweet spot is, however. Standalone CPUs and GPUs have their own sweet spots, but when you bring them onto one die they may behave differently, especially due to power density issues. You're bringing two very hot and power-hungry things very close together, so it wouldn't be surprising if the clocks on the SoC were much lower than on standalone components.
 
If that's true then cutting CPU clock rates seems silly while at the same time running RDNA 7nm @2GHz (?).

Makes more sense to run the CPU @~3GHz and the GPU @~1.6-1.7GHz. That's where the sweet spot is.
GPUs have redundant CUs that can be shut off to improve yields. I don't believe this is the case with CPUs.

There are also other methods to save CPU cycles, at least with respect to draw call submission and AVX.
 
Why are people buying into what fehu posted?

Parts of the spec are glaringly suspect.
It's talking through speculation. I was actually in the process of moving this to the 'baseless rumour' thread, but it's taking too long to separate out the past pages of discussion!
 
I'm wondering that myself.
The 2.2GHz CPU clocks are especially glaring. Zen 2 at 3.2GHz is massively efficient. Why would Sony want to lose almost a third of potential CPU performance just to save what, 8 watts?


[attached chart ATSM2E1.png: power draw vs. clock frequency scaling]

I want to stress that the improvement here occurs on a slope of less than 1. Dynamic power is P = C·V²·f, but here power only goes up with about 0.5·f.
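A minimal sketch of that relationship, with the voltage values below as illustrative assumptions rather than known silicon figures: if voltage were held constant, power would track frequency with a slope of 1; the ~0.5·f slope implies the voltage can be dropped as the clock comes down.

Code:
# Dynamic power: P = C * V^2 * f. C cancels out when comparing two
# operating points. Voltages below are illustrative assumptions only.
def rel_power(f_ghz, v, f_ref=3.2, v_ref=1.0):
    """Power relative to the f_ref / v_ref operating point."""
    return (v / v_ref) ** 2 * (f_ghz / f_ref)

print(f"{rel_power(3.2, 1.00):.2f}")  # 1.00 -> baseline
print(f"{rel_power(2.2, 1.00):.2f}")  # 0.69 -> constant voltage: power tracks f
print(f"{rel_power(2.2, 0.85):.2f}")  # 0.50 -> with a lower voltage, ~0.5x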

Because it is NOT needed...
This PS5 needs to not leave the PS4 behind... so a 3.2 GHz CPU is way too much... better to save 8 watts (only 8 watts????)

Also because of bandwidth... Too strong a CPU cuts into GPU bandwidth... the system needs balance. Please answer: why does the One X still have Jaguars?

This is without technical merit IMO. Zen 2 needs less than 50GB/s, which is perhaps a tenth of the total available bandwidth. The more important thing to manage is that any bandwidth the CPU consumes reduces the bandwidth available to the GPU, and due to memory system inefficiencies it costs the GPU more than the CPU's own share, no matter the CPU's clock rate.
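Rough numbers behind that, with the total bandwidth figure being a placeholder assumption (e.g. a 256-bit GDDR6 bus at 14 Gbps), not anything confirmed:

Code:
# CPU bandwidth share, rough sketch. The 448 GB/s total is a placeholder
# assumption (256-bit GDDR6 @ 14 Gbps); nothing real is confirmed.
cpu_bw_gbs   = 50    # generous ceiling for what 8 Zen 2 cores would pull
total_bw_gbs = 448   # assumed unified memory bandwidth

print(f"CPU share of total bandwidth: {cpu_bw_gbs / total_bw_gbs:.0%}")  # ~11%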

Reasons could be to improve CPU yields, or they are power and heat constrained, so they will design their cooling and power circuitry around those limits.

These frequencies are on the bottom end of the normal distribution. There’s nothing to be gained there.

We’re talking a 5-10% increase in total APU power for 45% more CPU performance. 45%.
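The arithmetic behind that claim, with the APU budget and the per-clock CPU wattages as loose assumptions purely for illustration:

Code:
# Does ~45% more CPU throughput cost only ~5-10% of total APU power?
# Every wattage below is an assumption for illustration only.
apu_power_w  = 180   # assumed total APU power budget
cpu_w_at_2_2 = 25    # assumed 8x Zen 2 @ 2.2 GHz
cpu_w_at_3_2 = 40    # assumed 8x Zen 2 @ 3.2 GHz (slope < 1, per the chart)

extra = cpu_w_at_3_2 - cpu_w_at_2_2
print(f"Extra APU power: {extra / apu_power_w:.0%}")  # ~8%
print(f"Extra CPU clock: {3.2 / 2.2 - 1:.0%}")        # ~45%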
 
Remember RT seems to be bandwidth bottlenecked in Nvidia's solution. Maybe it will throw a wrench into the TF/BW ratios we are used to.

I don't think raytracing is memory bandwidth limited, I think it is memory transaction limited. The memory subsystem is the single most expensive component of a console (and consoles are all about costs). To think Sony would sink 50% more cost into it without also adding ordinary GPU resources that can take advantage of it when rasterizing is ridiculous.

If we see raytracing in consoles, it will be tacked on to existing hardware.

Cheers
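A crude way to picture the bandwidth-vs-transaction distinction made above; every number here is an illustrative assumption chosen only to show the shape of the problem, not a measurement of any real GPU:

Code:
# Raytracing: bandwidth-limited or transaction-limited? All values are
# illustrative assumptions.
rays_per_s     = 5e9    # assumed ray throughput
nodes_per_ray  = 20     # assumed BVH nodes/triangles touched per ray
cache_hit_rate = 0.95   # assumed on-chip hit rate for those touches
access_bytes   = 64     # one small DRAM burst per miss

dram_accesses = rays_per_s * nodes_per_ray * (1 - cache_hit_rate)  # ~5e9 /s
byte_traffic  = dram_accesses * access_bytes                       # ~320 GB/s

peak_bw      = 448e9                   # assumed peak DRAM bandwidth, bytes/s
max_accesses = peak_bw / access_bytes  # ~7e9 bursts/s at perfect efficiency

print(f"Byte traffic: {byte_traffic / 1e9:.0f} GB/s of {peak_bw / 1e9:.0f} GB/s peak")
print(f"Accesses: {dram_accesses:.1e}/s vs ~{max_accesses:.1e}/s ceiling")
# The byte count looks serviceable, but the stream of small scattered reads
# sits near the DRAM's transaction ceiling, which is the point made above.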
 