Next Generation Hardware Speculation with a Technical Spin [post E3 2019, pre GDC 2020] [XBSX, PS5]

The only reason we take 56 CUs as "gospel" is that the other chip's CU number is specifically the active count.

But on a different note, how do you see a 56 CU chip on a 320-bit bus? And on top of that, it has to fit the height and length of the Scarlett chip we saw.


No, Era mods verified that he COULD have info (around GDC), that is, he could be someone with access to the info. As with Klee, his info is not verified, and honestly I suspect he is full of shit, especially after he and Klee pretty much confirmed 64 CUs.

Hmqgg never verified 64 CUs. Also, he is not full of shit; he is a game dev with sources at AMD.
 
I think the latter makes sense. Defective chips with defective Zen 2 cores / CUs get used in Azure, where there is a much greater range of performance SKUs targeted, compared to the 2 SKUs in the retail consoles.
That's going to be very expensive. Only perfect chips will be usable. That's, AFAIK, unheard of, let alone on such a massive slab of silicon.
 
That's going to be very expensive. Only perfect chips will be usable. That's, AFAIK, unheard of, let alone on such a massive slab of silicon.

I'm getting 71% yield on a die-per-wafer calculator. Obviously only a portion of those can be stable at the required clocks.

I'm guessing it's a matter of how many of the defective dies are acceptable for use in Azure / xCloud, and the volume needed for Azure / xCloud.

e.g. 20 million XSX SoCs fabbed over the console's lifetime, with 50% defective: 10 million perfect chips go into consoles, and ~5-6 million of the defective chips are used in xCloud / Azure (rough sketch below).
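Just to make that back-of-envelope explicit (every number here is my own placeholder assumption, not anything sourced):

```python
# Back-of-envelope for how fabbed dies could be binned (all inputs are assumptions).
lifetime_dies_fabbed = 20_000_000   # hypothetical lifetime SoC production
perfect_rate         = 0.50         # assumed share of fully working dies
salvage_rate         = 0.55         # assumed share of defective dies still usable for Azure / xCloud SKUs

perfect_dies   = lifetime_dies_fabbed * perfect_rate            # -> retail consoles
defective_dies = lifetime_dies_fabbed * (1 - perfect_rate)
salvaged_dies  = defective_dies * salvage_rate                  # -> Azure / xCloud
scrapped_dies  = defective_dies - salvaged_dies

print(f"consoles: {perfect_dies:,.0f}, Azure/xCloud: {salvaged_dies:,.0f}, scrap: {scrapped_dies:,.0f}")
# consoles: 10,000,000, Azure/xCloud: 5,500,000, scrap: 4,500,000
```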
 
I'm getting 71% yield on a die-per-wafer calculator. Obviously only a portion of those can be stable at the required clocks.
Using what calculator and parameters? Maybe I'm out of touch, but that seems optimistic to me, e.g.:

~70% on Zen 2 at 7nm, which is <90 mm². At 4x the die size, the XBSX should see far worse yields for perfect chips.
 
Using what calculator and parameters? Maybe I'm out of touch, but that seems optimistic to me, e.g.:

~70% on Zen 2 at 7nm, which is <90 mm². At 4x the die size, the XBSX should see far worse yields for perfect chips.
Seems overly pessimistic. How bad would that make any die over 200mm² (even Navi 10)?

Here is the defect density from TSMC:

[Image: TSMC defect density chart]


It's pretty straightforward to get a ballpark figure.
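E.g., a minimal sketch using the first-order Poisson model Y = exp(-A * D0), with A the die area and D0 the defect density read off a chart like the one above. The die areas and the D0 value below are my assumptions, and a real die/wafer calculator would also add edge losses and typically use a negative-binomial model, so treat this only as a ballpark:

```python
import math

def poisson_yield(die_area_mm2, d0_per_cm2):
    """First-order Poisson yield: fraction of dies with zero defects, Y = exp(-A * D0)."""
    return math.exp(-(die_area_mm2 / 100.0) * d0_per_cm2)

D0 = 0.10  # assumed defects/cm^2 for N7; read the real value off the TSMC chart above

for name, area_mm2 in [("Zen 2 chiplet (~75 mm^2, assumed)", 75),
                       ("Navi 10 (~250 mm^2, assumed)", 250),
                       ("large console SoC (~360 mm^2, assumed)", 360)]:
    print(f"{name}: {poisson_yield(area_mm2, D0):.0%} defect-free")

# Roughly 93%, 78% and 70% respectively with these inputs -- and that's before
# binning out the dies that can't also hit the required clocks.
```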
 
Is it like the haptics in the Steam Controller? Always liked DualShock controllers more anyway; might get a DS5 for the PS5 as well.
 
Doesn't World of Tanks use the CPU to construct the BVHs (Bounding Volume Hierarchies)?

Though I'd assume the latency of moving data between CPU<>GPU would be incredibly detrimental to real-time RT? (Perhaps an advantage of an APU?)

Disclaimer: I have an incredibly rudimentary understanding of the hardware underpinnings of how RT is done, so may be way off the mark here...:confused:
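On the transfer-latency point, a crude back-of-envelope (the BVH size and bus bandwidth are assumptions purely to get an order of magnitude):

```python
# How long a wholesale CPU -> GPU BVH upload would eat out of a frame (all inputs assumed).
bvh_size_mb     = 100     # hypothetical BVH size for a large scene
bus_gb_per_s    = 16      # roughly PCIe 3.0 x16 peak bandwidth
frame_budget_ms = 16.7    # 60 fps

transfer_ms = bvh_size_mb / 1024 / bus_gb_per_s * 1000
print(f"full BVH upload: {transfer_ms:.1f} ms of a {frame_budget_ms} ms frame")  # ~6.1 ms

# Tolerable if the BVH is mostly static and only small refits cross the bus each frame;
# crippling if it had to be rebuilt and re-sent wholesale. On an APU with shared memory
# the copy largely disappears, which would be the advantage alluded to above.
```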
 
Sounds like they use the CPU to select a good distribution of probe positions, likely from a given large set, based on player position, time of day, open / closed doors... just guessing.
And then use GPU RT to generate cubemaps or SH... for those probes.
Likely very efficient, because probes can be reused and don't need an update every frame. I guess it's faster than Morgan McGuire's dense probe grid but less accurate.
 
Sounds like they use the CPU to select a good distribution of probe positions, likely from a given large set, based on player position, time of day, open / closed doors... just guessing.
And then use GPU RT to generate cubemaps or SH... for those probes.
Likely very efficient, because probes can be reused and don't need an update every frame. I guess it's faster than Morgan McGuire's dense probe grid but less accurate.

Is it possible for probe selection to use a significant portion of the CPU frame time, e.g. 8 ms?
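If the scheme is roughly as described above, the CPU cost would depend entirely on the selection heuristic. A toy sketch of the kind of loop I imagine (the data layout and criteria are pure guesses on my part, not anything from the developers):

```python
# Hypothetical CPU-side probe scheduling: pick which pre-authored probes get a GPU RT refresh.
from dataclasses import dataclass, field

@dataclass
class Probe:
    position: tuple                                          # world-space position, authored offline
    last_update_frame: int = 0
    affected_by_doors: list = field(default_factory=list)    # doors whose open/closed state invalidates this probe

def select_probes(probes, player_pos, changed_doors, frame, budget=8):
    """Return at most `budget` probes to refresh this frame: invalidated first, then stalest, then nearest."""
    def priority(p):
        dist2 = sum((a - b) ** 2 for a, b in zip(p.position, player_pos))
        staleness = frame - p.last_update_frame
        invalidated = any(d in changed_doors for d in p.affected_by_doors)
        return (not invalidated, -staleness, dist2)
    return sorted(probes, key=priority)[:budget]

# The GPU then ray traces cubemaps / SH only for the handful of selected probes each frame,
# so the per-frame cost stays bounded and most probes are simply reused.
```

With a fixed per-frame budget like that, sorting a few thousand probes shouldn't come anywhere near 8 ms; it would only blow up if the selection itself did heavy visibility or occlusion work on the CPU.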
 