Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

I hope it's as detailed as the Microsoft reveals.
 
 
Not sure, but I think the 3700X boosts to 3.8GHz on all of its cores, single core to 4.4GHz, higher or lower depending on temp/workload? That's how my 3900X works at least, ranging between the 3.8GHz base clock and 4.6GHz boost, 12c/24t CPU.



As DF mentions, a game optimized for the current gen runs about the same on XSX as on a 2080 PC. I assume it goes both ways then, as that same game isn't optimized for the newer architectures either. When mesh shaders or other more advanced features become more widely used, both would see improvements.



It's a mobile/laptop cpu :p

The clocks a 2080 actually runs at put it well above your stated TFLOPs, though.
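Quick arithmetic, since FP32 throughput is just 2 FLOPs (one FMA) per shader per clock; the figures below use the 2080's public shader count and reference boost plus an assumed higher in-game clock, so treat them as ballpark numbers rather than measurements:

```python
# Rough FP32 throughput: 2 FLOPs per shader per clock (one FMA).
def fp32_tflops(shaders: int, clock_ghz: float) -> float:
    return 2 * shaders * clock_ghz / 1000.0

print(fp32_tflops(2944, 1.710))  # ~10.1 TF: RTX 2080 at its 1710 MHz reference boost
print(fp32_tflops(2944, 1.900))  # ~11.2 TF: assuming a more typical sustained in-game clock
print(fp32_tflops(3328, 1.825))  # ~12.1 TF: XSX's 52 CUs x 64 lanes at a fixed 1825 MHz
```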
 
From the DF article they mentioned a total of only 76 MB SRAM [ https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs ]:

There are customisations to the CPU core - specifically for security, power and performance, and with 76MB of SRAM across the entire SoC, it's reasonable to assume that the gigantic L3 cache found in desktop Zen 2 chips has been somewhat reduced. The exact same Series X processor is used in the Project Scarlett cloud servers that'll replace the Xbox One S-based xCloud models currently being used. For this purpose, AMD built in ECC error correction for GDDR6 with no performance penalty (there is actually no such thing as ECC-compatible G6, so AMD and Microsoft are rolling their own solution), while virtualisation features are also included.

Why would it be? The Ryzen 3700X has 36MB (4MB L2 + 32MB L3) of cache, and the GPU maybe has 10MB; that leaves plenty for other uses.

That depends on what they're willing to count in the total.
AMD's presentation for Vega 10 stated it had 45 MB of SRAM across the entire chip, which I don't think there's been a full public accounting of.

A 56 CU GPU would have ~14.7 MB just for the register files, ~3.67 MB for LDS, and ~2.29 MB for the L0, instruction, and scalar caches. Maybe 4 MB or more for L2 cache. I'm not certain at this point about the L1, but if this is a 4 shader-array GPU, there's 0.5 MB for L1.
There could be other caches like the parameter cache, which was 1 MB for Scorpio and might be higher for the next gen.
All these known structures push the GPU buffer total toward a similar amount as the CPU cache figure above, so the question is how much of the "other" SRAM that wasn't detailed for Vega is included in the total for Arden.
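To make that tally explicit, here is a back-of-the-envelope sketch. The per-structure sizes (256KB of vector registers per CU, 128KB LDS plus 32KB instruction and 16KB scalar cache per WGP, 16KB L0 per CU, 128KB L1 per shader array, 4MB L2) are assumptions drawn from public RDNA material, not anything confirmed for Arden:

```python
# Back-of-the-envelope on-chip SRAM tally for a hypothetical 56 CU, 4 shader-array RDNA GPU.
# All per-structure sizes are assumptions, not confirmed Arden figures.
KB = 1024

cus = 56
wgps = cus // 2          # RDNA pairs CUs into work-group processors
shader_arrays = 4

buffers = {
    "vector registers":  cus * 256 * KB,   # 4x SIMD32 x 128KB VGPRs per WGP
    "LDS":               wgps * 128 * KB,
    "L0 vector cache":   cus * 16 * KB,
    "instruction cache": wgps * 32 * KB,
    "scalar cache":      wgps * 16 * KB,
    "L1 cache":          shader_arrays * 128 * KB,
    "L2 cache":          4 * 1024 * KB,    # guess; could well be larger
}

for name, size in buffers.items():
    print(f"{name:>18}: {size / 1e6:5.2f} MB")
print(f"{'total':>18}: {sum(buffers.values()) / 1e6:5.2f} MB")  # ~25 MB before ROP/parameter caches etc.
```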




The standards can outline a fair amount of detail about the DIMM and ECC scheme adopted, and the industry would be unusually consistent if it were implementing all that in the absence of a standard. Wouldn't JEDEC have a provision for this for DDR4 DIMMs?

GDDR is more like HBM in how a single device is the endpoint of a channel or set of channels. For HBM, there was provision in the standard for ECC.
GPUs did initially offer GDDR5 ECC by setting aside capacity and additional memory accesses, though this was a less than complete solution for GDDR5 as that standard didn't protect the address and command buses.
GDDR6 is at least somewhat more protected due to the error detection on those buses, though I'm curious what use case Microsoft has for ECC on a console APU. Is there a more high-end workload that would justify such a scheme, concerns about higher penalties from bit corruption due to memory encryption/compression, or maybe a hindrance for Rowhammer attacks?
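For anyone wondering what ECC actually buys here, below is a minimal SECDED (single-error-correct, double-error-detect) sketch over a single byte. This is purely illustrative Hamming-code mechanics; it is not Microsoft's or AMD's actual GDDR6 scheme (which hasn't been described), and real memory ECC works over much wider words, typically 64 data bits plus 8 check bits:

```python
# Minimal SECDED illustration: Hamming(12,8) plus an overall parity bit.
# Not any vendor's real scheme; just shows correct-one/detect-two behaviour.
PARITY_POSITIONS = (1, 2, 4, 8)                       # power-of-two positions
DATA_POSITIONS = [p for p in range(1, 13) if p not in PARITY_POSITIONS]

def encode(byte: int) -> list[int]:
    """Encode 8 data bits into a 13-bit codeword (12 Hamming bits + overall parity)."""
    code = [0] * 13                                   # index 0 unused, 1-based positions
    for pos, i in zip(DATA_POSITIONS, range(8)):
        code[pos] = (byte >> i) & 1
    for p in PARITY_POSITIONS:                        # even parity over covered positions
        code[p] = sum(code[i] for i in range(1, 13) if i & p) & 1
    overall = sum(code[1:]) & 1
    return code[1:] + [overall]

def decode(word: list[int]) -> tuple[int | None, str]:
    """Return (corrected byte, status); None means uncorrectable."""
    code = [0] + word[:12]
    syndrome = 0
    for i in range(1, 13):
        if code[i]:
            syndrome ^= i
    parity_ok = (sum(word) & 1) == 0
    if syndrome == 0 and parity_ok:
        status = "clean"
    elif not parity_ok:                               # odd number of flips: fix the single bad bit
        if syndrome:
            code[syndrome] ^= 1
        status = "corrected single-bit error"
    else:                                             # syndrome set but parity even: two flips
        return None, "double-bit error detected"
    byte = sum(code[pos] << i for i, pos in enumerate(DATA_POSITIONS))
    return byte, status

word = encode(0xA7)
word[5] ^= 1                                          # flip one bit "in flight"
print(decode(word))                                   # (167, 'corrected single-bit error')
```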

What are the chances that RDNA 2 has cache growth?

Does 32MB of L3 let XSX emulate the OG XB1, or do they offer X1X-like compatibility for that?
 

I don’t think the console has ECC memory; it’s the cloud variant of the APU that does. That’s how I read it.
 
I would need to go re-read it, but I read it as being the same SoC, so the console will have ECC also.
I'd be surprised if they mentioned it just for xCloud, as they've not talked about Azure use yet.
 