Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

I think you're asking about the SSD node process size? Cause I thought they already specified 1TB of SSD Capacity, maybe even last year at VGAs.
Ah, I missed it or forgot. Everyone has said it would have 1TB since forever, so it's hard to separate the signal from the noise.
 
The signal-to-noise ratio is about as good as when squirrels ate holes in my cable line, so when a tech spliced the line, water sprayed out in his face.
 
Don't worry, you're not going crazy(ier), I had to look again at their official material to make sure it said the size [1 TB SSD] @ https://www.xbox.com/en-US/consoles/xbox-series-x .

* Though I can't recall exactly when they first listed the SSD size, the page has certainly been updated with new media today.
 
I'm somewhat disappointed with the look of those innards. Seems less elegantly assembled than the external look suggested. What an anvil of a console this is likely to be...
The specs, on the other hand, are really exciting. This gen is gonna kick ass.
 
So XSX is running the Gears 5 port at 4K 60fps Ultra, with additional settings beyond PC's highest preset...

 
The 3700X is 3.6GHz base. Does it do 4.4GHz all-core on boost? Either way, the point about boost is that it's not guaranteed. They're essentially using a 65W desktop CPU. Probably saving a bit of power with less L3, though.
I'm really impressed by the 3.6GHz on the SoC though. AMD officially released the Renoir H series today, and even the top-end 45W 4900H is "only" 3.3GHz base, and that only has 8MB L3.

So, are we thinking Xbox Series X CPU has 16 MB L3 or 8 MB L3?
 
From the DF article they mentioned a total of only 76 MB SRAM [ https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs ]:

There are customisations to the CPU core - specifically for security, power and performance, and with 76MB of SRAM across the entire SoC, it's reasonable to assume that the gigantic L3 cache found in desktop Zen 2 chips has been somewhat reduced. The exact same Series X processor is used in the Project Scarlett cloud servers that'll replace the Xbox One S-based xCloud models currently being used. For this purpose, AMD built in ECC error correction for GDDR6 with no performance penalty (there is actually no such thing as ECC-compatible G6, so AMD and Microsoft are rolling their own solution), while virtualisation features are also included.
 
So 76 MB of SRAM across the entire SoC would be all the L1, L2 and L3 caches together, including what the RDNA 2 GPU has.
 
Error correction is irrelevant to the memory technology or PHY. It's a logic block in the memory controller implementation, plus additional memory width the vendor needs to dedicate to it. Saying GDDR6 doesn't support ECC is very weird; DDR4 doesn't specifically support it either. It's not the job of the memory PHY.
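To make that concrete, here is a minimal sketch of the textbook Hamming SECDED code used on 72-bit ECC memory paths, purely as an illustration of ECC living in controller-side logic plus extra width. It is not Microsoft's actual GDDR6 scheme, which hasn't been detailed.

```python
# Minimal sketch of Hamming SECDED (single-error-correct, double-error-detect)
# over a 64-bit word -- the textbook code behind 72-bit ECC memory paths.
# Purely illustrative: real controllers implement this (or stronger codes)
# in hardware, and nothing here reflects Microsoft's actual GDDR6 scheme.

PARITY_POS = (1, 2, 4, 8, 16, 32, 64)    # power-of-two positions hold check bits

def _xor_all(bits):
    acc = 0
    for b in bits:
        acc ^= b
    return acc

def encode(data: int) -> list[int]:
    """Pack 64 data bits into a 72-bit codeword (index 0 = overall parity)."""
    code = [0] * 72
    pos = 1
    for i in range(64):                   # scatter data bits, skipping check positions
        while pos in PARITY_POS:
            pos += 1
        code[pos] = (data >> i) & 1
        pos += 1
    for p in PARITY_POS:                  # check bit p covers every position with bit p set
        code[p] = _xor_all(code[j] for j in range(1, 72) if j & p)
    code[0] = _xor_all(code[1:])          # overall parity separates 1- from 2-bit errors
    return code

def decode(code: list[int]) -> int:
    """Correct a single flipped bit; detect (and refuse) double flips."""
    syndrome = sum(p for p in PARITY_POS
                   if _xor_all(code[j] for j in range(1, 72) if j & p))
    overall = _xor_all(code)
    if syndrome and overall:              # one bad bit: the syndrome is its position
        code[syndrome] ^= 1
    elif syndrome:                        # syndrome set but overall parity clean
        raise ValueError("uncorrectable double-bit error")
    data, i = 0, 0
    for pos in range(1, 72):
        if pos not in PARITY_POS:
            data |= code[pos] << i
            i += 1
    return data

word = 0xDEADBEEFCAFEBABE
cw = encode(word)
cw[37] ^= 1                               # inject a single-bit fault
assert decode(cw) == word                 # corrected transparently
```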
 
Why would it be? A Ryzen 3700X has 36MB (L2+L3) of cache, the GPU maybe has 10MB, and that leaves plenty for other uses.
That depends on what they're willing to count in the total.
AMD's presentation for Vega 10 stated it had 45 MB of SRAM across the entire chip, which I don't think there's been a full public accounting of.

A 56 CU GPU would have ~14.7 MB just for the register files, ~3.67 MB for LDS, ~2.29 MB for L0, instruction, and scalar caches. Maybe 4 MB or more for L2 cache. I'm not certain at this point about the L1, but if this is a 4 shader-array GPU, there's 0.5 MB for L1.
There could be other caches like the parameter cache, which was 1 MB for Scorpio and might be higher for the next gen.
All these known quantities push the buffer totals to roughly the same amount as the 36 MB figure, so the question is how much of the "other" that wasn't detailed for Vega is in the total for Arden.
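Tallying those estimates against the 76 MB figure (a rough sketch built entirely from this post's own guesses plus the desktop Zen 2 cache total; none of the per-block sizes are confirmed):

```python
# Back-of-the-envelope tally using the estimates above (56 CUs, 4 shader
# arrays); every figure is a guess from the post, not a die-shot accounting.
gpu_mb = {
    "vector register files":      14.7,
    "LDS":                         3.67,
    "L0 / instruction / scalar":   2.29,
    "L2 cache":                    4.0,    # "maybe 4 MB or more"
    "L1 cache":                    0.5,    # 0.125 MB per shader array x 4
    "parameter cache":             1.0,    # 1 MB on Scorpio, possibly larger here
}
gpu_total = sum(gpu_mb.values())                     # ~26.2 MB
cpu_mb = 36.0                                        # desktop Zen 2 L2+L3 (4 + 32 MB)
print(f"GPU buffers:                 ~{gpu_total:.1f} MB")
print(f"GPU + full desktop CPU L2/L3: ~{gpu_total + cpu_mb:.1f} MB")
print(f"Left of 76 MB SoC total:      ~{76 - gpu_total - cpu_mb:.1f} MB for 'other'")
```

Even with an untrimmed desktop-sized L3, that leaves roughly 14 MB unaccounted for, which is why the question of the undetailed "other" SRAM matters.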



Error correction is irrelevant to the memory technology or PHY. It's a logic block in the memory controller implementation, plus additional memory width the vendor needs to dedicate to it. Saying GDDR6 doesn't support ECC is very weird; DDR4 doesn't specifically support it either. It's not the job of the memory PHY.
The standards can outline a fair amount of detail surrounding the DIMM and the ECC scheme adopted; the industry would be unusually consistent in implementing it if there were no standard. Wouldn't JEDEC have a provision for this for DDR4 DIMMs?

GDDR is more like HBM in how a single device is the endpoint of a channel or set of channels. For HBM, there was provision in the standard for ECC.
GPUs did initially offer GDDR5 ECC by setting aside capacity and additional memory accesses, though this was a less than complete solution for GDDR5 as that standard didn't protect the address and command buses.
GDDR6 is at least somewhat more protected due to the error detection on those buses, though I'm curious what use case Microsoft has for ECC on a console APU. Is there a more high-end workload that would justify such a scheme, concerns about higher penalties from bit corruption due to memory encryption/compression, or maybe a hindrance for rowhammer attacks?
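To put a number on the "setting aside capacity" approach: a quick sketch assuming one ECC byte stored per eight data bytes, mirroring the classic 64/72-bit DIMM ratio. Actual GDDR5-era GPU implementations used their own layouts and overheads, so treat the ratio as illustrative.

```python
# Rough cost of GDDR5-style "inline" ECC, where codes are carved out of
# ordinary data capacity. The 8-data:1-ECC ratio is an assumption for
# illustration (mirroring 64/72-bit DIMMs); real GPU implementations varied.
capacity_gb  = 16.0                    # e.g. a 16 GB GDDR6 pool
ecc_fraction = 1 / 9                   # 1 ECC byte per 8 data bytes, stored inline
usable_gb    = capacity_gb * (1 - ecc_fraction)
print(f"usable capacity: {usable_gb:.2f} of {capacity_gb} GB "
      f"({ecc_fraction:.1%} set aside)")   # ~14.22 GB, ~11.1% overhead
# Bandwidth pays a similar tax, since the ECC bytes ride over the same bus --
# one reason the claim of ECC with "no performance penalty" stands out.
```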
 
Glad to see a reasonable die size and RDNA 2's big improvement just in time for next gen; it means the doom-and-gloom about price was an overreaction, since the SoC is practically the same size as in previous launch consoles. I'm not really worried about the RAM cost, because the weird two-speed partitioning is a good indication they're still trying to design to a price point. And the power should be maybe a little above 200W, without too expensive cooling.

That made me think of something from the DF article on the XBSX.

"12 TFLOPs was our goal from the very beginning. We wanted a minimum doubling of performance over Xbox One X to support our 4K60 and 120 targets. And we wanted that doubling to apply uniformly to all games," explains Andrew Goossen. "To achieve this, we set a target of 2x the raw TFLOPs of performance knowing that architectural improvements would make the typical effective performance much higher than 2x. We set our goal as a doubling of raw TFLOPs of performance before architectural improvements were even considered - for a few reasons. Principally, it defined an audacious target for power consumption and so defined our whole system architecture.

Considering that they started development of the XBSX back in 2016, it must have been a very real possibility that the console would be a high-wattage machine. Back then, they couldn't have known that AMD would have as much success increasing the perf/watt efficiency of their GPU designs as they seem to have had with RDNA 2.

So, they were certainly prepared (bolded part) to have to market a very high-wattage machine, but it may turn out that they don't have to.

Regards,
SB
 
Bodes well for their mid-gen refreshes.
 
I'm somewhat disappointed with the look of those innards. Seems less elegantly assembled than the external look suggested. What an anvil of a console this is likely to be...
I guess it is well short of the fantasy vortex-cooled super arrangement some of us might have first imagined. Curious assembly though, that's easy to take apart and put together. Is that to reduce servicing costs? Or enable home-brew hardware modding for alternate cases, water-cooled overclocking, and hacked-in RAM upgrades?
 
Man, I wish there was more information about this new "Velocity Architecture", and each of the components therein. I would LOVE to see a proper tech demonstration of how it works and the benefits that result from it. I guess that will come soon enough, but still.. it's all so very interesting.
 
I think I saw in a thread that one of the most potentially interesting talks has been pulled from their GDC replacement.
The one about RT, cloud, next gen, etc.
 