I think you're asking about the SSD node process size? 'Cause I thought they already specified 1TB of SSD capacity, maybe even last year at the VGAs.

Ah, I missed it or I forgot. Everyone has said it would have 1TB since forever, so it's hard to separate the signal from the noise.
The 3700X is 3.6GHz base. Does it do 4.4GHz all-core on boost? Either way, the point about boost is that it's not guaranteed. They're essentially using a 65W desktop CPU. Probably saving a bit of power with less L3, though.
I'm really impressed by the 3.6GHz on the SoC, though. AMD officially released the Renoir H-series today, and even the top-end 45W 4900H is "only" 3.3GHz base, and that has just 8MB of L3.
So, are we thinking the Xbox Series X CPU has 16MB of L3 or 8MB?
From the DF article, where they mention a total of only 76MB of SRAM [ https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs ]:
There are customisations to the CPU core - specifically for security, power and performance, and with 76MB of SRAM across the entire SoC, it's reasonable to assume that the gigantic L3 cache found in desktop Zen 2 chips has been somewhat reduced. The exact same Series X processor is used in the Project Scarlett cloud servers that'll replace the Xbox One S-based xCloud models currently being used. For this purpose, AMD built in ECC error correction for GDDR6 with no performance penalty (there is actually no such thing as ECC-compatible G6, so AMD and Microsoft are rolling their own solution), while virtualisation features are also included.
From the DF article, they mentioned a total of only 76MB of SRAM; "it's reasonable to assume that the gigantic L3 cache found in desktop Zen 2 chips has been somewhat reduced."

Why would it be? The Ryzen 3700X has 36MB of cache (4MB L2 + 32MB L3), the GPU maybe has another 10MB, and that leaves plenty for other uses.
Why would it be? The Ryzen 3700X has 36MB of cache (4MB L2 + 32MB L3), the GPU maybe has another 10MB, and that leaves plenty for other uses.

That depends on what they're willing to count in the total.
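A quick back-of-the-envelope tally of that budget (the CPU figures are stock desktop Zen 2; the ~10MB GPU number is just the guess above, so treat this as illustrative):

```python
# Rough SRAM budget against the 76MB SoC total quoted by DF.
# CPU cache sizes are desktop Zen 2 (3700X); the GPU figure is a guess.
TOTAL_SRAM_MB = 76

cpu_l2_mb = 8 * 0.5   # 8 cores x 512KB L2
cpu_l3_mb = 32        # full desktop Zen 2 L3
gpu_mb    = 10        # guessed GPU caches/buffers

leftover = TOTAL_SRAM_MB - (cpu_l2_mb + cpu_l3_mb + gpu_mb)
print(f"left for everything else: {leftover} MB")   # 30.0 MB

# If the L3 were instead cut to 8MB total (4MB per CCX, as in Renoir):
print(f"with 8MB L3: {TOTAL_SRAM_MB - (cpu_l2_mb + 8 + gpu_mb)} MB")  # 54.0 MB
```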
Error correction is irrelevant to the memory technology or PHY. It's a logic block in the memory controller implementation, plus additional memory width the vendor needs to dedicate to it. Saying GDDR6 doesn't support ECC is very weird; DDR4 doesn't specifically support it either - it's not the job of the memory PHY.

Wouldn't JEDEC have a provision for this for DDR4 DIMMs? The standards can outline a fair amount of detail surrounding the DIMM and the ECC scheme adopted, which the industry would be unusually consistent in implementing if there were no standard.
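To make the "logic block" point concrete: below is a toy single-error-correcting Hamming(7,4) encoder/decoder in Python. Real memory controllers use a wider SECDED variant such as (72,64) - those 8 check bits per 64 data bits are exactly the "additional memory width" mentioned above - but the XOR logic is the same idea, and nothing in it cares whether the protected bits live in DDR4 or GDDR6. A minimal sketch, not any vendor's actual implementation:

```python
# Minimal Hamming(7,4): 4 data bits protected by 3 parity bits.
# The ECC logic is pure XOR -- it's independent of the memory
# technology holding the 7 bits (DDR4, GDDR6, SRAM, ...).

def encode(nibble: int) -> list[int]:
    """Encode 4 data bits into a 7-bit codeword (positions 1..7)."""
    d = [(nibble >> i) & 1 for i in range(4)]   # d1..d4
    p1 = d[0] ^ d[1] ^ d[3]                     # covers positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                     # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]                     # covers positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def decode(code: list[int]) -> int:
    """Correct up to one flipped bit, then return the 4 data bits."""
    s1 = code[0] ^ code[2] ^ code[4] ^ code[6]
    s2 = code[1] ^ code[2] ^ code[5] ^ code[6]
    s3 = code[3] ^ code[4] ^ code[5] ^ code[6]
    syndrome = s1 + 2 * s2 + 4 * s3             # 0 means no error
    if syndrome:
        code[syndrome - 1] ^= 1                 # flip the bad bit back
    d = [code[2], code[4], code[5], code[6]]
    return sum(b << i for i, b in enumerate(d))

# Every single-bit flip in every codeword is detected and corrected.
for value in range(16):
    for bit in range(7):
        corrupted = encode(value)
        corrupted[bit] ^= 1
        assert decode(corrupted) == value
print("all single-bit errors corrected")
```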
Is there a more high-end workload that would justify such a scheme - concerns about higher penalties from bit corruption due to memory encryption/compression, or maybe a hindrance for rowhammer attacks?

Cloud-based work when it's not being used to run games.
Glad to see a reasonable die size and RDNA 2's big improvement arriving just in time for next gen. It means the doom-and-gloom about price was an overreaction, since the SoC is practically the same size as in previous launch consoles. I'm not really worried about the RAM cost either, because the weird two-speed partitioning is a good indication they're still designing to a price point. And the power should be maybe a little above 200W, without too expensive a cooling solution.
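For what it's worth, the two-speed arrangement falls straight out of mixing 2GB and 1GB GDDR6 parts on one 320-bit bus; a quick sketch using the figures from the DF article (ten 14Gbps chips, a 10GB pool striped across all ten, and a 6GB pool across the six 2GB chips):

```python
# Series X memory per the DF article: ten 14Gbps GDDR6 chips on a
# 320-bit bus -- six 2GB parts and four 1GB parts (16GB total).
GBPS_PER_PIN = 14
BITS_PER_CHIP = 32

def bandwidth_gb_s(active_chips: int) -> float:
    """Aggregate bandwidth when a region is striped over `active_chips`."""
    return GBPS_PER_PIN * active_chips * BITS_PER_CHIP / 8

# The first 1GB of every chip interleaves into a fast 10GB pool;
# the upper 1GB of the six 2GB chips forms the slower 6GB pool.
print(bandwidth_gb_s(10))  # 560.0 GB/s -- "GPU optimal" 10GB
print(bandwidth_gb_s(6))   # 336.0 GB/s -- remaining 6GB
```

Populating a mix of densities like that is cheaper than ten 2GB chips across the board, which is why it reads as designing to a price point.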
"12 TFLOPs was our goal from the very beginning. We wanted a minimum doubling of performance over Xbox One X to support our 4K60 and 120 targets. And we wanted that doubling to apply uniformly to all games," explains Andrew Goossen. "To achieve this, we set a target of 2x the raw TFLOPs of performance knowing that architectural improvements would make the typical effective performance much higher than 2x. We set our goal as a doubling of raw TFLOPs of performance before architectural improvements were even considered - for a few reasons. Principally, it defined an audacious target for power consumption and so defined our whole system architecture.
That made me think of something from the DF article on the XBSX.
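The raw-TFLOPs doubling he describes is easy to sanity-check from the published CU counts and clocks (40 CUs at 1172MHz for the One X, 52 CUs at 1825MHz for the Series X); a quick sketch:

```python
# FP32 TFLOPs = CUs x 64 shader ALUs x 2 ops/clock (FMA) x clock (GHz) / 1000.
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

print(f"Xbox One X:    {tflops(40, 1.172):.2f} TF")  # ~6.00
print(f"Xbox Series X: {tflops(52, 1.825):.2f} TF")  # ~12.15
```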
Considering that they started development of the XBSX back in 2016, it must have seemed a very real possibility that the console would be a high-wattage machine. Back then, they couldn't have known that AMD would be as successful at increasing the perf/watt of its GPU designs as it seems to have been with RDNA 2.
So, they were certainly prepared (the bolded part: "it defined an audacious target for power consumption") to have to market a very high-wattage machine, but it may turn out that they don't have to.
Regards,
SB
I'm somewhat disappointed with the look of those innards. Seems less elegantly assembled than the external look suggested. What an anvil of a console this is likely to be...

I guess it is well short of the fantasy vortex-cooled super arrangement some of us might have first imagined. Curious assembly, though, one that's easy to take apart and put together. Is that to reduce servicing costs? Or to enable home-brew hardware modding for alternate cases, water-cooled overclocking, and hacked-in RAM upgrades?