"Interesting take. 32GB or even 64GB would be enough to load in large portions of a game."
I think NAND used as SLC will have better lifetime too.
Yeah, assuming it sticks to two SEs.
It'd be more important for Lockhart not to be totally gimped on the geometry side, since geometry throughput matters more at lower resolutions for a given set of model assets, and not everything can be LOD'ed to maintain a given triangle-to-pixel density.
They could even just halve the number of ROPs per SA for Lockhart to save some die space there. According to the Navi 10 die shot, eight DCUs should be in the region of 32mm^2, while each 64-bit MC is roughly 13mm^2. If they simply shave off 16 DCUs and 128 bits of bus, that's about 90mm^2 off of Anaconda's size. Hopefully they don't gimp the bandwidth much further (<192-bit); 12GB would be a rather simple configuration with a 6x2GB setup.
Maybe LH is in the region of 240-250mm^2 when all is said and done?
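A quick back-of-envelope check of that ~90mm^2 figure (a sketch using the die-shot estimates above, not official numbers):

```python
# Navi 10 die-shot estimates from the post above (not official figures)
dcu_block_mm2 = 32.0   # ~area of eight DCUs (16 CUs)
mc_64bit_mm2 = 13.0    # ~area of one 64-bit GDDR6 memory controller

# cutting 16 DCUs (two 8-DCU blocks) and 128 bits of bus (two 64-bit MCs)
saved = (16 / 8) * dcu_block_mm2 + (128 / 64) * mc_64bit_mm2
print(f"~{saved:.0f} mm^2 saved")   # ~90 mm^2
```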
Interesting figures! Are you including dropping the L2 cache along with the memory controllers?
With so many fewer CUs per SA in the hypothetical setup, might they also be able to halve L2 cache on the remaining controllers (down to 2MB from the 4MB in RDNA1)? That might save them a bit more area still.
"I do wonder about memory though. 6 x GDDR6 would be a lot of bandwidth for a budget device around 4TF, even with 8 Zen 2 cores. Although with the CPU and file I/O on the XSX only seeing 336GB/s across the whole 16GB, maybe that's a (tenuous) indicator that Lockhart might have the same...."
Maybe they could get away with a cheaper bin (e.g. 12Gbps).
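For reference, a sketch of what those bins work out to on a hypothetical 6-chip, 192-bit Lockhart bus (assuming standard 32-bit GDDR6 devices):

```python
def gddr6_bandwidth_gbs(chips: int, gbps_per_pin: float) -> float:
    # each GDDR6 chip is 32 bits wide; GB/s = bus bits / 8 * Gbps per pin
    return chips * 32 / 8 * gbps_per_pin

print(gddr6_bandwidth_gbs(6, 14.0))  # 336.0 GB/s - the XSX slow-pool figure
print(gddr6_bandwidth_gbs(6, 12.0))  # 288.0 GB/s - the cheaper 12Gbps bin
```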
"I found this an interesting part of the tweaktown article regarding the controller speed: 'Based on this, the Xbox Series X's SSD can come in up to 2TB capacities, and theoretically deliver up to 3.75GB/sec sequential reads and writes...' I take it that that is raw speed and not compressed? If so, why does MS advertise 2.4 raw and 4.8 compressed instead? Is that why the HW decompression chip is listed at a 6GB/s rating, well above MS' listed speeds? Very confusing unless the difference is for overhead."
Because the speed described is guaranteed bandwidth, meaning that is what to expect under all load conditions (heat). MS never gave out optimal speeds; we just assumed they were the same.
"I take it that that is raw speed and not compressed? If so, why does MS advertise 2.4 raw and 4.8 compressed instead?"
To get 3750 they need to buy the 1200 MT nand chips.
With that specific single-core controller, the overhead from signalling, ECC, etc. gives 3750 (out of a 4800 NAND bus) available to the host on the other side of the controller.
So they would be using 800 MT parts, which add up to 2500 after removing the same overhead, making 2400 "guaranteed" reasonable. These are widespread and are much less expensive than the cream of the crop.
Sony must be using 533 MT or 667 MT, saving more money on the nand, but spending more on the controller.
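Putting that math in one place (a sketch: the ~78% efficiency factor is implied by the 3750-out-of-4800 figure above, and 4 channels is the E19T's configuration):

```python
# Host-visible bandwidth from raw NAND bus speed. The ~78% efficiency
# (signalling, ECC, etc.) is implied by the 3750-out-of-4800 figure above.
EFFICIENCY = 3750 / 4800   # ~0.78

def host_mbs(channels: int, mt_per_chip: int) -> float:
    return channels * mt_per_chip * EFFICIENCY

print(host_mbs(4, 1200))   # ~3750 - premium 1200 MT parts on the E19T's 4 channels
print(host_mbs(4, 800))    # ~2500 - cheaper 800 MT parts; 2400 "guaranteed" fits
print(host_mbs(12, 533))   # ~5000 - Sony's 12-channel controller, slower flash
print(host_mbs(12, 667))   # ~6250 - PS5's 5500 MB/s raw sits between these two
```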
"...and then create 12 channels compared to 4 to reach the speeds touted in their solution?"
Yep, the bandwidth they wanted seems to be the foundation of the entire design: the flash parts, the custom controller, and the decompression block in the SoC.
"The entire OS could run on virtual RAM from the SSD with no issues."
Won't happen. The OS needs RAM for background tasks, and you want to reduce writes to the SSD.
"I don't know why MS did that, wish they hadn't."
MS chose the RAM setup because developers like such a trade-off when it gives them more bandwidth. Goossen talked about this in the Inside Xbox Series X Digital Foundry article.
"MS chose the RAM setup because developers like such a trade-off when it gives them more bandwidth. Goossen talked about this in the Inside Xbox Series X Digital Foundry article."
I think a lot of people believe that it will be handled by the developers, hence the concerns.
"To get 3750 they need to buy the 1200 MT nand chips."
Very unlikely, IMO. Those will cost a premium, and consoles are very cost sensitive.
"I would be surprised if both MS and Sony solutions don't use 8 channels; it doubles the amount of IOPS your storage device can handle, and will be crucial to how the devices are used."
The channel counts for both flash controller chips are known. Sony uses a custom 12-channel design, while Microsoft uses a PS5019-E19T.
"The channel counts for both flash controller chips are known. Sony uses a custom 12-channel design, while Microsoft uses a PS5019-E19T."
Alright, surprised by this.
"The channel counts for both flash controller chips are known. Sony uses a custom 12-channel design, while Microsoft uses a PS5019-E19T."
What does "CE #? Max: 16" mean?

"What does 'CE #? Max: 16' mean?"
Chip Enable lines. They let you put more chips on the same channel to grow capacity, but only one chip per channel can be enabled at a time, exactly like using more DIMMs on a PC than you have physical channels. But here it's limited to 2TB total.
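That "Max: 16" also lines up with the 2TB ceiling, if you assume 1Tb dies (the die density is an assumption for illustration):

```python
# Capacity scales with addressable dies (chip-enable targets), not channels.
max_ce_targets = 16
die_gb = 128                     # assuming 1Tb (128GB) NAND dies
print(max_ce_targets * die_gb)   # 2048 GB = the 2TB cap mentioned above
```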
"Very unlikely, IMO. Those will cost a premium, and consoles are very cost sensitive."
The architect for the Series X gave a >6 GB/s throughput for the decompression block, though the decision not to use that as the official number seems to indicate it's not common.
"I would be surprised if both MS and Sony solutions don't use 8 channels; it doubles the amount of IOPS your storage device can handle, and will be crucial to how the devices are used."
The 2.4GB/s figure might be a limit of the decompression block, i.e. 4.8GB/s decompressed is quite a lot. In both cases there is plenty of bandwidth.
Cheers
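The three advertised numbers are at least self-consistent (a sketch of the arithmetic; the "headroom" reading is an interpretation):

```python
raw = 2.4          # GB/s guaranteed from the SSD
typical = 4.8      # GB/s after decompression, MS's advertised figure
block_peak = 6.0   # GB/s, the >6 GB/s decompression block throughput

print(typical / raw)      # 2.0 - the ~2:1 ratio MS treats as typical
print(block_peak / raw)   # 2.5 - ratio the block could sustain on highly
                          #       compressible data before it bottlenecks
```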
"The architect for the Series X gave a >6 GB/s throughput for the decompression block, though the decision not to use that as the official number seems to indicate it's not common."
From the piece:
https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs
As textures have ballooned in size to match 4K displays, efficiency in memory utilisation has got progressively worse - something Microsoft was able to confirm by building in special monitoring hardware into Xbox One X's Scorpio Engine SoC. "From this, we found a game typically accessed at best only one-half to one-third of their allocated pages over long windows of time," says Goossen.
That is pretty neat.
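A quick read of the arithmetic implied by that quote (a sketch; the multiplier framing is mine, not anything MS published):

```python
# If a game touches only 1/3 to 1/2 of its allocated pages over long
# windows, keeping just the touched pages resident stretches RAM 2-3x.
for touched in (1/2, 1/3):
    print(f"pages touched: {touched:.2f} -> effective multiplier {1/touched:.1f}x")
```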