Xbox Series X [XBSX] [Release November 10 2020]

Tile sizes are fixed in hardware PRT. I forget the exact amount, but 64K seems to come to mind. IIRC it was too large for them. If you want a custom tile size, PRT won't do it for you. The gains from the hardware portion were lost elsewhere down the chain to support the size of the tile, if that's sort of what he was getting at.
Thanks, that could be it.
Don't suppose you remember what tile sizes are now?

I remember seeing what I think was a PRT plane demo last gen; it was impressive and looked so promising, much like this demo.
Hence why I hope version 2 is actually "fixed".
 
Thanks, that could be it.
Don't suppose you remember what tile sizes are now?

I remember seeing what I think was a PRT plane demo; it was impressive and looked so promising, much like this demo.
Hence why I hope version 2 is actually "fixed".
No change, still 64K IIRC. @DmitryKo would know best offhand.
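For what it's worth, the 64K figure matches how D3D tiled resources work: every tile is a fixed 64 KiB, so a tile's footprint in texels falls straight out of the format's bytes per texel. A quick back-of-the-envelope sketch (plain Python, not any console API):

```python
# Rough arithmetic for D3D-style tiled resources: every tile is a fixed
# 64 KiB, so the tile's texel footprint is dictated purely by the format's
# bytes-per-texel -- you can't ask the hardware for a smaller tile.
TILE_BYTES = 64 * 1024

def tile_shape(bytes_per_texel):
    """Return (width, height) in texels of one 64 KiB tile for a 2D texture.
    Matches the standard D3D tile shapes for common formats: the tile area is
    64 KiB / bytes_per_texel, arranged as the most square power-of-two
    rectangle with width >= height."""
    texels = int(TILE_BYTES / bytes_per_texel)
    height = 1
    while (height * 2) ** 2 <= texels:
        height *= 2
    return texels // height, height

# BC1 packs 4x4 texels into 8 bytes (0.5 B/texel), BC7 into 16 bytes (1 B/texel)
for name, bpt in [("RGBA8", 4), ("BC7", 1), ("BC1", 0.5)]:
    w, h = tile_shape(bpt)
    print(f"{name}: one 64 KiB tile covers {w}x{h} texels")
# RGBA8: 128x128, BC7: 256x256, BC1: 512x256
```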
 
What metric is being used to measure complexity? Amount of hardware in general, or what was exposed to the programmer?
Also who is being compared in each generation?
The original PlayStation had a CPU and some dedicated processors on-die, with a separate graphics chip.
The PS2 had a CPU with dedicated processors, though it included a non-standard on-die bus between the CPU and vector units as well as scratchpad memory.
In these cases, there was an element of CPU die silicon that went towards something to be programmed for geometry processing. The PS2's graphics chip had EDRAM, although the graphics elements were primarily related to pixel processing.
The PS2 also included a PS1 processing element that served in an IO capacity if not being tasked with backwards compatibility.
Much of this was exposed at a lower level and without the level of hardware management and protection common today.

The original Xbox had a variant of a commodity x86 processor, which was straightforward to program despite having a comparatively large amount of internal complexity. The GPU was a variant of a PC architecture GPU with hardware T&L.

The PS3 had a similar CPU+processing element concept, although the SPEs were tasked with more than geometry (they did rather well with the geometry tasks they were given). There was one general purpose core that could be programmed in a relatively straightforward manner, and the SPEs were architecturally distinct programming targets with an explicit and non-standard memory organization. This was paired with an unusually standard GPU, for Sony. The apparent story there is that Sony's original plan for a more exotic solution fell through.
The Xbox 360 had a custom CPU, but it was a uniform set of 3 general purpose cores. The GPU was a unified architecture with an EDRAM pool.

The PS4 design is an APU that is mostly standard. The Xbox One had the ESRAM, which was a memory pool that introduced complexity, although in terms of how it was integrated into the system it was intended to be even easier to use than what was considered acceptable with the Xbox 360's EDRAM.
The current gen consoles are APUs and it's down to secondary hardware blocks and ancillary elements like IO or variations in IP or bus width to distinguish them.




Is this the claim that was corrected a few posts ago? This seems like a misstatement and a mislabelling. The OS's primary footprint is in the 6GB region, but it's only a fraction of it.


Was this the choice of 320 bits versus wider? The differently handled address ranges wouldn't seem to matter electrically. The split is a matter of the capacity of the chips on the bus; the bus itself isn't affected by the chips' capacity.

What the MS engineer is saying in the video backs up what Andrew Goossen, one of the system architects, said in the DF deep dive. Goossen also justified the split memory bus by talking about issues encountered with signal integrity during testing of GDDR6:

"it sounds like a somewhat complex situation, especially when Microsoft itself has already delivered a more traditional, wider memory interface in Xbox One X - but the notion of working with much faster GDDR6 memory presented some challenges. "When we talked to the system team there were a lot of issues around the complexity of signal integrity and what-not," explains Goossen. "As you know, with the Xbox One X, we went with the 384[-bit interface] but at these incredible speeds - 14gbps with the GDDR6 - we've pushed as hard as we could and we felt that 320 was a good compromise in terms of achieving as high performance as we could while at the same time building the system that would actually work and we could actually ship."

https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs

Well, we just heard from said system team directly.
 
Is this new? https://m.intl.taobao.com/detail/detail.html?id=643602411425&sku_properties=5919063:6536025

A Chinese PC with an XSX APU without the GPU part, but with 16 GB of GDDR6 RAM.

I can't open anything intelligible to me from that link, but couldn't it be a Series S SoC instead?
All they had to do was place 2*16Gbit chips in clamshell for each channel to reach 16GB GDDR6, and for a GPU-less SoC that's already more than enough bandwidth for the CPU cores alone.

But the GPU-less part is intriguing. For all I know the Series SoCs only have 2x PCIe 4.0 lanes for storage, so either it's for a headless datacenter box, or it has only 2 lanes of PCIe 4.0 for a dGPU and then any local storage would need to use USB.
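For reference, here's the arithmetic behind that clamshell guess, assuming a Series S-like 128-bit interface and 14 Gbps chips (both assumptions on my part, nothing confirmed about this board):

```python
# Back-of-the-envelope for the clamshell idea above. Assumed: a Series S-like
# 128-bit GDDR6 interface (4 x 32-bit channels) running 16 Gbit chips at
# 14 Gbps per pin -- illustrative only, not a confirmed board layout.
channels          = 4        # 32-bit channels on a 128-bit bus
chips_per_channel = 2        # clamshell: two chips share one channel
chip_capacity_gb  = 16 / 8   # a 16 Gbit chip is 2 GB
data_rate_gbps    = 14       # per pin

capacity_gb   = channels * chips_per_channel * chip_capacity_gb
bandwidth_gbs = channels * 32 * data_rate_gbps / 8

print(f"{capacity_gb:.0f} GB at {bandwidth_gbs:.0f} GB/s")  # 16 GB at 224 GB/s
```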
 
An APU without the GPU part? Wouldn't that just be a CPU?
AMD already abandoned the APU term anyway. Regardless, it obviously has the GPU, it's just disabled for one reason or another.

I can't open anything intelligible to me from that link, but couldn't it be a Series S SoC instead?
All they had to do was place 2*16Gbit chips in clamshell for each channel to reach 16GB GDDR6, and for a GPU-less SoC that's already more than enough bandwidth for the CPU cores alone.

But the GPU-less part is intriguing. For all I know the Series SoCs only have 2x PCIe 4.0 lanes for storage, so either it's for a headless datacenter box, or it has only 2 lanes of PCIe 4.0 for a dGPU and then any local storage would need to use USB.
One of the marketing images confirms a 10-chip memory config around the SoC, just like the XSX.
 
What the MS engineer is saying in the video backs up what Andrew Goossen, one of the system architects, said in the DF deep dive. Goossen also justified the split memory bus by talking about issues encountered with signal integrity during testing of GDDR6:

"it sounds like a somewhat complex situation, especially when Microsoft itself has already delivered a more traditional, wider memory interface in Xbox One X - but the notion of working with much faster GDDR6 memory presented some challenges. "When we talked to the system team there were a lot of issues around the complexity of signal integrity and what-not," explains Goossen. "As you know, with the Xbox One X, we went with the 384[-bit interface] but at these incredible speeds - 14gbps with the GDDR6 - we've pushed as hard as we could and we felt that 320 was a good compromise in terms of achieving as high performance as we could while at the same time building the system that would actually work and we could actually ship."

https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs

Well, we just heard from said system team directly.
I want this translated to something that a non-techie can understand :p
 
I want this translated to something that a non-techie can understand :p
Remember when Egon said never to cross the beams? The beams started crossing when they tried a wide unified memory configuration instead of the split one they ended up using.
I think.
 
Remember when Egon said never to cross the beams? The beams started crossing when they tried a wide unified memory configuration instead of the split one they ended up using.
I think.

This still is unified memory.
 
I want this translated to something that a non-techie can understand :p
The bolded part?
GDDR6's tight electrical signaling requirements (significantly tighter than GDDR5's) make it harder / more expensive to build a PCB with many memory channels. That's why they reduced the bus width from the 384-bit (12 x 32-bit channels) GDDR5 on the One X to 320-bit (10 x 32-bit channels) GDDR6 on the Series X.
But they still gained plenty of raw bandwidth due to the latter's higher clock speeds. It wasn't really a tradeoff.
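Putting rough numbers on that (peak figures only; the One X's GDDR5 ran at 6.8 Gbps per pin):

```python
# Peak bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8 -> GB/s
def peak_bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits * gbps_per_pin / 8

one_x    = peak_bandwidth_gbs(384, 6.8)   # Xbox One X: GDDR5 @ 6.8 Gbps
series_x = peak_bandwidth_gbs(320, 14.0)  # Series X:   GDDR6 @ 14 Gbps

print(f"One X:    {one_x:.1f} GB/s")    # 326.4 GB/s
print(f"Series X: {series_x:.1f} GB/s") # 560.0 GB/s
```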
 
It's a trade-off if you actually trade something off.
Compared to the One X, Microsoft didn't trade bandwidth off with the jump to GDDR6, nor RAM amount, nor latency. It was a win-win situation, except for cost, obviously.
 
It's a trade-off if you actually trade something off.
Compared to the One X, Microsoft didn't trade bandwidth off with the jump to GDDR6, nor RAM amount, nor latency. It was a win-win situation, except for cost, obviously.

In a sense though, they did. An extra 4GB of memory would've pretty much guaranteed the XSX thrashed the PS5 for the entire generation. As it is, they have a system that's going to do increasingly better than the PS5 as the generation goes on, but still in a way that might not be all that visible to a layman.

I'm glad that Microsoft have gone with a two-tier launch, and I hope the XSS is successful because I want it to demonstrate that a two-tier launch has value. But I would've much rather seen a less compromised duo of a 40CU XSS with 12GB GDDR6, and a 60CU XSX with 20GB GDDR6. I think that would've put Sony in a much more difficult position.

But, of course, hindsight is 20/20 ¯\_(ツ)_/¯
 
In a sense though, they did. An extra 4GB of memory would've pretty much guaranteed the XSX thrashed the PS5 for the entire generation. As it is, they have a system that's going to do increasingly better than the PS5 as the generation goes on, but still in a way that might not be all that visible to a layman.

I'm glad that Microsoft have gone with a two-tier launch, and I hope the XSS is successful because I want it to demonstrate that a two-tier launch has value. But I would've much rather seen a less compromised duo of a 40CU XSS with 12GB GDDR6, and a 60CU XSX with 20GB GDDR6. I think that would've put Sony in a much more difficult position.

But, of course, hindsight is 20/20 ¯\_(ツ)_/¯
While a two-tier system is a brilliant idea, it hinders VR a lot. I'm glad the PS5 did not go for it.
 
While a two-tier system is a brilliant idea, it hinders VR a lot. I'm glad the PS5 did not go for it.
Why?
It works on PC as well, so why should it make things worse for VR? (Except for the visual quality, but that's still better than what the PS4 can deliver.)
I'm not really a fan of VR so far. There is still the headset, which ... well, disturbs my experience, and simulator sickness is the other problem I have.
Also, VR demands 90-120 FPS to work well, so much of the extra power consoles have gets split in half. But the PS4 showed that VR was acceptable for many people even on a low-end console. In the end, though, it was just a gimmick for most people, who bought it once, had their fun and then almost never touched it again. Just like the EyeToy, Kinect (v1), ... before it, but it still wasn't as successful. It is still a niche market and no longer gets as much attention as before. Sony could have prevented that if PSVR had been directly compatible with PS5 games, but they didn't want that, so you can imagine it wasn't a financial success so far, and they seem to want to give it another try in a few years with PSVR2. They think this might be the future (like 3D TVs ...), but so far the market hasn't fully accepted it.
 
In a sense though, they did. An extra 4GB of memory would've pretty much guaranteed the XSX thrashed the PS5 for the entire generation.
This isn't a trade-off from adopting GDDR6. They could have had an extra 4GB with the current 10-channel arrangement, simply by using 16Gb chips on all channels.
It would also prevent the memory contention issues the platform is apparently having, as all memory would be accessed at the same 560GB/s bandwidth.

Not getting 20GB GDDR6 was a cost decision, not an architectural limitation. I doubt it's a supply limitation considering the Series S and the PS5 are using plenty of 16Gb chips.
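To put numbers on both layouts (the shipping chip mix is the publicly documented one; the 10 x 16Gbit build is just the hypothetical above):

```python
# Series X memory layout versus the hypothetical all-16Gbit build discussed
# above. Each GDDR6 chip sits on its own 32-bit channel at 14 Gbps.
GBPS, CHANNEL_BITS = 14, 32

def region_bandwidth_gbs(chips):
    return chips * CHANNEL_BITS * GBPS / 8

# Shipping config: 6 x 2GB + 4 x 1GB chips = 16 GB total.
# The first 1 GB of every chip is interleaved 10-wide (the "GPU-optimal" 10 GB);
# the second 1 GB exists only on the six 2 GB chips (the "standard" 6 GB).
print("10 GB region:", region_bandwidth_gbs(10), "GB/s")   # 560.0 GB/s
print(" 6 GB region:", region_bandwidth_gbs(6), "GB/s")    # 336.0 GB/s

# Hypothetical 10 x 2GB (16 Gbit) build: 20 GB, all interleaved 10-wide.
print("20 GB uniform:", region_bandwidth_gbs(10), "GB/s")  # 560.0 GB/s
```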


As it is, they have a system that's going to do increasingly better than the PS5 as the generation goes on, but still in a way that might not be all that visible to a layman.
I've yet to see a single developer making such a statement. I've seen more than enough developers stating some engines will push more from the I/O and the faster+narrower architecture on one console, and more from compute on the other console. None has specifically stated there's a clear long-term winner.
Feel free to provide examples for your claims, though.
 
Not getting 20GB GDDR6 was a cost decision, not an architectural limitation. I doubt it's a supply limitation considering the Series S and the PS5 are using plenty of 16Gb chips.
Exactly. Also, they have included technologies to save memory and bandwidth, so limiting the GPU in theory to "just" 10GB of fast memory wasn't really the problem. There is still plenty of data that needs to be in memory but isn't accessed that often (like sound data, world data, AI, ...). The only problem is that it isn't really a split pool physically, yet in theory it is: the memory still has the same latency, but reading from the "extra" memory is just a bit slower.
The only real issue with the "10GB fast memory" right now is that the new concepts must first be adopted to actually save memory and bandwidth. Once that is done, memory and bandwidth should be used much, much more efficiently.
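Purely as a toy illustration of that placement idea (the 10 GB / 3.5 GB game-visible split is what was reported for the Series X; the asset names and sizes below are invented, and this isn't any real console API):

```python
# Toy sketch: place bandwidth-hungry GPU data in the 10 GB "GPU-optimal"
# region and rarely-touched / CPU-side data in the "standard" region.
# Reported game-visible budgets on Series X: 10 GB fast + 3.5 GB standard
# (the OS reserves the rest of the 6 GB). All asset sizes here are made up.
FAST_BUDGET_GB, STD_BUDGET_GB = 10.0, 3.5

assets = [
    # (name, size in GB, bandwidth-hungry?)
    ("render targets + streamed textures", 7.5, True),
    ("geometry / BVH",                     2.0, True),
    ("audio banks",                        1.0, False),
    ("world sim + AI state",               1.5, False),
]

fast_used = sum(size for _, size, hot in assets if hot)
std_used  = sum(size for _, size, hot in assets if not hot)
assert fast_used <= FAST_BUDGET_GB and std_used <= STD_BUDGET_GB

print(f"GPU-optimal pool: {fast_used:.1f} / {FAST_BUDGET_GB} GB")
print(f"Standard pool:    {std_used:.1f} / {STD_BUDGET_GB} GB")
```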
For the Series S, I look at it just like having a PC with a GTX 1060. You can play with it and games are fun, they just don't look as good.

When the new technologies are adopted, I guess consoles can still look better (except for RT) than PC games running on "newer" hardware, just because the PC is not yet ready to adopt the new technologies and it will still take some time until everyone has an NVMe SSD in their gaming system. So developers still need to use much more memory on the PC side to compensate for this.
 
Why?
It works on PC as well, so why should it make things worse for VR? (Except for the visual quality, but that's still better than what the PS4 can deliver.)
I'm not really a fan of VR so far. There is still the headset, which ... well, disturbs my experience, and simulator sickness is the other problem I have.
Also, VR demands 90-120 FPS to work well, so much of the extra power consoles have gets split in half. But the PS4 showed that VR was acceptable for many people even on a low-end console. In the end, though, it was just a gimmick for most people, who bought it once, had their fun and then almost never touched it again. Just like the EyeToy, Kinect (v1), ... before it, but it still wasn't as successful. It is still a niche market and no longer gets as much attention as before. Sony could have prevented that if PSVR had been directly compatible with PS5 games, but they didn't want that, so you can imagine it wasn't a financial success so far, and they seem to want to give it another try in a few years with PSVR2. They think this might be the future (like 3D TVs ...), but so far the market hasn't fully accepted it.
We already have games which render as low as 720p on the S. The S is too weak for next-gen VR.
 