Xbox Series X [XBSX] [Release November 10 2020]

We already have games which render as low as 720p on the S. The S is too weak for next-gen VR.
That is only a question of quality. The Series S is far more powerful than the PS4, so it should still be enough. And normally VR games are not very "attractive" from a graphical standpoint. But I really don't think VR is coming to consoles, because it is still a flop. It is interesting, but the audience just isn't there, and Sony effectively killed it by not supporting PSVR for PS5 titles (only through BC).
 
I really don't think VR is coming to consoles, because it is still a flop. It is interesting, but the audience just isn't there, and Sony effectively killed it by not supporting PSVR for PS5 titles (only through BC).


A flop? PSVR has made billions in revenue, if not profit, for Sony. They've sold 5 million units, and they're totally committed to PSVR2 and to growing the market. BC is just Sony being their usual naff selves with customers. Millions of Quests have been sold as well, outpacing PSVR sales, again into the billions in revenue.

It's not mega numbers relative to consoles, never mind phones, but VR is far from a passing fad now. Xbox aren't too late to the party, but MS could really do with a WMR strategy that overlaps with Xbox.
 
We already have games which render as low as 720p on the S. The S is too weak for next-gen VR.

An extra GPU inside the VR headset's power supply.


But on a serious note, I wouldn't rule out the S. With eye tracking and foveated rendering the S could produce a pretty good experience. Add DLSS for VR to the fire and now you're cooking.
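
Roughly how much foveated rendering buys you, as a back-of-envelope (the per-eye resolution, foveal coverage and peripheral shading rate below are my assumptions, not anything announced):

```python
# Back-of-envelope sketch (my numbers, not anything official): how much shading
# work fixed foveated rendering could save on a Series S class GPU.
# Assumes a 2000x2040 per-eye target and a central foveal region rendered at
# full rate, with the periphery shaded at quarter rate (half res in each axis).

def foveated_shading_cost(width, height, fovea_fraction, periphery_rate=0.25):
    total = width * height
    fovea = total * fovea_fraction            # shaded 1:1
    periphery = total * (1 - fovea_fraction)  # shaded at reduced rate
    return fovea + periphery * periphery_rate

per_eye_w, per_eye_h = 2000, 2040            # assumed per-eye panel resolution
full = per_eye_w * per_eye_h * 2             # both eyes, no foveation
fov = foveated_shading_cost(per_eye_w, per_eye_h, fovea_fraction=0.15) * 2

print(f"full-rate shading: {full / 1e6:.1f} Mpix per frame")
print(f"foveated shading:  {fov / 1e6:.1f} Mpix per frame")
print(f"saving:            {100 * (1 - fov / full):.0f}%")
```

On those assumptions the shading load drops by roughly two thirds, which is the kind of headroom that would make VR on the S less of a stretch.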
 
This isn't a trade-off from adopting GDDR6. They could have had an extra 4GB with the current 10-channel arrangement, simply by using 16Gb chips on all channels.
It would also prevent the memory contention issues the platform is apparently having, as all memory would be accessed at the same 560GB/s bandwidth.

Not getting 20GB GDDR6 was a cost decision, not an architectural limitation. I doubt it's a supply limitation considering the Series S and the PS5 are using plenty of 16Gb chips.
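
For anyone who wants the arithmetic behind that, here's the channel math as a quick sketch (using the widely reported figures as assumptions):

```python
# Rough sketch of the Series X memory layout being discussed (figures are the
# widely reported ones, treat them as assumptions): ten 32-bit GDDR6 channels
# at 14 Gbps, populated with a mix of 2GB (16Gb) and 1GB (8Gb) chips.

GBPS_PER_PIN = 14          # GDDR6 data rate
CHANNEL_WIDTH = 32         # bits per chip/channel

def bandwidth(channels):
    return channels * CHANNEL_WIDTH * GBPS_PER_PIN / 8   # GB/s

# Shipping config: 4x 1GB + 6x 2GB chips = 16GB total.
# The first 10GB interleaves across all ten channels; the remaining 6GB only
# lives on the six 2GB chips, so it interleaves across six channels.
print("fast 10GB pool:", bandwidth(10), "GB/s")   # 560.0
print("slow  6GB pool:", bandwidth(6), "GB/s")    # 336.0

# Hypothetical config: 2GB chips on every channel = 20GB, all of it striped
# across ten channels at the full rate.
print("uniform 20GB:  ", bandwidth(10), "GB/s")   # 560.0
```

Same ten channels either way; the only thing the 20GB option changes is the chip density, and therefore the cost.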

Wait, you say it's not a trade-off, and then you give evidence that it is a trade-off.

As I pointed out, and as MS has hinted at, the trade-off was between speed, cost, and capacity.

Cost includes the increased complexity (and thus cost) of PCB layout and signal integrity, as well as memory capacity.

They could have simplified it by having fewer channels, as with the PS5, but at the cost of either slower memory performance or a greatly increased cost from using premium high-speed-grade chips that were not yet in mass production.

They could have had a memory pool with uniform speed. But with the channels they decided on, that would have meant either 10 GB (too little) or 20 GB (too costly) with the memory speed grade that they went with. If they had used cheaper and thus slower memory chips, then they could have saved money on the chips, but then the solution would be too slow.

Trade-offs are all about finding the best balance given the limitations of cost, product availability, time for implementation, limits of technology, etc.

Both the PS5 and XBS-X/S are exercises in trade-offs given these and potentially other factors.

You can't say there is no trade-off and then point out that something was limited due to its cost. True, everything (well, outside of products targeting markets where money doesn't matter) is limited by cost. But within that limitation, you decide what you want to "trade off" in order to find the best balance for the product you are making.

PS5 traded off lower maximum memory speed in order to lower the complexity of implementation. XBS traded off low complexity of implementation in order to attain a higher maximum memory speed. Each was governed by how their overall system was architected and how each felt they could best achieve performance given the cost (money) cap that each was given.
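
To put rough numbers on that trade (reported specs, treated as assumptions):

```python
# The same channel math, applied to the trade described above: PS5 keeps the
# bus narrow and uniform, Series X goes wider for a higher peak and accepts
# the split pool that comes with mixed chip densities.

def gddr6_bandwidth(channels, gbps=14, bits_per_channel=32):
    return channels * bits_per_channel * gbps / 8          # GB/s

print("PS5, 8 channels, 2GB chips everywhere :", gddr6_bandwidth(8), "GB/s over 16GB")
print("XSX, 10 channels, mixed 1GB/2GB chips :", gddr6_bandwidth(10), "GB/s over 10GB,",
      gddr6_bandwidth(6), "GB/s over 6GB")
```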

Regards,
SB
 
You can't say there is no trade-off and then point out that something was limited due to its cost. True, everything (well, outside of products targeting markets where money doesn't matter) is limited by cost. But within that limitation, you decide what you want to "trade off" in order to find the best balance for the product you are making.

To quote myself:

It's a trade-off if you actually trade something off.
Compared to the OneX, Microsoft didn't trade off bandwidth with the jump to GDDR6, nor RAM amount, nor latency. It was a win-win situation, except for cost, obviously.
 
To quote myself:

Even when compared to the XBO-X there are trade-offs. The XBO-X has uniform speed across the entire memory pool. While it can be argued that not all tasks require maximum memory bandwidth, it does make it easier to utilize that memory. As we've talked about in this thread, and as MS have noted (via monitoring hardware that they built into the XBO-X SOC itself), the differing speeds of memory access, depending on which mapping of memory is touched, shouldn't affect the predicted usages of the consoles, but we don't actually know the fine details of it.

How easy or difficult is it for developers to manage their memory usage? Are there cases where a developer would actually need maximum memory speed for more than 10 GB of game data? SFS will obviously help with this, but as SFS is currently non-existent in shipping games, we have no idea how effective it is. The Velocity Architecture for the SSD is also likely designed to help with this trade-off. Again, it's non-existent in shipping games, so we don't have any means to judge its potential effectiveness.

The XBO-X was interesting in that it simplified the memory subsystem by doing away with the trade-offs that we see in the X360 (eDRAM daughter die) and base XBO (ESRAM). However, the trade-off there was increased complexity in PCB layout in order to maintain signal integrity to the memory chips. The XBS consoles reintroduce some complexity into the memory subsystem, albeit nothing as dramatic as the X360 or base XBO, as a trade-off for achieving the minimum bandwidth that they required.

So, yes, even when compared to the XBO-X there was an obvious trade-off other than cost.
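
As a purely illustrative sketch of what that extra management looks like (this is not how the actual XDK exposes it, and the budgets below are just the commonly cited game-visible split), a developer effectively has to decide per allocation which pool it lands in:

```python
# Illustrative sketch only: the real toolchain surely exposes this differently,
# but it shows the kind of bookkeeping a split pool implies that a uniform pool
# does not. Budgets assume the commonly cited game-visible split: 10GB "GPU
# optimal" memory plus 3.5GB standard memory (the rest reserved for the OS).

FAST_BUDGET = 10 * 1024       # MB, 560 GB/s pool
STD_BUDGET = int(3.5 * 1024)  # MB, 336 GB/s pool

class SplitPoolAllocator:
    def __init__(self):
        self.fast_used = 0
        self.std_used = 0

    def alloc(self, size_mb, bandwidth_heavy):
        """Place bandwidth-heavy resources (render targets, frequently sampled
        textures) in the fast pool; push everything else to standard memory."""
        if bandwidth_heavy and self.fast_used + size_mb <= FAST_BUDGET:
            self.fast_used += size_mb
            return "fast"
        if self.std_used + size_mb <= STD_BUDGET:
            self.std_used += size_mb
            return "standard"
        if self.fast_used + size_mb <= FAST_BUDGET:
            self.fast_used += size_mb    # spill low-priority data into the fast pool
            return "fast (spill)"
        raise MemoryError("out of game-visible memory")

heap = SplitPoolAllocator()
print(heap.alloc(4096, bandwidth_heavy=True))    # G-buffer + streaming textures
print(heap.alloc(2048, bandwidth_heavy=False))   # audio, animation, game state
```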

Regards,
SB
 
When the new technologies are adopted, I guess consoles can still look better (except for RT) than PC games running on "newer" hardware, simply because the PC is not ready to adopt those technologies yet, and it will still take some time until everyone has an NVMe SSD in their gaming system. So developers still need to use much more memory on the PC side to compensate for this.

Why couldn't developers support SFS on systems that are capable and have a slower fallback path for systems that aren't? Simply lower texture resolution to compensate.

Does SFS actually need an SSD? I know DirectStorage does, but I'm unsure if the two are linked in that way.
 
Does SFS actually need an SSD? I know DirectStorage does, but I'm unsure if the two are linked in that way.
Going by what MS say, possibly yes in this form.

Guess you need it because it's almost like texture/tile on demand. So you need the speed to support that.
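
Something like this toy loop is what "texture/tile on demand" implies. The helper names (gpu_feedback_tiles, read_tile) are made up, and this is not the actual SFS/DirectStorage API, but it shows where the storage request sits relative to the frame:

```python
# Not the real DirectX SFS API, just a sketch of the tile-on-demand idea.

import collections

resident = set()                        # tiles currently in GPU memory
lru = collections.OrderedDict()         # eviction order

def stream_frame(gpu_feedback_tiles, read_tile, budget_tiles_per_frame=256):
    """Each frame the GPU reports which texture tiles it actually sampled;
    anything missing is fetched from storage before it is needed again."""
    loads = 0
    for tile in gpu_feedback_tiles:
        lru[tile] = True
        lru.move_to_end(tile)
        if tile not in resident and loads < budget_tiles_per_frame:
            resident.add(tile)
            read_tile(tile)             # this is the path that wants fast storage
            loads += 1
    while len(resident) > 4096:         # stay under a fixed residency budget
        victim, _ = lru.popitem(last=False)
        resident.discard(victim)
    return loads
```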
 
Going by what MS say, possibly yes in this form.

Guess you need it because it's almost like texture/tile on demand. So you need the speed to support that.
I think more accurately, you need a very low latency storage system with actual throughput being less of a concern.
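
Quick numbers to back that up (all assumptions, not measurements): the data volume per frame is tiny, it's the per-request access time that makes or breaks it.

```python
# Rough numbers (assumptions, not measurements) to illustrate why latency
# matters more than raw throughput for tile-on-demand streaming.

tile_kb = 64                 # one reserved-resource tile
tiles_per_frame = 256        # worst-case new tiles requested in a frame
frame_ms = 16.7              # 60 fps budget

throughput_needed = tile_kb * tiles_per_frame / frame_ms   # KB per ms
print(f"throughput needed: {throughput_needed * 1000 / 1024 / 1024:.2f} GB/s")
# ~0.94 GB/s: well under what any NVMe drive (or even a SATA SSD) can deliver.

# The real constraint: every one of those requests has to complete inside the
# frame, ideally much sooner, so the texture is there before it is sampled.
hdd_seek_ms = 8.0            # typical mechanical seek
nvme_access_ms = 0.1         # typical flash access
print(f"HDD:  {tiles_per_frame * hdd_seek_ms:.0f} ms of seeks per frame (hopeless)")
print(f"NVMe: {tiles_per_frame * nvme_access_ms:.0f} ms if fully serialized, "
      f"far less with queue depth")
```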
 
I'm yet to see a single developer making such a statement. I've seen more than enough developers stating some engines will push more from the I/O and faster+narrower architecture on one console, and more from compute on the other console. None has specifically stated there's a clear long-term winner.
Feel free to provide examples for your claims, though.

I thought it was generally accepted that the greater number of CUs and greater bandwidth is going to lend itself well to RT and ML tasks? I seem to recall seeing some RDNA2 GPU benchmarks in which the wider cards generally excel at RT workloads when rendering at higher resolutions. I'll try to find them.

Those benchmarks aren't perfect when trying to relate them to the XSX and PS5, because the discrete GPUs are clocked the same IIRC. But extrapolating from that fairly shaky data, I expect that the PS5 will generally see fewer bounces per ray and/or a lower resolution. But I don't think either of those things will be visible to the average Joe, whereas I do think more CUs along with more memory would've made for noticeably different visuals, akin to the PS4 Pro and X1X.

And, of course, that's all predicated on the idea that RT and ML are the direction that a lot of high tier games/engines will head over the coming years. The extent to which that becomes the case is difficult to predict, as is the speed at which it happens.

My pure speculation is that while the XSX will be all around pretty solid (i.e. 1440p-1800p) as the years go by and engines mature, the PS5 will be in the range of pretty good to good enough, and will begin to hover around the 1080p-1440p range at the same kind of timeframe that the PS4 began to hover around the 900p mark. Which will be just fine for the mainstream audience. I also anticipate a PS5 Pro around this sort of time to cater to nerds like myself who would like 4K60.
 
I thought it was generally accepted that the greater number of CUs and greater bandwidth is going to lend itself well to RT and ML tasks?
Yes, but as the latest examples have shown, the number of simulated rays doesn't translate linearly into better gaming performance, and for ML Microsoft has yet to implement it in any meaningful way in a released game.

The compute advantage of the Series X does let it simulate more rays per second, but OTOH the higher clocks on the PS5 should give it better performance on the back-end. The Series X has higher memory bandwidth, but the PS5 has faster I/O and no memory contention issues, and they both have the same number of shader engines, so it's likely that the PS5 gets slightly better utilization of its shader processors (fewer ALUs per shader engine).
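
For reference, the back-of-envelope math on that split, using the publicly quoted CU counts and clocks as assumptions:

```python
# Publicly quoted figures, used here as assumptions for a back-of-envelope
# comparison of compute (width x clock) versus front/back-end rate (clock).

def fp32_tflops(cus, clock_ghz, alus_per_cu=64, ops_per_alu=2):
    return cus * alus_per_cu * ops_per_alu * clock_ghz / 1000

xsx = fp32_tflops(52, 1.825)     # ~12.1 TF, fixed clock
ps5 = fp32_tflops(36, 2.23)      # ~10.3 TF, variable clock at its cap

print(f"Series X compute: {xsx:.2f} TF")
print(f"PS5 compute:      {ps5:.2f} TF  ({100 * ps5 / xsx - 100:+.0f}%)")

# Rasterizer, command processor and cache throughput scale with clock rather
# than CU count, which is where the PS5's clock advantage shows up.
print(f"PS5 clock advantage: {100 * 2.23 / 1.825 - 100:+.0f}%")
```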

In the end they're both very close. There's no clear cut "long-term winner" here. If 1st party and high-profile 3rd party developers do their job well, both consoles will get gorgeous titles this gen.
 
Why are we ignoring clocks here? Higher clocks also lend themselves well to RT and ML tasks.
Compute performance from higher clocks relies on being fed: it relies on data being in cache to maximize performance, and conversely, idle clock cycles result in lower performance. With RT, you're bouncing rays around to random areas, meaning cache coherency is shot, which ultimately drives you to pull data from main memory. That main memory has the same latency regardless of how high your clocks are, so waiting is bad for high-clock systems. Wide bandwidth coupled with wide CUs means you've reduced the penalty of idle cycles due to latency, at the cost of doing things slower.
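
A toy CPU-side illustration of that coherency point (not a GPU benchmark, just the principle that the same work gets slower once the access order turns random):

```python
# Touching the same data in a cache-friendly order versus a random order shows
# how latency, not clock speed, dominates once locality is gone.

import random, time

N = 1 << 21                              # ~2M elements
data = list(range(N))
seq_order = list(range(N))
rand_order = seq_order[:]
random.shuffle(rand_order)               # stands in for rays scattering the BVH walk

def walk(order):
    t0 = time.perf_counter()
    acc = 0
    for i in order:
        acc += data[i]
    return time.perf_counter() - t0, acc

t_seq, _ = walk(seq_order)
t_rand, _ = walk(rand_order)
print(f"sequential: {t_seq:.3f}s  random: {t_rand:.3f}s  "
      f"(~{t_rand / t_seq:.1f}x slower for the same work)")
```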

If you can fit the BVH into cache, that is ideal. If you cannot, you'll need to tap main memory.

On the ML side, it depends on the algorithm. Neural networks in particular run their layers serially, meaning you can't calculate the next layer until the previous layer has finished computing. Clock speed isn't going to help as much as being wide in this type of circumstance, and width requires bandwidth. But that also depends on how many nodes you are running per layer.
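
A minimal sketch of that serial-layers point (arbitrary layer sizes, nothing to do with any shipping upscaler):

```python
# Sketch of why a network's layers serialize: each matmul needs the previous
# layer's output, so extra clock speed only shrinks each step, while extra
# width/bandwidth lets each step's independent multiply-adds run in parallel.

import numpy as np

rng = np.random.default_rng(0)
layer_widths = [512, 1024, 1024, 256, 3]         # nodes per layer (arbitrary)
weights = [rng.standard_normal((m, n)).astype(np.float32)
           for m, n in zip(layer_widths[:-1], layer_widths[1:])]

def forward(x):
    for w in weights:                 # strictly one layer after another
        x = np.maximum(x @ w, 0.0)    # the work *inside* this line is what
    return x                          # parallel width and bandwidth accelerate

out = forward(rng.standard_normal((1, layer_widths[0])).astype(np.float32))
print(out.shape)                      # (1, 3)
```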
 
Compute performance from higher clocks relies on being fed: it relies on data being in cache to maximize performance, and conversely, idle clock cycles result in lower performance. With RT, you're bouncing rays around to random areas, meaning cache coherency is shot, which ultimately drives you to pull data from main memory. That main memory has the same latency regardless of how high your clocks are, so waiting is bad for high-clock systems. Wide bandwidth coupled with wide CUs means you've reduced the penalty of idle cycles due to latency, at the cost of doing things slower.

If you can fit the BVH into cache, that is ideal. If you cannot, you'll need to tap main memory.

On the ML side, it depends on the algorithm. Neural networks in particular run their layers serially, meaning you can't calculate the next layer until the previous layer has finished computing. Clock speed isn't going to help as much as being wide in this type of circumstance, and width requires bandwidth. But that also depends on how many nodes you are running per layer.
I'm pretty sure you can OC RTX cards and get a nice boost in heavy RT games.
 
Doesn't imply that much about hardware capabilities until we're seeing multiplat games look way better on the PS5, or something.

No one suggested we'll be seeing "games looking way better on the ps5".
 
No one suggested we'll be seeing "games looking way better on the ps5".
You linked a gameplay video of a game which looks (I'm saying) way better than anything that will release exclusively on XSX this year. But that's because it's a good art team; I don't see anything technically dazzling in Ratchet.
 
I'm pretty sure you can OC RTX cards and get a nice boost in heavy RT games.
Sure, all things being equal, going faster is going to be better than going slower. But the improvements from clock speed on Nvidia's side may not carry over to what we have on the console side of things. Those cards run higher memory bandwidth, are wider, and run fairly fast as well, with dedicated RT cores. Both consoles are hamstrung on width, have limited cache, and their bandwidth is much lower than Nvidia's offerings. Nvidia's whole pipeline is larger in general, so improving clock speeds is less likely to result in hitting a bottleneck early; in particular I'm looking at memory.

I'm not sure how the consoles will react here. The 6800+ series of GPUs have big caches that could possibly fit a portion of the BVH. I really don't know what to expect without more data.
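
A simple roofline-style sketch of why clocks alone stop helping once a workload goes bandwidth-bound (all figures are assumptions picked for illustration, not any specific SKU):

```python
# Roofline-style sketch: attainable throughput is capped by whichever of
# compute or memory bandwidth runs out first, which is why raising clocks
# alone stops helping once a workload becomes bandwidth-bound.

def attainable_tflops(peak_tflops, bandwidth_gbs, flops_per_byte):
    return min(peak_tflops, bandwidth_gbs * flops_per_byte / 1000)

for label, peak, bw in [("console-ish, slow pool", 10.3, 336),
                        ("console-ish, fast pool", 12.1, 560),
                        ("big discrete GPU", 29.8, 936)]:
    for intensity in (4, 16, 64):   # FLOPs performed per byte fetched from memory
        print(f"{label:24s} intensity {intensity:2d}: "
              f"{attainable_tflops(peak, bw, intensity):5.1f} TF attainable")
```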
 