Xbox Series X [XBSX] [Release November 10 2020]

If I were MS and knew Sony's cards were on the table, I'd be looking into going for a kill shot: a GPU upclock, announced much closer to release. Let's say 2 GHz. That pushes you over 13 TF, and things start to get dire for PS5, with no way for them to respond, as they're already pushing their own clocks to the edge. If you're seeing something like an average of 9 TF (average clocks) vs. 13.3+, that's a big problem for PS5.
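For reference, the TF figures here are just compute-unit arithmetic. A quick sketch (the 52 CUs and 1.825 GHz are the announced Series X specs; the 2.0 GHz clock is the hypothetical upclock, not anything announced):

```python
# FP32 throughput for an RDNA 2 style GPU:
# TFLOPS = CUs x 64 shader lanes per CU x 2 ops per clock (FMA) x clock (GHz) / 1000
def teraflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000.0

print(round(teraflops(52, 1.825), 2))  # announced Series X: 12.15 TF
print(round(teraflops(52, 2.0), 3))    # hypothetical 2 GHz upclock: 13.312 TF
```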

I have no idea, however, whether MS has any interest in this or how feasible it is.

Also, I really felt like writing "kill shot" :LOL:
 
I didn't know NZXT made a very Series-like PC case. Cool. At least I assume it was a response to the Series X, considering the video is just a couple of days old; I don't know the release date of the actual case.

Also not sure but suspect it could be significantly larger than the Series X.

 

Yeah, that looks larger than the XSX. The old build on Linus seems way smaller.
 
Hello all, I have a question regarding the NVMe SSD solution in the XSX. Do you think MS would have included the ability to double the SSD bandwidth to the system when a second SSD is plugged in, something along the lines of RAID 0? Would a feature like that be more cost-effective than Sony's ground-up solution? MS gets to put more performance towards other parts of the system while still delivering a generational improvement, and users who want even faster loading times can add the second SSD.
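For scale, a rough load-time calculation under that hypothetical (the 2.4 GB/s raw figure is the announced Series X SSD spec; the doubling and the 20 GB working set are assumptions for illustration only):

```python
# Hypothetical RAID 0 load-time arithmetic (illustration, not a real feature).
RAW_GBPS = 2.4  # announced Series X raw SSD throughput
GAME_GB = 20.0  # assumed amount of data to load

single = GAME_GB / RAW_GBPS          # one drive
striped = GAME_GB / (2 * RAW_GBPS)   # hypothetical RAID 0 across two drives

print(round(single, 1), round(striped, 1))  # 8.3 4.2 (seconds)
```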
 
No.
If it can't be counted on existing in ALL Series X consoles, it can't be used as a basis for games.
Right. You might be able to benefit from a faster drive in terms of load times (there really aren't drives that are much faster for sale at the moment), but it shouldn't impact gameplay beyond that.
 
It's not split pools. The CPU can access all the RAM, just at 336 GB/s. The GPU can only access 10 GB at 560 GB/s (as I understand it). It may even be that all RAM is addressable by all components, but the GPU will hit the lower bandwidth if reaching into the standard memory pool:

"Memory performance is asymmetrical - it's not something we could have done with the PC," explains Andrew Goossen. "10 gigabytes of physical memory [runs at] 560GB/s. We call this GPU optimal memory. Six gigabytes [runs at] 336GB/s. We call this standard memory. GPU optimal and standard offer identical performance for CPU, audio and file IO. The only hardware component that sees a difference is the GPU."

If the CPU, audio and IO couldn't access the 'GPU optimal RAM', they wouldn't see identical performance for the two memory types.

I'm pretty sure that both the CPU and GPU have access to all the RAM. Outside of the OS being limited to the slow pool, it's up to the developers how they use the game-accessible RAM.

Regards,
SB
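As a back-of-envelope illustration of how that asymmetry plays out for the GPU, here's a time-weighted (harmonic) mix of the two pool bandwidths; the 80/20 traffic split is an arbitrary assumption, not a measured workload:

```python
# Effective GPU bandwidth when traffic is split between the 10 GB
# "GPU optimal" pool (560 GB/s) and the 6 GB "standard" pool (336 GB/s).
# Harmonic mix: each byte from the slower pool costs proportionally more time.
def effective_bandwidth(frac_fast: float, fast: float = 560.0, slow: float = 336.0) -> float:
    time_per_byte = frac_fast / fast + (1.0 - frac_fast) / slow
    return 1.0 / time_per_byte

print(round(effective_bandwidth(1.0), 1))  # 560.0 -- all traffic in the fast pool
print(round(effective_bandwidth(0.8), 1))  # 494.1 -- an assumed 80/20 split
```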
 
Ah OK, going to read into that. Lots of new names this generation: texture space shading, sampler feedback, velocity architecture, DirectStorage, BCPack, geometry engine. I can understand it's confusing for the average Joe who isn't tech-oriented, but it probably sounds cool to anyone's ears :p
 
Haven't seen much about XBSX audio.

Project Acoustics – Incubated over a decade by Microsoft Research, Project Acoustics accurately models sound propagation physics in mixed reality and games, employed by many AAA experiences today. It is unique in simulating wave effects like diffraction in complex scene geometries without straining CPU, enabling a much more immersive and lifelike auditory experience. Plug-in support for both the Unity and Unreal game engines empower the sound designer with expressive controls to mold reality. Developers will be able to easily leverage Project Acoustics with Xbox Series X through the addition of a new custom audio hardware block.
 
https://devblogs.microsoft.com/dire...edback-some-useful-once-hidden-data-unlocked/
https://devblogs.microsoft.com/dire...on-shaders-reinventing-the-geometry-pipeline/
https://devblogs.nvidia.com/texture-space-shading/
https://devblogs.nvidia.com/introduction-turing-mesh-shaders/
 
I know this has been brought up before, but were there any actual thoughts on whether the INT4 and INT8 RPM would be a good fit for AI upscaling?

Would be interesting if MS made it available in the Xbox API, so the XSX could render at 1440p and AI-upscale to 4K. That would be a lot of TF for that resolution. Not sure how it behaves with RT, but in the vid they show Control, so maybe not too badly? Maybe run games at 1440p120.
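The pixel arithmetic behind the 1440p-to-4K idea is simple enough to sketch (just resolution math, nothing vendor-specific):

```python
# Shading cost ratio: native 4K vs rendering at 1440p and upscaling.
res_1440p = 2560 * 1440  # 3,686,400 pixels
res_4k = 3840 * 2160     # 8,294,400 pixels

print(res_4k / res_1440p)  # 2.25 -- native 4K shades 2.25x more pixels per frame
```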
 
Will be interesting to see how hard DLSS 2.0 hits the tensor cores, because RTX should have way more throughput for that kind of math than the Series X.
 
It's lower precision, so my answer is: it depends. Lower precision may be better for different things, different algorithms, different use cases, etc.
Most deep learning networks run at FP16, IIRC.
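To make the "lower precision" point concrete, here's what FP16 rounding looks like, using Python's built-in half-precision struct format (no GPU specifics assumed):

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a value through IEEE 754 half precision (struct format 'e')."""
    return struct.unpack('e', struct.pack('e', x))[0]

print(to_fp16(0.1))           # 0.0999755859375 -- only ~3 decimal digits survive
print(to_fp16(2048.0 + 1.0))  # 2048.0 -- fp16 spacing at this magnitude is 2
```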
 
To me it's rather obvious that they've done the upclock already, before the reveal, with the sole purpose of "kill-shotting" the 10 TF Sony machine. 12 TF is more than anyone expected within a sensible budget.
 
Yah, looking at this, the tensor cores in RTX work primarily with FP16, so I think it's really unlikely we see something like DLSS 2.0 unless it's a very cheap operation.
 