Xbox Series S [XBSS] (Lockhart) General Rumors and Speculation *spawn*

Actually, if Lockhart and Anaconda really can treat 100GB of SSD space as virtual memory, then the 7.5GB might not be a big deal. If you've already worked out what can be treated as pageable (like most assets, audio, menus) then the same will be true for both Anaconda and Lockhart.
I've never subscribed to that interpretation.
I believe the 100GB was just an example, and is just the game package or part of it, and the way you can choose to access it.
So the engine can choose to access the assets as if they were an extension of memory, not a separate, cordoned-off 100GB section of the SSD.
You create some rules and let the system page data in, on the fly, hundreds of times a frame as you move around the environment.

Lockhart might have a lot less memory, but I'm guessing what matters is that 7.5GB is enough for "core" data like game code and gameplay-influencing physics, and also for immediate minimum-level LODs etc. And as Lockhart is proportionately more likely to be drawing less detailed frames (lower LODs, less texture data etc), they can possibly lean on the virtual memory as an extension of regular memory even more heavily than Anaconda. E.g. for an area of the screen that might need four 64 x 64 texture tiles to be transferred in at 4K, 1080p might need only one, meaning you have more virtual memory transfers left over to support something else.
I just think this would be pretty hard: the game would expect to be able to access 9GB of memory, and swapping in and out at a system level, invisible to the game engine, sounds like it would be prone to misses, and those misses would have a huge impact on the game. But if it's possible, I'd be down with that.
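One way to picture why a miss wouldn't have to be catastrophic: with tiled/streamed textures, a tile that isn't resident yet can fall back to a coarser mip that is, so the worst case is a blurrier frame rather than a stall. Here's a toy sketch of that kind of residency cache (all names hypothetical, nothing to do with how the Velocity Architecture actually works):

```cpp
// Toy sketch of a demand-paged texture tile cache (illustration only).
// A request either returns a resident tile or pages it in and returns
// nullptr, in which case the renderer falls back to a coarser mip that
// is always resident, so a "miss" degrades quality instead of stalling.
#include <cstdint>
#include <list>
#include <unordered_map>
#include <vector>

struct Tile { std::vector<uint8_t> texels; };  // e.g. one 64x64 texture tile

class TileCache {
public:
    explicit TileCache(size_t maxResidentTiles) : capacity_(maxResidentTiles) {}

    // Returns the tile if resident, otherwise starts paging it in and
    // returns nullptr so the caller can use its low-detail fallback.
    const Tile* request(uint64_t tileId) {
        auto it = resident_.find(tileId);
        if (it != resident_.end()) {
            lru_.splice(lru_.begin(), lru_, it->second.lruPos);  // mark as recently used
            return &it->second.tile;
        }
        pageIn(tileId);   // will be available on a later request
        return nullptr;   // miss: render with the coarser mip this frame
    }

private:
    void pageIn(uint64_t tileId) {
        if (resident_.size() >= capacity_) {      // evict the least recently used tile
            resident_.erase(lru_.back());
            lru_.pop_back();
        }
        lru_.push_front(tileId);
        resident_.emplace(tileId, Entry{loadFromSSD(tileId), lru_.begin()});
    }

    static Tile loadFromSSD(uint64_t /*tileId*/) {
        return Tile{std::vector<uint8_t>(64 * 64 * 4)};  // stand-in for a real SSD read
    }

    struct Entry { Tile tile; std::list<uint64_t>::iterator lruPos; };

    size_t capacity_;
    std::list<uint64_t> lru_;                        // front = most recently used
    std::unordered_map<uint64_t, Entry> resident_;
};
```

The load here is synchronous just to keep the sketch short; a real streamer would issue an async SSD read and let the finer tile show up a frame or two later, which is (as I understand it) exactly the failure mode SFS is designed to hide.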

1X versions of the games would give some distinct advantages: settings, graphics and performance modes, etc.
I think a lot of the really bad image quality of the XO is down to resolution though; it even makes the same textures look bad. Extremely high AF, AA and double the resolution could go a really long way.
On the one hand, in a basic BC mode the next-gen consoles should have a lot of power left over; on the other hand, if regular compute shaders could do this fast enough, I don't see why Nvidia would have stuffed their GPUs with Tensor cores. Though over time, techniques do tend to become better, faster and less expensive.

So ... basically ... I have no idea. :runaway:
I don't disagree. The only thing I'd say, though, is that unlike Nvidia having to do it on PC, you know how much TF you have to dedicate to it in this circumstance.
3TF.
 
I've never subscribed to that interpretation.
I believe the 100GB was just an example, and is just the game package or part of it, and the way you can choose to access it.
So the engine can choose to access the assets as if they were an extension of memory, not a separate, cordoned-off 100GB section of the SSD.

Whether or not the 100GB is a literal amount (it might be representative of a limit on the addressable range per game), I agree it's probably not a separate partition. The thought that struck me was just what a good way this is to make use of the SSD and to make it as transparent as possible. Paging on demand means you naturally tend to spread out accesses due to the way game worlds exist in 3D space (data is physically spread out around the game world), it means you probably keep latency low automatically, and it means you only transfer what you need or are very likely to need (the ideal for any type of transfer system).

It fits very well, from what I understand, with the idea behind sampler feedback and SFS. In fact, keeping textures in virtual memory and using SF/SFS to automate the process of fetching and blending seems like a really good fit.

I think running games on Lockhart will turn out remarkably well given the memory differences - especially if the virtual memory is of the same size and works the same way. It strikes me as a very scalable approach!
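To spell out the loop I mean (with made-up types, not the actual D3D12 sampler feedback API): the GPU records the finest mip it actually wanted for each tile, and the CPU side compares that against what's resident and queues SSD reads for anything finer.

```cpp
// Rough shape of a sampler-feedback-driven streaming pass (hypothetical
// types, not the real D3D12 API). Lower mip index = finer detail.
#include <cstdint>
#include <queue>
#include <vector>

struct TileLoad { uint32_t tileIndex; uint8_t mip; };

// feedback[i] = finest mip the shaders sampled for tile i last frame (255 = untouched)
// resident[i] = finest mip currently resident for tile i
std::queue<TileLoad> planStreaming(const std::vector<uint8_t>& feedback,
                                   std::vector<uint8_t>& resident)
{
    std::queue<TileLoad> loads;
    for (uint32_t i = 0; i < feedback.size(); ++i) {
        if (feedback[i] == 255) continue;        // tile wasn't sampled, leave it alone
        if (feedback[i] < resident[i]) {         // finer detail was wanted than we have
            loads.push({i, feedback[i]});        // queue an SSD read for the finer mip
            resident[i] = feedback[i];           // optimistic; a real engine waits for the read
        }
    }
    return loads;                                // handed to the async SSD reader
}
```

Lockhart would run exactly the same loop, just over a smaller feedback map, which is the "four tiles at 4K vs one at 1080p" point above in practice.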

I just think this would be pretty hard: the game would expect to be able to access 9GB of memory, and swapping in and out at a system level, invisible to the game engine, sounds like it would be prone to misses, and those misses would have a huge impact on the game. But if it's possible, I'd be down with that.

1X versions of the games would give some distinct advantages: settings, graphics and performance modes, etc.
I think a lot of the really bad image quality of the XO is down to resolution though; it even makes the same textures look bad. Extremely high AF, AA and double the resolution could go a really long way.

I wasn't really thinking about X1 BC there (but it's not beyond the realms of possibility IMO), sorry if that wasn't clear!

I was thinking about managing both Lockhart and Anaconda versions of a next-gen game and minimising developer considerations wrt the memory setups.
 
Hard to say. Nvidia use very powerful Tensor cores to do that, and I have no idea how it would fare on regular compute shaders. Tensor cores seem to support a range of int and float formats, and while the XSX does support accelerated int8 and int4 rates, I don't know if these could be used, or if they'd be fast enough even if they could.

On the one hand, in a basic BC mode the next-gen consoles should have a lot of power left over; on the other hand, if regular compute shaders could do this fast enough, I don't see why Nvidia would have stuffed their GPUs with Tensor cores. Though over time, techniques do tend to become better, faster and less expensive.

So ... basically ... I have no idea. :runaway:

In my opinion, the hardware-accelerated int8/int4 method used in XSX can perform highly efficient ray tracing. This has also been confirmed by developer sources. The same hardware unit is likely to work well for DirectML upgrades in XSX.
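For what it's worth, the reason int8/int4 rates matter here is quantised inference: weights and activations get stored as 8-bit integers plus a scale factor, so the same ALUs can push several times the MACs per clock of fp32. A toy dot product to show the idea (purely illustrative, nothing to do with DirectML's actual API):

```cpp
// Toy int8 quantised dot product (illustration only, not DirectML).
// Values are stored as int8 plus a per-tensor scale; the accumulator is
// int32 to avoid overflow, and the result is scaled back to float once.
#include <cstdint>
#include <cstdio>
#include <vector>

float quantisedDot(const std::vector<int8_t>& act, float actScale,
                   const std::vector<int8_t>& wgt, float wgtScale)
{
    int32_t acc = 0;
    for (size_t i = 0; i < act.size(); ++i)
        acc += int32_t(act[i]) * int32_t(wgt[i]);   // the int8 MACs the hardware accelerates
    return float(acc) * actScale * wgtScale;        // dequantise once per dot product
}

int main() {
    std::vector<int8_t> act = {12, -7, 90, 3};
    std::vector<int8_t> wgt = {25, 14, -3, 110};
    std::printf("%f\n", quantisedDot(act, 0.05f, wgt, 0.02f));
}
```

Whether that throughput is actually enough for a DLSS-style network at 4K is the open question above, but that's the mechanism being talked about.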
 
It fits very well, from what I understand, with the idea behind sampler feedback and SFS. In fact, keeping textures in virtual memory and using SF/SFS to automate the process of fetching and blending seems like a really good fit.

I think running games on Lockhart will turn out remarkably well given the memory differences - especially if the virtual memory is of the same size and works the same way. It strikes me as a very scalable approach!
Yeah, I agree with you; I was still stuck thinking about BC.
10GB sounds like an extremely low amount, especially as my ongoing view has been 12GB.
But for next-gen games, my view is that Sony and MS know what they're doing, until proven otherwise.
So if 10GB with an SSD was deemed enough, I'm inclined to believe it will be. I don't think MS would put themselves in the same situation they found themselves in with the XO.
With the lower texture sizes, buffer sizes and framebuffer size, as long as the GDDR6 bandwidth and SSD are fast enough, I don't see it being a problem.
It will take a bit more texture handling by the studio though; the bigger the difference, the more work will be required.

I wasn't really thinking about X1 BC there (but it's not beyond the realms of possibility IMO), sorry if that wasn't clear!

I was thinking about managing both Lockhart and Anaconda versions of a next-gen game and minimising developer considerations wrt the memory setups.
Yeah, I get you now.
 
Not enough power for DLSS. There's not enough for Anaconda either, I don't think. I mean, you can do it using compute; Nvidia did do this for a little while, IIRC. But the results pale in comparison to their DLSS 2.0 solution running on their Tensor hardware.

If you want better quality, you're going to need to crush a larger network faster.
Well, MS's ML solution is also supposed to be performing LOD management in the Velocity Architecture as well as any upres, so whatever ML hardware the Series X has will be in constant use, it would seem.
 
Well, MS's ML solution is also supposed to be performing LOD management in the Velocity Architecture as well as any upres, so whatever ML hardware the Series X has will be in constant use, it would seem.
I'm excited for the potential, but I don't know the costs or quality, so I'm just waiting to see something before making further statements.
 
This bit is interesting regarding streaming...

[attached image]


What's Dante & Edinburgh?

At least I saw no more LockharD. LOL

Tommy McClain

Nested ifs.

Yuck.
 
I doubt they'll tweak the CPU clocks without looking at other parts of the system too. For Durango, they bumped both the CPU and GPU.
The CPUs in these consoles sip power compared to the GPU portions. Lockhart, even if they do manage to get it up to 5 or 6 TFLOPs like I was told they're testing for, would use close to a third of the power of the XSX.
 
This bit is interesting regarding streaming...

[attached image]


What's Dante & Edinburgh?

At least I saw no more LockharD. LOL

Tommy McClain
Hmm, so it's still debatable whether it will release. That likely comes down to what the PS5 is priced at, I imagine.
 
RIP, portable Lockhart dream. I was hoping that at 3nm they could make a portable Xbox using the Lockhart APU, but that isn't happening with a 3.6GHz clock on the CPU. 3.2GHz with a 4TFLOP GPU would've been perfect. Ah well.

3nm is years out. Are you familiar enough with the characteristics of the node and Zen to write a portable off based on a 13% upclock? :p
 
3nm is years out.

I'm aware. It was 2023 at the earliest, but COVID has probably pushed that back to 2024 or even 2025.

Are you familiar enough with the characteristics of the node and Zen to write a portable off based on a 13% upclock? :p

Looking at the 4800U clocks, I just don't see how it could be done whilst also adding in an RDNA GPU part. Given that TDP is based on base clocks, going from 1.8GHz to 3.6GHz is a tall order. I could be wrong, of course.
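Rough numbers on why that's a tall order: dynamic power scales roughly with C·V²·f, and doubling the clock usually needs a voltage bump on top. With completely made-up voltage/frequency points and an assumed 10W CPU budget, a sketch of the arithmetic:

```cpp
// Back-of-the-envelope dynamic power scaling: P ~ C * V^2 * f.
// All of these numbers are assumptions for illustration; real silicon
// has its own voltage/frequency curve and static power on top.
#include <cstdio>

int main() {
    const double baseFreqGHz = 1.8, targetFreqGHz = 3.6;
    const double baseVolts   = 0.85, targetVolts  = 1.10;  // hypothetical V/f points
    const double basePowerW  = 10.0;                        // assumed CPU cluster budget

    // Power scales with the frequency ratio times the square of the voltage ratio.
    const double scale = (targetFreqGHz / baseFreqGHz)
                       * (targetVolts / baseVolts) * (targetVolts / baseVolts);
    std::printf("~%.1fx dynamic power (%.0f W -> %.0f W)\n",
                scale, basePowerW, basePowerW * scale);
}
```

So even with those charitable assumptions you'd be looking at roughly 3x the CPU power for 2x the clock, which is why a handheld part wants the low base clock rather than the console clock.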

Once AMD's Van Gogh chip is out in the wild, we will have the information to make more accurate predictions, given that it's most likely based on the Lockhart chip but downclocked for portable devices.
 