Actually, if Lockhart and Anaconda really can treat 100GB of SSD space as virtual memory, then the 7.5GB might not be a big deal. If you've already worked out what can be treated as pageable (like most assets, audio, menus) then the same will be true for both Anaconda and Lockhart.
You create some rules and let the system page data in, on the fly, hundreds of times a frame as you move around the environment.
Lockhart might have a lot less memory, but I'm guessing what matters is that 7.5 GB is enough for "core" data like game code and gameplay-influencing physics, and also for immediate minimum-level LODs etc. And as Lockhart is proportionately more likely to be drawing less detailed frames (lower LODs, less texture data etc.), they can possibly lean on the virtual memory as an extension of regular memory even more heavily than Anaconda. E.g., for an area of the screen that might need four 64 x 64 texture tiles to be transferred in at 4K, 1080p might need only one, meaning you have more available virtual memory transfers left to support something else.
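To put rough numbers on that tile idea (everything here is assumed for illustration, including the one-texel-per-pixel coverage and the 64x64 tile size carried over from above), a quick sketch:

```python
# Rough illustration with assumed numbers: if streaming works in fixed-size texture
# tiles, the number of tiles you need resident scales with output resolution.
TILE_TEXELS = 64 * 64   # one streamed tile, as in the example above

def tiles_needed(screen_px, texels_per_pixel=1.0):
    """Tiles needed to cover the screen at roughly one texel per output pixel (assumed)."""
    return int(screen_px * texels_per_pixel / TILE_TEXELS)

for name, w, h in [("4K", 3840, 2160), ("1080p", 1920, 1080)]:
    print(f"{name}: ~{tiles_needed(w * h):,} tiles worst case")
```

1080p comes out at roughly a quarter of the tiles of 4K, which is the point above: the same per-frame transfer budget leaves a lot more headroom on the lower-resolution machine.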
Don't disagree. The only thing I'd say, though, is that unlike Nvidia having to do it on PC, you know how much TF you have to dedicate to it in this circumstance.
I've never subscribed to that interpretation.
I believe the 100GB was just an example; it's just the game package (or part of it) and the way you can choose to access it.
So the engine can choose to access the assets as if they were an extension of memory, not a separate cordoned-off 100GB section of the SSD.
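For what it's worth, one way to picture "access the assets as if they were an extension of memory" is ordinary OS memory mapping. A minimal Python sketch of the analogy (the package filename, offset and size are all made up; this is not how the Xbox API actually exposes it):

```python
# Analogy only: memory-map a large asset package and read it like a byte array.
# The OS pages data in from storage on demand instead of loading the whole file.
import mmap

with open("game_package.pak", "rb") as f:                    # hypothetical package file
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as assets:
        # Touching this range is what triggers the page-in from disk.
        texture_bytes = assets[0x4000:0x4000 + 64 * 1024]    # made-up offset/size

print(len(texture_bytes))
```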
I just think this would be pretty hard: the game would be expecting to be able to access 9GB of memory, and swapping data in and out at a system level, invisible to the game engine, sounds like it would be prone to misses, and those misses would have a huge impact on the game. But if possible, I'd be down with that.
The 1X versions of the games would give some distinct advantages: settings, graphics and performance modes, etc.
I think a lot of the really bad image quality of the XO is down to resolution though; it even makes the same textures look bad. Extremely high AF, AA and double resolution could go a really long way.
Hard to say; Nvidia use very powerful Tensor cores to do that, and I have no idea how it would fare on regular compute shaders. Tensor cores seem to support a range of int and float formats, and while the XSX does support accelerated int8 and int4 rates, I don't know if these could be used or if they'd be fast even if they could.
On the one hand in a basic BC mode the next gen consoles should have a lot of power left over, but on the other hand if regular compute shaders could do this fast enough, I don't see why Nvidia would have stuffed their GPUs with Tensors. Though over time techniques do tend to become better, faster and less expensive.
So ... basically ... I have no idea.
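Some very rough numbers on that, using the commonly quoted figures from memory (so treat them as approximate, and the Turing number is a marketing peak):

```python
# Back-of-envelope: Series X shader-core ML rates vs a Turing card's tensor cores.
# Figures are the commonly quoted ones, from memory; treat as approximate.
xsx = {"FP32 TFLOPS": 12.1, "FP16 TFLOPS": 24.3, "INT8 TOPS": 49, "INT4 TOPS": 97}
turing_tensor_fp16 = 110    # roughly 2080 Ti-class tensor throughput (marketing peak)

for fmt, rate in xsx.items():
    print(f"XSX {fmt}: {rate} ({rate / turing_tensor_fp16:.0%} of ~110 tensor TFLOPS)")
```

Even the INT4 path only gets into the same ballpark as dedicated tensor hardware if the network tolerates 4-bit inference, and those same ALUs are also the ones rendering the frame.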
Yeah, I agree with you; I was still stuck thinking about BC. It fits very well, from what I understand, with the idea behind sampler feedback and SFS. In fact, keeping textures in virtual memory and using SF/SFS to automate the process of fetching and blending seems like a really good fit.
I think running games on Lockhart will turn out remarkably well given the memory differences - especially if the virtual memory is of the same size and works the same way. It strikes me as a very scalable approach!
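A schematic of that sampler-feedback loop, just to make the idea concrete (this is a made-up Python sketch of the concept, not the actual D3D12 sampler feedback API; BUDGET, read_tile_from_ssd and the tile IDs are all assumptions):

```python
# Sketch of the idea: the GPU reports which texture tiles it actually sampled last
# frame, and the streamer pages only those in, evicting the least recently used.
from collections import OrderedDict

resident = OrderedDict()    # tile_id -> tile data, kept in LRU order
BUDGET = 4096               # assumed cap on resident tiles (smaller on a Lockhart-class box)

def on_frame(feedback_tiles, read_tile_from_ssd):
    for tile in feedback_tiles:                        # tiles the renderer tried to sample
        if tile in resident:
            resident.move_to_end(tile)                 # still hot, keep it resident
        else:
            resident[tile] = read_tile_from_ssd(tile)  # page in on demand from the package
    while len(resident) > BUDGET:
        resident.popitem(last=False)                   # evict the coldest tile
```

The scalability point falls out of this: Lockhart and Anaconda can run the same loop, and only the residency budget (and the resolution driving the feedback) changes.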
Yeah, get ya now. I wasn't really thinking about X1 BC there (but it's not beyond the realms of possibility IMO), sorry if that wasn't clear!
I was thinking about managing both Lockhart and Anaconda versions of a next gen game and minimising developer considerations wrt the memory setups.
Well, MS's ML solution is also supposed to be performing LOD management in the Velocity Architecture as well as any upres, so whatever ML hardware the Series X has will be in constant use, it would seem.
Not enough power for DLSS. There's not enough for Anaconda either, I don't think. I mean, you can do it using compute; Nvidia did do this for a little while, IIRC. But the results pale next to their DLSS 2.0 solution running on their Tensor hardware.
If you want better quality, you're going to need to crush a larger network faster.
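To illustrate what "crush a larger network faster" means in budget terms, here is some purely made-up arithmetic (the ops-per-pixel figure and the 2 ms slice are assumptions, and none of this describes DLSS or any real console implementation; the 49 TOPS is the commonly quoted XSX INT8 rate):

```python
# Illustrative per-frame budget for an upscaling network, with assumed numbers.
ops_per_output_pixel = 2_000          # assumed network cost per output pixel
out_pixels = 3840 * 2160              # 4K output
fps = 60
needed_tops = ops_per_output_pixel * out_pixels * fps / 1e12
print(f"~{needed_tops:.1f} TOPS sustained just for the upscale")

budget_ms = 2.0                       # slice of a 16.7 ms frame you might spare
xsx_int8_tops = 49                    # quoted INT8 rate, shared with rendering
affordable = xsx_int8_tops * 1e12 * (budget_ms / 1000) / out_pixels
print(f"~{affordable:,.0f} INT8 ops per output pixel fit in that {budget_ms} ms slice")
```

So the question is really whether a network small enough to fit that kind of slice can get close to what a much larger one does on dedicated tensor hardware.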
I’m excited for the potential; but I don’t know the costs or quality. So I’m just waiting to see something before making further statements.
Interesting, thank you!
I wonder if this could mean pre-production validation and testing is now done and they have been able to see that the Lockhart thermals look good without a downclock?
This bit is interesting regarding streaming...
What's Dante & Edinburgh?
At least I saw no more LockharD. LOL
Tommy McClain
Same CPU speed, same I/O performance, slightly less RAM (but all one speed) and half the TFLOPs. Priced at $200 or $250, it could sell very well vs a $400 Xbox Series X.
The CPUs in these consoles sip power compared to the GPU portions. Lockhart, even if they do manage to get it up to the 5 or 6 TFLOPs I was told they are testing for, would use close to 1/3rd the power of the XSX.
I doubt they'll tweak the CPU clocks without looking at other parts of the system too. For Durango, they bumped both the CPU and GPU.
Hmm, so it's still debatable if it will release. This likely is coming down to what PS5 is priced at, I imagine.
RIP portable Lockhart dream. I was hoping that at 3nm they could make a portable Xbox using the Lockhart APU, but that isn't happening with a 3.6GHz clock on the CPU. 3.2GHz with a 4TFLOP GPU would've been perfect. Ah well.
3nm is years out.
Are you familiar enough with the characteristics of the node and Zen to write a portable off based on a 13% upclock?
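For reference, the ~13% presumably comes from the reported 3.6GHz versus the 3.2GHz wished for above:

```python
print(f"{3.6 / 3.2 - 1:.1%} upclock")   # 12.5%, i.e. roughly the 13% mentioned
```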
You can still dream!
Confirmed by TSMC earlier this month: testing 3nm in Q1 2021, mass production by H2 2022.