Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

Regarding BC, I was under the impression from posts here that MS did this using SW layers. Reading this XSX tech article https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs, it says "Series X can technically run the entire Xbox One catalogue, but this time it's done with no emulation layer - it's baked in at the hardware level". So maybe from here on, the BC strategy from both Sony and MS will be solved in hardware?
 
Regarding BC, I was under the impression from posts here that MS did this using SW layers. Reading this XSX tech article https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs, it says "Series X can technically run the entire Xbox One catalogue, but this time it's done with no emulation layer - it's baked in at the hardware level". So maybe from here on, the BC strategy from both Sony and MS will be solved in hardware?
Just means it doesn't need an emulator or to be repackaged like the X360 titles etc.
MS will still basically abstract the hardware, so you're not writing directly to it. So it will use the layers, as you call them.
 
Regarding BC, I was under the impression from posts here that MS did this using SW layers. Reading this XSX tech article https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs, it says "Series X can technically run the entire Xbox One catalogue, but this time it's done with no emulation layer - it's baked in at the hardware level". So maybe from here on, the BC strategy from both Sony and MS will be solved in hardware?

I think they are making the distinction between 360-to-One and One-to-XSX.

We know that 360 to XO involves some software layer, repackaging of binaries and so on.

In contrast, from the One to the Series X, given both are based on x86-64 CPUs and AMD GPUs, that's the hardware part they referenced. The same binary* from the XO will be used on the Series X; the emulation will be handled at the API level (a toy illustration is sketched below).

* They could load different assets, textures, etc.
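To make the "same binary, API-level translation" idea concrete, here's a toy sketch (all of these function and table names are invented for illustration; they don't correspond to any real Xbox API): the old binary keeps making its usual calls, and a thin shim forwards each one to the new platform's implementation instead of emulating the CPU.

```python
# Hypothetical illustration of API-level translation: the game binary keeps
# calling the old graphics entry points, and a thin shim maps each call to
# the new platform's implementation. All names here are made up.

def new_platform_draw(vertex_count, start_vertex):
    # Placeholder for the new console's native driver call.
    print(f"native draw: {vertex_count} vertices from {start_vertex}")

def new_platform_set_texture(slot, texture_id):
    print(f"native bind: texture {texture_id} -> slot {slot}")

# The shim table: old call name -> adapter that forwards to the new driver.
SHIM_TABLE = {
    "XOne_Draw": lambda count, start=0: new_platform_draw(count, start),
    "XOne_SetTexture": lambda slot, tex: new_platform_set_texture(slot, tex),
}

def old_api_call(name, *args):
    # The unmodified binary "thinks" it is calling its usual API; the shim
    # routes the call rather than emulating x86 instructions.
    return SHIM_TABLE[name](*args)

old_api_call("XOne_SetTexture", 0, 42)
old_api_call("XOne_Draw", 3000)
```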
 
For GTA, if they have multiple characters like in GTA 5, it will help to do a fast transition between the characters.

Yup, although it always impressed me how quick GTA V on consoles made this feel, with the zoom out, the pan across the geography to the new protagonist, then the zoom down onto them.
 
Regarding BC, I was under the impression from posts here that MS did this using SW layers. Reading this XSX tech article https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs, it says "Series X can technically run the entire Xbox One catalogue, but this time it's done with no emulation layer - it's baked in at the hardware level". So maybe from here on, the BC strategy from both Sony and MS will be solved in hardware?
Right; the Xbox One catalog can be run without software emulation. So it's just native binaries they are working with here.
 
Cerny did say in the GDC presentation that it's fast enough to keep assets loaded based on the viewing frustum and load as you turn. At over 1 million IOPS and 5.5 GB/s raw, decompressed at wire speed to as much as 22 GB/s, it does add up to enough bandwidth and low enough latency.
If you unload the assets for rendering what isn't in view, how can you ray trace reflections and shadows for things that aren't in view?

Hmmm...... Screen space ray tracing.
 
If you unload the assets for rendering what isn't in view, how can you ray trace reflections and shadows for things that aren't in view?

I guess it depends on how much you unload: textures, say, but not geometry or tertiary object information. As Shifty said above, I think Mark Cerny used this as an example of how quickly the SSD could feed the engine, rather than as a suggestion for how games should manage data to feed the engine.
 
I guess it depends on how much you unload: textures, say, but not geometry or tertiary object information. As Shifty said above, I think Mark Cerny used this as an example of how quickly the SSD could feed the engine, rather than as a suggestion for how games should manage data to feed the engine.
Unfortunately that's given people some wild ideas about what the SSD is capable of. Multiple times I have read opinions suggesting this SSD solution could replace system memory, or at the least replace DDR4.

I do think there is enough time, even within the same frame, to go out to the SSD, pull textures back, and use them within that frame for the kind of situation quoted above, if required. They're likely going to be of lower quality depending on how much you need to retrieve, or you'd need a longer frame time to accommodate it.
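As a rough sanity check on the "within the same frame" point (my own back-of-envelope figures; the only official number here is the 5.5 GB/s raw throughput): a single NVMe read is typically on the order of a tenth of a millisecond, which is a small slice of a 16.7 ms frame, so the real constraint is the per-frame byte budget rather than the request latency.

```python
# Back-of-envelope check: can an SSD texture fetch land inside one frame?
# The latency figure below is a generic NVMe ballpark, not an official spec.
nvme_read_latency_ms = 0.1           # assumed order of magnitude per request
frame_ms_60 = 1000 / 60              # ~16.7 ms at 60 fps

raw_gbps = 5.5                       # PS5 raw throughput from Road to PS5
budget_60_mb = raw_gbps * 1000 / 60  # ~92 MB of raw reads per 60 fps frame

print(f"request latency as share of a 60 fps frame: {nvme_read_latency_ms / frame_ms_60:.1%}")
print(f"raw bytes available per 60 fps frame: ~{budget_60_mb:.0f} MB")
```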
 
If you unload the assets for rendering what isn't in view, how can you ray trace reflections and shadows for things that aren't in view?

Hmmm...... Screen space ray tracing.
You unload what you don't need. You can drop 1 or 2 mipmap levels and a geometry LoD for everything in the reflection projection cone, or something like that: 4 to 8 times less data. Lots of tricks are possible. It's not universal; some games will use this all the time, some won't at all, some will use it partially.
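For a sense of scale on the mip side (standard mipmap arithmetic with a made-up texel size; the exact savings in practice depend on compression and on what else you drop alongside the textures):

```python
# Texture footprint vs. dropped mip levels, assuming square power-of-two mips
# and a flat 4 bytes per texel (illustrative only; real formats are block compressed).
def top_mip_bytes(base_size, dropped_levels, bytes_per_texel=4):
    size = base_size >> dropped_levels        # halve each dimension per dropped level
    return size * size * bytes_per_texel

base = 4096
for dropped in range(3):
    mb = top_mip_bytes(base, dropped) / (1024 * 1024)
    print(f"top mip after dropping {dropped} level(s): {mb:.0f} MB")
# 4096^2 -> 64 MB, 2048^2 -> 16 MB, 1024^2 -> 4 MB: each dropped level is 4x less.
```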
 
Unfortunately that's given people some wild ideas about what the SSD is capable of. Multiple times I have read opinions suggesting this SSD solution could replace system memory, or at the least replace DDR4.
That statement was from Microsoft. It's accessed as if you have 100 GB of memory. I can't blame people for misunderstanding.
 
That statement was from Microsoft. It's accessed as if you have 100 GB of memory.
That quote is in response to virtual paging, and only virtual paging. It isn't a quote about delivery speeds, but about how the XSX has an address space of up to 100 GB, meaning it can virtually address all the assets it needs as long as they fit within 100 GB of addressable space. But the maximum possible throughput of what is transferable is still limited by drive speed. No one I have seen discuss this has gotten it wrong. It is designed to support the improvement of PRT/tiled resources, used (mainly) for the purpose of streaming.

But looking at how we do paging on the OS side of things, it could potentially allow the system to page in and out more than just textures. We're talking about paging large data structures that could hold game logic save states, AI routines, animation routines, audio, etc. The enemy is no longer on screen? Page it out. The enemy is back? Page it all back in. As I understand it, strictly speaking we're talking about being able to address pages across up to 100 GB worth of space: paging the UI, paging menu data, paging the Xbox UI, etc.

I assume they have always been doing something like this, but there were limitations (the hard drive) on how much back-and-forth paging you could do, with a smaller virtual memory size or perhaps no paging at all (I'm not really sure, TBH), and that may have put a huge damper on things for this generation. So instead of having to randomly seek pages on the HDD, they just re-copied all their data in sequential order and had games read massive blocks in and out as required. But that most certainly limited encounter variety and game design, because you couldn't just dynamically generate things; dynamic paging isn't really available when you're limited by HDD performance.

An intelligent paging system will keep the most used pages in memory and swap out the least used ones when new things need to be paged in (something like the LRU sketch at the end of this post). They used the words "instantly accessible". Accessibility doesn't necessarily mean 100 GB can be transferred immediately.

I've yet to see individuals talk about moving core systems out of memory onto the SSD (and operating from the SSD) except when the topic is Sony's SSD.
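Purely to illustrate the "keep the most used pages resident, evict the least used" idea, and not how the Xbox OS actually implements it, a toy least-recently-used residency cache might look like this:

```python
from collections import OrderedDict

class PageCache:
    """Toy LRU residency cache: a fixed RAM budget of pages backed by a much
    larger addressable space on the SSD. Illustrative only."""

    def __init__(self, resident_capacity):
        self.capacity = resident_capacity
        self.resident = OrderedDict()   # page_id -> data, least recently used first

    def access(self, page_id):
        if page_id in self.resident:
            self.resident.move_to_end(page_id)              # mark as most recently used
            return self.resident[page_id]
        if len(self.resident) >= self.capacity:
            evicted, _ = self.resident.popitem(last=False)  # drop the least recently used page
            print(f"page {evicted} evicted to make room")
        data = f"<contents of page {page_id} read from SSD>"  # stand-in for the real read
        self.resident[page_id] = data
        return data

cache = PageCache(resident_capacity=3)
for page in ["enemy_ai", "menu_ui", "audio_bank", "enemy_ai", "boss_anim"]:
    cache.access(page)
# "menu_ui" is evicted first because "enemy_ai" was touched again more recently.
```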
 
I guess it depends on how much you unload: textures, say, but not geometry or tertiary object information. As Shifty said above, I think Mark Cerny used this as an example of how quickly the SSD could feed the engine, rather than as a suggestion for how games should manage data to feed the engine.

Going back and watching Road to PS5, the only thing Cerny mentions is loading texture data based on the view frustum, not other assets like models etc. I think the hype on forums about large-scale loading and unloading of world data based on the view frustum is getting way ahead of the likely outcome. Most likely we'll see games designed around a tiled game world that is smartly cached, like Spiderman, but with texture data loaded in on the fly. That likely suggests virtual texturing will be a popular solution this gen.
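A heavily simplified sketch of the virtual texturing loop being suggested (a generic version of the technique, not any particular engine's implementation; the tile names and the hard-coded feedback are invented): the renderer reports which tiles it actually sampled, and the streamer only fetches the ones that aren't already resident.

```python
# Generic virtual texturing loop, heavily simplified: a feedback pass reports
# which (texture, mip, tile) entries were sampled this frame, and only tiles
# not already resident are queued for the SSD. Names are illustrative.

resident_tiles = set()

def gather_feedback():
    # Stand-in for reading back the GPU feedback buffer; in a real engine this
    # would come from the renderer, not a hard-coded set.
    return {("rock_albedo", 0, (3, 5)), ("rock_albedo", 1, (1, 2)), ("road_normal", 0, (0, 0))}

def stream_tiles(requested):
    missing = requested - resident_tiles
    for tile in sorted(missing):
        # Placeholder for an asynchronous SSD read plus upload to the tile pool.
        print(f"streaming in tile {tile}")
        resident_tiles.add(tile)

for frame in range(2):
    stream_tiles(gather_feedback())
# Frame 0 streams all three tiles; frame 1 streams nothing because they are resident.
```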

 
It's Microsoft's fault that people believe the PS5 SSD will be able to instantly load and unload entire scenes of data as the player turns around?
You guys are just destroying any possibility of technological arguments here.

Using the SSD as generic memory is not happening; that is just false.

Using the SSD to load and unload scene data as the user turns is what the claim is. If you don't think it's possible, that's your prerogative. The math does add up, and the claims have been made.
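For what it's worth, the rough math (the throughput figures are the ones from Road to PS5; the turn duration is my own assumption):

```python
# Rough math on "load as the user turns": how much data can arrive during the turn?
raw_gbps = 5.5            # PS5 raw SSD throughput
typical_gbps = 8.5        # midpoint of the 8-9 GB/s typical compressed figure
turn_seconds = 0.5        # assumed time for a quick 180 degree camera turn

for label, rate in [("raw", raw_gbps), ("typical compressed", typical_gbps)]:
    print(f"{label}: ~{rate * turn_seconds:.1f} GB available during a {turn_seconds}s turn")
```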
 
Going back and watching Road to PS5, the section about world caching is particularly interesting. Instead of storing the same mailbox thousands of times on the SSD, you can store it once and load it into RAM as needed, because random reads are much, much better. So each chunk of the game world can have a larger variety of assets for the same amount of data, and also more data per chunk, because of the SSD performance. That makes much more sense as a streaming strategy than trying to load things, besides maybe textures, based on the view frustum. I'd imagine each world chunk could just be a list of assets with metadata about position and orientation; then you only have to store those assets on the SSD once, but you're still loading and unloading chunks of the world as they do in a game like Spiderman.
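A minimal sketch of that "chunk = asset references plus placement metadata" layout (entirely hypothetical names and sizes, just to make the de-duplication point concrete): the mailbox exists once on the SSD, and each chunk only stores an ID and a transform for it.

```python
from dataclasses import dataclass

@dataclass
class Placement:
    asset_id: str        # reference into the single shared asset store
    position: tuple      # (x, y, z)
    yaw_degrees: float

# Assets stored once on the SSD, keyed by ID (sizes are made up for illustration).
asset_sizes_mb = {"mailbox": 2.0, "oak_tree": 6.0, "park_bench": 1.5}

# Two world chunks that reuse the same assets with different placements.
chunks = {
    "chunk_12_07": [Placement("mailbox", (10, 0, 4), 90.0),
                    Placement("oak_tree", (22, 0, 9), 0.0)],
    "chunk_12_08": [Placement("mailbox", (210, 0, 4), 270.0),
                    Placement("park_bench", (215, 0, 2), 180.0)],
}

def chunk_load_cost(name, already_resident):
    """Unique asset data a chunk actually needs to read, given what's already in RAM."""
    needed = {p.asset_id for p in chunks[name]} - already_resident
    return sum(asset_sizes_mb[a] for a in needed), needed

resident = set()
for name in chunks:
    cost, needed = chunk_load_cost(name, resident)
    resident |= needed
    print(f"{name}: read {cost} MB ({sorted(needed)})")
# The second chunk doesn't pay for the mailbox again; only its placement list differs.
```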
 
He said the IOPS (technically over 1 million IOPS for a 12-channel interface) allow them to avoid needing to split the world into chunks (he was showing tiled space partitioning, megatexture style), and instead load individual assets as needed.
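Some quick per-frame arithmetic on the IOPS side (the 1 million-plus figure is from the presentation; the rest is my own division): even at 60 fps that is a five-digit number of individual read requests per frame, which is what makes per-asset rather than per-chunk loading plausible.

```python
# How many individual read requests fit in a frame at ~1 million IOPS?
iops = 1_000_000                  # the "over 1 million IOPS" figure from the talk
for fps in (30, 60):
    per_frame = iops / fps
    print(f"{fps} fps: ~{per_frame:,.0f} read requests per frame")
# Tens of thousands of small requests per frame is what lets you fetch
# individual assets instead of one big sequential chunk.
```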
 
You guys are just destroying any possibility of technological arguments here.

Using the SSD as generic memory is not happening; that is just false.

Using the SSD to load and unload scene data as the user turns is what the claim is. If you don't think it's possible, that's your prerogative. The math does add up, and the claims have been made.

Nothing I've posted suggests that I believe things can't be loaded on the fly. I went through the numbers to see how much data could be loaded per frame. The number was not zero.

There are people on this forum that have posted, since the very beginning, that entire scenes could be loaded and unloaded as the view frustum changes. I don't think that's realistic.
 
He said the IOPS (technically over 1 million IOPS for a 12-channel interface) allow them to avoid needing to split the world into chunks (he was showing tiled space partitioning, megatexture style), and instead load individual assets as needed.

All I see is him saying the SSD is fast enough that you don't need to store the next 30 seconds of data in RAM; you'll likely store the next one second in RAM instead. You can load a lot of data in one second, but the portion of that data that you can load dynamically per frame as the camera moves is relatively limited. It's still hundreds of times better than the previous gen. I still think there will be a mix of pre-loading as before, with some type of world-chunk strategy, plus other data that's loaded on the fly, with textures being a good candidate.
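To put numbers on "relatively limited per frame" and "hundreds of times better" (the HDD figure is a generic last-gen ballpark I'm assuming, not an official spec):

```python
# Per-frame streaming budget, SSD vs. a last-gen style HDD.
ssd_mbps = 8500              # ~8-9 GB/s typical compressed (5500 MB/s raw)
hdd_mbps = 75                # assumed ballpark for a console HDD with seek overhead

for fps in (30, 60):
    ssd_frame = ssd_mbps / fps
    hdd_frame = hdd_mbps / fps
    print(f"{fps} fps: ~{ssd_frame:.0f} MB/frame vs ~{hdd_frame:.1f} MB/frame "
          f"({ssd_frame / hdd_frame:.0f}x)")
# Roughly 100-300 MB per frame: plenty for textures trickling in, nowhere near
# enough to rebuild an entire scene's worth of RAM contents every frame.
```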
 
Nothing I've posted suggests that I believe things can't be loaded on the fly. I went through the numbers to see how much data could be loaded per frame. The number was not zero.

There are people on this forum that have posted, since the very beginning, that entire scenes could be loaded and unloaded as the view frustum changes. I don't think that's realistic.
I missed those posts. I see many claims that you can unload what's behind (including from Cerny) and reload it as the user turns; that's pretty much loading based on the frustum, or more precisely it would need a 180° penumbra around a 90° frustum. I fully agree you cannot teleport within one frame.
 