Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

Status
Not open for further replies.
I missed those posts. I see many claims that you can unload what's behind the camera (including from Cerny) and reload it as the user turns. That's pretty much frustum-based loading, or more precisely it would need something like a 180° penumbra around a 90° frustum. I fully agree you cannot teleport within one frame.

https://forum.beyond3d.com/posts/2122105/

This is the post. I think it's pretty well reasoned, but if there are any obvious problems I'm open to discussing it.

Some insight into data sizes for textures and models, assuming virtual texturing, would be pretty beneficial.
 
He said the IOPS (technically over 1 million IOPS for a 12-channel interface) allow them to avoid splitting the world into chunks (he was showing a tiled space partitioning, megatexture style); it can instead load individual assets as needed.

As he said, the chunks can be no more than lists of object IDs and their positions, with the actual object data loaded separately from a universal object library. You'd still probably want to group clusters of objects into these kinds of "chunks" instead of handling a completely unorganized object soup directly. That also ties in with LODing and culling hierarchies anyway.
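To make that concrete, here's a minimal sketch (all names hypothetical) of a chunk that stores only object IDs and per-instance transforms, with the actual asset data deduplicated through a shared library so each asset is requested once no matter how many instances reference it:

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical sketch: a world "chunk" stores only object IDs and
// transforms; the asset data itself lives once in a shared library
// and is streamed in on demand.
struct Transform { float x, y, z; };

struct ChunkEntry {
    uint32_t objectId;   // index into the shared object library
    Transform transform; // per-instance placement
};

struct Chunk {
    std::vector<ChunkEntry> entries; // no mesh/texture data here
};

struct ObjectLibrary {
    // objectId -> asset path; a real engine would map to resident
    // GPU resources with reference counts instead of strings.
    std::unordered_map<uint32_t, std::string> assets;

    // Collect the unique assets a set of chunks needs, so each is
    // loaded once regardless of how many instances reference it.
    std::vector<uint32_t> requiredAssets(const std::vector<Chunk>& chunks) const {
        std::unordered_map<uint32_t, bool> seen;
        std::vector<uint32_t> out;
        for (const Chunk& c : chunks)
            for (const ChunkEntry& e : c.entries)
                if (!seen[e.objectId]) { seen[e.objectId] = true; out.push_back(e.objectId); }
        return out;
    }
};
```

Two chunks that both place object 7 would produce a single load request for it, which is the point of separating placement data from the object library.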
 
As someone else asked, and I haven't seen an answer, let me ask it again...

Won't you need to keep some items in memory that are outside the forward frustum because of RayTracing and Reflections?
 
As someone else asked, and I haven't seen an answer, let me ask it again...

Won't you need to keep some items in memory that are outside the forward frustum because of RayTracing and Reflections?

I think most games are using a separate scene representation in the BVH that's usually a much lower LOD. I think DXR had some issues with changing LODs because BVH updates are costly. No idea how it'll work with AMD hardware and Sony's software.
 
I'm not sure if texture information is inside the BVH, maybe I'm taking the name too literally to mean only Bounding Volume Hierarchy data for Hit or No Hit decisions.
 
Just asking, but I guess there's no way to even moderately guess what AMD CPU would be equivalent to the PS5's? Even as a lower bound?

Since we can assume a core taken by the OS on both machines, it's likely more like 7 cores/14 threads than 8 cores/16, right? And for the PS5... 3.5GHz is the max, but assuming it doesn't drop too low from there...
 
As someone else asked, and I haven't seen an answer, let me ask it again...

Won't you need to keep some items in memory that are outside the forward frustum because of RayTracing and Reflections?
Yes, I said you keep what you need. Project the reflections and add that to the list of what you need. It's also bound to be a lower LOD and lower mipmaps. Not all games would be able to use this 100%. TPS games would be very good candidates.
 
Just asking, but I guess there's no way to even moderately guess what AMD CPU would be equivalent to the PS5's? Even as a lower bound?

Since we can assume a core taken by the OS on both machines, it's likely more like 7 cores/14 threads than 8 cores/16, right? And for the PS5... 3.5GHz is the max, but assuming it doesn't drop too low from there...
Roughly 4x the Jaguar performance of today.
 
Since we can assume a core taken by the OS on both machines, it's likely more like 7 cores/14 threads than 8 cores/16, right? And for the PS5... 3.5GHz is the max, but assuming it doesn't drop too low from there...
Not the right way to think about it.
On PC the OS still needs to be running on the CPU.
 
Har har. I get it though. No accurate way to guess relative to the current Ryzen line and custom parts, even on the same architecture... since it is so semi-customized.
I think @Dictator did comment on its performance offhand in a post. But it more or less implies it's not close to desktop equivalents.
 
Not close as in not accurate enough to make any conclusions, or just outright inferior in performance, I wonder? Of course either one would be expected, I guess, having to run in an APU with all kinds of constraints. I guess we shall see when the games are shown off... any day now...
 
Not close as in not accurate enough to make any conclusions, or just outright inferior in performance, I wonder? Of course either one would be expected, I guess, having to run in an APU with all kinds of constraints. I guess we shall see when the games are shown off... any day now...
I think the general consensus is that it’s a nice bump in performance and lifts a lot of restrictions, but not such a large increase in performance that they don’t need to manage it.
 
I think the general consensus is that it’s a nice bump in performance and lifts a lot of restrictions, but not such a large increase in performance that they don’t need to manage it.

Oh... so it might be better... that sounds nice. I was mainly thinking in terms of the console CPUs being weaker 1:1, having to account for design constraints desktops don't... the opposite is something I'm not complaining about.
 
You unload what you don't need. You can reduce 1 or 2 mipmap levels and geo LOD from the reflection projection cone or something, 4 to 16 times less data. Lots of tricks possible. It's not universal: some games will use this all the time, some won't at all, some will use it partially.
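The mip arithmetic behind that is easy to sketch: each mip level halves the resolution per axis, so skipping the top N levels of a texture's mip chain divides its footprint by roughly 4^N. A back-of-envelope helper (the function is made up for illustration):

```cpp
#include <cstddef>

// Back-of-envelope sketch: bytes for a mip chain starting at a given
// top level. Each level down is half the resolution per axis, i.e. a
// quarter of the bytes, so skipping the top N levels divides the
// whole chain's footprint by roughly 4^N.
static size_t mipChainBytes(size_t width, size_t height,
                            size_t bytesPerTexel, int skipLevels) {
    size_t total = 0;
    size_t w = width >> skipLevels;  // drop the highest-res levels
    size_t h = height >> skipLevels;
    while (w >= 1 && h >= 1) {
        total += w * h * bytesPerTexel;
        if (w == 1 && h == 1) break;
        w = (w > 1) ? w / 2 : 1;
        h = (h > 1) ? h / 2 : 1;
    }
    return total;
}
```

For a 4K, 4-bytes-per-texel texture, dropping one mip level cuts the chain's memory to about a quarter, and dropping two cuts it to about a sixteenth, which is where the "4 to 16 times less data" range comes from.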
That raises the question of how much you can separate out while keeping it worthwhile. I mean, you remove the actual texture and have a very crude colour value to aid with light bounce calculations? If you're going for crude, why bother with RT at all?

I think on the whole, not having to keep the assets in RAM for the environment immediately behind you is a bit silly and an edge case. Sure, maybe you can do this at the expense of good RT or with certain graphics styles. Equally we don't know how good (or bad) the RT on nextgen consoles will be yet. Maybe it's not powerful enough where lack of high-res textures for colours will be a problem.
 
That raises the question of how much you can separate out while keeping it worthwhile. I mean, you remove the actual texture and have a very crude colour value to aid with light bounce calculations? If you're going for crude, why bother with RT at all?

I think on the whole, not having to keep the assets in RAM for the environment immediately behind you is a bit silly and an edge case. Sure, maybe you can do this at the expense of good RT or with certain graphics styles. Equally we don't know how good (or bad) the RT on nextgen consoles will be yet. Maybe it's not powerful enough where lack of high-res textures for colours will be a problem.
It sounds like a complete pain in the ass without a hardware-based system to do the paging in and out for you. Could you imagine the level of manual memory management one would have to do to constantly drop everything out of memory and fill it with new stuff for this edge case?

I can only imagine that all consoles use paging, but the details around it are limited here. You can choose to go without it, of course, but then it's on developers to manage what's in and what's out at all times within a radius of some sort. That's fine, but that's a lot of fine-tuning and work.
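A minimal sketch of that radius-based bookkeeping, assuming a made-up 1D chunk grid (every name here is illustrative): each frame, recompute the set of chunks that should be resident around the camera and diff it against what's already loaded, which yields the load and unload requests the developer would otherwise manage by hand.

```cpp
#include <set>
#include <vector>

// Illustrative sketch of managing "what's in and what's out at all
// times in a radius of some sort". The grid, radius, and chunk IDs
// are all made up for the example.
struct StreamingDelta {
    std::vector<int> toLoad;   // chunks to stream in this frame
    std::vector<int> toUnload; // chunks to evict this frame
};

// Diff the currently resident set against the newly wanted set.
static StreamingDelta diffResidency(const std::set<int>& resident,
                                    const std::set<int>& wanted) {
    StreamingDelta d;
    for (int id : wanted)
        if (!resident.count(id)) d.toLoad.push_back(id);
    for (int id : resident)
        if (!wanted.count(id)) d.toUnload.push_back(id);
    return d;
}

// Chunk IDs on a 1D grid within `radius` cells of the camera's cell.
static std::set<int> wantedAround(int cameraCell, int radius) {
    std::set<int> s;
    for (int c = cameraCell - radius; c <= cameraCell + radius; ++c)
        s.insert(c);
    return s;
}
```

When the camera moves one cell, the delta is a single load at the leading edge and a single unload at the trailing edge; hardware-assisted paging would effectively do this accounting (and the eviction) for you at page granularity.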
 
Sounds unreal.
:devilish:
I actually don't know the answer to this one. Hope a developer can chime in here with a more detailed answer on direct memory management vs virtual memory management on console, pros and cons. This was the best I've got; old, but perhaps useful information to glean from:

https://www.eurogamer.net/articles/digitalfoundry-ps3-system-software-memory

UPDATE #2: Sony has issued a statement:

We would like to clear up a misunderstanding regarding our "direct" and "flexible" memory systems. The article states that "flexible" memory is borrowed from the OS, and must be returned when requested - that's not actually the case.

The actual true distinction is that:

  • "Direct Memory" is memory allocated under the traditional video game model, so the game controls all aspects of its allocation
  • "Flexible Memory" is memory managed by the PS4 OS on the game's behalf, and allows games to use some very nice FreeBSD virtual memory functionality. However this memory is 100 per cent the game's memory, and is never used by the OS, and as it is the game's memory it should be easy for every developer to use it.
We have no comment to make on the amount of memory reserved by the system or what it is used for.

Based on this information, plus the new source coming forward to explain the properties of flexible memory, our take on this right now is that there is 4.5GB of conventional RAM available to developers, along with the OS-controlled flexible memory Sony describes, in addition to that.

We understand that this is a 1GB virtual address space, split into two areas - 512MB of on-chip RAM is used (the physical area) and another 512MB is "paged", perhaps like a Windows swap file. But to be clear, of the 8GB of GDDR5 on PS4, our contention is that 5GB of it is available to developers.

UPDATE: A new source familiar with the matter has provided additional information to Digital Foundry that confirms only 4.5GB of the PS4's 8GB GDDR5 memory pool is guaranteed to game developers right now, while also clarifying how the PS4's "flexible memory" works in practice.

In real terms, an additional 512MB of physical RAM may be available in addition to the 4.5GB mentioned in the SDK. Flexible memory consists of physical and virtual spaces, and the latter introduces paging issues which impact performance. In our original story we combined them together.

For practical game applications, the correct figures for this story, as we understand it now, are a guaranteed 4.5GB for development and a further 512MB from the flexible pool. We have updated the headline accordingly.

 