TeamGhobad
Newcomer
Is it safe to say that RAM is the real bottleneck this gen? Not enough and too slow?
Mark Cerny specifically mentioned an example of 36 CUs vs. 48 CUs; IMO that may be why they chose 36 CUs.
But I still wonder how much the 2.23 GHz clock on the rest of the GPU helps compensate for the fewer CUs.
But the SSD advantage is huge, and the CPU advantage is minuscule.
So VRS is not confirmed for PS5 yet? Does it need it if it already has Geometry Engine?
Absolutely not.
What is, then?
RAM bandwidth is certainly going to be one on PS5, and to a lesser extent on the XSX.
22.2% more rasterization power on 20% less bandwidth.
That's ~52% more stress on bandwidth from the non-compute pipeline on PS5 than on the XSX (1.222 / 0.8 ≈ 1.53).
Forgoing 16 Gbps GDDR6 chips was a mistake. Thanks but no thanks, Sony Japan!
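That arithmetic can be checked in a few lines. A back-of-envelope sketch: the clock and bandwidth figures are the publicly quoted PS5/XSX specs, and it assumes fixed-function rasterization throughput scales linearly with GPU clock (which is the premise of the argument, not an established fact):

```python
# Back-of-envelope check of the bandwidth-stress claim above.
ps5_clock, xsx_clock = 2.23, 1.825   # GHz (GPU clocks)
ps5_bw, xsx_bw = 448, 560            # GB/s (GDDR6 bandwidth)

raster_ratio = ps5_clock / xsx_clock  # fixed-function throughput ~ clock
bw_ratio = ps5_bw / xsx_bw

# rasterization work per unit of bandwidth, PS5 relative to XSX
stress = raster_ratio / bw_ratio

print(f"{(raster_ratio - 1) * 100:.1f}% more raster throughput")  # prints 22.2
print(f"{(1 - bw_ratio) * 100:.0f}% less bandwidth")              # prints 20
print(f"{(stress - 1) * 100:.0f}% more stress per GB/s")          # prints 53 (the post rounds to 52)
```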
I think some are forgetting that regardless of how fast your SSD is, things ultimately have to go through system RAM, so the amount of system RAM and its bandwidth are still a limiting factor. Things can get swapped in and out faster, and that's important and will be beneficial on both systems, but it's not going to be some kind of miracle game changer graphically.
So I've been thinking about this a bit, but it doesn't HAVE to be this way.
It sounds like, at least on the MS side, they fully intend to make a portion of the SSD directly addressable by the CPU/GPU.
This would in fact give MS three different types of memory allocation: fast RAM, slow RAM, and SSD.
They have already indicated they are going to provide a smart memory allocator to make sure that the right parts of the program get memory allocated from the right block re: the fast/slow RAM pools;
adding a third layer to the memory allocation is a small change from there.
So in best-case theory you could access the SSD memory directly without it going through RAM first. Whether anyone actually does that is another question.
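A toy sketch of what such a three-tier allocator could look like. Every name, pool size, and the cheapest-tier-first placement policy below is my own illustrative assumption, not the actual Microsoft API; the pool figures are the publicly stated XSX ones (10 GB @ 560 GB/s, 6 GB @ 336 GB/s, 100 GB directly addressable SSD):

```python
# Hypothetical three-tier memory allocator sketch (not a real console API).
FAST_RAM = {"size_gb": 10,  "bw_gbps": 560}   # GPU-optimal pool
SLOW_RAM = {"size_gb": 6,   "bw_gbps": 336}   # standard pool
SSD      = {"size_gb": 100, "bw_gbps": 2.4}   # directly addressable partition

# Cheapest (slowest) tier first, so low-bandwidth requests don't waste fast RAM.
TIERS = [("ssd", SSD), ("slow_ram", SLOW_RAM), ("fast_ram", FAST_RAM)]

def allocate(size_gb, min_bw_gbps, used):
    """Place a request in the cheapest tier meeting its bandwidth requirement."""
    for name, tier in TIERS:
        fits = used.get(name, 0) + size_gb <= tier["size_gb"]
        if tier["bw_gbps"] >= min_bw_gbps and fits:
            used[name] = used.get(name, 0) + size_gb
            return name
    raise MemoryError("no tier satisfies the request")

used = {}
print(allocate(4, 500, used))   # render targets        -> fast_ram
print(allocate(3, 100, used))   # CPU game state        -> slow_ram
print(allocate(30, 1, used))    # cold streaming assets -> ssd
```

The point of the sketch is just the post's claim: once the allocator already picks between two RAM pools by bandwidth need, adding an SSD tier is the same decision with one more row in the table.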
But I'll always take the system with higher GPU/ system memory bandwidth.
Honestly I think the Sony solution is great, but going from ~40 MB/s to 2.4 GB/s and then to 5.5 GB/s,
what you get from that last extra ~3 GB/s isn't going to be much.
Especially when your RAM is running close to 500 GB/s.
From what I made out of the Road to PS5 presentation, it was pretty clear that devs can treat the SSD as its own pool of RAM. Unless I am mistaken.
GPU shader engines have had geometry engines, or geometry processors, depending on what AMD has decided to call them. If there's a distinction to be made between the two terms, I haven't seen it communicated.

Geometry engines have been in GCN cards for years. Mesh shaders are an RDNA2-only thing, as they seem to be a step above the primitive shaders in RDNA1. @3dilettante help!
So it's a VERY VERY large pool of VERY VERY slow ram!
That's sorta how I understood it too, also why I said I'd prefer the faster system RAM instead.
448 + 5.5 < 560 + 2.4 (GB/s).
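Written out with units, the inequality is a naive sum (RAM and SSD bandwidth aren't really additive, since SSD traffic still lands in RAM, but it is the comparison the post is making):

```python
# PS5: 448 GB/s GDDR6 + 5.5 GB/s raw SSD; XSX: 560 GB/s + 2.4 GB/s.
ps5 = 448 + 5.5
xsx = 560 + 2.4
print(ps5, xsx, ps5 < xsx)  # prints 453.5 562.4 True
```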
Overall I can't wait to find out what they do with these systems.
Even if you fully ignore the ability of the GPU to access the SSD, what this does for world and systems design is amazing.
A CPU with a memory space of 50+ GB that can be accessed @ 3 GB/s: the possibilities are endless!
Mesh shaders are just primitive shaders in AMD terminology. It's Microsoft who's at fault for not using official vendor terminology, having to use another hardware vendor's terms to describe their implementation. AMD's inner circle has never made any mention of mesh shaders in their documentation or in their open source drivers.
Parts of them map to AMD's primitive shaders, or at least the formulation we know of. The primitive shaders we know about lack developer access and generalized input, and do little beyond culling. In that regard, both the Microsoft and Nvidia formulations devote only small amounts of their descriptions to culling. Nvidia's initial announcement of mesh shading had something like one sentence which encapsulates all that AMD's method does.

"Mesh shaders" is an abstract concept that Microsoft/Nvidia made up that just coincidentally happens to somewhat map to AMD hardware.
Task shaders and amplification shaders, if going by Microsoft's formulation--which may not map fully to Nvidia's task shaders.

"Task shaders" are not a thing that maps to AMD hardware, hence why that shader stage needs to be emulated with an indirect dispatch and a compute shader, since their hardware doesn't have a shader stage that can just outright launch or spawn more mesh shaders the way a task shader would.
The culling shaders can be auto-generated, as AMD does for RDNA, gave up on for Vega, and which parallels the triangle sieve shader customization created for the PS4.

Another big thing about mesh shaders on AMD hardware is that they have the option to be "less explicit": their mesh shader implementation can potentially be compiler-generated!