> Right, forgot the bandwidth numbers being reported are likely a result of multiple threads pulling from the SSD as opposed to a single thread.

If the XSX SSD solution isn't fast enough for that (a previous-gen game), then PS5's won't be either for true next-gen games. Most likely it's because Gears 5 isn't tailored for those speeds, much like most PC games on NVMe right now.
> And the whole concern is whether the textures you need are in fast RAM. The whole point of MS's methodology is to keep textures out of RAM because they likely won't be needed. To that end, it's going to help you try and stay constrained to that 10GB to keep your BW metrics up.

You're not constrained to 10GB; it's just faster to use.
PCs don't care; they can always brute-force their way out of any storage problem with more RAM. Beyond 2020, most above-average PCs will have 24GB or 32GB of RAM. Problem solved.
> JITT - Just In Time Textures

And the whole concern is whether the textures you need are in fast RAM. The whole point of MS's methodology is to keep textures out of RAM because they likely won't be needed. To that end, it's going to help you try and stay constrained to that 10GB to keep your BW metrics up.
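Purely as an illustration of the just-in-time idea, here is a sketch of a residency loop that only keeps textures predicted to be needed within a short horizon in fast RAM, leaving everything else on the SSD. The function names, the texture names, and the one-second horizon are invented for the sketch; this is not any console's actual API.

```python
# Illustrative sketch of a just-in-time texture residency policy (not a
# real console API): keep only textures predicted to be needed within a
# short horizon resident in fast RAM; evict the rest back to the SSD.
HORIZON_S = 1.0  # the "next 1 second of textures" figure from the thread

def update_residency(textures, resident, now):
    """textures: dict of texture name -> predicted next-use time (seconds)."""
    for name, next_use in textures.items():
        if next_use - now <= HORIZON_S:
            resident.add(name)      # stream in from SSD ahead of use
        else:
            resident.discard(name)  # safe to evict; SSD can re-supply in time
    return resident

resident = update_residency(
    {"rock_albedo": 0.3, "skyscraper_far": 12.0}, set(), now=0.0)
print(sorted(resident))  # ['rock_albedo']
```

The point of the sketch is the eviction branch: with a fast enough SSD, anything outside the prediction horizon never needs to occupy RAM at all.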
> I don't like how the term PC is used to somehow describe a super high-end PC which only a small portion of people actually own.

Well, it's okay; it just doesn't mean it's an efficient use of resources or your money.
> JITT - Just In Time Textures

Getting JITTY with it.
I don't like how the term PC is used to somehow describe a super high-end PC which only a small portion of people actually own.
> Xbox has higher memory bandwidth; this should help make the gap wider in RT.

It'll be proportionally the same for RT as for rasterising. If PS5 is 80% of XBSX when rasterising, it'll be 80% of XBSX on the GPU side when raytracing. And if rendering 80% of the pixels, it'll need to cast 80% of the rays.
> I mean, even in this demo here: when you see the XSX load, you still see pop-in happening. I'm not sure if this is a result of poor code or what not; I'm open to suggestions, but even with the SSD and the CPU and the GPU, it's still present, even if only for a split second.

I imagine that is because the game is still utilising texture streaming pool sizes from the old consoles. That would be hard-coded, I imagine.
Weren't we hearing similar arguments about bandwidth versus flops with Xbox One versus PS4? It seems like more marketing than an actual point.
There may be a point about balance; maybe streaming assets is more important than more flops at a certain point. But if that's the case, Sony should be presenting some info to support that design philosophy.
The memory-speed hardware specs actually seem to be really close: if you average the total bandwidth across the 16GB on Xbox, you get 476GB/s, versus 448GB/s on the PS5. So it's going to be very interesting to learn what the advantages of the split pool end up being.
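For the curious, the 476GB/s figure is the capacity-weighted average of the XSX's two pools (10GB at 560GB/s plus 6GB at 336GB/s). A quick sketch:

```python
# Capacity-weighted average of the XSX's split memory pool, versus the
# PS5's uniform pool. Figures as cited in the thread.
xsx_pools = [(10, 560), (6, 336)]  # (size in GB, bandwidth in GB/s)

total_gb = sum(size for size, _ in xsx_pools)
xsx_avg = sum(size * bw for size, bw in xsx_pools) / total_gb
print(f"XSX capacity-weighted average: {xsx_avg:.0f} GB/s")  # 476 GB/s
print("PS5 uniform bandwidth: 448 GB/s")
```

Of course, an average like this only tells you so much; how close real workloads get to it depends on how well data is sorted between the two pools.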
> Xbox has higher memory bandwidth; this should help make the gap wider in RT.

Again, it's proportionally the same. PS5 has 80% the GPU resources and 80% the RAM BW, giving the same BW per RT unit. XBSX has 46.7 GB/s per TF, and PS5 has 44 GB/s per TF. If rendering to a display 80% the resolution, the RT aspect should be nigh identical.
> Again, it's proportionally the same. PS5 has 80% the GPU resources and 80% the RAM BW, giving the same BW per RT unit. XBSX has 46.7 GB/s per TF, and PS5 has 44 GB/s per TF. If rendering to a display 80% the resolution, the RT aspect should be nigh identical.

That's nonsense. PS5 has to divide its available memory bandwidth between the GPU and CPU, so effective GPU memory bandwidth will be much lower than the theoretical maximum and less deterministic. The XBSX GPU will have the fast memory all to itself (unless devs are morons), and the CPU will almost always use the slower memory. That way it will be much, much easier to actually utilise GPU resources to the max and do heavy data lifting on the CPU at the same time (think BVH builds, for example). I expect the XBSX to run circles around the PS5 when it comes to RTRT performance.
> As the PS5 is clocked higher, per-flop should be a better comparison of RT capabilities, which should be tied to CU count and clock speed, same as the ALUs.

Or you could look at it as PS5 having 12.44 GB/s per CU and Series X having 10.76 GB/s per CU.
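Both views from the posts above, per-TF and per-CU, worked out with the commonly cited specs (XBSX: 12.15 TF, 52 CUs, 560GB/s fast pool; PS5: 10.28 TF, 36 CUs, 448GB/s). Note the 46.7 figure quoted earlier appears to round the XBSX down to an even 12 TF:

```python
# Bandwidth per teraflop and per CU, using the specs cited in the thread:
# XBSX 12.15 TF / 52 CUs / 560 GB/s (fast pool); PS5 10.28 TF / 36 CUs / 448 GB/s.
xbsx = {"tf": 12.15, "cus": 52, "bw": 560}
ps5 = {"tf": 10.28, "cus": 36, "bw": 448}

for name, gpu in (("XBSX", xbsx), ("PS5", ps5)):
    print(f"{name}: {gpu['bw'] / gpu['tf']:.1f} GB/s per TF, "
          f"{gpu['bw'] / gpu['cus']:.2f} GB/s per CU")
# XBSX: 46.1 GB/s per TF, 10.77 GB/s per CU
# PS5: 43.6 GB/s per TF, 12.44 GB/s per CU
```

So the two metrics point in opposite directions: per flop the machines are within a few percent, while per CU the PS5's higher clock gives it the edge, exactly as the post above notes.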
> There is 16GB in total. 2.5GB is for the OS, so the remaining 13.5GB is for games. The OS portion will come out of the 336GB/s memory. Given those numbers, I think it's quite logical to assume that the portion of game data that doesn't require the highest bandwidth can and will fill up that 3.5GB (336GB/s is probably overkill for a lot of things...), and in the worst case, the data that would benefit from the highest bandwidth will only spill over very marginally into the slower section of the pool. Personally, I think 10GB out of 13.5GB is a very generous portion of fast graphics memory out of the total.

I completely agree, especially with it being monitored by software. Sony also made the comment that you only need the next 1 second of textures in memory, versus 30 seconds with the old memory topology.
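Working the split through with the numbers above: the OS reservation is carved out of the slow pool, so a game sees the full 10GB fast plus 3.5GB slow.

```python
# Working through the memory-split numbers from the thread: 16GB total,
# 2.5GB OS reservation taken from the slow (336 GB/s) pool.
total, os_reserved = 16.0, 2.5            # GB
fast_pool, slow_pool = 10.0, 6.0          # GB at 560 and 336 GB/s respectively

game_total = total - os_reserved          # what games can use overall
slow_for_games = slow_pool - os_reserved  # slow pool left after the OS
print(f"{game_total} GB for games: {fast_pool} GB fast + {slow_for_games} GB slow")
# 13.5 GB for games: 10.0 GB fast + 3.5 GB slow
```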