Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

Sony are presumably using 12 channels × 64 GiB chips for ~825 GB.

Other than cost, would there have been anything stopping them from mixing capacities and using 128 GiB chips on 2 of those channels to get close enough to 1 TB (~962 GB)?

Would it affect speed/parallelism of access on just those two channels, affect speed/parallelism across the board, or would it simply not work?
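
For reference, the arithmetic behind both figures (a quick sketch; the 12-channel layout is the thread's presumption, not a confirmed spec):

```python
# Capacity math for the two layouts above. Chips are sized in binary GiB
# (2**30 bytes) while SSDs are marketed in decimal GB (10**9 bytes).
GiB, GB = 2**30, 10**9

uniform = 12 * 64 * GiB            # 12 channels x 64 GiB
mixed = (10 * 64 + 2 * 128) * GiB  # 10 x 64 GiB + 2 x 128 GiB

print(round(uniform / GB, 1))  # 824.6 -- matches the ~825 GB figure
print(round(mixed / GB, 1))    # 962.1 -- the ~962 GB figure asked about
```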
 
Besides BC, the 3.8 GHz mode could probably come in handy in certain situations where raw clock speed is needed.

I'm not sure there are many situations where a <10% upclock would outpace SMT, assuming there is appropriate data independence between the different tasks. I don't think the higher clock speed will be used by many AAA next-generation engines; maybe by indie games, and at first by ported engines that were designed heavily around a max of 6/7 threads.

And even if there is a situation where having the higher clock is better, it likely won't hold for the entire frame, and there will be other tasks that would benefit from SMT.

Edit.

Looks like my understanding of SMT is entirely off, disregard this post.
 
Question about the interaction between mesh shaders, culling and ray tracing. I am thinking that in practice the two sets of technologies don't work well with each other.

Imagine a scenario where you have a light source that's obstructed by an object that's tagged for culling, whether because it's occluded by another object or because it's outside your FOV.

You're not going to get remotely accurate shadows and lighting in that case.

I guess indoors / at night you could disable culling for all objects in close proximity, regardless of visibility. VRS would really help with perf in that case, as fine detail is harder to make out at night.

In daytime, culling isn't as important, as it's cheaper to just use ray-traced global illumination instead.

@3dilettante
 
Sony are presumably using 12 channels × 64 GiB chips for ~825 GB.

Other than cost, would there have been anything stopping them from mixing capacities and using 128 GiB chips on 2 of those channels to get close enough to 1 TB (~962 GB)?

Would it affect speed/parallelism of access on just those two channels, affect speed/parallelism across the board, or would it simply not work?
Mixing capacities never works well. Twice as many requests would go to the larger chips, which would reduce the effective bandwidth. This is a well-known problem in any multi-channel memory or storage system.
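
A back-of-the-envelope way to see the imbalance (a sketch only, assuming data is striped in proportion to capacity and every channel has identical per-channel bandwidth):

```python
def effective_bandwidth(channel_capacities_gib, per_channel_bw=1.0):
    """Aggregate bandwidth for a full-capacity streaming read. With data
    spread in proportion to capacity, the read finishes only when the
    largest channel finishes, so it sets the pace for everyone."""
    total = sum(channel_capacities_gib)
    busiest_time = max(channel_capacities_gib) / per_channel_bw
    return total / busiest_time  # in multiples of one channel's bandwidth

print(effective_bandwidth([64] * 12))              # 12.0 -- every channel stays busy
print(effective_bandwidth([64] * 10 + [128] * 2))  # 7.0 -- the 128 GiB chips bottleneck
```

Under those assumptions the mixed layout stores more but sustains only ~7/12 of the uniform layout's read speed.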
 
I'm not sure there are many situations where a <10% upclock would outpace SMT, assuming there is appropriate data independence between the different tasks. I don't think the higher clock speed will be used by many AAA next-generation engines; maybe by indie games, and at first by ported engines that were designed heavily around a max of 6/7 threads.

And even if there is a situation where having the higher clock is better, it likely won't hold for the entire frame, and there will be other tasks that would benefit from SMT.

Edit.

Looks like my understanding of SMT is entirely off, disregard this post.

Sadly a lot of games still get bottlenecked (CPU-limited) by clock speed because of the main or render thread ... Thankfully Doom Eternal is here to save the day, and it doesn't have a main thread or a render thread! WTF?! Apparently it's some kind of job graph system and all cores are used equally. Can't wait to see a talk on that. Also want to see some benchmarks between, say, 6 cores at a high clock and 8 cores at a low clock, and see which one can push the most frames. Hopefully that becomes more common and core utilization is very high this gen.
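
Nothing public details id's system yet, so the following is only a generic sketch of the idea: frame work expressed as a graph of small jobs with dependencies, drained by a pool of identical workers instead of dedicated main/render threads. All job names here are invented.

```python
import concurrent.futures as cf

# Hypothetical frame jobs and their dependencies (names are made up).
JOBS = {
    "input":      [],
    "animation":  ["input"],
    "physics":    ["input"],
    "gameplay":   ["animation", "physics"],
    "visibility": ["gameplay"],
    "draw_calls": ["visibility"],
}

def run_job(name):
    return name  # real jobs would do simulation or rendering work here

def run_frame(jobs, workers=8):
    done = set()
    with cf.ThreadPoolExecutor(max_workers=workers) as pool:
        while len(done) < len(jobs):
            # Launch every job whose dependencies are met; independent jobs
            # (e.g. animation and physics) run in parallel on any free core.
            ready = [j for j in jobs
                     if j not in done and all(d in done for d in jobs[j])]
            for fut in cf.as_completed([pool.submit(run_job, j) for j in ready]):
                done.add(fut.result())

run_frame(JOBS)
```

With no thread "owning" simulation or rendering, adding cores just drains the graph faster, which is exactly the 6-core-fast vs 8-core-slow benchmark question above.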
 
Question about the interaction between mesh shaders, culling and ray tracing. I am thinking that in practice the two sets of technologies don't work well with each other.

Imagine a scenario where you have a light source that's obstructed by an object that's tagged for culling, whether because it's occluded by another object or because it's outside your FOV.

You're not going to get remotely accurate shadows and lighting in that case.

I guess indoors / at night you could disable culling for all objects in close proximity, regardless of visibility. VRS would really help with perf in that case, as fine detail is harder to make out at night.

In daytime, culling isn't as important, as it's cheaper to just use ray-traced global illumination instead.

@3dilettante

I believe the BVH contains the lowest-detail LODs of the game objects, including things that would be culled for rasterization. That way they can draw shadows correctly with ray tracing for things that the GPU has culled before rasterization.
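
A sketch of that split, assuming (as described above) that the acceleration structure keeps a low-LOD proxy of everything while rasterization draws only the culled-down list; the scene objects are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    in_frustum: bool
    lods: list  # lods[0] = full detail, lods[-1] = lowest detail

scene = [
    SceneObject("hero", in_frustum=True, lods=["hero_lod0", "hero_lod3"]),
    SceneObject("pillar", in_frustum=False, lods=["pillar_lod0", "pillar_lod3"]),
]

# Rasterization only draws what survives frustum/occlusion culling...
raster_list = [o.lods[0] for o in scene if o.in_frustum]

# ...while the ray-tracing BVH keeps a low-detail version of everything,
# so the off-screen pillar can still occlude a light and cast a shadow.
bvh_contents = [o.lods[-1] for o in scene]

print(raster_list)   # ['hero_lod0']
print(bvh_contents)  # ['hero_lod3', 'pillar_lod3']
```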
 
I believe the BVH contains the lowest-detail LODs of the game objects, including things that would be culled for rasterization. That way they can draw shadows correctly with ray tracing for things that the GPU has culled before rasterization.

OK, makes sense. Can the BVH data be made streamable like textures and other data?
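
Going by how DXR-style APIs split the structure, it seems plausible: per-object bottom-level BVHs can be built or dropped as objects stream in and out, with the top-level BVH rebuilt each frame over whatever is resident. A toy sketch of that residency model (all names hypothetical, not any console's actual API):

```python
class BVHStreamer:
    """Toy residency model: bottom-level BVHs (BLAS) come and go like
    streamed textures; the top-level BVH (TLAS) is rebuilt per frame
    over whatever happens to be resident."""

    def __init__(self):
        self.resident_blas = {}

    def stream_in(self, obj_id, geometry):
        self.resident_blas[obj_id] = f"blas({geometry})"  # stands in for a GPU build

    def stream_out(self, obj_id):
        self.resident_blas.pop(obj_id, None)

    def build_tlas(self):
        return sorted(self.resident_blas)  # stands in for the per-frame TLAS build

s = BVHStreamer()
s.stream_in("house", "house_lod3")
s.stream_in("tree", "tree_lod3")
s.stream_out("tree")
print(s.build_tlas())  # ['house']
```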
 
Sadly a lot of games still get bottle-necked (CPU-limited) by clock speed because of the main or render thread ... Thankfully Doom Eternal is here to save the day and it doesn't have a main thread or a render thread! WTF?! Apparently it's some kind of job graph system and all cores are used equally. Can't wait to see a talk on that. Also want to see some benchmarks between like 6 core high clock and 8 core low clock and see which one can push the most frames. Hopefully that becomes more common and core utilization is very high this gen.

We also have to remember that the PS4/XB1's CPUs were quite anemic even by their launch date's standards, which is not the case this time.
I wouldn't expect the CPU to be a bottleneck this generation unless the average game's CPU load undergoes a quite drastic change, which is why I feel the entire CPU difference is pretty much moot.
 
We also have to remember that the PS4/XB1's CPUs were quite anemic even by their launch date's standards, which is not the case this time.
I wouldn't expect the CPU to be a bottleneck this generation unless the average game's CPU load undergoes a quite drastic change, which is why I feel the entire CPU difference is pretty much moot.

Yah, considering there will unfortunately be a lot of 30 fps games, and 60 fps will probably still be the max for the most part, the CPU may not be an issue. On the PC side, my Ryzen 1700X has CPU-limited me in a number of games because it doesn't have a particularly high clock speed.
 
Sounds a lot more similar to Sony this time. They can't guarantee all games will work without testing them, and they must have found a few problematic titles. Maybe it's some third-party title with nightmarish multithreaded code from a bankrupt company.
At some point it's just not economical to test all games and you just have to let people try. Both consoles should have a mechanism for auto-reporting crashes to the manufacturer and developer. At least I'd hope so. If so, and there is a will, perhaps such issues can be reviewed retrospectively.
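
As a sketch of what such an auto-report might minimally carry to make retrospective triage possible (every field here is hypothetical, not either console's actual telemetry):

```python
# Hypothetical minimal crash-report payload for back-compat triage.
crash_report = {
    "title_id": "ABCD01234",    # which game crashed (made-up ID format)
    "firmware": "1.00",         # system software at crash time
    "bc_mode": True,            # was it running under back-compat?
    "fault": "GPU_PAGE_FAULT",  # coarse fault classification
    "uptime_s": 812,            # how long the title ran before crashing
}
```

Aggregated over enough consoles, even this little would show which titles crash, how often, and whether a firmware update helped.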
 
I'm not sure there are many situations where a <10% upclock would outpace SMT, assuming there is appropriate data independence between the different tasks. I don't think the higher clock speed will be used by many AAA next-generation engines; maybe by indie games, and at first by ported engines that were designed heavily around a max of 6/7 threads.

And even if there is a situation where having the higher clock is better, it likely won't hold for the entire frame, and there will be other tasks that would benefit from SMT.

Edit.

Looks like my understanding of SMT is entirely off, disregard this post.

If the PS5 doesn't offer SMT, I think many third-party devs will opt for the 3.8 GHz mode.
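
A toy throughput model makes the trade-off concrete. The 3.8 GHz no-SMT figure is the rumor being discussed, 3.5 GHz is the PS5's announced (up to) SMT clock, and the ~25% SMT gain is just a commonly cited ballpark for well-threaded code, not a measured number:

```python
def relative_throughput(clock_ghz, smt=False, smt_gain=0.25):
    """Idealized throughput for fully parallel, data-independent work."""
    return clock_ghz * (1.0 + smt_gain if smt else 1.0)

print(relative_throughput(3.8, smt=False))  # 3.8
print(relative_throughput(3.5, smt=True))   # 4.375 -- SMT wins if the work scales
```

By this (very rough) model the no-SMT mode only pays off for code dominated by a single heavy thread, where SMT's second hardware thread has little to run.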
 
At some point it's just not economical to test all games and you just have to let people try. Both consoles should have a mechanism for auto-reporting crashes to the manufacturer and developer. At least I'd hope so. If so, and there is a will, perhaps such issues can be reviewed retrospectively.
Beta-tested in the future?
 
Sony didn't talk about many things. Remember the AI that was supposed to guide you in case you were stuck in a game? Would that require some kind of machine learning?
I hope it wasn't just some feature that got scrapped.
 
Going by your logic, every single CPU and GPU is limited/throttled unless it's running under liquid nitrogen.
Comparing the PS5's frequency changing with the XSX's fixed frequency and calling it "limited" just doesn't make sense in the context of this discussion.


Yea, probably more accurate in terms of their process, but also probably unnecessary.
Cerny did seem quite pleased with their cooling solution ;)
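
For contrast, a sketch of the distinction being argued. Thermal throttling reacts to temperature; the scheme Cerny described picks the clock deterministically from workload activity against a fixed power budget, so the same scene clocks the same on every console. The budget, scaling constant, and activity numbers below are invented for illustration:

```python
POWER_BUDGET_W = 200.0  # invented fixed budget

def deterministic_clock(activity, f_max=2.23, k=40.0):
    """Dynamic power grows roughly with activity * f^3 in CMOS; pick the
    highest clock that fits the fixed budget. No temperature input at
    all, so the result is repeatable across consoles and room climates."""
    f = f_max
    while activity * k * f**3 > POWER_BUDGET_W:
        f -= 0.01
    return round(f, 2)

print(deterministic_clock(activity=0.3))  # 2.23 -- light load runs at max clock
print(deterministic_clock(activity=0.6))  # 2.02 -- heavy load dips slightly
```

The 2.23 GHz cap is the announced PS5 GPU maximum; everything else here is a made-up stand-in for the real power model.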
 