Next Generation Hardware Speculation with a Technical Spin [2018]

Status
Not open for further replies.
In 2013 a low-power 8-core CPU was rare, but the PS4 made it way more mainstream in a $400 box.

How likely is a low-power 16-core CPU for the PS5, in a box that costs no more than $600 next gen?
 
Unlikely. It's doable, but why would anyone create a custom CPU for that? Existing offerings like an 8-core Zen will be a better fit, offering stronger single-thread performance, which is still valuable to software.
 
Devs can barely properly saturate 6 cores as it is; 16 is just nuts. And some games only use 2 cores, believe it or not, so you want higher clocks rather than more cores.

8 cores with SMT is possible though, but even SMT will result in somewhat lower clocks, so maybe not the best. 7 usable Zen cores at 3.2 GHz or higher would be great IMO, especially if it's Zen 2 we get.
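A quick toy model of why clocks beat cores for poorly threaded games (all numbers here are illustrative assumptions, not real chip specs):

```python
# Toy model: a game that spreads work across at most `threads_used`
# software threads gets throughput roughly proportional to
# clock * min(threads_used, cores). Illustrative numbers only.
def relative_throughput(cores, clock_ghz, threads_used):
    return clock_ghz * min(threads_used, cores)

# A 2-thread game: the extra cores sit idle, so clocks dominate.
wide_slow = relative_throughput(cores=16, clock_ghz=2.4, threads_used=2)    # 4.8
narrow_fast = relative_throughput(cores=8, clock_ghz=3.2, threads_used=2)   # 6.4

# Only a game that can actually saturate 16 threads flips the result.
wide_saturated = relative_throughput(16, 2.4, 16)   # 38.4
narrow_saturated = relative_throughput(8, 3.2, 16)  # 25.6
```

In this sketch the hypothetical 8-core 3.2 GHz part wins for the 2-thread game despite having half the cores.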
 

Are the clocks only affected by SMT when SMT is actually in use? In other words, could it be up to developers to choose whether to clock higher without SMT or vice versa, or does SMT hardware inherently run slower (or hotter at a lower clock speed)?
 

With SMT or Hyper-Threading there's more work being done by the cores, which uses more power and therefore creates more heat. So in a closed console box, SMT would result in lower clocks to compensate. A developer would have no such choice on console beyond making use of all the threads or not.

The clocks wouldn't be vastly lower, mind you, maybe 300 to 400 MHz or thereabouts, but considering the state of software development I would personally say SMT is still not worth it in a closed box, assuming 8 cores are there.
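As a rough sanity check of that figure, here is a back-of-envelope sketch (the activity-increase percentages are assumptions, and this ignores voltage scaling): at a fixed power budget and fixed voltage, dynamic power scales roughly with switching activity times frequency, so extra SMT activity has to come out of the clock.

```python
# If SMT raises per-core switching activity by some fraction while the
# power budget stays fixed, the sustainable clock drops by roughly the
# same ratio (assuming fixed voltage; real DVFS is more complicated).
def smt_clock_at_same_power(base_clock_ghz, activity_increase):
    return base_clock_ghz / (1.0 + activity_increase)

base = 3.2  # GHz, hypothetical console clock
for extra in (0.10, 0.15):  # assumed SMT activity increase
    derated = smt_clock_at_same_power(base, extra)
    print(f"+{extra:.0%} activity -> {derated:.2f} GHz "
          f"({(base - derated) * 1000:.0f} MHz lower)")
```

With an assumed 10-15% activity increase this lands in roughly the 290-420 MHz range, which brackets the 300-400 MHz figure.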
 

Mind you, console devs are targeting a handful of modestly clocked Jaguar cores (held together by duct tape and needing to share resources), so there are going to be inherent limitations to the dataset and engine considerations when it comes to CPU crunching. Some engines are currently multithreaded enough to spread across available threads on PC, but it'd be dubious to expect saturation considering the vast gulf in IPC and clocks relative to the target development platforms.
 
I think the new Shadow of the Tomb Raider can scale across a great many cores and divide the work up efficiently. It took a while, but it feels like this is the turning point where DX12 is being used as more than just DX11 rewritten in DX12, and will begin to outperform DX11 consistently in the PC space.
 
I don't suppose anyone has done CPU utilization graphs? Kind of curious as to what's going on in the scenes. Recent Frostbite games would be interesting to survey as well or even the recent COD titles.

hm... Should look up how Forza Horizon 4 performs too.
 
Am I missing something? There’s minimal scaling at everything but the 1080p medium detail case.
There's no frame rate loss as the core count goes up. That means each core is doing less work without overhead issues. This isn't the graph that I saw, just another article showing the values.
What I wanted to show was the % utilization per core, which is a graph I saw for SOTR. But this will have to suffice. TBH I'm afraid the article no longer exists, which puts my point into contention.
 
Unlikely. It's doable, but why would anyone create a custom CPU for that? […]

Part of it could be marketing seeing how higher core counts are projected for desktop.

Devs can barely properly saturate 6 cores as it is; 16 is just nuts. […]

This is for moving forward though, especially on consoles.

I don't mean to sound arrogant, but shouldn't major studios worth their salt be able to easily and properly use most of the current gen's 6 cores?

I think the new Shadow of the Tomb Raider can scale across a great many cores […]

I actually heard about this and briefly saw it in one of the DF videos.

There was also another AAA game that is able to use 16 threads, but I can't remember which one.

Still, 8 cores with SMT sounds more realistic, and having that 3.2 GHz figure and 16 threads looks real easy to market, especially in a side-by-side table with the PS4. This is also probably easier for AMD to produce.
 
What I wanted to show was the % utilization per core […]
Indeed. Otherwise there’s no way to guarantee that 2 cores aren’t doing all the work while the rest twiddle their thumbs. In either case, that doesn’t seem like a CPU stressing game.
 
Thank you.
https://www.resetera.com/posts/12549265/
(attached image: sottr_2018_09_12_18_0oodxh.png)
 
but it'd be dubious to expect saturation considering the vast gulf in IPC and clocks relative to the target development platforms.

I'm not convinced. Does Shadow of the Tomb Raider in DX12 on the 2080 Ti, at low settings and 720p, fill up all cores? Because if you're correct and game engines can now spread work perfectly across any number of cores, it should hit 100% utilization at that point.

But the reality is most PC games are coded for 4 cores. Then, if you have say 16 cores, that workload designed for 4 cores is spread among the 16. Effectively, each of the 16 cores averages only about 25% utilization.
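The arithmetic behind that claim, as a minimal sketch (using the 4-thread workload assumption from the paragraph above):

```python
# A workload threaded for `busy_threads` cores can keep at most that
# many cores' worth of hardware busy, so average utilization across
# `total_cores` is capped accordingly.
def average_utilization(busy_threads, total_cores):
    return min(busy_threads, total_cores) / total_cores

print(average_utilization(4, 16))  # 0.25 -> each core averages ~25%
print(average_utilization(2, 16))  # 0.125 -> a 2-thread game on 16 cores
```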

Then there are games like Kingdom Come which only saturate 2 bloody cores, and that game looks amazing technically.

I say, let's give every developer the best performance and ease of use by having higher-clocked chips, not forcing Platinum to make their beat 'em ups spread across 16 threads.
 
Not sure where I saw it, TBH. I just remember reading how well it split the workload.
https://nl.hardware.info/reviews/86...s-het-boekje-invloed-cpu-core-scaling-a-ryzen
Thanks.

Also saw this for FH4: http://www.pcgameshardware.de/Forza...za-Horizon-4-Technik-Test-Benchmarks-1265758/

5 GHz Coffee Lake, min fps

2c4t - 71
4c4t - 92
4c8t - 118
6c12t - 134

Pretty good scaling from 4c4t to 6c12t (almost +50%). The 4c8t scaling seems in line with typical HT/SMT (~30%). Too bad they don't have 2c2t & 6c6t on there just to be sure.
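Working those percentages out from the min-fps numbers above (same data, just computed):

```python
# Min-fps figures quoted above for the 5 GHz test rig.
min_fps = {"2c4t": 71, "4c4t": 92, "4c8t": 118, "6c12t": 134}

def gain(frm, to):
    """Relative min-fps improvement between two configurations."""
    return (min_fps[to] - min_fps[frm]) / min_fps[frm]

print(f"4c4t -> 6c12t: {gain('4c4t', '6c12t'):+.0%}")  # +46%, the 'almost +50%'
print(f"4c4t -> 4c8t:  {gain('4c4t', '4c8t'):+.0%}")   # +28%, the '~30%' HT gain
print(f"2c4t -> 4c4t:  {gain('2c4t', '4c4t'):+.0%}")   # +30%
```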
 
I'm not convinced. Does Shadow of the Tomb Raider in DX12 on the 2080 Ti, at low settings and 720p, fill up all cores? […]

I'm saying that there might not be enough work for the current engine/game design targets, so even being able to spread the threads, you won't necessarily see 100% across all threads on a CPU that's in a galaxy far, far away from those cats.
 
Guerrilla was using all 7 cores available on a launch game in 2013. I'm sure devs will find a way to use the CPU afforded to them.

https://www.slideshare.net/mobile/guerrillagames/killzone-shadow-fall-demo-postmortem

They had different development dynamics than your typical multi-platform developer would, though. The latest Steam hardware survey has 89% of PCs at 4 cores or fewer. That's an awful lot of your market that will see minimal (if any) benefit from threading past that point. That optimization effort also takes time and resources, and I expect it will yield diminishing returns with each thread added. There has to be a point where allocating additional development resources and time to creating more threads stops making sense, and therefore there also has to be a point where adding hardware cores stops making sense.
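The diminishing-returns argument is essentially Amdahl's law: if only a fraction p of the frame is parallelizable, speedup on n threads is 1 / ((1 − p) + p / n). A quick sketch with an assumed p:

```python
# Amdahl's law: the serial fraction caps the speedup no matter how many
# cores you add. p is the parallel fraction of the work (assumed here).
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

p = 0.9  # assume 90% of the frame parallelizes cleanly
for n in (2, 4, 8, 16, 32):
    print(f"{n:>2} threads: {amdahl_speedup(p, n):.2f}x")
```

Even with a generous p = 0.9, doubling from 8 to 16 threads yields only about 36% more throughput in this model, and the curve keeps flattening from there.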
 