General Next Generation Rumors and Discussions [Post GDC 2020]

We've had consoles with 8 cores for a while and most engines are still notoriously bad at fully utilizing multiple cores and threads, with some notable exceptions like Anvil and Frostbite. And if you check any 3700X benchmark, you'll see that at 4K the 3700X doesn't have any trouble managing Battlefield V, with CPU usage that rarely goes over 50%. Even at 1080p, from the videos/benchmarks I've seen, the game is still GPU-bound.

If, as some users here say, Cerny is lying and the PS5 can't achieve max CPU and GPU clocks at the same time, it makes much more sense to lower the frequency of the CPU, which you are not going to fully use anyway.

edit: or deactivate some threads (can that be done, or do you need to turn off SMT altogether?).



Yeah, for CS:GO players with 240Hz 1080p monitors I do agree that having a fast CPU is important; at 4K, not so much (a first-gen Ryzen is only about 1% slower than Zen 2 at 4K).
That's not necessarily true.
Games, as they get more complex with more players and more things to do, require more CPU. See PUBG for instance. That game is very difficult to scale higher and higher, and it's not exactly pretty or complex. There is just so much to take into account: the doors, the windows, view distance, map size, loot, ammo, vehicles, the gas, etc. If you want a game more complex than PUBG to run at 60fps, your whole system needs to be up to the task.
 
I don't agree with comparing CPU usage on consoles to CPU usage on PC. Utilization of the Jaguar cores on consoles was likely very, very high, because those CPUs were quite underpowered even by 2013 standards. To run an 8th-gen console game on a modern 8-core Zen 2 or Intel CPU you don't need to tinker much with parallel workloads, but once the Zen 2 cores in these consoles are used to their full potential, rest assured current PC CPUs won't follow as easily.

In that sense, it's better to lower GPU clocks by a few percent and gain a big chunk of the power budget back (because the GPU is already being pushed to its limits) than to downclock Zen 2 from 3.5GHz to, say, 3.2GHz, because you get back much less power that way (Zen 2 is very efficient across the entire 2.8 to 3.8GHz range).

It makes more sense to keep close CPU parity than to lower the CPU another 10% to gain 2% on the GPU.
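
As a rough back-of-the-envelope illustration (assuming dynamic power scales roughly with frequency times voltage squared, and using made-up voltage figures purely for the shape of the argument):

P ~ f * V^2
GPU near its limit: drop the clock 2% and the voltage can maybe come down ~5%, so power scales by 0.98 * 0.95^2 ≈ 0.88, i.e. roughly a 12% saving for a 2% performance hit.
Zen 2 in its efficient range: drop 3.5GHz to 3.2GHz (~9%) with only a ~2% voltage reduction and you get 0.91 * 0.98^2 ≈ 0.88, a similar saving for a much bigger performance hit.

The numbers are illustrative only, but that's the basic reason a small GPU downclock buys back more power per unit of performance lost.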
 
I think XSX has it too, not at that level, but it's not doing compression via the CPU.

This. Please, I still remember "the power of the cloud" and the XBO coprocessors that were somehow going to reduce the power gap.

PS5 is a fine console. If they release it at $399/$499, it's basically a 2080 Super with a 3700X and a super-fast SSD for half the price of the same specs in a PC. I don't know why people want to make it look worse or better than it is. At the end of the day, as the great Bernie Stolar says, it's about the software (as long as the hardware isn't too bad lol).
 
On PUBG: a 3700X tops out at around 30% usage in that game.

As for the console Zen 2 cores being used to their full potential: I hope you're right and they do get utilized more often, but even during the PS3 and 360 era, when both consoles already had multiple cores (or SPEs), most PC games barely used more than one core while the rest sat close to idle.
 
XSX does have hardware decompression too; it was quoted as using the equivalent of 8 Zen 2 cores, whereas the PS5's was quoted as using the equivalent of 9 Zen 2 cores in practice. People thinking the PS5's SSD is going to make up for the lack of raw power remind me of myself thinking ESRAM and the move engines were going to do the same last gen. Looking back now, it's hard to believe I thought that, but I did.
 
PC games barely used those extra cores back then because the 360's 3-core Xenon was a rather weak in-order CPU, as was the PPE in Cell. And although Cell was a serious number cruncher, you had to offload a lot of the work the GPU would normally do onto Cell in order to level the playing field, which nullified its CPU performance advantage versus Intel's at the time.
 
If you set everything to the lowest settings in PUBG, the CPU has very little to calculate.
I'm not saying you're wrong that PUBG could be heavier, but now that more CPU power is available at the baseline, developers will use it.
 
How does XBSX memory set-up work? When the CPU is accessing the slow RAM, what BW is available to the GPU on the faster bus?

It's a fascinating approach. I don't think it's bad (10GB of GDDR6 coupled with a few GB/s of SSD bandwidth should be more than enough), but why? I can only assume cost: maybe Microsoft has leveraged cheaper chips for the non-performance-critical stuff (the OS and ancillary game data) to keep the cost balanced. It's a sensible design :yes:
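
For reference, as I understand the publicly stated figures, all sixteen gigs sit on one 320-bit bus of 14Gbps GDDR6 (six 2GB chips plus four 1GB chips); the 10GB "fast" pool is striped across all ten chips, while the 6GB "slow" pool lives only on the six 2GB chips:

320 bits * 14Gbps / 8 = 560GB/s (10GB pool)
192 bits * 14Gbps / 8 = 336GB/s (6GB pool)

So it looks like an asymmetric bus arrangement rather than cheaper or slower chips; how much CPU traffic to the slow pool eats into the GPU's effective bandwidth is the part that hasn't been spelled out.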

The Spider-Man demo was optimized for the new storage, no?

Not as far as I know. More than likely it was just dropped onto the SSD. From what Mark Cerny said, devs have to do almost nothing to leverage the SSD - which I admit is not what I expected. I anticipated some careful crafting and tagging of files during build time, with special SDK tools to help devs, but it sounds like you just throw stuff into that storage pool and, other than six levels of I/O prioritisation, it just works magically.
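
Purely to illustrate the prioritisation idea, here's a hypothetical sketch in Python (not the actual PS5 API; the priority names and request format are made up):

import heapq

# Hypothetical priority levels, most urgent first (the real SDK's levels aren't public).
AUDIO, GEOMETRY, TEXTURE_HI, TEXTURE_LO, PREFETCH, BACKGROUND = range(6)

class StreamingQueue:
    def __init__(self):
        self._heap, self._seq = [], 0

    def submit(self, priority, path, offset, size):
        # Lower number = more urgent; _seq keeps equal priorities in FIFO order.
        heapq.heappush(self._heap, (priority, self._seq, (path, offset, size)))
        self._seq += 1

    def next_request(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

q = StreamingQueue()
q.submit(PREFETCH, "world/chunk_12.bin", 0, 4 << 20)   # 4MB speculative load
q.submit(AUDIO, "audio/footsteps.bank", 0, 256 << 10)  # 256KB read that must not hitch
print(q.next_request())  # the audio read gets serviced first

The point being: the game just tags each read with an urgency level and the storage stack sorts out the ordering.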
 
SSDs in consoles are like a godsend, away from those aging 5400rpm mechanical spinning drives.

Agreed. I find a whole genre (sports games) impractical to play because of load times. The worst thing in the world is when I buy a new game that doesn't make good use of quick resume (always online games are the worst).
 

Agreed on the memory setup. I believe it's a non-ideal design choice from a performance standpoint, but it's something devs can work around, and it's more BOM-friendly. From a performance standpoint it's better than 10GB of RAM, but when was that ever a suitable choice for MS, given the X and its 12GB.
 
Quite possibly. CPU scaling is already a part of cross-platform titles, so it's natural for XBSX to get an immediate advantage in games that do scale. SSD scaling on PS5 is going to be a platform-specific optimisation. We may see an advantage in console-only games, if such a thing exists, where the same streaming tech uses the available streaming BW to full advantage.

Regarding the Sony SSD, something to consider:

MS is putting their tech on the PC.
No word on Sony doing the same.
Sony is, apparently, going to begin releasing their games on PC as well. (Almost certainly after a delay, but that doesn't matter to this point.)
So MS's tech becomes the default best case scenario on PC?
How do you design a game around theoretical streaming assets that require the Sony SSD tech and then release it on PC?
If it truly is something spectacular beyond what MS is offering, they would still have to take this into account.


Still waiting to hear about VRS, ML, and whether there is any difference between the Primitive/Mesh Shader implementations.
 
On the PC question: the most likely explanation is that they'll ignore the PC until they intend to port, by which time either the PC will have caught up, or the PC version can be cut down in its storage-access requirements.
 
My takeaway:

CPU: equal
GPU: Xbox, by 20-30% moderately calculated.
RT: Xbox. 44% more CUs means 44% more "RT cores"; even with PS5's faster clocks, Xbox will have an advantage here.
RAM: equal
SSD: Sony
Audio: MS's solution is unknown at the moment, so Sony for now.
 
On the RT point: the RT work per pixel is likely the same if PS5 is rendering fewer pixels per second, and the faster clock offsets the CU-count advantage a bit.
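
Rough numbers, assuming RT throughput scales with CU count times clock, and using the announced clocks (1.825GHz for XSX, up to 2.23GHz for PS5):

52 / 36 ≈ 1.44 (the 44% CU advantage)
(52 * 1.825) / (36 * 2.23) = 94.9 / 80.3 ≈ 1.18

So on that assumption the sustained RT advantage is more like ~18% than 44%, before resolution even enters into it.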

The take-home, really, is that PS5 will be producing the same graphics as XBSX, just at marginally lower fidelity, such that most people probably won't even notice.
 
If 10.3 teraflops was your original target, then 36 CUs is a weird choice.

Depends on the design goals. 36 CUs was their minimum, in order to ensure backwards compatibility with PS4 Pro games (the Pro's GPU also has 36 CUs). It seems that, through the collaboration of Sony's and AMD's engineers, they were confident enough, early enough, that they could clock high. RDNA2's recent reveal shows this to be justified.

Sony's engineering team will have to justify their actions to the bean counters too. If it was set in stone that they were going to use a chip that can clock very high, as well as a cooling solution that can cope with high clocks, then justifying even more cost for a few more CUs may not have been possible.

I would've preferred more CUs, but if this 2.23GHz clock speed is sustained the lion's share of the time, I can understand the tradeoff.

Sony now has to run the GPU at extraordinary clocks, producing massive amounts of heat, power draw and noise, to hit the 10.3 teraflop mark, and by Cerny's own admission the console won't be able to run at that speed consistently; it would be lowered by 10%, making it 9.2 teraflops.

I think someone else here already pointed it out, but he said that a couple of percent reduction in clockspeed results in a 10% reduction in power draw.

(128*36)*2.23GHz = 10,275.84 GFLOPS ≈ 10.28TF
(128*36)*(2.23*0.98)GHz = 10,070.32 GFLOPS ≈ 10.07TF

So still a good way off from 9.2TF. Time will tell if the GPU's clockspeed has to be compromised further than a couple of percent, but "Cerny's own admission" only indicates the above figures.
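
For anyone who wants to play with the numbers, here's a quick Python sketch using the same assumptions as above (36 CUs, 64 shaders per CU, 2 FLOPs per shader per clock; figures are the announced ones):

CUS = 36
FLOPS_PER_CU_PER_CLOCK = 64 * 2  # 64 shaders per CU, 2 FLOPs per shader per clock

def teraflops(clock_ghz):
    return CUS * FLOPS_PER_CU_PER_CLOCK * clock_ghz / 1000.0

for cut in (0.00, 0.02, 0.05, 0.10):
    clock = 2.23 * (1 - cut)
    print(f"-{cut:.0%} clock -> {clock:.3f}GHz, {teraflops(clock):.2f}TF")

# -0% clock -> 2.230GHz, 10.28TF
# -2% clock -> 2.185GHz, 10.07TF
# -5% clock -> 2.119GHz, 9.76TF
# -10% clock -> 2.007GHz, 9.25TF

In other words, you'd need roughly a 10% clock cut, not a couple of percent, to land anywhere near the 9.2TF figure.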

It seems more likely that the high clocks are a reaction to Xbox's 12 teraflops. If Sony were genuine about 10.3 teraflops, then 44 CUs would have been more appropriate.

Don't be daft. You can't just decide to ramp up GPU clock speeds by nearly 400MHz in a tiny space of time because you've caught wind that your competitor is more powerful. The closest we've seen to that was the Xbox One, and that cranked up its GPU clock by what, 50MHz?

I would've preferred something like 40 CUs at a consistent 2GHz. If the 2.23GHz figure is sustained most of the time though, I think they've been quite smart in getting a lot of power out of quite a small chip.

The shite bandwidth still bothers me though. The rest of the system seems quite elegantly designed (again, as long as the 2.23GHz clockspeed is generally sustained) but it's going to be let down by its pitiful 448GB/s bandwidth.
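
For context, on the announced specs that 448GB/s falls straight out of a 256-bit GDDR6 bus at 14Gbps per pin, shared between CPU and GPU:

256 bits * 14Gbps / 8 = 448GB/s (vs 560GB/s on XSX's fast 10GB pool)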

I'm still going to buy it, it'll be my only console, and I'll have plenty of fun, I'm sure. But I'll keep wishing they'd charged me an extra £20 so its CPU usage doesn't starve it of GPU bandwidth.
 