Could next gen consoles focus mainly on CPU?

Digital Foundry and the push to 4K are going to be a headache for devs and publishers. Casual gamers and fanboys are going to complain if their PS5/X2 games aren't native 4K, and doing that PLUS delivering the jump in photorealism expected of a console generation means we need a console more than twice as powerful as the Xbox One X.
They will complain about lazy devs and weak consoles, blah blah blah, if checkerboard rendering or temporal reconstruction is used instead of native 4K. They will point out that the XOX renders last-gen games like Gears of War 4 at native 4K, yet Gears of War 6 on Xbox Two, with its large jump in photorealism over GOW4, only renders at 1600p and uses temporal reconstruction to bring it to 4K.
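As a rough sanity check on the "more than twice as powerful" claim, some back-of-the-envelope arithmetic (the pixel counts are exact; the per-pixel cost multipliers are purely assumed, illustrative figures):

```python
# Native 4K vs 1080p, plus an assumed jump in per-pixel work for a new
# generation. Pixel counts are exact; the multipliers are assumptions.
pixels_1080p = 1920 * 1080       # ~2.07 million
pixels_4k    = 3840 * 2160       # ~8.29 million
print(pixels_4k / pixels_1080p)  # 4.0 -> native 4K shades 4x the pixels

# The X1X already spends most of its GPU advantage over the base X1 on that
# 4x pixel count. If a next-gen game also wants, say, 2-3x the work per
# pixel (assumed), the GPU budget grows by roughly that factor over the X1X:
for per_pixel_jump in (2.0, 3.0):
    print(f"~{per_pixel_jump:.0f}x per-pixel work at native 4K "
          f"-> roughly {per_pixel_jump:.0f}x an X1X-class GPU")
```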

People always complain. So what? It's not going to affect sales of anything. Regardless of the noises people (loud vocal minority) make, if the products offer a suitable improvement at a suitable price, people will buy in, and if not, they won't. Switch is being enjoyed by PS4/XB1 owners despite a visual downgrade, and despite The Internet complaining the console wasn't powerful enough.

If they're swayed by some online analysis, then they already cared about such things. If the individual actually valued the social aspects (e.g. their friends only have the one system, so there is no illusion of choice), then it's a moot point.

Having access to more information to make an informed purchasing decision isn't new to human history.

All these opinions don't imply a different POV, per se. Being able to know all the technical details of a game and the power/capabilities of a piece of hardware is a good thing, but it doesn't mean you hold a specific opinion about how that power should be used. That's why I love Digital Foundry and forums such as this one, but I'd also love to see developers focusing more on a noticeable improvement in graphics rather than just upping the resolution (more quality per pixel, rather than just more pixels).
 
60 FPS always looks better than 30 FPS, IMO. The judder in 30 FPS games just looks absolutely horrible. It's one reason why, even though HZD looks great in screenshots, it'd never win best-looking game of the year for me. Once players start moving it just falls apart, and if they do rapid view changes, it's even worse.
That one is debatable, I think. I always thought the main argument is that it plays better since controls are more responsive, not that it looks better in motion, considering the resolution sacrifice for higher rates. 30fps without stutter and with good use of motion blur (or even without it) doesn't look bad by any means. Why do you think the initial reaction of many people after they see in-engine (not in-game) or even higher-quality trailers is "the game will never look that good" and not "I wish the framerate was higher"? I think it's because it already looks solid in motion.
 
With today's TVs' motion interpolation adding little lag (even on 4K images), I think 60 fps has less meaning than before.
 
60 fps or bust. 30 fps is no longer good enough. I'd like open world games to reach 60, and not just corridor shooters. I will accept the visual tradeoffs this requires.
 
That one is debatable, I think. I always thought the main argument is that it plays better since controls are more responsive, not that it looks better in motion, considering the resolution sacrifice for higher rates. 30fps without stutter and with good use of motion blur (or even without it) doesn't look bad by any means. Why do you think the initial reaction of many people after they see in-engine (not in-game) or even higher-quality trailers is "the game will never look that good" and not "I wish the framerate was higher"? I think it's because it already looks solid in motion.

If you ever want a display that isn't highly blurred in motion, you're going to have to feed it a signal of at least 60 Hz, but probably something like 100 Hz with a flickering backlight or black frame insertion. I played Counter-Strike at 100 Hz on a CRT around 2000 or 2001. Gaming should be moving to low-persistence displays, even with TVs. This 30 Hz shit is for people who like to compare screenshots, not people who like to play games.
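For a sense of scale, here's a minimal sketch of the usual sample-and-hold persistence arithmetic (the 1000 px/s pan speed is just an assumed example value):

```python
# Perceived smear during an eye-tracked pan on a sample-and-hold display is
# roughly pan speed * time each frame stays lit (persistence).
pan_speed_px_per_s = 1000.0   # assumed example pan speed

def smear_px(persistence_ms):
    return pan_speed_px_per_s * (persistence_ms / 1000.0)

for label, persistence_ms in [
    ("30 Hz, full persistence",   1000 / 30),   # ~33.3 ms lit per frame
    ("60 Hz, full persistence",   1000 / 60),   # ~16.7 ms
    ("100 Hz, full persistence",  1000 / 100),  # 10 ms
    ("strobed / BFI, ~2 ms lit",  2.0),
]:
    print(f"{label}: ~{smear_px(persistence_ms):.0f} px of smear")
```

Frame rate alone shrinks the smear, but low persistence (strobing / black frame insertion) is what really cleans it up, and that in turn wants a high, stable frame rate to avoid flicker and double images.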
 
What about 50 Hz? I think most TVs worldwide can work at that refresh frequency, and here in Europe it's actually the TV transmission standard. It should be quite a bit easier to reach.
 
I thought the majority of displays were 60Hz with 50Hz support for compatibility with TV transmission. i.e. in Amurika and Japan, you don't get the option (60Hz only, and maybe 24/48 for judder-free Blu-ray), but places with a 50Hz legacy now have the option of 60Hz input/output?
 
I feel like CPU and GPU technology growth rates are both important in regards to allowing developers as much flexibility as possible for what they want to do in games.

The Zen architecture, just on the merits of going beyond the limitations of Jaguar, raises the ceiling for gaming as a whole once a large userbase with stronger CPUs is in the field. The same goes for a good GPU increase.
 
OK, so 50 Hz isn't available worldwide. Then what about 48 Hz, as per the Blu-ray standard? I think some standard for games has to be set. 48 Hz could be within reach at 720p or 1080p even on current-gen consoles.
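For reference, the frame-time budgets at the rates being discussed (trivial arithmetic, nothing assumed):

```python
# Milliseconds of frame budget at each refresh rate.
for hz in (30, 48, 50, 60):
    print(f"{hz} Hz -> {1000 / hz:.1f} ms per frame")
# 30 Hz -> 33.3 ms, 48 Hz -> 20.8 ms, 50 Hz -> 20.0 ms, 60 Hz -> 16.7 ms
```

So moving from 30 Hz to 48 Hz cuts the budget by ~12.5 ms per frame, while 48 Hz and 60 Hz only differ by ~4 ms.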
 
What if you want to do cities like AssCreed Unity or Witcher 3, but with three times the crowds and four times the quality of the behaviours and the animation?
What if you want to do a GTA VI with more cars, more players, and huge crowds?

In the current scenario, that's a GPU geometry output bottleneck. Increase the number of geometry processors, increase GPU clocks, adopt TBDR, TBR, DSBR, primitive shaders, you name it.
Low-level APIs seem to have taken away the draw call limitation, and I don't think the PS4 Pro or XBone X CPUs are bottlenecks in that regard.


What if PUBG 3 has even greater scope with an even more complex sandbox?
Then PUBG devs need to start properly optimizing the game. The thing is a car trying to run on square wheels right now.


better AI,
Right now, A.I. processing is more efficiently done on lots of small ALUs, both power and die-area wise, so there's little reason to spend more budget on the CPU for this.


and destructible environments?

Physics processing is also done more efficiently on lots of small ALUs, so again: little reason to spend more budget on the CPU for this.
 
Right now, A.I. processing is more efficiently done on lots of small ALUs, both power and die-area wise, so there's little reason to spend more budget on the CPU for this.

Are you talking about AI like we're seeing in automated cars, or actually about game AI that is already done on the GPU in current games that you know of? Any examples? Are there any game engines that already support AI done on the GPU?

EDIT - I suppose you could use GPU compute to "feed" AI in your own code, but without some kind of built-in functionality, would current GPUs be enough? Would it require further async compute capabilities?
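To make the "lots of small ALUs" point concrete, here's a hypothetical sketch (plain NumPy standing in for a compute shader; all names and tuning values are made up) of the kind of batched steering update that maps well to GPU-style hardware:

```python
import numpy as np

# Batched "seek target" steering for N agents at once. Written in NumPy for
# readability, but this structure-of-arrays math is exactly what you'd port
# to a compute shader to use the wide ALUs.
N = 10_000
rng = np.random.default_rng(0)
positions  = rng.uniform(0, 1000, size=(N, 2)).astype(np.float32)  # px
velocities = np.zeros((N, 2), dtype=np.float32)                    # px/frame
targets    = rng.uniform(0, 1000, size=(N, 2)).astype(np.float32)

MAX_SPEED = 5.0   # px per frame (assumed tuning value)
MAX_FORCE = 0.5   # px per frame^2 (assumed tuning value)

def steering_step(positions, velocities, targets):
    # desired velocity: straight at the target, at max speed
    to_target = targets - positions
    dist = np.linalg.norm(to_target, axis=1, keepdims=True) + 1e-6
    desired = to_target / dist * MAX_SPEED
    # clamp each steering component, integrate, then clamp the speed
    steer = np.clip(desired - velocities, -MAX_FORCE, MAX_FORCE)
    velocities = velocities + steer
    speed = np.linalg.norm(velocities, axis=1, keepdims=True) + 1e-6
    velocities = np.where(speed > MAX_SPEED,
                          velocities / speed * MAX_SPEED, velocities)
    return positions + velocities, velocities

positions, velocities = steering_step(positions, velocities, targets)
```

The decision layer on top (state machines, behaviour trees) stays branchy and pointer-chasing, which is a big part of why it tends to stay on the CPU even when the bulk math could move to compute.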
 
That one is debatable, I think. I always thought the main argument is that it plays better since controls are more responsive, not that it looks better in motion, considering the resolution sacrifice for higher rates. 30fps without stutter and with good use of motion blur (or even without it) doesn't look bad by any means. Why do you think the initial reaction of many people after they see in-engine (not in-game) or even higher-quality trailers is "the game will never look that good" and not "I wish the framerate was higher"? I think it's because it already looks solid in motion.
There's significantly more clarity at higher frame rates. The delta between two frames gets smaller and smaller as the frame rate goes up (mmm goodness, imagine frame rate at the speed of light? Man, it's like it's stopped). The higher the frame rate, the less difference between two frames, so clarity increases greatly because your mind doesn't need to fill in the gaps, and if your display can keep up with higher and higher frame rates, you're going to see significantly less ghosting etc. So overall, clarity in the image is increased. This won't change resolution of course, but the image will come across as significantly less blurry in the viewer's mind while in motion.

One thing to note: if you've ever played an FPS at 100+ fps on a 144Hz monitor and then jump to a TV to play a 30fps console game, you're going to notice what a slide show 30fps is, until you play long enough for your mind to adjust.
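A quick illustration of that frame-to-frame delta (the 180 deg/s turn speed and the 1920-pixel-wide 90-degree FOV are assumed example values):

```python
# How far the image jumps between consecutive frames during a camera pan.
turn_deg_per_s = 180.0     # assumed pan speed
px_per_deg = 1920 / 90.0   # assumed: 1920 px across a 90-degree FOV

for fps in (30, 60, 144):
    deg_per_frame = turn_deg_per_s / fps
    print(f"{fps} fps: {deg_per_frame:.2f} deg/frame "
          f"(~{deg_per_frame * px_per_deg:.0f} px jump between frames)")
# 30 fps: ~128 px jump, 60 fps: ~64 px, 144 fps: ~27 px
```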
 
What flavour of AI?
AFAIK, both types of neural network activities are more power efficient on GPUs. Training does well with FP32 and FP16, inference with FP16 and INT8, both of which are much cheaper to do on GPUs.
I do have some friends working in the field, and I haven't heard a lot about neural networks that need to work with something GPUs can't handle, like FP128.

Unless by "A.I." you actually mean craploads of scripting, i.e. state machines, but can that even be called "A.I." nowadays?
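If it helps, a minimal sketch of the reduced-precision inference point, using PyTorch purely as an illustration (the tiny network and sizes are made up; INT8 would additionally need a proper quantization step):

```python
import torch

# Same toy network run in FP32, then cast to FP16 on the GPU if one exists.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 8),
)
x = torch.randn(64, 128)

with torch.no_grad():
    out_fp32 = model(x)                    # FP32 inference on the CPU
    if torch.cuda.is_available():
        model_fp16 = model.half().cuda()   # cast weights to FP16
        out_fp16 = model_fp16(x.half().cuda())
        print(out_fp16.dtype)              # torch.float16
```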
 
In the current scenario, that's a GPU geometry output bottleneck. Increase the number of geometry processors, increase GPU clocks, adopt TBDR, TBR, DSBR, primitive shaders, you name it.
Low-level APIs seem to have taken away the draw call limitation, and I don't think the PS4 Pro or XBone X CPUs are bottlenecks in that regard.

In the case of the games I mentioned there are severe CPU bottlenecks on console, and it's definitely not a geometry output bottleneck. In games like Witcher 3, Hitman, and Unity, the consoles perform worse at stress points than PC setups with far less GPU geometry throughput but faster CPUs.

Additionally, the X1X brings 2.75 times (or more) the peak geometry throughput of the X1, but even with the "60 Hz" mode enabled, Witcher 3 bottoms out at about 30% faster than the X1 in town - about the same as the difference between the CPU clocks. 30 fps in Witcher 3 is not an X1X geometry output limitation. Not by a frikkin' mile!

I actually highly doubt geometry throughput is a significant bottleneck on console. The X1X running unpatched X1 games can achieve up to 2x (or more) the performance in shader-limited scenarios before it hits 60 fps with forced vsync. But in what you would expect to be CPU-bound areas ... it's about 30%. The only part of the system with that small a gap is the CPU. Everything else is a crapload faster.
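For reference, a quick check of the ratios in play (the clock speeds and TFLOPS figures below are the publicly stated specs; the 2.75x geometry figure is the one quoted above):

```python
# X1 vs X1X: which ratio matches the ~30% uplift seen in CPU-heavy areas?
x1_cpu, x1x_cpu = 1.75, 2.3      # GHz (publicly stated CPU clocks)
x1_gpu, x1x_gpu = 0.853, 1.172   # GHz (publicly stated GPU clocks)
x1_tf,  x1x_tf  = 1.31, 6.0      # TFLOPS (publicly stated compute)

print(f"CPU clock ratio:   {x1x_cpu / x1_cpu:.2f}x")   # ~1.31x -> ~30% faster
print(f"GPU clock ratio:   {x1x_gpu / x1_gpu:.2f}x")   # ~1.37x
print(f"GPU compute ratio: {x1x_tf / x1_tf:.2f}x")     # ~4.6x
# Quoted geometry throughput ratio: ~2.75x. Only the CPU clock ratio lines
# up with a ~30% uplift in town.
```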

On top of that ... we also have developers flat out saying that the CPU is a bottleneck, and that it simply won't let them get their games anywhere near 60 fps.

And we want complexity to go up in the future, not creep towards a slightly more stable 30 fps without progressing in terms of the worlds games present.

Then PUBG devs need to start properly optimizing the game. The thing is a car trying to run on square wheels right now.

It's also incredibly ambitious, and can sustain frame rates three or more times higher with a decent CPU.

12-player corridor shooters are going to have a lot of competition over the next few years, and from a game type that needs a boatload more CPU.

Right now, A.I. processing is more efficiently done on lots of small ALUs, both power and die-area wise, so there's little reason to spend more budget on the CPU for this.

Oh boy, that certainly isn't true for game AI!

Physics processing is also done more efficiently on lots of small ALUs, so again: little reason to spend more budget on the CPU for this.

And yet Havok is CPU-only, and even games that don't use Havok do pretty much all their non-graphics physics on the CPU. And PhysX on the GPU kicks the shit out of your frame rate ...

I dare say there are some types of physics calculation that prefer CPU architectures to GPU.
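A minimal sketch of that split (all values made up): the "wide" half of physics, integrating thousands of independent bodies, is trivially data-parallel, while the contact/constraint solve is the part that still favours a few fast cores.

```python
import numpy as np

# Data-parallel half: integrate N independent bodies in one vectorized pass.
# This is the kind of work that maps cleanly onto lots of small ALUs.
N = 50_000
rng = np.random.default_rng(1)
pos = np.zeros((N, 3), dtype=np.float32)
vel = rng.normal(size=(N, 3)).astype(np.float32)
gravity = np.array([0.0, -9.81, 0.0], dtype=np.float32)
dt = np.float32(1.0 / 60.0)

vel += gravity * dt   # one SIMD-friendly update for every body at once
pos += vel * dt

# The other half - iterative, order-dependent contact/constraint solving
# (Gauss-Seidel style sweeps over pairs), with branching and scattered
# memory access - is the part that tends to run best on a few fast CPU
# cores, which is roughly where a CPU solver like Havok sits.
```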
 