Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

CPUs and GPUs are pretty different in this regard. GPUs are literally designed and built for parallel workloads (which graphics is well suited to). Those are the tasks they're for. Not the same as trying to get sequential code to work in parallel, or on a transputer.

Sure, but the fundamental problem is the same, and it's as old as early ring/supercomputers.

Say a specific rendering task may not necessarily parallelize widely, because there is a limit on the available bandwidth when the job is scheduled, or there is a dependency on other data generated in parallel that can't be produced faster, and maybe PS5 can only use 24 of its 36 CUs for the task. Modern GPUs are designed so that the remaining processing units can do other tasks, and that works well as long as the rendering task (and any other tasks feeding it) using the 24 CUs doesn't need all of the cache. If it does, tasks using the other 12 CUs will muddy the cache, causing slow trips to memory.
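To put rough numbers on that cache-muddying effect, here's a toy sketch; the hit rates and latency figures below are made-up illustrative values, not real PS5 numbers:

```python
# Toy model of the scenario above: a render task on 24 CUs shares the
# GPU cache with unrelated work running on the remaining 12 CUs.
# All figures are illustrative assumptions, not PS5 specs.
CACHE_HIT_CYCLES = 40    # assumed cache-hit latency
MEM_MISS_CYCLES = 400    # assumed cost of a trip out to memory

def avg_latency(hit_rate: float) -> float:
    """Average access latency for a given cache hit rate."""
    return hit_rate * CACHE_HIT_CYCLES + (1.0 - hit_rate) * MEM_MISS_CYCLES

alone = avg_latency(0.95)      # render task has the cache to itself
contended = avg_latency(0.70)  # other tasks evict its lines

print(f"alone:     {alone:.0f} cycles/access")
print(f"contended: {contended:.0f} cycles/access ({contended / alone:.1f}x slower)")
```

Even a modest drop in hit rate more than doubles the average access cost, which is why those idle CUs aren't free to fill.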
 
With closed-system optimizations, you're likely looking at 2080S+ performance for the XSX, which is a great baseline for game development. This is for traditional gaming.

How it stacks up in Ray Tracing needs to be separated out.

With Lockhart being a thing, can we really call it the baseline? Not to mention PS5.

I think we may end up with 4 TFLOPS RDNA2, a SATA SSD, and a lower-clocked CPU as the industry-wide baseline for this gen.
 
With Lockhart being a thing, can we really call it the baseline? Not to mention PS5.

I think we may end up with 4 TFLOPS RDNA2, a SATA SSD, and a lower-clocked CPU as the industry-wide baseline for this gen.
I doubt it will have a SATA SSD with a Zen 2 CPU. That would be absurd.
 
With Lockhart being a thing, can we really call it the baseline? Not to mention PS5.

I think we may end up with 4 TFLOPS RDNA2, a SATA SSD, and a lower-clocked CPU as the industry-wide baseline for this gen.
Maybe the Series X ended up with a considerably slower SSD because using a 4x PCIe 4.0 NVMe drive was too expensive to put in Lockhart, so they went with a 2x PCIe 4.0 solution instead.

That said, if Lockhart is still coming, then I hope they standardize the SSD solution (raw NAND performance + decompression blocks); otherwise many of the gameplay and QoL innovations we should expect this gen from the revamped I/O may be at risk even for multiplatform games. And that would be a shame.
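For reference, some napkin math on those lane counts (PCIe 4.0 runs at 16 GT/s per lane with 128b/130b encoding; the 2.4 GB/s raw figure is XSX's announced spec):

```python
# Napkin math for the PCIe 4.0 lane speculation above.
# PCIe 4.0: 16 GT/s per lane, 128b/130b encoding,
# i.e. roughly 1.97 GB/s of usable bandwidth per lane.
GBPS_PER_LANE = 16 * (128 / 130) / 8   # ~1.97 GB/s

for lanes in (2, 4):
    print(f"PCIe 4.0 x{lanes}: ~{lanes * GBPS_PER_LANE:.1f} GB/s")

# XSX's announced 2.4 GB/s raw SSD rate fits comfortably within x2,
# so a 2-lane link would not be the bottleneck there.
print(f"XSX raw 2.4 GB/s uses {2.4 / (2 * GBPS_PER_LANE):.0%} of an x2 link")
```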
 
I doubt it will have a SATA SSD with a Zen 2 CPU. That would be absurd.

I'm talking more about the baseline for the industry, not the consoles. Since PC users may not take kindly to mandatory SSD upgrades to the highest end in the near future, devs being extremely conservative when optimizing third-party games for non-HDD storage seems possible, pinning the baseline at levels SATA drives can keep up with.

I mean, most major games are still using HDDs as the baseline, so even that jump should be pretty good.
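A rough sequential-throughput ladder behind that "even that jump" point; the HDD and SATA figures are ballpark, the console NVMe rates are the announced raw specs:

```python
# Ballpark sequential throughput for each storage tier (GB/s).
drives = {
    "7200rpm HDD": 0.15,     # ~150 MB/s sequential, ballpark
    "SATA III SSD": 0.55,    # interface-limited at ~550 MB/s
    "XSX NVMe (raw)": 2.4,   # announced
    "PS5 NVMe (raw)": 5.5,   # announced
}
base = drives["7200rpm HDD"]
for name, gbps in drives.items():
    print(f"{name:>15}: {gbps:.2f} GB/s (~{gbps / base:.0f}x HDD)")
```

Even a SATA-SSD baseline is several times the HDD one that most multiplatform games are built around today.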
 
HW features are a more important issue in supporting rendering pipelines across platforms when it comes to scaling for performance. The bigger issues are the RAM & SSD requirements, and we don't know what LH will have for the former.

Texture assets are the first thing to cross people's minds, but we're also expecting the SSD to deliver near-instant loading, so less buffering in theory (I really don't know about that, for certain engines at least). At the same time, multiplatform devs have to consider where they draw the line on Windows and what sorts of requirements are reasonable.

The Steam HW survey also indicates nearly a third of users have GTX 1050/1060-level GPUs, so devs do have to be somewhat mindful of scaling down to a lower GPU RAM reality.

As for the CPU, the clocks shouldn't be too far off on LH for it to be such a drastic issue (unless MS does something insane there, but the thermal/yield benefits below 80-90% of Anaconda's clocks are probably not that worthwhile in that sense). As a baseline, it's a massive jump in CPU capability as it is. The upshot is that it just means either some CPU-related dips on Lockhart, or smoother, more consistent performance on Oberon/Anaconda for particular framerate targets (if the CPU is even the problem to begin with). On PC, it'll be interesting to see how it goes, since Zen 2-level performance is practically not doubled anywhere, nor a significant presence in the market yet (e.g. a 30fps CPU-bound game would be that much harder to double).
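The "harder to double" point is just frame-budget arithmetic, assuming CPU work scales roughly linearly with frame rate (real games won't be perfectly linear):

```python
# Frame-budget arithmetic behind the "harder to double" point.
def frame_budget_ms(fps: float) -> float:
    return 1000.0 / fps

for src, dst in ((30, 60), (60, 120)):
    speedup = frame_budget_ms(src) / frame_budget_ms(dst)
    print(f"{src} -> {dst} fps: {frame_budget_ms(src):.1f} ms -> "
          f"{frame_budget_ms(dst):.1f} ms per frame, ~{speedup:.0f}x CPU throughput needed")
```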
 
I think we may end up with 4 TFLOPS RDNA2, a SATA SSD, and a lower-clocked CPU as the industry-wide baseline for this gen.
I think when talking baseline you have to say in relation to what, i.e. resolution.

I wouldn't say 12 TF RDNA2 is anything special for 8K, but I think it's good for 4K.

So for Lockhart, the question is: has MS made a good 1080p machine relative to the 4K XSX?
Memory, CPU, bandwidth, and IO speed to support Lockhart at <1440p, not 4K.
So that's the baseline for 1080p PCs,
and XSX is the baseline for 4K PCs.
As normal, graphics can be user-configured around those.

Also worth adding that it's a baseline feature set as well.
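Some napkin math on compute per pixel, assuming the rumored ~4 TF Lockhart figure alongside the announced 12.15 TF Series X:

```python
# Compute-per-pixel comparison for the 1080p-vs-4K baseline question.
# 4 TF is the rumored Lockhart figure; 12.15 TF is the announced XSX rate.
RES = {"1080p": 1920 * 1080, "4K": 3840 * 2160}

def mflops_per_pixel(tflops: float, pixels: int) -> float:
    return tflops * 1e12 / pixels / 1e6

lockhart = mflops_per_pixel(4.0, RES["1080p"])
xsx = mflops_per_pixel(12.15, RES["4K"])
print(f"Lockhart @1080p: ~{lockhart:.2f} MFLOPS/pixel")
print(f"XSX      @4K:    ~{xsx:.2f} MFLOPS/pixel")
# On pure shader math per pixel, a 4 TF machine at 1080p would actually
# sit slightly above a 12 TF machine at 4K; the open questions are
# bandwidth and RAM, not compute.
```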
 
With closed-system optimizations, you're likely looking at 2080S+ performance for the XSX, which is a great baseline for game development. This is for traditional gaming.

How it stacks up in Ray Tracing needs to be separated out.
XSX should be around 2080S+ even without closed-system optimizations. Navi 10 scaled to XSX's flops would be there already, and XSX should be faster clock for clock and flops for flops, unless of course there's some bigger bottleneck somewhere which doesn't apply to Navi 10.
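A rough sketch of that scaling argument, assuming linear scaling with flops (real performance also depends on bandwidth and per-clock architectural gains):

```python
# Scale Navi 10 (5700 XT) linearly to XSX's compute.
def tflops(cus: int, mhz: float) -> float:
    return cus * 64 * 2 * mhz * 1e6 / 1e12   # 64 ALUs/CU, 2 ops/clock (FMA)

navi10 = tflops(40, 1905)    # 5700 XT boost clock
xsx = tflops(52, 1825)       # announced Series X config
print(f"Navi 10: ~{navi10:.2f} TF, XSX: ~{xsx:.2f} TF, "
      f"scale ~{xsx / navi10:.2f}x")
# A 5700 XT sits roughly in 2070S territory, so ~1.25x of it on paper
# puts XSX in the 2080/2080S neighborhood before any RDNA2 per-clock gains.
```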
 
Downscaling to lower hardware is generally a "suck it up" approach. I don't imagine anyone using Lockhart as the foundation platform. AAA games, for example, need to sell on the WOW factor against the competition, so it's in their best interest to target the stronger platform (XSX, not LH) to give themselves the best shot at showing the goods. LH players of those games might suffer from unstable frame rates, pop-in, stuttering, etc. It is what it is.
 
HW features are a more important issue in supporting rendering pipelines across platforms when it comes to scaling for performance. The bigger issues are the RAM & SSD requirements, and we don't know what LH will have for the former.

Indeed, and Switch is a great example of that: despite the vast difference in processing capabilities, having near parity with PS4 and Xbox One in terms of hardware features allowed it to get quite a few impressive ports.
 
With closed-system optimizations, you're likely looking at 2080S+ performance for the XSX, which is a great baseline for game development. This is for traditional gaming.

How it stacks up in Ray Tracing needs to be separated out.
Not just Ray Tracing, but also FP16 performance, ML performance (if that ever gets used), VRS, Mesh Shaders, Sampler Feedback, etc. The features of Series X are the same as in Turing, so whichever architecture delivers the faster fps with them is going to determine who comes out on top.
 
Not just Ray Tracing, but also FP16 performance, ML performance (if that ever gets used), VRS, Mesh Shaders, Sampler Feedback, etc. The features of Series X are the same as in Turing, so whichever architecture delivers the faster fps with them is going to determine who comes out on top.

That's correct, and we have some indication of XSX's INT4/INT8 performance vs. tensor cores, but a lot of the other features are unknowns. That makes it hard to know what is there as a competitive checkbox, what is actually competitive in practice, and what doesn't have a counterpart at all.
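For what it's worth, the announced INT4/INT8 figures follow directly from XSX's FP32 rate via RDNA2's packed dot-product math (2x FP16, 4x INT8, 8x INT4):

```python
# Packed-math arithmetic behind XSX's announced ML rates.
FP32_TFLOPS = 12.15  # announced XSX FP32 rate
rates = {"FP16 (2x)": 2, "INT8 (4x)": 4, "INT4 (8x)": 8}
for name, mult in rates.items():
    print(f"{name}: ~{FP32_TFLOPS * mult:.1f} trillion ops/s")
```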

That's why the Gears comparison of traditional gaming performance is the one that makes sense.

Also, Turing will be old hat by the time these consoles show up. Let's see what Ampere brings at GTC.
 
That's why the Gears comparison of traditional gaming performance is the one that makes sense.
Gears 5 doesn't make that much sense either, I'm afraid: it doesn't present a good case for memory contention (very weak load on the CPU), and that's a very important factor in determining whether Series X can approach 2080 performance in traditional rasterization in the first place.
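A toy illustration of why CPU load matters for that comparison on a unified-memory console; the contention penalty factor is purely an assumed number, while 560 GB/s is XSX's announced fast-pool bandwidth:

```python
# On a unified-memory console, the CPU and GPU share the same bus, and
# CPU traffic tends to cost the GPU more than its raw byte count
# (page/bank conflicts). The penalty factor is a guess for illustration.
TOTAL_BW = 560.0          # GB/s, XSX's announced fast-pool figure
CONTENTION_PENALTY = 2.0  # assumed: each CPU GB/s costs the GPU ~2 GB/s

def gpu_bandwidth(cpu_gbps: float) -> float:
    return TOTAL_BW - cpu_gbps * CONTENTION_PENALTY

for cpu in (5, 20, 40):   # light (Gears-5-like) vs heavy CPU loads
    print(f"CPU using {cpu:>2} GB/s -> GPU sees ~{gpu_bandwidth(cpu):.0f} GB/s")
```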
 
Reminds me of "we are mandating 720p on 360"

I honestly find these kinds of claims annoying. Devs have their own priorities that these HW manufacturers can't account for, and that's how it should be. If a dev wants to prioritize higher fps, or sacrifice fps to spend CPU power on other things, that should be the standard on console.

Nobody should be in the business of forcing devs to do anything, especially as they push the hardware with things like RT as the gen goes on.

Yes, Ubi seems to have made the 30 fps choice; it's the only game on the Xbox site without 60 fps on its page:

https://www.xbox.com/en-US/games/assassins-creed-valhalla
 
Reminds me of "we are mandating 720p on 360"
Isn't the important word "mandating" there?
I've not heard MS say they're mandating 30fps, but I could've missed it.
They have made a point of saying they wanted to make a system where below 60 is more about design choice than a limitation of the system (e.g. the CPU).

Standard output just means that it's their expectation that the majority will be 60fps.
 
Isn't the important word "mandating" there?
I've not heard MS say they're mandating 30fps, but I could've missed it.
They have made a point of saying they wanted to make a system where below 60 is more about design choice than a limitation of the system (e.g. the CPU).

Standard output just means that it's their expectation that the majority will be 60fps.

"Itll much easier to attain 60fps or higher while making great looking and playing games than ever before. But we will leave the details of what devs want to do up to them based on their creative goals" is what should have been in that tweet.

There's no reason to assume any sort of standard will exist any more than in any other gen. Games could have been standardized to 60fps at the outset of development and built around that this gen, or last gen, or the gen before that. Hell, people still go on about the Dreamcast being a largely 60fps machine. Fewer things are possible, but it's a doable thing.

The thing is that devs can clearly do a lot more than that... which is something that doesn't change regardless of the power of the console. I've said it for a while, but don't expect 120fps or 60fps to be the standard for this gen, especially when devs want to get creative.

Especially those people who have been crying about the cross-gen games looking like current-gen affairs. It's a tradeoff, as usual, for closed boxes.
 
Reminds me of "we are mandating 720p on 360"

I honestly find these kinds of claims annoying. Devs have their own priorities that these HW manufacturers can't account for, and that's how it should be. If a dev wants to prioritize higher fps, or sacrifice fps to spend CPU power on other things, that should be the standard on console.

Nobody should be in the business of forcing devs to do anything, especially as they push the hardware with things like RT as the gen goes on.
Yeah, I don't think it's a mandate. It's probably what they want devs to target, though. I don't think 1080p on PS4 was a mandate, but most games chose to be 1080p, for instance.
 