Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

I think we're all a bit prone to only accepting information that conforms to our individual biases.

We do have practical evidence though; so far the PS5 is performing beyond expectations if you're looking at the tflop numbers alone.

Until contrary evidence is provided it's difficult to accept a significant change in favour of the Xbox. I find it hard to believe that performance would increase for one because of a change in tools while similar increases wouldn't happen for the other.

Both are likely to get performance increases over the months, and unless a miracle happens, a 20-30% swing in favour of Xbox Series X is unlikely.

I'm just happy that both are performing broadly similar to one another.
 
This take always galls me. I like it in spirit, it's good to push back on the audience's incessant demands for constant, expensive tech progress, but it's also pretty clear you're not closely involved in art or graphics if you think we're anywhere close to any kind of ceiling.

Sure, there's a ton more R&D left to optimize and invent new ways around problems on current hardware, but it's also very easy to think of essential things to spend extra rendering budget on. (And it's not rare at all to see smaller devs with smaller staff make games that push hardware limits!)

He just said that a game's visuals are mostly the result of its budget in AAA development. In the end, people will be impressed by a mix of rendering and art talent.

Imo Demon's Souls is the best-looking title of the launch, and the funny thing is that it's a mix of "old" technology but with very high asset quality, thanks to the ability to stream GB/s of data from the SSD. There's nothing fancy beyond the froxel-based dynamic GI probe technology, reminiscent of RTX GI (probe usage too), and SSDO, which isn't an everyday technique on consoles. The game uses "old" technology well: high-quality shadow maps, capsule shadows, screen-space shadows for small objects, dynamic tessellation, screen-space reflections, volumetric lighting, great particles, PBR and so on, but at higher fidelity than last-generation consoles.

It also shares some qualities with Unreal Engine 5 without being as innovative and impressive on the geometry side of the engine: a high level of geometry paired with a dynamic GI engine and great texture assets.

Some games do individual things better or just as well: the RT shadows in COD Black Ops Cold War, the character models and hair-strand technology of Spider-Man: Miles Morales with good RT reflections, and an equally good dynamic GI system and good RT reflections on the Watch Dogs side.

But as a whole package, imo Demon's Souls looks better.
 
I admittedly haven't watched much NXGamer content because he comes across as not very knowledgeable on the subject matter and more of an excited fan of technology. I'm actually surprised to see anyone on Beyond3D reference his content in any serious way.
You shouldn't be. In NXGamer's video about COD he noticed a few things DF completely missed on XSX: the missing muzzle flash, the different RT implementation, and flickering in some shadows.

And VGTech was the first to notice that some framerate drops were actually random, and so unrelated to pure performance. He also gives precise pixel counts (while providing his screenshots) and exhaustive framerate stats.

Both of those channels are starting to become known and valued as good sources of technical accuracy, for good reason.
 
I admittedly haven't watched much NXGamer content because he comes across as not very knowledgeable on the subject matter and more of an excited fan of technology. I'm actually surprised to see anyone on Beyond3D reference his content in any serious way.
Yep -- this linked video was the first I've watched, and comparing a dated PC GPU to a console that was actually a target platform is incredibly spurious.

Aside from being a pointless exercise in the first place, I'm not sure how it has anything to do with comparing two consoles (with very similar specs, no less); even NXGamer's silly premise is very clearly not relevant to comparing these two machines.
 
Aside from being a pointless exercise in the first place, I'm not sure how it has anything to do with comparing two consoles (with very similar specs, no less); even NXGamer's silly premise is very clearly not relevant to comparing these two machines.
This is an incredibly disingenuous take on the video. The point being made is that the number of teraflops in isolation is irrelevant to performance, and that a number of other factors have greater sway on what is deliverable, which is what is said in the video you watched, so you should know this.

I'd assert that the GPU guts of PS5 and Series X are not actually that alike. It is known that Series X is RDNA2, and that PS5 has differences, but those differences are not known. Without knowing those differences it's difficult to calculate how they would impact the performance of 36 CUs at 2.3GHz vs 52 CUs at 1.8GHz, and whether the increases to cache and rasterisation deliver more performance.

Right now, analysis yields very little difference at all.
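
For reference, here's a minimal sketch of where the headline teraflop figures even come from, assuming the usual RDNA numbers of 64 FP32 lanes per CU and 2 ops per clock (FMA), with the commonly quoted maximum clocks. Note it says nothing about caches, fixed-function throughput or how well the CUs are actually fed:

```python
# Peak FP32 throughput sketch -- assumed figures: 64 FP32 lanes per CU,
# 2 ops per lane per clock (FMA). Clocks are the commonly quoted maximums.
def peak_tflops(cus, clock_ghz, lanes_per_cu=64, ops_per_clock=2):
    return cus * lanes_per_cu * ops_per_clock * clock_ghz / 1000.0

print(f"PS5 (36 CUs @ 2.23 GHz):  {peak_tflops(36, 2.23):.2f} TF")   # ~10.28 TF
print(f"XSX (52 CUs @ 1.825 GHz): {peak_tflops(52, 1.825):.2f} TF")  # ~12.15 TF
```

It's a ceiling, nothing more; every factor mentioned above sits outside that formula.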

Funny, I never saw or heard any of these complaints when NX was giving glowing reviews and analysis of Xbox One X games in comparison to PS4 Pro. All of a sudden he's biased and untrustworthy... :LOL:

Digital Foundry have also been accused of being biased. For both consoles. Sometimes in the space of a month. It's much easier for some fragile personalities to attack the people delivering news they don't relish than to accept the truth.
 
This is an incredibly disingenuous take on the video. The point being made is that the number of teraflops in isolation is irrelevant to performance, and that a number of other factors have greater sway on what is deliverable, which is what is said in the video you watched, so you should know this.

I'd assert that the GPU guts of PS5 and Series X are not actually that alike. It is known that Series X is RDNA2, and that PS5 has differences, but those differences are not known. Without knowing those differences it's difficult to calculate how they would impact the performance of 36 CUs at 2.3GHz vs 52 CUs at 1.8GHz, and whether the increases to cache and rasterisation deliver more performance.

Right, but by far the biggest factor in the difference between a console and a PC isn't covered in the video: one platform's specific hardware limitations were optimised for, and the other's were not. It's likely key parts of all the games sampled use code paths specific to the Xbox One version in performance-critical areas, whereas the PC GPU is running a path more general to other PC parts. Not to mention a different architecture!


Also, his weaker PC had... fewer CUs, a much smaller bus, much less bandwidth (and higher fillrate!) -- it's a completely apples-to-oranges comparison to PS5 vs XSX, where the XSX is on the "fast" side of all of those measurements from his comparison, yet you're clearly not seeing the kind of performance advantages over the PS5 that you see in his pointless comparison. I really can't see how you can watch that video and think it's at all relevant to what we're seeing right now.
 
He probably knew about the surprising performance of PS5 compared to XSX when he wrote that tweet and Medium article. Maybe in two years XSX will be a bit above PS5, but in the end the gap is close.
Not going to say you're wrong.
But an alternative view of what he said is that the tools, profilers, etc. are what's important and have a big impact. Maybe he wrote it knowing the state of the Xbox GDK. :runaway:
 
As you would expect, there are a lot of factors contributing to this: configuration of the GPU, functional compute units (regardless of the architecture), clock speeds, cache, memory, bandwidth, CPU and APIs. None of these things are equal between PS5 and Series X, so why are people focusing on just one metric and expecting the higher value to result in more performance?

There's a lot more than the TF number that makes the situation regarding XSX and PS5 interesting. For a start, the base architecture is the same, and it's likely the L0 cache arrangement per CU is similar if not the same. So in terms of hardware there's possibly something going on wrt L1 or L2 bandwidth, or perhaps the fixed-function units and their ability to feed the CUs.

Whatever the reason, the huge drop in IPC observed in current games is interesting, and there will be an explanation for it. I don't understand the desire to shut down speculation about this.

My observation over the years is that the amount of compute per pixel and per primitive moves in only one direction. It's unstoppable. This will necessarily change the way rendering pipelines are stressed. The interesting question is how this will be reflected in console performance as we move through the generation. I think it is more likely to favour XSX (relatively), especially with dynamic clocks in the mix, though I also think it's likely that XSX won't ever match PS5's IPC or show a difference that fully reflects the "TF difference".

Nobody knows how optimized PS5's tools are either. The thing about optimizing is that effective techniques only come with experience, and both consoles are brand new. Which techniques work better than others, and how the tools will adapt to help developers exploit them, is something that will take a while to shake out. What we do know, because Dirt 5's technical director said so, is that the Xbox tools are easy to use and mature.

We also know from DF that not everyone has found the transition to the GDK straightforward or pleasant. IMO it's best not to take a single opinion as representative of industry-wide experience. DF have also hinted that some developers are unhappy with the current state of the GDK.

There's no doubt that things will improve on both consoles as time passes though.

I think we're all a bit prone to only accepting information that conforms to our individual biases.

We do have practical evidence though; so far the PS5 is performing beyond expectations if you're looking at the tflop numbers alone.

Until contrary evidence is provided it's difficult to accept a significant change in favour of the Xbox. I find it hard to believe that performance would increase for one because of a change in tools while similar increases wouldn't happen for the other.

Both are likely to get performance increases over the months, and unless a miracle happens, a 20-30% swing in favour of Xbox Series X is unlikely.

I'm just happy that both are performing broadly similar to one another.

I can't see the XSX ever achieving a 20 - 30% performance gain over the PS5 at this point. Even VRS isn't likely to make that happen.

I'm pretty keen to see how game workloads change over time though. We're about 7 years into games being targeted at PS4 resolutions, geometry and shaders. Things like mesh shaders, ray tracing and more complex pixel shaders are likely to increase the amount of compute needed relative to some of the fixed-function stuff. I guess we'll see how that changes things, if at all.
 
Also, his weaker PC had... fewer CUs, a much smaller bus, much less bandwidth (and higher fillrate!) -- it's a completely apples-to-oranges comparison to PS5 vs XSX, where the XSX is on the "fast" side of all of those measurements from his comparison, yet you're clearly not seeing the kind of performance advantages over the PS5 that you see in his pointless comparison. I really can't see how you can watch that video and think it's at all relevant to what we're seeing right now.

Because, for about the fourth time, the video was only intended to demonstrate that teraflops are not the be-all and end-all of metrics. The reason, I assume, for using two different GPU architectures was to emphasise this.

There is a whole bunch of folks here who think that because the maximum theoretical performance (as measured in teraflops) of Series X is higher than PS5's, it will perform better. And yes, Series X has 10GB of GDDR6 rated higher than PS5's GDDR6, but what is the real-world read/write performance of each system's bus? How many accesses from 52 CUs miss the cache compared to Sony's 36 CUs, and what is the real difference in performance? Without access to each console's profiling tools, it's all guesswork.

Some key factors are simply unknown, but a lot of folks have made up their minds: in some way Series X is held back, nobody can explain why, and it's some magical reason not impacting PS5 even though some people claim the two are really similar. This is just WTF logic.
 
Not going to say you're wrong.
But an alternative view of what he said is that the tools, profilers, etc. are what's important and have a big impact. Maybe he wrote it knowing the state of the Xbox GDK. :runaway:

As I said, the gap is close. Imagine Xbox Series X had a 50% advantage in flops and in memory bandwidth; we wouldn't be having this discussion.

And he himself said in the Medium article that the gap is not widening but closing.

Imo the XSX will end up a bit better than PS5, not because of the TFLOPs but because of the memory bandwidth; maybe that is currently bottlenecked by the API.
 
What next-gen games do we have power consumption data for?
Just to provide some updates:
Demon's Souls will rail at 205-210W; this is for both the 30 and 60 fps modes.
Call of Duty Warzone holds 116-123W.

Gears 5 is 190-196W, which is still currently the leader in power consumption on XSX.
167-191W on FH4
153-191W on Warzone 60Hz
172-197W on Warzone 120Hz

Other metrics in my signature under XSX Optimized titles.
Sheet 1 is for XSX.
Sheet 3 will be the same thing for PS5; I just need more time to fill out the values.
But it looks constant at around 205-210W for PS5 once it gets up to temperature; the fan spools up and takes a bit of time to catch up before it hits that rail.
I'll try a couple more titles.

I've reached out to my BIL who works at Ubisoft and ordered up WD and AC Valhalla. I can do some testing.

In the meantime, I need to get a new wattage meter and hook it up to a Raspberry Pi for constant polling so that I can get power measurements per frame, at 16ms or something like that. I just need to find a device that can pin out its data, and the Raspberry can do the polling.
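
Something like this is what I have in mind for the Pi side; a minimal sketch, assuming a hypothetical read_watts() hooked up to whatever meter I end up with (the actual device and its interface are still TBD):

```python
import csv
import random
import time

POLL_INTERVAL = 0.016  # ~16 ms, roughly one sample per 60Hz frame

def read_watts():
    # Placeholder: replace with the real meter read (serial/I2C/whatever the device pins out).
    # Returns a fake value here just so the loop runs end to end.
    return 200.0 + random.uniform(-5.0, 5.0)

with open("power_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp_s", "watts"])
    next_tick = time.monotonic()
    while True:
        writer.writerow([f"{time.monotonic():.4f}", f"{read_watts():.1f}"])
        next_tick += POLL_INTERVAL
        # Sleep only the remaining slice so timing drift doesn't accumulate.
        time.sleep(max(0.0, next_tick - time.monotonic()))
```

The CSV can then be lined up against captured frame times afterwards.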
 