Digital Foundry Article Technical Discussion [2020]

Because the bottleneck is elsewhere. Series X isn't utilising all its bandwidth at the moment (or all its ALU).
Maybe the bottleneck is simply the whole design. No other AMD GPU has so many CUs per shader array. Because of that, we know the L2 caches are kept quite busy relative to the TFLOPS on tap (compared to PS5). And you have that unique split memory (which no other recent machine uses, for good reason: it causes additional memory contention). Finally, there is some customization on PS5: some we know helps with bandwidth, other things we probably don't know about yet, particularly around the CPU caches.
 
And don't forget that some of the XSX's RAM is slower than the PS5's.

Not much, and not by much. I don't think it will be much of a factor at all for a properly structured game.

Maybe the bottleneck is simply the whole design. No other AMD GPU has so many CUs per shader array. Because of that, we know the L2 caches are kept quite busy relative to the TFLOPS on tap (compared to PS5). And you have that unique split memory (which no other recent machine uses, for good reason: it causes additional memory contention). Finally, there is some customization on PS5: some we know helps with bandwidth, other things we probably don't know about yet, particularly around the CPU caches.

I don't think the memory setup particularly causes contention issues, at least not so much more than normal. And I don't think MS/AMD would have made a larger chip with a fatter bus that performs like a smaller one (not even talking about the PS5 here - thinking of the 5700XT).

Most likely thing is that early software is just that. Final Xbox dev kits were late, dev environment was new, unfamiliar and late (and not universally liked). MS being MS. They're run by diktat and make decisions that seem odd to most of us.

Developers don't actually care about most of the things we talk about here. Their favourite platform is generally the best one for making their game on. At the start of last gen that was PS4. At the start of this gen it appears to be PS5.
 
Maybe we can learn more from the dGPU variants soon, what they're doing and how they perform. At least, they all seem to be (very) highly clocked, or able to be.
 
But still, when I read that the XSX has more bandwidth, eeeeh, it's a little more complex than that.
The slower portion of memory in the XBSX is still pretty fast though. At 336 GB/s it's still faster than the One X, which was 326 GB/s, so nothing to sneeze at, I guess. I don't think we've heard from developers yet about XBSX's memory setup, but I am curious what their thoughts are. It could be that the slower memory is easily used up by things that don't require massive amounts of bandwidth, or maybe it does make things a little more difficult for devs. Either way, it's something that will get worked out over time, and at least it's nowhere near as complicated as last gen with the ESRAM on XB1.
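For anyone who wants to sanity-check those figures, they fall straight out of the bus layout. A quick back-of-the-envelope sketch (Python, using the publicly known chip configuration; the pool naming is just mine):

```python
# Back-of-the-envelope bandwidth math for the Series X memory layout.
# Known config: ten GDDR6 chips at 14 Gbps per pin, 32-bit interface each.
# Six chips are 2 GB and four are 1 GB, which creates the 10 GB / 6 GB split.

GBPS_PER_PIN = 14    # GDDR6 data rate per pin (Gb/s)
BITS_PER_CHIP = 32   # interface width per chip
CHIPS_TOTAL = 10     # all ten chips interleaved -> the "fast" 10 GB pool
CHIPS_SLOW = 6       # only the six 2 GB chips back the "slow" 6 GB pool

def bandwidth_gbs(chips: int) -> float:
    """Aggregate bandwidth in GB/s across `chips` interleaved chips."""
    return chips * BITS_PER_CHIP * GBPS_PER_PIN / 8  # bits -> bytes

print(bandwidth_gbs(CHIPS_TOTAL))  # 560.0 GB/s (10 GB pool)
print(bandwidth_gbs(CHIPS_SLOW))   # 336.0 GB/s (6 GB pool)

# One X for comparison: 384-bit bus at 6.8 Gbps
print(384 * 6.8 / 8)               # 326.4 GB/s
```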
 
What part of Sackboy uses UE4 on PS5 and is out now don’t you understand exactly? ;)

My response was not about your post, which is why it wasn't quoted.

It was for the general discussion about UE5 and the next-gen games with crazy geometry and full GI. Developers who don't have the time budgeted from management to create all of that themselves will have to wait for UE5 to be available.

I think Demon's Souls is a taste of a part of true next-generation games: crazy geometry at nearly 1 polygon per pixel, great textures at close to 1 texel per pixel, and a fully dynamic GI system. Later, maybe some RT reflections and/or shadows will come into the mix, or not (UE5).
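Just to put "1 polygon per pixel" in scale (rough arithmetic, my own numbers, nothing official):

```python
# Rough scale of "one visible polygon per pixel" at common resolutions.
for name, w, h in [("1080p", 1920, 1080), ("1440p", 2560, 1440), ("4K", 3840, 2160)]:
    tris = w * h  # one visible triangle per pixel
    print(f"{name}: ~{tris / 1e6:.1f}M tris/frame, "
          f"~{tris * 60 / 1e6:.0f}M tris/s at 60 fps")
# 1080p: ~2.1M/frame (~124M/s), 1440p: ~3.7M (~221M/s), 4K: ~8.3M (~498M/s)
```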
 
Yes, but it should be clear that the new IO stack is there for next-gen purposes so that UE4 games can use the benefits right away. I'm sure you don't get all possible benefits until you do some work, but because UE4 is set up to be scalable and to abstract some parts, it can be updated transparently to some extent, and I'm sure that work has been underway for a year or more.
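To illustrate what "updated transparently" could look like in any engine (purely a hypothetical sketch, not UE4's actual API; all names here are made up):

```python
# Hypothetical sketch: game code talks to an abstract IO interface, so the
# platform layer can swap in a faster backend without the game changing.
from abc import ABC, abstractmethod

class AsyncFileBackend(ABC):
    @abstractmethod
    def read(self, path: str, offset: int, size: int) -> bytes: ...

class GenericBackend(AsyncFileBackend):
    def read(self, path, offset, size):
        with open(path, "rb") as f:  # plain buffered OS read
            f.seek(offset)
            return f.read(size)

class ConsoleIoBackend(AsyncFileBackend):
    def read(self, path, offset, size):
        # Imagine this dispatching to a hardware-decompression IO queue;
        # the caller's code is identical either way.
        raise NotImplementedError("platform SDK call would go here")

def load_asset(backend: AsyncFileBackend, path: str) -> bytes:
    # Game code never names the backend it runs on.
    return backend.read(path, 0, 4096)
```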
 
For me it was also telling how little XSX game footage was shown pre-launch: you had Sony showing off R&C gameplay, which looked great, and MS didn't have anything to show off this power they kept banging on about.

Assuming the rumors are true that the GDK wasn't in a good, possibly even usable, state prior to this summer, it'd be hard to show games running if developers only got a decent GDK sometime this summer.

It'd be interesting if part of the delay for Cyberpunk 2077 was the seemingly rushed nature of MS' GDK.

Regards,
SB
 
If it's true that the GDK is not up to snuff yet, it gives me some hope... Then maybe even BC could get better than it already is? Ironing out those 50-60 fps games such as Sekiro would be sweet.
 
if anything, PS5 seems to have the more "exotic" architecture, so the learning curve could be higher on PS5, but that's just supposition of course.
It's said that devs are not even using the GE (Geometry Engine) like they could, and won't for at least a year.
 
if anything, PS5 seems to have the more "exotic" architecture, so the learning curve could be higher on PS5, but that's just supposition of course.
The I/O system is supposedly transparent. Changing clock speeds are something most multiplatform developers will be used to, though; they're the norm on PCs and mobile devices.
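One concrete reason variable clocks aren't scary: most engines already budget by measured frame time rather than by clock speed, e.g. with a dynamic-resolution controller along these lines (an illustrative toy, with my own made-up numbers, not any engine's real code):

```python
# Minimal dynamic-resolution controller: the workload tracks measured frame
# time, so it adapts to a clock dip the same way it adapts to a heavy scene.
TARGET_MS = 16.6  # 60 fps GPU budget

def update_scale(scale: float, gpu_ms: float) -> float:
    """Nudge the render scale toward the frame-time budget."""
    error = gpu_ms / TARGET_MS        # >1.0 means we're over budget
    scale /= error ** 0.5             # damped correction to avoid oscillation
    return min(1.0, max(0.5, scale))  # clamp between 50% and native res
```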
 
if anything, PS5 seems to have the more "exotic" architecture, so the learning curve could be higher on PS5, but that's just supposition of course.
It's said that devs are not even using the GE (Geometry Engine) like they could, and won't for at least a year.

I dunno; the thing with PS5 is, a lot of its "exotic" features are either simply Sony's variants of certain RDNA 2 features, or things like cache scrubbers, which aren't all that exotic a concept in computing, though maybe they're new for a games console (I'd have to do some research and see if older consoles had them, or some variation of the concept, implemented at the hardware level). Even the SSD I/O isn't terribly exotic; there are (most likely) mostly repurposed Zen 2 cores for things like the Coherency Engines, and the technology at the end of the day is commonly understood; it's still interfacing with NAND chips, etc.

Even the Tempest Engine isn't too "exotic": a repurposed CU designed to act more like an SPE from Cell, and I would say the vast majority of 3P devs (and certainly Sony's 1P devs) are familiar to a large extent with Cell and the SPE after having worked on it for several years. There's always been some talk about the extent to which it could be used to "boost" graphics rendering, but realistically, how would that work in practice? There will always be some audio to work on, though I understand PS5 has a regular DSP in it too for more standard audio tasks. Even so, what could the Tempest Engine actually do for graphics when it's a stripped-down CU?
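For what it's worth, the kind of work that suits a stripped-down CU is wide, branch-free DSP: convolving many sources with per-source filters at once, for example. A toy sketch of the shape of that workload (illustrative only; numpy standing in for the actual hardware):

```python
# The workload shape a CU-like unit wants: many independent convolutions,
# no branching, all data-parallel. Illustrative HRTF-style spatialization.
import numpy as np

def spatialize(sources: np.ndarray, hrtfs: np.ndarray) -> np.ndarray:
    """sources: (n, samples); hrtfs: (n, taps) per-source filters.
    Every source convolves independently, then the results are mixed."""
    out = [np.convolve(s, h) for s, h in zip(sources, hrtfs)]
    n = max(len(o) for o in out)
    return sum(np.pad(o, (0, n - len(o))) for o in out)

# 64 simultaneous sources, 1024-sample blocks, 128-tap filters
mix = spatialize(np.random.randn(64, 1024), np.random.randn(64, 128))
```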

From everything Sony has said about the SSD I/O handling a lot of the work for devs on its own (or automating a large part of the process), to the information (however scant) that PS5 devkits are a lot like supercharged PS4 devkits (using similar, but much expanded, APIs), I don't see where the large learning curve is. Then again, that's probably also because I'm not on the "GE has RDNA 3" train either xD. IMHO there are no logical grounds to conclude any specific RDNA 3 features in the GE, since the speculation is incredibly barren. I think most of it has come from people who expected Sony to say more about the system's features after AMD's RDNA 2 presentation, or who were looking for confirmation of PS5 features in that presentation and, for whatever reason, completely missed the obvious clues (AMD Smart Access, which seems like their version of PS5's variable frequency; although, going by more recent knowledge, the underlying support for implementing a feature like Smart Access was always technically there in the PCIe 4.0, and even PCIe 3.0, standard).

OTOH Series X is already a bit tepid out of the gate because the SDKs have been running late in maturity. The "split" memory (not really split the way XBO's or PS3's memories were, let alone older consoles') might present a bit of a learning curve, though not much of one: the slower pool should be used for CPU and audio as intended, but if graphics assets can spill over into it, that has to be managed well by the developer, even if I don't think contention for bandwidth would be anything significant. While I wouldn't say saturating a wider GPU is necessarily difficult (AMD, Nvidia, Intel etc. are all going wider and wider, so there must be some workload-scalability benefit to encourage it, otherwise why waste the money?), certain workloads may require more optimization than on a narrower GPU that's clocked faster, like the PS5's. And with XvA there are things like the mip-blending hardware in the GPU for SFS, which carries its own calculated risk since it needs a miss to trigger its use; that has to be accounted for by the developer on their end, and will probably need its own learning curve to master.
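To picture that miss-then-fallback flow (purely an illustrative sketch of the concept, not the actual D3D12 sampler-feedback API):

```python
# Conceptual sketch of sampler-feedback-driven streaming: sample the finest
# *resident* mip, and record the miss so the streamer pages the wanted tile
# in later. Assumes the coarsest mip is always resident, so the loop ends.
def sample_with_fallback(wanted_mip: int, resident: set[int],
                         feedback: list[int]) -> int:
    mip = wanted_mip
    while mip not in resident:       # walk toward coarser mips until one is loaded
        mip += 1
    if mip != wanted_mip:
        feedback.append(wanted_mip)  # tell the streamer what we actually needed
    return mip                       # blend hardware smooths the transition

resident_mips = {3, 4, 5}            # only coarse mips paged in so far
requests = []
used = sample_with_fallback(1, resident_mips, requests)
print(used, requests)                # 3 [1] -> sampled mip 3, still want mip 1
```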

Basically I think PS5 captures well the relative ease of use (and dev-environment maturity, with room to grow into optimizations) of the PS1 or 360. Not so much the Series X; I wouldn't say it's anywhere near PS3 levels of challenge, but there's maybe no other system to compare it to directly. The PS2 wasn't necessarily easy to use, but whatever challenges it presented were overcome relatively quickly. And while I don't think the Series X is at as much of a disadvantage as the Sega Saturn was in its day, I do see a few parallels at the moment. For starters, like the Saturn, the Series X had a somewhat tepid visual presentation coming right off a very strong visual presentation from Sony (Halo Infinite vs. the UE5 demo, caveats aside). Like the Saturn, the SDK environment for Series X has been comparatively slow to mature compared to Sony's, who seem to have everything up and running quickly. And like the Saturn, we're seeing the Series X perform a bit below par in some 3P multiplats in this early period, largely because of that second point, but also due to the potential factor that it's simply a somewhat more difficult design to master. There are probably a few more parallels I could touch on, but those seem like the main ones IMO.

The big difference between Sega in 1995 and Microsoft today, though, is that Microsoft actually has a wealth of money and resources to throw at resolving all of these problems, and it's a much larger company with a stronger presence in the console market (in terms of financial power, at least). While there's maybe been some confusing marketing around the Series systems (analogous to some of Sega's confusing Saturn marketing at the start of that system's Western life), I think they're ironing it out now. I guess that means Microsoft needs its own "Virtua Fighter 2" moment: something to show the system can actually do all of the things it's said it can do, and outperform whatever's on Sony's platform at the time, at least in some key visual metric. Natural instinct would be to assume that game is Halo Infinite, but realistically it could be BMI, Scorn, or Exo-Mecha (imo); it just depends on how much MS is helping those studios. It could also be their Series X port of FS2020, which hopefully should drop in H1 2021 (and with a bevy of extra content).

I just don't want them to take too long before showing such a thing off, because Sony won't be waiting long to give people R&C Rift Apart (the closest to a Pixar film I've seen any game look), or Horizon Forbidden West. Even GT7 is looking pretty nice graphically, though I think that's probably still coming to PS4 as well (nothing wrong with that; cross-gen isn't the death knell it used to be earlier in the year, didn't 'ya know ;) ).
 