Digital Foundry Article Technical Discussion [2020]

Could Xbox Series X's 12 teraflop GPU deliver even more power than we expected?
Confirmed: RDNA 2 tech offers more performance than any AMD graphics card available today.

Is this the first attempt to comprehensively answer the big question: fundamentally, what is next-gen? In Xbox chief Phil Spencer's latest blog for Xbox Wire, we get a smattering of tech specs for the new Series X, reaffirmation of a frictionless future for gaming thanks to solid-state storage, and a reminder that as important as raw power is, technological innovation matters just as much.

However, despite that focus on new ideas, there is still room for Microsoft to clarify and indeed emphasise the extent of the processing power crammed into the Xbox Series X GPU. In a GameSpot story at the tail end of 2019, Spencer invited us to 'do the math' based on the notion that the new console had twice the graphics power of Xbox One X and over 8x that of Xbox One. The implication was that the console packs a 12 teraflop GPU - but muddying the waters somewhat is the fact that innovations in GPU architecture mean that Microsoft wouldn't need 12TF to deliver 2x Xbox One X performance - our tests showed that a ballpark 9-10TF could conceivably get the job done.
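For reference, 'doing the math' is straightforward: a GCN/RDNA compute unit has 64 shader lanes, each able to issue one fused multiply-add (two FLOPs) per clock. A quick sketch of the arithmetic follows - note the 52 CU / 1,825MHz Series X configuration was only confirmed after this article, so treat it as the later-published figure:

```cpp
#include <cstdio>

// FLOPS for a GCN/RDNA-class GPU: CUs x 64 lanes x 2 ops (FMA) x clock.
constexpr double tflops(int cus, double ghz) {
    return cus * 64 * 2 * ghz / 1000.0;
}

int main() {
    const double xb1  = tflops(12, 0.853);  // Xbox One:   ~1.31 TF
    const double xb1x = tflops(40, 1.172);  // Xbox One X: ~6.0 TF
    const double xsx  = tflops(52, 1.825);  // Series X:   ~12.15 TF (config confirmed post-article)
    std::printf("XSX vs One X: %.1fx, vs One: %.1fx\n", xsx / xb1x, xsx / xb1);
}
```

That lands at roughly 2x Xbox One X and about 9.3x Xbox One, which matches Spencer's 'twice X1X, over 8x X1' framing.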

The new blog clarifies the situation and it's only good news. With 12TF unambiguously confirmed, Microsoft may well have twice the basic level of GPU compute on tap, but actual gaming performance should exceed that handily. However, the firm goes further in explicitly stating that AMD's latest RDNA 2 architecture is at the heart of Series X, meaning the console may implement further optimisations from AMD's upcoming Navi designs that we are not yet aware of, simply because PC parts based on the latest architecture are not yet available for us to experiment with.


Read the entire DF Article on 12TF RDNA2 Xbox Series X here https://www.eurogamer.net/articles/digitalfoundry-2020-xbox-series-x-power-play-analysis

Reading through that got me thinking about what MS could do to improve the quality of back-compatibility titles without developer interaction.

And it struck me that since MS has control of the hardware, can't they (at a minimum) do things similar to what AMD/ATI and NV have done, and continue to do, with display drivers? For example,
  • Various forms of AA.
  • AO in the past could be forced through display drivers.
  • NV infamously did shader replacement through the display drivers.
  • Anisotropic filtering in the drivers. I wonder if maybe MS is doing something similar with the forced 16x AF in older games? (See the sketch after this list.)
  • AMD once tried checkerboard rendering for multi-GPU rendering.
  • Likely quite a few other things I'm not remembering (I haven't messed with driver forced rendering settings in years) as well as things that neither AMD nor NV tried.
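To make the anisotropic filtering idea concrete, here's a minimal sketch of the kind of override a driver-level (or, in MS's case, OS-level) shim could apply to a title's sampler creation. This is an illustrative D3D11-style hook, not a claim about how Microsoft's BC layer actually works:

```cpp
#include <d3d11.h>

// Hypothetical shim between a title and the real runtime: any plain
// trilinear/bilinear sampler the game creates gets bumped to 16x
// anisotropic, the same class of override PC driver AF sliders perform.
HRESULT CreateSamplerStateForced(ID3D11Device* dev,
                                 const D3D11_SAMPLER_DESC* gameDesc,
                                 ID3D11SamplerState** out) {
    D3D11_SAMPLER_DESC desc = *gameDesc;
    // Only upgrade ordinary texture filtering; comparison samplers
    // (shadow maps etc.) use different filter enums and are left alone,
    // as driver overrides typically do.
    if (desc.Filter == D3D11_FILTER_MIN_MAG_MIP_LINEAR ||
        desc.Filter == D3D11_FILTER_MIN_MAG_LINEAR_MIP_POINT) {
        desc.Filter = D3D11_FILTER_ANISOTROPIC;
        desc.MaxAnisotropy = 16;
    }
    return dev->CreateSamplerState(&desc, out);
}
```

The game never knows the difference; it just gets back a sampler with better filtering than it asked for.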
Regards,
SB
 
Digital Foundry was talking about DirectML and experiments from Turn 10 etc. using upscaling. Maybe they'll do something like DLSS and upscale all of the BC games.
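Sketching that idea at a purely conceptual level (none of these types are real DirectML or DLSS API; they're invented stand-ins for where an ML upscaler would slot into a BC presentation path):

```cpp
#include <cstdio>

// Invented stand-ins; not real DirectML/DLSS API.
struct Image { int width, height; };

struct NeuralUpscaler {
    // A trained network would reconstruct a 4x-area frame here;
    // this stub just reports the output dimensions.
    Image Infer(const Image& lowRes) const {
        return { lowRes.width * 2, lowRes.height * 2 };
    }
};

// System-side hook: the BC title keeps rendering at its original
// resolution, and the OS upscales each finished frame before scan-out,
// with no developer interaction required.
Image PresentBackCompatFrame(const NeuralUpscaler& up, const Image& frame) {
    return up.Infer(frame);
}

int main() {
    NeuralUpscaler up;
    Image out = PresentBackCompatFrame(up, { 1920, 1080 });
    std::printf("%dx%d\n", out.width, out.height);  // 3840x2160
}
```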
 
DF Direct: Xbox Series X 12TF RDNA 2.0 GPU Power Confirmed + Much More!

Good viewpoints being considered here. Sounds like Richard knows more but isn't willing to share just yet.
Side note: John looks like he's been consistent with some sort of dieting or gym routine; definitely looking a lot younger here. Kudos man, keep going.
 
The ones I quoted I expect them to steer clear of. Messing about with studios' games in this way is a no-no, and borders on the reasons they need permission to do the BC games in the first place.
The other stuff I could see them doing. It's a fine line.

Oh yeah, I don't expect those things to be utilized. There's no real reason to. XBSX isn't going to have multiple GPUs and pretty much everyone frowns on shader replacement. That was just a list of things that have been done through the driver, because the driver sits between the game and the GPU. Basically showing that if you control the hardware, you can theoretically do a lot of things outside of a game that directly impact its IQ and performance.

Regards,
SB
 
Isn't RT in Series X and PS5 expected to be significantly less performant than current high-end Turing cards? It doesn't seem reasonable, given the known sizes of these chips, for there to be much room for RT cores. That's also why I think both companies have been touting audio use cases, since those would require significantly fewer rays in the scene.
 
Isn't RT in Series X and PS5 expected to be significantly less performant than current high-end Turing cards? It doesn't seem reasonable, given the known sizes of these chips, for there to be much room for RT cores. That's also why I think both companies have been touting audio use cases, since those would require significantly fewer rays in the scene.
Expected by who?

If the XSX SoC is 7nm+, there's lots of room in there for RT cores.
 
Isn't RT in Series X and PS5 expected to be significantly less performant than current high-end Turing cards? It doesn't seem reasonable, given the known sizes of these chips, for there to be much room for RT cores. That's also why I think both companies have been touting audio use cases, since those would require significantly fewer rays in the scene.

No one knows yet. I guess it depends on how naive the Turing implementation is. Looking back, initial implementations of radically new features on GPUs have not always been done in the most efficient manner. This will be AMD's first shot, so I'm not expecting miracles.
 
Console ray tracing doesn't have to match PC implementations to make an impressive visual improvement. Console consumers have proven that they accept 30fps as a performance baseline, where in the PC space, at least in terms of reviews and public-facing opinion, that's unplayable. If you look at the shader performance of Series X, and the fact that a 2060 Super can do 4K30 with RT on with a bit of tweaking, I think most people are expecting performance along those lines. Maybe a bit better, actually, because I'm not sure RT is the limiting factor on a 2060 Super at 4K. Its performance is usually in the 40-50fps range there in most games on higher settings.
 
Ray-tracing fundamentally relies on de-noising right now, so ray counts can be kept low. I have my doubts about how well it'll work at 30fps, but maybe there are some good videos of Metro or something with a 30fps cap. But you're right. Console users accept terrible image quality, so it may not matter.
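To illustrate why low ray counts can still work: the heart of most real-time denoisers is temporal accumulation, blending each noisy one-sample-per-pixel frame into a running per-pixel history. A minimal sketch; real denoisers add motion reprojection and variance clamping on top:

```cpp
#include <cstddef>
#include <vector>

// Blend this frame's noisy 1spp result into a running per-pixel history.
// With alpha = 0.1, each pixel effectively averages roughly the last ten
// frames of samples, which is why one ray per pixel can yield a stable image.
void AccumulateFrame(std::vector<float>& history,      // running average
                     const std::vector<float>& noisy,  // this frame's 1spp result
                     float alpha = 0.1f)               // weight of the new frame
{
    for (std::size_t i = 0; i < history.size(); ++i)
        history[i] += alpha * (noisy[i] - history[i]);
}
```

At 30fps the history accumulates half as many samples per second of wall-clock time as at 60fps, which is one reason the lag/ghosting trade-offs look different at a 30fps cap.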
 
No one knows yet. I guess it depends on how naive the Turing implementation is. Looking back, initial implementations of radically new features on GPUs have not always been done in the most efficient manner. This will be AMD's first shot, so I'm not expecting miracles.

One thing I haven't seen brought up yet is that it's also possible AMD has licensed IP from another company with regard to RT. I seriously doubt something like this has happened, but who knows at this point.

As well, while it's unlikely for AMD's first attempt to be significantly more performant than NV's first attempt, there is historical precedent for it. ATI's first attempt at DX 9.0 features was significantly more performant than NV's. Likewise, NV's first hardware tessellation implementation was more performant than AMD/ATI's, despite AMD/ATI having had multiple iterations of hardware tessellation over the years.

What I would expect is something of roughly similar performance but differing in implementation.

Regards,
SB
 
What I would expect is something of roughly similar performance but differing in implementation.
This is what I expect too, I guess, but one thing throwing a wrench in it is the idea of them handling incoherent rays with a different performance profile. Or the INT32/FP32 split in Turing giving it a different performance characteristic in some RT titles than RDNA 2.0 has.

Something along those lines.
 