The scalability and evolution of game engines

The Xbox Series X architecture is packed with performance-enhancing features. For example, thanks to SFS memory management, the real-world difference should be even bigger than the teraflop numbers suggest.

Multiplatform games will show this.
That's assuming the PS5 omitted such enhancements.
Surely Sony's silence raises suspicions. Sony has been too quiet this gen, especially in areas where they can't compete, like the BC feature.
 
How important is the flops-per-pixel ratio? I suppose the more instructions that can be executed per pixel, the higher the quality.
Could the Series S, being a 1080p/1440p machine, be comparable?

Series X 4K = 1.502 MFLOPS per pixel
PS5 4K = 1.162 MFLOPS per pixel (this is hypothetical, because we don't have a "sustained" speed, so take it with a grain of salt)
Series S 1440p = 1.085 MFLOPS per pixel
Series S 1080p = 1.929 MFLOPS per pixel

And how much storage is required in RAM for the frame, back, and other buffers?
I mean, the Series S can use 8 GB of the fastest memory block and the Series X 10 GB. Can the 2 GB difference be explained by buffers roughly 1/4 the size of the 4K buffers, or will it also hit some assets?
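
For a rough sense of scale, a back-of-envelope sketch (my own assumption: simple uncompressed render targets; real engines juggle many more intermediate buffers):

```python
# Back-of-envelope render-target sizes, assuming plain uncompressed buffers
# (no DCC, no MSAA); formats below are illustrative choices, not official specs.
def buffer_mib(width, height, bytes_per_pixel):
    return width * height * bytes_per_pixel / 2**20

for name, (w, h) in {"4K": (3840, 2160), "1440p": (2560, 1440),
                     "1080p": (1920, 1080)}.items():
    color = buffer_mib(w, h, 8)  # e.g. an RGBA16F HDR color target
    depth = buffer_mib(w, h, 4)  # 32-bit depth
    print(f"{name}: color ~{color:.1f} MiB, depth ~{depth:.1f} MiB")
```

A 1080p target is exactly 1/4 the size of its 4K counterpart, but even a handful of such targets at 4K adds up to a few hundred MiB rather than 2 GB, so on this napkin math smaller buffers alone wouldn't explain the whole gap.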
 
(I brought this conversation here, which is where it belongs)

I think we're going to need more proof of whether a game developed for 10-12 TFLOPS and 16 GB RAM can really be scaled down to a platform with 4 TFLOPS and 10 GB RAM, without pushing back the IQ potential on the larger consoles (as several devs have heavily suggested so far), and without a significant investment in re-engineering and QA (as seen with Switch ports).
So far we have no proof of this.

I imagine it's no worse than the Xbox One, with its 8 GB of DDR3, holding back DLSS and ray-tracing features on RTX cards.

These consoles are nothing but custom PCs, and some of the PC games releasing this holiday will support GPUs that span generations and range from <2 to 35 TFLOPS. You think anybody is going to drop Navi and GTX 16-series cards anytime soon? We're talking about PC cards that lack hardware ray tracing and tensor support.

In a gaming market that in the past has simultaneously supported a wide variety of exotic architectures across multiple devices, you believe that scalability is now a problem due to a 4 TFLOP console where the only real differences are deltas in basic performance metrics?

What's a teraflop difference when in the past devs had to worry about ESRAM, EDRAM, SPUs, unified vs. non-unified shader architectures, and a whole host of other features?

AAA gaming hardware is probably the most uniform it's ever been, and if basic performance is the most influential factor in terms of scalability, we will probably be just fine.
 
DF write-up of Ori and the Will of the Wisps @ https://www.eurogamer.net/articles/...of-the-wisps-switch-inside-an-impossible-port

From my perspective, it's a complete success. This is the complete Will of the Wisps experience with minimal compromise. It runs like a dream on Switch, while offering nearly the same level of visual fidelity as the other versions of the game. Yes, if you look closely you might spot some lower resolution artwork, extra aliasing or some slowdown but, by and large, it's extremely well made. As I mentioned at the beginning of the piece, this game is right up there as one of the best examples of the genre on any system - the fact that it's essentially the same on Switch is astonishing. I also think it's worth pointing out that the Xbox One version is enormously improved from launch: it's still not quite 100 per cent perfect in performance terms, but any dips in frame-rate won't unduly impact your enjoyment of the game and the HDR implementation is superb. I really recommend revisiting the game, and don't forget, it's also on Game Pass.

Moon Studios isn't done with Ori and the Will of the Wisps either. A patch is planned for the Xbox version adding support for native 4K rendering at 120Hz. For a game with fast, lateral scrolling, this should push its fluidity even further. I can't wait to test this out myself but the scalability the developer delivers in this title is highly impressive: from 4K120 on a 12 teraflop next-gen console to 720p60 on a handheld, it's still the same excellent game and I highly recommend it.

 
I remember all of the "I don't care about resolution, just give me detailed graphics" comments. Well, in a weird way, that is what Microsoft has done with the Series S. They've scaled the hardware to basically force developers to give you next gen at 1080p.

I mean, if you can pick up a Series S on Black Friday in a few years for $200 and it plays UE5 games even at 900p, the dream is alive in my opinion...
Exactly. I remember when the One X came out and I passed on it because all it offered was 4K, and I had just purchased a 75" 1080p set that I sit 10 feet from.

I ended up pre-ordering an XSX as I'm planning to upgrade my set next year, but otherwise the XSS is what the One X should have been.
 
That's assuming the PS5 omitted such enhancements.
Surely Sony's silence raises suspicions. Sony has been too quiet this gen, especially in areas where they can't compete, like the BC feature.
Like I said earlier: there's no secret sauce in either of these machines. If Sony isn't talking about it, they don't have it. The same goes for MS, and it's why they talk about Velocity Architecture instead of raw SSD speed: their drive is slower.

But there's a reason Gabe Newell didn't hesitate when he said the XSX was the superior machine...
 
How important is the flops-per-pixel ratio? I suppose the more instructions that can be executed per pixel, the higher the quality.
Could the Series S, being a 1080p/1440p machine, be comparable?

Series X 4K = 1.502 MFLOPS per pixel
PS5 4K = 1.162 MFLOPS per pixel (this is hypothetical, because we don't have a "sustained" speed, so take it with a grain of salt)
Series S 1440p = 1.085 MFLOPS per pixel
Series S 1080p = 1.929 MFLOPS per pixel

And how much storage is required in RAM for the frame, back, and other buffers?
I mean, the Series S can use 8 GB of the fastest memory block and the Series X 10 GB. Can the 2 GB difference be explained by buffers roughly 1/4 the size of the 4K buffers, or will it also hit some assets?

Please provide the calculation you used to get these numbers.
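
For what it's worth, they look like peak FLOPS divided by pixels per frame, i.e. MFLOPS per pixel. A quick sketch using the commonly quoted peak TFLOPS figures reproduces the Series S rows exactly, but not the Series X or PS5 ones, so those presumably used different inputs:

```python
# Presumed calculation: peak FLOPS / pixels per frame, printed in MFLOPS/pixel.
resolutions = {"4K": 3840 * 2160, "1440p": 2560 * 1440, "1080p": 1920 * 1080}
gpus_tflops = {"Series X": 12.155, "PS5 (peak)": 10.28, "Series S": 4.0}

for gpu, tflops in gpus_tflops.items():
    for res, pixels in resolutions.items():
        mflops_per_pixel = tflops * 1e12 / pixels / 1e6
        print(f"{gpu} @ {res}: {mflops_per_pixel:.3f} MFLOPS per pixel")
```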
 
Like I said earlier: There's no secret sauce in either of these machines. If Sony isn't talking about it, they don't have it.
Wrong: both consoles have lots of secrets that are only disclosed in their NDA'd SDKs.

Case in point: last gen, Sony kept mum about their superior graphics APIs:
Metro Redux: what it's really like to develop for PS4 and Xbox One said:
Digital Foundry: DirectX 11 vs GNMX vs GNM - what's your take on the strengths and weakness of the APIs available to developers with Xbox One and PlayStation 4? Closer to launch there were some complaints about XO driver performance and CPU overhead on GNMX.

Oles Shishkovstov: Let's put it that way - we have seen scenarios where a single CPU core was fully loaded just by issuing draw-calls on Xbox One (and that's surely on the 'mono' driver with several fast-path calls utilised). Then, the same scenario on PS4, it was actually difficult to find those draw-calls in the profile graphs, because they are using almost no time and are barely visible as a result.

In general - I don't really get why they choose DX11 as a starting point for the console. It's a console! Why care about some legacy stuff at all? On PS4, most GPU commands are just a few DWORDs written into the command buffer, let's say just a few CPU clock cycles. On Xbox One it easily could be one million times slower because of all the bookkeeping the API does.

But Microsoft is not sleeping, really. Each XDK that has been released both before and after the Xbox One launch has brought faster and faster draw-calls to the table. They added tons of features just to work around limitations of the DX11 API model. They even made a DX12/GNM style do-it-yourself API available - although we didn't ship with it on Redux due to time constraints.

https://www.eurogamer.net/articles/...its-really-like-to-make-a-multi-platform-game
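
To picture the gap Shishkovstov is describing, here's a toy sketch (hypothetical classes, nothing like the real GNM or D3D11 APIs) of a thin command-buffer write versus a validating driver path:

```python
# Toy illustration of per-draw-call cost. Hypothetical names throughout.
import struct

class ThinCommandBuffer:
    """Console-style: a draw is just a few DWORDs appended to a stream."""
    def __init__(self):
        self.stream = bytearray()

    def draw(self, vertex_count, first_vertex):
        # A few CPU cycles: pack an opcode plus its arguments, done.
        self.stream += struct.pack("<III", 0x7201, vertex_count, first_vertex)

class ValidatingDriver:
    """DX11-style: every draw pays for validation and bookkeeping first."""
    def __init__(self, bound_resources):
        self.bound_resources = bound_resources

    def draw(self, vertex_count, first_vertex):
        for res in self.bound_resources:   # hazard/lifetime tracking per draw
            res["last_use"] = (vertex_count, first_vertex)
        # ...plus state mirroring and allocation patching before any
        # hardware command is actually emitted.
```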
 
Wrong: both consoles have lots of secrets that are only disclosed in their NDA'd SDKs.

Case in point: last gen, Sony kept mum about ...

Not secret sauce. Not secret even.

My point is that with every new generation fans on forums always hope that their favorite console has something special and secret locked under the hood.

There isn't.

Of course SDKs improve over time, but we're not all of a sudden going to find out that Sony has SFS in hardware IMO or that MS can unleash clock burst mode etc...
 
but we're not all of a sudden going to find out that Sony has SFS in hardware IMO
What about Sampler Feedback?

They've done their Road to PS5 presentation, chosen which features to focus on, and haven't discussed anything since.

Apart from finally confirming that BC is limited to the PS4 generation.

For the record, I expect SF to be part of RDNA2, as it's part of DX12U.
 
What about Sampler Feedback?

They've done their Road to PS5 presentation, chosen which features to focus on, and haven't discussed anything since.

Apart from finally confirming that BC is limited to the PS4 generation.

For the record, I expect SF to be part of RDNA2, as it's part of DX12U.

Pretty sure sampler feedback will be in everything new; it's Sampler Feedback Streaming that Microsoft are screaming about. That's the console sauce added on top of the vanilla feedback.
 
Pretty sure sampler feedback will be in everything new; it's Sampler Feedback Streaming that Microsoft are screaming about. That's the console sauce added on top of the vanilla feedback.
That is my view also, but Sony hasn't talked about it.

Even SFS as a whole doesn't seem to be Xbox-exclusive, going by what Mr Beard Guy said recently.

It has some hardware customizations, though: filtering and tile management, and possibly auto-fetching as well.

My point was that just because it's not been discussed doesn't mean it doesn't have it, and I wouldn't label it secret sauce either. SF, not SFS.
I believe most of the benefit is from SF.
 
Not secret sauce. Not secret even.

My point is that with every new generation fans on forums always hope that their favorite console has something special and secret locked under the hood.

There isn't.

Of course SDKs improve over time, but we're not all of a sudden going to find out that Sony has SFS in hardware IMO or that MS can unleash clock burst mode etc...
Yes, no secret sauce there, but only because SFS is no secret: MS has already presented it. :)

Sampler Feedback has now been added to DX12U on PC. SFS, on the other hand, is only present in hardware on the Xbox Series consoles at the moment. The Xbox hardware architect said it could be included in PC hardware in the future, but so far there's no info on that.

SFS allows roughly three times more efficient data movement than plain SF over the same bandwidth. If it really will only be available on Xbox hardware for years, that's certainly a significant hardware advantage in MS's hands.
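
Back-of-envelope, that multiplier stacks on top of the raw drive speed (2.4 GB/s is the official Series X figure; the 3x number is the claim above):

```python
raw_gb_per_s = 2.4    # official Series X raw SSD throughput
sfs_multiplier = 3.0  # claimed efficiency gain from moving only sampled data
print(f"effective streaming rate: ~{raw_gb_per_s * sfs_multiplier:.1f} GB/s")
```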
 
OpenGL has had Sampler Feedback for close to a decade:
https://www.anandtech.com/show/5261/amd-radeon-hd-7970-review/6
It's only new for DirectX.
Partially Resident Textures != Sampler Feedback Streaming; the difference is in the "sampler feedback" part.

Say you want to render a light map to a texture and use it later. With SF you can record where the light map will be sampled, so you avoid wasting resources rendering unused lighting info. For streaming, SF lets you record what will be sampled and load only those chunks. With other PRT methods, you either approximate this in your engine or rely on GPU virtual memory. SF is an all-new capability for GPUs, and the Xbox Series consoles have extra hardware for SF-based streaming.
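
A minimal sketch of that streaming loop (pure illustration with made-up names and tile sizes; in reality the GPU writes the feedback map during sampling, not CPU code):

```python
# Toy feedback-driven streaming: the feedback map records which tiles
# (tile_x, tile_y, mip) were actually sampled, and the streamer loads only
# those instead of whole mip levels.
TILE = 128  # texels per tile edge (made-up value)

def record_feedback(sample_positions, mip):
    """Stand-in for what the GPU records while sampling."""
    return {(u // TILE, v // TILE, mip) for (u, v) in sample_positions}

def tiles_to_stream(feedback, resident):
    """Only tiles that were sampled but aren't resident yet get loaded."""
    return feedback - resident

resident_tiles = set()
feedback = record_feedback([(130, 40), (131, 41), (900, 600)], mip=0)
print(f"streaming {len(tiles_to_stream(feedback, resident_tiles))} tiles "
      "instead of a whole mip level")
```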
 
Partially Resident Textures != Sampler Feedback Streaming; the difference is in the "sampler feedback" part.
MS themselves call Sampler Feedback a variant of PRT:
Later, you even say it yourself:
With other PRT methods, you either approximate this in your engine or rely on GPU virtual memory.
PRT is well known, and it's a strong possibility that others have their own efficient implementations.
Or that Sony picked up that likely RDNA2 feature, too.
SF is an all-new capability for GPUs
Nvidia has had Sampler Feedback for two years (since Turing) but didn't really advertise it to the public.
However, their pitch for it is different:
“By using Sampler Feedback, we can more efficiently shade those objects at a lower rate (say, every third frame, or perhaps even lower than that) and reuse the object’s colors (or “texels” as they’re referred to) as calculated in previous frames. ”
https://www.nvidia.com/en-us/geforce/news/geforce-rtx-ready-for-directx-12-ultimate/

In my understanding:
Nvidia advocates doing a lot less shading, since the quality loss is minuscule, and using that power somewhere else (cue ray tracing).
MS wants to do the very best shading with the biggest textures because they can (seeing how a Series X compute unit can only do either shading or ray tracing at a time, that sounds questionable).
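
A toy version of the reuse Nvidia describes (illustrative names only):

```python
# Texture-space shading, toy form: run the expensive shading only every Nth
# frame and resample the cached texels in between.
SHADE_INTERVAL = 3  # "say, every third frame"

def expensive_shade(frame):
    return f"texels shaded at frame {frame}"  # stand-in for a heavy pass

cached_texels = None
for frame in range(7):
    if frame % SHADE_INTERVAL == 0:
        cached_texels = expensive_shade(frame)  # pay the full cost
    print(f"frame {frame}: reusing '{cached_texels}'")
```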
 
MS themselves call Sampler Feedback a variant of PRT:
Later, you even say it yourself:

PRT is well known, and it's a strong possibility that others have their own efficient implementations.
Or that Sony picked up that likely RDNA2 feature, too.

Nvidia has had Sampler Feedback for two years (since Turing) but didn't really advertise it to the public.
However, their pitch for it is different:
“By using Sampler Feedback, we can more efficiently shade those objects at a lower rate (say, every third frame, or perhaps even lower than that) and reuse the object’s colors (or “texels” as they’re referred to) as calculated in previous frames. ”
https://www.nvidia.com/en-us/geforce/news/geforce-rtx-ready-for-directx-12-ultimate/

In my understanding:
Nvidia advocates doing a lot less shading, since the quality loss is minuscule, and using that power somewhere else (cue ray tracing).
MS wants to do the very best shading with the biggest textures because they can (seeing how a Series X compute unit can only do either shading or ray tracing at a time, that sounds questionable).
A missile is a rocket, but a rocket isn't necessarily a missile; just because rockets existed doesn't mean missiles did too.

"All-new capability" was in comparison to GPUs without D3D feature level 12_2; GPUs with feature level 12_2 are still relatively new.

Why don’t you read what Sampler Feedback is first? D3D specifications are out there.
 