The scalability and evolution of game engines *spawn*

Ok I get that my previous post was moved because it referenced the Bethesda acquisition, but it's only relevant within this context of engine scalability:


What do you guys think of these comments from id Software? Are lower rendering and texture resolutions not enough to compensate? Do the RT BVHs need the same amount of memory regardless of rendering resolution?


Really bummed about this RAM situation on the Series S. This isn't easy to compensate and drags down base spec quite a bit for next gen multi platform. RAM increase was already small compared to previous gen, now it's almost non existent. RT BVHs also need a lot of mem on top.
— Axel Gneiting (@axelgneiting) September 10, 2020

The memory situation is a big issue on the S. The much lower amount of memory and the split memory banks with drastically slower speeds will be a major issue. Aggressively lowering the render resolutions will marginally help but will not completely counteract the deficiencies.
— Billy Khan✨ (@billykhan) September 10, 2020



Those two id Software employees deleted those tweets.
Considering recent events, they didn't delete those tweets because they changed their mind about the memory limitations.
They deleted those tweets because Phil Spencer literally just became their boss.
 
Meanwhile you have others saying they have no issues at all with the Series S, like the Digital Foundry tech interview for Dirt 5.

Some developers love to do nothing more than bitch about anything new, while others embrace the new. Like all things in life, the reality is somewhere between the extremes.
 
Meanwhile you have others saying they have no issues at all with the Series S, like the Digital Foundry tech interview for Dirt 5.
It was nice to hear a dev genuinely enthusiastic about Series S. :yes:
 
Ok I get that my previous post was moved because it referenced the Bethesda acquisition, but it's only relevant within this context of engine scalability:

Those two id Software employees deleted those tweets.
Considering recent events, they didn't delete those tweets because they changed their mind about the memory limitations.
They deleted those tweets because Phil Spencer literally just became their boss.

He looks excited now...


Axel hasn't posted about it. LOL

Tommy McClain
 
He looks excited now...

I think there's a good chance he's genuinely excited about the acquisition. Like I said in the acquisition thread, I think current Microsoft is a much better manager of studios / IPs than Zenimax, the latter of which at some point apparently became driven by stockholders and a board of directors with little to no knowledge about videogames. I mean, a quick look at the company's executives shows that, other than Christopher Weaver, it's a bunch of 70-year-old lawyers and businessmen tied to mass media (like Les Moonves) and real estate (like the late Robert Trump). I don't know how these guys would look at videogames as anything other than an Excel sheet, as they definitely knew nothing about software development.

As for the tweets, they most probably describe their actual opinion of the Series S' hardware (which is 100x more valuable and honest than any comment from a 1st-party dev making a promo video for the console). They didn't walk the issue back by saying anything along the lines of "I thought it over and studied the specs a bit more, and it doesn't seem so bad". They literally just deleted the tweets and blocked the people who tried to continue the discussion with them.
And it's not like Phil Spencer called them in the middle of the night saying "I'm ya boss now motherfucka, you delete those tweets right now or Ima shove my foot up ya ass!". I think once they knew the deal was taking place they just took the initiative to delete the tweets to avoid unnecessary confrontations. We all have to deal with too much unnecessary drama in this world.
 
And wrongly so, cause evidently it is possible to scale up/down (...)

(I brought this conversation here, which is where it belongs)

I think we're going to need more proof of whether a game developed for 10-12 TFLOPs and 16 GB of RAM can really be scaled down to a platform with 4 TFLOPs and 10 GB of RAM, without pulling back the IQ potential of the larger consoles (as heavily suggested by several devs so far), and without a significant investment in re-engineering and QA (as seen in Switch ports).
So far we have no proof of this.

All I see from Microsoft's messaging so far is they'll be forcing/pushing devs to always hit "pure" 3840*2160 native resolution on the SeriesX so that the SeriesS can run the same games at a mix of 1920*1080 and 2560*1440. And forcing the SeriesX to render at native 3840*2160 throughout its whole life is a complete waste of compute resources.

For example, if the PS5 renders a game at 2880*1620p + upscale (like e.g. Battlefield 1 on the Pro), that's 4.7 MPixels. If the SeriesX renders the same game at 4K native, that's 8.3MPixels.
That's ~78% more pixels that the SeriesX has to render, which its ~17% advantage in compute throughput is definitely not enough to compensate for.
In this scenario, one of four things can happen (a quick script reproducing these numbers follows the list):

1 - The SeriesX needs to render 8.3MPixels and needs to lower IQ effects or framerate compared to the PS5 (not enough compute power per pixel available). This means the SeriesS can run at 2.8MPixels = ~2222*1250p which is around what Microsoft has been claiming for the SeriesS (between 1080p and 1440p);
2 - The SeriesX matches the PS5 in rendering 4.7MPixels and shows similar or higher IQ effects or framerate, but then the SeriesS renders the same scene at 1/3rd the amount of pixels, at e.g. 1680*945 which is 1.6MPixels. At this point the SeriesS isn't even hitting 1080p and this is actually close to PS4 levels of rendering resolution;
3 - The SeriesX runs at ~17% over the resolution of the PS5 with 4.7MPixels, meaning 5.5MPixels which is ~3100*1743. 1/3rd the resolution of this hits ~1787*1005, still below 1080p;
4 - The framerate suffers instead, though that is only possible when the big consoles are running above 30 FPS, which won't be the case for the most graphically intensive games.
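For reference, here's a quick back-of-the-envelope script (Python) that reproduces these numbers. It assumes 16:9 frames and the SeriesS ≈ SeriesX / 3 compute ratio used above, so it's an illustration, not a measurement:

```
# Resolution maths for the scenarios above, assuming 16:9 frames and a
# SeriesS with ~1/3rd of the SeriesX's compute. Illustrative only.
from math import sqrt

def res_for_mpix(mpix, aspect=16 / 9):
    """Width x height of an `aspect` frame containing `mpix` megapixels."""
    h = sqrt(mpix * 1e6 / aspect)
    return round(h * aspect), round(h)

ps5_mpix = 2880 * 1620 / 1e6   # ~4.67 MPixels (1620p)
native_4k = 3840 * 2160 / 1e6  # ~8.29 MPixels

print(res_for_mpix(native_4k / 3))        # scenario 1: ~2217x1247
print(res_for_mpix(ps5_mpix / 3))         # scenario 2: ~1663x935
print(res_for_mpix(ps5_mpix * 1.17 / 3))  # scenario 3: ~1799x1012
```

(The small differences versus the resolutions quoted above are just rounding.)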


And this isn't even the worst case scenario for the SeriesS.
I (and many others) think the most impressive demo of next-gen capabilities we've seen so far was the Unreal 5 real-time demo. At least I haven't seen anything remotely like it.
That game demo ran on a PS5 at 2560*1440p 30 FPS, which is 3.7 MPixels. Assuming the new Xboxes can get an effective I/O throughput fast enough, we'd then see the SeriesX running at the same 3.7 MPixels 1440p 30 FPS with higher IQ effects, or maybe we'd see it running at ~17% higher resolution, which is 4.3 MPixels.
Now going with the same SeriesS = SeriesX / 3 logic, then 1/3rd of 4.3 MPixels is 1.44 MPixels, which is exactly 1600*900. At this point, we're looking at a typical XBone S rendering resolution, with a level of jaggies that no amount of post-processing can save on a larger TV.
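Same aspect-ratio arithmetic for this case (again a sketch under the same 17% / one-third assumptions):

```
# SeriesS pixel budget for the UE5 demo case: 1440p * 1.17, then / 3.
from math import sqrt
mpix = 2560 * 1440 * 1.17 / 3       # ~1.44 MPixels
h = sqrt(mpix * 9 / 16)             # height of a 16:9 frame
print(round(h * 16 / 9), round(h))  # ~1599x899, i.e. essentially 1600x900
```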



So from my perspective, the only way to prevent the SeriesS from rendering at sub-1080p is to:
1 - Guarantee that the games on the SeriesX need to run at native 4K or close to it, effectively reducing the IQ capabilities of the console by rendering many more pixels than needed, all so that the SeriesS doesn't go too low when going for 1/3rd the resolution.
2 - Spend a great deal of development time handpicking the effects and assets that can or cannot be reduced or cut.

I'd say most 1st parties will go with option 2 (because they need to..), and most 3rd parties will go with option 1. The problem with option 2 is that games become more expensive and take longer to develop.
 
If the PS5 can render at 1440p and reconstruct towards 4K, XSS can render at 900p and reconstruct towards 1080p or 1440p. It'll be fine. Output resolution for PS5 after reconstruction won't be targeting anything as low as 1440p.

"Hand picking effects" is basically what the options on a PC game allow you to do. It's not some huge engineering effort for consoles as you have to have this for the PC anyway. The fact that the Dirt 5 developers aren't even at the stage where they're worrying about how to fine-tune all versions of the game (it's not just XSS, it's all versions, the Technical Director made that clear) highlights that it's actually a pretty run-of-the-mill thing to do.

XSX has an 18.2% advantage over PS5's peak boost, and in reality the gap will normally be greater. Still not enough to hit 4K native with the same settings as, say, an 1800p-native PS5 game. But upscaling and reconstruction will be just as big a thing on XSS / XSX as on PS5.
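For reference, the 18.2% falls straight out of the paper specs (back-of-the-envelope: 52 CUs @ 1825 MHz vs 36 CUs @ up to 2230 MHz, 64 FP32 lanes per CU, 2 ops per lane per clock for FMA):

```
# Peak FP32 throughput from the public specs.
xsx_tf = 52 * 64 * 2 * 1.825e9 / 1e12  # ~12.15 TFLOPs
ps5_tf = 36 * 64 * 2 * 2.230e9 / 1e12  # ~10.28 TFLOPs (at max boost)
print(f"{xsx_tf / ps5_tf - 1:.1%}")    # 18.2%
```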

Even for first party games from MS, dynamic res and temporal reconstruction are a thing. Halo 5 has dynamic res and upscales, Gears 5 uses dynamic res and temporal reconstruction.

Memory is likely to be the real pressure for XSS rather than being 4TF.
 
If the PS5 can render at 1440p and reconstruct towards 4K, XSS can render at 900p and reconstruct towards 1080p or 1440p. It'll be fine.
It'll be fine on small sub-43" TVs and it would be great on a 10-12" handheld.
Give it 3 years into the new generation and we'll see how happy SeriesS owners with 55"+ 4K TVs are with their games.


"Hand picking effects" is basically what the options on a PC game allow you to do. It's not some huge engineering effort for consoles as you have to have this for the PC anyway.
The developers don't do QA and performance testing on every scene / stage / level for every combination of every setting available for the PC. They need to do that for the console. Dismissing multi-SKU console optimization as something similar to changing one global setting in the menu and then running a 2 minute benchmark on a PC is an oversimplification.


XSX has an 18.2% advantage over PS5's peak boost*, and in reality the gap will normally be greater.
* - in compute throughput. Unless the SeriesX has more ROPs, the PS5's pixel fillrate is up to ~20% faster, and the same goes for the ACEs / GCP, the Geometry Processor and the other fixed-function blocks that scale with clocks.
Saying the SeriesX will be at least 18.2% faster than the PS5, and normally more, is true if the consoles are running the AIDA64 GPGPU benchmark. Which they will not.
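To put a number on the fillrate point (a sketch assuming 64 ROPs on both GPUs, which is the usual assumption for RDNA2 parts of this size, with the PS5 at its max boost clock):

```
# Peak pixel fillrate = ROPs * clock. Assumes 64 ROPs on both GPUs.
ps5_fill = 64 * 2.230e9 / 1e9            # ~142.7 Gpixels/s at max boost
xsx_fill = 64 * 1.825e9 / 1e9            # ~116.8 Gpixels/s
print(f"{ps5_fill / xsx_fill - 1:.0%}")  # ~22% at PS5's max clock
```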



Even for first party games from MS, dynamic res and temporal reconstruction are a thing. Halo 5 has dynamic res and upscales
And so will Halo Infinite. We all know how that went.
 
It'll be fine on small sub-43" TVs and it would be great on a 10-12" handheld.
Give it 3 years into the new generation and we'll see how happy SeriesS owners with 55"+ 4K TVs are with their games.

Games aren't going to have lower target resolutions than this gen on XSX and PS5. That has never happened before, and it won't happen this gen.

The developers don't do QA and performance testing on every scene / stage / level for every combination of every setting available for the PC. They need to do that for the console. Dismissing multi-SKU console optimization as something similar to changing one global setting in the menu and then running a 2 minute benchmark on a PC is an oversimplification.

They build in these options for a reason. They work the way they do for a reason. Console versions have their settings tweaked - always have. It's normal. It's commonplace.

* - in compute throughput. Unless the SeriesX has more ROPs, the PS5's pixel fillrate is up to ~20% faster, and the same goes for the ACEs / GCP, the Geometry Processor and the other fixed-function blocks that scale with clocks.
Saying the SeriesX will be at least 18.2% faster than the PS5, and normally more, is true if the consoles are running the AIDA64 GPGPU benchmark. Which they will not.

Yeah, they'll mostly be limited by compute and bandwidth. And in stressful situations PS5 will throttle, adding to its ALU deficit. That's fine, it's what it's designed to do. Literally how it's built. Literally how it's planned to operate.

And so will Halo Infinite. We all know how that went.

That's just shitpost trolling. Not technical in the slightest.

You're posting in a thread about game engine scaling in an entirely partisan way. It adds nothing to the conversation.
 
Yeah, they'll mostly be limited by compute and bandwidth.
They will?
So in one thread there are people claiming that games are starting to see diminishing returns from higher compute throughput, but for you it's all about compute throughput and bandwidth.


You're posting in a thread about game engine scaling in an entirely partisan way.
I think you should check your own biases, and perhaps read, I don't know, my posts on this very same page?
I think there's a good chance he's genuinely excited about the acquisition. Like I said in the acquisition thread, I think current Microsoft is a much better manager of studios / IPs than Zenimax (...)

Or do you only consider my opinion nonpartisan if I take one special side to your own liking?


That's just shitpost trolling.
Halo Infinite is the literal embodiment of the concerns raised by the idea of a multi-SKU launch with very different performance levels.
Perhaps it'd be good for your argument if this fact were hidden away, but it won't be.


It adds nothing to the conversation.
Yes, of course.
I just wrote a post above calculating expected render resolutions by relating compute throughput (which is something you apparently believe is the right thing to do), to which you respond with a couple of useless and obnoxious one-liners, and I'm the one adding nothing to the conversation.
 
* - in compute throughput. Unless the SeriesX has more ROPs, the PS5's pixel fillrate is up to ~20% faster, and the same goes for the ACEs / GCP, the Geometry Processor and the other fixed-function blocks that scale with clocks.
Saying the SeriesX will be at least 18.2% faster than the PS5, and normally more, is true if the consoles are running the AIDA64 GPGPU benchmark. Which they will not.

???
Sorry, but what? For years there's been a surplus of ROP throughput, and everything has been bandwidth- and compute-limited. And you want to tell us that the Xbox SoC will have lower performance because of that?
Just think about it for a second; this doesn't make sense at all. Only in very, very limited situations might this be true.
Btw, there are still variable clocks on the PS5 side, which means the ROPs, caches, etc. will also clock lower. Much lower, according to Cerny (that ~2.2 GHz peak cannot be held stable in all situations).

It is what it is. In the end we will just see small resolution advantages for the Xbox in multi-platform titles. But because of the overall high performance of both machines and good reconstruction techniques, DF will now have to zoom in even further. At least I hope that's what happens. It totally depends on the average clock speed the PS5 sustains.

And so will Halo Infinite. We all know how that went.
I think we can all agree that the Halo Infinite demo had flaws that were not a hardware problem. It even ran on PC and looked outdated.
But one thing the demo nailed was the lighting of the star: the color the light had when it went through the trees. Funny thing, I saw the exact same (as far as I remember) light conditions when I drove past a fir forest a few days ago ^^
 
Sorry, but what? For years there's been a surplus of ROP throughput, and everything has been bandwidth- and compute-limited.
Check Ampere's thread, then. We have a 3090 that has ~20% higher compute throughput and 23% higher memory bandwidth than the 3080, yet barely gets 10% higher performance in 4K games.
Same architecture (it's essentially the same chip, even), just 20-23% more resources for only 10% higher performance. Then on AMD's side there's the Vega 56 getting the exact same performance as the Vega 64 at ISO clocks, despite the latter having 14% higher compute throughput. In fact, a Vega 56 at ISO compute throughput (i.e. clocked 14% higher than a Vega 64) gets better performance than a Vega 64, precisely because the GPU is working at higher clocks.
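The ratios, from the public spec sheets (nominal boost clocks; real-world clocks vary, so treat these as ballpark figures):

```
# Spec-sheet resource ratios vs the ~10% gaming gap reviews measured at 4K.
tf_3090, tf_3080 = 35.6, 29.8          # FP32 TFLOPs at nominal boost
bw_3090, bw_3080 = 936.2, 760.3        # memory bandwidth, GB/s
print(f"{tf_3090 / tf_3080 - 1:.0%}")  # ~19% more compute
print(f"{bw_3090 / bw_3080 - 1:.0%}")  # ~23% more bandwidth
print(f"{64 / 56 - 1:.0%}")            # Vega 64 vs 56: ~14% more CUs
```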

Also, for the better part of the last decade we had GCN GPUs with inferior gaming performance to Kepler/Maxwell/Pascal GPUs that had lower compute throughput and lower bandwidth.
Bandwidth + compute were never the whole story. Pascal vs. Vega especially showed us how a narrow, high-clocked architecture can get substantially higher performance than a wider, lower-clocked one.

Yes, I know that between the SeriesX and the PS5 we're talking about architectures that are much closer, but the point of narrower + higher-clocked > wider + lower-clocked still stands to some degree (as seen with Vega 56 vs 64 and the RTX 3080 vs 3090), and this has been explicitly mentioned by Cerny and others in the industry.


And you want to tell us, that the xbox SOC will have lower performance because of that?
No, so far I actually only wrote that the SeriesX's performance will be higher than the PS5's, and I presented my calculations as such.
What I do argue against is @function's dogma that the SeriesX will always be faster than the PS5 by at least their 17% difference in compute throughput, as if nothing else in the pipeline mattered. This is factually not true.
 
I remember all of the "I don't care about resolution just give me detailed graphics" comments. Well in a weird way that is what Microsoft has done with the Series S. They've scaled the hardware to basically force developers to give you next gen @1080p.

I mean if you can pick up a Series S in a few years on Black Friday for $200 and it plays UE5 games even @900p the dream is alive in my opinion.....
 
I think the interesting part is when the 2nd wave will jump into the next-gen pool. Usually it's early adopters and fanboys that go first, then the more price-sensitive ones, maybe 1-2 years later. Will the XSS get them to jump in earlier?

I agree, this is the rub for the lower-cost Series S. I'm not expecting it to post big numbers in launch shipments, because the kind of people who buy at launch tend to be quite price-insensitive <coughs in launch PS3 regrets>. If MS can persuade the "I'll wait for when it's $XXX cheaper" crowd to jump in earlier, that will be a very big get for them. You can bet your bottom dollar Sony is leaning hard on their supply chain to reduce costs and narrow that price gap between the XBSS and the PS5 DE.
 
The XSS is already well within that second price drop that certain people wait for.

This is why I say what we think of as early adopters could be very different.
They may not buy on launch day, but that could be more because it hasn't hit mass social awareness yet.

I still have certain reservations around memory, but I feel the benefit of reaching a wider audience early on and throughout the generation (due to cost-reduction issues) is worth it.

Any game that can't scale will simply be a poorly thought-out game, and those should be few and far between. As long as people accept that buying an XSS is knowingly buying an entry-level device where a certain level of compromise is expected.
 
SeriesX will always be faster than the PS5 by at least their 17% difference in compute throughput, as if nothing else in the pipeline mattered. This is factually not true.

It's more than 17% even when the PS5 GPU is operating at its max of 2.23 GHz.
The XSX has the more powerful GPU, and also more CUs, which is probably a better way to get there than extremely high clocks, where diminishing returns occur.
And that's assuming max boost.
The CPU is slower too, even at max boost, and there's ~100 GB/s less memory bandwidth.
It's the more powerful machine, in a smaller box, with less power draw.
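For reference, the bandwidth gap from the public specs (GDDR6 at 14 Gbps on both machines; note the XSX splits its 16 GB into a fast and a slow pool):

```
# Memory bandwidth = bus width (bits) * data rate (Gbps) / 8.
xsx_fast = 320 * 14 / 8   # 560 GB/s on the 10 GB "GPU optimal" pool
xsx_slow = 192 * 14 / 8   # 336 GB/s on the remaining 6 GB
ps5_bw   = 256 * 14 / 8   # 448 GB/s across the unified 16 GB
print(xsx_fast - ps5_bw)  # 112.0 -> the "~100 GB/s" cited above
```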
 
It's more than 17% even when the PS5 GPU is operating at its max of 2.23 GHz.
The XSX has the more powerful GPU, and also more CUs, which is probably a better way to get there than extremely high clocks, where diminishing returns occur.
And that's assuming max boost.
The CPU is slower too, even at max boost, and there's ~100 GB/s less memory bandwidth.
It's the more powerful machine, in a smaller box, with less power draw.
The architecture of the Xbox Series X is packed with a bunch of performance-enhancing features. For example, thanks to SFS (Sampler Feedback Streaming) memory management, the effective difference will be even bigger than what the teraflop numbers show.

Multiplatform games will show this.
 