The scalability and evolution of game engines *spawn*

They've already indicated the main spec is Series X, and then they scale down.
Considering that PS5 and XSX are so close together, that should be the lead spec.

I'm thinking they will spec for Lockhart and then scale up for PS5 and XSX, because this is the easiest way of doing it.
 
I'm thinking they will spec for Lockhart and then scale up for PS5 and XSX, because this is the easiest way of doing it.
That would be the hardest way to do it. PS5 will sell more than both Xboxes combined, so PS5, which is close to XSX, would be the lead platform. But because XSX shares nearly everything with PC, XSX should be the lead platform overall.

Not to mention, you can't make a discless SKU your lead platform.
 
I'm thinking they will spec for Lockhart and then scale up for PS5 and XSX, because this is the easiest way of doing it.

Scaling down is easier than scaling up if all your target platforms share the same feature sets.

Scaling up is easier if you have disparate feature sets. For example, in the PC space there are so many different features spread across so many different hardware configurations that many developers choose to target the largest pool of hardware that shares common features. And for developers that target both PC and consoles, consoles have historically been the limiting factor, as they lack more and more advanced hardware features as a console generation goes on.

In that case it's easier to add in or tack on advanced features that are missing from what you've chosen as the base hardware feature set. Notice that I say hardware feature set and not hardware speed. You can always scale across performance targets fairly easily (within reason). It's much harder to scale across disparate supported hardware features.

That's why DirectX allowed PC gaming to explode like it did back in the late '90s and early 2000s. Programmers could code to a common hardware feature set and then scale according to speed. It wasn't perfect, however, as each vendor's hardware still had its quirks, and then you had the infernal caps bits for non-base hardware features.
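The modern D3D12 equivalent of those caps bits is CheckFeatureSupport. A minimal sketch of coding to a base feature set and branching on an optional one; the ReflectionPath names are invented, purely for illustration:

```cpp
#include <d3d12.h>

// Base path runs everywhere; the optional path is gated on a queried
// feature tier instead of being assumed. (Sketch; ReflectionPath is a
// hypothetical name for illustration.)
enum class ReflectionPath { ScreenSpace, Raytraced };

ReflectionPath ChooseReflectionPath(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D12_FEATURE_D3D12_OPTIONS5, &opts5, sizeof(opts5))) &&
        opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_1)
    {
        return ReflectionPath::Raytraced;  // optional, non-base feature
    }
    return ReflectionPath::ScreenSpace;    // base feature set, runs anywhere
}
```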

Contrast that with, say, historical cross-platform development on consoles, where the PS2 was nothing like the Xbox or Dreamcast. Same goes for the PS3 being absolutely nothing like the X360. PS4 and XBO got pretty close, but there were still some big sticking points of hardware divergence (ESRAM, for one).

In the case of XBSX and PS5, we have very similar hardware feature sets with relatively minor differences. With XBSX and XBSS we have virtually identical hardware feature sets. That's about as easy as easy gets in terms of scaling across different performance targets.

What does that mean? It's easier for a developer to scale down than it is to scale up. In other words, it's easier to create rendering and assets targeting higher fidelity and then decrease that quality for a lower performance target. Scaling up would require starting with low-quality assets or rendering and then somehow figuring out a way to add higher detail on top of that low-quality base. *buzzer sound* Nope, easier to scale down.

And this is made possible because you don't have to worry about whether the lower target supports X hardware feature or can do Y rendering method.
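To make that concrete, here's a hypothetical sketch of what "author high, scale down" looks like in practice; every name and number below is invented, not a real console spec:

```cpp
// Author all content against the top target, then derive lower tiers
// by subtracting: lower resolution, smaller shadow maps, earlier LODs,
// skipped top mips. Going the other way would require detail that was
// never authored in the first place.
struct QualityPreset {
    int   renderWidth, renderHeight;
    int   shadowMapSize;
    float lodBias;        // positive = switch to lower-detail meshes sooner
    int   textureMipSkip; // number of top mip levels never loaded
};

constexpr QualityPreset kHighTier { 3840, 2160, 4096, 0.0f, 0 };
constexpr QualityPreset kLowTier  { 2560, 1440, 2048, 0.5f, 1 };
```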

Regards,
SB
 
3rd party stuff should be OK, as they're typically not going to extract the most possible performance.
I think it's the first-party exclusive games that will suffer on the Xbox,
as it's 10GB on the Xbox vs 16GB on the PS.

Either way it's a risky and bold move for MS. You've got to admire the cojones on them, focusing on getting the price down as low as possible. I personally would have gone with a smaller 256GB SSD and 16GB of memory (let the user buy more storage if the small size annoys them); that would have made things much easier for the developers.
 
I also think it shouldn't matter for 3rd parties. 3rd parties already need to make sure their games run on PC, and the majority of gaming PCs have specs closer to the XsS than the XsX. Currently in the Steam survey the 1060 has the biggest share at 10%, and if you look at the top 10 GPUs it's still around 1060 performance. Also, RTX wouldn't really matter for 3rd parties other than for cosmetic purposes, since it is still irrelevant in terms of market share.
The problem will be 1st or 2nd party games, where they can't use XsX power without thinking about XsS. Suppose someone has an idea that needs heavy compute or, more likely, ray tracing, like making a mirror world. They will probably already need to use as few rays as possible at 4K, so the ray-traced result might not survive being cut down further, and then it might not be possible to do at all on XsS while still maintaining playability.
Or maybe a game like Dreams: if they target XsX, they have a lot more RAM to play with and thus can do more stuff. But because of XsS, some of the extra RAM in XsX will go unused, since I doubt those edits in Dreams are affected by resolution.
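For scale, a toy sketch using the commonly reported game-usable memory splits (roughly 13.5GB of the XsX's 16GB and 8GB of the XsS's 10GB go to the game; these figures are assumptions from press reporting, not official documentation):

```cpp
#include <cstdio>

// Commonly reported game-usable RAM figures (assumptions; the OS
// reservation is Microsoft's call and could change over time).
constexpr double kXSXGameRAM_GB = 13.5; // of 16 GB total
constexpr double kXSSGameRAM_GB = 8.0;  // of 10 GB total

int main()
{
    // If a simulation or UGC feature (not tied to resolution) must fit
    // the smaller SKU, this much of the bigger SKU's RAM goes unused:
    std::printf("XSX headroom left idle: %.1f GB\n",
                kXSXGameRAM_GB - kXSSGameRAM_GB);
}
```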
 
Microsoft's 1st & 2nd party games have to run on PC too. Think of the X|S as different tiers of PC. So it's not going to be a problem for 3rd parties, but it will be for 1st & 2nd parties? Scaling is scaling. No difference who it comes from.

Tommy McClain
 
This counts as a bit of evolution. Warframe is switching to Oodle Texture to reduce the game's size: 6.5GB initially and 15GB over the year. The first switch is for lightmaps and then for the rest of the textures. The stated install size is 30GB at the moment.

Some (fairly imperceptible) comparisons at the link

https://forums.warframe.com/topic/1223735-the-great-ensmallening/

via Eurogamer's article.
Well, those are very "greyish" textures ;)
Don't get me wrong, something like Oodle is great to use, but it really seems that those "greyish" textures are more or less a best-case scenario ;)

But at least this shows that next-gen games can be smaller than expected, as Oodle Texture works perfectly as preparation for Kraken and zlib compression.
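For the curious, a toy illustration of the principle behind that claim. This is not Oodle's actual algorithm; it just shows that nudging data toward fewer distinct byte patterns, at a small quality cost, gives a lossless compressor like zlib far more matches to work with:

```cpp
#include <cstdio>
#include <cstdlib>
#include <vector>
#include <zlib.h>

int main()
{
    // Stand-in for encoded texture blocks: high-entropy random bytes.
    std::vector<unsigned char> raw(1 << 20);
    for (auto& b : raw) b = static_cast<unsigned char>(std::rand());

    // Crude stand-in for rate-distortion optimization: round each byte
    // down to a multiple of 16. Slight quality loss, far more repeats.
    std::vector<unsigned char> rdo(raw);
    for (auto& b : rdo) b &= 0xF0;

    auto zsize = [](const std::vector<unsigned char>& src) {
        uLongf len = compressBound(static_cast<uLong>(src.size()));
        std::vector<unsigned char> dst(len);
        compress(dst.data(), &len, src.data(), static_cast<uLong>(src.size()));
        return static_cast<unsigned long>(len);
    };

    std::printf("raw compressed: %lu bytes\n", zsize(raw)); // ~no savings
    std::printf("rdo compressed: %lu bytes\n", zsize(rdo)); // much smaller
}
```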
 
Scaling down is easier than scaling up if all your target platforms share the same feature sets.

I agree with all of your post except this. Scaling down is much more complicated, especially when your headroom is much lower. The CPU dip from Series X probably isn't going to be a big deal, but inevitably, if your code is hitting the CPU ceiling to produce 60fps on Series X, then Series S and PS5 are going to struggle.

Likewise, if your art assets are all a certain size and simply require X bandwidth, not having enough bandwidth means you have a problem. Bear in mind that bandwidth is required to access assets regardless of the target render resolution. Sure, there is a lot of cool tech in all the next-gen consoles to optimise this, and they all share similar features, but sometime, somebody is going to hit that wall.

Scaling up is inevitably easier. More resolution, more frames, longer draw distances, more foliage on screen etc.
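As a concrete illustration, the "more resolution" dial is exactly how most engines already handle performance scaling. A minimal sketch of the standard dynamic-resolution feedback loop (the numbers are illustrative, not from any console):

```cpp
#include <algorithm>

// Measure GPU frame time, adjust render scale so the frame fits the
// budget. Scaling up is just letting the same dial rise; scaling down
// hits a floor once the minimum acceptable resolution is reached.
float UpdateRenderScale(float scale, float gpuMs, float budgetMs)
{
    const float ratio = budgetMs / gpuMs;    // >1 means headroom, <1 over budget
    scale *= std::clamp(ratio, 0.9f, 1.1f);  // damp per-frame adjustment
    return std::clamp(scale, 0.5f, 1.0f);    // never below 50% resolution
}
```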
 
So much for XSS blowing XBO X out of the water...
Some modern video cards still can't run Crysis.
Does that mean the older video cards are better?

The software has to match the hardware; games designed around the latest features won't be available on X1X.

How exactly would X1X keep up with the SSD and CPU speeds that XSS will bring?

Then let's touch on what X1X is missing GPU-wise. This is the 12_2 minimum spec, with what X1X actually supports in parentheses; it's not even what XSS is bringing to the table:

Required driver model: WDDM 2.0
Shader Model: 6.5 (X1X: 6.0)
Raytracing: Tier 1.1 (X1X: none)
Variable rate shading: Tier 2 (X1X: none)
Mesh shaders: Tier 1 (X1X: none)
Sampler feedback: Tier 0.9 (X1X: none)
Resource binding: Tier 3 (X1X: Tier 3, yes)
Tiled resources: Tier 3 (X1X: Tier 1)
Conservative rasterization: Tier 3 (X1X: none)
^^^ these are super important features

There are also less important features missing; but not worth talking about.
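For reference, this is how an engine queries exactly those tiers at runtime on PC with a recent Windows SDK (standard D3D12 calls; error handling omitted for brevity):

```cpp
#include <d3d12.h>
#include <cstdio>

void PrintFeatureTiers(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS  o  = {}; // binding/tiled/conservative
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 o5 = {}; // raytracing
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 o6 = {}; // variable rate shading
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 o7 = {}; // mesh shaders, sampler feedback
    D3D12_FEATURE_DATA_SHADER_MODEL   sm = { D3D_SHADER_MODEL_6_5 };

    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,  &o,  sizeof(o));
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &o5, sizeof(o5));
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6, &o6, sizeof(o6));
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7, &o7, sizeof(o7));
    device->CheckFeatureSupport(D3D12_FEATURE_SHADER_MODEL,   &sm, sizeof(sm));

    std::printf("Shader model:      0x%x\n", sm.HighestShaderModel);
    std::printf("Raytracing tier:   %d\n",   o5.RaytracingTier);
    std::printf("VRS tier:          %d\n",   o6.VariableShadingRateTier);
    std::printf("Mesh shader tier:  %d\n",   o7.MeshShaderTier);
    std::printf("Sampler feedback:  %d\n",   o7.SamplerFeedbackTier);
    std::printf("Resource binding:  %d\n",   o.ResourceBindingTier);
    std::printf("Tiled resources:   %d\n",   o.TiledResourcesTier);
    std::printf("Conservative rast: %d\n",   o.ConservativeRasterizationTier);
}
```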
 
Many reactions, and the devs aren't happy, like @zed told us as a game dev. The worst part is the RAM on the XSS side.

https://www.resetera.com/threads/devs-react-to-the-xbox-series-s-specs.284204/

EDIT:

CTO of Id Software

Another long rant
[attached tweet screenshots]
 
Isn't it the same speed CPU as the PS5 then? I mean when the PS5 isn't variable clocking it.

A little bit slower, 3.4 GHz with SMT, but the Xbox CPU does more I/O work than the PS5's. Beyond that, I think it's nitpicking because the dev is not happy with the Xbox Series S. None of this changes what Digital Foundry heard about Xbox Series S at last E3.

The consensus complaint is the RAM configuration.
 
Isn't it the same speed CPU as the PS5 then? I mean when the PS5 isn't variable clocking it.

Who knows, it's Fort Knox over there at Sony HQ, but at least PS5 is consistent because they only have one console. I assume devs will use the S clocks for the XSX as well and call it a day.
 
Simple thought experiment:

Imagine in 2013 Sony had released another SKU with 0.6 TF and 5GB of RAM, called it a 900p machine (really a 720p one), and sold it at $249.

And in 2020 we'd be looking at TLOU2 and GoT running on exactly that hardware.
 
It seems the honeymoon period for XSS is OVER. That was quick.

Simple thought experiment:

Imagine in 2013 Sony had released another SKU with 0.6 TF and 5GB of RAM, called it a 900p machine (really a 720p one), and sold it at $249.

And in 2020 we'd be looking at TLOU2 and GoT running on exactly that hardware.

That is Sony; of course they are going to take the time and optimize for their consoles. You think EA or Ubi is going to spend thousands of man-hours optimizing for both XSS and XSX? Ain't happening.
 
It's just another specific set of specifications on their PC build.

Games are poorly optimized for PC; this is why console games look great on lower specs. It's not an apples-to-apples comparison. And another point: how will the XSS hold up 4-5 years from now when IQ goes up? Will it still be 4K upscaled, or will it go down to 720p? Honestly, I prefer the PS5 approach: one console, it's simple, only one piece of HW to optimize for. MS checkmated themselves here. IMO this is worse than the Kinect.
 