The scalability and evolution of game engines *spawn*

No, not particularly. Not with a CPU as strong as this one. I don't expect there to be any GPGPU stuff.

Anything that has to do with rendering will stay on the GPU. Sure, that's GPGPU if you use it for, say, culling, but it belongs there, and the amount culled depends on assets and resolution (which should scale).
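
To make the culling point concrete, here's a rough sketch of the kind of data-parallel frustum test that GPGPU is good at. It's plain Python/NumPy standing in for what would really be a compute shader on console, and the object counts and planes are made up purely for the example; the point is that the work scales with the number of objects (assets), not with the CPU.

```python
import numpy as np

def frustum_cull(centers, radii, planes):
    """Visibility mask for bounding spheres against frustum planes.

    centers: (N, 3) sphere centers, radii: (N,) sphere radii,
    planes:  (6, 4) plane equations (nx, ny, nz, d) with unit normals
             pointing inward, so 'inside' means signed distance >= -radius.
    """
    # Signed distance of every center to every plane in one batched op --
    # exactly the kind of wide, independent work a GPU compute pass eats up.
    dists = centers @ planes[:, :3].T + planes[:, 3]      # (N, 6)
    return np.all(dists >= -radii[:, None], axis=1)       # (N,)

# Toy data: 100k objects scattered around the origin (made-up numbers).
rng = np.random.default_rng(0)
centers = rng.uniform(-100.0, 100.0, size=(100_000, 3))
radii = rng.uniform(0.5, 3.0, size=100_000)

# A crude axis-aligned "frustum": just the six faces of a box, purely so
# the example runs; a real frustum would be derived from the camera.
planes = np.array([
    [ 1, 0, 0, 50], [-1, 0, 0, 50],
    [ 0, 1, 0, 50], [ 0,-1, 0, 50],
    [ 0, 0, 1, 50], [ 0, 0,-1, 50],
], dtype=float)

visible = frustum_cull(centers, radii, planes)
print(f"{visible.sum()} of {len(visible)} objects survive culling")
```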

But if you're talking about GPGPU to do animation or other things, I don't see them going to GPGPU unless the CPU is not up to the task.

It'll be interesting to see what developers do now that both consoles have modern and relatively powerful CPUs. The PC space hasn't seen the CPU get much use in games because of consoles, so we may finally see the CPU getting pushed again.

Of course, this all depends on how the CPU in one of the consoles clocks if the GPU is being pushed hard. But I still hope to see some nice advances WRT more CPU power being used in games.

Regards,
SB
 
Interesting times indeed. Especially in light of the Crysis Remastered ray tracing running on even the 4 Pro and One X mid-gen consoles.

There is certainly a large increase in base capabilities for developers to use if they only target next-gen.
 

The CPU is only 4x as powerful; that's honestly not all that much. There will always be GPGPU work needed, because the GPU can handle that kind of work outside of rendering. I honestly think it will be held back to an extent.
 
like what? What have we been doing on GPGPU this generation that you can cite?
 
No idea what they have been doing, but they talked about it a lot early in the gen. Things that can be done:
Logic/AI
Physics
Animation
etc., etc.

Pretty much all of that is still done on the CPU in the vast majority (if not all, then almost all) of games. While you can do some of it on the GPU, it comes with a latency cost, plus the fact that if you are using the GPU for those things, you aren't using it for graphics rendering.

Regards,
SB
 

Apparently GPGPU and rendering use different pipelines, so you can render something AND have AI done at the same time. We need an expert here to settle this.
 
Ok, I understand better now. I was under the impression that ray tracing was done from the light sources in the scene, as the actual sources of light, but apparently the rays are cast from the camera's PoV, as it seems to be less computationally intensive that way. So despite the fact that we have ray tracing, we are very far away from an actual real-life "lighting simulator".

Oh yeah, we're definitely a long way from "real" lighting, but with enough bounces you can get remarkably close, at least within the context of a rendered scene.

And yeah, what's typically done in graphics is not how it really works in nature! In real life, light is emitted from a source and bounces around in an almost infinitely complex way. Each photon has its own path and its own set of interactions. To simplify things for computers, it's necessary to work backwards and start with a point you want to know about, then see what's illuminating it. Normally that start point is a position on a render target, or maybe a texture you want to use for an effect.
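
As a tiny illustration of that "start from the point you want to know about" idea, here's a minimal sketch (plain Python, made-up pinhole camera parameters) that builds the primary ray for one pixel of the render target; that ray is then what you'd trace into the scene to find out what's lighting the point:

```python
import math

def primary_ray(px, py, width, height, fov_deg=60.0):
    """Ray from a pinhole camera at the origin, looking down -Z,
    through the center of pixel (px, py) on a width x height render target.
    Returns (origin, direction) with a normalized direction."""
    aspect = width / height
    half_h = math.tan(math.radians(fov_deg) * 0.5)
    # Map the pixel center to [-1, 1] normalized device coordinates.
    ndc_x = (px + 0.5) / width * 2.0 - 1.0
    ndc_y = 1.0 - (py + 0.5) / height * 2.0
    dx = ndc_x * aspect * half_h
    dy = ndc_y * half_h
    dz = -1.0
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (0.0, 0.0, 0.0), (dx / length, dy / length, dz / length)

# The renderer starts from the point it wants to shade (a pixel) and asks
# what that ray hits -- the reverse of how photons travel in real life.
origin, direction = primary_ray(960, 540, 1920, 1080)
print(origin, direction)
```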

I think we're at a really interesting time for ray tracing, as real-time capability, even if still in its early days, is going to bring a lot more minds and a lot more customers to it. And as with 3D acceleration itself, that's likely to lead to rapid innovation and bring more creativity into the mix. I'm glad the XSS is going to be in on this in its own limited way. The more the merrier!

Edit - Then again, I've just read that RTX, and by extension DXR, are not doing primary ray tracing from the camera, but actually secondary ray tracing from light sources?

I'm not really sure what that's referring to. Could you share the link?

RTX can definitely do ray tracing from screen space, and invoke calls from a shader, but if there's some new stuff out there that they're playing around with, it'd be good to have a read.
 
Apparently GPGPU and rendering use different pipelines, so you can render something AND have AI done at the same time. We need an expert here to settle this.
Same pipelines and hardware.

You can fit in some GPGPU jobs using asynchronous compute, and hopefully the GPU will find space for them if there are gaps in the rendering workload. But there is no separate path where GPGPU goes and doesn't share resources with the rendering pipelines.
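
To put some entirely made-up numbers on that, here's a toy model of a frame where each rendering pass leaves part of the GPU idle and an async compute job tries to soak up that idle time. It's only an illustration of the "fills the gaps but shares the same hardware" point, not how a real GPU scheduler works:

```python
# Each pass: (name, duration in ms, fraction of the GPU it actually keeps busy).
# The utilization figures are invented purely for the illustration.
render_passes = [
    ("shadow maps",  2.0, 0.55),   # geometry-bound, lots of idle ALUs
    ("g-buffer",     3.0, 0.80),
    ("lighting",     4.0, 0.95),
    ("post-process", 2.0, 0.70),
]

async_job_ms = 2.5  # GPGPU work we'd like to slip in (e.g. particles, culling)

frame_ms = sum(duration for _, duration, _ in render_passes)
# Idle GPU-milliseconds the async queue could in principle soak up.
idle_budget = sum(duration * (1.0 - util) for _, duration, util in render_passes)

if async_job_ms <= idle_budget:
    print(f"Async job fits in the gaps: frame stays ~{frame_ms:.1f} ms")
else:
    overflow = async_job_ms - idle_budget
    print(f"Only {idle_budget:.1f} ms of gaps; frame grows to ~{frame_ms + overflow:.1f} ms")
```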
 

Well there you go. Some things don't scale, like AI and Physics.
 
AI isn't generally done on the GPU. Or at least it hasn't been. The GPU is not well suited to the way game AI is done today, at least.
Physics can be done on the CPU; there's sufficient power for that.

When referring to GPGPU, it's mainly about assisting on the rendering side of things. The CPU is often doing work for the rendering pipeline before submission; GPGPU allows that work to be done on the GPU and then rendered from the GPU. That work would scale with resolution.

If you're hunting for exception cases, you're unlikely to find them, at least from a GPGPU perspective. Anything that would cripple the XSS would cripple the other two. You only have so many TF available, and PS5 and XSX also have their own fidelity and resolution targets.
 
How much extra work will this entail for developers now that we have two different consoles with essentially different GPUs and RAM? I bet they are thrilled about this.
 

If you actually read what's already been posted in this thread, you'd likely have your answer. *ahem*

It's simplified compared to how console development used to be, even as recently as developing for the PS4 and Xbox One. There's no completely different memory scheme to deal with, like the small pool of fast memory on the Xbox One and One S. It's really not that much different from developing for PCs; if anything it's likely a bit easier than PC development. You have the same feature levels, and you won't have to deal with different feature sets (Nvidia, Intel, and AMD GPUs) and then multiple versions of each brand, all with different features and quirks of their own.

The only developers not targeting PCs are First Party, and even that is questionable now that Sony has begun releasing titles on the PC.
 
The CPU is only 4x as powerful; that's honestly not all that much. There will always be GPGPU work needed, because the GPU can handle that kind of work outside of rendering. I honestly think it will be held back to an extent.

"only 4x" doesn't fully describe it. Zen 2 has 4x the per-core SIMD computing resources of Jaguar, making the new consoles 8x in SIMD theoretical performance. It negates the need for GPGPU outside of the most extraneous stuff. Even still, the current gen really didn't do anything special or interesting with GPGPU aside from Horizon: Zero Dawn AFAIK. PC version of the game probably just runs the PS4 version's GPGPU procedural placement system on the CPU since PC cores are pretty beefy (or were they turned into DirectCompute shaders?).
 
Ok, I understand better now. I was under the impression that ray tracing was done from the light sources in the scene, as the actual sources of light, but apparently the rays are cast from the camera's PoV, as it seems to be less computationally intensive that way. So despite the fact that we have ray tracing, we are very far away from an actual real-life "lighting simulator".

Edit - Then again, I've just read that RTX, and by extension DXR, are not doing primary ray tracing from the camera, but actually secondary ray tracing from light sources?
It seems to be quite a common misconception that "ray tracing" describes a single method of full light transport simulation.

It's helpful to think of it as two parts that combine to make something.

Ray casting/tracing
An intersection routine for a ray/sphere/cone etc. against something (voxel, box, distance field, polygon, curve... 2D/3D/4D etc.).
A simple test of whether there is a hit or, in the case of volumes, a density etc. (Current hardware acceleration methods accelerate part of this.)

Light transport methods
Methods which use ray casting/tracing to actually do something with the intersections.
This is where the magic happens in terms of light simulation, and it determines whether the RT algorithm is able to handle global illumination, reflection/refraction or caustics properly, as sketched below.
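
A minimal sketch of that split, with a made-up one-sphere scene: a single ray/sphere intersection routine (the ray casting part, and the part hardware acceleration helps with) reused by two different "light transport" style queries that decide what to do with the result:

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Ray casting part: nearest positive hit distance along the ray, or None."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c          # direction is assumed normalized, so a = 1
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

SPHERE = ((0.0, 0.0, -5.0), 1.0)    # toy scene: a single sphere

# "Light transport" use #1: primary visibility -- what does this pixel see?
def primary_hit(origin, direction):
    return intersect_sphere(origin, direction, *SPHERE)

# "Light transport" use #2: a shadow query -- can this point see the light?
def in_shadow(point, dir_to_light, light_distance):
    t = intersect_sphere(point, dir_to_light, *SPHERE)
    return t is not None and t < light_distance

print(primary_hit((0.0, 0.0, 0.0), (0.0, 0.0, -1.0)))      # ~4.0, hits the sphere
print(in_shadow((0.0, 0.0, 0.0), (0.0, 0.0, -1.0), 10.0))  # True, sphere blocks the light
```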

I recommend watching the Ray Tracing Essentials series from Nvidia to pick up the basics.

Edit: Fixed silly light casting/tracing typo.. should have been ray casting/tracing from the start...
 
So let me ask you. What happens when a game runs at 1440p on RTX 2080 Ti and you want to run it on a Radeon 5500 XT?

It's simple, you scale some things down until the 5500 XT runs the game fine at the resolution you want it to. It could even be 4k on the 5500 XT if you wanted. Not at the same IQ settings obviously, but that's the point.

You can scale just about everything graphics related in order to hit the target performance you want at the resolution you want.

And XBSX to XBSS will be far, FAR simpler scaling than going from an RTX 2080 Ti to a Radeon 5500 XT, as that PC pairing has features that exist on one GPU but don't exist on the other.

Only difference is that on PC, the user does the scaling. On XBSS, the developers will choose what is scaled down (resolution, IQ levels, etc.) So a 1440p XBSX title could easily run at 1080p on XBSS with the reduced resolution and perhaps reduced shadow quality, less dense foliage, or any other graphical tweak that the developers feel would represent the least noticeable difference.

Graphics are the ONLY thing that has to scale as the CPU is exactly the same. So, physics, for example, wouldn't have to be touched. 3D audio wouldn't have to be touched. AI wouldn't have to be touched. Etc.

Again, if a developer can't target the XBSX and scale down to XBSS easily, then that is one fail developer.

[edit] Sorry if this should have gone into another thread. I was responding to a post that was a page or two before Brit's post mentioning the scaling thread.

Regards,
SB

The point was that 1440p 30fps will get scaled down to something in the range of 720p 30fps, rather than 1080p.
Scaling 1440p down to 1080p only decreases the resolution by a factor of about 1.78, and by current knowledge that isn't doable with roughly 1/3 of the FLOPS on the same architecture.

My point was to expect 720p~900p on the XBSS, not 1080p across the board.
Hi, next-gen graphics at 720p/900p.
What happens when XBSX and PS5 struggle to hit 1080p on some unknown future game?
Will we be expecting SD resolutions?
Sure, I can probably run Crysis on integrated graphics at something like 240p, but is that what we want?
There is a floor to how far you can scale. Push the resolution down past some point and it breaks down.
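
For what it's worth, the crude arithmetic behind the 720p~900p expectation looks like this, assuming GPU cost scales roughly linearly with pixel count and nothing else changes (an oversimplification, but it's the same assumption behind "just lower the resolution"), and using the commonly quoted 12.1 vs 4.0 TF figures:

```python
XSX_TF, XSS_TF = 12.1, 4.0            # commonly quoted peak compute figures

def scaled_resolution(width, height, tf_from, tf_to):
    """Resolution with the same per-pixel cost under a smaller TF budget."""
    scale = (tf_to / tf_from) ** 0.5  # pixels scale with TF, each axis with sqrt
    return round(width * scale), round(height * scale)

for w, h in [(2560, 1440), (3840, 2160)]:
    sw, sh = scaled_resolution(w, h, XSX_TF, XSS_TF)
    print(f"{w}x{h} on XSX -> roughly {sw}x{sh} on XSS")
    # 1440p comes out around ~828p, i.e. between 720p and 900p.
```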

Memory is another issue but apparently everybody thinks scaling down textures will fix the missing 6GB, which I don't think makes much sense.

4K => 2K textures are 1/4 the size, so if dropping to 2K saves 6GB, that 6GB is 3/4 of the texture budget, implying about 8GB of 4K textures on PS5/XBSX?
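
Spelling that arithmetic out, assuming block-compressed textures at about 1 byte per texel (BC7-class) plus roughly a third extra for mip chains; the per-texture numbers are only illustrative:

```python
BYTES_PER_TEXEL = 1.0   # BC7-class block compression
MIP_OVERHEAD = 4.0 / 3  # a full mip chain adds about 33%

def texture_mb(size):
    return size * size * BYTES_PER_TEXEL * MIP_OVERHEAD / (1024 ** 2)

tex_4k, tex_2k = texture_mb(4096), texture_mb(2048)
print(f"4K texture ~{tex_4k:.1f} MB, 2K texture ~{tex_2k:.1f} MB "
      f"({tex_2k / tex_4k:.0%} of the size)")

# Halving texture dimensions saves 3/4 of the texture memory, so freeing
# 6 GB that way implies roughly 8 GB of 4K textures resident to begin with.
saved_gb = 6.0
resident_4k_gb = saved_gb / 0.75
print(f"Saving {saved_gb} GB this way implies ~{resident_4k_gb:.0f} GB of 4K textures")
```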

...And apparently we're back to the "lazy devs" argument from 10 years ago.
 

I'm not as sceptical about the S as you are, but I can't say I get the reduced-texture argument either. Surely next-gen titles will predominantly be using virtual textures? There won't be large 4K textures resident in memory on the XSX/PS5.
 

That only makes the situation worse. So what data are you going to cut from RAM to free up 6 gigs?

Given that 4K => 2K saves 75% on textures, they were a good candidate. The more 4K textures there are, the easier time the XBSS has shaving off the required RAM. Other stuff won't scale nearly as well.

A faster SSD probably exacerbates the problem, because it should reduce the proportion of RAM reserved for textures as devs stream textures in and out as they need them.
 