Next Generation Hardware Speculation with a Technical Spin [post E3 2019, pre GDC 2020] [XBSX, PS5]

And I suppose it's not on the SoC because it will free up some extra CPU bandwidth?
Might be one of the fixed-function dedicated blocks of the APU; it could be part of a controller/codec chip for the memory subsystem (if the NAND is soldered to the board, for instance); or it might be a totally discrete chip.

All of those would free up CPU time.
 
So if neither system is going the cache approach, where uncompressed data could stream, then it will be a battle over which one has the better decompression system in place.

Dedicated decompression chip. There is a patent about it.
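
On why a dedicated block helps: inflate-style decompression on a general-purpose core tends to top out at a few hundred MB/s per thread, far short of a multi-GB/s SSD. A rough sketch in Python using zlib as a stand-in (the data and numbers are invented for illustration, not a claim about any console's actual codec):

```python
import time
import zlib

# Invented, repetitive "asset" data so the compressor has something to bite on.
raw = (b"some fairly repetitive game asset data " * 4096) * 8  # ~1.2 MB
packed = zlib.compress(raw, 6)

# Time repeated decompression on a single CPU thread.
start = time.perf_counter()
for _ in range(50):
    out = zlib.decompress(packed)
elapsed = time.perf_counter() - start

mb = len(raw) * 50 / 1e6
print(f"ratio {len(raw) / len(packed):.1f}x, "
      f"single-thread decompress ~{mb / elapsed:.0f} MB/s")
```

Whatever the exact figure on a given CPU, it makes clear why feeding a 2-4 GB/s SSD through software decompression would eat cores, and why a fixed-function block (or a very light format) is attractive.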

If we assume both next-gen systems have an SSD at 2-4 GB/s and a very powerful real-time decompression chip, can we conclude that, even with a paltry 20 GB of RAM, there's absolutely no scenario where a long loading time will still occur? Even if the 20 GB of RAM needs to be filled entirely from a clean slate?
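
As a quick sanity check on that question, a back-of-the-envelope estimate. The 2 GB/s read speed and the 2:1 compression ratio below are this thread's guesses, not confirmed specs:

```python
# Worst case: fill all of RAM from a clean slate off the SSD.
ram_gb = 20.0    # RAM to fill (rumored figure)
ssd_gbps = 2.0   # assumed raw SSD read speed, GB/s
ratio = 2.0      # assumed hardware decompression ratio

seconds = ram_gb / (ssd_gbps * ratio)
print(f"worst-case fill: {seconds:.1f} s")  # → 5.0 s
```

So even under fairly conservative assumptions, a full RAM refill lands in single-digit seconds; the question is whether unoptimized content streams enough redundant data to push it back up.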
 
Something we apparently overlooked: AMD pretty much confirmed that the hardware RT in the Xbox Series X is coming from them.

This processor builds upon the significant innovation of the AMD Ryzen™ "Zen 2" CPU core and a "Navi" GPU based on next-generation Radeon™ RDNA gaming architecture including hardware-accelerated raytracing

https://community.amd.com/community/gaming/blog/2019/06/09/amd-powers-microsoft-project-scarlett

So they made a specific statement for Xbox, but they never did the same for PS5, which could suggest the PS5 RT solution is custom.
 
A quote from a dev making a game for PS5.
"If games would stay the same in terms of scope and visual quality it’d make loading times be almost unnoticeable and restarting a level could be almost instant [in PS5 games].

However, since more data can be now used there can also be cases where production might be cheaper and faster when not optimising content, which will lead into having to load much more data, leading back into a situation where you have about the same loading times as today."

I do not see any scenario where the dev's quoted point above (loading times staying about the same as today) would happen on a PS5 with 16-24 GB of RAM if:
  1. The SSD is 4 GB/s, used as a cache to stream uncompressed data (no HDD cold storage).
  2. The SSD is 2 GB/s, used to stream compressed data through a powerful ASIC decompression chip.

But I do see a scenario where it could happen if:
  1. SSD speed is not as fast as hyped, and fast loading times are achieved through the same methods as today.
  2. PS5 RAM is unbelievably huge, perhaps using a large amount of DDR4 or Intel Optane at the same speed as DRAM.
  3. A relatively small amount of ultra-high-speed SSD is used as a cache and paired with an HDD/SSD (exactly like the early "leaks").

How small and fast could it be though?
  • If it is used as a cache then it should be really fast, otherwise a 1 TB SSD would make better sense.
  • It should also have good write endurance, otherwise it will not last long.
  • May I suggest Intel's 3D XPoint PCM as a good candidate for this cache. Maybe Intel can make a deal to have it used with AMD CPUs.

Aside from the unoptimized content that the dev said would lead to long load times, if the cache is used to store insane detail for immediately-needed data, then fetching from the HDD/SSD could create long load times if not managed properly.
 
https://community.amd.com/community/gaming/blog/2019/06/09/amd-powers-microsoft-project-scarlett

So they made a specific statement for Xbox, but they never did the same for PS5, which could suggest the PS5 RT solution is custom.
All three companies have their own PR schedules that have to be worked through.

So I wouldn't take that to mean PS doesn't use standard AMD RT. If I was going to say anything, it would be that it still leaves it open to be custom.

(I know you said could suggest, I'm just putting my view)
 
So I wouldn't take that to mean PS doesn't use standard AMD RT. If I was going to say anything, it would be that it still leaves it open to be custom.
At least we now have official confirmation of a direct RT solution from AMD for the Xbox. And since we have patents for that, we should start a conversation about its potential.
 
Fight me!

- next-gen consoles will be mid-range PCs. You'll buy into the hype though, so here we are
- ray tracing will be limited like it is on RTX today, but marketed heavily *if* it's a platform-distinguishing point
- SSDs on the PC are more than fine. There are no tears about load times on PC. If your experience is limited to consoles, anything in that area will feel light years ahead
- In a couple of years, once the toolset matures, current CPUs that are 6 cores, and especially those without HT, will start to suffer
- RDNA 2 will still be well behind Ampere at the high end
 
At least we now have official confirmation of a direct RT solution from AMD for the Xbox. And since we have patents for that, we should start a conversation about its potential.
I assume the largest difference that matters to devs is the option for traversal shaders.
LOD is surely the most interesting application for that, but what else? Is there more potential than that?

Another difference (which is more about console vs. PC): on consoles there will be no need to black-box the BVH.
I would use this to enable LOD via geomorphing / progressive meshes. Bounding boxes would only need to be large enough that they always fit the slightly changing geometry. Little extra work - maybe adding some BVH subdivision at the lowest level if the triangle count increases too much.
As opposed to stochastic LOD using traversal shaders, this solution would remain cache efficient. Such techniques are not used in games yet, but I could generate the data with the preprocessing tools I'm currently working on, so I want open BVH on PC too. Vendor extensions would be fine, but it could well be that what I want is not possible.

I guess other applications that could benefit from an open BVH are fluids or triangle alternatives.
Also scenes where not much is static, like earthquakes and heavy destruction (one could bake the physics / GI and also the BVH to animation and stream it from SSD... pretty sure we will see such impressive movie-like stuff with next gen).
Finally, an open BVH will make it easier to trade tracing performance against build performance, control the distribution of work between CPU and GPU, etc.
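
To make the traversal-shader idea concrete, here's a toy sketch (Python, with 1D intervals standing in for AABBs; all names here are invented for illustration): at a marked BVH node, instead of the fixed-function "visit all overlapping children" rule, a programmable callback picks which LOD subtree the ray descends into.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Node:
    lo: float                      # interval bounds stand in for an AABB
    hi: float
    tri: Optional[str] = None      # leaf payload (a named triangle batch)
    kids: List["Node"] = field(default_factory=list)
    lod_switch: bool = False       # "traversal shader" runs at this node

def traverse(node: Node, t: float,
             select_lod: Callable[[float], int], hits: List[str]) -> None:
    """t is the ray parameter tested for containment (1D stand-in
    for a ray/box slab test)."""
    if not (node.lo <= t <= node.hi):
        return
    if node.tri is not None:
        hits.append(node.tri)
        return
    kids = node.kids
    if node.lod_switch:
        kids = [node.kids[select_lod(t)]]   # programmable branch
    for k in kids:
        traverse(k, t, select_lod, hits)

# Two LOD subtrees covering the same space:
hi_lod = Node(0, 10, kids=[Node(0, 5, tri="near_hi"), Node(5, 10, tri="far_hi")])
lo_lod = Node(0, 10, tri="coarse")
root = Node(0, 10, kids=[hi_lod, lo_lod], lod_switch=True)

hits: List[str] = []
traverse(root, 3.0, lambda t: 0 if t < 6 else 1, hits)
print(hits)  # → ['near_hi']
```

The stochastic-LOD caveat from the post shows up here too: because the callback can redirect to an arbitrary subtree, the memory access pattern is data-dependent, which is exactly what a refit-in-place geomorphing scheme would avoid.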

Performance-wise I still guess next-gen consoles will be about RTX 2060 level or less, though maybe a bit more now with those 12 TF rumors.
But I also think that's enough, and options for good LOD support are worth more than >2080 performance would be.
I also think there will be no FF unit to process the outer traversal loop (once you have LOD you don't want to turn it off), so it's very unlikely they could beat current raw (but restricted) RTX performance.

I still hope AMD pulls something great out of the hat for next gen, like they did with GCN for the current gen. Maybe RDNA2 will be a bigger change than the naming indicates.
Considering this, and RTX being much simpler than I had initially assumed, AMD could also surprise with better RT performance. But I would not bet on it. (hoping for more compute powaaa instead :D )

What I believe in the least is Sony doing a custom RT solution. It will be all the same, I guess.
 
...
What I believe in the least is Sony doing a custom RT solution. It will be all the same, I guess.
Why? Historically, we should not be surprised if some hardware RT stuff is exclusive to PS5. On the Pro they included some custom silicon (whoever designed it: it's not already-existing stuff, and it still doesn't exist elsewhere) to help their CBR rendering (and not only that; it can also be used to improve TAA quality). They could do exactly the same for RT rendering.
 
Performance-wise I still guess next-gen consoles will be about RTX 2060 level or less, though maybe a bit more now with those 12 TF rumors.
...
What I believe in the least is Sony doing a custom RT solution. It will be all the same, I guess.
Next-gen consoles will be SIGNIFICANTLY better than the 2060. If even one of them lands 12 TF of RDNA2, it will likely be better than the 2080, let alone the 2060.
 
Why? Historically, we should not be surprised if some hardware RT stuff is exclusive to PS5. On the Pro they included some custom silicon (whoever designed it: it's not already-existing stuff, and it still doesn't exist elsewhere) to help their CBR rendering (and not only that; it can also be used to improve TAA quality). They could do exactly the same for RT rendering.
I think custom CBR is peanuts in comparison to custom RT, even with current RT likely being simple and lacking complex reordering like ImgTech had.
I don't think it's worth it for Sony to make this investment, considering nobody will deliver a 'good' RT solution anyway, because the RT algorithm has a bad memory access pattern which can't be fixed.
So Sony may choose from some options AMD proposes, picking whatever suits their vision best.

Just my 2 cents. I'm no expert, and it also depends on whether we count 'adding some ACEs or CBR to GCN' as chip design made by Sony, or just minor customization.

Next-gen consoles will be SIGNIFICANTLY better than the 2060.
Do you think this also applies to RT performance alone?
Thing is, even if they do a 12 TF console (which makes sense for having a cheaper base model), AMD beating NV significantly with their first RT implementation just sounds optimistic to me.
But who knows... never underestimate AMD! :)
 
Do you think this also applies to RT performance alone?
Thing is, even if they do a 12 TF console (which makes sense for having a cheaper base model), AMD beating NV significantly with their first RT implementation just sounds optimistic to me.
But who knows... never underestimate AMD! :)
I dunno about RT; I'm talking strictly raster performance. I think 12 TF Navi next year will be comfortably ahead of the 2080, let alone the 2060.
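
For context on the raw numbers being thrown around: theoretical FP32 TFLOPS is just ALU count × clock × 2 (one FMA per clock), which is also why these figures don't translate directly across architectures. A tiny sketch (the 2080's ALU count and boost clock are public specs; the 12 TF console figure is rumor):

```python
# Theoretical FP32 throughput: 2 FLOPs (one FMA) per shader ALU per clock.
def tflops(alus: int, clock_mhz: float) -> float:
    return alus * clock_mhz * 1e6 * 2 / 1e12

# RTX 2080: 2944 shader ALUs at ~1710 MHz boost -> roughly 10 TF.
print(f"RTX 2080: ~{tflops(2944, 1710):.1f} TF")
# A rumored 12 TF RDNA2 part sits above that on paper, but paper TF
# across different architectures is not an apples-to-apples comparison.
```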
 
Fight me!

- next-gen consoles will be mid-range PCs. You'll buy into the hype though, so here we are
- ray tracing will be limited like it is on RTX today, but marketed heavily *if* it's a platform-distinguishing point
- SSDs on the PC are more than fine. There are no tears about load times on PC. If your experience is limited to consoles, anything in that area will feel light years ahead
- In a couple of years, once the toolset matures, current CPUs that are 6 cores, and especially those without HT, will start to suffer
- RDNA 2 will still be well behind Ampere at the high end
All true, but I think an RDNA2 card that competes with at least a 2080 Ti for half the price is something we should all wish for; then Nvidia might start pricing their best stuff to sell instead of ripping us off, just like Intel has been forced to do lately.
 