Next Generation Hardware Speculation with a Technical Spin [post E3 2019, pre GDC 2020] [XBSX, PS5]

That was in reference to consoles: PS5 and Scarlett. Per AMD's own words, hardware RT is only present in RDNA 2. The question is, do PS5 and Scarlett have RDNA 2, or RDNA 1 with custom RT blocks?


And what does "select lighting effects" in that slide mean?
Yes, sorry if that wasn’t clear.

Even RTX HW does limited RT. You can do it for only select things like shadows. I believe Metro does RT for shadows (amongst other things, but that was a focus of one of their demos).

As Ryan said, it’s probably Navi (as both MS and Sony mentioned explicitly) with some RDNA 2 features brought forward. Perhaps they are slightly different, as Sony seems to want to do kd trees and photon mapping rather than BVH.
 
As Ryan said, it’s probably Navi (as both MS and Sony mentioned explicitly) with some RDNA 2 features brought forward. Perhaps they are slightly different, as Sony seems to want to do kd trees and photon mapping rather than BVH.
We know for sure they're both Navi based.

RDNA 2 is probably going to go by the Navi branding also.

I'm currently open to consoles being RDNA 1 or 2 based.

Thanks for the MS VRS link, I'll look at it a bit later.
I know they've added it to DX12 with a few tiers.
 
We know for sure they're both Navi based.

RDNA 2 is probably going to go by the Navi branding also.

I'm currently open to consoles being RDNA 1 or 2 based.

Thanks for the MS VRS link, I'll look at it a bit later.
I know they've added it to DX12 with a few tiers.
We may be seeing VRS in Halo Infinite with their DOF style. It's not always blurred; sometimes you can catch it when it looks sharp but the pixels are completely unshaded.
 
I would bet MS has a custom VRS implementation.

https://patents.google.com/patent/US20180047203A1/en

Filed in 2016 and has their lead GPU architect on it amongst others.
Funnily enough, regarding support, MS mentions that Nvidia has it and that upcoming Intel chips will.
No mention of AMD; don't know why for certain.
https://devblogs.microsoft.com/directx/variable-rate-shading-a-scalpel-in-a-world-of-sledgehammers/

Have to admit I thought it would definitely be in Navi (RDNA 1). RTRT not so much, as that could be demanding and so may only make sense in top-end products (RDNA 2).

VRS makes sense throughout the stack for multiple reasons; I thought it would at least be Tier 1.
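For a concrete reference point, here's a minimal sketch of what per-draw (Tier 1) VRS looks like on the D3D12 side, going off the blog post linked above. The surrounding function and variable names are placeholders, not from any real engine:

```cpp
// Minimal sketch: per-draw (Tier 1) variable rate shading in D3D12.
// Only the D3D12 calls are real; everything else is illustrative.
#include <d3d12.h>

void DrawBlurredBackground(ID3D12Device* device, ID3D12GraphicsCommandList5* cmdList)
{
    // Query which VRS tier the hardware/driver exposes.
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 options6 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6, &options6, sizeof(options6));

    if (options6.VariableShadingRateTier >= D3D12_VARIABLE_SHADING_RATE_TIER_1)
    {
        // Shade the following draws at a coarse 2x2 rate -- e.g. geometry that
        // will be blurred by DOF anyway, as speculated elsewhere in the thread.
        cmdList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr);
    }

    // ... issue draw calls for the coarse-shaded geometry here ...

    // Restore full-rate shading for everything that must stay sharp.
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr);
}
```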

We may be seeing VRS in Halo Infinite with their DOF style. It's not always blurred; sometimes you can catch it when it looks sharp but the pixels are completely unshaded.
I have no reason to believe it was running on Scarlett dev kits; they would have said.

Probably PC (as we know it's getting a PC release) with an Nvidia GPU that does support VRS, if what you're seeing looks like it.
 
I meant to state that current SW doesn’t implement all possible RT scenarios because performance would be crap. AMD focusing on a subset of use cases thus isn’t far-fetched.
That seems a reasonable assumption. There are two possible interpretations of that though, one is that they are targeting very specific use cases for their hardware augmentation (more bang for less silicon, less generality) the other that the performance basically dictates what can be done, but the hardware design is more open to software innovation.
The first could be very viable in a console; the second is more desirable to drive general algorithmic development.
(I’m hoping for the second.)
 
Yeah, that's logical. They often make sure to say it that way (like... most powerful console we've ever made) since they can't possibly predict or know about the competition. But they do have a history of competitive PR statements as soon as the specs are in the open. It's good PR only if it's true, and a couple of times they went too far and it backfired.
I think that statement is fine with regard to the announcement and the next console generation. They don't know when PS5 is releasing, or what hardware it has, so their statement is valid based on the information already known. Like a Kickstarter promising the first ever wireless earbuds with built-in touchscreens, only for an unknown Chinese company to release the same concept months before they ship... that doesn't make the statement a lie, but perhaps premature. Or an 800m runner who says he's going to win, and all expectations are that he's going to win, but then some new runner overtakes him at the end.

As it's not an important point (such as claiming RT exclusivity, "we are the only machine with realtime raytracing," would have been a serious marketing fault), it's not worth MS going over those words with a fine-toothed comb and fretting about wording.
 
That seems a reasonable assumption. There are two possible interpretations of that though, one is that they are targeting very specific use cases for their hardware augmentation (more bang for less silicon, less generality) the other that the performance basically dictates what can be done, but the hardware design is more open to software innovation.
The first could be very viable in a console...
I think that's nonsense. How do you develop partial raytracing hardware? Raytracing casts rays. It's very simple. You then use that data in certain ways, but it's all about rays and intersects. Either your hardware accelerates ray tests, or it doesn't, but I can't see how you can accelerate ray tests for shadows but not lights, or AO but not secondary illumination.
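To make that concrete, here's a toy sketch of the kind of ray test being talked about (a ray-vs-AABB slab test, the basic operation of BVH traversal); nothing in it knows or cares whether the ray is a shadow, AO, reflection or GI ray:

```cpp
// Toy slab test: ray vs axis-aligned bounding box, the core operation of BVH
// traversal. The same test serves shadow, AO, reflection and GI rays alike,
// which is the point above: hardware either accelerates ray tests or it doesn't.
#include <algorithm>

struct Vec3 { float x, y, z; };

bool RayHitsBox(const Vec3& origin, const Vec3& invDir,  // invDir = 1/direction per axis
                const Vec3& boxMin, const Vec3& boxMax)
{
    float t0 = (boxMin.x - origin.x) * invDir.x;
    float t1 = (boxMax.x - origin.x) * invDir.x;
    float tEnter = std::min(t0, t1);
    float tExit  = std::max(t0, t1);

    t0 = (boxMin.y - origin.y) * invDir.y;
    t1 = (boxMax.y - origin.y) * invDir.y;
    tEnter = std::max(tEnter, std::min(t0, t1));
    tExit  = std::min(tExit,  std::max(t0, t1));

    t0 = (boxMin.z - origin.z) * invDir.z;
    t1 = (boxMax.z - origin.z) * invDir.z;
    tEnter = std::max(tEnter, std::min(t0, t1));
    tExit  = std::min(tExit,  std::max(t0, t1));

    // Hit if the interval is non-empty and the box isn't entirely behind the ray.
    return tExit >= std::max(tEnter, 0.0f);
}
```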

The obvious interpretation, which one has to have a particular mindset to miss, is realtime hardware RT sits between RT acceleration on shaders (compute) for content generation which is 'slow' but functional, and full-on scene rendering in the cloud with supercomputers, with HWRT providing raytracing functionality but not enough to generate a whole scene in realtime, only enough to embellish through a hybrid renderer. Any slight ambiguity of the slide comes from the fact it's a slide and not a technical document, presented with someone explaining what it all means.
 
I think that's nonsense. How do you develop partial raytracing hardware? Raytracing casts rays. It's very simple. You then use that data in certain ways, but it's all about rays and intersects. Either your hardware accelerates ray tests, or it doesn't, but I can't see how you can accelerate ray tests for shadows but not lights, or AO but not secondary illumination.

The obvious interpretation, which one has to have a particular mindset to miss, is realtime hardware RT sits between RT acceleration on shaders (compute) for content generation which is 'slow' but functional, and full-on scene rendering in the cloud with supercomputers, with HWRT providing raytracing functionality but not enough to generate a whole scene in realtime, only enough to embellish through a hybrid renderer. Any slight ambiguity of the slide comes from the fact it's a slide and not a technical document, presented with someone explaining what it all means.
Can't you limit the number of ray bounces in order to make it much less demanding and possible for the whole scene?
 
Ray tracing is inherently recursive. Each iteration has the same requirements as the first with a full scene search. Thinking about it, the only way I can see to speed up the process would be to reduce the areas tested against. You could either only have coherent directional rays, enabling reflection and lights and that's it, or you could have secondary rays perhaps, maybe, work within a limited cone of possible bounces, reducing the memory footprint and search area of bounced rays. Dunno. That still sounds like bollocks to me.
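A toy sketch of that point, assuming a generic recursive tracer (all names here are placeholders): capping bounce depth only bounds how many full scene searches you do per path, it doesn't make any individual search cheaper.

```cpp
// Toy recursive tracer with a bounce cap. Every level of recursion still calls
// TraceNearestHit(), i.e. a full acceleration-structure search: limiting depth
// caps how many searches run per path, not the cost of each one.
struct Ray   { float origin[3]; float dir[3]; };
struct Hit   { bool valid = false; Ray bounce{}; };
struct Color { float r = 0, g = 0, b = 0; };

// Placeholder for the full scene/BVH search -- the expensive part.
Hit TraceNearestHit(const Ray& /*ray*/) { return Hit{}; }

// Placeholder local shading that folds in the bounced contribution.
Color Shade(const Hit& /*hit*/, const Color& indirect) { return indirect; }

Color TracePath(const Ray& ray, int depth, int maxDepth)
{
    if (depth >= maxDepth)
        return Color{};                          // bounce limit reached

    Hit hit = TraceNearestHit(ray);              // same full search at every depth
    if (!hit.valid)
        return Color{};                          // ray escaped the scene

    Color indirect = TracePath(hit.bounce, depth + 1, maxDepth);
    return Shade(hit, indirect);
}
```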

Possibly, only direct rays could be accelerated notably faster. You'd lose everything devs want from raytracing though.
 
Due to the nature of ray-tracing, could the console APU (or APUs in general) be more suited to it than normal GPUs if they can provide a larger coherent memory (cache) between the CPU and GPU and then share the workload?

Use the CPU like an RTX core for the intersect tests and the data-structure traversal, and then send the results to the GPU. Being on-chip, the data path should be extremely fast, and we've seen Ryzen CPUs come with incredibly large L3s.
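Purely as an illustration of that split (nothing here describes real console hardware or a real API), the handoff could be as simple as the CPU writing compact hit records into shared memory for the GPU to shade:

```cpp
// Toy sketch of a CPU-traversal / GPU-shading split: CPU cores do BVH traversal
// and intersection tests (ideally with the acceleration structure hot in that
// large L3), then append compact hit records to a buffer in shared coherent
// memory for the GPU to shade. Illustrative only.
#include <cstddef>
#include <cstdint>
#include <vector>

struct Ray { float origin[3]; float dir[3]; };

struct HitRecord {
    uint32_t pixel;        // which pixel/ray the hit belongs to
    uint32_t primitiveId;  // what was hit
    float    t;            // hit distance along the ray
};

// CPU side: traversal and intersection, emitting hits the GPU can consume.
void TraceOnCpu(const Ray* rays, std::size_t count, std::vector<HitRecord>& sharedHitBuffer)
{
    for (std::size_t i = 0; i < count; ++i) {
        // ... BVH traversal + intersection tests for rays[i] ...
        // ... on a hit: sharedHitBuffer.push_back({pixel, primitiveId, t}); ...
    }
}

// GPU side (conceptually): read sharedHitBuffer and run the shading,
// which is the part GPUs are already good at.
```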
 
Use the CPU like an RTX core for the intersect tests and the data-structure traversal, and then send the results to the GPU. Being on-chip, the data path should be extremely fast, and we've seen Ryzen CPUs come with incredibly large L3s.
I remember an AMD employee talking about CPU + GPU RT (a while before Lisa Su's initial comment on RT).
But all I can find now is this German page https://www.tweakpc.de/news/42883/amd-raytracing-mit-kombination-aus-cpu-und-gpu/ referring to the Japanese source https://www.4gamer.net/games/300/G030061/20180921078/
 
So wait, AMD's long-term plan for RT is to utilize the cloud? :???:
 
So wait, AMD's long-term plan for RT is to utilize the cloud? :???:
No. That's not a roadmap. It's showing that the AMD solution scales with workloads, from non-realtime local resolves through compute, to realtime RT aspects enabled through hardware acceleration, to full-on RT image construction using the immense processing power of the cloud. The cloud is also a concept that benefits studios, because instead of each having their own render-farm, conceptually they could all share a larger, faster resource. I don't know how realistic or cost effective that is though. How much down-time is there, if any, on a modern render-farm? Anyway, this image is just showing three tiers of AMD raytracing support, from AMD's raytracing engine on compute supported on GCN, through their new RDNA hardware for local acceleration, to super-compute clusters, all through the AMD raytracing solution(s).

For a moment, I wondered where HW RT fits in with content creation. It may not be realtime, but accelerating any rendering with hardware is still valuable. Is the hardware not up to that task? But of course that's covered in the first tier, where RDNA is shown as an option for raytracing content creation in non-realtime (unrealtime?) tasks, where we can expect RT acceleration.
 
The way I understand "select lighting effects" is, for example, that the RT cores would only raytrace the scene based on distance, so the closest objects to the player's viewport get raytraced while the distant scene is skipped and handled by the rasteriser. Or a priority raytracer, where you only raytrace the most prominent and visible components of a scene while leaving the rest to the rasteriser. Something like this.
https://www.researchgate.net/public...e_distribution_in_spaces_with_venetian_blinds
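A toy sketch of the first idea, assuming a simple per-pixel depth cutoff; the threshold and function names are made up for illustration:

```cpp
// Toy per-pixel choice between the raytraced and rasterised result based on
// distance from the camera: the closest stuff gets rays, the rest falls back
// to the conventional raster path. Threshold and names are illustrative only.
struct Color { float r, g, b; };

Color ShadeWithRayTracing(int, int) { return {1.0f, 1.0f, 1.0f}; }   // placeholder: expensive RT path
Color ShadeWithRasterizer(int, int) { return {0.5f, 0.5f, 0.5f}; }   // placeholder: raster fallback

Color ShadePixel(int x, int y, float viewDepth)
{
    constexpr float kRayTraceCutoff = 30.0f;   // metres; arbitrary cutoff for the sketch

    // Near the camera, where the effect is most visible, spend the rays;
    // beyond the cutoff use the rasterised result instead.
    if (viewDepth < kRayTraceCutoff)
        return ShadeWithRayTracing(x, y);
    return ShadeWithRasterizer(x, y);
}
```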
 