How to prove this? You can't patent DXR, it's just an extension of DX12. We have no information how long DXR has been scoped/developed/in discussion with IHVs.
Sony and NVIDIA's effort for RT predates DXR.
I think some would argue that base DX12 is based heavily around the way GCN works.
AMD lagging so far behind NVIDIA (RT wise) doesn't support this theory.
Why would something be good enough for MS but not good enough for Sony? If AMD's solution is only a 'select' set of things it's capable of, then that doesn't represent DXR support. By default, even fallback compute-based DXR is able to support anything the developers want to do with respect to ray tracing. I can't see how they would have gimped hardware that would need to restrict what can be accomplished in the API.
AMD lagging so far behind NVIDIA (RT wise) doesn't support this theory, and Sony wouldn't use their own custom solution if AMD had a capable one ready. The way I see it, either AMD didn't have something at that time (which is supported by them being so late), or their solution didn't satisfy Sony's requirements.
Reflections and hard shadows would be possible if you have a robust mapping between detailed and simplified geometry, for example via UVs. Accurate normals from the detailed geometry could be used, so only the position would carry some error. (Though all this is much harder than it sounds; I doubt devs would be happy.)
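To make that UV-mapping idea concrete, here is a minimal C++ sketch. Every type and helper here (ProxyHit, ProxyTriangle, SampleDetailedNormal) is hypothetical, just to show how a hit on a simplified proxy mesh could be resolved back to the detailed asset's normals through a shared UV parameterization:

```cpp
// Hypothetical sketch: rays are traced against a simplified proxy mesh, and the
// shared UV parameterization is used to fetch normals from the detailed asset.
// All names here are made up for illustration, not any real API.
#include <cstdint>

struct Float2 { float u, v; };
struct Float3 { float x, y, z; };

struct ProxyHit {                 // what the RT unit would hand back
    uint32_t triangleId;
    float    baryU, baryV;        // barycentrics on the proxy triangle
};

struct ProxyTriangle {            // proxy vertices keep the detailed mesh's UVs
    Float2 uv[3];
};

// Stand-in for sampling the detailed mesh's normal map / vertex normals at a UV.
Float3 SampleDetailedNormal(Float2 /*uv*/) { return { 0.0f, 0.0f, 1.0f }; }

Float2 InterpolateUV(const ProxyTriangle& t, float bu, float bv)
{
    const float bw = 1.0f - bu - bv;
    return { t.uv[0].u * bw + t.uv[1].u * bu + t.uv[2].u * bv,
             t.uv[0].v * bw + t.uv[1].v * bu + t.uv[2].v * bv };
}

// Resolve a reflection/shadow hit found on the proxy: the position comes from
// the proxy (small error), the normal comes from the detailed asset via UV.
Float3 ResolveHitNormal(const ProxyTriangle& tri, const ProxyHit& hit)
{
    return SampleDetailedNormal(InterpolateUV(tri, hit.baryU, hit.baryV));
}
```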
Shading evaluation could happen on the GPU, so the process could be: while generating the G-buffer, also send packets of rays to the RT unit; get packets of ray hits back after some time (maybe nicely sorted by some ID, like material); update/shade the frame buffer using those results; eventually recurse. This would require keeping tasks pending while waiting on tracing results?
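A rough sketch of that packet flow, purely to make the control flow concrete. The "RT unit" is faked with an in-process queue, and all names are hypothetical, not any actual hardware interface:

```cpp
// Submit ray packets while the G-buffer is produced, later consume hit packets
// (ideally sorted by material ID) to shade; secondary rays could be resubmitted
// for recursion. The RT unit is a stand-in queue so the sketch actually runs.
#include <cstdint>
#include <queue>
#include <vector>

struct Ray { float origin[3], dir[3]; uint32_t pixel; };
struct Hit { uint32_t pixel, materialId; float t; };

static std::queue<std::vector<Hit>> g_rtUnitOutput;   // stand-in for the RT unit

// A real unit would traverse its BVH asynchronously and return hits much later.
void SubmitToRTUnit(const std::vector<Ray>& packet)
{
    std::vector<Hit> hits;
    for (const Ray& r : packet)
        hits.push_back({ r.pixel, /*materialId=*/0u, /*t=*/1.0f });  // fake hit
    g_rtUnitOutput.push(std::move(hits));
}

bool TryReceiveFromRTUnit(std::vector<Hit>& out)
{
    if (g_rtUnitOutput.empty()) return false;
    out = std::move(g_rtUnitOutput.front());
    g_rtUnitOutput.pop();
    return true;
}

void ShadeHits(const std::vector<Hit>&) { /* update the frame buffer here */ }

int main()
{
    // Producer: emit primary ray packets while the G-buffer is being written.
    for (uint32_t tile = 0; tile < 4; ++tile)
        SubmitToRTUnit({ { {0, 0, 0}, {0, 0, 1}, tile } });

    // Consumer: shade whatever has come back; spawning secondary rays and
    // resubmitting them (the recursion step) is omitted for brevity.
    std::vector<Hit> hits;
    while (TryReceiveFromRTUnit(hits))
        ShadeHits(hits);
}
```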
Maybe the RT unit is also programmable, so it could launch a shadow ray automatically. But without access to textures and materials, no form of importance sampling would be possible. And with programmable shaders + texture units... this would almost end up as a second GPU.
What could this look like in practice? Is tight coupling to the GPU totally necessary?
Maybe Sony started on this before the big progress in denoising? What were their expectations back then?
With Jim Ryan saying Sony wants the fastest transition to a next-gen console ever, and considering you get ~30% more chips per wafer by going from 390mm² down to 320mm², how likely is it that Sony went for a gigantic die instead of a narrower and faster one, as has been leaked throughout the year?
If the 7nm wafer situation at TSMC is tight, even with AMD set to be their biggest customer in 2020, then it's reasonable to assume Sony wanted to maximize its wafer allocation and not find itself tight on stock.
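As a quick sanity check of the "~30% more chips per wafer" figure above, here is the common dies-per-wafer approximation for a 300mm wafer (gross die candidates only; defect-density yield, which favours the smaller die even more, is ignored):

```cpp
// Quick check of the chips-per-wafer claim using the standard dies-per-wafer
// approximation (gross candidates on a 300mm wafer, no defect yield applied).
#include <cmath>
#include <cstdio>

double DiesPerWafer(double dieAreaMm2, double waferDiameterMm = 300.0)
{
    const double kPi = 3.14159265358979;
    const double r   = waferDiameterMm / 2.0;
    // Usable wafer area divided by die area, minus the loss along the edge.
    return kPi * r * r / dieAreaMm2
         - kPi * waferDiameterMm / std::sqrt(2.0 * dieAreaMm2);
}

int main()
{
    const double big   = DiesPerWafer(390.0);   // ~147 candidates
    const double small = DiesPerWafer(320.0);   // ~184 candidates
    std::printf("390mm2: %.0f  320mm2: %.0f  gain: %.0f%%\n",
                big, small, (small / big - 1.0) * 100.0);
    // Roughly a 24-25% gain before yield; factoring in defect density (which
    // hits a 390mm2 die harder) pushes the advantage toward the ~30% quoted.
}
```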
All depends on what the projection was back in 2017/2018 when these were designed.
Depends on how the yields are at the obscene clocks that were in the GitHub leaks. You could get more dies and still end up with fewer or a similar number of usable SoCs.
We can construct a scenario where a dedicated chip can save the GPU some TF, which I tried to do in my last post.
I feel like XsX's design would be overall more versatile, wouldn't it? In RT situations the extra 3 TF would help to equalize against PS5's dedicated RT chip, while in non-RT cases it blazes past the PS5 as the latter's RT chip becomes useless? Unless that chip could also help out with rasterization to equalize against a console with 3 TF more? Is this the narrative we're at today?
To add to this one, here is AquariusZi's answer to a question about how big Navi 10 and Navi 14 are (from January).
AquariusZi on Renoir (the 4000 series just announced). This is from 2019/06/14:
https://www.ptt.cc/bbs/PC_Shopping/M.1560496597.A.CC8.html
From today's Anandtech:
https://www.anandtech.com/show/1532...oming-q1?utm_source=twitter&utm_medium=social
Now obviously, he predicted a bigger GPU and 4 cores, and what we got is a smaller GPU (11 > 8 CUs) and more cores, but this tells you (along with his Navi 10 and Vega leaks) that his info is on point.
Oberon, constant revisions, "one size smaller than Arden" (50mm²)... I would give him the benefit of the doubt. It fits perfectly well with what the good intern from AMD has leaked to us.
Actually, it's not too fat... Well, I'll mention it here: although I can't say I'm 100% sure, it shouldn't be too far off. NV10's die size is about 250~260mm², and [Navi] 14's is about 160mm², give or take.
It's Microsoft that started to talk about it a year before launch, and now he's lamenting that people are discussing the info that you provided?
It's actually an out-of-context tweet. It started after someone asked him about actual specs at CES and he told the guy to wait for the official reveal, to which he got called an "Asshole" lol.
It's Microsoft that started to talk about it a year before launch, and now he's lamenting that people are discussing the info that you provided?
But I agree a chiplet approach (or, worse, a dedicated chip) would cause extra worries, mostly about latency. Adding an RT block close to the GPU seems more efficient.
Very curious about it...
How to prove this? You can't patent DXR, it's just an extension of DX12. We have no information how long DXR has been scoped/developed/in discussion with IHVs.
Yeah, and Vulkan too. Both have some roots in Mantle.
I think some would argue that base DX12 is based heavily around the way GCN works.
Maybe Sony wanted a certain performance level not available in current AMD solutions, or maybe AMD's solution wasn't even available at that time.
Why would something be good enough for MS but not good enough for Sony?
Wouldn't it cause some inefficiencies on Xbox Series using a Navi GPU?
How to prove this? You can't patent DXR, it's just an extension of DX12. We have no information how long DXR has been scoped/developed/in discussion with IHVs.
I think some would argue that base DX12 is based heavily around the way GCN works.
Why would something be good enough for MS but not good enough for Sony? If AMD's solution is only a 'select' set of things it's capable of, then that doesn't represent DXR support. By default, even fallback compute-based DXR is able to support anything the developers want to do with respect to ray tracing. I can't see how they would have gimped hardware that would need to restrict what can be accomplished in the API.
AMD must either provide fully functional support for DXR or not. There should not be any in-between states. The API is literally: cast a ray, get the intersected triangles, perform shading, followed by denoising. All the different tier levels are just there to support flexible methods for developers to approach problems with DXR.
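To illustrate why even a compute-based fallback can cover whatever developers want: the core "cast a ray against triangles" step is plain arithmetic that any shader core can run. A minimal Moller-Trumbore ray/triangle test, written here as host-side C++ purely as a sketch (a fallback layer would run the equivalent in compute shaders):

```cpp
// Minimal ray/triangle intersection (Moller-Trumbore), showing that the core
// of ray casting is ordinary arithmetic with no fixed-function requirement.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true and the hit distance t if the ray (orig, dir) crosses the triangle.
bool RayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float& t)
{
    const Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    const Vec3 p  = cross(dir, e2);
    const float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return false;        // ray parallel to triangle
    const float inv = 1.0f / det;
    const Vec3 s = sub(orig, v0);
    const float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    const Vec3 q = cross(s, e1);
    const float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;
    return t > 0.0f;
}

int main()
{
    float t = 0.0f;
    const bool hit = RayTriangle({0,0,-1}, {0,0,1}, {-1,-1,0}, {1,-1,0}, {0,1,0}, t);
    std::printf("hit=%d t=%.2f\n", hit, t);          // expect hit=1 t=1.00
}
```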
It may depend on priorities. Unfortunately we are not privy to what is happening there. I agree that AMD being behind on the feature set is telling of something; at the very least it tells you how intertwined Nvidia may be with DirectX, at least from a feature-set perspective. I'm not necessarily sure that the correlation of being behind means they weren't being communicated with from the start.
For NVIDIA's case, we infer this through the fact that AMD was late to the party. You don't see an IHV falling behind in a major DX revision like that unless they weren't ready for it, especially given that a console is already using AMD hardware with ray tracing. If AMD had been ready for DXR, they would have released it alongside Navi. We also infer this through NVIDIA's rapid deployment of RT, not only at the API level (DXR, CUDA, Vulkan, fallback layer) or the demo level (RT demos running exclusively on NVIDIA hardware), but also through rapid game deployment and integration, which requires early drivers and early API libraries.
DXR was quite weird in its introduction; it came out of the blue with no prior preparation or alpha stages, unlike DirectML, VRS, Mesh Shading, or even DX12 itself, all of which were released officially only after many months of early introduction and developer-awareness campaigns.
For Sony, we infer this if they were indeed using a custom RT solution, as it wouldn't make sense for them to use another solution if AMD had a working one already.
Indeed, I forgot about this as well.
Yeah, and Vulkan too. Both have some roots in Mantle.
If MS feels that AMD's RT feature is sufficient, then for this to go forward it would have to imply that it's cheaper to go with a different 3P solution, or to create their own solution, than to go with AMD's, or lastly that it's a completely different method of approaching RT. Which I see as a challenging sell to build just for a console, unless the plan is to build a product and license the RT tech out into a market with established leaders.
Maybe Sony wanted a certain performance level not available in current AMD solutions, or maybe AMD's solution wasn't even available at that time.
No, it's just the API. The API is an abstraction; drivers are responsible for performance. Although the architecture could have an impact as well, GCN -> Navi shouldn't be as big a deal as comparing GCN to Maxwell, for instance.
Wouldn't it cause some inefficiencies on Xbox Series using a Navi GPU?