Baseless Next Generation Rumors with no Technical Merits [post E3 2019, pre GDC 2020] [XBSX, PS5]

Sony's and NVIDIA's RT efforts predate DXR.
How to prove this? You can't patent DXR; it's just an extension of DX12. We have no information on how long DXR has been scoped, developed, or discussed with IHVs.
AMD lagging so far behind NVIDIA (RT-wise) doesn't support this theory.
I think some would argue that base DX12 is based heavily around the way GCN works.

AMD lagging so far behind NVIDIA (RT-wise) doesn't support this theory. Sony wouldn't use their own custom solution if AMD had a capable solution ready. The way I see it, either AMD didn't have something at that time (which their being so late supports), or their solution didn't satisfy Sony's requirements.
Why would something be good enough for MS but not good enough for Sony? If AMD's solution is only capable of a 'select' set of things, then that doesn't represent DXR support. By default, even the fallback compute-based DXR layer is able to support anything developers want to do with respect to ray tracing. I can't see how they would have gimped hardware that would need to restrict what can be accomplished in the API.

AMD must either provide fully functional support for DXR or not; there should not be any in-between states. The API is literally: cast a ray, find the intersected triangles, perform shading, then denoise. All the different tier levels just provide flexible methods for developers to approach problems with DXR.
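Purely as a conceptual illustration of that cast/intersect/shade/denoise flow (this is not the DXR API, which lives behind D3D12 and runs on the GPU; the scene, the shading, and the "denoise" pass below are all toy stand-ins), a minimal CPU-side sketch might look like:

```cpp
// Toy CPU sketch of "cast rays, intersect triangles, shade, denoise".
// Not DXR -- just the conceptual flow described above.
#include <cmath>
#include <cstddef>
#include <optional>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Ray { Vec3 origin, dir; };
struct Tri { Vec3 v0, v1, v2; };

// "Cast a ray, find the intersected triangle": Moeller-Trumbore test.
static std::optional<float> intersect(const Ray& r, const Tri& t) {
    Vec3 e1 = sub(t.v1, t.v0), e2 = sub(t.v2, t.v0);
    Vec3 p = cross(r.dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return std::nullopt;
    float inv = 1.0f / det;
    Vec3 s = sub(r.origin, t.v0);
    float u = dot(s, p) * inv;
    if (u < 0.f || u > 1.f) return std::nullopt;
    Vec3 q = cross(s, e1);
    float v = dot(r.dir, q) * inv;
    if (v < 0.f || u + v > 1.f) return std::nullopt;
    float dist = dot(e2, q) * inv;
    if (dist <= 0.f) return std::nullopt;
    return dist;
}

int main() {
    std::vector<Tri> scene = {{{0,0,5},{1,0,5},{0,1,5}}};
    std::vector<Ray> rays  = {{{0.2f,0.2f,0},{0,0,1}}, {{2,2,0},{0,0,1}}};

    // 1) Cast rays and 2) shade the hits (trivial distance-based shade).
    std::vector<float> image;
    for (const Ray& r : rays) {
        float shade = 0.f;
        for (const Tri& t : scene)
            if (auto hit = intersect(r, t)) shade = 1.0f / (1.0f + *hit);
        image.push_back(shade);
    }

    // 3) "Denoise": a box filter standing in for a real denoiser.
    std::vector<float> denoised(image.size());
    for (std::size_t i = 0; i < image.size(); ++i) {
        float sum = image[i]; int n = 1;
        if (i > 0)                { sum += image[i-1]; ++n; }
        if (i + 1 < image.size()) { sum += image[i+1]; ++n; }
        denoised[i] = sum / n;
    }
    return 0;
}
```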
 
Reflections and hard shadows would be possible if you have a robust mapping between the detailed and simplified geometry, for example via UVs. Accurate normals from the detailed geometry could be used, so only position would cause some error. (Though all this is much harder than it sounds; I doubt devs would be happy.)

Shading evaluation could happen on the GPU, so the process could be: while generating the G-buffer, also send packets of rays to the RT unit, get packets of ray hits back after some time (maybe nicely sorted by some ID like material), update/shade the frame buffer using those results, and eventually recurse. Requires pending tasks while waiting on tracing results?
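A hedged sketch of how that submit/consume loop might be organized, with the hypothetical RT unit modelled as an asynchronous service. RayPacket, HitPacket and trace_on_rt_unit are invented names; nothing here reflects a real PS5 interface:

```cpp
#include <cstdint>
#include <future>
#include <utility>
#include <vector>

struct Ray { float origin[3], dir[3]; };
struct Hit { std::uint32_t materialId; float t, u, v; };
using RayPacket = std::vector<Ray>;
using HitPacket = std::vector<Hit>;

// Stand-in for the dedicated RT block: consumes a ray packet and
// returns the hits, ideally grouped by something like material ID
// so the later shading pass stays coherent.
HitPacket trace_on_rt_unit(RayPacket rays) {
    HitPacket hits(rays.size());
    // ... BVH traversal + triangle intersection would happen here ...
    return hits;
}

void render_frame() {
    std::vector<std::future<HitPacket>> pending;

    // 1) While the G-buffer is being generated, submit ray packets.
    for (int tile = 0; tile < 4; ++tile) {
        RayPacket packet(256);  // rays generated for this tile
        pending.push_back(std::async(std::launch::async,
                                     trace_on_rt_unit, std::move(packet)));
        // ... rasterize the corresponding G-buffer tile meanwhile ...
    }

    // 2) Collect hit packets as they complete and shade with them,
    //    possibly recursing by submitting new packets for bounce rays.
    for (auto& f : pending) {
        HitPacket hits = f.get();  // the "pending task" waiting on results
        (void)hits;                // ... update / shade the frame buffer ...
    }
}

int main() { render_frame(); }
```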

Maybe the RT unit is also programmable, so it could launch a shadow ray automatically. But without access to textures and materials, no form of importance sampling would be possible. And programmable shaders + texture units... this would almost end up as a second GPU.
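To make the importance-sampling point concrete: picking a good bounce direction for a glossy surface needs material data (here, roughness for standard GGX half-vector sampling), which a sealed-off unit without texture/material access couldn't read; only a generic cosine-weighted fallback would remain. The Material struct below is a placeholder:

```cpp
#include <cmath>
#include <random>

struct Dir      { float theta, phi; };  // direction in the local hemisphere
struct Material { float roughness; };   // would normally come from textures

// Material-agnostic fallback: cosine-weighted hemisphere sample.
Dir cosine_sample(float u1, float u2) {
    return { std::acos(std::sqrt(u1)), 6.2831853f * u2 };
}

// GGX half-vector sampling (Walter et al. 2007): the sampled angle
// depends directly on roughness -- exactly the material data a unit
// without texture access would not have.
Dir ggx_sample(const Material& m, float u1, float u2) {
    float a = m.roughness * m.roughness;  // common roughness-squared remapping
    float theta = std::atan(a * std::sqrt(u1 / (1.0f - u1)));
    return { theta, 6.2831853f * u2 };
}

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> U(0.0f, 1.0f);
    Material gloss{0.1f};
    Dir fallback = cosine_sample(U(rng), U(rng));      // no material needed
    Dir guided   = ggx_sample(gloss, U(rng), U(rng));  // needs material access
    (void)fallback; (void)guided;
    return 0;
}
```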

What could this look like for real? Is tight coupling to the GPU totally necessary?
Maybe Sony started on this before the big progress in denoising? What were their expectations back then?

Yes.
A secondary "GPU" massively dedicated to Sony's tailored RT... the doubt is how it could be physically connected to the rest of the APU. In the same chip? Maybe. Some sort of Cell 2?

Sony, by having some pieces of strange, exotic HW, also builds up a barrier against Google & MS services that may land on PlayStation forced by antitrust authorities. They just cannot run on the HW so easily: totally in Sony's interest...

Sony should build a PS5 = PC = Series X = Stadia ???

Of course NOT !!!
Many people have a PC and a PS4 now just to play something that is NOT on the PC.
 
I feel like XsX's design would be overall more versatile, wouldn't it? In RT situations the extra 3 TF would help out to equalize against PS5's dedicated RT chip, while in non-RT cases it blazes past the PS5 as the latter's RT chip becomes useless? Unless that chip could also help out in rasterization to equalize against a 3 TF higher console? Is this the narrative we're at today?
 

With Jim Ryan saying Sony wants the fastest transition to a next-gen console ever, and considering you get ~30% more chips per wafer by going from 390mm² to 320mm², how likely is it that Sony went for a gigantic die instead of the narrower and faster one that has been leaked throughout the year?

If the 7nm wafer situation at TSMC is tight, even though AMD will be their biggest customer in 2020, then it seems reasonable to assume Sony wanted to maximize wafer allocation and not find themselves short on stock.
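For a rough feel for that chips-per-wafer arithmetic, here's a back-of-the-envelope sketch using the common gross-die estimate for a 300 mm wafer; it ignores defect yield, scribe lines and edge exclusion (the simple estimate comes out closer to +25%, in the same ballpark), so treat the numbers as rough only:

```cpp
#include <cmath>
#include <cstdio>

// Common gross-die approximation for a round wafer:
//   dies ~= pi*d^2 / (4*S) - pi*d / sqrt(2*S)
// where d is the wafer diameter and S the die area.
double gross_dies(double wafer_diameter_mm, double die_area_mm2) {
    const double pi = 3.14159265358979;
    const double d = wafer_diameter_mm, s = die_area_mm2;
    return pi * d * d / (4.0 * s) - pi * d / std::sqrt(2.0 * s);
}

int main() {
    double big   = gross_dies(300.0, 390.0);  // roughly ~147 candidates
    double small = gross_dies(300.0, 320.0);  // roughly ~184 candidates
    std::printf("390mm2: %.0f dies, 320mm2: %.0f dies (+%.0f%%)\n",
                big, small, (small / big - 1.0) * 100.0);
    return 0;
}
```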
 

With Jim Ryan saying Sony wants the fastest transition to a next-gen console ever, and considering you get ~30% more chips per wafer by going from 390mm² to 320mm², how likely is it that Sony went for a gigantic die instead of the narrower and faster one that has been leaked throughout the year?

If the 7nm wafer situation at TSMC is tight, even though AMD will be their biggest customer in 2020, then it seems reasonable to assume Sony wanted to maximize wafer allocation and not find themselves short on stock.

Depends on how the yields are at the obscene clocks that were in the GitHub leaks. You could get more dies and still end up with fewer or a similar number of usable SoCs.
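A toy illustration of that point, combining the gross-die count with a simple Poisson defect-yield model and a clock-bin rate; the defect density and bin percentages are invented for illustration, not leaked data:

```cpp
#include <cmath>
#include <cstdio>

// Simple Poisson defect-yield model: Y = exp(-D0 * A),
// with D0 in defects/cm^2 and A the die area in cm^2.
double defect_yield(double die_area_cm2, double defects_per_cm2) {
    return std::exp(-defects_per_cm2 * die_area_cm2);
}

int main() {
    const double d0 = 0.1;  // assumed defects per cm^2 (made up)

    // Bigger die at modest clocks: fewer gross dies, but assume 90%
    // of the good ones hit the target frequency.
    double usable_big   = 147.0 * defect_yield(3.90, d0) * 0.90;

    // Smaller die at "obscene" clocks: more gross dies, but assume
    // only 65% of the good ones bin at that frequency.
    double usable_small = 184.0 * defect_yield(3.20, d0) * 0.65;

    std::printf("usable per wafer -- big: %.0f, small: %.0f\n",
                usable_big, usable_small);
    return 0;
}
```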
 
I feel like XsX's design would be overall more versatile, wouldn't it? In RT situations the extra 3 TF would help out to equalize against PS5's dedicated RT chip, while in non-RT cases it blazes past the PS5 as the latter's RT chip becomes useless? Unless that chip could also help out in rasterization to equalize against a 3 TF higher console? Is this the narrative we're at today?
We can construct a scenario where a dedicated chip can save the GPU some TF, which I tried to do in my last post.

Notice the GPU would be completely free from RT work if it just submits rays and gets hits back. If those hits are sorted by material, texture, even UVs, the hit shading would be fully efficient. With DXR, the hits from diffuse reflection are totally scattered, resulting in worst-case data and code divergence. Also, AMD's TMU patent hints at the CUs being busy handling the outer traversal loop (unlike NVIDIA).
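A small sketch of that sorting idea, assuming the hits come back tagged with a material ID (placeholder types): grouping them before shading turns the scattered diffuse-bounce order into long coherent runs per material.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Hit { std::uint32_t materialId; float t, u, v; };

void shade_hits(std::vector<Hit>& hits) {
    // Group hits by material so consecutive hits bind the same shader
    // and textures -- the opposite of the scattered order that diffuse
    // reflection rays naturally produce.
    std::sort(hits.begin(), hits.end(),
              [](const Hit& a, const Hit& b) { return a.materialId < b.materialId; });

    for (std::size_t i = 0; i < hits.size(); ) {
        std::uint32_t mat = hits[i].materialId;
        std::size_t end = i;
        while (end < hits.size() && hits[end].materialId == mat) ++end;
        // ... bind material 'mat' once, then shade hits[i..end) as a batch ...
        i = end;
    }
}

int main() {
    std::vector<Hit> hits = {{3, 1.f, 0, 0}, {1, 2.f, 0, 0}, {3, 0.5f, 0, 0}};
    shade_hits(hits);  // shading now sees material 1 once, then 3 twice
    return 0;
}
```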

Additionally, Sony's close-to-the-metal API should allow reusing the BVH and RT for physics collision detection, not just audio. Current DXR prevents physics in practice because the BVH is black-boxed and no box query is possible for the broad phase.
But this is independent of dedicated yes or no, and MS could do this too if they want, at least on console.
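To illustrate the kind of box query DXR doesn't expose, here's a sketch of an AABB query over an application-visible BVH (a flat-array layout chosen just for the example, not any console's actual format), collecting overlapping leaves for a physics broad phase:

```cpp
#include <cstdint>
#include <vector>

struct AABB {
    float min[3], max[3];
    bool overlaps(const AABB& o) const {
        for (int i = 0; i < 3; ++i)
            if (max[i] < o.min[i] || o.max[i] < min[i]) return false;
        return true;
    }
};

struct BVHNode {
    AABB bounds;
    std::int32_t left, right;  // child node indices, -1 if none
    std::int32_t primitive;    // valid only for leaves, else -1
};

// Collect all primitives whose leaf bounds overlap the query box --
// the broad-phase query that a black-boxed BVH cannot answer.
void box_query(const std::vector<BVHNode>& nodes, std::int32_t node,
               const AABB& query, std::vector<std::int32_t>& out) {
    if (node < 0 || !nodes[node].bounds.overlaps(query)) return;
    if (nodes[node].left < 0 && nodes[node].right < 0) {
        out.push_back(nodes[node].primitive);
        return;
    }
    box_query(nodes, nodes[node].left, query, out);
    box_query(nodes, nodes[node].right, query, out);
}

int main() {
    std::vector<BVHNode> bvh = {
        { {{0,0,0},{10,10,10}},  1,  2, -1 },  // root
        { {{0,0,0},{4,4,4}},    -1, -1,  7 },  // leaf holding primitive 7
        { {{6,6,6},{10,10,10}}, -1, -1,  9 },  // leaf holding primitive 9
    };
    std::vector<std::int32_t> overlaps;
    box_query(bvh, 0, AABB{{3,3,3},{5,5,5}}, overlaps);  // finds primitive 7
    return 0;
}
```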

But I agree a chiplet approach (or worse, a dedicated chip) would cause extra worries, mostly about latency. Adding an RT block close to the GPU seems more efficient.

Very curious about it...
 
AquariusZi on Renoir (4000 series just announced). This is from 2019/06/14

https://www.ptt.cc/bbs/PC_Shopping/M.1560496597.A.CC8.html



From today's AnandTech

https://www.anandtech.com/show/1532...oming-q1?utm_source=twitter&utm_medium=social



Now obviously, he predicted a bigger GPU and 4 cores, and what we got is a smaller GPU (11 > 8 CUs) and more cores, but this tells you (along with his Navi 10 and Vega leaks) that his info is on point.

Oberon, constant revisions, "one size smaller than Arden" (50mm²)... I would give him the benefit of the doubt. It fits perfectly well with what the good intern from AMD has leaked to us.
To add to this one, here is AquariusZi's answer to the question of how big Navi 10 and Navi 14 are (from January):

https://www.ptt.cc/bbs/PC_Shopping/M.1547568660.A.88A.html

In fact, not too big... I'll mention it here; although I can't say I'm 100% sure, it shouldn't be too far off: Navi 10's die size is about 250~260mm², and Navi 14 is about 160mm², give or take.

Actual size

Navi 10 - 251mm2
Navi 14 - 148mm2
 
It's Microsoft that started to talk about it a year before launch, and now he's lamenting that people are discussing the info that you provided?
It's actually an out-of-context tweet. It started after someone asked him about actual specs at CES and he told the guy to wait for the official reveal, to which he got called an "asshole" lol
 
But I agree a chiplet approach (or worse, a dedicated chip) would cause extra worries, mostly about latency. Adding an RT block close to the GPU seems more efficient.

Very curious about it...

Me too.
Latency is the big dilemma in what seems to be Sony's approach to RT on PS5.
 
How to prove this? You can't patent DXR; it's just an extension of DX12. We have no information on how long DXR has been scoped, developed, or discussed with IHVs.

For NVIDIA's case, we infer this from the fact that AMD was late to the party; you don't see an IHV falling behind in a major DX revision like that unless they weren't ready for it, especially given that a console is already using AMD hardware with ray tracing. If AMD was ready for DXR, they would have released it alongside Navi. We also infer this from NVIDIA's rapid deployment of RT, not only at the API level (DXR, CUDA, Vulkan, fallback layer) or the demo level (RT demos running exclusively on NVIDIA hardware), but also through rapid game deployment and integration, which requires early drivers and early API libraries.

DXR was quite weird in its introduction; it came out of the blue with no previous preparations or alpha stages, unlike DirectML, VRS, Mesh Shading, or even DX12 itself, all of which were released officially many months after their early introduction and developer-awareness campaigns.

For Sony, we infer this if they were indeed using a custom RT solution, as it wouldn't make sense for them to use another solution if AMD had a working one already.

I think some would argue that base DX12 is based heavily around the way GCN works.
Yeah, and Vulkan too. Both have some roots in Mantle.
Why would something be good enough for MS but not good enough for Sony?
Maybe Sony wanted a certain performance level not available in current AMD solutions, or maybe AMD's solution wasn't even available at that time.
 
How to prove this? You can't patent DXR; it's just an extension of DX12. We have no information on how long DXR has been scoped, developed, or discussed with IHVs.

I think some would argue that base DX12 is based heavily around the way GCN works.


Why would something be good enough for MS but not good enough for Sony? If AMD's solution is only capable of a 'select' set of things, then that doesn't represent DXR support. By default, even the fallback compute-based DXR layer is able to support anything developers want to do with respect to ray tracing. I can't see how they would have gimped hardware that would need to restrict what can be accomplished in the API.

AMD must either provide fully functional support for DXR or not; there should not be any in-between states. The API is literally: cast a ray, find the intersected triangles, perform shading, then denoise. All the different tier levels just provide flexible methods for developers to approach problems with DXR.
Wouldn't it cause some inefficiencies on the Xbox Series using a Navi GPU?
 
For NVIDIA's case, we infer this from the fact that AMD was late to the party; you don't see an IHV falling behind in a major DX revision like that unless they weren't ready for it, especially given that a console is already using AMD hardware with ray tracing. If AMD was ready for DXR, they would have released it alongside Navi. We also infer this from NVIDIA's rapid deployment of RT, not only at the API level (DXR, CUDA, Vulkan, fallback layer) or the demo level (RT demos running exclusively on NVIDIA hardware), but also through rapid game deployment and integration, which requires early drivers and early API libraries.

DXR was quite weird in its introduction; it came out of the blue with no previous preparations or alpha stages, unlike DirectML, VRS, Mesh Shading, or even DX12 itself, all of which were released officially many months after their early introduction and developer-awareness campaigns.

For Sony, we infer this if they were indeed using a custom RT solution, as it wouldn't make sense for them to use another solution if AMD had a working one already.
It may depend on priorities. Unfortunately we are not privy to what is happening there. I agree that AMD being behind on the feature set is telling of something; at the very least it tells us how intertwined NVIDIA may be with DirectX, at least from a feature-set perspective. I'm not necessarily sure that the correlation of being behind means they weren't being communicated with from the start.

Communication can be near instant, for instance this forum discussion happening globally across different time zones and geographical locations. But how long it takes one team to build a product versus another team elsewhere building another product is a different matter. Without knowledge of their goals, the resources available, funding, etc., a straight line cannot be drawn. It's quite possible that because NVIDIA only focuses on GPUs while AMD must divide its resources, we see a difference in time to market.

Yeah, and Vulkan too. Both have some roots in Mantle.
Indeed, I forgot about this as well.

Maybe Sony wanted a certain performance level not available in current AMD solutions, or maybe AMD's solution wasn't even available at that time.
If MS feels that AMD's RT feature is sufficient, then for this to go forward it would have to imply that it's cheaper to go with a different third-party solution, or to create their own solution, than to go with AMD, or lastly that it's a completely different method of approaching RT. Which I see as a challenging sell to build just for a console, unless the plan is to build a product and license the RT tech out into a market with established leaders.
 
Wouldn't it cause some inefficiencies on the Xbox Series using a Navi GPU?
No, it's just the API. The API is an abstraction; drivers are responsible for performance. Although the architecture could have an impact as well, GCN -> Navi shouldn't be as big a deal as comparing GCN to Maxwell, for instance.
 