Next Generation Hardware Speculation with a Technical Spin [post E3 2019, pre GDC 2020] [XBSX, PS5]

I don't know how it's possible to read less into "dedicated ray tracing cores" than having cores... which are dedicated... to ray tracing.

Maybe it's the 14+4 thing again. :runaway:



Where the devs can allocate tasks however they see fit with the GPU resources.
 
I have a feeling there are two different hardware solutions for how RT is implemented in the next-generation systems. I think Scarlett's RT solution will be more of an AMD engineering effort, but with a certain amount of bespoke features or instruction sets (e.g., DirectX feature sets) that Microsoft wants implemented directly in hardware. I think Sony's RT solution will be either in-house tech or something from an unnamed/unannounced third-party partnership.
 
Maybe it's the 14+4 thing again. :runaway:



Where the devs can allocate tasks however they see fit with the GPU resources.
If having RT as part of each compute unit makes them only a little bigger, it would be the clear choice (hard to say; we can see Nvidia's RT being a big addition, but it's not deeply integrated either). But imagine if it made them twice as big. I could see a reason to have only a few CUs with the capability, or to have RT separate as almost an ASIC-style function. Or closer to the CPU, since BVH traversal is a better fit for the CPU's memory access patterns than the GPU's.

Also, a separate-unit design could make it simple to pull in IP from the companies that spent decades trying to build ray tracing ASICs.
 
I have a feeling there are two different hardware solutions for how RT is implemented in the next-generation systems. I think Scarlett's RT solution will be more of an AMD engineering effort, but with a certain amount of bespoke features or instruction sets (e.g., DirectX feature sets) that Microsoft wants implemented directly in hardware. I think Sony's RT solution will be either in-house tech or something from an unnamed/unannounced third-party partnership.
We know AMD is cooking up a hardware-based RT solution for RDNA 2. Designing GPU features (hardware + software) is expensive. Sony and Microsoft are most likely going to use the exact same hardware solution: the cheaper one designed by AMD.

They'd better use that money for something else, like more memory, more CUs (so most probably better RT), or better cooling for higher clocks (again, better RT, if RT is embedded in the CUs as the AMD texture patent alleges).
 
Do dedicated RT cores mean they will have to balance the die area between RT cores and normal compute cores? I wonder how much risk they decided to take. It needed a lot of silicon on Nvidia's RTX parts.
From the difference between TU106 and TU116, it seemingly doesn't need a lot of silicon.
The TU106 has roughly 50% more functional units than the TU116 almost everywhere: 50% more shader cores, 50% more TMUs, then 33% more ROPs and memory channels.

The TU116 has 6.6B transistors and 284mm^2 die size. TU106 is 10.8B transistors and 445mm^2.
A TU116 scaled by 150% would be 9.9B transistors and 426mm^2.

Considering the TU116 also lacks the AI cores, the RT cores are really not taking up a lot of space.
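
Making the arithmetic explicit (using the figures above, and keeping in mind that scaling a whole die linearly by unit count is only a rough approximation, since I/O and other uncore don't scale the same way):

```latex
\begin{align*}
6.6\,\text{B} \times 1.5 &= 9.9\,\text{B transistors} & 284\,\text{mm}^2 \times 1.5 &= 426\,\text{mm}^2\\
10.8\,\text{B} - 9.9\,\text{B} &= 0.9\,\text{B} \;(\approx 8\%) & 445\,\text{mm}^2 - 426\,\text{mm}^2 &= 19\,\text{mm}^2 \;(\approx 4\%)
\end{align*}
```

So on this estimate the RT and tensor hardware together account for around 0.9B transistors, a single-digit percentage of the TU106 die.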



I was hoping AMD would find a way to integrate the RT circuitry into the CUs, without having to make special cores just for RT. I guess that wasn't possible, or would otherwise have performed worse.
The patents are pointing to the RT hardware being embedded into the TMUs.
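
For those who haven't read it, the AMD filing usually cited here (the texture-processor-based ray tracing acceleration patent) describes a hybrid scheme: the shader hands a ray plus a BVH node pointer to the texture unit, the texture unit performs the fixed-function intersection test, and the shader keeps ownership of the traversal loop. A loose sketch of that division of labor, with all names and types invented for illustration (this is not code from the patent):

```cpp
// Loose sketch of the hybrid traversal loop the AMD patent describes:
// fixed-function intersection in the texture unit, traversal control in
// the shader. All names and types here are invented for illustration.
struct Ray     { float origin[3]; float dir[3]; float t_max; };
struct HitInfo { float t; unsigned prim_id; bool hit; };

// Placeholder standing in for the fixed-function TMU path: in hardware
// this would test the ray against one BVH node (box or triangle) and
// hand back any hit plus the child nodes the shader should visit next.
static HitInfo tmu_intersect_node(const Ray& /*r*/, unsigned /*node_addr*/,
                                  unsigned /*out_children*/[4], int& n_children)
{
    n_children = 0;                         // stub: real hardware fills these in
    return HitInfo{0.0f, 0u, false};
}

static HitInfo trace(const Ray& r, unsigned bvh_root)
{
    unsigned stack[64];                     // traversal stack lives in the shader
    int sp = 0;
    stack[sp++] = bvh_root;
    HitInfo closest{r.t_max, 0u, false};

    while (sp > 0) {
        unsigned node = stack[--sp];
        unsigned children[4];
        int n = 0;
        HitInfo h = tmu_intersect_node(r, node, children, n); // fixed-function step
        if (h.hit && h.t < closest.t)
            closest = h;                    // leaf hit: keep the nearest
        for (int i = 0; i < n; ++i)
            stack[sp++] = children[i];      // shader decides what to traverse next
    }
    return closest;
}
```

The appeal of the split is that the divergent, latency-tolerant part (traversal) stays programmable, while only the math-dense intersection test is frozen into fixed-function hardware.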
 
I have a feeling there are two different hardware solutions for how RT is implemented in the next-generation systems. I think Scarlett's RT solution will be more of an AMD engineering effort, but with a certain amount of bespoke features or instruction sets (e.g., DirectX feature sets) that Microsoft wants implemented directly in hardware. I think Sony's RT solution will be either in-house tech or something from an unnamed/unannounced third-party partnership.

I would actually expect the opposite is more likely, because I don't think Sony is in a position, nor willing, to do any complex design work since the PS3 disaster. The PS4/Pro were pretty straightforward AMD designs with a few tweaks. I see MS as far more willing to do custom design work, as their past consoles since the 360 have shown.

The Sony of the 80s/90s, with its broad engineering power and ambition, doesn't really exist anymore, at least in the areas I care about like audio/TV.
 
The patents are pointing to the RT hardware being embedded into the TMUs.
I suppose if there is customization, and moving it closer to the CPU is the rumour, that wouldn't show up in a patent, and for obvious reasons it would not be part of the main dGPU line.

I think Square said they were working on their own RT API that would hook into both DXR and whatever Sony is building for its RT API. That gives weight to the probability that we are looking at similar implementations.
 
I think Square said they were working on their own RT API that would hook into both DXR and whatever Sony is building for its RT API. That gives weight to the probability that we are looking at similar implementations.
Your logic sounds backwards to me.
If they were the same, there would be less reason to separate it out, not more.

Not that I personally believe it either way.
 
It can never be the same from a software perspective, regardless of identical hardware, because DXR will never be on Sony platforms unless Sony opts to license it from Microsoft (assuming Microsoft would even be willing).
 
We know AMD is cooking up a hardware-based RT solution for RDNA 2. Designing GPU features (hardware + software) is expensive. Sony and Microsoft are most likely going to use the exact same hardware solution: the cheaper one designed by AMD.

They'd better use that money for something else, like more memory, more CUs (so most probably better RT), or better cooling for higher clocks (again, better RT, if RT is embedded in the CUs as the AMD texture patent alleges).

I don't believe at the moment that both are using the same Navi solution, or the same iteration of it. I think Sony's Navi architecture may be the earliest version of the architecture, while Microsoft will have the more feature-robust one with RT logic. I think Sony had its version locked last year, only updating clocks (if possible) up to this point.

So, my tinfoil hat is leaning more towards in-house technology, or a partnership with a vendor other than AMD.

I would actually expect the opposite is more likely, because I don't think Sony is in a position, nor willing, to do any complex design work since the PS3 disaster. The PS4/Pro were pretty straightforward AMD designs with a few tweaks. I see MS as far more willing to do custom design work, as their past consoles since the 360 have shown.

The Sony of the 80s/90s, with its broad engineering power and ambition, doesn't really exist anymore, at least in the areas I care about like audio/TV.

Sony has the facilities, engineers, finances, and possibly the know-how to come up with valid designs/tech, even if it requires partnering. Sony isn't a cu** hair short of being bankrupted... maybe a bushel though. j/k :p
 
It can never be the same from a software perspective, regardless of identical hardware, because DXR will never be on Sony platforms unless Sony opts to license it from Microsoft (assuming Microsoft would even be willing).
I agree, but I'm talking from a general engine design point of view, i.e., if it were the same, then you would have less reason to separate it into its own module compared to having it structurally implemented in the same fashion.

Sony wouldn't use DXR, just as it wouldn't use DX12.
Just highlighting that his logic seems backwards to me. Personally, I'm not reading anything more into it apart from them saying they are ready for both systems.
 
I agree, but I'm talking from a general engine design point of view, i.e., if it were the same, then you would have less reason to separate it into its own module compared to having it structurally implemented in the same fashion.

Sony wouldn't use DXR, just as it wouldn't use DX12.
Just highlighting that his logic seems backwards to me. Personally, I'm not reading anything more into it apart from them saying they are ready for both systems.
I guess the assumption is that if the two RT APIs are functionally the same, then the hardware supports the same features/functions. If there weren't enough overlap between the two hardware-wise, we would/should see that show up as massive differences between the Sony and MS API functions; and if that were the case, Square would not waste the effort of creating their own API.

edit:
Last but not least, while the ‘Back Stage’ demo runs on Windows PC, it uses Microsoft’s DXR API. However, the folks at Luminous Productions and Square Enix managed to abstract the functions that perform ray tracing. This was done to ensure compatibility with next-generation consoles and Sony’s next PlayStation in particular since that hardware won’t have DXR compatibility for obvious reasons. As such, the developers will just need to replace the abstraction layer to deliver ray traced graphics for the next PlayStation console (which has been confirmed to feature hardware support for it, just like the next Xbox).
https://wccftech.com/square-enix-sh...minous-engine-prepared-for-next-gen-consoles/
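
For what it's worth, the kind of abstraction the article describes is easy to picture: the renderer codes against one thin interface and each platform supplies a backend. A hypothetical sketch (the interface, names, and types are invented here for illustration; this is not Square Enix's actual code):

```cpp
#include <memory>

struct RayQuery  { float origin[3]; float dir[3]; float t_min, t_max; };
struct RayResult { float t; unsigned instance_id; bool hit; };

// The one interface the renderer is written against.
class IRayTracer {
public:
    virtual ~IRayTracer() = default;
    virtual RayResult trace(const RayQuery& q) = 0;
};

// Windows / next-Xbox backend: would wrap DXR dispatch under the hood.
class DxrBackend final : public IRayTracer {
public:
    RayResult trace(const RayQuery& q) override {
        // ... record and dispatch DXR work here ...
        return RayResult{q.t_max, 0u, false};   // stub result
    }
};

// Hypothetical PlayStation backend: same interface, different API below.
class PsBackend final : public IRayTracer {
public:
    RayResult trace(const RayQuery& q) override {
        // ... call into Sony's RT API here ...
        return RayResult{q.t_max, 0u, false};   // stub result
    }
};

// Platform selection happens once; everything above this line is shared.
std::unique_ptr<IRayTracer> makeRayTracer() {
#if defined(PLATFORM_PS5)
    return std::make_unique<PsBackend>();
#else
    return std::make_unique<DxrBackend>();
#endif
}
```

On a structure like this, "replacing the abstraction layer" really does reduce to swapping one backend, which only works cleanly if the two hardware feature sets overlap enough that the shared interface never has to leak platform specifics. That's the overlap argument made above.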
 
MS have more incentive to design a higher-level API that is mostly compatible between their Windows platform and their console offering.

Sony have no such limitations. They would simply add to GNM whatever is required for ray tracing, based on what the PS5 hardware can do.

Third parties will have trouble developing their cross-platform engines if the underlying algorithmic capabilities are very different between the two platforms. The API itself seems to be less of a problem, as long as the capabilities can be mapped without having to reinvent the wheel.
 
I don't know how it's possible to read less into "dedicated ray tracing cores" than having cores... which are dedicated... to ray tracing.

I read it as: the guy at The Coalition knows Scarlett has hardware ray tracing support, like everybody else does, because it's public info (MS E3). But he is a software developer, not a hardware engineer, and not a Beyond3D nerd, and he is speaking in broad strokes, because his salary will still come in independently of whether some no-lives like us take his exact words to extrapolate whether AMD's implementation of DXR for Scarlett relies on discrete cores for RT, or texture units, or whatever else.
 
I would actually expect the opposite is more likely, because I don't think Sony is in a position, nor willing, to do any complex design work since the PS3 disaster. The PS4/Pro were pretty straightforward AMD designs with a few tweaks. I see MS as far more willing to do custom design work, as their past consoles since the 360 have shown.

The Sony of the 80s/90s, with its broad engineering power and ambition, doesn't really exist anymore, at least in the areas I care about like audio/TV.
Not for GPUs, if we use the historical precedent since 2013. The PS4 and Pro both have in-house, exclusive custom hardware features in their GPUs not present in any other APU or GPU, AFAIK.

The XB1 and XBX GPUs are 100% off-the-shelf AMD hardware. What is custom in them is the number of CUs, cache sizes, memory controller width and such, but nothing really exclusive to MS platforms. And that makes total sense, as they support two platforms: custom GPU hardware wouldn't fit their PC + console software strategy.
 
The biggest mistake MS could make is designing Scarlett with the PC in mind.
The base tech/IP will be the same, so 90%+ will be the same, and that includes the PS5.
The reason I think it would be such a big mistake is that you then compromise your design: it ends up either more expensive or less performant than it needed to be.
The PC will either brute-force it, catch up in time, or work around it.
In the beginning, if it means the PC doesn't get certain games, so be it (let's be honest, the cases due to architecture would be few and far between).
But to limit first- and third-party devs would be a mistake, especially when you know the PS5 wouldn't be making those kinds of compromises.

MS will support DX, toolsets, etc., and that's enough to maintain good cross-platform support.
 
Sony has the facilities, engineers, finances, and possibly the know-how to come up with valid designs/tech, even if it requires partnering. Sony isn't a cu** hair short of being bankrupted... maybe a bushel though. j/k :p

I'm not arguing about their bankruptcy. :) They just take other companies' base technologies now and sell them with some bells & whistles and their label on top. Look at their TVs: OLED panels from LG, and whatever LCD manufacturer. Their proud hi-fi audio division is gone. There was a time when people were curious about what they would manage to engineer each year, and those times have been gone for 15+ years.

Not for GPUs, if we use the historical precedent since 2013. The PS4 and Pro both have in-house, exclusive custom hardware features in their GPUs not present in any other APU or GPU, AFAIK.

The XB1 and XBX GPUs are 100% off-the-shelf AMD hardware. What is custom in them is the number of CUs, cache sizes, memory controller width and such, but nothing really exclusive to MS platforms. And that makes total sense, as they support two platforms: custom GPU hardware wouldn't fit their PC + console software strategy.

The XB1 design is more complex in its changes than the PS4 alone. I'm not talking about power here. I see the PS4 as a straightforward AMD GPU/CPU design with slight tunings; nothing major there. The X1X I'm not exactly sure about. We all know MS talked about their extensive profiling of the architecture to locate bottlenecks, but I'm not sure how that translates into actual design changes beyond buffer + CPU tweaks.

You surely have a point about RT and PC compatibility, and I'm not saying MS will deliver something custom for RT. Just that I consider it "more likely than" Sony, for the reasons mentioned.
 
Not for GPUs, if we use the historical precedent since 2013. The PS4 and Pro both have in-house, exclusive custom hardware features in their GPUs not present in any other APU or GPU, AFAIK.

The XB1 and XBX GPUs are 100% off-the-shelf AMD hardware. What is custom in them is the number of CUs, cache sizes, memory controller width and such, but nothing really exclusive to MS platforms. And that makes total sense, as they support two platforms: custom GPU hardware wouldn't fit their PC + console software strategy.

That's an extremely limited view of what custom means. The entire Kinect/sound space is a custom creation. The changes to the microcontroller that enable ExecuteIndirect with state changes, and the changes to the microcontroller that reduce the CPU load of DX12 function calls: those are exclusive to Xbox platforms and continue to be.

As for changes to cache sizes and memory controllers: even if we ignore customization for customization's sake, we're seeing significantly better performance out of the 1X over the 4Pro, in excess of their power difference. Freeing the SoC of its bottlenecks so it can operate competently at 4K is pretty massive; I think understating that feat is folly in this discussion. Whole architectures are designed around target goals.

In the next-gen console threads we have discussions across many forums about RDNA delivering X% more performance per TF than Polaris.

What MS managed with 33% more TF than their competitor, on the same architecture, with a similar power profile and the same node, should be considered very successful in terms of their customizations, considering we see third-party titles operating at double the resolution more often than that gap alone would predict, with a lot of titles running at 1440p on the 4Pro vs the full 2160p on the X1X. Let's not even get into the cases where the 4Pro operates at 1080p vs 4K on the X1X.

And I don't really care whether the X1X outperforms the 4Pro, it's not really relevant; but it seems fairly reductive to call the X1X just off-the-shelf components while it's outperforming its competitor without needing a feature like rapid packed math, or any of the lot.
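
For reference, the pixel-count ratios behind those resolution comparisons (straight arithmetic, nothing assumed):

```latex
\begin{align*}
\frac{3840 \times 2160}{2560 \times 1440} &= 2.25 && \text{(2160p vs.\ 1440p)}\\
\frac{3840 \times 2160}{1920 \times 1080} &= 4 && \text{(2160p vs.\ 1080p)}
\end{align*}
```

Either gap is well beyond a 33% compute difference, which is the point being made.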
 