Current Generation Hardware Speculation with a Technical Spin [post launch 2021] [XBSX, PS5]

Also, how does the proportional space dedicated to the GE compare with the space on Nvidia for their AI deep learning die space? I wonder why Sony chose to dedicate so much space to the GE when they could have used something similar to Nvidia...
The tensor cores are a significant silicon investment. I believe it's around 20% of the die area, IIRC. And the GA102 is much larger than the XSX APU: over 628.4 mm² vs the XSX's 360 mm². It's a whole lot of GPU lol.

I think the Geometry processor is relatively small in comparison. You'd need several more geometry processors than what exists to increase that ratio.
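
A quick back-of-envelope in Python, taking the figures above at face value. The ~20% tensor share is my unconfirmed recollection, so treat it as an assumption:

```python
# Back-of-envelope die-area comparison using the figures quoted above.
# The 20% tensor-core share is an unconfirmed forum recollection, not an
# official NVIDIA number.
GA102_AREA_MM2 = 628.4    # NVIDIA GA102 die area
XSX_APU_AREA_MM2 = 360.0  # Xbox Series X APU die area (approx.)
TENSOR_SHARE = 0.20       # assumed fraction of GA102 spent on tensor cores

tensor_area = GA102_AREA_MM2 * TENSOR_SHARE
print(f"GA102 is {GA102_AREA_MM2 / XSX_APU_AREA_MM2:.2f}x the XSX APU")
print(f"A 20% tensor share would be ~{tensor_area:.0f} mm^2 of silicon,")
print(f"i.e. ~{tensor_area / XSX_APU_AREA_MM2:.0%} of the entire XSX APU")
```

If that 20% recollection were right, the tensor cores alone would be about a third of an entire XSX APU's worth of silicon, which is why I'd call it a significant investment.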
 
In general - what we have here cannot "reverse deny" PS5's performance advantages so far. Meaning if there ARE some major architecture disadvantages in comparison to Series X, there MUST be something to make up for it, or we would live in an impossible timeline..
What's impossible about the current timeline? Nothing impossible about poorly optimized code. Just because the hardware exists doesn't mean the developers have to take advantage of it either.

There are no major architectural disadvantages. I think you're taking things way out of context here.
 
In general - what we have here cannot "reverse deny" PS5's performance advantages so far. Meaning if there ARE some major architecture disadvantages in comparison to Series X, there MUST be something to make up for it, or we would live in an impossible timeline..
Do we have them though? I mean the first game releases weren't that fair a comparison, but in all the latest games PS5 generally runs at a lower resolution but with a slightly better framerate in certain instances (probably due to the lower resolution). The bummer is that we don't have games designed with the next-gen architecture in mind on XSX, so we can't compare them for now.
I personally expect something akin to PS4 Pro vs Xbox One X again.
 
What's impossible about the current timeline? Nothing impossible about poorly optimized code.

I feel like a lot of posters see the "tools" narrative as a convenient excuse, which I can understand just from how it sounds.

What I think a lot of people don't realize is that something being wrong with the code is actually the most likely thing any time hardware "underperforms". Usually we'd assume some kind of failure at the developer if a game came out and just ran like garbage compared to what we expect... if the same developer is delivering great performance on other platforms though, tools are a reasonable guess even without the leaks or the other prior assumptions we have.
 
I feel like a lot of posters see the "tools" narrative as a convenient excuse, which I can understand just from how it sounds.

What I think a lot of people don't realize is that something being wrong with the code is actually the most likely thing any time hardware "underperforms". Usually we'd assume some kind of failure at the developer if a game came out and just ran like garbage compared to what we expect... if the same developer is delivering great performance on other platforms though, tools are a reasonable guess even without the leaks or the other prior assumptions we have.
Indeed, there's no bottom to shitty code. There is a ceiling to performance, but there really is no depth to a lack of optimization.

Compare something like Doom Eternal, which now runs fully on a Switch but can scale sky-high with graphics and insane frame rates. It's one of the few titles even capable of this feat. Many other titles have tried and failed.

I think people just assume talent, time, and budget are all equally able to 'maximize' the hardware. That's just not true. PS5 is the lead platform. Sony can also put in caveats like tossing away older APIs to force developers to use newer techniques like primitive shaders only; this could lead to an increased performance advantage from the get-go, strictly on geometry, for games that mainly use the traditional render pipeline.

On the MS side of things, the GDK doesn't force anything, and the nature of code once, deploy on all platforms pretty much ensures you're using the older APIs and pipelines. The assumption that all platforms must be leveraging the same parts or features of the GPU is not validated unless a developer tells you otherwise. Therefore you're getting poorer performance from the get-go. Couple that with COVID and we're in no man's land.

Nothing is impossible. The only thing we know for sure is what the hardware has. But software, and how it maximizes the hardware, is still going to be a big part of the larger equation. PS3 forced developers to learn an entirely new way of programming. I don't think Sony has any qualms about forcing developers to use the newer hardware correctly. They want to showcase what the new console is capable of.

Frankly, if I were MS I would have done the same thing: ban the older front-end pipeline completely and force Mesh Shaders only on the Series S/X consoles. There would be some major drawbacks, however, lol. But we'd get with the program much faster.
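
To illustrate the code once, deploy on all platforms point, here's a hypothetical sketch. The platform names and capability sets are made up for illustration, not any real SDK's API; the point is just why a shared codebase gravitates to the lowest common denominator unless some platform bans the old path:

```python
# Hypothetical sketch of multiplatform render-path selection; platform
# names and capability sets below are illustrative, not any real SDK.
LEGACY_VERTEX_PATH = "vertex/geometry pipeline"
MESH_SHADER_PATH = "mesh/primitive shader pipeline"

# Which front-end paths each target is assumed to accept in this scenario.
SUPPORTED_PATHS = {
    "XBO": {LEGACY_VERTEX_PATH},
    "PC":  {LEGACY_VERTEX_PATH, MESH_SHADER_PATH},  # depends on GPU/driver
    "XSX": {LEGACY_VERTEX_PATH, MESH_SHADER_PATH},
    "PS5": {MESH_SHADER_PATH},  # the hypothetical "new front end only" mandate
}

def pick_shared_path(targets):
    """Return one render path every target supports, or None."""
    common = set.intersection(*(SUPPORTED_PATHS[t] for t in targets))
    return common.pop() if common else None

# Code-once across XBO/PC/XSX quietly lands on the legacy pipeline:
print(pick_shared_path(["XBO", "PC", "XSX"]))  # -> vertex/geometry pipeline
# Add a platform that bans the old path and no single shared path exists,
# forcing a second render path to be written:
print(pick_shared_path(["XBO", "PS5"]))        # -> None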
 
Yeah, cross-gen games on Xbox One and 1X are toast lol.
Yea, they would have to write a second render path for XBO, 1X, and PC. But they would have to do that anyway for PS4 and 4 Pro. But I doubt Sony would care; this is a new generation. The $10 price increase is going towards something.
 
Yea, they would have to write a second render path for XBO, 1X, and PC. But they would have to do that anyway for PS4 and 4 Pro. But I doubt Sony would care; this is a new generation. The $10 price increase is going towards something.

Sony's tools are much better and their SDK was out for much longer, so it works for them. Plus, who is going to say no to the overwhelming market leader?
 
Sony's tools are much better and their SDK was out for much longer, so it works for them. Plus, who is going to say no to the overwhelming market leader?
I'm just spitballing that Sony could do it. Unfortunately I don't know if they did. But yes, with more time, and if primitive shaders / the NGG pipeline were ready to go for launch, it remains an open possibility that Sony could force developers to use the NGG front-end path. MS tools were unfortunately too far behind, according to documentation, to do the same thing. NGG was not ready in June, and Mesh Shader support was still having some issues in the first official 'launch cleared' GDK.
 
The tensor cores are a significant silicon investment. I believe it's around 20% of the die area, IIRC. And the GA102 is much larger than the XSX APU: over 628.4 mm² vs the XSX's 360 mm². It's a whole lot of GPU lol.

I think the Geometry processor is relatively small in comparison. You'd need several more geometry processors than what exists to increase that ratio.

Any idea where that 20% is reported? All I could find was this image:

[Image: annotated die shot of the NVIDIA GA102 GPU (GeForce RTX 3090 / RTX 3080)]

But to my eyes several of the colour-delineated sections contain the same detail behind the colouring. As far as I can see, the green, yellow, orange, and grey sections in the top 2/3 of the image all have the same hardware underneath, just mirrored.

It's an interesting choice for Sony not to use similar AI tech and to go with this GE instead. It's placed literally at the centre of the die space, if that image is in any way correct.

Looks like it might be the same guy annotating both dies.
 
Any idea where that 20% is reported? All I could find was this image:
It's a number I recall being posted here, but now I think it may have been just a random shot in the dark.
Trying to reverse engineer the number: each SM has 64 CUDA cores, 4 tensor cores, and 1 RT core. So 1/16 of the cores are dedicated to tensor operations.

Unfortunately, I don't know how many transistors are in a single tensor core.

To use that diagram you'd need to separate the tensor cores out from the SM.

The best I could Google, from Tom's Hardware, for now:
That's a lot of tensor core enhancements, which should tell you where Nvidia's focus is for GA100. Deep learning and supercomputing workloads just got a massive boost in performance. There are some other architectural updates with GA100 as well, which we'll briefly cover here. The SM transistor count has increased by 50-60%, and all of those transistors had to go somewhere.
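
Taking the per-SM counts above at face value (they match the post's figures for a GA100-style SM; treat them as assumptions), a quick sanity check on that ratio:

```python
# Sanity-checking the per-SM core ratio quoted above. The counts are the
# post's assumed figures (GA100-style SM), not verified GA102 numbers.
cuda_cores_per_sm = 64
tensor_cores_per_sm = 4
rt_cores_per_sm = 1

print(tensor_cores_per_sm / cuda_cores_per_sm)           # 0.0625 = 1/16
total = cuda_cores_per_sm + tensor_cores_per_sm + rt_cores_per_sm
print(f"{tensor_cores_per_sm / total:.1%} of all cores")  # ~5.8%
```

Of course a "core" isn't a unit of area; a tensor core is a much bigger block than a single CUDA lane, so core counts alone can't recover the die-area share. That's why the transistor/area estimates are more useful.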
 
It's a number I recall being posted here, but now I think it may have been just a random shot in the dark.
Trying to reverse engineer the number: each SM has 64 CUDA cores, 4 tensor cores, and 1 RT core. So 1/16 of the cores are dedicated to tensor operations.

Unfortunately, I don't know how many transistors are in a single tensor core.

To use that diagram you'd need to separate the tensor cores out from the SM.

The best I could Google, from Tom's Hardware, for now:
That's a lot of tensor core enhancements, which should tell you where Nvidia's focus is for GA100. Deep learning and supercomputing workloads just got a massive boost in performance. There are some other architectural updates with GA100 as well, which we'll briefly cover here. The SM transistor count has increased by 50-60%, and all of those transistors had to go somewhere.
https://www.techpowerup.com/forums/...area-by-22-compared-to-non-rtx-turing.254452/
After analyzing full, high-res images of NVIDIA's TU106 and TU116 chips, reddit user @Qesa did some analysis on the TPC structure of NVIDIA's Turing chips, and arrived at the conclusion that the difference between NVIDIA's RTX-capable TU106 compared to their RTX-stripped TU116 amounts to a mere 1.95 mm² of additional logic per TPC - a 22% area increase. Of these, 1.25 mm² are reserved for the Tensor logic (which accelerates both DLSS and de-noising on ray-traced workloads), while only 0.7 mm² are being used for the RT cores.
 
That link seems to suggest the difference is ~10% of total die space between the non-RT/AI chip and the one with it.

That's well within the same *proportional* range (the Nvidia chips are much larger) as the geometry engine in the PS5. Assuming that the guy correctly analysed the chip.

Looks like the IO hasn't been fully determined in the currently available annotations. Presumably part of the grey coloured area at the bottom of the image.
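
Scaling those per-TPC figures up to a full die roughly reproduces that proportion. A back-of-envelope sketch, where the 18 TPCs and ~445 mm² die are my assumed TU106 figures, not numbers from the link:

```python
# Back-of-envelope: scale @Qesa's per-TPC estimates up to a full TU106.
# The TPC count (18) and die area (~445 mm^2) are assumed TU106 figures.
TPCS = 18
DIE_MM2 = 445.0
TENSOR_PER_TPC_MM2 = 1.25  # from the analysis quoted above
RT_PER_TPC_MM2 = 0.70      # from the analysis quoted above

tensor_total = TPCS * TENSOR_PER_TPC_MM2
rt_total = TPCS * RT_PER_TPC_MM2
combined = tensor_total + rt_total
print(f"tensor: {tensor_total:.1f} mm^2 ({tensor_total / DIE_MM2:.1%} of die)")
print(f"RT:     {rt_total:.1f} mm^2 ({rt_total / DIE_MM2:.1%} of die)")
print(f"both:   {combined:.1f} mm^2 ({combined / DIE_MM2:.1%} of die)")
```

That lands at roughly 8% of the die for tensor plus RT combined, in the same ballpark as the ~10% eyeballed above.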
 
I'm actually pretty surprised we don't have a low-cost custom silicon solution for scaling with DLSS-like quality at this point. I was pretty excited about the prospect when they announced that the Series S had a custom hardware scaler, but I think it's a pretty feature-light, off-the-shelf solution.
 
Typically, with semi-custom solutions you can pick and choose which hardware blocks you'd like on your chips, or remove them.
In this particular case, however, AMD doesn't have a working fixed-function tensor processing design, IIRC, so it's not something they can just bring in.

Building a tensor processing unit would be fully custom silicon as opposed to semi-custom. And when these consoles were being designed many years ago, I'm not sure tensor processing or DLSS was even a discussion. Not sure if there was significant time.

Looking back, it seems like what they have was likely the best they could do, short of a semi-custom 2070 for the consoles, which might have cost way too much to pair with a Zen 2.
 
Free upgrade for Metro Exodus coming in the form of Metro Exodus Enhanced Edition, featuring full 4K/60fps support on the new current-gen consoles, as well as a new RT pipeline.

https://www.4a-games.com.mt/4a-dna/...-for-playstation-5-and-xbox-series-xs-upgrade

The PC version includes DLSS 2.x support as well as advanced RT features. The PC version is also free for owners of the original. It will not be a patch, but rather an "Entitlement", meaning a completely separate download, because an RT-compatible GPU is required as a minimum.

PS5 version will support Quick Loading as well as DualSense support.

 
I can't edit my post because it was moved to this thread from another, but seeing as it was... I'll also add that there's support for optimized loading on Series X/S as well, and support for improved latency reduction on the new Xbox controllers. Seems like they're doing a great job supporting the unique features of each device, and that deserves recognition... especially since previous owners are getting these improvements for free.

I bought the game on EGS when it first released, but I think I'm going to support them and get the game on Steam as well. It's only $20CAD at the moment. :smile2:
 
Free upgrade for Metro Exodus coming in the form of Metro Exodus Enhanced Edition, featuring full 4K/60fps support on the new current-gen consoles, as well as a new RT pipeline.

https://www.4a-games.com.mt/4a-dna/...-for-playstation-5-and-xbox-series-xs-upgrade

The PC version includes DLSS 2.x support as well as advanced RT features. The PC version is also free for owners of the original. It will not be a patch, but rather an "Entitlement", meaning a completely separate download, because an RT-compatible GPU is required as a minimum.

PS5 version will support Quick Loading as well as DualSense support.
Big list of updates.
Must've been an oversight regarding 4K textures on XSS.
Hopefully we'll not have to wait 8 months to see it. I'm looking forward to seeing what they accomplish.
 