Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

Answer me this though (and this goes for you too @j^aws ): let's say the PS5 doesn't have IC, why do you think they wouldn't invest an extra ~22mm2 for 32MB of IC? Perhaps it's too small an amount to make a performance difference that justifies the cost?
IC is just a marketing term for SRAM. We have not been discussing performance or cost, rather whether a hypothetical die can hold x amount of SRAM.

I've done this with the XSX baseline and have said 15-18 sq mm is what is left. I have also shown that with your method, you have around 15 sq mm left to account for before IC.

Yes, we do, because within that 260mm2, half of Navi21's IO is included, which would be 18mm2 if it stays the same size as the 5700's, but it could be more (or less) depending on Navi21's actual IO size.
And again, we don't know the PS5 IO block size to determine whether it needs <18mm2 or >18mm2 (got any guesstimates?)
I suggest you go back to my 1st post and follow the numbers we discussed - we built your hypothetical die with 64 MB IC to be around 333 sq mm. This die does not exist. Removing 43 sq mm for 64MB IC brings the die back to the realms of possibility with 15 sq mm not accounted for.
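
To keep the arithmetic in one place, here's a rough sketch using only the numbers from this exchange (the per-MB density is just the 43 sq mm for 64 MB figure above, nothing here is a measured value):

Code:
# Back-of-envelope using only the numbers quoted in this exchange; the density
# is implied by the "64 MB ~= 43 sq mm" estimate above, not a measured figure.
MM2_PER_MB = 43 / 64                      # ~0.67 sq mm per MB of SRAM

for mb in (16, 32, 64):
    print(f"{mb:2d} MB of extra SRAM as IC ~= {mb * MM2_PER_MB:.1f} sq mm")
# 16 MB ~= 10.8, 32 MB ~= 21.5 (the ~22 sq mm quoted earlier), 64 MB ~= 43.0

# Both methods leave only ~15-18 sq mm unaccounted for before any extra SRAM,
# and that slack also has to cover the remaining uncounted blocks.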

It's unknown; it could be off-die like you said.
And if it is on-die like XSX? Takes more off the remaining 15 sq mm.

I don't know enough about the SRAM to make an estimate, but I'd guess it's a tiny amount, or maybe it's acting as Sony's take on IC?
I could be wrong, but don't the Command Processor, Geometry Processor and ACEs scale with SEs? Wouldn't they be using a beefier version with 4 SEs?
The amount of SRAM in the IO Complex is unknown, and you could consider it Sony's version of IC, but I wouldn't call it IC as that's AMD's marketing name. But you only have 15 sq mm to work with, and that's with no IC, so make a guesstimate range of possibilities, min-max.

Command Processor, Geometry Processor and ACEs serve the entire GPU. We don't have scaling data.

I mean that for the ~30mm2 estimate using the 5700, the L2 is included, meaning it's bigger than just the MC/PHY; true, the layout is different in RDNA2.
I hope you're not getting tired/annoyed with me. I realise how frustrating these types of discussions can get, and I'm probably wrong anyway; it's a loop of mental gymnastics without being 100% certain or having hard facts.
No, not annoyed, but I have alarm bells ringing when things are being repeated. Your hypothetical die stands at 333 sq mm and doesn't exist.

Summary:

My method left around 15-18 sq mm for a few elements not accounted for, and no extra IC.

Your method left around 15 sq mm for a few elements not accounted for, and no extra IC.

Both methods show that 16MB or more of extra SRAM acting as IC is unlikely.
 
This thread is a train-wreck. Statements of faith rather than analysis. Rampant speculation predicated on there being no evidence to contradict their positions, while oblivious to there being no evidence to support them either.

Just... WTF, people.
:rolleyes: This reads like GAF circa 2007.
 
Nonspecific commentary doesn't help anyone; if you need clarification, ask about something specific.
 
Does anyone here know if Fritz has the PS5 SoC in his pipeline at least?
 
I've got so many people on ignore, or just skip past posts, that I was very confused by Dsoup's post until I remembered there are ignored posts up there somewhere. :D
 
Apparently I don't have enough people on ignore! :nope: Occasionally I get halves or thirds of conversations, and I can tell from one side of the conversation that it's nuts. It would be neat if there was also an option to hide posts from people replying to people I've ignored.
 

So many excuses about how strange it is that the Series X isn't noticeably faster, and not one single admission that the PS5 could simply have similar real-world performance, with each of the $500 consoles having strengths and weaknesses that mostly nullify each other. Well, it has more teraflops, so this has to be because Microsoft haven't enabled their secret sauce yet!

This just reads like damage control over all the 3rd-party comparisons we've been seeing this past couple of weeks.

I shouldn't have expected anything different the moment I saw it had been written by Tom Warren, so that part is on me at least.
 
There is a huge amount of damage control going on at the moment with apologists daring to even suggest that the XSX and PS5 are evenly matched or, heaven forbid, that the variable frequency approach and customisation that Sony have done might, just might, actually be paying off. You'd think that they'd done this before...

But it will be years into this generation before the realisation sets in that it isn't just black and white, and the "this one's a bigger number, therefore it must be best" thinking dies down. At the moment it sounds almost embarrassingly desperate from some people.
 
I think there's a lot more fake concern about XBSX's performance than there is real concern. We're talking about extremely small differences on launch games here. Who knows if it's MS being behind on tools or PS5 performing better than expected or whatever but as an XBSX owner I personally feel zero concern about this. The amount of back and forth on this gets annoying. MS on paper has more power and they advertised as such. Do people really blame them for that? They need every sales advantage they can get after what happened with Xbox One, and they can say this without getting sued lol. For all we know the XBSX could still prove to be the better 3P machine in the long run and inside MS they know that will be the case. Personally I don't care if that's the case and games end up looking identical on both consoles. Both PS5 and XBSX owners are paying the same $500 price and getting roughly the same performance, kind of makes sense.
 
Why did you even read that article? :rolleyes:
 
When I see the "tools" argument I immediately cringe... Like the logic is that the PS5 dev tools are so mature they have hit a performance ceiling, while the Xbox tools are so bad they are leaving 30% of the performance on the table. It's like the "Fine Wine" meme, with the difference being that the wine is produced at the same vineyard.
 

You may have missed the video back in January but NXGamer's video called "teraflops are a lie" explained and demonstrated why the raw teraflops number on its own is meaningless.


The performance is basically equal.

I don't get it either. The performance of the two consoles is neck and neck in almost all cases, and the visual quality settings all seem identical as well, barring the obvious bugs like Dirt 5, which Codemasters are fixing. I'm not seeing any controversy (disclosure: big ignore list); gamers all seem happy to be playing games at higher fidelity and higher frame rates.
 
I'm also curious whether these also affect the CPU's caches, as Cerny's presentation only highlighted the GPU caches in the block diagram.

Well like I said before @3dilettante would probably be one of the best people to ask regarding it, they seem to understand a lot about the technical workings on that type of stuff. I remember them bringing up something regarding line flushes, and that in some instances you'd be better off just clearing the cache rather than doing a pinpoint eviction, but I don't recall a lot of specifics mentioned.
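
For anyone wondering what that trade-off looks like, here's a toy cost model (the sizes and per-operation costs are made up, it's only there to show why a bulk flush can win once enough lines are involved):

Code:
# Toy model of "pinpoint eviction vs just flush everything". All costs are
# arbitrary illustrative units, not any real GPU's numbers.
LINE_BYTES  = 64
CACHE_BYTES = 4 * 1024 * 1024             # hypothetical 4 MB cache
TOTAL_LINES = CACHE_BYTES // LINE_BYTES   # 65,536 lines

COST_PER_TARGETED_LINE = 4                # probe + invalidate one specific line
COST_FULL_FLUSH        = TOTAL_LINES * 1  # walk every line once, no lookups

def cheaper_strategy(lines_to_scrub: int) -> str:
    targeted = lines_to_scrub * COST_PER_TARGETED_LINE
    return "targeted scrub" if targeted < COST_FULL_FLUSH else "full flush"

print(cheaper_strategy(1_000))    # targeted scrub
print(cheaper_strategy(50_000))   # full flush (over a quarter of the cache touched)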

Cache is meant to be transparent in hardware, and 'full RDNA2' is just a marketing term. XSX is more a fulfilment of DX12 Ultimate.

I'm not convinced the XSX frontend is to the RDNA2 specification. You can check the Rasteriser Units in the Hot Chips block diagram - there are 4, and each Raster Unit is at the Shader Array level. Compare that to Navi21, where the Raster Units are at the Shader Engine level.

I guess you're referring to the leaks posted by that person on Twitter? I saw those too; they raise a lot worth asking about. But I also saw someone else drawing up a comparison between RDNA 1 and RDNA 2 on the frontend, and a distinction between RDNA 1 and RDNA 1.1. That leak could have been pertaining to 1.1, because in the frontend comparisons they listed against RDNA 2, virtually all of it is the same.

If that's what that particular leak pertains to, then there's not too much difference between RDNA 1.1 and RDNA 2, at least from what I saw. I wish I could find the image that showed what I was talking about, but it could explain the delineation of RDNA 1 and RDNA 2 in that leak; the RDNA 1 could have been referring to 1.1 without that being reflected in what the leaker provided.

Also, check Navi21 Lite (aka the XSX GPU) in the driver leaks: its driver entries show no change to the SIMD waves (CUs) from RDNA1, and the Scan Converter/Packer arrangements are unchanged from RDNA1 as well.

That might be the case, but again, it could be RDNA 1.1, not RDNA 1. 1.1 made a few notable changes over 1.0 and shares much more design-wise with RDNA 2 than 1.0 does. Seeing how MS got rolling with producing (and likely designing) their systems after Sony, I find it a bit hard to believe they have many, if any, 1.0 frontend features in their systems. 1.1, though? Yeah, that is more possible, but even there it'd be essentially the same as 2.0, going off some of the stuff I caught a look at (hopefully I can find the comparison listing again).

FYI, PowerVR existed on PC before the Dreamcast. Smartphones also use/used TBDR. And a handheld console, the PS Vita, also used TBDR with a PowerVR GPU.

Interestingly, Mark Cerny was involved with the Vita, and there are hints that the PS5 has analogous features. I haven't found anything concrete yet pointing to a TBDR architecture.
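
For anyone unfamiliar with what TBDR actually changes, here's a minimal CPU-side sketch of the binning step that characterises tile-based GPUs; purely illustrative, not any particular vendor's implementation:

Code:
# Triangles are sorted into screen-space tiles first; each tile is then shaded
# from fast on-chip memory. This only shows the (conservative) binning step.
from collections import defaultdict

TILE = 32                                      # 32x32-pixel tiles

def bin_triangles(tris, width, height):
    """tris: list of ((x0,y0),(x1,y1),(x2,y2)) in screen space."""
    bins = defaultdict(list)                   # (tile_x, tile_y) -> triangle ids
    for i, tri in enumerate(tris):
        xs = [p[0] for p in tri]
        ys = [p[1] for p in tri]
        # Conservative bound: every tile overlapped by the triangle's bounding box.
        for ty in range(max(0, int(min(ys)) // TILE),
                        min(height - 1, int(max(ys))) // TILE + 1):
            for tx in range(max(0, int(min(xs)) // TILE),
                            min(width - 1, int(max(xs))) // TILE + 1):
                bins[(tx, ty)].append(i)
    return bins

bins = bin_triangles([((5, 5), (60, 10), (20, 70))], 1280, 720)
print(sorted(bins))    # this triangle only ever touches 6 tiles of the 40x23 grid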

Isn't there a patent Mark Cerny filed which covers an extension of foveated rendering with a range of resolution scaling among blocks of pixels, basically what would be their implementation of VRS?

And actually, while we're at it, could that possibly tie into whatever other feature implementation analogous to TBDR Sony happen to use with the PS5? At least as it seems to me, techniques like VRS and foveated rendering are basically evolutions of TBDR anyway (or at least are rooted in its concepts and adapt them in different ways). Maybe I'm wrong tho :oops:
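
Roughly, the shared idea looks like this (the rates and thresholds are arbitrary; this isn't describing any particular patent or API):

Code:
# Pick a coarser shading rate for pixel blocks further from a point of interest
# (the gaze point for foveated rendering, or any importance metric for VRS).
import math

def shading_rate_for_tile(tile_cx, tile_cy, gaze_x, gaze_y):
    """Return (rate_x, rate_y): 1x1 = full rate, 2x2 / 4x4 = coarser shading."""
    d = math.hypot(tile_cx - gaze_x, tile_cy - gaze_y)
    if d < 200:
        return (1, 1)      # foveal region: shade every pixel
    if d < 600:
        return (2, 2)      # mid-periphery: one shade per 2x2 block
    return (4, 4)          # far periphery: one shade per 4x4 block

print(shading_rate_for_tile(960, 540, gaze_x=960, gaze_y=540))   # (1, 1)
print(shading_rate_for_tile(100, 100, gaze_x=960, gaze_y=540))   # (4, 4)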

Perhaps a GDC presentation or a dev slide deck will mention what data flows or processes are targeted by the cache scrubbers. Using virtual memory allocation to buffer new SSD data could avoid many flushes, and oversubscribing memory is part of the process for the Series X's SFS (allocates a wide virtual memory range, which demand traffic then dynamically populates).
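
The reserve-wide/commit-on-demand pattern mentioned there is ordinary virtual memory; a Windows-only sketch of the general idea (this is not DirectStorage or SFS itself, and the sizes are arbitrary):

Code:
# Reserve a large virtual range up front, then commit only the pages that
# demand traffic actually touches. Plain Win32 calls; 64-bit Python assumed.
import ctypes
from ctypes import wintypes

MEM_RESERVE    = 0x00002000
MEM_COMMIT     = 0x00001000
PAGE_NOACCESS  = 0x01
PAGE_READWRITE = 0x04

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.VirtualAlloc.restype = ctypes.c_void_p
kernel32.VirtualAlloc.argtypes = [ctypes.c_void_p, ctypes.c_size_t,
                                  wintypes.DWORD, wintypes.DWORD]

# 1) Reserve address space only; no physical memory is backing it yet.
RESERVE_SIZE = 4 * 1024 * 1024 * 1024      # 4 GiB of virtual address space
base = kernel32.VirtualAlloc(None, RESERVE_SIZE, MEM_RESERVE, PAGE_NOACCESS)

# 2) Later, commit just the region a request actually needs (e.g. one texture
#    tile that sampler feedback flagged), then land the streamed data there.
TILE_SIZE = 64 * 1024                      # 64 KiB "tile"
tile = kernel32.VirtualAlloc(base, TILE_SIZE, MEM_COMMIT, PAGE_READWRITE)
ctypes.memset(tile, 0, TILE_SIZE)          # stand-in for the SSD read landing here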

I hope so; Sony seem more willing to open up and discuss their hardware at GDC presentations these days, which beats having their E3 press conferences drowned out with all of that for the first hour (here's hoping E3 2021 is a thing).

Also, it'd seem like even though the Series devices don't have cache scrubbers, if they can utilise virtual memory as a buffer and track the data coming in that way through to memory via SFS and the other parts of XvA, then that should work just as well. It also fits better with the way MS wants to implement their take on restructured I/O data management: a more hardware-agnostic approach that is nonetheless still just as suitable, as long as the underlying hardware isn't completely out of step or deficient (and that's something only people on very low-spec, outdated PC systems or the like are at risk of facing).

I don't know enough about the SRAM to make an estimate, but I'd guess it's a tiny amount, or maybe it's acting as Sony's take on IC?

I've been thinking the idea from them (I might have interpreted this wrong) is that Sony's I/O setup as a whole is designed in a way where they're applying concepts analogous to the kind of data-management features IC provides, but applied at the hardware level in a different manner than putting a large 128 MB block of cache on the GPU.

So in theory, they're doing similar things (improving effective system memory bandwidth), but doing them differently. Since a PC GPU is "just" a PC GPU, it still has to account for other parts of the system design outside the scope of that GPU card and outside of its control. A console like the PS5 doesn't have that as a factor; every part of the system can be designed explicitly around the others.

Similar goals of improving effective memory bandwidth, similar concepts, but different means of applying them at the hardware level. That's the main reason Sony don't need IC in the way AMD's RDNA2 GPUs need it. The same can be said of Microsoft's systems; the designs as a whole focus on (among other things) improving the flow of data through the system and ensuring better bandwidth, and that's what the hardware/software features of XvA are built around.
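
If it helps, the "effective bandwidth" both approaches are chasing can be reduced to a toy model like this (all figures invented; neither console's real numbers are being claimed):

Code:
# Crude weighted average: traffic that hits a big on-die cache is served at
# cache speed, the rest still goes out to DRAM. Illustration only.
def effective_bandwidth(dram_gbs: float, cache_gbs: float, hit_rate: float) -> float:
    return hit_rate * cache_gbs + (1.0 - hit_rate) * dram_gbs

print(effective_bandwidth(dram_gbs=500, cache_gbs=1500, hit_rate=0.0))   # 500.0
print(effective_bandwidth(dram_gbs=500, cache_gbs=1500, hit_rate=0.5))   # 1000.0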

The only big difference between them is that MS wants to deploy their implementation onto PC and big server markets as well, so it has to account for different hardware specifications, variations, and scalability. But that's why MS are working with companies like AMD and Nvidia to ensure there's standardization. Once DirectStorage is deployed sometime in 2021, they'll likely be expanding that to motherboard manufacturers as well.
 