AMD: Navi Speculation, Rumours and Discussion [2019-2020]

So 128 ROPs and a 512-bit memory bus to go along with it? Or is 256-bit enough?

I guess if you think that AMD have rejiggered the ROPs, they can also do 384-bit with 4SEs?
 
The 5600XT already has 64 ROPs and a 192-bit memory bus, so it is clear ROPs and bus were already decoupled in RDNA1. IIRC, both the ROPs and the memory controller access the L2 cache directly (but I'd have to look at the block diagram and I am quite lazy today). The question is that IIRC there was an odd configuration in the driver with only 8(?) partitions, and historically AMD's RBE has been an 8-block part. So they would have had to enlarge each block to fit 128 ROPs.
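A quick way to see the decoupling is to line the configs up; a minimal sketch, with the publicly listed specs quoted from memory, so treat them as illustrative rather than authoritative:

```python
# Compare ROP count against memory bus width across RDNA1 cards.
# Specs quoted from memory; illustrative, not authoritative.
cards = {
    "RX 5700 XT (Navi 10)": {"rops": 64, "bus_bits": 256},
    "RX 5600 XT (Navi 10)": {"rops": 64, "bus_bits": 192},
    "RX 5500 XT (Navi 14)": {"rops": 32, "bus_bits": 128},
}

for name, c in cards.items():
    # ROPs per 32-bit memory channel; a fixed ratio would suggest coupling.
    channels = c["bus_bits"] // 32
    print(f"{name}: {c['rops']} ROPs / {channels} channels "
          f"= {c['rops'] / channels:.1f} ROPs per channel")
```

The ratio moving around (8.0 on the 5700 XT and 5500 XT vs ~10.7 on the 5600 XT) is what decoupling looks like from the outside.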
 
Yeah.
ROPs don't spill out of the SA, and Tonga was 4 SEs with 384-bit GDDR5, and that was years ago.

Tonga didn't have a crossbar like Tahiti, but I remember both ROPs and memory bus being cut down on it.

The 5600XT retaining all of them then makes a possible 384-bit config look fine for Big Navi. Still think it seems out of place.
 
I remember both ROPs and memory bus being cut down on it.
I think Apple had the full configs.
Still think it seems out of place.
Kinda.
They just wanna scale the same SE n times over without some specific part choking so tons of "extra" ROPs galore are a go.
Is it tho?
Pretty sure yeah.
16 ROPs per GPC and I doubt it's a 6GPC config (he-he, GA103 is missing; get fucked nVidia and whatever product manager decided to shitcan 103).

GA102 is 12 SMs per GPC, so an 8-SMs-per-GPC GA104 would be mildly underweight.
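For the arithmetic's sake, a minimal sketch of what those per-GPC figures imply; the GPC counts here are assumptions for illustration, not confirmed specs:

```python
# Assuming 16 ROPs per GPC as stated above; GPC counts are assumptions.
ROPS_PER_GPC = 16

configs = {
    "GA102 (assumed 7 GPCs x 12 SMs)": (7, 12),
    "GA104 (assumed 6 GPCs x 8 SMs)":  (6, 8),
}

for name, (gpcs, sms_per_gpc) in configs.items():
    print(f"{name}: {gpcs * sms_per_gpc} SMs, {gpcs * ROPS_PER_GPC} ROPs")
```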
 
Pretty sure yeah.
16 ROPs per GPC and I doubt it's a 6GPC config (he-he, GA103 is missing; get fucked nVidia and whatever product manager decided to shitcan 103).

GA102 is 12 SMs per GPC, so an 8-SMs-per-GPC GA104 would be mildly underweight.

It's not without precedent that '04s sport 6 GPCs:
https://www.techpowerup.com/gpu-specs/nvidia-tu104.g854

edit: late-edited in here to not derail the thread any more, since this is not about Navi. So, an '04 with 6 GPCs is not without precedent. Also, Nvidia explicitly said they re-engineered the placement of the ROPs into the GPC. Surely they wouldn't do this just for teh lulz.
 
Well, time is running out, and AMD said their new GPUs would launch before the consoles... And it seems the Xbox is arriving on the 10th of November, so an October launch is more and more likely.
 
So 4SEs, 5120 shaders, 128ROPs, 384-bit bus with GDDR6, and linear scaling over 5700XT. In addition to that, higher core clocks plus IPC increase if any.

So it should land close to 3080 performance in October if your one-liners come true.
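To put rough numbers behind those one-liners, a quick sketch; the shader/ROP/bus figures are the speculated ones above, while the clocks and GDDR6 speed are placeholder assumptions, not leaks:

```python
# Raw-throughput sketch of the speculated Big Navi config vs the 5700 XT.
# Clocks and GDDR6 data rate are placeholder assumptions.
navi10   = {"shaders": 2560, "clock_ghz": 1.9, "bus_bits": 256, "gbps": 14}
big_navi = {"shaders": 5120, "clock_ghz": 2.1, "bus_bits": 384, "gbps": 16}

def tflops(c):
    return c["shaders"] * 2 * c["clock_ghz"] / 1000  # FMA counts as 2 FLOPs

def bandwidth_gbs(c):
    return c["bus_bits"] / 8 * c["gbps"]

for name, c in (("5700 XT", navi10), ("Big Navi", big_navi)):
    print(f"{name}: {tflops(c):.1f} TFLOPS, {bandwidth_gbs(c):.0f} GB/s")

print(f"Compute: {tflops(big_navi) / tflops(navi10):.2f}x, "
      f"bandwidth: {bandwidth_gbs(big_navi) / bandwidth_gbs(navi10):.2f}x")
```

Under those placeholder clocks, bandwidth scaling lags compute scaling, which is the obvious thing to watch in a 384-bit config.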
 
So 4SEs, 5120 shaders, 128ROPs, 384-bit bus with GDDR6, and linear scaling over 5700XT. In addition to that, higher core clocks plus IPC increase if any.
Aye.
(well nothing ever scales linearly, but getting close is trve art)
Also might as well say 8 SAs, because SEs actually play a weird token role there.
So it should land close to 3080 performance in October if your one-liners come true.
Uhuh.
 
Well, you're the one who said there's no evidence that Navi scales. What evidence are you referring to, exactly?
I'm referring to a lack of evidence being presented by those who say that Navi can scale. I'm making the claim that evidence is lacking, not that Navi doesn't scale. I haven't done a scaling analysis.
 
Well, I did a very coarse scaling analysis before. I saw no comments about it except a claim of a non-existent CPU limitation. But it was clear that there is scaling (I'm referring more to the 5500XT -> 5600XT), and this scaling lands somewhere between shading power and geometry/ROPs. Feel free to do your own scaling analysis instead of asking others to do it for you and then trashing it because you're not satisfied.
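For reference, this is the shape such a coarse analysis takes; the clocks are approximate and the framerates below are placeholders to be swapped for real benchmark numbers:

```python
# Coarse scaling check, 5500 XT -> 5600 XT: compare the observed fps ratio
# against the raw spec ratios. Clocks are approximate; fps values are
# placeholders, not measurements.
rx5500xt = {"shaders": 1408, "clock_ghz": 1.72, "rops": 32, "fps": 60.0}
rx5600xt = {"shaders": 2304, "clock_ghz": 1.62, "rops": 64, "fps": 100.0}

def ratio(key):
    return (rx5600xt[key] * rx5600xt["clock_ghz"]) / (rx5500xt[key] * rx5500xt["clock_ghz"])

print(f"Shading-power ratio:  {ratio('shaders'):.2f}x")
print(f"ROP-throughput ratio: {ratio('rops'):.2f}x")
print(f"Observed fps ratio:   {rx5600xt['fps'] / rx5500xt['fps']:.2f}x")
# If the fps ratio lands between the two spec ratios, scaling sits somewhere
# between shading power and ROPs, as argued above.
```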
 
You claimed 2x a 5700XT would be "barely faster" than a 2080ti earlier.
Clearly a miscalculation on my part.

The XSX power is NOT UNKNOWN at all, and we have a fair idea of the power consumption as they compared it directly to the Xbox One X.
Which is also of unknown and variable power; depending on the workload it can vary a lot: Doom 2016 draws 120 W, Gears 4 does 170 W.

Also, consoles have a fixed-framerate mentality; this on its own reduces power consumption, as opposed to desktop chips that have to work all of their parts at maximum utilization to achieve the highest fps.
 
Which is also of unknown and variable power
Allocated statically, so you can safely assume worst case.
Also consoles have a fixed frame rate mentality
I wish.
this on it's own reduces power consumption as opposed to desktop chips that have to work all of their parts to maximum utilization to achieve the highest fps.
PS4p GPU making the thing go jet engine begs to disagree.
Pushing a given GPU envelope is the console thing to do.
 
But even GCN scales with width.
Scaling from a low baseline is no use in today's games. Maxwell utterly destroyed the relevance of old-skool GCN.
Well, I did a very coarse scaling analysis before. I saw no comments about it except a claim of a non-existent CPU limitation.
You didn't demonstrate that your analysis was not CPU limited.
But it was clear that there is scaling (I'm referring more to the 5500XT -> 5600XT), and this scaling lands somewhere between shading power and geometry/ROPs. Feel free to do your own scaling analysis instead of asking others to do it for you and then trashing it because you're not satisfied.
I've thought about doing so. I couldn't find data that would satisfy me that it was possible.

Threads like this:

https://forum.beyond3d.com/threads/pcgh-pixelshader-shootout-rv770-vs-gt200.43670/

weren't especially conclusive. Lots of interesting nuggets. Oh look:

The obvious suggestion is to go to MAD+MAD to simplify compilation and increase throughput, but this also demands significantly more register file bandwidth so may not happen.
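A back-of-the-envelope on why that register file bandwidth point bites; the operand width and SIMD width below are assumptions purely for illustration:

```python
# A MAD/FMA reads 3 operands and writes 1 result per lane per clock, so
# co-issuing a second MAD roughly doubles register-file traffic.
# 32-bit operands and a 64-lane SIMD assumed for illustration.
LANES = 64
BYTES_PER_OPERAND = 4

def rf_bytes_per_clock(mads_per_lane):
    reads_and_writes = (3 + 1) * mads_per_lane
    return LANES * reads_and_writes * BYTES_PER_OPERAND

print(f"Single MAD per lane: {rf_bytes_per_clock(1)} B/clk of register traffic")
print(f"Dual MAD per lane:   {rf_bytes_per_clock(2)} B/clk of register traffic")
```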
 
It was clearly not CPU limited: the comparison was between a 5500XT and a 5600XT, with more-than-linear scaling with shading power on a 5 GHz overclocked 9900K, while other GPUs of the same architecture on the same setup reach much higher framerates. It is also easy to compare frame rates across the various games on the same setup between different cards of the same architecture. But feel free to demonstrate that it was CPU limited.
 