AMD: Navi Speculation, Rumours and Discussion [2019-2020]

You can easily see Bulldozer's weaknesses:

L1i cache aliasing issues
write-through L1d
terrible L2 latency / shared L2 (see the sketch below)
extremely bad inter-module latency
very narrow integer execution
high FPU latencies from the FMA unit
shared fetch/predict/decode
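And these aren't hand-waving; they show up directly in simple microbenchmarks. A generic pointer-chasing sketch like the one below is enough to expose load-to-use latency at each cache level (the buffer sizes and their mapping to L1/L2 and beyond are illustrative assumptions, nothing AMD-specific), and pinning the threads of a similar two-thread test to cores in different modules exposes the inter-module latency the same way:

```c
/* Generic pointer-chasing latency sketch (illustrative, not AMD-specific).
 * Walking a randomly permuted cyclic linked list defeats the prefetchers,
 * so the time per hop approximates the load-to-use latency of whichever
 * cache level the working set lands in. Build: cc -O2 chase.c */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define HOPS 10000000UL

static double ns_per_hop(size_t bytes)
{
    size_t n = bytes / sizeof(void *);
    void **buf = malloc(n * sizeof(void *));
    size_t *idx = malloc(n * sizeof(size_t));

    /* Random cyclic permutation: every load depends on the previous one. */
    for (size_t i = 0; i < n; i++) idx[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }
    for (size_t i = 0; i < n; i++)
        buf[idx[i]] = &buf[idx[(i + 1) % n]];

    void **p = &buf[idx[0]];
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (unsigned long h = 0; h < HOPS; h++)
        p = (void **)*p;                  /* dependent load chain */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    volatile void *sink = p;              /* keep the chase from being optimized away */
    (void)sink;
    free(buf);
    free(idx);
    return ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / HOPS;
}

int main(void)
{
    /* Illustrative working-set sizes: roughly L1d, L2 and beyond. */
    printf("16 KiB : %.1f ns/hop\n", ns_per_hop(16u << 10));
    printf("512 KiB: %.1f ns/hop\n", ns_per_hop(512u << 10));
    printf("16 MiB : %.1f ns/hop\n", ns_per_hop(16u << 20));
    return 0;
}
```

A slow shared L2 shows up immediately as the gap between the first two numbers.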

But what can you say about GCN? Nothing. Games that are compute-heavy generally do well on it; games that hit other parts of the GPU don't. So your answer is to throw it away? The other interesting thing to note is that if you address all the above issues with BD, you can very easily come out looking like Zen :).

edit: Also, of the things that got worked on the most in Steamroller/Excavator, the SMU 100% went into Zen, that is turbo boost, XFR etc.


Wow, you're so thrilled by GCN :). I could name a list of GCN weaknesses just as you did with the Bulldozer example, but why keep beating a dead horse?
 
Point taken, but couldn't the same be said about Bulldozer? Instead of throwing more money into it, they ditched it in favour of a new arch. And it was the right decision.

Let's be frank. GCN is a more than 8 year (!) old GPU architecture, with just modest updates through the years. It has its limits and weaknesses. Its development started somewhere around 2007/2008. Sure, they can "tweak it" hard at the end of its life and spend tons of money to make it better, or they can focus on a new GPU architecture which has higher potential. One scenario here is more probable than the other, I guess...

But with Ryzen, AMD caught up in performance to their competition, which is not where their GPU division is. They have the same issue in both sectors: lower clocks than the competition.


Then there are the driver pains with a new architecture, which could make the hardware improvements not as useful from the start as they should be, 'fine wine' etc.
 
Having a hard time pushing the frequency, less power efficient than the green side, and this 4 tri/clock limit seems to be a GCN limitation (Vega's primitive shaders are not available, so...), but maybe Navi will change that?
Don't tell me GCN is fine when AMD hasn't been able to compete in the high end with it for years now. And the "oh, they don't have the money for good R&D" argument, I get that, I really do, but if GCN costs too much money to be tweaked enough to thrive, then do something else...

An interesting topic, oriented towards power efficiency, that I forgot about: https://forum.beyond3d.com/threads/...r-than-comparable-pascal-maxwell-chips.60558/
 

Right, plus front-end scaling and weak geometry (does nobody find it weird that the RX 570 has the same front end as Vega 20?)
4 SEs capable of a max of four triangles per clock for a mid-range chip, and the same configuration used for the Radeon VII? (rough numbers in the sketch below)
How could that possibly be a well-balanced architecture?
With that realization in mind, Vega has a hard time keeping all of those shaders busy.
They have kept 4 SEs since the Hawaii launch and never gone past 4096 SPs.
Bandwidth efficiency.
Enormous power consumption (they need a full node jump to be at least partially competitive and drive power consumption down).
Some crucial Vega features are still not working, and some of them have dubious benefit.

https://forum.beyond3d.com/threads/amd-navi-speculation-rumours-and-discussion-2019.61042/page-20
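To put that front-end point in perspective, here is a back-of-the-envelope sketch. The 1 triangle per SE per clock figure is the commonly cited GCN setup rate and the boost clocks are approximate, so treat the numbers as illustrative only:

```c
/* Back-of-the-envelope peak geometry throughput for two 4-SE GCN parts,
 * assuming the commonly cited 1 triangle per shader engine per clock.
 * Clock figures are approximate boost clocks, purely illustrative. */
#include <stdio.h>

int main(void)
{
    const int    shader_engines = 4;        /* same on both parts        */
    const double rx570_clk      = 1.244e9;  /* ~1244 MHz boost (approx.) */
    const double vii_clk        = 1.750e9;  /* ~1750 MHz boost (approx.) */

    printf("RX 570    : %.1f Gtri/s peak\n", shader_engines * rx570_clk / 1e9);
    printf("Radeon VII: %.1f Gtri/s peak\n", shader_engines * vii_clk / 1e9);
    return 0;
}
```

Same per-clock front-end ceiling on both; only the clock gap separates them, even though the Radeon VII carries almost twice the SPs.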
 
Wow, you two are so predictable; all you can list is outcomes. So what makes GCN so power inefficient, what limits clocks? Why can't those things change and it still be GCN?

I also like how you try to brush off R&D like it's nothing. Maybe you guys should become CTOs; you would save tech companies so much cash, because you wouldn't do R&D, you would just do something else that's way cheaper and better because reasons...

AMD has said there is no hard limit on the number of SEs. So given that, why haven't we seen more? It's probably R&D related, you know, having no money to do it. Things like N*(N-1)/2 get hard above 4 real quick (quick numbers below).
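That N*(N-1)/2 is just the number of pairwise links if every shader engine has to talk to every other one; a trivial sketch makes the growth obvious (illustrative only):

```c
/* Pairwise links between N shader engines if each one has to reach every
 * other one: N*(N-1)/2. Illustrates why interconnect/arbitration
 * complexity grows much faster than the SE count itself. */
#include <stdio.h>

int main(void)
{
    for (int n = 2; n <= 8; n++)
        printf("%d SEs -> %2d pairwise links\n", n, n * (n - 1) / 2);
    return 0;
}
```

Going from 4 to 8 SEs takes you from 6 links to 28, nearly 5x the interconnect for 2x the engines.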
 


Alright, tell me then, what is behind all of these things if not the architecture itself? If the architecture is so superb, then either it's badly engineered at AMD, or are you trying to imply that GloFo and TSMC are deliberately crippling AMD/RTG's work to make them artificially worse than their competitors?

Why then is AMD working on a next-gen GPU if GCN is so superb? It just needs a "little tweak", right?

@ToTTenTranz: You want others to prove their claims, but you don't have to prove anything, and yet you have the nerve to call them trolls.
 