AMD loves their 6Q cadence more than I love my shares.
> AMD loves their 6Q cadence more than I love my shares.
It would certainly make things interesting if AMD is actually able to deliver.
> RX 6600XT and RX 6600 are about to be released. First leaks appear. It looks like my future GPU, but we shall see. TDP is less than 130W for the RX 6600XT and less than 100W for the RX 6600, and they use a single 6-pin connector... Both feature 8GB of GDDR6 VRAM.
> https://www.igorslab.de/en/amd-rade...emory-same-bandwidth-limit-as-the-rx-5500-xt/
> My biggest question mark is whether performance is going to be affected. My ASRock B450M Steel Legend only has a PCIe 3.0 slot, not 4.0.
Is that a worthwhile upgrade from your 1080? About a 20% performance boost at most?
> There's no way that PCI Express 4.0 is relevant to the performance you would get.
From the link: it's 4.0 x8, which will work as 3.0 x8 in a PCIe 3.0 system, and that may affect performance in bandwidth-limited scenarios. "With the RX 5500 XT one could observe up to 7% performance loss on an older AMD system (or Intel up to generation 10) at that time."
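For a rough sense of what halving the link width means, here is a minimal back-of-the-envelope sketch (assuming the usual 128b/130b encoding, which both PCIe 3.0 and 4.0 use; how much this matters in practice depends on how often a game spills past the 8GB of VRAM):

```python
# Rough one-direction PCIe link bandwidth in GB/s.
# Both PCIe 3.0 and 4.0 use 128b/130b encoding, so only the transfer rate differs.
def pcie_bandwidth_gbs(transfer_rate_gts: float, lanes: int) -> float:
    return transfer_rate_gts * (128 / 130) / 8 * lanes

print(f"PCIe 4.0 x8:  {pcie_bandwidth_gbs(16, 8):.1f} GB/s")   # ~15.8 GB/s (what the card is wired for)
print(f"PCIe 3.0 x8:  {pcie_bandwidth_gbs(8, 8):.1f} GB/s")    # ~7.9 GB/s (what a B450 board will give it)
print(f"PCIe 3.0 x16: {pcie_bandwidth_gbs(8, 16):.1f} GB/s")   # ~15.8 GB/s (what a full-width 3.0 card would get)
```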
> Is that a worthwhile upgrade from your 1080?
Pretty sure he just wants something cute and low-power.
Nice one, you forgot the price: ∞$ ™
10 teraflops at 90-100W TDP, not bad! This reminds me of the days when I bought the RX 570, and especially the 1060 3GB (which was a gift from another person), and I was so happy with the performance and power consumption at 1080p that I showed quite a few pictures in this very forum.
Some articles I read on it mention that it's a 1080p GPU, but I wonder why... Maybe with RT on, but without RT, 1440p seems easily doable on it. I am currently using a 1060 3GB as my main GPU, and sure, it's a $100-120 GPU nowadays: it lacks some important features, the VRAM is low (even the 6GB version of the 1060 is unappealing to me these days because the performance is what it is), it draws 120W, and it's an old 16nm architecture...

That's a pretty old article. Going by the mobile Navi 23 variants, the RX 6600 should have only 32MB of Infinity Cache, not the 64MB Igor's Lab suggests.
https://videocardz.com/newz/amd-radeon-rx-6600xt-and-rx-6600-with-navi-23-gpu-appear-in-the-drivers
I doubt you'll get much of a performance cut from running the GPU at PCIe 3.0 x8, considering it's a GPU designed for 1080p, or 1440p at most.
IIRC this is also the chip going into the new Teslas for infotainment.
> Some articles I read on it mention that it's a 1080p GPU.
Cuz that's what is written on the box.
Looks like the number of ROPs is scaled with the number of shader engines, and AMD is putting 32 ROPs per SE. It'll be interesting to see how many ROPs there are in the RDNA2 APUs.
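A quick sanity check of that pattern against the commonly reported RDNA2 die configurations (the shader-engine counts below are the widely cited figures, treated here as assumptions rather than anything confirmed in this thread):

```python
# "32 ROPs per shader engine", applied to the usual RDNA2 die configs (assumed).
ROPS_PER_SE = 32
shader_engines = {"Navi 21": 4, "Navi 22": 2, "Navi 23": 2}
for die, se in shader_engines.items():
    print(f"{die}: {se} SE -> {se * ROPS_PER_SE} ROPs")
# Lines up with the reported 128 / 64 / 64 ROPs for Navi 21 / 22 / 23.
```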
> Looks like the number of ROPs is scaled with the number of shader engines, and AMD is putting 32 ROPs per SE.
Nah.
Looking at profit margins for the various dies/memory sizes AMD has, it easily makes the most sense to concentrate all its limited resources on the biggest dies. The poor Navi 23 die would have to sell at an average of $400 a pop to match whatever margins AMD gets from the Navi 21.
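Nobody outside AMD knows the real margins, but the shape of that argument is plain wafer arithmetic. A heavily simplified sketch, using public die-size estimates and purely made-up placeholder ASPs (it ignores yield, binning, memory and board costs, so it illustrates the reasoning rather than the actual numbers):

```python
import math

# Illustrative only: revenue per 300 mm wafer if every die became a single SKU.
# Die areas are public estimates; the ASPs are placeholder assumptions, not AMD pricing.
WAFER_AREA_MM2 = math.pi * (300 / 2) ** 2  # ignores edge loss and defects

def dies_per_wafer(die_area_mm2: float) -> float:
    return WAFER_AREA_MM2 / die_area_mm2   # crude upper bound

for name, area_mm2, asp in [("Navi 21", 520, 650), ("Navi 23", 237, 300)]:
    n = dies_per_wafer(area_mm2)
    print(f"{name}: ~{n:.0f} dies/wafer x ${asp} assumed ASP ~= ${n * asp:,.0f} per wafer")
```

Even with toy numbers like these, the small die has to command a surprisingly high price before a wafer of Navi 23 brings in what a wafer of Navi 21 does, which is the gist of the $400-a-pop remark.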
> The poor Navi 23 die would have to sell at an average of $400 a pop to match whatever margins AMD gets from the Navi 21.
N23 gets them OEM slots.
> Looking at profit margins for the various dies...
Any and every CPU part wins that contest, no questions asked.
^That's only for short-term profits, though. AMD must penetrate the notebook market, where it faces tremendous resistance but is now able to compete.
> Cuz that's what is written on the box.
Sorry for my ignorance in certain technical matters, but don't you think that would make it one of the most unbalanced GPUs ever, then?
> Sorry for my ignorance in certain technical matters, but don't you think that would make it one of the most unbalanced GPUs ever, then?
Also, it has too small an LLC (32 MiB) to actually drive 1440p off a puny 128-bit memory bus.
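To put numbers on the bus side of that argument, a minimal sketch assuming 16 Gbps GDDR6 (typical for RDNA2 boards, but not confirmed for these SKUs); Infinity Cache hits are what would have to close the gap, and hit rates fall as resolution rises:

```python
# Raw GDDR6 bandwidth from bus width and per-pin data rate (Infinity Cache not included).
def vram_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    return bus_width_bits / 8 * gbps_per_pin

print(vram_bandwidth_gbs(128, 16))  # 256 GB/s - rumoured Navi 23, 128-bit bus
print(vram_bandwidth_gbs(192, 16))  # 384 GB/s - Navi 22 (RX 6700 XT), 192-bit bus
print(vram_bandwidth_gbs(256, 16))  # 512 GB/s - Navi 21, 256-bit bus
```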
It has more compute power than, say, the PS5, and also more than my GTX 1080 (9 TFLOPS), I think. Is it just me, or should it do 1440p just fine, judging by the raw teraflops numbers alone?
I mean... 10 teraflops for 1080p seems a bit too much. Although if that means playing many games at 240fps, I wouldn't mind returning to my previous 1080p 240Hz monitor, which I liked very much. Still, it seems kind of odd. Just curious...
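To put those teraflops side by side, a rough sketch using the usual shaders x 2 ops/clock x clock formula; the Navi 23 shader count comes from the rumours and its game clock is an assumption, and raw FP32 doesn't compare one-to-one across architectures anyway:

```python
# Peak FP32 throughput: shaders * 2 ops/clock (FMA) * clock in GHz -> TFLOPS.
def tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000

print(f"Rumoured Navi 23 (2048 SP @ ~2.4 GHz, assumed): {tflops(2048, 2.40):.1f} TFLOPS")
print(f"PS5 GPU (2304 SP @ 2.23 GHz cap):               {tflops(2304, 2.23):.1f} TFLOPS")
print(f"GTX 1080 (2560 SP @ ~1.73 GHz boost):           {tflops(2560, 1.73):.1f} TFLOPS")
```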
It is pretty damned weird, especially being so utterly close to the Navi 22 die. The only target market I can imagine for this is gaming laptops, and aiming at them very deliberately. With that many CUs you can cut the clock speed drastically, think down to 1.1-1.2 GHz, and still get performance roughly at or above an RX 580 or GTX 1060, no doubt in quite a low power envelope. That makes for a potentially incredibly power-efficient GPU. And the fact that a lot of gaming laptops already target 1080p fits nicely.
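For a rough idea of what that downclocking trade-off looks like on paper, the same peak-FP32 formula as above (the 1.2 GHz figure is just the hypothetical from the post, and GCN/Pascal teraflops don't translate one-to-one to RDNA2, which tends to get more out of each flop):

```python
# Same peak-FP32 estimate, applied to the hypothetical low-clock laptop scenario.
def tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000

print(f"Navi 23 at a hypothetical 1.2 GHz:    {tflops(2048, 1.20):.1f} TFLOPS")
print(f"RX 580 (2304 SP @ ~1.34 GHz boost):   {tflops(2304, 1.34):.1f} TFLOPS")
print(f"GTX 1060 (1280 SP @ ~1.71 GHz boost): {tflops(1280, 1.71):.1f} TFLOPS")
```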