AMD: RDNA 3 Speculation, Rumours and Discussion

Discussion in 'Architecture and Products' started by Jawed, Oct 28, 2020.

  1. troyan

    Regular Newcomer

    Joined:
    Sep 1, 2015
    Messages:
    512
    Likes Received:
    937
    Yeah, it's nonsense. Apple delivers 30% more GPU performance with their SoCs, but AMD will just increase performance by 2.7x. I don't know why these rumours get so much attention every time. Voltage scaling has come to an end, but AMD will deliver what nobody else could.

    RDNA2 is only 30% more efficient than RDNA1 despite having 70% more transistors. RDNA2 is an inefficient architecture for compute. But RDNA3 delivers nearly 3x more compute performance within the same power envelope.
     
    PSman1700 and xpea like this.
  2. Bondrewd

    Veteran Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    1,564
    Likes Received:
    757
    Yes!
    Unironically.
    Yes!
    Unironically.
    No one else has quite the ballsack to tinker with really funky 3DIC setups.
    ?
    N23 is only 2 days away.
    A bit less ISO power but yes!
    Real world™ FP32@mm^2 is a big-big gfx11 gimmick.
     
    Lightman and Tarkin1977 like this.
  3. ToTTenTranz

    Legend Veteran

    Joined:
    Jul 7, 2008
    Messages:
    12,752
    Likes Received:
    7,766
    I disagree. The bulk of the development effort is going into consoles and RDNA2's raytracing performance.
    On multiplatform titles the software sales numbers don't lie, and the great majority of sales will be coming from Xbox + Playstation. If that's where the money is coming from, then that's where the dev effort will go.

    Consoles are by far the most useful benchmark for ray tracing performance.

    I'm also not really counting on the (IMO inevitable) mid-gen refreshes putting a great deal of effort into improving relative RT performance-per-TFLOP.
    So really, for the majority of games, the best bet will be to offer a proportional performance boost of X times the PS5/SeriesX.


    I'm not sure I get this.. You're suggesting that low-level optimizations for raytracing on RDNA2 consoles aren't going to benefit RDNA3 PC GPUs?
    Isn't AMD providing ISA compatibility between RDNA3 and RDNA2?
     
  4. Bondrewd

    Veteran Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    1,564
    Likes Received:
    757
    Uh oh
     
  5. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,052
    Likes Received:
    4,262
    Location:
    Finland
    That's not how it works; they need to have cheap options to make even the majority happy. If there are only expensive options, no matter how fast, it's not going to make everyone, or even the majority, happy.
     
    Kej, BRiT, ToTTenTranz and 1 other person like this.
  6. techuse

    Regular Newcomer

    Joined:
    Feb 19, 2013
    Messages:
    992
    Likes Received:
    606
    So worse than Ampere?
     
  7. Bondrewd

    Veteran Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    1,564
    Likes Received:
    757
    Cheap is out of the question for dGPUs anymore; baseline costs are too high.
    There will be good value stuff galore tho.
    The opposite.
    Instead of moar FP32 per same amount of regs they throw moar everything in a fat SM config.
     
    Nemo and Lightman like this.
  8. Leoneazzurro5

    Regular Newcomer

    Joined:
    Aug 18, 2020
    Messages:
    305
    Likes Received:
    326
    Well, as a gamer I would like it to be that way, but realistically a 5nm wafer will cost 50% or more over a 7nm wafer, and we are looking at 650+mm^2 of 5nm silicon, plus the I/O and cache dies (probably on 6nm). The MCM packaging will not be cheap, either. If a 6800XT has an MSRP of $650, there is no chance for N31 to be sold for less than $1200-1400, or even higher, even with low margins. Same for N32. N33 will be a "cheaper" option, but it will still be a 6nm 400+mm^2 GPU, i.e. in the $500+ range. The "budget" option would be N34, and there we will probably have something in the N23 price range... which is already $299-349.
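    For a sense of scale, here is a minimal sketch of the wafer-cost arithmetic behind that estimate, using the standard gross-die-per-wafer approximation. The wafer prices are illustrative assumptions (actual foundry pricing is not public) and the die areas are the ones quoted above:

    ```python
    import math

    def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
        """Standard gross-die-per-wafer approximation (ignores defect yield)."""
        radius = wafer_diameter_mm / 2.0
        return int(math.pi * radius ** 2 / die_area_mm2
                   - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

    # Illustrative wafer prices only -- actual foundry pricing is not public.
    WAFER_COST_USD = {"5nm-class": 13_500, "6/7nm-class": 9_000}  # "+50% and more" per the post

    # Die areas taken from the post: ~650mm^2 of 5nm compute silicon for N31,
    # ~400+mm^2 of 6nm for N33 (priced here at the 6/7nm-class rate).
    for name, area_mm2, node in [("N31 compute silicon", 650, "5nm-class"),
                                 ("N33", 420, "6/7nm-class")]:
        gross = dies_per_wafer(area_mm2)
        print(f"{name}: ~{gross} gross dies/wafer on {node}, "
              f"~${WAFER_COST_USD[node] / gross:,.0f} of silicon per die before yield")
    ```

    Yield losses, packaging, memory, and board costs come on top of the raw silicon cost, which is why the final MSRP estimates land so much higher.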
     
  9. Nebuchadnezzar

    Legend Veteran

    Joined:
    Feb 10, 2002
    Messages:
    1,048
    Likes Received:
    309
    Location:
    Luxembourg
    The <$350 segment is likely to be completely dead, with no products; AMD including an IGP on all future Zen4 CPUs likely covers the "cheap" segment builds.
     
    Lightman, BRiT and ToTTenTranz like this.
  10. Bondrewd

    Veteran Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    1,564
    Likes Received:
    757
    Ehh, ballpark ~440mm^2 but it's also less mem than N22.
    Feasible for 450 bucks.
    Raphael iGP is GT1-tier config for office boxes.
    But yeah, sub $300 is dying out at a rapid pace.

    Chopped N23 is already $299.
     
  11. Subtlesnake

    Regular

    Joined:
    Mar 18, 2005
    Messages:
    333
    Likes Received:
    109
    A 500W+ part with 50% better performance per watt wouldn't be out of the question. It would just be like the old "X2" parts from AMD. Think about the 3870 vs 4870 X2, but with better scaling.
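    To make the implied arithmetic explicit, here is a minimal sketch; the ~300 W baseline for a 6900 XT-class card is an assumption on my part, while the 500 W and +50% perf/W figures come from the post:

    ```python
    # Simple model: performance scales with (power ratio) x (perf/W ratio).
    rdna2_power_w      = 300.0  # assumed board power of a 6900 XT-class card
    rdna3_power_w      = 500.0  # "500W+ part" from the post
    perf_per_watt_gain = 1.50   # "50% better performance per watt"

    perf_uplift = (rdna3_power_w / rdna2_power_w) * perf_per_watt_gain
    print(f"Implied performance uplift: {perf_uplift:.1f}x")  # ~2.5x, a dual-GPU-style jump
    ```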
     
  12. ToTTenTranz

    Legend Veteran

    Joined:
    Jul 7, 2008
    Messages:
    12,752
    Likes Received:
    7,766
    And on top of that, they need volume.
    People will only be "happy" if they can actually get their hands on the GPUs, preferably without being scalped into oblivion.


    Unless post-Rembrandt APUs are able to offer close to PS5-level performance, what I take from here is that a large chunk of the PC gaming market will be driven to consoles.

    Covid isn't going to last forever, and people won't be forever stuck at home, willing to spend an increasingly large portion of their income on a discrete GPU. Much less when post-covid inflation hits and their disposable income gets reduced.
    I get the feeling that both AMD and Nvidia are biting off more than they can chew in this relentless quest for higher ASPs and record margins QoQ out of the same market they've been serving for 20 years.


    But just like N22 before it, the N23 release MSRP is severely affected by the current era of IC shortage + mining craze + scalping.
    I have a hard time believing AMD planned for N23 to release at $300-380, when they originally laid out their plans for the chip back in 2018-2019.
     
    JoeJ likes this.
  13. PSman1700

    Legend Newcomer

    Joined:
    Mar 22, 2019
    Messages:
    5,579
    Likes Received:
    2,441
    That's not what we're actually seeing, though. PC versions seem to always have the most robust/performant/high-fidelity RT support. Consoles are probably the worst way to benchmark ray tracing, as they offer the lowest-performance RT of any hardware available today in the gaming market.
    Most money is probably made from Steam/PC as well.

    Seems it's the other way around.
     
  14. Bondrewd

    Veteran Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    1,564
    Likes Received:
    757
    But it's not.
    Baby shit.
    Same niche, yes; radically different things, also yes.
    Ta-da!
    Lisa wins either way.
    AMD margin expansion is strictly driven by revenue share gains in laptops and datacenter.
    NV margins are the same?
    Or even lower.
    Ehhh it could've been $329 tops instead of say $349.
    Nope.
    Consult $ATVI ERs or idk
     
    Lightman and ToTTenTranz like this.
  15. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,484
    Likes Received:
    1,844
    Location:
    London
    I'm suggesting two things in terms of ray tracing in games:
    1. fuzzy (low res, low update rate) but amazingly comprehensive console ray tracing with low-level voodoo
    2. comprehensive brute force pristine PC ray tracing
    NVidia won't be standing still, and AMD needs a >2x uplift to avoid looking bad versus NVidia and whatever wizardry the consoles have.
     
    Lightman likes this.
  16. tsa1

    Newcomer

    Joined:
    Oct 8, 2020
    Messages:
    84
    Likes Received:
    95
    What I can imagine is something like the Fury Nano, with extra-good silicon running at peak-efficiency clocks; that'd be interesting to see.

    I don't get the ARM craze, it's like suddenly everyone decided to believe Apple's marketing team (remember their "stellar" GPU presentation) and some toy tests from an ARM fan at Anandtech. Pretty sure if it was so good, the big boys would already be doing it (and it seems Keller's K12.3 didn't quite pan out, so the magic ARM performance and efficiency wasn't really there). Well, it's good for what it is, but it's too wide to be scalable to proper HEDT/enterprise level, it seems.

    Kinda meh that ATi is again abandoning the mid-range market; hopefully the potentially successful RDNA3 won't be followed by an R600 2.0...
     
    Rootax likes this.
  17. Bondrewd

    Veteran Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    1,564
    Likes Received:
    757
    Cool but probably too niche these days.
    Those small mITX-friendly Pascal designs also died out.
    People aren't actually stroking ARM, they're stroking Apple. Which indeed makes pretty cool h/w.
    Actual ARM shills are just an off-breed type of semis mutt that used to shill, say, POWER in eons past.
    It's no ATi anymore, but AMD.
    The new, spoopy kind of.
    Lisa wants her >50% margins and she's gonna get them.

    Also the midrange is just shifting up due to climbing semis costs and all.
    Best value recent GPU on the market (3060ti) is $400.
    Nah.
    RDNA4 is a pretty fast follow-up either way.
     
    Lightman likes this.
  18. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    11,683
    Likes Received:
    2,601
    Location:
    New York
    The default assumption is that an RDNA 3 WGP is at least as fast as 2x RDNA 2 WGPs in FLOPS-dependent workloads. How games will scale, though, is a different matter. Ampere doubled FLOPS and L1 bandwidth per SM, but that didn't result in 2x gaming performance. The 46-SM 3070 is only 30% faster than the 46-SM 2080.

    RDNA 2 scaled very well with clock speed vs RDNA 1. Comparing the similar 40CU configs of the 6700xt and 5700xt, there was a 35% improvement on paper due to higher clocks, and actual results in games were pretty close to that number. This is a great result, especially considering the lower off-chip bandwidth on the 6700xt. Scaling RDNA 2 up in width didn't quite hit the same mark. Comparing the 40CU 6700xt and 80CU 6900xt, there was a 75% improvement on paper but only 50% in actual gaming numbers. This leads me to believe the 6700xt is benefiting from higher clocks on its fixed-function hardware, or the 6900xt is hitting a bandwidth wall. As mentioned earlier in the thread, it's going to be interesting to see how AMD feeds such a beast.
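    For reference, a quick sketch of where those "on paper" percentages come from, using paper FP32 throughput derived from CU counts and the official boost clocks (sustained game clocks differ somewhat):

    ```python
    # Paper FP32 throughput: 64 FP32 lanes per CU, 2 ops/lane/clock (FMA) on RDNA 1/2.
    def paper_tflops(cus: int, boost_ghz: float) -> float:
        return 2 * 64 * cus * boost_ghz / 1000.0

    cards = {
        "5700 XT": (40, 1.905),  # CU count, official boost clock in GHz
        "6700 XT": (40, 2.581),
        "6900 XT": (80, 2.250),
    }
    tf = {name: paper_tflops(*spec) for name, spec in cards.items()}

    # Clock scaling at the same width vs width scaling at similar clocks
    print(f"6700 XT over 5700 XT: {tf['6700 XT'] / tf['5700 XT'] - 1:+.0%}")  # the ~35% figure
    print(f"6900 XT over 6700 XT: {tf['6900 XT'] / tf['6700 XT'] - 1:+.0%}")  # the ~75% figure
    ```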
     
    Nemo, Lightman and DegustatoR like this.
  19. tsa1

    Newcomer

    Joined:
    Oct 8, 2020
    Messages:
    84
    Likes Received:
    95
    I think it is possible to check this now with the N21 XTXH SKUs, which apparently have a memclock limit of 2450 MHz instead of 2150 MHz (although it seems that either the memory chips themselves or the IMC can't do much more than 2170 MHz or so).
     
  20. ToTTenTranz

    Legend Veteran

    Joined:
    Jul 7, 2008
    Messages:
    12,752
    Likes Received:
    7,766
    I don't expect anyone to stand still.
    I'm fully expecting Nvidia to use their richer developer influence to push for #2 as hard as they can, because that's where they have an architectural advantage, and AMD to focus on "console multipliers" in the expectation that #1 is widely adopted.
    Not much different from Nvidia pushing for more geometry/tessellation in PC games during the Kepler + Maxwell + Pascal eras, while AMD iterated relatively little from GCN1 to GCN4 because the optimization for both consoles was on their side.

    I'm aware that AMD "lost" with that strategy, but I don't think it was the strategy's fault. The HD 7970 eventually did leapfrog the GTX 680 in multiplatform game performance, despite the latter having a massive advantage in geometry throughput.
    It's just that AMD's execution on chip performance (clocks) and release dates was pretty terrible compared to Nvidia's. They failed to reach >1GHz on TSMC 28nm, and then on Globalfoundries' 14nm they screwed up clock speeds pretty badly, at least compared to Nvidia+TSMC.


    Probably both, but more of the latter? The 6700 XT clocks ~12% higher than the 6900 XT on average. The VRAM bandwidth-per-WGP and LLC-amount-per-WGP (and probably the LLC bandwidth too) are all 50% higher on Navi 22 vs. Navi 21, as the sketch below shows.
    OTOH, it doesn't look like Navi 22 is losing all that much from halving the number of Shader Engines, which might be an indicator of why Navi 3x is reducing the SE count in general (or increasing the WGPs per SE).
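    For reference, a minimal sketch of the per-WGP numbers behind that 50% figure, using the stock memory configurations:

    ```python
    # Per-WGP feed: 6700 XT (Navi 22) = 192-bit GDDR6 @ 16 Gbps (384 GB/s), 96 MB LLC, 20 WGPs;
    #               6900 XT (Navi 21) = 256-bit GDDR6 @ 16 Gbps (512 GB/s), 128 MB LLC, 40 WGPs.
    chips = {
        "Navi 22": {"vram_gb_s": 384.0, "llc_mb": 96.0, "wgps": 20},
        "Navi 21": {"vram_gb_s": 512.0, "llc_mb": 128.0, "wgps": 40},
    }
    for name, c in chips.items():
        print(f"{name}: {c['vram_gb_s'] / c['wgps']:.1f} GB/s and "
              f"{c['llc_mb'] / c['wgps']:.1f} MB of LLC per WGP")
    # Navi 22: 19.2 GB/s + 4.8 MB per WGP vs Navi 21: 12.8 GB/s + 3.2 MB -- a 1.5x ratio.
    ```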
     
    #580 ToTTenTranz, Jul 28, 2021
    Last edited: Jul 28, 2021