Speculation: GPU Performance Comparisons of 2020 *Spawn*

Status
Not open for further replies.
Good catch techuse, the 5700 XT is about 43% behind the 2080 Ti at 1440p and 50% behind at 2160p in the latest TPU comparison chart.
 
Let's not get ahead of ourselves. AMD is enjoying a little sunshine right now but they're far off from dethroning Intel. And as we've seen before it can (and probably will) turn back in Intel's favor soon enough. So they shouldn't give up on the lowly gaming market just yet.

Honestly I don't think AMD can ever dethrone Intel, at least in volume terms (AFAIK Intel's production is 800K wafers per month and AMD is sub-100K total). Their stated goal is to reach 20% market share in servers, and I can see them reaching that by end of 2021 with Milan ramping (they've recently reached 10% from near zero pre-Zen). And then Genoa in 2022 should help increase that even further. But to what level? Forget beating Intel; even reaching 50% is a herculean task and will take till at least, say, 2025, IF AMD executes flawlessly and Intel keeps screwing up. That said, I can see AMD maintaining their CPU lead till Intel 7nm in 2023, though AMD will have access to TSMC 3nm by then, so it could still be in AMD's favour.

Point being, and as stated by others, the current prioritization of wafers towards the server market, where the ASPs are in the thousands of dollars (and their margins huge thanks to the chiplet strategy), makes a lot of sense over the dGPU market where they have been weak in recent times. If well received, and with access to the additional capacity vacated by Huawei, they can ramp production if necessary.
The commentary is an accurate near-term profile of AMD’s market position right now, and I don’t think it is “getting ahead of ourselves” in any way.

Sure, revenue diversification matters in the long term, with AMD itself as the testament, but so does maximising RoI, aka best bang for the buck. With the finite amount of TSMC production capacity (after fulfilling client deliverables), why would they prioritise their traditionally weak discrete GPU products over driving further their momentum in server and consumer CPU/APU, which are the bigger money pies?

IMO it is not about giving up, but about picking their fights: there are bigger fish to fry. It is not life and death for them to deprioritise dGPU production/launch at this very moment. That, and Nvidia not being on TSMC 7nm, gives them plenty of leeway, if they truly have a great product stack in waiting.

Great explanation. I have similar thoughts.
No one who was paying attention to Banias/Dothan.

Not many would remember that the origins of Conroe, and Intel's success story of the last 15 years, came from Dothan.
We'll have to see if they can be fully turned off... If not then raster will be hit.

Why would anything be hit if RT isn't being used? RT being shared with the TMUs wouldn't hamper their efficiency, I'd think. Just that you couldn't use them concurrently.
N10 breaks at 4k (did anyone ever figure out why?).
N21 does not.

Bandwidth is definitely a large reason? The 2080Ti has 50% more, and correspondingly more ROPs.
@1440p it's not that different, the 2080Ti is a beast of a GPU, it's almost 50% faster than 5700XT too at that resolution. Even if big Navi achieves double performance it's still barely above 2080Ti.

We also need to factor in RT performance, which is now tremendously fast with Ampere, I don't even think big Navi will reach Turing level of RT with the way they've designed their RT hardware.

I'm not following your logic here. If 2080Ti is 50% faster than a 5700XT, then simple math says double of 5700XT is 33% faster than 2080 Ti (i.e. 200% vs 150%). Definitely more than just "barely faster"
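The disagreement above mostly comes down to "X% faster" vs "X% behind" not being the same relationship. A quick sketch of the napkin math, using the thread's illustrative figures rather than real benchmark numbers:

```python
# Napkin math: "X% faster" and "X% behind" describe different gaps.
# Baseline and percentages are the thread's illustrative figures, not benchmarks.

xt_5700 = 100.0                  # 5700 XT as the relative-performance baseline

# Case A: the 2080 Ti is 50% *faster* than the 5700 XT.
ti_if_faster = xt_5700 * 1.50    # 150
big_navi = xt_5700 * 2.0         # hypothetical doubled 5700 XT -> 200
gap_a = big_navi / ti_if_faster - 1
print(f"{gap_a:.0%}")            # doubled part lands ~33% ahead of the 2080 Ti

# Case B: the 5700 XT is 50% *behind* the 2080 Ti (i.e. half its performance).
ti_if_behind = xt_5700 / 0.50    # 200
gap_b = big_navi / ti_if_behind - 1
print(f"{gap_b:.0%}")            # doubling only ties the 2080 Ti
```

So both posters can be right: at resolutions where the 5700 XT is "50% behind" (case B), a doubled part merely matches the 2080 Ti; where the 2080 Ti is "50% faster" (case A), the doubled part is about a third ahead.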
 
Honestly I don't think AMD can ever dethrone Intel
Not the idea.
Ideal x86 market is 50/50 swinging into 60/40 per specific product cadence.
though AMD will have access to TSMC 3nm by then so it could still be in AMD's favour.
AMD's roadmap is ultra-aggressive and one should never, ever write them off, even assuming Intel's 7nm rollout happens flawlessly in '23.
Bandwidth is definitely a large reason?
It chugs more than comparable class nV GPUs.
Why?
No idea.
 
Let us all carefully postpone FF shitflinging until N21 actually crawls onto the stage (which is soon enough anyway).
Then we can continue our fancy e-fistfight.
What's the point? Once all the numbers are available it ain't fun no more. The fun is all in armchair extrapolation from scraps of news. Carry on folks!!!
 
2080Ti is more than 50% faster than 5700XT @4K. Doubling 5700XT will barely get you above 2080Ti @4K.

[Image: relative-performance_3840-2160.png (TPU relative performance chart at 3840x2160)]

Excuse me, but... wouldn't a doubling of 5700XT performance be 200%? That's about 30% faster than a 2080 Ti. Or am I a complete idiot?
 
lol you want him to make a game containing all the same compromises in 5 mins on his own??
In my humble experience, dealing more with the console side of things: intentionally vague commentary, combined with short one-liners and a lot of "you'll see", has generally been a roadmap to their ultimate demise.
We are all capable of looking at trend data and making fairly reasonable assumptions of where things are headed.
The true hallmark of an insider isn't predicting that trend, it's predicting the outliers on that trend, because those are the things that no one expects. It's about knowing the data points that no one has information about.
It's okay to be a fanboy or whatever, I don't mind optimism, and I don't have a problem with people asking others to stay as objective as possible, but I generally prefer that if someone doesn't know something, it's okay to say you don't know. I don't know a lot of stuff. And that's okay.
And even if you do know something, it's okay to say you don't know if it's still coming, because plans change, and that is all too common in this industry.

I frankly find his posts confrontational and unnecessarily forceful - and that's usually where I feel like I'm being taken for a ride if I trust this person.
 
Well, I guess if I have to guess, I'm going to say Navi 21, or whatever the hell it's called, is going to be about 285W and roughly double the performance of a 5700XT, which should put it in 3080 territory. We'll have to see where the 3080 benchmarks land vs the 2080Ti. I'm going to guess that frequency is not going to see a big boost from the 5700XT because it's an 80CU part. Whatever the smaller RDNA2 parts are called will probably see higher clocks, pushing game clocks to 1850MHz or 1950MHz, but game clocks for this 80CU part will stay reasonably close to 1755MHz. My gut still tells me that Nvidia will be well ahead on ray tracing performance, because they have many dedicated (2nd gen) units vs shared functionality with TMUs.
 
Trend data says AMD has no chance at mobile yet they're smashing it.
Just write everything off and patiently watch; it's very fun that way.
I'll be frank, I have tons of money invested in AMD, and it's absolutely in my favour for them to perform amazingly and to watch the ticker go up. And they have absolutely been doing that since I bought them at their low in early 2019.
But I can't help the fact that when I read your comments, none of it makes me want to invest in AMD more. You talk about watching and trends and this and that. But you haven't been able to write any sort of coherent post that takes one idea or concept and leads it to the next; not a single person can follow you, but you've been highly effective at stopping all sorts of discussion from proceeding under the guise of knowing things.

I get that people don't like concern trolling, and that probably happens a lot, and I'm sure I've done my fair share, so I'm certainly not innocent of doing it. But if you're not going to let that discourse occur, there's no technical discussion here and this forum may as well not exist. We aren't a marketing arm for any single company, so having conversations loaded with heavy pessimism as a counter to over-enthusiasm is how we improve the probability around the accuracy of our assertions.

Don't worry, I will sit back and watch.
 
But I can't help the fact that when I read your comments, none of it makes me want to invest in AMD more
Great, it's too late anyway.
but you've been highly effective at stopping all sorts of discussion
There is nothing to discuss because nothing concrete is known.
Okay, AquariusZi on PTT said N21 is great but they're not allocating much into it.
Cool enough?
But if you're not going to let that discourse occur
There's nothing to discuss, all the flavours of napkin math have been done three times over since the FAD announcement.
Unless we of course want to wank nVidia, but there's plenty of that shit all over the internet now.
Don't worry, I will sit back and watch
Nice; not much time left to wait anymore.
 
I want AMD to compete with or even best NV; it's in a PC gamer's interest. It's almost certain AMD will offer something in the same performance range as NV (20 to 30 TF), and that's all they need to do for now, I think. A 3090 variant as their highest end will most likely also come, but that has a smaller market.
 
As I've said already this and next year will be fun. Probably not in a way which some posters here are expecting but fun nevertheless.

There is nothing to discuss because nothing concrete is known.
Says the man who is pointing to XSX APU at all times.
 
As I've said already this and next year will be fun.
Hell yeah gimme those SoIC GPUs.
Says the man who is pointing to XSX APU at all times.
Yeah it's napkin math that has been done ten times over and no one is interested in it because XSX announcement was ages ago.
This thread is just raw shitlording from both sides, and while fun it is fundamentally pointless.
But I like simple fun.
 
At the same time, I find it hard to believe that an RT system such as the one found in the consoles, with its shared resources, is even close to matching a dedicated solution like the one in Turing. It looks like AMD approached this the same way they approached tessellation back in the day.
To reiterate, NVIDIA is calling RDNA2 a minimalistic RT approach:
Without taking names, NVIDIA pointed out that a minimalist approach toward raytracing (possibly what AMD is up to with RDNA2) has a performance impact due to overreliance on SIMD stream processors. NVIDIA's RT cores offer a completely hardware-based BVH traversal stack, a purpose-built MIMD execution unit, and inherently lower latency from the hardware stack.
https://www.techpowerup.com/review/...ture-board-design-gaming-tech-software/4.html
 
Let's not get ahead of ourselves. AMD is enjoying a little sunshine right now but they're far off from dethroning Intel. And as we've seen before it can (and probably will) turn back in Intel's favor soon enough. So they shouldn't give up on the lowly gaming market just yet.

Yes, I'm expecting Rocket Lake to be a turning point for Intel. Sandybridge was awesome in its day, but they've stuck with it for too long. I love Zen 2 (I have one) and I'm expecting Zen 3 and 4 to deliver the goods too, but in gaming at least, core for core, Comet Lake is still a decent amount faster. It's just that no one cares at this point, because they're stuck on PCIe 3.0, they're more expensive, and they have fewer cores in the top-end halo products. I'm expecting Rocket Lake to bring a significant enough IPC uplift to make the gap between it and Zen 3 much more obvious, while obviously bringing Intel up to the times on the PCIe front. I'm not really expecting them to progress on core counts though. That may be AMD's saving grace.
 