Speculation: GPU Performance Comparisons of 2020 *Spawn*

Yep, none.
I'd expect more of the usual launch issues and not a Navi 10 kind of mess. It seems that hardware bugs really were responsible for part of that. I also expect the drivers to be in a better state in general. Personally I wouldn't buy anything at launch, anyway.
 
Navi 10 clocked at 2230 MHz with fast DRAM could be just 20% faster, so PS5 and XBSX should both be near RTX 2080 Super land, with another 20% still left to reach 2080 Ti / RTX 3070 territory... I don't count on much IPC gain in RDNA 2, so they would need 60 CUs, aka Navi 22, for that.

Don't forget Navi 10 is 40 CUs; the PS5 is only 36. That already makes up for most of the clock speed difference.

And they both have 448GB/s of bandwidth, but the PS5 has to share its bandwidth with the CPU.
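
For reference, a minimal sketch of the FP32 throughput arithmetic behind these comparisons. The 2230 MHz Navi 10 clock is the hypothetical from the post above; the 5700 XT and console figures are the commonly cited published ones.

```python
# Rough FP32 throughput: TFLOPS = CUs * 64 lanes * 2 ops/clock * clock_MHz / 1e6
def tflops(cus: int, clock_mhz: float) -> float:
    return cus * 64 * 2 * clock_mhz / 1e6

print(f"RX 5700 XT (40 CU @ 1905 MHz boost): {tflops(40, 1905):.2f} TFLOPS")  # ~9.75
print(f"Navi 10 OC (40 CU @ 2230 MHz):       {tflops(40, 2230):.2f} TFLOPS")  # ~11.42
print(f"PS5        (36 CU @ 2230 MHz):       {tflops(36, 2230):.2f} TFLOPS")  # ~10.28
print(f"XBSX       (52 CU @ 1825 MHz):       {tflops(52, 1825):.2f} TFLOPS")  # ~12.15
```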
 
Why would anything be hit if RT isn't being used? RT being shared with TMUs wouldn't hamper their efficiency, I'd think. Just that you couldn't use them concurrently.
It literally depends on the proportion of transistors that are dedicated to ray tracing.

I haven't read the AMD-specific ray tracing patent documents, so I have no idea if we can establish this proportion.
 
Bro Milan and Rome platforms are one and the same.
The nearly-last-SP3 and all.

Yeah, one has to cope with cache and mem actively being loaded to death while the other is not.
Bro, I thought we were talking about Zen 3 vs Willow Cove on desktop.
 
Even if I got the choice of selling a wafer full of Athlon 200/3000G vs. selling a wafer full of A100?
I'm as skeptical of @Bondrewd's statements as the next guy, but this comparison couldn't be farther from fair and honest.

A100 is nvidia's flagship chip (i.e. the one probably selling with the highest margins), and you're comparing it to AMD's CPUs with the lowest margins.
AMD's flagship CPU at the moment is the Epyc 7742 which is selling for $5000-7000 each. It contains one 125mm^2 IO die made on the super cheap GF 14nm, and 592mm^2 worth of 7nm Zen2 chiplets, split into 74mm^2 chiplets, meaning they maximize yields.
The A100 is probably getting higher revenue per chip, but its monstrous monolithic size means yields can't be spectacular, and nvidia can't sell it without putting it on an interposer together with six HBM2 stacks and then on a PCB with voltage regulation. Oh, and then they need to put it into a motherboard with a couple of Epyc 7742 CPUs, which they buy from... AMD.
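
To put rough numbers on the yield argument, here's a minimal sketch using a simple Poisson defect model and the classic dies-per-wafer approximation. The 0.1 defects/cm² density is an illustrative assumption, not a published figure; the die areas are the ~74 mm² Zen 2 chiplet and ~826 mm² A100.

```python
import math

WAFER_DIAMETER_MM = 300
DEFECT_DENSITY_PER_CM2 = 0.1  # illustrative assumption for a mature 7nm-class node

def dies_per_wafer(die_area_mm2: float) -> int:
    # Classic approximation: gross dies minus edge losses
    d = WAFER_DIAMETER_MM
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

def defect_free_fraction(die_area_mm2: float) -> float:
    # Simple Poisson yield model: exp(-defect_density * die_area)
    return math.exp(-DEFECT_DENSITY_PER_CM2 * die_area_mm2 / 100)

for name, area in [("Zen 2 chiplet (~74 mm^2)", 74), ("A100 (~826 mm^2)", 826)]:
    n = dies_per_wafer(area)
    y = defect_free_fraction(area)
    print(f"{name}: ~{n} dies/wafer, ~{y:.0%} defect-free -> ~{int(n * y)} good dies")
```

In practice both vendors salvage partially defective dies by disabling units, so this overstates the gap, but the scale of the difference is the point.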


And then there's the fact that the TAM for Epyc is probably on a completely different order of magnitude compared to an A100.
Intel's quarterly datacenter revenue has been around $7B, which is significantly greater than nvidia's latest projected $1.7B (of which a third comes from Mellanox network hardware sales, meaning GPU sales account for less than $1.2B).


If AMD has the superior product and has successfully gained traction in the server space, it's a no brainer that this should be their focus. Navi is indeed just a child's toy when compared to the revenue potential of Zen 2/3/4.
 
It contains one 125mm^2 IO die
400-something.
Rome sIOD is the size of SKL-SP HCC die, lol.
Oh and then they need to put it into a motherboard with a couple of Epyc 7742 CPUs, which they buy from.. AMD.
Rome is fairly scarce, which is why GOOG has deployed A100...
...with Cascade Lake.
If AMD has the superior product and has successfully gained traction in the server space
They finally did; Q2 was double-digit unit share (not accounting for Octeon-tier edge networking, where Intel now plays and AMD is yet to) and Rome is selling out, more or less.
Navi is indeed just a child's toy when compared to the revenue potential of Zen 2/3/4.
Eh, the mainstream ones (22/23) will make some nice money and will certainly anchor AMD in expensive-ish gaming laptops.
 
If they decide to push it just to beat the 3080, they'd better not screw it up with another 290X jet furnace. Ain't no one buying that.

If the 290X is a furnace then what are the RTX30 series to you?

This new nvidia gen, and all the people who are now super excited about it, just proves that absolute power consumption on desktop graphics cards was never a real concern, but one that was manufactured by marketing divisions and further picked up by fanboys.


Now that nvidia cards pull over 300W, let's see how often we get people - and reviewers - doing that napkin math about how much more they'd need to pay in annual power bills, using completely bonkers numbers like assuming everyone plays 8 hours a day, every single day of the year.
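
For scale, that napkin math works out roughly as follows (the $0.15/kWh price and the 100 W delta between cards are illustrative assumptions):

```python
# Annual cost of an extra 100 W of GPU draw while gaming, at an assumed $0.15/kWh
EXTRA_WATTS = 100
PRICE_PER_KWH = 0.15  # illustrative assumption

def annual_cost(hours_per_day: float) -> float:
    kwh_per_year = EXTRA_WATTS / 1000 * hours_per_day * 365
    return kwh_per_year * PRICE_PER_KWH

print(f"2 h/day of gaming:  ${annual_cost(2):.2f}/year")   # ~$10.95
print(f"8 h/day, every day: ${annual_cost(8):.2f}/year")   # ~$43.80
```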
All of a sudden I also don't see many people complaining that there aren't any mITX cards in the RTX 30 range to put in the tiniest cases they saw on the Internet.

What matters is whether or not the card fits the PSU (most decent 650W ones will), and whether or not the cooling is adequate for its chip and silent.

Nvidia seemingly went above and beyond with their new coolers which is great, but it also proves that the FUD they and their fanboys generated over cards that consume 300W+ is completely manufactured.
 
If the 290X is a furnace then what are the RTX30 series to you?
The 290X was inadequately cooled, which resulted in not very nice temps and an unpleasant sound signature.
Nvidia seemingly went above and beyond with their new coolers which is great, but it also proves that the FUD they and their fanboys generated over cards that consume 300W+ is completely manufactured.
Narrative always shifts and goes wherever the market leader moves.
It's okay, only a month left.
 
Said IHV is probably concerned with its own power measurements.
I'm puzzled.
Why would Nvidia distribute PCAT to select NDA partners just now? They are probably concerned about having a single "highest power number" with their new cards without a corresponding increase in perf. But at the same time, total system consumption always masks graphics card power as well.

If Nvidia was particularly concerned about AMD beating them in perf/watt, why would they make PCAT available right now?
 