Nvidia Post-Volta (Ampere?) Rumor and Speculation Thread

Status
Not open for further replies.
What...?
No, you can run both dies up against games... then compare. The only metric that counts is frames per second, […]
In that case, TU102 wins hands down.
Oh, wait - why would you arbitrarily choose the worst member of the RT-Turing family with respect to basically every metric except die space? What exactly are you comparing? Architectures? In that case: RDNA is not proven to scale beyond 2560 cores. ;)
 
In that case, TU102 wins hands down.
Oh, wait - why would you arbitrarily choose the worst member of the RT-Turing family with respect to basically every metric except die space? What exactly are you comparing? Architectures? In that case: RDNA is not proven to scale beyond 2560 cores. ;)

You are moving the goalposts.

If you are trying to compare different architectures, then take similar chips/dies with the closest transistor counts. Why are you making this so difficult? We are talking in layman's terms here, nothing of intellectual depth. At face value you can CLEARLY see that the TU-106 die has roughly 500 million more transistors than navi10. Yet, in modern games (DX12/Vulkan), navi10 beats TU-106 in nearly everything.


If that isn't bad enough...
navi10 also beats the 2070 Super (a more cut-down TU104) in certain games.

And worse still...
navi10 also beats the 2080 (a less cut-down TU104) in certain games.

Oddly, the 2070 SUPER and the 2080 are NOT similar chips to navi10; they are massive chips with billions more transistors, even after they are cut down. My statement stands as evidence that RDNA 1 is more efficient at gaming than Turing. You can't argue away TU-106.



Subsequently, rdna2 is only going to make Turing's "disadvantages" more evident, given what we already know. If you choose to disbelieve Dr. Su, then go ahead. But nothing since she took over has led the public to believe she's a snake-oil salesman.

Greater IPC is not hard to achieve, and AMD claims they have it. Then, adding 50% more performance per watt on top makes for some mind-blowing theories. For one: does AMD even need to increase CUs at all, if they increased IPC by 15%? Or, if AMD spends its 50% performance-per-watt gain on a die with 50% more CUs (not a 50% larger die), that means a 60 CU navi only slightly bigger than vega20. Rounding up, a ~350mm^2 die... using about as much energy as a 5700xt?

See how you can play with what we already know.
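The speculation above can be sketched as a quick back-of-envelope calculation. All figures below are rough public numbers (Navi 10: 40 CUs, ~251 mm² die, ~225 W board power), and the linear scaling is a naive assumption for illustration, not a real area or power model:

```python
# Back-of-envelope sketch of the "60 CU Navi" speculation above.
# All figures are rough public numbers; the linear scaling is a naive
# assumption (uncore, memory PHYs, etc. don't scale with CU count).

NAVI10_CUS = 40
NAVI10_DIE_MM2 = 251.0   # approx. Navi 10 die size
NAVI10_BOARD_W = 225.0   # approx. 5700 XT board power

scale = 1.5              # "50% more CUs"
cus = int(NAVI10_CUS * scale)        # 60 CUs

# Naively scale the whole die with CU count. This is an upper bound:
# in practice only the shader array grows, so the real die would be
# smaller, closer to the ~350 mm^2 guessed above.
die_upper = NAVI10_DIE_MM2 * scale   # ~376 mm^2

# If the claimed +50% perf/W held, 1.5x the CUs at similar clocks
# could land near the same board power as a 5700 XT.
power_est = NAVI10_BOARD_W * scale / 1.5

print(f"{cus} CUs, <= {die_upper:.0f} mm^2, ~{power_est:.0f} W")
```

Under these (admittedly generous) assumptions, a 60 CU part lands between Vega 20's ~331 mm² and the naive ~376 mm² ceiling, at roughly 5700 XT power.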
 
navi10 also beats the 2080 (a less cut-down TU104) in certain games.
Which games are those exactly? I am curious.
navi10 also beats the 2080 (a less cut-down TU104) in certain games.
The 2060 also beats the 5700XT in certain games .. so, what's your point exactly? There are always going to be edge cases for each vendor.

At face value you can CLEARLY see that the TU-106 die has roughly 500 million more transistors
Yes, because it does AI, VRS, Mesh Shaders and RT; the Navi die doesn't, and it only barely comes out ahead. So who wins here, transistor-wise, exactly?

It also consumes way less power (if you account for the node difference).
 
You are moving the goalposts.
Cannot change things that have not been set yet, sorry. You're trying to narrow the comparison down into one little niche that might fit your narrative. It's not as simple as "the architecture".

BTW, comparing cut-down chips is even less meaningful, since those carry even more dead weight (disabled units). So not even approximately comparable transistor counts are valid any more.
 
Which games are those exactly? I am curious.
The 2060 also beats the 5700XT in certain games .. so, what's your point exactly? There are always going to be edge cases for each vendor.
Yes, because it does AI, VRS, Mesh Shaders and RT; the Navi die doesn't, and it only barely comes out ahead. So who wins here, transistor-wise, exactly?

Not sure which games, though in almost any in-depth multi-card review you can see where navi overperforms. There are many reviews where navi10 beats vega20 (MI50) in many games. How is that possible, if rdna1 doesn't have some foresight into future gaming? It was issued as a hybrid design, by the CEO herself. We are told that rdna2 is AMD's actual "next-gen" gaming architecture, on which AMD has bet the house and rolled out a handful of patents. The same "secret" gaming architecture that Microsoft, Sony and Samsung have already bought into behind closed doors, after seeing it!



But that discussion is for another thread. I was just suggesting that Ampere might not be coming for gamers, and that nVidia is going to have to use a different strategy to combat rdna2.
 
Not sure what games, though almost any in-depth multi-card review you can see where navi overperforms. There are many reviews, where navi10 is beating vega20 (Mi50) in many games.
The 2080 is at least 13% faster than the Radeon VII, so NO, the 5700XT doesn't beat the 2080 in any game. It used to beat it in Forza Horizon 4, but NVIDIA patched that up, and Turing is now quite a bit faster than RDNA in that title.

But that discussion is for another thread. I was just suggesting that Ampere might not be coming for gamers, and that nVidia is going to have to use a different strategy to combat rdna2.
That's laughable, why would NVIDIA not update the uarch for gamers?

You still didn't answer how the 5700XT, on a 7nm node, is barely ahead of the 2070:
- The 5700XT doesn't have RT, ML, VRS or Mesh Shaders, yet it has almost the same transistor count as the 2070. How?
- The 5700XT is on 7nm, yet it consumes about the same amount of power as the 12nm 2070. How?
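For reference, the two questions above can be grounded in rough spec-sheet numbers. The transistor counts, die sizes and board powers below are approximate public figures for the RTX 2070 (TU106) and RX 5700 XT (Navi 10); treat them as ballpark values, not measurements:

```python
# Rough public spec comparison behind the two questions above.
# Figures are approximate: TU106 ~10.8B transistors / 445 mm^2 / 175 W,
# Navi 10 ~10.3B transistors / 251 mm^2 / 225 W board power.

specs = {
    "RTX 2070 (TU106, 12nm)":  {"transistors_b": 10.8, "die_mm2": 445, "board_w": 175},
    "RX 5700 XT (Navi10, 7nm)": {"transistors_b": 10.3, "die_mm2": 251, "board_w": 225},
}

for name, s in specs.items():
    # Transistor density in millions of transistors per mm^2
    density = s["transistors_b"] * 1e3 / s["die_mm2"]
    print(f"{name}: {s['transistors_b']}B transistors, "
          f"{s['die_mm2']} mm^2 ({density:.1f} Mtr/mm^2), {s['board_w']} W")
```

The density gap (~24 vs ~41 Mtr/mm²) is mostly the 12nm-vs-7nm node difference, which is exactly why the poster argues the power and transistor comparisons cut against Navi.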
 
I was just suggesting that Ampere might not be coming for gamers, and that nVidia is going to have to use a different strategy to combat rdna2.

That's quite possible, IMHO, especially since Nvidia has already established a separate architecture for HPC, which AMD now claims to follow with CDNA, since they cannot compete there with RDNA.
 
That's quite possible, IMHO, especially since Nvidia has already established a separate architecture for HPC, which AMD now claims to follow with CDNA, since they cannot compete there with RDNA.
It was the case with Volta -> Turing but there are several reasons to see this as a one-off thing. As a reminder, this wasn't the case with Pascal, and the upcoming NV GPU gen has a lot of similarities with it.
 
It was the case with Volta -> Turing but there are several reasons to see this as a one-off thing. As a reminder, this wasn't the case with Pascal, and the upcoming NV GPU gen has a lot of similarities with it.
Yes and no - GP100 was in more than one regard quite different from the Gaming-Pascals. Maybe they just started giving different codenames later, but the trend IMHO was clear with Pascal already.
 
Yes and no - GP100 was in more than one regard quite different from the Gaming-Pascals. Maybe they just started giving different codenames later, but the trend IMHO was clear with Pascal already.
Well, sure, I fully expect GA100 to be different too. But this has more to do with matching chip production costs to the corresponding markets than with architectures. Turing certainly doesn't have all these tensor cores because they are needed for gaming.
 