Nvidia's 3000 Series RTX GPU [3090s with different memory capacity]

Those who measure a GPU's performance by its VRAM size deserve to be confused.

Reminds me of some good old times lol. Some 9600XT's (256MB variants) had more VRAM than a 9700 Pro, so people thought they were 'better'. I think the general consumer knows better these days (right?). Still have a 9600XT HIS Excalibur lying around somewhere; it came with 256MB. It was no match for my other PC with a 9700 Pro 128MB though :p Both tore through HL2 like it was nothing.
 
3060 30% faster than the PS5 GPU? My ass ;d It will be very similar performance in standard raster, but in RT the GeForce will be faster for sure.
 
That mobile 3080 will likely be slower than desktop 3060Ti.

TDP limited? It's a full GA104 with reasonably high boost clocks so I would be surprised if it's slower than the 3060 Ti.

Seems Nvidia is continuing its annoying trend of only using perfect dies in mobile parts. I know it shouldn't matter, but I have an emotional attachment to fully enabled dies.
 
The 3060 Ti is around 30% faster than the 5700 XT, and the 3060 will not have the same performance as the 3060 Ti ;)

Aha, I was under the impression you were talking about the 3060 Ti, which I don't doubt being 30% faster than the PS5 GPU. No idea about the vanilla 3060 though, I stand corrected.
 
TDP limited? It's a full GA104 with reasonably high boost clocks so I would be surprised if it's slower than the 3060 Ti.
The 3060 Ti is a 200W part and runs at a 1665MHz boost.
A notebook part on the same chip will probably need to be 100W at most, which means it will have to clock significantly lower, probably on the order of 1000MHz less.
Even a fully enabled GA104 at 1GHz would be slower than the 3060 Ti. It should in fact be close to the just-announced 3060.
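As a rough sanity check on that claim, here's a minimal back-of-the-envelope sketch in Python (the core counts and desktop boost clocks are the published GA104/GA106 configurations; the ~1GHz notebook clock is just the assumption from the post above):

```python
# Theoretical FP32 throughput: 2 FLOPs per core per cycle (FMA) * cores * clock
def tflops(cuda_cores: int, clock_ghz: float) -> float:
    return 2 * cuda_cores * clock_ghz / 1000  # result in TFLOPs

# Desktop 3060 Ti: cut-down GA104, 4864 cores at its rated 1665MHz boost
print(tflops(4864, 1.665))  # ~16.2 TFLOPs

# Hypothetical notebook part: fully enabled GA104, 6144 cores held to ~1GHz by a 100W limit
print(tflops(6144, 1.0))    # ~12.3 TFLOPs

# Just-announced desktop 3060: GA106, 3584 cores at 1777MHz boost
print(tflops(3584, 1.777))  # ~12.7 TFLOPs
```

Peak TFLOPs isn't the whole story, of course, but under those assumptions a 1GHz full GA104 does land closer to the desktop 3060 than to the 3060 Ti.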
 
The 3060 Ti is a 200W part and runs at a 1665MHz boost.
A notebook part on the same chip will probably need to be 100W at most, which means it will have to clock significantly lower, probably on the order of 1000MHz less.
Even a fully enabled GA104 at 1GHz would be slower than the 3060 Ti. It should in fact be close to the just-announced 3060.

According to Anandtech, the mobile 3080's TDP range is 80-150W+. With proper cooling it might go to 200W...
 
February only, right? Great, so the first discrete GPU launch of 2021 is a paper launch.
Here's hoping this time period will be enough to produce a volume of cards that isn't only enough for reviewers and bot-wielding scalpers.


3060 30% faster than the PS5 GPU? My ass
Exactly.
RDNA2 has a significantly higher performance-per-TFLOP than Ampere on rasterization, considering the 30 TFLOPs RTX3080 competes with the 20TFLOPs 6800XT.
The 13 TFLOPs RTX3060 would be roughly equivalent to a hypothetical 8.5 TFLOPs RDNA2 GPU, so it can't reach the PS5's rasterization performance, much less beat it by 30%.
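Just to make that scaling explicit, a quick sketch using the post's own TFLOPs figures (the ~10.3 TFLOPs PS5 number is Sony's quoted peak; everything else is the rough ratio argument above):

```python
# Perf-per-TFLOP ratio implied by the 30 TFLOPs 3080 trading blows with the 20 TFLOPs 6800XT
rdna2_per_ampere_tflop = 20.0 / 30.0           # ~0.67: one Ampere TFLOP ~ 0.67 RDNA2 TFLOPs in raster

# Translate the 13 TFLOPs 3060 into "RDNA2-equivalent" TFLOPs
rtx3060_rdna2_equiv = 13.0 * rdna2_per_ampere_tflop   # ~8.7 TFLOPs

ps5_tflops = 10.3                               # Sony's quoted peak for the PS5 GPU
print(rtx3060_rdna2_equiv, rtx3060_rdna2_equiv / ps5_tflops)  # ~8.7 TFLOPs, ~0.84x of the PS5
```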

This is about the same level of bullshit Jen-Hsun Huang pulled when he claimed a laptop with an RTX 2080 Max-Q (which performs similar to a desktop GTX 1070) would be faster than the PS5 and the Xbox Series X.

Nvidia does show a tendency to spew these kinds of outrageous claims whenever they're hard to debunk, which is the case with PC vs. console comparisons.
 
RDNA2 has a significantly higher performance-per-TFLOP than Ampere on rasterization

Nah, it hugely depends on workload. Obviously in situations favouring Ampere's compute they won't be far off, TF for TF.

This is about the same level of bullshit Jen-Hsun Huang pulled when he claimed a laptop with an RTX 2080 Max-Q (which performs similar to a desktop GTX 1070) would be faster than the PS5 and the Xbox Series X.

Don't think they were lying, or that it's a conspiracy or whatever. If that 2080 Max-Q equals a 2070, then it's true, at least for the PS5.

Nvidia does show a tendency to spew these kinds of outrageous claims whenever they're hard to debunk, which is the case with PC vs. console comparisons.

Such as the UE5 demo actually running better on a 2080 Max-Q laptop equipped with an NVMe drive.
 
RDNA2 has a significantly higher performance-per-TFLOP than Ampere on rasterization, considering the 30 TFLOPs RTX3080 competes with the 20TFLOPs 6800XT.
And Ampere has significantly higher performance per rasterized pixel, per unit of texture filtering, per unit of triangle setup, etc. in rasterization, considering the 172 Gigapixels/s RTX 3080 beats the 294 Gigapixels/s 6800XT.
Total rasterization performance is a weighted sum of all those metrics. Obviously TFLOPs is still the most important metric, likely weighted as much as all the rest combined, and it's even more important in compute-heavy workloads such as ray tracing. So I wouldn't be surprised if the 3060 can beat the PS5 by 30% in RT-heavy games, and I wouldn't be surprised if it's slower in pure rasterization (which depends more on GPC count than on TFLOPs).
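Put in numbers, using only the figures quoted in this exchange (a toy comparison that assumes the 3080 and 6800XT land at roughly the same rasterization performance overall):

```python
# Specs as quoted above: (TFLOPs, Gigapixels/s fill rate)
rtx3080  = {"tflops": 30.0, "gpix_s": 172.0}
rx6800xt = {"tflops": 20.0, "gpix_s": 294.0}

# If the two cards perform about the same, each architecture is "more efficient"
# on a different metric:
rdna2_perf_per_tflop_advantage = rtx3080["tflops"] / rx6800xt["tflops"]   # ~1.5x for RDNA2
ampere_perf_per_gpix_advantage = rx6800xt["gpix_s"] / rtx3080["gpix_s"]   # ~1.7x for Ampere

print(rdna2_perf_per_tflop_advantage, ampere_perf_per_gpix_advantage)
```

Which metric dominates then depends on how a given game weights shading against fill rate, geometry and the rest.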
 
And Ampere has significantly higher performance per rasterized pixel, per unit of texture filtering, per unit of triangle setup, etc. in rasterization, considering the 172 Gigapixels/s RTX 3080 beats the 294 Gigapixels/s 6800XT.
Total rasterization performance is a weighted sum of all those metrics. Obviously TFLOPs is still the most important metric, likely weighted as much as all the rest combined, and it's even more important in compute-heavy workloads such as ray tracing. So I wouldn't be surprised if the 3060 can beat the PS5 by 30% in RT-heavy games, and I wouldn't be surprised if it's slower in pure rasterization (which depends more on GPC count than on TFLOPs).

RDNA2/PS5 does RT through its shaders, so the gap would be much higher in RT-heavy games, I think. What exactly did NV quote? 30% faster in normal rendering?
Ampere (or even Turing) would do quite well in UE5-like workloads, I think. The 3080 being spec'd at 30TF is not a lie at all; modern engines will like the extra compute power.
Still, a close-to-25TF 6900 XT is also very impressive; when OC'ed it has the potential to achieve around 30TFs' worth of rasterization ability.
 
And Ampere has significantly higher performance per rasterized pixel, per unit of texture filtering, per unit of triangle setup, etc. in rasterization, considering the 172 Gigapixels/s RTX 3080 beats the 294 Gigapixels/s 6800XT.
Total rasterization performance is a weighted sum of all those metrics. Obviously TFLOPs is still the most important metric, likely weighted as much as all the rest combined, and it's even more important in compute-heavy workloads such as ray tracing. So I wouldn't be surprised if the 3060 can beat the PS5 by 30% in RT-heavy games, and I wouldn't be surprised if it's slower in pure rasterization (which depends more on GPC count than on TFLOPs).

Yes, you love Ampere and the 3060 and how much it would hypothetically beat the PS5 in a hypothetical RT-heavy scenario that will never happen, because games made for the PS5 won't run RT-heavy scenarios that only run decently on RTX 30 hardware.



The slide in question makes no reference to raytracing whatsoever, and it even refers to the 3060 Mobile, with lower clocks and TDP.
Though since some have now been fully indoctrinated into thinking RT performance is the only metric that matters and is present in every game in existence, I guess it's normal for them to assume RT performance applies in all circumstances, forever.
 
Here's hoping this time period will be enough to produce a volume of cards that isn't only enough for reviewers and bot-wielding scalpers.
December Steam hardware survey shows 3080 at 0.48%, which is a very fast 2-month ramp. Wish I had historical data to compare vs. prior $700 GPU launches, but comparing that number with other current numbers, e.g., 1650Ti (a 2019 GPU with both desktop and mobile parts) at 0.57% and 5700XT at 0.89% tells me that there are a large number of 3080s in the hands of gamers. Maybe they were all bought from scalpers, I don't know, but the numbers are high. There's no question that both Nvidia and AMD have supply problems, and demand is so high that to end users hitting F5 it seems like the cards don't exist. Sucks to be us, sucks to be them.
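For a rough sense of scale, a sketch converting those survey shares into unit counts (the ~120 million monthly active accounts is Valve's own 2020 figure, and the survey is an opt-in sample, so treat these as order-of-magnitude estimates only):

```python
# Convert Steam hardware survey shares into rough system counts
steam_monthly_active_users = 120_000_000   # Valve's reported 2020 MAU (approximate)

shares = {"RTX 3080": 0.0048, "GTX 1650 Ti": 0.0057, "RX 5700 XT": 0.0089}
for gpu, share in shares.items():
    print(f"{gpu}: ~{share * steam_monthly_active_users / 1e6:.2f} million systems")
# RTX 3080: ~0.58 million, GTX 1650 Ti: ~0.68 million, RX 5700 XT: ~1.07 million
```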

On a side note, I wonder what happens towards the latter half of the year once the vaccines kick in and everyone is up and about, partying like it's 2019 all over again. Do we see a huge crash in the at-home entertainment business? A glut of chip inventory?
 
Yes, you love Ampere and the 3060 and how much it would hypothetically beat the PS5 in a hypothetical RT-heavy scenario that will never happen, because games made for the PS5 won't run RT-heavy scenarios that only run decently on RTX 30 hardware.



The slide in question makes no reference to raytracing whatsoever, and it even refers to the 3060 Mobile, with lower clocks and TDP.
Though since some have now been fully indoctrinated into thinking RT performance is the only metric that matters and is present in every game in existence, I guess it's normal for them to assume RT performance applies in all circumstances, forever.

Ease off the fallacies...
If a game has DXR, I will use it.
Why?
Because it looks BETTER than the alternative.
Simple as that.
 
I wonder how far downmarket this generation will go. The $100 segment can definitely benefit from HDMI 2.1. I want a cheap 2.1 card to stream at 120fps.
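For what it's worth, the bandwidth math is what pushes that use case onto HDMI 2.1. A rough sketch (assuming 4K as the target and ignoring blanking intervals and DSC, so real requirements are somewhat higher):

```python
# Approximate uncompressed video bandwidth in Gbps: width * height * refresh * bits per pixel
def video_gbps(width: int, height: int, hz: int, bits_per_pixel: int) -> float:
    return width * height * hz * bits_per_pixel / 1e9

needed = video_gbps(3840, 2160, 120, 24)   # 4K 120Hz, 8-bit RGB: ~23.9 Gbps of pixel data

hdmi_2_0_payload = 14.4   # Gbps usable (18 Gbps TMDS with 8b/10b encoding)
hdmi_2_1_payload = 42.7   # Gbps usable (48 Gbps FRL with 16b/18b encoding)

print(needed, needed <= hdmi_2_0_payload, needed <= hdmi_2_1_payload)
# ~23.9 Gbps: too much for HDMI 2.0, comfortable within HDMI 2.1
```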
 