Benetanegia
Basing perf/W on TDP instead of actual measurements seems pointless too, especially since they're apparently reporting TGP rather than TDP. The 2080 Super, 2080 Ti and Titan RTX all had a 250 W TDP; their power consumption was not the same.
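To make that perf/W point concrete, here is a minimal Python sketch contrasting perf-per-watt computed from the rated 250 W figure with perf-per-watt computed from a measurement. The relative-performance and measured-draw numbers are hypothetical placeholders, not review data.

```python
# Minimal sketch: cards sharing a 250 W TDP/TGP label do not draw the
# same power under load, so perf/W computed from the spec-sheet number
# diverges from perf/W computed from a measurement.
# Relative perf and measured draw below are HYPOTHETICAL placeholders.
cards = {
    # name: (relative_perf, rated_tdp_w, measured_draw_w)
    "2080 Super": (1.00, 250, 245),
    "2080 Ti":    (1.25, 250, 270),
    "Titan RTX":  (1.30, 250, 280),
}

for name, (perf, tdp, measured) in cards.items():
    print(f"{name:12s} perf/W (TDP) = {perf / tdp:.5f}   "
          f"perf/W (measured) = {perf / measured:.5f}")
```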
Wow, Kool-Aid much? Not that it matters anyway; I'm seeing 20-36 TF GPUs with an 80 to 100% increase over Turing, plus huge improvements in RT, DLSS etc., at good prices.
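For reference, the 20-36 TF range follows from the usual peak-FP32 formula: 2 FMA ops per shader per clock times shader count times boost clock. The shader counts and boost clocks below are the announced launch specs as best I recall them, so treat the results as approximate.

```python
# Where the "20-36 TF" range comes from: peak FP32 = 2 ops (FMA) per
# shader per clock x shader count x boost clock.
# Shader counts and boost clocks are the announced launch specs as I
# recall them; treat them as approximate.
announced = {
    # name: (FP32 shaders, boost clock in GHz)
    "RTX 3070": (5888, 1.73),
    "RTX 3080": (8704, 1.71),
    "RTX 3090": (10496, 1.70),
}

for name, (shaders, boost_ghz) in announced.items():
    tflops = 2 * shaders * boost_ghz / 1000  # GFLOPS -> TFLOPS
    print(f"{name}: ~{tflops:.1f} TFLOPS FP32")
```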
Indeed, biggest leap ever. Even going from a 2080 Ti to a 3070 is a huge upgrade, 2080 to 3080 even more so. I'm sure we will see 3080 Ti(s) later on.
Or have they learned a valuable lesson from Turing's "cryptomania-level prices during a crypto correction" fiasco? Is Nvidia anticipating stronger competition this time?
Nvidia has always used TGP. The 2080 Ti has a TGP (per Nvidia) of 250 watts and the 3070 of 220.
Are people really denying that consoles are more efficient? It's pretty much a fact.
Anyway, as for the architecture itself: this really seems to show the weakness of having a single arch serve machine learning/compute and gaming at the same time. IPC has dropped dramatically, probably bottlenecked by a lack of SRAM and/or memory bandwidth, showing the fundamental arch isn't designed around graphics; instead it looks more like AMD's own CDNA split, concentrated on compute, which doesn't need a giant amount of cache to go with it.
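A rough way to see the bandwidth side of that argument is bytes of DRAM bandwidth per theoretical FP32 FLOP. Using the published launch specs (quoted from memory, so approximate), the ratio roughly halves from the 2080 Ti to the 3080:

```python
# Rough bytes-per-FLOP check behind the bandwidth-bottleneck argument:
# peak FP32 roughly doubles from the 2080 Ti to the 3080 while memory
# bandwidth grows far less, so each theoretical FLOP has less DRAM
# bandwidth behind it. Figures are published launch specs from memory.
gpus = {
    # name: (peak FP32 TFLOPS, memory bandwidth GB/s)
    "RTX 2080 Ti": (13.4, 616),
    "RTX 3080":    (29.8, 760),
}

for name, (tflops, bw_gbs) in gpus.items():
    bytes_per_flop = bw_gbs / (tflops * 1000)  # GB/s per GFLOP/s
    print(f"{name}: {bytes_per_flop:.3f} bytes of bandwidth per FP32 FLOP")
```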
Consoles increasingly resemble PC configs, and PC hardware/APIs are getting closer to console efficiency (DX12_2, RTX IO), so it's less of an issue than in the past.
I think devs on this very forum have stated that "low level" PC APIs are not even close to what is available on consoles, particularly the PS4. There are several facets of console design that allow a level of efficiency the PC can never match. I don't think we can claim anything about RTX IO at this point in time.
I think this is where the cheaper Samsung 8nm process comes into play. I believe that if they had gone with TSMC 7nm, the PPW would be better, but at $799 instead of $699. Which would you choose?
This gen, Nvidia flops became AMD flops and vice versa. It's probably premature to say before seeing an SM diagram, but assuming the "slapped another FP32 SIMD in there" rumour is true, we're seeing IPC go down as a direct result of the changes between gaming and HPC Ampere. One math instruction issued per clock, three SIMDs (INT32, FP32, FP32) that each take 2 clocks - there is an obvious bottleneck there.
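Here is a toy issue-rate model of that bottleneck, assuming the rumoured layout: one warp scheduler issuing one math instruction per clock to three 16-wide SIMDs (INT32, FP32, FP32), each taking 2 clocks per 32-thread warp instruction. This is a sketch of the rumour, not a confirmed description of the GA10x SM.

```python
# Toy issue-rate model for the bottleneck described above, ASSUMING the
# rumoured layout: one scheduler issuing one math instruction per clock
# to three 16-wide SIMDs (INT32, FP32, FP32), each needing 2 clocks per
# 32-thread warp instruction. Not a confirmed GA10x SM description.
def fp32_utilization(int_fraction: float) -> float:
    """Fraction of peak FP32 throughput for a given INT32 instruction mix."""
    issue_limit = 1.0    # instructions issued per clock per scheduler
    int_capacity = 0.5   # one 16-wide pipe, 2 clocks per warp instruction
    fp_capacity = 1.0    # two 16-wide FP32 pipes combined
    # Sustainable issue rate is capped by the scheduler and by whichever
    # pipe fills up first for the given instruction mix.
    rate = min(issue_limit,
               int_capacity / int_fraction if int_fraction > 0 else issue_limit,
               fp_capacity / (1 - int_fraction) if int_fraction < 1 else issue_limit)
    return rate * (1 - int_fraction) / fp_capacity

for f in (0.0, 0.2, 0.35, 0.5):
    print(f"{f:.0%} INT32 in the mix -> {fp32_utilization(f):.0%} of peak FP32")
```

Under those assumptions, FP32 utilization falls one-for-one with the share of INT32 instructions in the mix, since every INT32 issue steals a slot that one of the two FP32 pipes needed, which is one plausible reading of the "IPC goes down" claim.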