Intel and NV are not in consoles, they are bad.
Intel is in Google Stadia...
https://wccftech.com/nvidia-geforce-rtx-3080-graphics-card-specs-leak-2100-mhz-gpu-19-gbps-gddr6x-memory/

Additional specifications of NVIDIA's GeForce RTX 3080 Ampere Gaming graphics card have leaked out over at UserBenchmark which were spotted by Rogame.
...
Moving over to the specifications, the NVIDIA GeForce RTX 3080 was spotted with 10 GB of VRAM which was running 19 Gbps GDDR6X memory dies (4750 MHz QDR effective) across a 320-bit bus interface. This should deliver a memory bandwidth of 760 GB/s. This is almost a 53% jump in the memory bandwidth over the GeForce RTX 2080 SUPER which is very impressive. As for the GPU clocks, the card has a BIOS limit set to 2100 MHz so we should be looking at clock speeds similar to the Turing GPUs which also peak around 2.1 GHz.
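The bandwidth figures in the quote follow from simple arithmetic (a quick sketch to check them; the RTX 2080 SUPER numbers are its known 15.5 Gbps GDDR6 on a 256-bit bus):

```python
# Peak memory bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8 bits-per-byte
def bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

rtx_3080 = bandwidth_gbs(19.0, 320)    # rumored GDDR6X config
rtx_2080s = bandwidth_gbs(15.5, 256)   # RTX 2080 SUPER: 15.5 Gbps GDDR6, 256-bit

print(rtx_3080)                  # 760.0 GB/s
print(rtx_2080s)                 # 496.0 GB/s
print(rtx_3080 / rtx_2080s - 1)  # ~0.53, the "almost 53%" jump
```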
...
Moving on to the GeForce RTX 3080, the rumor reports that the card will be featuring the GA102-200-KD-A1 SKU. This cut down SKU will feature the same 4352 CUDA cores as the RTX 2080 Ti that will be arranged in a total of 68 SMs. The card is reportedly going to feature up to 20 GB of memory that is also going to be GDDR6X. Assuming the memory is running at 19 Gbps across a 320-bit bus interface, we can expect a bandwidth of up to 760 GB/s.
...
In addition to the GeForce RTX 3080 graphics card specifications leak, Chiphell has posted what seems to be an alleged performance chart of the GeForce RTX 30 series lineup which includes the GeForce RTX 3090, GeForce RTX 3080, GeForce RTX 3070 Ti, GeForce RTX 3070 & the GeForce RTX 3060. The chart claims to be an average performance measurement of the graphics cards purely in gaming benchmarks at various resolutions from 1080p and all the way up to 4K. It also showcases the performance per watt gains for each respective generation. Again, this chart should be taken with a grain of salt as the information on performance is yet to be verified.
I think this performance graph will end up being very accurate regardless of how legitimate it is.

NVIDIA GeForce RTX 3080 ‘Ampere’ Graphics Card Maxes Out at 2.1 GHz GPU Clock, Features 19 Gbps GDDR6X Memory
August 16, 2020
https://wccftech.com/nvidia-geforce-rtx-3080-graphics-card-specs-leak-2100-mhz-gpu-19-gbps-gddr6x-memory/
Agreed. Even wccftech mentioned to take the performance chart "with a grain of salt".

I'm not buying that performance chart. There's been no mention to date, that I'm aware of, of standard and Ti versions of each performance tier.
Yeah.

NVIDIA core clocks are basically unchanged
>expect
while we expect AMD's clocks to increase by 20%?
So, if true, they're adamant about 390-400 Watts for Titan Ampere?
edit: Which in turn means - following the linked graph - that Titan Ampere is roughly 50% faster than 2080 Ti but will use >55% more power (390+ vs. 250 watts) despite being on a more advanced node and using more power efficient memory? I find that hard to believe.
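The skepticism above is just a ratio check; working it through with the rumored figures (none of which are confirmed):

```python
# Sanity-check the perf-per-watt reasoning: a card ~50% faster than a
# 2080 Ti (250 W) while drawing 390+ W would barely improve efficiency.
perf_gain = 1.50          # rumored relative performance (2080 Ti = 1.0)
power_ratio = 390 / 250   # = 1.56, i.e. the ">55% more power"

perf_per_watt = perf_gain / power_ratio
print(power_ratio)    # 1.56
print(perf_per_watt)  # ~0.96 -> slightly worse perf/W than a 2080 Ti
```

That perf/W coming out *below* 1.0 on a newer node is exactly why the chart looks dubious.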
Bold.
Lisa likes winning too much, sorry.
Either way the perf wars thread is two blocks down.
Of course, why would basic organization change?

Series X pretty much revealed what RDNA2 looks like, I don't see major changes besides the addition of RT.
Lisa likes winning more than you think she does.

Expecting the 3080 to be around 15-20% better than the 2080 Ti, which is where I expect Big Navi to be.
A thousand cuts made whole.

Without the change, traditional performance won't be anything more than a 5700 XT scaled up.
It can.

Without the product that can "win"?
Wait and witness.
Oh, very much can.
You sincerely underestimate her will to win.
Like, she won the CPUs already, mop-up time there now.
Describing their very arcane VLSI flow is hard and dumb (and you'll get your ISSCC presentation next year anyway).

Maybe you should explain how AMD can improve efficiency by 2x with Navi #2 instead of writing those one-liners...
Engineering.

Can you come up with any better reason than Lisa's "will to win"?