> with the Advanced - not Pro - upgrade.

Don't worry about that, Advanced and Pro are the same save for some command-line tools; Pro is meant for enterprise use.
Apparently the 5070 Ti has the same performance as the 4080 Super.
> I've a hard time believing that given the relatively tiny jump from there to the 5080 but significant price increase. If true though I would very likely get one - mostly for the 16GB VRAM but the performance boost would be nice too.

You could *try* to get one.
> I've a hard time believing that given the relatively tiny jump from there to the 5080 but significant price increase.

$750 to $1000 is a +33% price increase, but it's the same chip with the same VRAM and all, so the performance difference should be somewhat small - likely less than the difference in price.
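The price-jump figure above is easy to check (MSRPs as stated in the post; street prices will vary):

```python
# Rough price math for the 5070 Ti -> 5080 step discussed above.
# MSRPs are the figures from the post, not confirmed retail prices.
msrp_5070ti = 750   # USD
msrp_5080 = 1000    # USD

increase = (msrp_5080 - msrp_5070ti) / msrp_5070ti
print(f"Price increase: {increase:.0%}")  # 33%
```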
NVIDIA definitely runs tensor and fp32 ops concurrently, especially now with their tensor cores busy almost 100% of the time (doing upscaling, frame generation, denoising, HDR post processing, and in the future neural rendering).
The latest NVIDIA generations have become markedly better at mixing all three workloads (tensor + ray + fp32) concurrently. I read somewhere (I can't find the source now) that ray tracing + tensor is the most common concurrent pairing, followed by ray tracing + fp32 and tensor + fp32.
Concurrent execution of CUDA and Tensor cores (forums.developer.nvidia.com):
"Yes, that is what it means. I don't know where you got that. If the compiler did not schedule tensor core instructions along with other instructions, what else would it be doing? NOP? Empty space? Maybe you are mixing up what the compiler does and what the warp scheduler does. The warp..."
I need help understanding how concurrency of CUDA Cores and Tensor Cores works between Turing and Ampere/Ada? (forums.developer.nvidia.com):
"There isn't much difference between Turing, Ampere and Ada in this area. This question in various forms comes up from time to time, here is a recent thread. It's also necessary to have a basic understanding of how instructions are issued and how work is scheduled in CUDA GPUs, unit 3 of this..."
> Will we see a future where high end gaming PCs have to be connected to 240V outlets like you use for your dryer? A PC with a 5090 and 14900K could already use half the continuous capacity (80%) of a 120V 20A breaker.

Nah, running those lines is actually a lot more expensive than regular cables, and presumably you'd need a bunch of them so people could plug things in anywhere they wanted. Also, a 120V 20A outlet can run 1920W continuously (80% of its 2400W max load).
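A quick sanity check on the circuit math above, using the standard 80% continuous-load rule. The component wattages are assumptions for illustration (roughly a 5090's rated board power plus a 14900K under load plus the rest of the system), not measured figures:

```python
# Continuous capacity of a North American 120 V / 20 A branch circuit.
volts, amps = 120, 20
continuous_w = volts * amps * 0.80  # 80% rule for continuous loads
print(f"Continuous capacity: {continuous_w:.0f} W")  # 1920 W

# Hypothetical high-end build (assumed figures):
# ~575 W GPU + ~250 W CPU + ~150 W for the rest of the system.
system_draw_w = 575 + 250 + 150
print(f"Share of circuit: {system_draw_w / continuous_w:.0%}")  # 51%
```

So "about half the continuous capacity" checks out under those assumptions.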
Informative threads. Confirms my understanding of how instruction issue works on Nvidia.
It's interesting that Nvidia never talks about the uniform math pipeline while AMD marketing makes a big deal about the scalar pipe. Presumably they perform similar functions.
> Why would they make anything on an AD102 when it's EOL since November and there's GB202 in production?

They might be obligated, due to datacenter/prosumer stuff, to keep sufficient supply available for quite some time longer though? If there is a 96GB version, it's not GeForce for sure.
> The discussion presupposes that it's possible to fit 96 GB on an AD102, which it just isn't. 12 memory channels, two DRAMs per channel, 2 GB per DRAM = 48 GB.

Yeah, pretty obvious: 3GB modules + 512-bit bus = 48GB/96GB. That said, I expect RTX 60 GeForce to use 3GB modules across the board (maybe the 6090 gets 42GB and a 6090 Ti gets 48GB? And hopefully RDNA 5/UDNA Gen 1 has a top-end option as well).
A GB202 with 96 GB makes sense and is almost certainly going to be the RTX 6000 Blackwell.
> Good point. I know Micron talked about creating 32 Gbit GDDR6 modules, but I can't find anywhere suggesting they ever did or they have any for sale. The spec does allow for such a capacity...

Following on that, I don't expect 32 Gbit/4GB modules to appear until something like RTX 70, UDNA Gen 2/RDNA 6, and maybe Xe4 dGPU, if that's 2028.