Nvidia Ampere Discussion [2020-05-14]

I recall intentionally waiting for the DDR when the SDR launched. I don't think there was any sense of a bait and switch with the DDR version -- AFAIK the reviews made it known that the DDR was coming only a few months later. The SDR was simply a bad buy no matter how you sliced it; it was mostly sold on marketing and tech demos, because its raw fillrate and bandwidth grunt weren't a big step up over the TNT2 Ultra and 3500. I didn't get many months of enjoyment out of the DDR before it was handily eclipsed by the Gef2 GTS.

The fact that Nvidia is harkening back to the Gef256 launch is concerning. Shades of the GeforceFX and FuryX. Hardware that's legitimately good tends to sell itself without the need for empty nostalgia.
 
When has Nvidia ever held back on marketing, whether the product was good or not? I don't think they've ever let a flagship product "speak for itself".

I missed the early days of Voodoo/TNT hype. My first card was a PNY Geforce 2 Pro. Good times though.

Honestly, while I'm not "hyped", I'm pretty curious about the RT performance vs the AMD solution, and how DLSS evolves with Ampere.
 
NV seems confident about their Ampere gaming GPUs, judging by all their ads/articles on Facebook. Maybe they have a reason to be; even Turing on 7nm would be very competitive with RDNA2, if not more.
 
Well, they have a reason to be confident - their competitor has a recent history of underperforming, hot, and delayed flagship products. There's nothing pointing to this gen being any different.

Besides, in 2020 nV will probably launch a handful of SKUs based on a few chips, while AMD will almost certainly deliver just Navi 21.
 
Accelerating Standard C++ with GPUs Using stdpar
August 4, 2020
Historically, accelerating your C++ code with GPUs has not been possible in Standard C++ without using language extensions or additional libraries.
...
Now you can! NVIDIA recently announced NVC++, the NVIDIA HPC SDK C++ compiler. This is the first compiler to support GPU-accelerated Standard C++ with no language extensions, pragmas, directives, or non-standard libraries. You can write Standard C++, which is portable to other compilers and systems, and use NVC++ to automatically accelerate it with high-performance NVIDIA GPUs.
...
The recently announced NVC++ compiler included in the NVIDIA HPC SDK enables you, for the first time, to program NVIDIA GPUs using completely standard and fully portable C++ constructs. C++ Parallel Algorithm invocations instrumented with appropriate execution policies are automatically parallelized and offloaded to NVIDIA GPUs.
https://developer.nvidia.com/blog/accelerating-standard-c-with-gpus-using-stdpar/
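For anyone wondering what that looks like in practice, here's a minimal sketch (the vector size and lambda are just illustrative, not from the post): a completely standard C++17 parallel algorithm that nvc++ can offload to the GPU when built with -stdpar, and that any other conforming compiler can still build as ordinary parallel C++.

```cpp
#include <algorithm>
#include <execution>
#include <vector>
#include <cstdio>

int main() {
    // Plain standard C++ data -- no device allocations, no pragmas.
    std::vector<float> x(1 << 20, 2.0f);

    // A parallel algorithm with a standard execution policy. Built with
    // "nvc++ -stdpar" this can be offloaded to an NVIDIA GPU; with any
    // other C++17 compiler it runs as regular (CPU) parallel code.
    std::transform(std::execution::par_unseq, x.begin(), x.end(), x.begin(),
                   [](float v) { return v * v + 1.0f; });

    std::printf("x[0] = %f\n", x[0]);  // expect 5.0
    return 0;
}
```

Per the blog post, building is something like "nvc++ -stdpar -o demo demo.cpp"; the exact flags may vary by HPC SDK version.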
 
Recent DLSS-inspired discussion has been moved to its own thread. Some posts maybe went beyond DLSS, but the concerns around it and how to benchmark it are very similar, so they were moved too. https://forum.beyond3d.com/threads/nvidia-dlss-antialiasing-discussion-spawn.60896/

Thanks Malo!

EDIT: Also, as a further note, please don't bring AMD into this Nvidia-focused thread. Feel free to have those discussions in this other thread, or create a more fitting one -- https://forum.beyond3d.com/threads/speculation-gpu-performance-comparisons-of-2020-spawn.61885/
 