So here's what three musketeers in Brazil did: they removed the 16 Gbps GDDR6 from a Galax RTX 2080 Ti Hall of Fame, and then planted that same GDDR6 memory on a GeForce RTX 2080 Ti, which normally has 14 Gbps GDDR6. Surprisingly enough, that actually worked.
Obviously I need to make a few remarks here. There is no fully enabled GPU, ergo the shader processor count is the same (a genuine Super model would have a different shader count). Secondly, there's not one 2080 Ti card we tested that could not get its memory tweaked to 16 Gbps; in fact, we clocked the MSI Lightning at 16,548 MHz (effective data rate).
But the 16 Gbps GDDR6 memory can be tweaked as well, and the modders ended up at 17,200 MHz, which is 3,200 MHz more than the memory on a standard 2080 Ti. A higher frequency was not possible on that memory because, according to the modders, the memory controller would not allow it.
Brazilians make a 'RTX 2080 Ti Super' by fitting it with faster GDDR6
https://www.guru3d.com/news-story/brazilians-fit-rtx-2080-ti-super-with-faster-gddr6.html
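To put the data rates quoted above in perspective, peak memory bandwidth is just the bus width in bytes times the per-pin data rate. A minimal sketch, assuming the RTX 2080 Ti's standard 352-bit memory bus (the data rates come from the article):

```python
# Sketch: GDDR6 peak bandwidth for the per-pin data rates quoted above.
# Assumes the RTX 2080 Ti's 352-bit memory bus.
# Bandwidth (GB/s) = (bus width in bits / 8) * data rate in Gbps.

BUS_WIDTH_BITS = 352

def bandwidth_gbs(data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s for a given per-pin data rate."""
    return (BUS_WIDTH_BITS / 8) * data_rate_gbps

for rate in (14.0, 16.0, 17.2):
    print(f"{rate:>4} Gbps -> {bandwidth_gbs(rate):.1f} GB/s")
```

Under those assumptions, the jump from a stock 14 Gbps to the modded 17,200 MHz effective rate works out to roughly an extra 140 GB/s of peak bandwidth.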
https://wccftech.com/nvidia-geforce-rtx-2060-price-cut-299-usd-tackle-amd-rx-5600-xt/

During our visit to the EVGA booth at CES 2020, we came by an interesting NVIDIA GeForce RTX 2060 card with an even more interesting price point. This card is the EVGA GeForce RTX 2060 KO Edition, and it will cost just $299 US compared to the current RTX 2060 pricing of $349 US.
...
At $299 US, the new RTX 2060 could offer much better value than the Radeon RX 5600 XT even while being $20 US more expensive. AMD has shown the RX 5600 XT to be up to 15% faster than a GTX 1660 SUPER and 20% faster than a GTX 1660 Ti.
The NVIDIA GeForce RTX 2060, on the other hand, is up to 20% faster than the GeForce GTX 1660 SUPER while offering added features such as RTX, DLSS, etc. It would be interesting to see how the Radeon RX 5600 XT and the price-cut GeForce RTX 2060 perform against each other in the coming weeks.
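As a rough sanity check on the value claim above, here is the raw performance-per-dollar math under two assumptions not stated in the article: that both vendor "up to" figures use the GTX 1660 SUPER as the common baseline (normalized to 1.0), and that the RX 5600 XT lands at $279 ($20 below the RTX 2060's $299):

```python
# Hedged sketch: performance per dollar from the quoted vendor claims.
# Assumes both "up to" figures share the GTX 1660 SUPER as baseline (= 1.0)
# and an assumed $279 price for the RX 5600 XT.

def perf_per_dollar(relative_perf: float, price_usd: float) -> float:
    return relative_perf / price_usd

rtx_2060  = perf_per_dollar(1.20, 299.0)  # up to 20% faster, $299
rx_5600xt = perf_per_dollar(1.15, 279.0)  # up to 15% faster, $279

# A ratio near 1.0 means raw perf/$ is roughly a wash.
print(f"RTX 2060 vs RX 5600 XT perf/$ ratio: {rtx_2060 / rx_5600xt:.3f}")
```

Under those assumptions, raw performance per dollar comes out nearly even, so the value argument rests largely on the added feature set (RTX, DLSS) rather than raw throughput.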
"KO" is not an Nvidia version, it's something only that specific vendor is doing. Other vendors are simply buying them and releasing as regular 2060's, which is what Nvidia intended. If Nvidia runs out then that specific vendor won't have anymore "KO" versions to sell, that's all. It's not a real product.looks like 2060 ko's are 2070's that failed some tests, (edit: 2080's that failed)
The interesting times will happen if Nvidia runs out of failed chips.
Will they secretly ship fully working 2070 GPUs flashed as 2060 KO's?
http://www.cgchannel.com/2020/02/review-nvidia-geforce-rtx-2080-ti/

As well as seeing how the RTX 2080 Ti performs with a representative range of DCC applications, I will be testing how far Nvidia’s RTX hardware can accelerate GPU rendering, and attempting to answer a couple of questions that readers posed in response to the previous review: how important is memory capacity when choosing a GPU for production work, and how important is it to use Nvidia’s Studio GPU drivers?
...
What is interesting to see is how performance increases when OptiX is enabled, and the software can offload ray tracing calculations to the RT cores of the RTX cards. In Redshift, the impact is relatively small – although bear in mind that version 3.0 is still in early access – but in the V-Ray benchmark, performance increases by 33-35%, and in the Blender benchmark, by 85-105%.
https://www.guru3d.com/articles_pages/gpu_compute_performance_review_with_20_graphics_cards,1.html

When it comes to content creation (render) titles, some applications will automatically opt for what they deem best, while others can even trigger more advanced extended APIs. Blender, for example, can now also make use of OptiX as opposed to CUDA, but only if you have a GeForce RTX based graphics card. OptiX is part of Nvidia GameWorks and a high-level API, meaning that it is designed to encapsulate the entire algorithm of which ray tracing is a part, not just the ray tracing itself. This is meant to allow the OptiX engine to execute the larger algorithm with great flexibility without application-side changes. We can only assume that OptiX triggers and makes use of the RT (ray tracing) cores on GeForce RTX series graphics cards. So that's three Compute APIs available.
Also, let me state that it's not one or the other; hybrid rendering is what is applied the most: running OpenCL or CUDA on both the GPU and CPU. OpenCL/CUDA rendering can be performed on CPUs and GPUs at the same time, allowing the Compute code to combine your CPUs and GPUs and utilize all available resources. For this article, we measure the effect of the GPU only.
In this article, we'll test three content rendering titles with twenty graphics cards and see what that brings us in performance. We do so with the three aforementioned applications as well as the three available Compute APIs these applications can utilize.
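The hybrid-rendering idea above — letting the CPU and GPU chew through the same job together — can be sketched as proportional work splitting. Everything here (device names, throughput numbers, the 1,000-tile job) is hypothetical, purely to illustrate the scheduling idea, not any renderer's actual implementation:

```python
# Hypothetical sketch of hybrid rendering: split a fixed pool of render
# tiles across devices in proportion to their (assumed) throughput, so
# that all devices finish at roughly the same time.

def split_tiles(total_tiles: int, throughputs: dict) -> dict:
    """Assign tiles proportionally to each device's tiles-per-second rate."""
    total_rate = sum(throughputs.values())
    shares = {dev: int(total_tiles * rate / total_rate)
              for dev, rate in throughputs.items()}
    # Hand any rounding remainder to the fastest device.
    remainder = total_tiles - sum(shares.values())
    fastest = max(throughputs, key=throughputs.get)
    shares[fastest] += remainder
    return shares

# Assumed throughputs: the GPU renders ~8x faster than the CPU here.
plan = split_tiles(1000, {"GPU (CUDA)": 8.0, "CPU": 1.0})
print(plan)  # GPU gets 889 tiles, CPU gets 111
```

Real renderers schedule dynamically (slow devices simply pull fewer tiles from a shared queue), but the static split above captures why adding the CPU on top of the GPU only helps in proportion to its relative throughput.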
https://www.pugetsystems.com/labs/a...-to-Make-Dual-NVLink-Work-on-Windows-10-1688/

In light of that experience, I've taken time in the last couple of weeks to go through and test several different NVIDIA drivers on two configurations: four GeForce RTX 2080 Ti cards as well as four Quadro RTX 6000s, both set up in two physically bridged pairs. In this article I will chronicle what I found worked, what didn't work or behaved oddly, and where we are at with this as a company as a result.
...
Can you make two pairs of NVLink cards work in Windows 10? Yes, by using older drivers... but Windows may update them at any time, potentially messing up the configuration. You also wouldn't be able to utilize improved performance or added features from newer drivers, and there is no guarantee that future versions of Windows will work with the older drivers. All in all, not a great solution.
When you're working, playing, and creating content from home, you want to keep background noise to a minimum. NVIDIA RTX Voice is a new plugin that leverages NVIDIA RTX GPUs and their AI capabilities to remove distracting background noise from your broadcasts and voice chats.
RTX Voice allows users to:
Go live or attend a meeting remotely without worrying about finding a quiet place.
Suppress background noise from players in loud environments, making incoming audio easier to understand.