This video shows exactly the boundary of ray tracing. When the pilot goes outside into a wide area, ray tracing stops working; that's why he comes back.
Some info on how the RT core works, from someone who claims to have worked on it:
The RT core essentially adds a dedicated pipeline (ASIC) to the SM to calculate ray-triangle intersections. It can access the BVH and configure some L0 buffers to reduce the latency of BVH and triangle data access. The request is made by the SM: the instruction is issued, and the result is returned to the SM's local registers. Interleaved intersection instructions can run concurrently with other arithmetic or memory I/O instructions. Because it is ASIC-specific circuit logic, performance/mm² can be increased by an order of magnitude compared to using shader code for the intersection calculation. Although I have left NVIDIA, I was involved in the design of the Turing architecture; I was responsible for variable rate shading. I am excited to see the release now.
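For context, here is a minimal sketch of the kind of ray-triangle intersection test the RT core accelerates in fixed-function hardware. The Möller–Trumbore algorithm shown below is the standard shader-code approach this hardware replaces; NVIDIA has not disclosed the exact test the ASIC implements, so treat this purely as an illustration of the per-ray work involved.

```python
# Moller-Trumbore ray-triangle intersection: the kind of per-ray arithmetic
# that Turing's RT core offloads from shader code into dedicated hardware.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    """Return distance t along the ray to the triangle, or None on a miss."""
    edge1 = sub(v1, v0)
    edge2 = sub(v2, v0)
    h = cross(direction, edge2)
    a = dot(edge1, h)
    if abs(a) < eps:
        return None          # ray is parallel to the triangle plane
    f = 1.0 / a
    s = sub(origin, v0)
    u = f * dot(s, h)        # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, edge1)
    v = f * dot(direction, q)  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(edge2, q)    # hit distance along the ray
    return t if t > eps else None

# Ray at (0.25, 0.25, -1) pointing down +z hits the unit triangle at t = 1.
print(ray_triangle((0.25, 0.25, -1.0), (0.0, 0.0, 1.0),
                   (0, 0, 0), (1, 0, 0), (0, 1, 0)))  # -> 1.0
```

In a real renderer this test runs millions of times per frame against BVH leaf triangles, which is why a dedicated unit pays off so heavily.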
It seems NVIDIA is in a hurry (12nm phasing out quickly?). The RTX 2080 Ti is going to launch alongside the regular RTX 2080.
https://videocardz.com/newz/exclusive-msi-geforce-rtx-2080-ti-gaming-x-trio-pictured
Maybe a Titan Turing would have more ram, but I bet all the other consumer cards stick with only 8Gb memory chips.
The more I think about the Ti edition, the more I think it makes sense to release it now if they can. Typically they would count on Ti owners to upgrade to the x80 and then to the Ti the next year. With Pascal they kind of painted themselves into a corner: the 1080 Ti has 11GB of VRAM while the 2080 will have only 8GB (and is only rumored to be 8% faster), so I don't think 1080 Ti owners will be willing to upgrade to a 2080.
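The 8GB and 11GB figures in these posts follow directly from the memory bus width: each GDDR6 chip has a 32-bit interface and (at launch) 8Gb = 1GB of capacity, so the bus width fixes the chip count and therefore the total VRAM. A quick sanity check, assuming the Turing cards keep Pascal's bus widths (256-bit on the x80, 352-bit on the Ti):

```python
# Back-of-the-envelope VRAM check: total memory = (bus width / per-chip
# interface width) * per-chip capacity. Chip figures are for launch GDDR6
# (32-bit interface, 8Gb = 1GB per chip); bus widths are assumed to match
# the Pascal predecessors.
CHIP_INTERFACE_BITS = 32
CHIP_CAPACITY_GB = 1  # 8Gb chip

def vram_gb(bus_width_bits):
    chips = bus_width_bits // CHIP_INTERFACE_BITS
    return chips * CHIP_CAPACITY_GB

print(vram_gb(256))  # x80-class card  -> 8
print(vram_gb(352))  # Ti-class card   -> 11
```

Doubling VRAM without changing the bus therefore requires 16Gb chips (or clamshell mode), which is why the pricing of denser GDDR6 matters here.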
RAM prices are extremely high at the moment. There won't be a doubling this time. Next year they will either double VRAM with a 7nm series, or bring custom cards with double the amount of VRAM once GDDR prices settle down.
I hope those are just bad photoshop jobs.
Unfortunately no. More 2080 Tis with 11GB from Palit and MSI:
https://videocardz.com/newz/msi-geforce-rtx-2080-ti-and-rtx-2080-duke-pictured
https://videocardz.com/newz/palit-geforce-rtx-2080-ti-and-rtx-2080-gamingpro-series-unveiled
source: https://seekingalpha.com/article/41...-results-earnings-call-transcript?part=single
The gaming community is excited about the Turing architecture, announced earlier this week at SIGGRAPH. Turing is our most important innovation since the invention of the CUDA GPU, over a decade ago. The architecture includes new, dedicated ray-tracing processors, or RT Cores, and new Tensor Cores for AI inferencing, which together will make real-time ray tracing possible for the first time.
We will enable cinematic-quality gaming, amazing new effects powered by neural networks, and fluid interactivity on highly complex models. Turing will reset the look of video games and open up the $250 billion visual effects industry to GPUs.
There should be a Titan TX coming with 24 GB if you need it (and can afford it).
I'm not sure there were 16Gb GDDR5X chips. With GDDR6 it looks like the manufacturers were able to hit that density from the start, although I imagine Nvidia wants to steer anyone needing that much memory to Quadro.
Everybody seems to forget that the Quadro RTX 5000 with 16GB of GDDR6 is quite affordable at $2,300 compared to the $3,000 Titan V, especially if your work is principally graphics and not deep learning.