Nvidia Turing Speculation thread [2018]

Some info related to how the RT core works, from someone who claimed to have worked on it:

The RT core essentially adds a dedicated pipeline (an ASIC) to the SM to calculate ray-triangle intersections. It can access the BVH and configure some L0 buffers to reduce the latency of BVH and triangle data accesses. The request is made by the SM: the instruction is issued, and the result is returned to the SM's local registers. The intersection instructions can be interleaved with, and run concurrently alongside, other arithmetic or memory I/O instructions. Because it is dedicated ASIC circuit logic, performance/mm² can be increased by an order of magnitude compared to using shader code for the intersection calculation. Although I have since left NV, I was involved in the design of the Turing architecture; I was responsible for variable rate shading. I am excited to see the release now.
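For a sense of what that fixed-function block replaces, here is a minimal C++ sketch of the Möller-Trumbore ray-triangle intersection test, one common software approach. The actual algorithm baked into the RT core is not public, so everything below is illustrative only:

Code:
#include <cmath>
#include <cstdio>

// Minimal 3D vector helpers for the sketch.
struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Moeller-Trumbore ray-triangle intersection: returns true and the hit
// distance t if the ray (orig + t*dir) crosses triangle (v0, v1, v2).
// This is the kind of arithmetic the RT core is said to run as fixed function.
bool intersect(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float& t) {
    const float kEps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < kEps) return false;      // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;       // outside barycentric range
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;
    return t > kEps;                              // hit in front of the origin
}

int main() {
    float t;
    bool hit = intersect({0,0,0}, {0,0,1}, {-1,-1,5}, {1,-1,5}, {0,1,5}, t);
    std::printf("hit=%d t=%.2f\n", hit, t);       // expect hit=1 t=5.00
}

Per the description above, on Turing this whole function supposedly collapses into a single issued instruction: the SM fires it off, keeps issuing other arithmetic or memory work, and picks the result up from its local registers later.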

 
Would have thought BVH traversal is the most time-consuming part of raytracing, not intersecting triangles in the final BVH leaf nodes. But that depends largely on how many triangles there are in the leaves, I suppose.
Any raytracing experts here?
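To make that traversal-versus-intersection split concrete, here is a rough C++ sketch of the classic stack-based BVH traversal loop. The node layout, the one-axis "box" test, and the helper names are all illustrative, not anything Turing actually does:

Code:
#include <cstdio>
#include <vector>

// Illustrative node layout (not Nvidia's): internal nodes hold child
// indices, leaves hold a triangle range. The "box" is a slab on one
// axis only, to keep the sketch short; a real AABB test does all three.
struct Node {
    float lo, hi;          // bounding interval along one axis (toy AABB)
    int left, right;       // child indices, or -1 for a leaf
    int firstTri, triCount;
};

// Toy stand-ins so the sketch compiles and runs.
bool hitBox(const Node& n, float rayPos) { return rayPos >= n.lo && rayPos <= n.hi; }
bool hitTriangle(int /*tri*/)            { return true; }

// Classic stack-based traversal: most iterations are box tests on
// internal nodes; triangle tests only fire at the leaves.
int traverse(const std::vector<Node>& bvh, float rayPos) {
    int stack[64], sp = 0, boxTests = 0;
    stack[sp++] = 0;                       // start at the root
    while (sp > 0) {
        const Node& n = bvh[stack[--sp]];
        ++boxTests;
        if (!hitBox(n, rayPos)) continue;  // prune this subtree
        if (n.left < 0) {                  // leaf: run triangle tests
            for (int i = 0; i < n.triCount; ++i)
                hitTriangle(n.firstTri + i);
        } else {
            stack[sp++] = n.left;
            stack[sp++] = n.right;
        }
    }
    return boxTests;
}

int main() {
    // Root spanning [0,10] with two leaves, one triangle each.
    std::vector<Node> bvh = {
        {0, 10, 1, 2, 0, 0},   // internal root
        {0,  5, -1, -1, 0, 1}, // leaf
        {5, 10, -1, -1, 1, 1}, // leaf
    };
    std::printf("box tests: %d\n", traverse(bvh, 3.0f)); // expect 3
}

With one triangle per leaf, this toy run does three box tests for one triangle test; fatten the leaves and the balance shifts toward intersection work, which is exactly the trade-off in question.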
 
That or they are sitting on a lot of stock.

Maybe Turing was supposed to release earlier this year and the release got delayed by the mining craze, but they had been piling up chips in the meantime. So they might as well sell everything they've got.
 
Maybe a Titan Turing would have more RAM, but I bet all the other consumer cards stick with 8Gbit memory chips.

The more I think about the Ti edition, the more I think it makes sense to release it now if they can. Typically they would count on Ti owners to upgrade to the x80 and then to the Ti the next year. With Pascal they kind of painted themselves into a corner: the Ti has 11GB of VRAM and the 2080 will have only 8GB (and is only rumored to be 8% faster), so I don't think 1080 Ti owners will be willing to upgrade to a 2080.
 
Yes, this seems to have been their business model for some time now. I would say since Kepler.

RAM prices are extremely high at the moment, so there won't be a doubling this time. Next year they will either double the memory with a 7nm series, or bring out custom cards with double the VRAM once GDDR prices settle down.

GDDR6 isn't nearly as expensive as HBM2, as I understand it. No, I don't have numbers.

As for board partner custom cards with double the VRAM, that hasn't been a thing since the GTX 780, and when the Ti part came out NV put the kibosh on it; I haven't seen a single AIB partner design with double VRAM since. I would absolutely consider buying if such a product were released, as I expect Turing's performance to be in the range of what I need; only the VRAM will hold me back. If I have to wait until 7nm, here's hoping the replacement for the 20 series arrives next year.
 
Looks like RT and Tensor cores are coming to GeForce Turing. From the earnings call transcript:

The gaming community is excited about the Turing architecture, announced earlier this week at SIGGRAPH. Turing is our most important innovation since the invention of the CUDA GPU over a decade ago. The architecture includes new dedicated ray-tracing processors, or RT Cores, and new Tensor Cores for AI inferencing, which together will make real-time ray tracing possible for the first time.

We will enable cinematic-quality gaming, amazing new effects powered by neural networks, and fluid interactivity on highly complex models. Turing will reset the look of video games and open up the $250 billion visual effects industry to GPUs.
source: https://seekingalpha.com/article/41...-results-earnings-call-transcript?part=single
(you may need to register to read the article)
 

That's too bad.

There should be a Titan TX coming with 24 GB if you need it (and can afford it).

I'll believe it when I see it. Titan X (Pascal) and Titan Xp both had only 12GB, a whopping 1GB more than the 1080 Ti. Also, at the $3000 cost of the Titan V, I'm not interested; at $1200 (the same as I paid for my Titan X (Pascal)), sure.
 
I'm not sure there were 16Gbit GDDR5X chips. With GDDR6 it looks like the manufacturers were able to hit that density from the start, although I imagine Nvidia wants to steer those needing that much memory to Quadro.
 
Yes, 16Gbit GDDR6 chips are available today from Samsung, at least. I agree that Nvidia is likely to restrict use of these chips to the Quadro lineup, for the time being. It would be great if there's a Turing Titan part with 24GB RAM at a reasonable price though. I'll be all over that.
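As a quick sanity check on where 24GB would come from, assuming (as on past big-die Titans) a 384-bit bus populated by 32-bit-wide GDDR chips; the figures below are just that assumption worked through:

Code:
#include <cstdio>

int main() {
    // Assumed configuration, matching past big-die Titans:
    // a 384-bit memory bus populated by 32-bit-wide GDDR chips.
    const int busBits = 384, chipBits = 32;
    const int chips = busBits / chipBits;  // 12 chips

    // Card capacity scales directly with per-chip density (in gigabits).
    const int densitiesGbit[] = {8, 16};
    for (int d : densitiesGbit)
        std::printf("%d x %2dGbit chips = %dGB\n", chips, d, chips * d / 8);
    // Prints:
    //   12 x  8Gbit chips = 12GB   (Titan X (Pascal) / Titan Xp)
    //   12 x 16Gbit chips = 24GB   (a hypothetical Turing Titan)
}

So 16Gbit chips on the familiar 384-bit bus land exactly on 24GB, with no clamshell arrangement needed.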
 
Everybody seems to forget that the Quadro RTX 5000 with 16GB of GDDR6 is quite affordable at $2300 compared to the $3000 Titan V, especially if your work is principally graphics and not deep learning.
 
I haven't forgotten about the existence of the RTX 5000. It's also only 2/3 of an RTX 6000/8000 in terms of functional unit specifications (3072 CUDA cores versus 4608), which implies it uses a smaller die, i.e. TU104/RT104.
 
So the 80 Ti card is being released at the same time as the 80? That isn't typical. That has to mean there isn't enough of a window between this series and the next generation of cards (presumably 7nm) to legitimately stagger the 2080 Ti release, right?
 