Nvidia Turing Speculation thread [2018]

Here's TU104 if anyone wants to try sizing it

Source: GN quick teardown video at Gamescom

[Attachment 2644: TU104 image]
And here's another straight-on picture of the PCB, straight from NVIDIA.

geforce-rtx-2080-pcb.jpg
 
I think the biggest mistake is calling a $1,000 card a "Ti".

They messed up the FE pricing again, like they did with the 1080, so the MSRPs are pointless: no vendor will ever sell the Ti at $999 when the FE goes for $1,199, and that's reflected in the prices. Typical scummy NVIDIA practice. They really like that FE tax, and it makes their partners look bad because they don't sell at MSRP.
 
And here's another straight-on picture of the PCB, straight from NVIDIA.

geforce-rtx-2080-pcb.jpg


This one was made to be measured from ... the core square is not a real chip, or they used a very good AI denoise on it. That said, it can represent the real size (within ~8%).
 
This one was made to be measured from ... the core square is not a real chip, it is an image. But as it is coming from NVIDIA, I think the dimensions are real.
More specifically, it's a sanitized image. That's a real board and substrate; they've just brushed out all of the unique markings on the PCB and RAM, and prettied up the top of the GPU. We've gotten similar images from NVIDIA in prior generations, and they've been accurate.
 
Supposing that neither AMD's Navi nor Intel's upcoming dGPU has dedicated hardware for ray tracing and NVIDIA remains the only one who has it, how would that affect mass adoption of ray tracing in future games?
 
More specifically, it's a sanitized image. That's a real board and substrate; they've just brushed out all of the unique markings on the PCB and RAM, and prettied up the top of the GPU. We've gotten similar images from NVIDIA in prior generations, and they've been accurate.

Hence why I said this one was made to be measured from (and why I said to take off 8% too ... OK, let's say 5%).
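For anyone who does want to try sizing the die from one of these straight-on shots, here is a rough sketch of the scaling math. Everything below is hypothetical: the pixel counts are placeholders to replace with your own measurements, and the 14 x 12 mm GDDR6 package used as the scale reference is an assumption, not something confirmed in this thread.

```python
# Rough sketch: estimate die area from a straight-on PCB/package photo by
# scaling against an object of known physical size in the same image.
# All pixel counts are hypothetical placeholders; measure them yourself.
# The 14 mm GDDR6 package width is an assumption used only as an example.

REF_WIDTH_MM = 14.0      # assumed physical width of the reference (GDDR6 package)
REF_WIDTH_PX = 210.0     # hypothetical: measured width of that package in pixels

DIE_WIDTH_PX = 330.0     # hypothetical: measured die width in pixels
DIE_HEIGHT_PX = 405.0    # hypothetical: measured die height in pixels

mm_per_px = REF_WIDTH_MM / REF_WIDTH_PX

die_w_mm = DIE_WIDTH_PX * mm_per_px
die_h_mm = DIE_HEIGHT_PX * mm_per_px
die_area_mm2 = die_w_mm * die_h_mm

# Per the posts above, a sanitized/rendered image may be off by a few percent,
# so report a range rather than a single number (here +/-5%).
low, high = die_area_mm2 * 0.95, die_area_mm2 * 1.05
print(f"Die ~= {die_w_mm:.1f} x {die_h_mm:.1f} mm, "
      f"area ~= {die_area_mm2:.0f} mm^2 ({low:.0f}-{high:.0f} mm^2 with 5% margin)")
```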
 
Supposing that neither AMD's Navi nor Intel's upcoming dGPU has dedicated hardware for ray tracing and NVIDIA remains the only one who has it, how would that affect mass adoption of ray tracing in future games?
No further than GameWorks-type optional features.
 
NVIDIA's Move From GTX to RTX Speaks to Belief in Revolutionary Change in Graphics

https://www.techpowerup.com/246930/...to-belief-in-revolutionary-change-in-graphics
I thought this part was interesting:
The move from GTX to RTX means NVIDIA is putting its full weight behind the importance of its RTX platform for product iterations and the future of graphics computing. It manifests in a re-imagined pipeline for graphics production, where costly, intricate, but ultimately faked solutions gave way to steady improvements to graphics quality.
That sounds to me like a misconception, and I can already read things here and there about fake resolutions and whatnot. The point is, "rendering" is as fake as it gets; I can't for the life of me see how adding some level of ray tracing (physics/optics) to the process makes the thing faker, quite the contrary.

Other than that, NVIDIA is making a leap right there; I wonder how long it will take for the competition to catch up.
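As a toy illustration of the point (and only that; this is not NVIDIA's RTX API, just a made-up scene in plain Python): a shadow ray asks the geometric question "can this point see the light?" directly, whereas a rasterizer has to approximate the same answer with shadow maps, biases and similar tricks.

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Return True if the ray origin + t*direction (t > 0) intersects the sphere.
    direction is assumed to be normalized, so the quadratic's a == 1."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * x for d, x in zip(direction, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return False
    t = (-b - math.sqrt(disc)) / 2.0
    return t > 1e-4            # small epsilon to avoid self-intersection

def in_shadow(point, light_pos, occluders):
    """Cast a single shadow ray from the shaded point toward the light.
    (A real tracer would also clamp the hit distance to the light distance.)"""
    d = [l - p for l, p in zip(light_pos, point)]
    length = math.sqrt(sum(x * x for x in d))
    d = [x / length for x in d]
    return any(ray_hits_sphere(point, d, c, r) for c, r in occluders)

# Made-up scene: one sphere floating between the first shaded point and the light.
occluders = [((0.0, 1.0, 0.0), 0.5)]
print(in_shadow((0.0, 0.0, 0.0), (0.0, 3.0, 0.0), occluders))   # True: occluded
print(in_shadow((2.0, 0.0, 0.0), (2.0, 3.0, 0.0), occluders))   # False: lit
```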
 
Is there any way that the RTX components can be used for accelerating GPU Mining?

Please, let the answer be no. Otherwise the insane prices will be even higher.
 
My guess is they are/could be (same for the tensor cores), but that no one will release any such software into the wild for a long while at least.
 
Is there any way that the RTX components can be used for accelerating GPU Mining?

Please, let the answer be no. Otherwise the insane prices will be even higher.
Not unless someone makes a new coin where the mining is based on ray tracing or tensor calculations.

The only mining algorithm that really matters is Ethereum's; it's 80% of GPU mining revenue. Ethereum's algorithm is "memory hard", which requires memory bandwidth and just a few XORs. So GDDR6's higher bandwidth will help Ethereum mining; the tensor cores and ray tracing won't have any use.
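To make the "memory hard" point concrete, here is a toy sketch in the same spirit: most of the loop is data-dependent reads from a big table, with only a couple of cheap integer ops per read. This is not Ethereum's actual Ethash, just an illustration of why the workload is bound by memory bandwidth rather than by shader, tensor or RT throughput.

```python
import random

WORD_COUNT = 1 << 20                      # toy dataset; the real Ethash DAG is gigabytes
DATASET = [random.getrandbits(64) for _ in range(WORD_COUNT)]

def toy_memory_hard_hash(nonce: int, rounds: int = 64) -> int:
    mix = nonce
    for _ in range(rounds):
        index = mix % WORD_COUNT          # data-dependent address: defeats caching
        mix ^= DATASET[index]             # the "few XORs" of compute per access
        mix = (mix * 0x100000001B3) & 0xFFFFFFFFFFFFFFFF   # cheap mixing step
    return mix

print(hex(toy_memory_hard_hash(42)))
```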

That said, the entire market has crashed, so mining should have zero effect on the launch:
https://bitinfocharts.com/comparison/ethereum-mining_profitability.html#log
 
NVIDIA’s Flagship Turing TU102 GPU For GeForce RTX 2080 Ti Detailed – 50% Faster Per Core Performance, 288 TMUs/96 ROPs on Full Die and New Overclocking Features
The Turing TU102 GPU has 72 Streaming Multiprocessors (SM) featuring 64 CUDA cores each. The full die features 4608 CUDA cores while the GeForce RTX 2080 Ti features 4352 cores. The chip has 576 Tensor cores, 72 RT cores, 36 Geometry Units, 288 Texture Units (TMUs) and 96 ROPs (Raster Operation Units). In addition to the core specs, the chip has a 384-bit memory interface supporting a 7 GHz GDDR6 (14 GHz Effective) DRAM design and 2 NVLINK channels. The chip features 6 MB of L2 cache too.
...
Lastly, there’s a new overclocking feature being talked about which the new Turing GeForce RTX graphics cards will be able to make use of. To be known as the Scanner (Final Name is still a work-in-progress), the feature will let the OC Utility detect the best clock speeds and voltages for you without the need to do anything. Just run a test through the overclocking utility and you are all set.
https://wccftech.com/nvidia-turing-gpu-geforce-rtx-2080-ti-ipc-block-diagram-detailed/
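The quoted core counts are easy to sanity-check, and they also imply how many SMs the RTX 2080 Ti has enabled. A quick back-of-the-envelope script: the 672 GB/s figure assumes 14 Gbps effective GDDR6 on the stated 384-bit bus, and the 68-SM count for the 2080 Ti is derived from the quoted 4352 cores rather than stated in the article.

```python
# Quick sanity check of the quoted TU102 numbers.
CORES_PER_SM = 64
FULL_SMS = 72

full_cores = FULL_SMS * CORES_PER_SM                 # 72 * 64 = 4608 (matches the article)
rtx_2080_ti_cores = 4352
enabled_sms = rtx_2080_ti_cores // CORES_PER_SM      # 4352 / 64 = 68 SMs enabled

# Memory bandwidth: 14 Gbps effective GDDR6 on a 384-bit bus.
bus_width_bits = 384
effective_gbps_per_pin = 14
bandwidth_gb_s = bus_width_bits * effective_gbps_per_pin / 8   # 672 GB/s

print(full_cores, enabled_sms, bandwidth_gb_s)
```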
 