Nvidia Turing Speculation thread [2018]

I'm sure folks will pay whatever for the 2080/Ti, but $500+ for the 2070 is rough.
Agreed. In theory, if performance has advanced to or beyond 1080 levels, then people shouldn't have a problem with the 2070 price. I can see where new metrics might be needed to measure the overall benefit of the new architecture (merging ray tracing, AI, rasterization, and compute). Reviews can't come soon enough!
 
The RTX 2070 has 40% higher bandwidth than the GTX 1080 and only 10% fewer CUDA cores. It would be very surprising if the RTX 2070 weren't faster than the GTX 1080 even if it were based on the same architecture.
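For reference, a quick sanity check of those percentages against the commonly cited launch specs (these are the usual spec-sheet numbers, not measurements):

```cpp
#include <cstdio>

int main() {
    // Usual spec-sheet figures (assumed, not measured):
    // GTX 1080: 2560 CUDA cores, 320 GB/s (10 Gbps GDDR5X, 256-bit)
    // RTX 2070: 2304 CUDA cores, 448 GB/s (14 Gbps GDDR6, 256-bit)
    const double gtx1080_cores = 2560, gtx1080_bw = 320.0;
    const double rtx2070_cores = 2304, rtx2070_bw = 448.0;

    std::printf("Bandwidth:  %+.0f%%\n", (rtx2070_bw / gtx1080_bw - 1.0) * 100.0);       // +40%
    std::printf("CUDA cores: %+.0f%%\n", (rtx2070_cores / gtx1080_cores - 1.0) * 100.0); // -10%
    return 0;
}
```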
 
I cannot wait to see how much faster Nvidia's implementation of ray tracing (the RTX platform) gets with future generations of Nvidia GPUs on 7nm, 7nm+ with EUV, and 5nm.

At the same time, I tend to agree with you about the outlook for raytracing support in games if there's no RT acceleration from AMD, and the next console generation.

So then we'd probably be looking somewhere between the late 2020s and 2030 with 10th generation consoles.

DXR has a compute shader fallback. Next-gen consoles can use it in limited form even without hardware support.
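For what it's worth, the hardware side of that decision is just a capability query; here's a rough D3D12 sketch of how an engine might pick between hardware DXR and a compute-shader fallback path (the fallback itself would be the engine's own BVH + compute kernels, or Microsoft's fallback layer):

```cpp
#include <windows.h>
#include <d3d12.h>

// Returns true if the driver exposes hardware/driver-backed DXR (tier 1.0+).
// If it doesn't, an engine could drop to a compute-shader ray tracing path
// instead of disabling the feature entirely.
bool SupportsHardwareDXR(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &opts5, sizeof(opts5))))
        return false;
    return opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```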
 
Agreed. In theory, if performance has advanced to or beyond 1080 levels, then people shouldn't have a problem with the 2070 price.
Well, no, I mean you could say that about every generation. E.g. if people paid $500 for a GTX 580, they should be okay paying $500 for a 660 since they perform about the same (at the time, anyway). That's not how it's supposed to work.
 
One cool thing about introducing a bunch of non-general-purpose units in the chip instead of just increasing ALU count is that mining performance might be very similar to the significantly smaller Pascal solutions. GPU miners may actually pass on the new hardware and just keep their current cards instead.


Well, no, I mean you could say that about every generation. E.g. if people paid $500 for a GTX 580, they should be okay paying $500 for a 660 since they perform about the same (at the time, anyway). That's not how it's supposed to work.

Nvidia has been steadily increasing the average selling price of cards using similarly sized chips, AFAIK at a much faster rate than what the newer processes demand.
Their constant record-breaking revenues still come mostly from the GeForce products, which are ~75% of their business IIRC. That's >10x larger than automotive and 2.5x larger than datacenter (they're much more dependent on gamers than they'd like to admit).

Let's hope Raja at Intel doesn't drop the ball with the upcoming discrete GPUs, and that Sony's rumored heavy participation in Navi pays off on AMD's side.
The price of the RTX family is screaming for decent competition.
 
Note that I believe most of them are DLSS, not RT. Marketing at work with their "RTX".

Yes, Nvidia has clarified it:

RT support:
- Assetto Corsa Competizione
- Atomic Heart
- Battlefield V
- Control
- Enlisted
- Justice
- JX3
- MechWarrior 5: Mercenaries
- Metro Exodus
- ProjectDH
- Shadow of the Tomb Raider

DLSS support:
- Ark: Survival Evolved
- Atomic Heart
- Dauntless
- Final Fantasy XV
- Fractured Lands
- Hitman 2
- Islands of Nyne
- Justice
- JX3
- MechWarrior 5: Mercenaries
- PlayerUnknown's Battlegrounds
- Remnant: From the Ashes
- Serious Sam 4: Planet Badass
- Shadow of the Tomb Raider
- The Forge Arena
- We Happy Few
 
One cool thing about introducing a bunch of non-general-purpose units in the chip instead of just increasing ALU count is that mining performance might be very similar to the significantly smaller Pascal solutions. GPU miners may actually pass on the new hardware and just keep their current cards instead.
It is, however, at least a Volta-class SM. A dedicated and fast INT path, plus a large register file (which I'd say is very likely given the full tensor core setup), will definitely make it a seriously fast mining card...
 
I'm curious to see what DLSS can bring over implementations like TXAA. It will be interesting to see how it performs when offloading to the tensor cores, and whether it gives access to framebuffer data or if it's post-process only. I really don't know enough about it.

Are reviewers going to have access to any games with RT or DLSS implementations for the reviews?
 
I'm curious to see what DLSS can bring over implementations like TXAA. It will be interesting to see how it performs when offloading to the tensor cores, and whether it gives access to framebuffer data or if it's post-process only. I really don't know enough about it.

Are reviewers going to have access to any games with RT or DLSS implementations for the reviews?
Yup. From the sound of it, it may only be a post-process (but a really good one). But I still can't wrap my brain around the claim Jensen made during the presentation: "We invented TAA!" No you didn't, mate (SIGGRAPH 1983!). Neither did Nvidia invent the first GPU or the first RT GPU... ridiculous.
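For anyone wondering what's actually being argued over: at its core, TAA is just an exponential blend of the current (jittered) frame into an accumulated history buffer, an idea that long predates Nvidia. A minimal per-pixel sketch of that accumulation, leaving out the reprojection and history clamping that real implementations live or die by:

```cpp
#include <cstddef>

// Minimal TAA-style temporal accumulation: blend the current (jittered) frame
// into a running history buffer. Real implementations also reproject the
// history with motion vectors and clamp it against the local neighbourhood
// to avoid ghosting; this only shows the basic accumulation idea.
void TemporalAccumulate(const float* current, float* history,
                        std::size_t pixelCount, float alpha = 0.1f)
{
    for (std::size_t i = 0; i < pixelCount; ++i)
        history[i] = (1.0f - alpha) * history[i] + alpha * current[i];
}
```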
 
Nvidia's RTX highlights AI influence on computer graphics
What was particularly interesting about the announcement was how Nvidia ended up solving the real-time ray tracing problem—a challenge that they claimed to have worked on and developed over a 10-year period. As part of their RTX work, the company created some new graphical compute subsystems inside their GPUs called RT Cores that are dedicated to accelerating the ray tracing process. While different in function, these are conceptually similar to programmable shaders and other more traditional graphics rendering elements that Nvidia, AMD, and others have created in the past, because they focus purely on the raw graphics aspect of the task.

Rather than simply using these new ray tracing elements, however, the company realized that they could leverage other work they had done for deep learning and artificial intelligence applications. Specifically, they incorporated several of the Tensor cores they had originally created for neural network workloads into the new RTX boards to help speed the process. The basic concept is that certain aspects of the ray tracing image rendering process can be sped up by applying algorithms developed through deep learning.

In other words, rather than having to use the brute force method of rendering every pixel in an image through ray tracing, other AI-inspired techniques like denoising are used to speed up the ray tracing process. Not only is this a clever implementation of machine learning, but I believe it’s likely a great example of how AI is going to influence technological developments in other areas as well.
...
For Nvidia, the RTX line is important for several reasons. First, achieving real-time ray tracing is a significant goal for a company that's been highly focused on computer graphics for 25 years. More importantly, though, it allows the company to combine what some industry observers had started to see as two distinct business focus areas—graphics and AI/deep learning/machine learning—into a single coherent story. Finally, the fact that it's their first major gaming-focused GPU upgrade in some time can't be overlooked either.
https://www.techspot.com/news/76065-nvidia-rtx-highlights-ai-influence-computer-graphics.html
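The denoising point above is the interesting bit: instead of firing enough rays per pixel for the image to converge, you trace a handful and reconstruct the rest. Nvidia's denoiser is a trained network running on the tensor cores; purely as a stand-in for the idea (not anything RTX actually ships), a toy spatial filter over a noisy low-sample-count buffer looks like this:

```cpp
#include <vector>
#include <cstddef>

// Toy denoiser: average a 3x3 neighbourhood of a noisy, low-sample-count
// ray-traced image. A real-time denoiser (and Nvidia's DNN-based one) is far
// smarter - edge-aware, temporally stable, guided by normals/albedo - but its
// role in the pipeline is the same: trade ray count for reconstruction.
std::vector<float> BoxDenoise(const std::vector<float>& noisy, int w, int h)
{
    std::vector<float> out(noisy.size());
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f; int n = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int xx = x + dx, yy = y + dy;
                    if (xx >= 0 && xx < w && yy >= 0 && yy < h) {
                        sum += noisy[static_cast<std::size_t>(yy) * w + xx];
                        ++n;
                    }
                }
            out[static_cast<std::size_t>(y) * w + x] = sum / n;
        }
    return out;
}
```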
 
DLSS? As in approximating (faking) higher resolutions in real time, instead of actually rendering higher resolutions?
 
Nvidia has been steadily increasing the average selling price of cards using similarly sized chips, AFAIK at a much faster rate than what the newer processes demand.
Their constant record-breaking revenues still come mostly from the GeForce products, which are ~75% of their business IIRC. That's >10x larger than automotive and 2.5x larger than datacenter (they're much more dependent on gamers than they'd like to admit).

Let's hope Raja at Intel doesn't drop the ball with the upcoming discrete GPUs, and that Sony's rumored heavy participation in Navi pays off on AMD's side.
The price of the RTX family is screaming for decent competition.

I think that largely depends on two factors: technological departure and competitive environment. As such, I think G80 presents a good analogue:
  • Significant departure from previous architecture
  • Very large die size by the day's standards (both were the largest consumer GPUs to date at launch)
  • Relative competitive vacuum: launched as clear-cut performance leaders with competition's counterparts many months away.
  • While not a simultaneous 3-tier release, the "super high end" part did come within about 6 months, into a largely intact competitive landscape despite the R600 launch.



I think the biggest mistake is calling a $1,000 card a "Ti", a price point which has historically been reserved for Titan-class cards and which is by far the biggest departure from the $650-700 price point these cards have historically been at. They should have called it a "Titan" and then released a similarly specced "Ti" card early next year at an $800 price point, and ruffled a lot fewer feathers.

Depending on the performance, $1,000 may or may not be fair value for the 2080 Ti, but regardless, they messed up their own price tier/naming convention for no good reason.
 
DLSS? As in approximating (faking) higher resolutions in real time, instead of actually rendering higher resolutions?

Well, I imagine (and hope) that'll be an actual option in games that support the feature.

But otherwise, I think this is currently being touted as a specific anti-aliasing method. That is, if you were playing at 1080p, DLSS would calculate what all or parts of the image would look like at 4K and downscale the result to your 1080p output, so you get fewer aliasing artifacts.
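That render-high-then-downscale idea is just classic supersampling; the open question is how much of it DLSS can approximate with the network instead of actually paying for the 4K render. For reference, the brute-force version the post describes is nothing more than a box resolve (purely illustrative):

```cpp
#include <vector>
#include <cstddef>

// Plain 2x2 supersampling resolve: average each 2x2 block of the high-res
// render down to one output pixel (e.g. 3840x2160 -> 1920x1080). This is the
// brute-force version of what the post describes; DLSS aims to approximate
// the high-res result without actually rendering all those extra pixels.
std::vector<float> Downscale2x2(const std::vector<float>& hi, int hiW, int hiH)
{
    const int loW = hiW / 2, loH = hiH / 2;
    std::vector<float> lo(static_cast<std::size_t>(loW) * loH);
    for (int y = 0; y < loH; ++y)
        for (int x = 0; x < loW; ++x) {
            auto at = [&](int xx, int yy) {
                return hi[static_cast<std::size_t>(yy) * hiW + xx];
            };
            lo[static_cast<std::size_t>(y) * loW + x] =
                0.25f * (at(2 * x, 2 * y)     + at(2 * x + 1, 2 * y) +
                         at(2 * x, 2 * y + 1) + at(2 * x + 1, 2 * y + 1));
        }
    return lo;
}
```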
 
NVIDIA's Move From GTX to RTX Speaks to Belief in Revolutionary Change in Graphics
It's been a long road for NVIDIA ever since its contributor Turner Whitted began working on multi-bounce recursive ray tracing way back in 1978. Jensen Huang says that GPU development and improvement has been moving at ten times what Moore's Law demanded of CPUs - 1000 times every ten years. But ray tracing is - or was - expected to require petaflops of computing power, yet another step that would take some 10 years to achieve.

The answer to that performance conundrum is RTX - a simultaneous hardware, software, SDK and library push, united in a single platform. RTX hybrid rendering unifies rasterization and ray tracing, with a first rasterization pass (highly parallel) and a second ray tracing pass that only acts upon the rendered pixels, but allows for materialization of effects, reflections and light sources that would be outside of the scene - and thus virtually nonexistent with pre-ray-tracing rendering techniques. Now, RT cores can work in tandem with rasterization and compute solutions to achieve reasonable rendering times for ray-traced scenes that would, according to Jensen Huang, take ten times longer to render on Pascal-based hardware.
...
Ray tracing is being done all the time within one Turing frame; this happens at the same time as part of the FP32 shading process - without RT cores, the ray tracing portion of the frame would be ten times larger. Now, it can be done completely within FP32 shading, followed by INT shading. And there are resources enough to add in some DNN (Deep Neural Network) processing to boot - NVIDIA is looking to generate artificially designed pixels with its DNN processing - essentially, the 110 TFLOPS powered by Tensor Cores, which in Turing deliver some 10x 1080 Ti-equivalent performance, will be used to fill in some pixels - true to life - as if they had actually been rendered. Perhaps some Super Resolution applications will be found - this might well be a way of increasing pixel density by filling in additional pixels in an image.

The move from GTX to RTX means NVIDIA is putting its full weight behind the importance of its RTX platform for product iterations and the future of graphics computing. It manifests in a re-imagined pipeline for graphics production, where costly, intricate, but ultimately faked solutions gave way to steady improvements to graphics quality. And it speaks of a dream where AIs can write software themselves (and maybe themselves), and the perfect, Ground Truth Image is generated via DLSS in deep-learning powered networks away from your local computing power, sent your way, and we see true cloud-assisted rendering - of sorts. It's bold, and it's been emblazoned on NVIDIA's vision, professional and gamer alike. We'll be here to see where it leads - with actual ray-traced graphics, of course.
https://www.techpowerup.com/246930/...to-belief-in-revolutionary-change-in-graphics
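To make the hybrid pipeline the article describes a bit more concrete: raster first, then trace only where it pays off, then denoise the sparse result. Everything in the sketch below is a hypothetical stand-in (the types and helper names are made up, not a real engine or the DXR API); it only shows where the three stages sit relative to each other:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical frame skeleton for hybrid rendering. Every type and helper here
// is an illustrative stand-in, not a real engine or the DXR API.
struct Color { float r = 0, g = 0, b = 0; };
struct Pixel { bool reflective = false; /* normals, roughness, ... */ };
struct GBuffer { int w = 0, h = 0; std::vector<Pixel> pixels; };
struct Scene {};

GBuffer rasterize(const Scene&, int w, int h)          // classic, highly parallel raster pass
{ return GBuffer{w, h, std::vector<Pixel>(static_cast<std::size_t>(w) * h)}; }

Color traceReflectionRay(const Scene&, const Pixel&)   // stand-in for the RT-core work
{ return Color{}; }

void denoise(std::vector<Color>&)                      // stand-in for the DNN/tensor-core denoiser
{}

void renderFrame(const Scene& scene, int w, int h)
{
    // 1. Rasterization produces the G-buffer for the visible pixels.
    GBuffer g = rasterize(scene, w, h);

    // 2. Ray tracing acts only on the pixels that need it (e.g. reflective
    //    materials), yet can pick up lights and geometry outside the raster view.
    std::vector<Color> rt(g.pixels.size());
    for (std::size_t i = 0; i < rt.size(); ++i)
        if (g.pixels[i].reflective)
            rt[i] = traceReflectionRay(scene, g.pixels[i]);

    // 3. Few rays per pixel means a noisy result; denoise before compositing.
    denoise(rt);
    // ... composite rt with the rasterized shading and present ...
}
```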
 
Yup. From the sound of it, it may only be a post-process (but a really good one). But I still can't wrap my brain around the claim Jensen made during the presentation: "We invented TAA!" No you didn't, mate (SIGGRAPH 1983!). Neither did Nvidia invent the first GPU or the first RT GPU... ridiculous.

Got to control the accepted definitions.
 