> Microsoft is adding "Cooperative Vector" support to DirectX, which allows shaders to use the tensor cores in the GPU. That sounds like it should work with all GPUs that have tensor cores, but the article uses wording like "in NVIDIA's new RTX 50-series hardware", so I don't know.

I expect all RTX GPUs will be able to use DXCV for the Neural Radiance Cache. Previously it used tiny-cuda-nn, which relies on the CUDA runtime.
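For context on why this maps onto tensor cores at all: the per-sample work in a neural radiance cache is basically a tiny MLP evaluated per shading point, i.e. a handful of small matrix multiplies. A rough NumPy sketch of that shape of work (the layer widths here are made up, and this is just the math, not the tiny-cuda-nn or DXCV API):

```python
# Illustrative only: the per-sample work of a neural radiance cache is a few
# small matrix multiplies, which is exactly the kind of work cooperative
# vectors / tensor cores accelerate inside a shader. Layer sizes are made up
# and this is plain NumPy, not the real NRC network or the DXCV API.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def tiny_mlp(features, weights):
    """Evaluate a small fully-connected network for a batch of shading points."""
    x = features
    for W, b in weights[:-1]:
        x = relu(x @ W + b)          # small matrix products dominate the cost
    W, b = weights[-1]
    return x @ W + b                 # final layer: predicted radiance (RGB)

rng = np.random.default_rng(0)
dims = [32, 64, 64, 3]               # hypothetical layer widths
weights = [(rng.standard_normal((i, o)) * 0.1, np.zeros(o))
           for i, o in zip(dims[:-1], dims[1:])]

shading_points = rng.standard_normal((1024, dims[0]))   # encoded hit-point features
radiance = tiny_mlp(shading_points, weights)
print(radiance.shape)                # (1024, 3)
```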
> It's saddening how tech media seems to be mostly parroting NVIDIA's obvious lie about the RTX 5070 being equivalent to the RTX 4090. This is a misinformation campaign at its purest.

I mean, it's obviously going to be with AI enabled. There's frankly just no way it's going to have the same amount of raw power.
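To put rough numbers on the "with AI enabled" part: DLSS 4 multi frame generation can emit up to three generated frames per rendered frame, so two cards can show a similar displayed frame rate while their rendered (raw) frame rates are far apart. The figures below are hypothetical, purely to show the arithmetic:

```python
# Hypothetical numbers, purely to illustrate how multi frame generation lets
# displayed FPS look comparable while rendered (raw) FPS is very different.
def displayed_fps(rendered_fps, generated_per_rendered):
    """Displayed frame rate = rendered frames plus AI-generated frames."""
    return rendered_fps * (1 + generated_per_rendered)

# A slower card with 4x multi frame generation (3 generated frames per rendered frame)
slower = displayed_fps(rendered_fps=30, generated_per_rendered=3)   # 120 fps displayed
# A faster card with 2x frame generation (1 generated frame per rendered frame)
faster = displayed_fps(rendered_fps=60, generated_per_rendered=1)   # 120 fps displayed

print(slower, faster)   # both display 120 fps, but raw rendering throughput differs 2x
```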
> I don't think raw power is worth discussing any further.

Exactly. Such an approach has already led us to some questionable comparisons, like DLSS 2+ vs FSR 2+ at a similar scaling level - which has never produced comparable results in image quality to begin with. Framegen is here to stay, and all the other AI-based tech will inevitably gain more and more traction. Just ignoring that because other cards can't do it is a completely wrong way to compare the products.
> I mean, it's obviously going to be with AI enabled. There's frankly just no way it's going to have the same amount of raw power.

Well, we know that, but it's likely going to fool plenty of other people who don't know better.
> Well, we know that, but it's likely going to fool plenty of other people who don't know better.

We would need to wait for benchmarks before judgement.
> We would need to wait for benchmarks before judgement.

Cuz it's not like Nvidia didn't actually use to do this. 970 = OG Titan. 1070 = Maxwell Titan.

Now they're selling us a 5060 renamed as a 5070 and telling us it'll perform like a 4090 (aka the Lovelace Titan), even though it'll probably be more like a 4070 Ti in reality. I would definitely rank this well into the 'straight up dishonest' category rather than just some cheeky marketing.

I mean, we can wait for benchmarks to know for sure, but we can definitely make some pretty clear educated inferences here.
> I don't think raw power is worth discussing any further.

Not even ray tracing? Just straight up, the ONLY thing worth talking about anymore is AI?
> Not even ray tracing? Just straight up, the ONLY thing worth talking about anymore is AI?

Well, AI is being used to improve ray tracing, so having more AI power would improve RT performance in this case.
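As an aside on why AI helps there: path-traced lighting is a Monte Carlo estimate, and its noise only falls off as 1/sqrt(samples), so halving the noise costs four times the rays. That scaling is why reconstructing a clean image from a few rays per pixel (what the AI denoisers go after) is so attractive. A toy sketch of the scaling, with a stand-in integrand rather than anything renderer-specific:

```python
# Toy illustration: Monte Carlo estimates (which is what path-traced lighting is)
# get less noisy only as 1/sqrt(samples), so halving the noise costs 4x the rays.
# The "integrand" is a stand-in; nothing here is renderer-specific.
import numpy as np

rng = np.random.default_rng(0)
true_value = 0.5                      # exact integral of f(x) = x over [0, 1]

for samples in (4, 16, 64, 256):
    # 10,000 independent "pixels", each averaging `samples` random evaluations
    estimates = rng.random((10_000, samples)).mean(axis=1)
    rms_noise = np.sqrt(np.mean((estimates - true_value) ** 2))
    print(f"{samples:4d} rays/pixel -> RMS noise {rms_noise:.4f}")
```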
Maybe we can talk like this 10 years from now or something, but general rendering power still matters massively today.
> It still matters a lot, but it matters less for the most advanced elements of rendering. All of the hard stuff (GI, shadows, reflections) needs some sort of ray-querying engine, unless you want to go back to baked lighting, which is an option if you don't mind static environments.
>
> I would love to see a new game go hard using classic rendering techniques so we can have an actual counterpoint, but I doubt that's going to happen, because those techniques are fundamentally limited in what they can achieve. It's not worth it.

Well, I'm personally not understanding how 'ray tracing' capabilities are somehow not included within the 'raw power' aspect.
AI is improving ray tracing much like AI is improving rasterized graphics, so I don't think that's a good argument. Even Nvidia seems to think raw power for ray tracing still matters, or else they wouldn't have bothered improving their ray tracing cores for Blackwell.
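To unpack the 'ray querying engine' bit from the quote above: at its core a ray query is just a visibility test between a shading point and something else (a light, the rest of the scene). The toy below does that against a single sphere on the CPU - a real engine does the same test against a BVH of millions of triangles, which is what the RT cores accelerate:

```python
# Toy CPU sketch of what a "ray query" is: a visibility (shadow) test between a
# shading point and a light. Real engines trace against a BVH of triangles in
# hardware; a single sphere occluder is used here purely for illustration.
import numpy as np

def ray_hits_sphere(origin, direction, center, radius, max_t):
    """True if a ray (unit direction) hits the sphere at some 0 < t < max_t."""
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return False                      # the ray misses the sphere entirely
    t = -b - np.sqrt(disc)                # nearest intersection distance
    return 0.0 < t < max_t

def point_is_shadowed(point, light_pos, occluder_center, occluder_radius):
    to_light = light_pos - point
    dist = np.linalg.norm(to_light)
    return ray_hits_sphere(point, to_light / dist,
                           occluder_center, occluder_radius, dist)

# A shading point, a light above it, and a sphere sitting in between -> shadowed.
print(point_is_shadowed(np.array([0.0, 0.0, 0.0]),
                        np.array([0.0, 10.0, 0.0]),
                        np.array([0.0, 5.0, 0.0]), 1.0))   # True
```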
> It's probably best to talk about the 3 categories separately:
>
> 1) Raw shader flops
> 2) Fixed-function acceleration of parts of the rendering algorithm (RT etc.)
> 3) AI-based image generation (which needs specific fixed-function blocks)
>
> As of today and for the foreseeable future, the cost of a transistor isn't going to improve much. So the question is, what's the best way to split the precious transistor budget between the above 3 categories? For sure the biggest bang for the buck seems to be with #3 -- with caveats, of course.
>
> Imagine a hypothetical scenario -- what if the transistor budget for the 6090 were exactly the same as the 5090's, and there were no clock speed increases either? How would you redistribute the resources on the GPU? Some may want to go all-in on #1. Others may want to reduce #1 and #2 in favor of more #3.

I don't think the cost of transistors not improving is the entire reason we can't have better-value GPUs (or rather, better improvements in performance per dollar over time than we're currently getting), but that's a headache of an argument to deal with on this forum, where 'greed' is basically not a concept that exists or could even theoretically exist.

But ignoring all that, my real point was that 'raw power' still matters, be it rasterization or ray tracing power. Even if its role slowly declines over time, it's not anywhere close to being completely irrelevant yet, which is what the original claim was saying. I was merely disagreeing with the general extremeness of the argument.
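Just to make the quoted thought experiment concrete, "same budget, different split" can be written down directly. The total and the splits below are made-up placeholders, and nothing here claims anything about what performance each allocation would actually buy:

```python
# Toy framing of the quoted thought experiment: a fixed transistor budget split
# across the three categories. Total and splits are made-up placeholders; this
# does not model what performance each allocation would actually deliver.
BUDGET = 90_000_000_000   # hypothetical total transistor count, identical in both scenarios

splits = {
    "all-in on shader flops": {"shader_alus": 0.80, "rt_fixed_function": 0.10, "ai_blocks": 0.10},
    "lean harder on AI":      {"shader_alus": 0.50, "rt_fixed_function": 0.15, "ai_blocks": 0.35},
}

for name, split in splits.items():
    assert abs(sum(split.values()) - 1.0) < 1e-9        # the budget is fixed; only the split moves
    allocation = {block: int(frac * BUDGET) for block, frac in split.items()}
    print(name, allocation)
```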
> I don't think it's misinformation from that perspective; we need a new medium in which to compare. I don't think raw power is worth discussing any further.

Well, then I hope we'll get SSIM graphs vs ground-truth rendering soon.
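For anyone wanting to try that, SSIM against a reference render is straightforward to compute offline with scikit-image. The frames below are synthetic stand-ins; in practice you'd load matched screenshots (a native/reference capture vs the DLSS or FSR output):

```python
# Minimal sketch: score an upscaled frame against a ground-truth render with SSIM.
# The two "frames" are synthetic stand-ins; swap in matched screenshots in practice.
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
ground_truth = rng.random((540, 960, 3)).astype(np.float32)       # reference render
# Pretend this is the upscaler's output: the reference plus some reconstruction error.
upscaled = np.clip(ground_truth + rng.normal(0.0, 0.05, ground_truth.shape),
                   0.0, 1.0).astype(np.float32)

score = structural_similarity(ground_truth, upscaled, channel_axis=-1, data_range=1.0)
print(f"SSIM vs ground truth: {score:.4f}")   # 1.0 would mean structurally identical
```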