CES 2025 Thread (AMD, Intel, Nvidia, and others!)

So the RTX Kit technologies such as neural shaders/textures/materials, texture compression, and RTX texture filtering are only supported on the 50 series?
 
Microsoft is adding "Cooperative Vector" support to DirectX, which lets shaders use the GPU's tensor cores. That sounds like it should work on all GPUs with tensor cores, but the article has wording like "in NVIDIA’s new RTX 50-series hardware", so I don't know.
I expect all RTX GPUs will be able to use DXCV for the Neural Radiance Cache. Previously it used tiny-cuda-nn, which relies on the CUDA runtime.
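For anyone wondering what "shaders using tensor cores" actually boils down to, here's a minimal CUDA sketch of the underlying primitive: a warp cooperatively issuing a small matrix-multiply-accumulate to the tensor cores, which is roughly what one tiny MLP layer for something like NRC reduces to. This uses the existing CUDA WMMA path, not the new DirectX Cooperative Vector API, and is illustrative only:

// Compile for sm_70 or newer. One warp multiplies a 16x16 FP16 weight tile by
// a 16x16 FP16 activation tile, accumulating in FP32 -- roughly one step of a
// tiny MLP layer. Illustrative only, not NVIDIA's actual NRC code.
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

__global__ void tiny_mlp_layer(const __half* W, const __half* X, float* Y) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, __half, wmma::row_major> w;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, __half, wmma::col_major> x;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;
    wmma::fill_fragment(acc, 0.0f);
    wmma::load_matrix_sync(w, W, 16);   // leading dimension of the 16x16 tile
    wmma::load_matrix_sync(x, X, 16);
    wmma::mma_sync(acc, w, x, acc);     // the part that runs on tensor cores
    wmma::store_matrix_sync(Y, acc, 16, wmma::mem_row_major);
}
// Launch with a single warp, e.g. tiny_mlp_layer<<<1, 32>>>(dW, dX, dY);

As I understand it, Cooperative Vectors is a way to express this kind of operation from HLSL without going through CUDA, which is why it matters for making NRC and neural shading vendor-neutral.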
 
It's saddening how tech media seems to be mostly parroting NVIDIA's obvious lie about the RTX 5070 being equivalent to the RTX 4090. This is a misinformation campaign at its purest.
 
It's saddening how tech media seems to be mostly parroting NVIDIA's obvious lie about the RTX 5070 being equivalent to the RTX 4090. This is a misinformation campaign at its purest.
I mean, it's obviously going to be with AI enabled. There's frankly just no way it's going to have the same amount of raw power.

I think people need to get used to this concept. We have no other direction to head in at this point; raw power is cost-prohibitive in the consumer market, the console market, etc. And plainly, it's getting more and more difficult to scale up fidelity and resolution due to a huge number of bottlenecks. While these are AI-generated pixels, they get the job done.

I don't think it's misinformation from that perspective; we need a new basis on which to compare these products. I don't think raw power is worth discussing any further.
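To put some made-up, purely illustrative numbers on what "equivalent with AI enabled" can mean: multi-frame generation multiplies the displayed frame rate without touching the underlying render rate, so two very differently sized GPUs can land on the same on-screen number.

#include <cstdio>

// Illustrative only: displayed frame rate when frame generation inserts
// k AI-generated frames after every conventionally rendered frame.
// The inputs below are placeholders, not benchmarks of any real GPU.
double displayed_fps(double rendered_fps, int generated_per_rendered) {
    return rendered_fps * (1 + generated_per_rendered);
}

int main() {
    // A weaker GPU rendering 30 fps with 3 generated frames per rendered frame
    // shows the same 120 fps as a stronger GPU rendering 60 fps with 1.
    std::printf("%.0f fps vs %.0f fps\n", displayed_fps(30, 3), displayed_fps(60, 1));
    return 0;
}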
 
I don't think raw power is worth discussing any further
Exactly. Such an approach has already led us to some questionable comparisons, like DLSS2+ vs FSR2+ at a similar scaling level - which has never produced comparable results in IQ to begin with. Framegen is here to stay, and all the other AI-based tech will inevitably gain more and more traction. Just ignoring that because other cards can't do it is a completely wrong way to compare the products.
 
I mean, it's obviously going to be with AI enabled. There's frankly just no way it's going to have the same amount of raw power.
Well, we know that, but it's likely going to fool plenty of other people who don't know better.

'Cause Nvidia actually used to do this. 970 = OG Titan. 1070 = Maxwell Titan.

Now they're selling us a 5060 renamed as a 5070 and telling us it'll perform like a 4090 (aka the Lovelace Titan), even though it'll probably be more like a 4070 Ti in reality. I would definitely rank this well into the 'straight-up dishonest' category rather than just some cheeky marketing.
 
Well, we know that, but it's likely going to fool plenty of other people who don't know better.

'Cause Nvidia actually used to do this. 970 = OG Titan. 1070 = Maxwell Titan.

Now they're selling us a 5060 renamed as a 5070 and telling us it'll perform like a 4090 (aka the Lovelace Titan), even though it'll probably be more like a 4070 Ti in reality. I would definitely rank this well into the 'straight-up dishonest' category rather than just some cheeky marketing.
We'd need to wait for benchmarks before passing judgement.
 
Not even ray tracing? Just straight up, the ONLY thing worth talking about anymore is AI?

Maybe we can talk like this 10 years from now or something, but general rendering power still matters massively today.
Well, AI is being used to improve Ray Tracing, so having more AI power would improve RT performance in this case.
If the 5000 series is a bridge too far for folks, the 6000 series will be significantly more AI-based. Nvidia is currently playing a leadership role in defining where computing is headed, and unless we have a breakthrough in silicon, AI is the only piece of software kit that can provide super-high computational ability at a fraction of the power and silicon cost.

It's hard to imagine another IHV going in a different direction and getting better performance than AI-generated pixels at the same power, performance, and price points.
 
Maybe we can talk like this 10 years from now or something, but general rendering power still matters massively today.

It still matters a lot, but it matters less for the most advanced elements of rendering. All of the hard stuff (GI, shadows, reflections) needs some sort of ray-querying engine, unless you want to go back to baked lighting, which is an option if you don’t mind static environments.

I would love to see a new game go hard using classic rendering techniques so we can have an actual counterpoint, but I doubt that’s going to happen, because those techniques are fundamentally limited in what they can achieve. It’s not worth it.
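As a toy illustration of that divide (hypothetical scene and function names, nothing from any real engine): baked lighting is a precomputed lookup that can't react to anything moving, while dynamic shadows/GI have to ask a visibility question against the live scene every frame, and that query is exactly what RT hardware accelerates.

// Baked approach: lighting was computed offline and stored in a lightmap;
// at runtime it is a cheap array/texture fetch, but the world must stay static.
float baked_light(const float* lightmap, int width, int u, int v) {
    return lightmap[v * width + u];
}

struct Vec3 { float x, y, z; };
struct Sphere { Vec3 center; float radius; };

// Dynamic approach: ask, right now, whether anything blocks the path from the
// shaded point toward the light (dir is assumed normalized). This is the
// "ray querying" part; a real engine would trace against a BVH of the whole
// scene instead of a handful of spheres.
bool shadow_ray_occluded(Vec3 p, Vec3 dir, const Sphere* scene, int count) {
    for (int i = 0; i < count; ++i) {
        Vec3 oc = { p.x - scene[i].center.x,
                    p.y - scene[i].center.y,
                    p.z - scene[i].center.z };
        float b = oc.x * dir.x + oc.y * dir.y + oc.z * dir.z;
        float c = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z
                  - scene[i].radius * scene[i].radius;
        if (b * b - c >= 0.0f && -b > 0.0f) return true;  // hit in front of p
    }
    return false;
}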
 
It still matters a lot, but it matters less for the most advanced elements of rendering. All of the hard stuff (GI, shadows, reflections) needs some sort of ray-querying engine, unless you want to go back to baked lighting, which is an option if you don’t mind static environments.

I would love to see a new game go hard using classic rendering techniques so we can have an actual counterpoint, but I doubt that’s going to happen, because those techniques are fundamentally limited in what they can achieve. It’s not worth it.
Well, I personally don't understand how 'ray tracing' capabilities are somehow not included within the 'raw power' aspect.

AI is improving ray tracing much like AI is improving rasterized graphics, so I don't think that's a good argument. Even Nvidia seems to think raw power for ray tracing still matters, or else they wouldn't have bothered improving their ray tracing cores for Blackwell.
 
Well, I personally don't understand how 'ray tracing' capabilities are somehow not included within the 'raw power' aspect.

AI is improving ray tracing much like AI is improving rasterized graphics, so I don't think that's a good argument. Even Nvidia seems to think raw power for ray tracing still matters, or else they wouldn't have bothered improving their ray tracing cores for Blackwell.
It's probably best to talk about the 3 categories separately:
1) Raw shader flops
2) Fixed-function acceleration of parts of the rendering algorithm (RT etc.)
3) AI-based image generation (which needs specific fixed-function blocks)

As of today and for the foreseeable future, the cost of a transistor isn't going to improve much. So the question is: what's the best way to split the precious transistor budget between the above 3 categories? For sure, the biggest bang for the buck seems to be #3 -- with caveats, of course.

Imagine a hypothetical scenario -- what if the transistor budget for the 6090 were exactly the same as the 5090's, with no clock speed increases either? How would you redistribute the resources on the GPU? Some may want to go all-in on #1. Others may want to reduce #1 and #2 in favor of more #3.
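To make the thought experiment concrete, here's a deliberately crude toy model -- the splits and the linear area-to-throughput scaling are invented purely for illustration, not estimates of any real die:

#include <cstdio>

// Toy model: pretend each block's throughput scales linearly with the share of
// a fixed transistor budget it receives. A gross simplification, but it shows
// what "reduce #1 and #2 in favor of #3" trades away.
struct Split { double shader, rt, tensor; };  // fractions of the budget, sum to 1.0

void report(const char* name, Split base, Split next) {
    std::printf("%s: shader x%.2f, RT x%.2f, tensor x%.2f\n", name,
                next.shader / base.shader, next.rt / base.rt,
                next.tensor / base.tensor);
}

int main() {
    Split current = {0.50, 0.25, 0.25};   // hypothetical starting split
    Split all_in  = {0.70, 0.15, 0.15};   // "go all-in on #1"
    Split lean_ai = {0.35, 0.25, 0.40};   // "shrink #1 in favor of #3"
    report("all-in on shaders", current, all_in);
    report("lean into AI     ", current, lean_ai);
    return 0;
}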
 
Why imagine? Blackwell is exactly that. Raw shader flops are nearly identical outside of GB202, but the RT cores (1.5x improvement) and Tensor cores (2x with FP4) show a much bigger advancement.
 
Well, I personally don't understand how 'ray tracing' capabilities are somehow not included within the 'raw power' aspect.

Raw power matters a ton for RT. We’re at the very early stages of ramping up raw RT throughput.

AI is improving ray tracing much like AI is improving rasterized graphics, so I don't think that's a good argument. Even Nvidia seems to think raw power for ray tracing still matters, or else they wouldn't have bothered improving their ray tracing cores for Blackwell.

Yeah. We need a lot more rays to feed the magic AI.
 
It's probably best to talk about the 3 categories separately:
1) Raw shader flops
2) Fixed-function acceleration of parts of the rendering algorithm (RT etc.)
3) AI-based image generation (which needs specific fixed-function blocks)

As of today and for the foreseeable future, the cost of a transistor isn't going to improve much. So the question is: what's the best way to split the precious transistor budget between the above 3 categories? For sure, the biggest bang for the buck seems to be #3 -- with caveats, of course.

Imagine a hypothetical scenario -- what if the transistor budget for the 6090 were exactly the same as the 5090's, with no clock speed increases either? How would you redistribute the resources on the GPU? Some may want to go all-in on #1. Others may want to reduce #1 and #2 in favor of more #3.
I don't think the cost of transistors not improving is the entire reason we can't have better-value GPUs (or rather, better improvements in performance per dollar over time than we're currently getting), but that's a headache of an argument to deal with on this forum, where 'greed' is basically not a concept that exists or could even theoretically exist. :p

But ignoring all that, my real point was that 'raw power' still matters, be it rasterization or ray tracing power. Even if its role slowly declines over time, it's not anywhere close to being completely irrelevant yet, which is what the original claim was saying. I was merely disagreeing with the general extremeness of the argument.
 
But ignoring all that, my real point was that 'raw power' still matters, be it rasterization or ray tracing power. Even if its role slowly declines over time, it's not anywhere close to being completely irrelevant yet, which is what the original claim was saying. I was merely disagreeing with the general extremeness of the argument.

The problem is that while shader performance is easier to quantify (there are some caveats, but fortunately most shader unit designs today are quite sane), it's much harder to quantify raw "ray tracing" performance. I mean, what does "318 TFLOPS of ray tracing performance" mean? How is it calculated? Can you compare it directly to other vendors' numbers? Probably not. This makes such numbers very marketing-oriented. In this regard, the so-called "AI TOPS" figure is actually better because it's easier to quantify (although, since there's currently no standard between vendors, it's still difficult to compare them: what does "TOPS" mean here? FP4? INT8? Who knows?).
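For what it's worth, a peak "TOPS" headline usually decomposes roughly as units x MACs-per-unit-per-clock x 2 (multiply + add) x clock, sometimes doubled again for structured sparsity. A sketch with placeholder inputs (not any vendor's actual spec sheet) shows why the quoted precision matters so much:

#include <cstdio>

// Illustrative only: all inputs are placeholders, not real specs for any GPU.
double peak_tops(int units, double macs_per_unit_per_clock,
                 double clock_ghz, double sparsity_factor) {
    double ops_per_second = units * macs_per_unit_per_clock * 2.0  // multiply + add
                            * clock_ghz * 1e9;
    return ops_per_second * sparsity_factor / 1e12;
}

int main() {
    // Halving the precision (e.g. FP8 -> FP4) typically doubles the MACs each
    // unit can do per clock, so the headline number doubles with no extra silicon.
    std::printf("FP8: %.0f TOPS, FP4: %.0f TOPS\n",
                peak_tops(400, 512.0, 2.5, 1.0),
                peak_tops(400, 1024.0, 2.5, 1.0));
    return 0;
}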
 