One thing is for sure... Pushing GPUs too far isn't a new thing. In fact, it's the norm. And we should be laughing at their desperation to win. Or something.

Quad SLI on cards each drawing 250W, anyone? I still don't understand why people are so surprised by GPUs hitting 500W. They have been doing this for ages, and no one is forcing you to buy these products.
I'm glad they usually include a quiet mode.
It’s only the norm when needed to compete. Nvidia didn't need to do this with Kepler up through Turing. They have gone back to the ways of the Fermi.
It is a good point, but the GTX 480 was problematic for them and was pushed to the limit so they wouldn't fall behind. The GTX 580 is essentially the same chip but uses less power, is faster, and is quieter (than both the 480 and the competition). The 580 also has a Furmark limiter.

The interesting thing about this perception (which I see brought up a lot, especially in comparison to how "light" Pascal was supposedly configured) is that the GTX 480 (Fermi) has the same official 250W TDP rating as the 780 Ti, 980 Ti, 1080 Ti and 2080 Ti. It also draws under that in typical gaming workloads. The one caveat is that back then power limiters did not function like they do now, so you could blow past that figure via tests such as Furmark.
If you don't believe me: https://www.techpowerup.com/review/nvidia-geforce-gtx-480-fermi/30.htm
Another source: https://www.guru3d.com/articles-pages/asus-geforce-gtx-480-enggtx480-review,6.html
And for the 1080 Ti: https://www.techpowerup.com/review/nvidia-geforce-gtx-1080-ti/28.html
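To make the headroom argument above concrete, here is a small Python sketch of the comparison being made: a card's official TDP versus its draw in a typical game versus an unthrottled power-virus load like Furmark on a card without a limiter. The wattage figures in the dictionary are placeholders for illustration, not measured values; the linked TechPowerUp and Guru3D reviews have real numbers.

```python
# Illustrative only: the load wattages below are placeholders, not measurements.
# The point being argued: average gaming draw can sit under the official TDP,
# while a power-virus workload on a card without a limiter can exceed it.

TDP_W = 250  # official rating shared by GTX 480, 780 Ti, 980 Ti, 1080 Ti, 2080 Ti

loads = {
    "typical gaming (placeholder)": 235,
    "power virus, no limiter (placeholder)": 300,
}

for name, draw in loads.items():
    delta = draw - TDP_W
    status = "over TDP" if delta > 0 else "within TDP"
    print(f"{name}: {draw} W -> {status} by {abs(delta)} W")
```

Running it prints one line per load, showing the gaming case under the 250W rating and the power-virus case over it, which is the whole caveat about pre-limiter cards.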
I'm personally glad you're personally glad for RT as well. Make a new thread about it. Just think about all the thumbs-upping you guys could do with each other.

Is this really necessary? I can get such posts in the RDNA3 thread, but in Lovelace's?
The difference between the 4090 and 4080 16 GB is crazy. Ada is clearly a spectacular piece of silicon, but with the current lineup I can't help but feel it's pricing itself into irrelevance. I really can't see how DLSS 3, let alone some of the more obscure features in Ada, can ever come close to being mainstream when the GPUs are out of reach of 99% of PC gamers.
It won't always be that way.
And I'm just guessing here, but I think if you've gone through the trouble of integrating DLSS or any of the other temporal reconstruction techniques out there, it's probably not THAT much more effort to implement DLSS 3... especially when you have Nvidia pushing for adoption. It's not just good for gamers; it's an easy marketing bullet point for the game as well.
The RT performance in Port Royal doesn't look particularly impressive, though. I was expecting to see the largest difference there, given that these are 3rd-generation RT cores.

Do we even know what they have improved with these "new" cores?