XeSS has Frame Generation too.

It is decent, but DLSS still has superior upscaling, and it has Ray Reconstruction and Frame Generation.
What's more, the transition to the transformer model will boost DLSS quality in all categories even further.
NVIDIA has a big supercomputer running 24/7, 365 days a year improving DLSS. And it's been doing that for six years.
Yeah I know but presumably Intel hasn’t been training their model for 6 years so maybe it doesn’t take much to get decent results. 80/20 rule.
The answer is pretty obvious - until whoever keeps leaking them covers their shorts! (Only half joking).

They have moved forward their plans in a great way: Blackwell Ultra (B300) is coming up 6 months early, and Rubin is the same. It's the same situation as with the H100 and H200. NVIDIA keeps pumping out new hardware, and customers buy what's available according to their budget, order volume, and development plans.
As for reports of delay:
Whatever happened to the idea of just investing in something for long-term benefits?

This is one of the reasons why, when I see people saying something like this should be open-sourced or made vendor-neutral, I ask them about specifics. I know the consumer side might not want to hear this, but sans directly monetizing software improvements like this, it's going to need to be captured via the margins on hardware.
At least this is marginally more useful than mining crypto.

The crazy spending on AI continues: after Microsoft pledged $80 billion for AI in 2025, Meta now pledges $65 billion in 2025.
Meta will set up a massive 2-gigawatt data center, which is "so large it would cover a significant part of Manhattan," according to Zuckerberg. Roughly 1 gigawatt of this computing capacity will be online by the end of this year, and it will use more than a whopping 1.3 million GPUs.
Zuckerberg Shares $65 Billion 2025 Spending Plan For 1.3 Million GPU AI Datacenter
Meta CEO Mark Zuckerberg ups the stakes and announces a $65 billion capital expenditure plan for 2025 to set up a 2 GW data center. (wccftech.com)
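For a sense of scale, here is a rough back-of-the-envelope check of those figures; the split between GPU board power and facility overhead below is my own assumption, not something from the article.

```python
# Rough sanity check of the quoted figures: 2 GW of capacity vs. 1.3 million GPUs.
total_power_w = 2e9    # 2 gigawatts, as quoted
gpu_count = 1.3e6      # "more than 1.3 million GPUs"

watts_per_gpu = total_power_w / gpu_count
print(f"~{watts_per_gpu:.0f} W of facility power per GPU")  # ~1538 W

# Assuming an H100-class accelerator at roughly 700 W of board power, about half
# of that per-GPU budget would be left for CPUs, networking, storage, and cooling.
```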
I would argue they have demonstrated the ability to distill ChatGPT models, using its answers for training and nothing more. (The R1 preview even displayed OpenAI's prices for API calls and answered that it was created by OpenAI.)

China has shown that it is necessary to invest in better algorithms and code that make the most of the available hardware.
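Put concretely, the claim is that the student's only training signal is (prompt, answer) pairs sampled from the teacher. A toy sketch of that data flow, where every name is a made-up placeholder rather than anyone's real pipeline:

```python
# Sequence-level distillation in its simplest form: the student never sees human
# labels, only the teacher's own answers. This is a toy sketch of the data flow;
# teacher_answer() and finetune_student() are placeholders, not a real pipeline.

def teacher_answer(prompt: str) -> str:
    # Stand-in for querying the large proprietary model's API.
    return f"(teacher's answer to: {prompt})"

def build_distillation_corpus(prompts: list[str]) -> list[tuple[str, str]]:
    # The distillation corpus is nothing more than prompts paired with teacher outputs.
    return [(prompt, teacher_answer(prompt)) for prompt in prompts]

def finetune_student(corpus: list[tuple[str, str]]) -> None:
    # In practice: ordinary supervised fine-tuning of a smaller model on these pairs.
    for prompt, answer in corpus:
        print(f"train on: {prompt!r} -> {answer!r}")

if __name__ == "__main__":
    finetune_student(build_distillation_corpus(["What is 2 + 2?", "Who created you?"]))
```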
It's actually expanding... now AI is going to be available on even more devices.

The AI bubble seems gone. It may seem brutal
That's super interesting. So the GPT models trained themselves on human-generated data, while the R1 trained itself on GPT-generated data. It's like a cheap knockoff of GPT.
But with the addition of the iterative "reasoning" pass, the output of the discerning model is stable enough to better encode "logic" by example in the primary transformation, while also filtering out a lot of the garbage the input model was confronted with in its own training corpus.

These distilled models cannot advance the state-of-the-art in learning.
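One way to read that argument: sample several candidate answers per problem, keep only those that pass a verification check, and train on the survivors. A toy illustration with arithmetic problems; the noisy teacher and the verifier are stand-ins chosen just to show the filtering step:

```python
import random

# Toy illustration of filtering a teacher's outputs before distillation: sample
# several candidates per problem, keep only answers that verify, and use the
# survivors as training data. The "teacher" here is a deliberately noisy stand-in.

def noisy_teacher(a: int, b: int) -> int:
    # Right ~80% of the time, off by one otherwise.
    return a + b if random.random() < 0.8 else a + b + random.choice([-1, 1])

def build_filtered_corpus(problems, samples_per_problem=8):
    corpus = []
    for a, b in problems:
        for _ in range(samples_per_problem):
            candidate = noisy_teacher(a, b)
            if candidate == a + b:  # the verification pass filters out garbage
                corpus.append((f"{a} + {b} = ?", str(candidate)))
                break               # keep one verified answer per problem
    return corpus

if __name__ == "__main__":
    random.seed(0)
    problems = [(random.randint(1, 99), random.randint(1, 99)) for _ in range(5)]
    for prompt, answer in build_filtered_corpus(problems):
        print(prompt, "->", answer)
```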
Or Jevons paradox will kick in, and inference demand will increase significantly, necessitating even more compute to be deployed.

Why is all this so worrying for Nvidia? Because it means that, at least for inference, peak computing power demand is over.
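The Jevons-paradox point in toy numbers (both figures below are made up purely for illustration):

```python
# Illustrative-only numbers: if efficiency gains make each query 10x cheaper but
# the lower price draws in 30x more usage, total compute demand still triples.
cost_per_query_old = 1.0   # arbitrary compute units per query
cost_per_query_new = 0.1   # 10x cheaper after efficiency gains

queries_old = 1_000_000
queries_new = 30_000_000   # assumed demand response to the lower price

print("old total compute:", cost_per_query_old * queries_old)  # 1,000,000.0
print("new total compute:", cost_per_query_new * queries_new)  # 3,000,000.0
```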
Large corporations are still doubling down on compute. People are racing to build AGIs; they are not building large clusters for inference alone. Even China has announced a $100 billion investment in AI.

While there is still some minor demand for "large" models that have been trained against the full corpus
That's not NVIDIA's job; they didn't start this AI cycle. NVIDIA has one task: sell shovels, and that's it. They already have their hands full tuning hardware efficiency.

The real problem for Nvidia is that they don't have any answer at all for the question "how do we get that shit efficient" - only for "how can we throw more hardware at it".
Yea, that's not it at all. Many companies like Microsoft are reducing their capex for the next fiscal year, as investors are asking for returns on the AI spending. This will affect Nvidia's growth. Second of all, there's a move to limit Nvidia's sales to China, further expanding the current restrictions, which will limit growth. DeepSeek, while relevant, is mostly a smokescreen. Depending on how expansive the government's restrictions are, I wouldn't be surprised if Nvidia is sub-$100 at some point this year.
This selloff is just a knee-jerk panic reaction from the less educated masses. The whole semiconductor market is down as a result.