DegustatoR
Legend
> It is wild we haven’t gotten a single credible performance leak yet.
Not that wild. Nvidia generally doesn't release any performance numbers or even drivers prior to the announcement.
> Not that wild. Nvidia generally doesn't release any performance numbers or even drivers prior to the announcement.
Yeah, but the last two releases have had benches leaked months in advance. Maybe they made adjustments to how they handle the roll-out to partners and OEMs to keep a lid on things.
Do you think we will get anything new or just more generated frames/higher quality upscaling?

> Perhaps new hardware blocks for "Neural rendering"?
No, that's just the tensor cores doing their usual business. If NVidia decides to make that Blackwell only, then it's a pure software restriction. I expect only a minor reduction in the overhead of switching between tensors, which is effectively equivalent to being able to switch between multiple classic textures. So possibly some memory management detail to permit efficient arrays of tensors, in a use case similar to array textures. The hidden (firmware internal) details usually deal with the prefetch logic necessary to have the right amount of data in L1 at the right point in time.
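To make the "arrays of tensors, in a use case similar to array textures" point concrete, here is a minimal CPU-side C++ sketch. Everything in it (the MaterialDecoder layout, the sizes, the names) is my own assumption for illustration, not anything NVIDIA has documented: per-material decoder weights sit in a flat array selected by a material index, the same way an array texture is selected by layer, and "decoding" a pixel is just a small matrix-vector product of the kind tensor cores already accelerate.

```cpp
// Toy model of "an array of tensors indexed like an array texture".
// All names and sizes are illustrative assumptions.
#include <array>
#include <cstdio>
#include <vector>

constexpr int kLatentDim = 8;   // short input vector taken from the G-buffer
constexpr int kOutputDim = 3;   // decoded RGB

// One "tensor" = the weight matrix that encodes one material's texture.
struct MaterialDecoder {
    std::array<float, kOutputDim * kLatentDim> weights;
    std::array<float, kOutputDim> bias;
};

// The "array of tensors": indexed by material id, like layers of an array texture.
using DecoderArray = std::vector<MaterialDecoder>;

// Decoding a pixel is just a small matrix-vector product,
// i.e. the same kind of work tensor cores already do today.
std::array<float, kOutputDim> decode(const MaterialDecoder& d,
                                     const std::array<float, kLatentDim>& latent) {
    std::array<float, kOutputDim> rgb{};
    for (int o = 0; o < kOutputDim; ++o) {
        float acc = d.bias[o];
        for (int i = 0; i < kLatentDim; ++i)
            acc += d.weights[o * kLatentDim + i] * latent[i];
        rgb[o] = acc;
    }
    return rgb;
}

int main() {
    DecoderArray decoders(4);            // four materials, four "layers"
    decoders[2].weights.fill(0.1f);      // fake weights for material #2
    decoders[2].bias = {0.0f, 0.5f, 1.0f};

    std::array<float, kLatentDim> latent{};  // would come from the G-buffer
    latent.fill(0.25f);

    auto rgb = decode(decoders[2], latent);  // "sample layer 2"
    std::printf("decoded rgb: %.2f %.2f %.2f\n", rgb[0], rgb[1], rgb[2]);
}
```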
> All videocards are capable of "neural rendering" and of course there will be "advanced DLSS" and "enhanced RT" on future products.
It's slightly more complicated than that. "Neural rendering" with a single texture from a deferred-texturing-style compute shader is really simple and brutally efficient, since you don't even need to reload the expensive tensor that has the texture encoded, only the short input vector from the G-buffer. But doing that for a setup with many textures needs some reasonably smart deferred batching by texture. You are effectively trading mip-chains and their very small L1 footprint (much waste in VRAM, very little in L1) for something that isn't even remotely as L1-cache-friendly (but much friendlier on VRAM).
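And a rough sketch of the "deferred batching by texture" part, again with an assumed, purely illustrative data layout (GBufferPixel and binByMaterial are hypothetical names): instead of decoding pixels in screen order and touching a different weight tensor at nearly every pixel, pixel indices are first binned by material id, so each decoder's weights only need to be hot once per bin. That is the L1-versus-VRAM trade-off described above.

```cpp
// Toy sketch of deferred batching by material/texture id.
// Assumed, simplified data layout; purely illustrative.
#include <cstdio>
#include <unordered_map>
#include <vector>

struct GBufferPixel {
    int materialId;          // which "neural texture" this pixel uses
    float latent[8];         // short per-pixel input vector (unused in this sketch)
};

// Group pixel indices by material, so each material's (comparatively large)
// weight tensor only needs to be resident once while its whole bin is decoded,
// instead of being re-touched every time the material changes in screen order.
std::unordered_map<int, std::vector<size_t>>
binByMaterial(const std::vector<GBufferPixel>& gbuffer) {
    std::unordered_map<int, std::vector<size_t>> bins;
    for (size_t i = 0; i < gbuffer.size(); ++i)
        bins[gbuffer[i].materialId].push_back(i);
    return bins;
}

int main() {
    // Fake G-buffer: pixels alternating between two materials.
    std::vector<GBufferPixel> gbuffer(8);
    for (size_t i = 0; i < gbuffer.size(); ++i)
        gbuffer[i].materialId = static_cast<int>(i % 2);

    auto bins = binByMaterial(gbuffer);
    for (const auto& [material, pixels] : bins) {
        // loadDecoderWeights(material);  // hypothetical: done once per bin
        std::printf("material %d: decoding %zu pixels with one resident tensor\n",
                    material, pixels.size());
    }
}
```

In a real renderer the binning would presumably happen on the GPU itself (e.g. a per-material compaction pass); the host-side version just keeps the idea visible.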
> No no, something is coming ... NVIDIA just released a new SDK for "In-Game Inference".
Red herring. Unless I'm mistaken, the motivation for that API doesn't appear to be rendering at all, but rather to enable efficient scheduling of generic inferencing tasks related to the game logic. Not so sure that will even start to have any relevance for the next 2-3 years, especially considering that simple AI-related inference doesn't need to be offloaded to the GPU in the first place. Plus, hiding the details behind an opaque, proprietary interface sounds like a stillbirth which will at most hinder integration of costlier AI features such as dynamic voice and dialogue synthesis, which is what we are probably going to see more tech demos for. Then again, those options are so VRAM hungry they don't fit the lower half of the product range either. (And if they do fit, they are mostly still cheap enough to run somewhere on the CPU anyway.)
NVIDIA In-Game Inferencing SDK
Integrate AI models into apps to manage deployment across devices. (developer.nvidia.com)
I suspect there's going to be a little trick for getting it fast enough even without aggressive batching.
> Yeah funny how it's impossible to find a simple queue system at US retailers. It's every man or woman for themselves. So uncivilized.
During the shortages AMD.com had a semi-functional queue system when they had their drops once a week on Thursday. That's the only one I saw; EVGA had a different system where you joined a waitlist as well.
I had my heart set on a 5090 but $4000 would definitely make me think twice.
> Not that wild. Nvidia generally doesn't release any performance numbers or even drivers prior to the announcement.
I remember having very accurate information on Ampere and Ada prior to the announcement. Even more information from the AMD side. It's like crickets this time; feels like enthusiasm is rather low this time around.
> I remember having very accurate information on Ampere and Ada prior to the announcement.
I remember that it was not just inaccurate but in fact completely fake.