AMD's FSR 3 upscaling and frame interpolation *spawn

2018 is when Ampere launched, and it had the hardware support for serious AI workloads. Even if DLSS wouldn't become good until 2020 or so, they already had a vision for those features.
That's a completely baseless claim in itself, or you could say the same thing about when they first added units capable of tensor calculations (no, they weren't tensor units).
Not related to the frame gen conversation, but didn't DLSS use tensor cores in 2018 for training the algorithms? (B3D 2018 thread)
Yes, among other things, none related to frame gen.
And in January 2024 we got Lossless Scaling adding frame gen from just some guy.
Yep
 
That's a completely baseless claim in itself, or you could say the same thing about when they first added units capable of tensor calculations (no, they weren't tensor units).
I wrote Ampere, I meant Turing, sorry.

Anyway, I don't remember the specifics of it all, but at the 2018 Turing presentation, they started talking about AI.


I'm praising the vision. I have a thousand problems with Nvidia (where is my $300/€300 10/12 GB card, Nvidia?), but they are dragging tech in games forward, no doubt about it.

Also, I'm not talking about frame gen in this context, more about using AI to improve the algorithms.
 
I wrote Ampere, I meant Turing, sorry.

Anyway, I don't remember the specifics of it all, but at the 2018 Turing presentation, they started talking about AI.
While they're definitely pushing things forward on many levels, matrix (tensor) acceleration, what they're now calling AI, isn't exactly a new thing. Intel had a dedicated chip for it already in the late 80s, DSPs have been used to accelerate OCR forever (which would today fall under the AI umbrella term), and if we want to go more recent, mobile SoCs had dedicated AI acceleration already in 2015, and so on.
 
While they're definitely pushing things forward on many levels, matrix (tensor) acceleration, what they're now calling AI, isn't exactly a new thing. Intel had a dedicated chip for it already in the late 80s, DSPs have been used to accelerate OCR forever (which would today fall under the AI umbrella term), and if we want to go more recent, mobile SoCs had dedicated AI acceleration already in 2015, and so on.

It's not new, but I'm pretty sure when it was added to the GPU, people were saying it was "wasted silicon" or something like that.
 
It's not new, but I'm pretty sure when it was added to the GPU, people were saying it was "wasted silicon" or something like that.
Well, still waiting for that killer application requiring such hardware. Local AI stuff like image generation etc. works quickly enough without dedicated hardware, upscaling is available if you want it, frame gen too. :-?
 
Well, still waiting for that killer application requiring such hardware. Local AI stuff like image generation etc. works quickly enough without dedicated hardware, upscaling is available if you want it, frame gen too. :-?
Are we still against dedicated hardware acceleration for those features in 2024? 😴

PS: yes, in your example, AI image generation works without AI hardware. But why wouldn't you want that to be done faster? Those ray tracing and matrix acceleration structures don't even take that much space to begin with (what was it, 10-15% of the die?).
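For a concrete sense of what the matrix hardware buys you, here's a minimal timing sketch, assuming PyTorch and an NVIDIA GPU with tensor cores (Turing or newer); the matrix size and iteration count are arbitrary. It runs the same large matrix multiply on the plain FP32 CUDA cores and then on the tensor cores via FP16, which is roughly the kind of work DLSS-style inference leans on.

```python
import torch

def time_matmul(dtype, n=4096, iters=50):
    a = torch.randn(n, n, device="cuda", dtype=dtype)
    b = torch.randn(n, n, device="cuda", dtype=dtype)
    for _ in range(5):          # warm-up so launch/allocation overhead isn't timed
        a @ b
    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        a @ b
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters   # milliseconds per matmul

# Keep FP32 matmuls on the regular CUDA cores by disabling the TF32 tensor-core path.
torch.backends.cuda.matmul.allow_tf32 = False
print(f"FP32 (CUDA cores):   {time_matmul(torch.float32):.2f} ms")
print(f"FP16 (tensor cores): {time_matmul(torch.float16):.2f} ms")
```

On tensor-core hardware the FP16 run should come out several times faster, which is the whole argument for spending the die area on those units.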
 
Are we still against dedicated hardware acceleration for those features in 2024? 😴

PS: yes, in your example, AI image generation works without AI hardware. But why wouldn't you want that to be done faster? Those ray tracing and matrix acceleration structures don't even take that much space to begin with (what was it, 10-15% of the die?).
I see much more use for RT than AI acceleration in GPUs; one justifies itself better, at least for now.
 
Superior upscaling/downsampling, superior anti-aliasing, superior frame generation (better quality, better frame pacing), ray reconstruction, vastly superior SDR to HDR conversion, and the list goes on; who knows what in the future?
- Texture upscaling in real time
- Much more efficient and detailed procedural terrain and map creation
- Instant lag-free AI NPC interaction and voice communication

The key words for local artificial intelligence are speed and instant responsiveness.
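To put "speed and instant responsiveness" in per-frame terms, here's a trivial budget sketch with illustrative numbers (the 60 fps target and the 15% share are assumptions, not measurements): anything that runs every frame has to fit in a small slice of the frame time.

```python
# Rough per-frame budget arithmetic (illustrative numbers only).
target_fps = 60
frame_budget_ms = 1000 / target_fps            # ~16.7 ms for the whole frame
ai_share = 0.15                                # assume ~15% of the frame for AI passes
ai_budget_ms = frame_budget_ms * ai_share      # ~2.5 ms for upscaling, denoising, NPC logic, etc.
print(f"frame budget: {frame_budget_ms:.1f} ms, AI budget: {ai_budget_ms:.1f} ms")
```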
 
I see much more use for RT than AI acceleration in GPUs; one justifies itself better, at least for now.
As people above said, even now it has plenty of useful applications.
And in the future, developers like Rockstar, Naughty Dog, The Coalition, and many others will use this hardware in ways that we aren't even thinking about right now. It makes it possible for new and exciting tech to be developed and used.
Developers get new tools, and just for that it's worth it.
 
Are we still against dedicated hardware acceleration for those features in 2024? 😴

PS: yes, in your example, AI image generation works without AI hardware. But why wouldn't you want that to be done faster? Those ray tracing and matrix acceleration structures don't even take that much space to begin with (what was it, 10-15% of the die?).
@Bold Believe it or not, that's the norm amongst many other mainstream hardware designers ...

Nvidia having a relatively fast RT implementation and integrated matrix HW functionality is mostly the exception in the industry. You don't see mobile hardware designers investing all that much in RT acceleration, since it'll never go past the tech demo stage in their case, and they have more pressing problems to worry about than implementing HW for AI upscaling, which sits way down at the bottom of their priority lists. The other hardware vendor who tried doing both of those things currently has unbelievably bad perf/area for their graphics architecture ...
 
@Bold Believe it or not, that's the norm amongst many other mainstream hardware designers ...

Nvidia having a relatively fast RT implementation and integrated matrix HW functionality is mostly the exception in the industry. You don't see mobile hardware designers investing all that much in RT acceleration, since it'll never go past the tech demo stage in their case, and they have more pressing problems to worry about than implementing HW for AI upscaling, which sits way down at the bottom of their priority lists. The other hardware vendor who tried doing both of those things currently has unbelievably bad perf/area for their graphics architecture ...
If Nvidia could do it for not much die area, then Intel, AMD, Qualcomm, Apple, and all the others can do it too. The performance gains are undeniable, unless someone comes out with a compute solution that is somehow faster than dedicated hardware.

Is it too much to want the other manufacturers to compete?
 
If Nvidia could do it for not much die area, then Intel, AMD, Qualcomm, Apple, and all the others can do it too. The performance gains are undeniable, unless someone comes out with a compute solution that is somehow faster than dedicated hardware.

Is it too much to want the other manufacturers to compete?
Not much die area? Then how do you explain why their Tegra designs haven't scored any recent wins in mobile or ultra-portable form factors?
 
Not much die area? Then how do you explain why their Tegra designs haven't scored any recent wins in mobile or ultra-portable form factors?
Well, AFAIK they don't have any SoCs for the mobile or ultra-portable space, as everything is designed for automation (and maybe the Switch 2).
 
Well, AFAIK they don't have any SoCs for the mobile or ultra-portable space, as everything is designed for automation (and maybe the Switch 2).
Even if Nvidia did trim down their designs to just graphics functionality, OEMs still think that they wouldn't be competitive at all in those highly constrained form factors, hence no real demand for them in those cases ...
 
Not much die area? Then how do you explain why their Tegra designs haven't scored any recent wins in mobile or ultra-portable form factors?
I found a die diagram of the Turing cores.

[Image: NVIDIA Turing TU102 die diagram]

Honestly, for how useful and performant they are, they don't take up that much space.

Switch 2 is going to have both RT cores and AI cores for DLSS, so it's possible to use them in a small mobile form factor, of course.

PS: I would take 20% less raster power in exchange for RT and AI cores any day; rasterization is spent.
 
Even if Nvidia did trim down their designs to just graphics functionality, OEMs still think that they wouldn't be competitive at all in those highly constrained form factors, hence no real demand for them in those cases ...
Their architecture in general may not be tuned for those form factors. I would think it shouldn't be a problem to ship an SoC without RT or Tensor cores if required, so that's most likely not the barrier to entry.

And in any case I think this thread is about the PC GPU space, not gaming on phones or tablets.
 
I found a die diagram of the Turing cores.
Diagrams are *NOT* to scale, so I wouldn't infer much about HW unit design complexity from them ...

Would you think RT/tensor cores would be worth the hit of the main compute die being 30% larger?
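As a hedged back-of-envelope on that question: the 30% area figure comes from the line above, while the 4x matrix speedup is purely an illustrative assumption. The point is just that perf/area only pencils out if the workload actually exercises the added units.

```python
# Back-of-envelope perf/area with illustrative numbers only.
def perf_per_area(throughput, area):
    return throughput / area

baseline      = perf_per_area(throughput=1.0, area=1.0)   # raster-only die
raster_on_big = perf_per_area(throughput=1.0, area=1.3)   # pure raster work on the 30% larger die
matrix_on_big = perf_per_area(throughput=4.0, area=1.3)   # assumed 4x matmul speedup from tensor units

print(f"raster perf/area vs baseline: {raster_on_big / baseline:.2f}x")  # ~0.77x
print(f"matrix perf/area vs baseline: {matrix_on_big / baseline:.2f}x")  # ~3.08x
```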
Their architecture in general may not be tuned for those form factors. I would think it shouldn't be a problem to ship an SoC without RT or Tensor cores if required, so that's most likely not the barrier to entry.

And in any case I think this thread is about the PC GPU space, not gaming on phones or tablets.
Even when we're solely discussing desktop graphics, the argument has yet to be settled, since high-end graphics could go in a direction that's not RT-friendly and AI HW integration still may not be worthwhile on lower-end parts ...
 