> And in Jan 24 we got Lossless Scaling adding framegen from just some guy.

Even more impressive.

> 2018 is when ampere launched and it had the hardware support for serious ai workloads. Even if dlss wouldn't become good until 2020 or so, they already had a vision for those features.

That's a completely baseless claim in itself, or you could say the same thing about when they first added units capable of tensor calculations (no, they weren't tensor units).

> Not related to the frame gen conversation, didn't DLSS use tensor cores in 2018 for training the algorithms? (B3D 2018 thread)

Yes, among other things, none related to frame gen.

> And in Jan 24 we got Lossless Scaling adding framegen from just some guy.

Yep.

> That's a completely baseless claim in itself, or you could say the same thing about when they first added units capable of tensor calculations (no, they weren't tensor units).

I wrote ampere, I meant Turing, sorry.

> I wrote ampere, I meant Turing, sorry.

While they're definitely pushing things forward on many levels, matrix (tensor) acceleration, what they're now calling AI, isn't exactly a new thing. Intel had a dedicated chip for it already in the late 80s, DSPs have been used to accelerate OCR forever (which would today fall under the AI umbrella term), and if we want to go more recent, mobile SoCs had dedicated AI acceleration already in 2015, and so on.

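(Purely for illustration, since people keep talking past each other on what this hardware actually does: the primitive these units accelerate is basically a matrix multiply-accumulate, D = A @ B + C. A toy sketch in plain Python/NumPy, sizes and dtypes picked arbitrarily, and nothing hardware-accelerated about it, it only shows the math:)

import numpy as np

# Toy illustration of the accelerated primitive: D = A @ B + C.
# Tensor cores, for instance, do this on small tiles (e.g. 16x16) with
# low-precision inputs and a wider accumulator; plain NumPy here, so this
# demonstrates the operation, not the hardware path.
A = np.random.rand(16, 16).astype(np.float16)
B = np.random.rand(16, 16).astype(np.float16)
C = np.random.rand(16, 16).astype(np.float32)

D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D.dtype, D.shape)
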
Anyway, I don't remember the specifics of it all, but at the 2018 Turing presentation, they started talking about AI.
> It's not new, but I'm pretty sure when it was added to the GPU, people were saying it's "wasted silicon" or something like that.

Well, still waiting for that killer thing requiring such hardware. Local AI stuff like image generation works just fine and quickly enough without dedicated hardware, scaling is available if you want it, frame gen too.

> Well, still waiting for that killer thing requiring such hardware. Local AI stuff like image generation works just fine and quickly enough without dedicated hardware, scaling is available if you want it, frame gen too.

Are we still against dedicated hardware acceleration for those features in 2024?

> Are we still against dedicated hardware acceleration for those features in 2024?

I see much more use for RT than AI acceleration in GPUs; one justifies itself better, at least for now.

PS: yes, in your example, AI image generation works without AI hardware. But why wouldn't you want that to be done faster? Those ray tracing and matrix acceleration structures don't even take up that much space to begin with (what was it, 10-15% of the die?).

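(And to put a rough number on "faster", here's the kind of quick micro-benchmark I mean, assuming an Nvidia GPU and PyTorch with CUDA; the matrix size and iteration count are arbitrary, and actual timings will vary a lot by card and library version. The FP16 case is the one the matrix units can pick up:)

import time
import torch

def bench(dtype, n=4096, iters=50):
    # Same matmul both times; with FP16 inputs the math library can route it
    # through the tensor/matrix units, while FP32 typically runs on the regular
    # shader ALUs (TF32 modes aside). Sizes here are just illustration values.
    a = torch.randn(n, n, device="cuda", dtype=dtype)
    b = torch.randn(n, n, device="cuda", dtype=dtype)
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(iters):
        a @ b
    torch.cuda.synchronize()
    return time.perf_counter() - t0

print("fp32:", bench(torch.float32), "s")
print("fp16:", bench(torch.float16), "s")
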
> Well, still waiting for that killer thing requiring such hardware

Superior upscaling/downsampling, superior anti-aliasing, superior frame generation (better quality, better frame pacing), ray reconstruction, vastly superior SDR to HDR conversion, and the list goes on; who knows what in the future?

> Superior upscaling/downsampling, superior anti-aliasing, superior frame generation (better quality, better frame pacing), ray reconstruction, vastly superior SDR to HDR conversion, and the list goes on; who knows what in the future?

- Texture upscaling in real time

> I see much more use for RT than AI acceleration in GPUs; one justifies itself better, at least for now.

As people above said, even now it has some really useful applications.

> I see much more use for RT than AI acceleration in GPUs; one justifies itself better, at least for now.

Why can't it be both?

> Are we still against dedicated hardware acceleration for those features in 2024?

@Bold Believe it or not, that's the norm amongst many other mainstream hardware designers ...

> @Bold Believe it or not, that's the norm amongst many other mainstream hardware designers ...

If Nvidia could do it for not much die area, then Intel, AMD, Qualcomm, Apple and all the others can do it too. The performance gains are undeniable, unless someone comes out with a compute solution that is somehow faster than dedicated hardware.

Nvidia having a relatively fast RT implementation and integrated matrix HW functionality is mostly the exception in the industry. You don't see mobile hardware designers investing all that much in RT acceleration, since it'll never go past the tech demo stage in their case, and they have more pressing problems to worry about than implementing HW for AI upscaling, which is way down at the bottom of their priority lists. The other hardware vendor who tried doing both of those things currently has unbelievably bad perf/area for their graphics architecture ...

> If Nvidia could do it for not much die area, then Intel, AMD, Qualcomm, Apple and all the others can do it too. The performance gains are undeniable, unless someone comes out with a compute solution that is somehow faster than dedicated hardware.

Not much die area? I don't suppose you can explain, then, why it is that their Tegra designs haven't scored any recent wins in mobile or ultra-portable form factors?

Is it too much to want the other manufacturers to compete?
> Not much die area? I don't suppose you can explain, then, why it is that their Tegra designs haven't scored any recent wins in mobile or ultra-portable form factors?

Well, AFAIK they don't have any SoCs for the mobile or ultra-portable space, as everything is designed for automation (and maybe the Switch 2).

> Well, AFAIK they don't have any SoCs for the mobile or ultra-portable space, as everything is designed for automation (and maybe the Switch 2).

Even if Nvidia did trim down their designs to just graphics functionality, OEMs still think that they wouldn't be competitive at all in those highly constrained form factors, hence no real demand for them in those cases ...

> Not much die area? I don't suppose you can explain, then, why it is that their Tegra designs haven't scored any recent wins in mobile or ultra-portable form factors?

I found a die diagram of the Turing cores.

> Even if Nvidia did trim down their designs to just graphics functionality, OEMs still think that they wouldn't be competitive at all in those highly constrained form factors, hence no real demand for them in those cases ...

Their architecture in general may not be tuned for those form factors. I would think it shouldn't be a problem to ship an SoC without RT or Tensor cores if required, so that's most likely not the barrier to entry.

> I found a die diagram of the Turing cores.

Diagrams are *NOT* to scale, so I wouldn't infer much information from them in terms of HW unit design complexity ...

> Their architecture in general may not be tuned for those form factors. I would think it shouldn't be a problem to ship an SoC without RT or Tensor cores if required, so that's most likely not the barrier to entry.

Even when we're solely discussing desktop graphics, the argument has yet to be settled, since high-end graphics could go in a direction that's not RT friendly, and AI HW integration still may not be worthwhile on lower-end parts ...

And in any case I think this thread is about the PC GPU space, not gaming on phones or tablets.