AMD: RDNA 3 Speculation, Rumours and Discussion

Also, the percentage of games using every last new feature and technology that comes along shrinks progressively with time. You could add matrix-only units, but competitive e-sports games aren't going to use those very meaningfully, if at all. Adding those isn't going to get fps up in Valorant or Dota 2 at low settings; and since you at least theoretically want precisely accurate per-pixel information in those games, are people buying GPUs really going to use AI upscaling, which by its nature is going to give you false information from a bad guess?

I don't know the answer to that. But I do see the use case for such units as at least semi-limited for the next seven+ years. Is it worth it for AMD to add those units to a GPU? I'm unsure it is at the moment. Nvidia has successfully driven the niche high-end GPU market crazy for "Raytracing", but their TPUs have gone largely unnoticed so far, and with something that abstract I'm not sure how successful a PR campaign in that direction would be.

Yeah, there are definitely niches for different markets here, and it does look like AMD is aiming at hitting maximum framerates (eSports scene) while Nvidia is trying to spearhead new rendering methods.
Neither is wrong; it's just catering to different markets. Nvidia took a huge risk to lead this charge on their own, and other companies will be more than willing to dive in and compete after Nvidia has done the heavy lifting. There's no reason for AMD to enter the scene unless they must - that's the best way to look at it, I suppose.

I still think there is a future for accelerators in the AI space; there's a lot more that can be done there with deep learning than just super resolution. It could be used for animation, simulations, game AI, etc., and it has the potential to provide some very good benefits without necessarily increasing costs for studios to support those AIs. I think that's still something that could be appreciated by eSports players etc. Not everyone wants to play on the lowest of low settings.
 
I don't know the answer to that. But I do see the use case for such units as at least semi-limited for the next seven+ years. Is it worth it for AMD to add those units to a GPU? I'm unsure it is at the moment. Nvidia has successfully driven the niche high-end GPU market crazy for "Raytracing", but their TPUs have gone largely unnoticed so far, and with something that abstract I'm not sure how successful a PR campaign in that direction would be.
NV markets TPUs as necessary for RT, because RT is so expensive that upscaling is needed.
To me that's marketing BS, because according to NV themselves, I don't need 4K, so I don't need upscaling either (rough numbers below): http://phrogz.net/tmp/ScreenDensity...sizeUnit:in,axis:diag,distance:31,distUnit:in
IMO, fixed-function units are justified once we have found a use that makes them necessary. For tensor cores, the opposite happened: TPUs came to gaming GPUs without anyone requesting or supporting them, other than NV with DLSS.
Now I do think ML will turn out useful for games, but at the moment it's not there yet, so I don't see a need for AMD to follow just to cater to the marketing promises of others.
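For what it's worth, here is a rough sketch of that viewing-distance argument in Python. The 27" panel size and the ~60 pixels-per-degree rule of thumb for 20/20 vision are my own illustrative assumptions; only the 31" viewing distance comes from the linked calculator.

```python
import math

def pixels_per_degree(diag_in, res_w, res_h, dist_in):
    """Angular pixel density for a flat panel viewed head-on."""
    aspect = res_w / res_h
    width_in = diag_in * aspect / math.sqrt(1 + aspect ** 2)   # horizontal size
    ppi = res_w / width_in                                     # pixels per inch
    inches_per_degree = 2 * dist_in * math.tan(math.radians(0.5))
    return ppi * inches_per_degree

# Assumed 27" monitor at the calculator's 31" viewing distance.
print(pixels_per_degree(27, 2560, 1440, 31))  # ~59 ppd, near the ~60 ppd 20/20 guideline
print(pixels_per_degree(27, 3840, 2160, 31))  # ~88 ppd, beyond what that guideline resolves
```

If 1440p already sits at the acuity rule of thumb for that setup, the extra 4K pixels (and the upscaling needed to afford them) buy very little, which is the gist of the post above.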
 
Now I do think ML will turn out useful for games, but at the moment it's not there yet, so I don't see a need for AMD to follow just to cater to the marketing promises of others.
Nothing will come to games if it's not present in the gaming h/w.
 
IMO, fixed-function units are justified once we have found a use that makes them necessary.

I guess that would be best, but it is a chicken-and-egg thing...
I'm thinking that in the current environment, game development relies far too much on the major off-the-shelf engines & APIs to be able to innovate with regard to rendering.
As for engines & APIs, their development processes are probably too slow, and their design goals ultimately favor offering a uniform experience, regardless of the hardware.
 
Nothing will come to games if it's not present in the gaming h/w.
Just wrong.
We've had 3D games before GPUs even existed, if you remember. Why do you think it should be any different with ML, which was there before that as well?

I'm thinking that in the current environment, game development relies far too much on the major off-the-shelf engines & APIs to be able to innovate with regard to rendering.
That's indeed the honest answer I would expect to get to my question above. Though in this case the off-the-shelf offering is innovation presented solely from a HW vendor. The game industry is not even involved.

Though, DLSS is a good answer to the resolution craze. So I'm happy it exists, for those who believe the tandem of 4K and upscaling is worth the resources. I just don't.
 
Just wrong.
We've had 3D games before GPUs even existed, if you remember. Why do you think it should be any different with ML, which was there before that as well?
We've had a 3D game, or maybe three? For something to be widely used, the h/w must be available. How many RT games did we have prior to the DXR launch? How many ML applications in games did we have before Turing and DML? Pascal had fast INTs for inferencing; who's used them in gaming?
 
We've had 3D games before GPUs even existed, if you remember. Why do you think it should be any different with ML, which was there before that as well?

3D games did exist before 3D acceleration was a thing, but (much) slower, besides being less advanced; 'swimming' and other artifacts were common before 3D hw. It's the age-old discussion about the need for hardware acceleration. Same with T&L, pixel and vertex shaders, etc. Ray tracing existed pre-2018 as well (as old as the hills, basically?) but it wasn't all that applicable to consumer products/games before hw acceleration made its way into gaming GPUs (RTX and RDNA 2).
Bump mapping could be done on the PS2 but only happened once (Matrix?), whereas Xbox and GC games had it just about everywhere due to hardware acceleration. Without hardware acceleration it would take ages before it's doable on consumer gaming hardware products.
Going without specialized hw has its advantages, like a degree of flexibility, nowadays, but it isn't worth waiting until the same things are doable without specialized hw.

I think specialized hardware will always exist in some form, be it for features on GPUs, decompressors for NVMe SSDs, AI/ML, etc. They're just that much more efficient at certain tasks, not to forget cost to produce, die area, etc.

It must also be noted that GPUs do more and more these days: decompression for NVMe (RTX IO, DirectStorage/VA), PhysX was once brought over to GPUs, audio decoding, etc. And that's aside from rendering, output, hw acceleration of ray tracing, reconstruction tech, providing ultra-fast VRAM buffers (GDDR6X/HBM) and more.
GPUs seem to be a very good tool for many things in modern times. The most important component in any gaming system, I think.
 
3D games did exist before 3D acceleration was a thing, but (much) slower, besides being less advanced; 'swimming' and other artifacts were common before 3D hw. It's the age-old discussion about the need for hardware acceleration.
It's a completely new discussion. Notice:

We've had 3D games all the time. Texture mapping gave them a real uplift, but it was expensive, and it was clear to everyone that we wanted more triangles and fps. It also was clear how it works (texture swimming was resolved in software already - just some games, or the PS1, did not care about subpixel accuracy). There were also APIs like OpenGL which defined the whole 3D rendering pipeline, and overall people were fine with that.
So it was clear what functionality a GPU should have, and it was clear what software would use it (games). It was even clear what future upgrades the new HW should get (multitexturing, trilinear filtering, T&L).

Comparing this with ML:
We don't know what to do with it for games.
We have never yet used it in games.
Conclusion: We don't need it, and we did not wait for it to happen.

Many things are wrong with that. For example, if tensor cores are only useful for upscaling, as is the case now, why not a hardware upscaler to save chip area and spend it on some more SMs instead? Because in the future we might eventually do some more things with ML?
Maybe. I'm just not willing to pay for this yet.
 
All I know is that 3D games didn't really take off until hardware acceleration did. The PSX wasn't all that great for true 'advanced' 3D gaming back in the day (1994 hardware).

Comparing this with ML:
We don't know what to do with it for games.
We have never yet used it in games.
Conclusion: We don't need it, and we did not wait for it to happen.

Many things are wrong with that. For example, if tensor cores are only useful for upscaling, as is the case now, why not a hardware upscaler to save chip area and spend it on some more SMs instead? Because in the future we might eventually do some more things with ML?
Maybe. I'm just not willing to pay for this yet.

Maybe I'm misjudging your whole view on this :p If by ML we mean DLSS 2.0/reconstruction tech, it is one of the most useful technologies we have today, if not the most. It enables much more performance (1080p rendering cost giving a 4K experience), letting gamers go for higher settings or other demanding features such as ray tracing.
Heck, even an RTX 2060 does kinda well thanks to it; gamers with such older, lower-end GPUs wouldn't be able to do any practical RT at all in CP2077, for example. Also, those hunting for the highest framerates will obviously like the tech very much.

If you're talking about something else, disregard my comment :)
 
Bump mapping could be done on the PS2 but only happened once (Matrix?), whereas Xbox and GC games had it just about everywhere due to hardware acceleration.
Bump mapping was known long before GPUs existed as well. We knew we wanted it in games. Outcast had it first, and it was a software renderer. So it was used in games before, as usual, as it should be.

But this time my point is not 'We don't need HW acceleration' - my point is: 'We want HW acceleration for things we know we need, and we agree on what this acceleration should look like.'
This latter point holds true for the increasing enterprise needs for ML, but not for games. NV is running an experiment to test whether it works to introduce new HW features together with their application, into a market which did not request it.
Of course they will pull all the marketing stunts to present this as innovative and necessary, and they are pretty successful at it. That's an interesting but worrying aspect, while their results with DLSS technology are surely good and interesting on the other hand.
 
Bump mapping was done before hw acceleration, yes, but again, not as advanced or performant. Halo CE (OG Xbox) really made heavy use of the tech; I doubt that without hw acceleration they would have reached the same visual achievements and/or performance.

NV is running an experiment to test whether it works to introduce new HW features together with their application, into a market which did not request it.

True, but hey, somewhere you need to start 'experimenting' in search of that breakthrough tech and find ways to bring it to the masses. We can surely say RT was a successful one, as it is in every product available now and just about every modern game has it, even consoles (and doing so pretty well).

Of course they will pull all the marketing stunts to present this as innovative and necessary, and they are pretty successful at it.

Effective marketing strategies are any corporation's top priority; I'm seeing it everywhere. Apple is probably the best in this regard.

That's an interesting but worrying aspect, while their results with DLSS technology are surely good and interesting on the other hand.

And that's what matters: the end result. DLSS was underwhelming at its launch, but in its 2.0 form there's really no way around the tech anymore. Performance improvements (which is what it basically is) are always welcome, and it would be kinda hard to achieve the same effect without it. AMD is coming with their own solution as well.
GPUs without hardware acceleration for DLSS are really missing out on something here. Whether it's all possible on pre-Turing hardware (no hw acceleration) I have no idea; it probably is, but not as performant.
 
All I know is that 3D games didn't really take off until hardware acceleration did. The PSX wasn't all that great for true 'advanced' 3D gaming back in the day (1994 hardware).
That's totally subjective.
Maybe I'm a few years older. To me, games like Doom or Quake in software were mind-blowing, and it was clear the FPS was the hottest genre of the time and that 3D would revolutionize all games. The PS1 was the most impressive console I've ever seen. And I was happy to get my first GPU for my PC at that time as well.
I also remember the introduction of GPUs was met with doubt back then, at first. Not everybody was willing to pay for one more expansion card just to play games which also worked without it. The API chaos did not help either. I do not agree with saying 3D games became possible only because of GPUs. The rise of 3D reminds me of Doom, Quake, Magic Carpet, Descent, Tomb Raider, Outcast, etc. All of those worked fine without a GPU, and since those days, further progress in game design has not added so much after that revolution.
No matter what, we need another 'Doom' to prove a need for ML. DLSS is not enough. It was new games which created the need for new hardware, not the other way around.
 
Bump mapping was done before hw acceleration, yes, but again, not as advanced or performant. Halo CE (OG Xbox) really made heavy use of the tech; I doubt that without hw acceleration they would have reached the same visual achievements and/or performance.
Agreed, but it also aligns with my point. Dev: 'Bump mapping is cool but too slow - make the HW flexible enough so it can work!' NV: 'Sure man, take those fragment shaders and have fun!'
True, but hey, somewhere you need to start 'experimenting' in search of that breakthrough tech and find ways to bring it to the masses.
Exactly. Usually you do that in software. CUDA can do a lot of matrix multiplies already.
Then, after you have done some innovation, you release games using it. Limited, demanding, but it shows the potential. After some time, HW acceleration pops up, usually... :)
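To make the 'CUDA can do a lot of matrix multiplies already' point concrete, here is a minimal sketch of ML inference as nothing but ordinary math: a tiny, made-up two-layer network evaluated with plain NumPy matmuls, standing in for what generic compute shaders or CUDA cores can execute without any tensor units, just with less throughput. The layer sizes and the per-pixel feature idea are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights for a tiny two-layer network; a real upscaling or
# animation model would be far bigger, but the math is the same.
w1 = rng.standard_normal((64, 128)).astype(np.float32)
b1 = np.zeros(128, dtype=np.float32)
w2 = rng.standard_normal((128, 3)).astype(np.float32)
b2 = np.zeros(3, dtype=np.float32)

def infer(features: np.ndarray) -> np.ndarray:
    """Forward pass: two matrix multiplies and a ReLU - nothing exotic."""
    hidden = np.maximum(features @ w1 + b1, 0.0)   # matmul + bias + ReLU
    return hidden @ w2 + b2                        # matmul + bias

# One 'pixel' (or bone, or agent) described by 64 input features.
sample = rng.standard_normal((1, 64)).astype(np.float32)
print(infer(sample).shape)  # (1, 3)
```

Dedicated tensor units only make this faster; they don't make it possible, which is why software-first experiments seem like the natural starting point.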

But yeah, let's close this. Pretty much every SoC has some ML acceleration already. I guess we will see games making use of that HW as well...
 
That's totally subjective.

Probably.

Maybe I'm a few years older.

Most likely :p

To me, games like Doom or Quake in software were mind-blowing

Doom, not so much (I think). Quake, yes, that was truly mind-blowing at the time in software, I agree.

The PS1 was the most impressive console I've ever seen.

I'd have to give it to the PS2 by a huge margin; that could be because, as you said, you were around (older) when the PSX released. I still think the PSX was released at a time when things were moving extremely fast. Just a year or so later, 3D hardware acceleration/Voodoo graphics made its way in and made the PSX seem aged much faster.

The rise of 3D reminds me of Doom, Quake, Magic Carpet, Descent, Tomb Raider, Outcast, etc. All of those worked fine without a GPU, and since those days, further progress in game design has not added so much after that revolution.
No matter what, we need another 'Doom' to prove a need for ML. DLSS is not enough. It was new games which created the need for new hardware, not the other way around.

Alright, I understand, I think. ML (as in NV's case) is only doing reconstruction at the moment, but for me personally that's a huge thing to have, especially if you don't want to upgrade and still want maxed settings with ray tracing. My 2080 Ti wouldn't be all that performant otherwise.

We will see what happens and where the market is going. At least for now, DLSS and AMD's variant of it, Super Resolution, seem to be relevant this generation.
 
I still think the PSX was released at a time when things were moving extremely fast.
Yes. My systems were the Atari 2600, C64, Amiga, PC. So I was there almost from the start, and I think we never had another time when progress was as quick as in the early '90s, because of 3D.
I knew nothing about video games as a kid, but then my father showed me the Atari he had bought. There was awesome box art on the cartridges, and I could not believe this would be like a VHS tape but interactive! Then he turned it on, and... there were just some colored boxes on the screen. So I was disappointed at first, but the games were still fun. And I knew what I had to do... I wanted it to look real. :)

3D was never something really new. Even the Atari had it with Battlezone, which was very immersive. Then there were the first true 3D games like Elite, using vector graphics, which also worked on the 8-bit 1 MHz C64 at maybe 10 fps. It was mainly used by flight simulators, and more powerful systems like the Amiga or i386 had it in color with filled polygons. Movies like Tron promised we could expect more from the future. Games like Flashback (a 16-bit sidescroller) had vector-graphics cutscenes, not really 3D, but it proved this future was close.

Then came Doom. And it was shockingly real and different from any previous experience. Was it the texturing? Or was it the first-person walking, which is more immersive than being a spaceship or aeroplane? I'm not sure, but it was more than just 'looking real' - it was an entirely new experience. 'Being inside' over 'just watching'. Within a few years we saw other genres moving to 3D, and it caused an explosion of creativity and unique games. Tech was evolving quickly too, so 3D characters instead of 2D sprites, higher resolutions, more detail, and finally GPUs.
Looking back now, Doom no longer looks that exciting, of course. But the move from Super Mario to Doom was much bigger than the move from Doom to Quake or any other progress which followed, including VR goggles. I don't think we can ever top this, which is a bit sad.

We will see what happens and where the market is going. At least for now, DLSS and AMD's variant of it, Super Resolution, seem to be relevant this generation.
Relevant, yes. To sell 4K screens and monster GPUs ;) But if there were no DLSS, your 2080 Ti would work equally well. Games would just upscale the RT results instead of the whole frame, I believe.
But as you say - NV has it, so we expect AMD to answer. Personally I'm very surprised they are indeed doing so. NV's marketing has successfully forced them to compete in a field they are not very experienced in: software development in general, and ML.
Maybe that's for the good, but the whole process carries a big risk with it, namely a chip designer declaring which software solutions are best for a given problem, and everyone accepting that without question because of 'expertise'. Similar to a monopoly, we get a situation where self-interest may hinder alternative progress.

Ok - out for real this time. Could not resist the nostalgia. ;)
 
We've had a 3D game, or maybe three? For something to be widely used, the h/w must be available. How many RT games did we have prior to the DXR launch? How many ML applications in games did we have before Turing and DML? Pascal had fast INTs for inferencing; who's used them in gaming?

1-3? Try hundreds of 3D games before there were video cards with hardware 3D acceleration. Just to name a tiny few: System Shock. Ultima Underworld. Battlezone. Aces over the Pacific. Aces over Europe. The Red Baron. Strike Commander. Mech Warrior. Magic Carpet. Microsoft's Flight Simulator. Wizardry VIII. And many more. Fully 3D games, not 2.5D games like Doom. Hell, there were a lot of 3D games on the Amiga in the 1980s, which predates consumer-level hardware-accelerated video cards by over a decade. Sure, there were some professional 3D cards that cost tens of thousands of dollars and couldn't run games, but nothing WRT gaming on a personal computer.

Regards,
SB
 
We've had a 3D game, or maybe three? For something to be widely used, the h/w must be available. How many RT games did we have prior to the DXR launch? How many ML applications in games did we have before Turing and DML? Pascal had fast INTs for inferencing; who's used them in gaming?
Missed the post.
Yeah, there were hundreds of 3D games. I've also played raytraced games before RTX (small tech projects, no AAA). Maybe Crysis Remake came a bit late to prove my claim 'We would have gone RT with or without RTX'.
But I don't know of a single game which uses machine learning yet. All I see is games using NV's middleware for upscaling. Thus I'm not convinced that all future GPUs should invest in tensor cores.
 
Missed the post.
Yeah, there were hundreds of 3D games. I've also played raytraced games before RTX (small tech projects, no AAA). Maybe Crysis Remake came a bit late to prove my claim 'We would have gone RT with or without RTX'.
But I don't know of a single game which uses machine learning yet. All I see is games using NV's middleware for upscaling. Thus I'm not convinced that all future GPUs should invest in tensor cores.
Spider-Man MM uses it for body deformation.
 