AMD FidelityFX on Consoles

AMD may very well be, or have been, pursuing a non-ML-based approach for some time now, and that's fine.
My point was about comparing against the die space taken up by tensor cores, so I had to use them, and therefore DLSS, as the reference.
The end result is a compelling GPU in rasterization, RTRT and upscaling alike. That's all that matters in regard to that aspect.
It doesn't need to be better than DLSS 2 to be useful; it just needs to provide a better output/performance ratio than the existing non-ML upscaling solutions. Otherwise we'd go back to temporal/checkerboard reconstruction.
I fully agree that any positive improvement is good.
As I said, it was about the framing of the tensor cores.

I'm sure AMD does have an idea of what they're going to do; they've had a couple of years to give it some thought.
It's just that the way they spoke about it made it sound like: well, we may have a bit of this, a bit of that, depending on devs, users, lots of options.
That sounds like early R&D, not the late stage of development we know it should be at.
 
That's not true. I would disagree with this, with some caveats in your favour.

You're entitled to your opinion. But you need to look back at the applications where ML is used and why it has been applied to particular problems. The commonality across these scenarios is not always that the problems are hard to solve; it's that it's easier and faster to deploy a computationally complex, adaptive ML algorithm than to iterate a specific algorithm to accommodate varying data patterns. Computers have had no problem beating chess Grand Masters for decades without ML, using traditional analysis and strategy lookups.

ML is fundamentally throwing difficult-to-define, varied-dataset problems at a considerable amount of computational power relative to the job. Lots of new algorithms have been developed with the help of ML, with ML data analysis doing the heavy lifting and helping programmers develop more honed algorithms that need much lower computational power.

ML is such an easy solution to so many problems that it tends to be the first way many people now try to solve them. Some are gravitating to it as the solution without even trying conventional methods. It's not an end, it's a means to an end. :yep2:
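
To make the "ML helps programmers develop more honed algorithms" point concrete, here's a toy sketch (pure NumPy, all data and names made up) of the pattern: fit a trivial model to synthetic measurements offline, and the deployed code collapses to a single comparison with no ML runtime at all.

```python
import numpy as np

# Toy illustration: learn from data offline, ship a one-comparison rule.
rng = np.random.default_rng(1)
class_a = rng.normal(0.3, 0.1, 1000)   # made-up measurements, class A
class_b = rng.normal(0.7, 0.1, 1000)   # made-up measurements, class B

# "Training": a 1-D Gaussian classifier with equal variances reduces to
# thresholding at the midpoint of the two class means.
threshold = (class_a.mean() + class_b.mean()) / 2

def classify(x):
    # The shipped, honed algorithm: no model, no heavy compute, just a compare.
    return "B" if x > threshold else "A"

print(f"learned threshold: {threshold:.2f}")   # ~0.50 for this synthetic data
print(classify(0.25), classify(0.80))          # A B
```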
 
Though this Google chess ML algo beat all chess software and also plays in a unique way ;)
 
I do agree it's not for everything. But I mean, I'm pretty sold that it works.

Chess:

In the final results, Stockfish version 8 ran under the same conditions as in the TCEC superfinal: 44 CPU cores, Syzygy endgame tablebases, and a 32GB hash size. Instead of a fixed time control of one move per minute, both engines were given 3 hours plus 15 seconds per move to finish the game. In a 1000-game match, AlphaZero won with a score of 155 wins, 6 losses, and 839 draws. DeepMind also played a series of games using the TCEC opening positions; AlphaZero also won convincingly.

Under the same computation, AlphaZero dominates the best chess AI, and the training time was a few hours. Computational time was equalized.
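
For a rough sense of how lopsided that result is, here's a back-of-envelope conversion of the quoted 155/6/839 score into an Elo gap using the standard expected-score formula (a simplification that ignores draw modelling, openings and hardware differences):

```python
import math

# AlphaZero vs Stockfish 8, 1000-game match result quoted above.
wins, losses, draws = 155, 6, 839
games = wins + losses + draws

score = (wins + 0.5 * draws) / games        # wins count 1, draws 0.5 -> ~0.57

# Elo expected score: E = 1 / (1 + 10**(-diff / 400)); invert for the gap.
elo_gap = -400 * math.log10(1 / score - 1)

print(f"match score ~{score:.3f}, implied Elo gap ~{elo_gap:.0f}")  # ~0.574, ~52
```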

AlphaGo is the same thing, and no other AI can do it.

I think there are definitely opportunities here where, for some very specific applications, deep learning algorithms are a better fit, given the cost benefit over traditional algorithms. Developing a perfect algorithm isn't necessarily budget-friendly either, which is why ML algorithms have been moving to the forefront so quickly. It's simply a hardware cost, not a labour one. And I don't necessarily see that as a bad thing, because let's be real: as the problem gets harder to solve in both complexity and scope, it gets increasingly difficult for there to be an ideal algorithm that can be developed to do it for cheaper.

I think it's absolutely a fair argument that perhaps it's not needed in real-time graphics; I'm not debating that. But to say that iteration can solve the hardest problems that ML handles easily is likely a stretch of our capabilities as humans. It certainly gets extremely tough as complexity increases, and ML very convincingly becomes the solution to use.
 

First, ML and AI are not the same thing. There is no AI yet; when you see claims of 'AI', it's ML. And yes, ML works, but as I said in my last posts, it's the least computationally efficient way to achieve a goal. True, it is arguably also the fastest way to achieve a commercial goal, but it is a stopgap solution until you can develop a crafted algorithm. It's absolutely brute force.

I'm not sure where you're going with the chess and Go stuff. Computers have been beating humans at both games for decades. Not all people and not winning all the time. Very much like humans vs humans. Winning strategic games requires an understanding of strategy, some knowledge of your opponent and a streak of creativity.
 

Which just says to me those games are susceptible to being brute forced given enough computational power. Nothing more. Not true AI in any real sense.

Anyway, some DLSS-type thing on consoles seems like it would be groovy, but I'm a bit skeptical due to the lack of tensor cores and the no-free-lunch axiom (DLSS isn't free in that sense; it requires a lot of computational resources, i.e. the tensor cores). Hope I'm wrong.
 

Nothing is free. But looking at Ampere, I think compute performance isn't a matter of transistors or die size anymore, so tensor cores are very cheap and very effective for certain workloads.
 
First, ML and AI are not the same thing. There is no AI yet; when you see claims of 'AI', it's ML. And yes, ML works, but as I said in my last posts, it's the least computationally efficient way to achieve a goal. True, it is arguably also the fastest way to achieve a commercial goal, but it is a stopgap solution until you can develop a crafted algorithm. It's absolutely brute force.

I'm not sure where you're going with the chess and Go stuff. Computers have been beating humans at both games for decades. Not all people and not winning all the time. Very much like humans vs humans. Winning strategic games requires an understanding of strategy, some knowledge of your opponent and a streak of creativity.
I was trying to show that ML has very high overhead, but is not necessarily the least computationally efficient way to achieve a goal. There are certain complex problems that ML is good at solving, where the required computation is so extreme that this is exactly where ML finds its niche.

E.g. chess AI vs AlphaZero: when normalized for computational power, using the same 44 CPU cores, AlphaZero beats Stockfish 8 (the best crafted algorithm) handily.

I do see ML as being high overhead, and I don't deny that it's probably well overhyped and likely over-deployed, but that doesn't take away from the fact that ML on GPUs has solved a lot of challenging tasks for computer scientists within the last decade that typical crafted algorithms had failed to solve.
 
Anyway, some DLSS-type thing on consoles seems like it would be groovy, but I'm a bit skeptical due to the lack of tensor cores and the no-free-lunch axiom (DLSS isn't free in that sense; it requires a lot of computational resources, i.e. the tensor cores). Hope I'm wrong.
Most of the computation is used building (training) the models.
So yes, tensor cores would help a lot, but it just needs to give an overall better performance profile (with decent IQ) to make it a net win.
It may not be able to go from 1080p to 4K, but it may give better performance and IQ going up from 1440p than current upscaling techniques do.
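
As a rough illustration of the budget such an upscaler has to beat (the 16.6 ms figure and the assumption that cost scales purely with pixel count are illustrative simplifications, not measurements):

```python
# Back-of-envelope: pixel work saved by rendering 1440p instead of native 4K.
native_px = 3840 * 2160      # ~8.3M pixels
internal_px = 2560 * 1440    # ~3.7M pixels
ratio = native_px / internal_px            # 2.25x fewer pixels shaded

native_ms = 16.6                           # assume a 60 fps frame at native 4K
internal_ms = native_ms / ratio            # ~7.4 ms if cost scales with pixels
headroom_ms = native_ms - internal_ms      # ~9.2 ms nominally freed up

print(f"{ratio:.2f}x pixel ratio, ~{headroom_ms:.1f} ms of nominal headroom")
```

In practice not all frame cost scales with resolution, so the real headroom is smaller, but whatever the reconstruction pass costs just has to fit inside it while delivering better IQ than the existing non-ML upscalers.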
 
Like I said, the PS5 does not support DX12 Ultimate's VRS, nor will it ever. That topic is moot.

I get that Microsoft's technical marketing department was successful at making a big deal out of the PS5 not supporting specific implementations of Microsoft's own API (well, duh), and Sony not caring about publicly sharing the technical details of their SoC happened to play right into their hands.
Claiming the PS5 doesn't support Microsoft's "patented VRS" implementation is as obvious and meaningless as claiming the Series X doesn't support GNM fragment shaders (well, duh).


As for the performance drops, they seem so rare that I doubt Codemasters would find it worth the development time and cost to implement whatever foveated-rendering technique Sony supports in their hardware. Like all practical VRS implementations I've seen so far, it doesn't look like it did any miracles on the DX12 consoles anyway.
I think it's pretty settled now that the PS5 doesn't have hardware VRS.
Die shots show the PS5 has RDNA 1 ROPs, while the XSX has RDNA 2 ROPs. The RDNA 2 ROPs were upgraded and feature the changes required for VRS, while the RDNA 1 ROPs don't.

But I think it was pretty obvious from Sony's lack of talk about VRS that it doesn't have it. They talk about what they do have, and don't talk about what they don't.
Digital Foundry even asked Sony to clarify whether the PS5 has VRS and they never replied, last I heard.

Safe to say the PS5 doesn't have Mesh Shaders, Sampler Feedback Streaming or Machine Learning either.
 
I think it's pretty settled now that the PS5 doesn't have hardware VRS.
We (B3D) should need a lot more than "Sony never answered DF's question about that" to settle on anything. Though if you wish to settle on that yourself as a personal belief, then go ahead.


As for the "knowing the PS5/SeriesX has RDNAx ROPs/CUs/whatever", you either have official information/documentation from Sony or AMD or you don't.
Otherwise, trying to discern what low-level features are present from looking at an 8MPix picture of a >12B-transistor chip (i.e. ~1500 transistors per pixel) has very limited accuracy and shouldn't be used to settle anything.
IIRC those same pictures suggest the SeriesX has "RDNA1 WGPs" (i.e. they look similar to Navi10 WGPs in the picture), which it obviously doesn't, because there are RT units in the TMUs, among other RDNA2-specific features.


Safe to say the PS5 doesn't have Mesh Shaders, Sampler Feedback Streaming or Machine Learning either.
I think it's better to read a bit on what Machine Learning is (in this particular case, Neural Network Inference) and how it can be processed, before making such an outrageously incorrect statement.
 

As for Machine Learning, it was confirmed by a Principal Sony Graphics Engineer who worked on the PS5, Rosario Leonardi, that the PS5 does not have Machine Learning. He also stated that the PS5 was not full RDNA 2 but a mix of RDNA 1, RDNA 2 and some custom stuff, which also turned out to be 100% correct.
He would know more than either of us. I will defer to him, as should you.
[attached screenshot of the tweet]
 

1) It's a deleted tweet, and for good reason. The Xboxes also have Navi-based GPUs and they have specific instructions for NN inference. You're also lacking context on what he means by "any ML stuff". For example, no console has specific processing hardware for mixed precision matrix multiplication, like AMD's CDNA1 or Nvidia's Volta/Turing/Ampere.

2) You really need to look up what types of hardware are capable of running Neural Network Inference. Your "doesn't have Machine Learning" statement is still completely incorrect.


As for Locuza's tweets, the only fact he can 100% confirm is that the PS5's die area dedicated to the ROPs is larger. There's no capability, or lack thereof, being proven by this.
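
To make the distinction concrete: the inference instructions being referred to here are dot-product ops (e.g. four INT8 multiplies accumulated into a 32-bit integer per lane), which is far smaller hardware than a dedicated matrix engine like Turing's tensor cores or CDNA's MFMA units. A minimal sketch of what one such dot-product-accumulate computes (the function name and values are mine, purely for illustration):

```python
import numpy as np

def dot4_i8(a4, b4, acc):
    """Roughly what a 4-wide INT8 dot-product instruction does per lane:
    multiply four signed 8-bit pairs and accumulate into a 32-bit integer."""
    a4 = np.asarray(a4, dtype=np.int8).astype(np.int32)
    b4 = np.asarray(b4, dtype=np.int8).astype(np.int32)
    return acc + int(np.dot(a4, b4))

# One output of a quantized NN layer is just many of these chained together.
weights = [12, -7, 33, 5]
activations = [90, 14, -25, 60]
print(dot4_i8(weights, activations, acc=0))   # 12*90 - 7*14 - 33*25 + 5*60 = 457
```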
 
There's more than sufficient power on PS5 to perform an NN upscale with standard compute. But hitting 120 fps is not likely, 60 fps is tight, and 30 fps is definitely achievable for both. You can also cut away the supersampling part and just focus on the upscale, which cuts the processing time down significantly.
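
For a rough feel of why 120 fps is the hard target, here's a back-of-envelope sketch. Every number is an assumption for illustration: the per-pixel network cost is hypothetical, and the FP16 figure is the peak double-rate packed-math throughput, not a sustained rate.

```python
# Hypothetical NN upscaler cost vs. frame-time budgets on shader-core compute.
out_pixels = 3840 * 2160           # 4K output, ~8.3M pixels
flops_per_pixel = 5_000            # assumed network cost per pixel (made up)
frame_flops = out_pixels * flops_per_pixel

peak_fp16 = 20.5e12                # ~20.5 TFLOPS packed FP16 (theoretical peak)
utilization = 0.5                  # assume half of peak is achievable
net_ms = frame_flops / (peak_fp16 * utilization) * 1e3   # ~4 ms for the network

for fps in (30, 60, 120):
    budget_ms = 1000 / fps
    print(f"{fps:3d} fps: {net_ms:.1f} ms of a {budget_ms:.1f} ms frame "
          f"({net_ms / budget_ms:.0%})")
```

Plug in your own cost estimate, but the shape of the result is the point: the same pass that is a modest slice of a 33 ms frame becomes a big chunk of an 8.3 ms one.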
 
We (B3D) should need a lot more than "Sony never answered DF's question about that" to settle on anything. Though if you wish to settle on that yourself as a personal belief, then go ahead.
Specifically about VRS: I can't remember which dev it was, but he pretty much confirmed that the PS5 doesn't have VRS. That's the first and only time I've really seen it actually confirmed.
Even if most people already assumed that was the case.
After that I switched from neutral ("I'll have to be told it doesn't have it by a reputable source") to "it doesn't have it unless proven otherwise".
 
Metro devs on twitter:
 
1) It's a deleted tweet, and for good reason. The Xboxes also have Navi-based GPUs and they have specific instructions for NN inference. You're also lacking context on what he means by "any ML stuff". For example, no console has specific processing hardware for mixed precision matrix multiplication, like AMD's CDNA1 or Nvidia's Volta/Turing/Ampere.

2) You really need to look up what types of hardware are capable of running Neural Network Inference. Your "doesn't have Machine Learning" statement is still completely incorrect.


As for Locuza's tweets, the only fact he can 100% confirm is that the PS5's die area dedicated to the ROPs is larger. There's no capability, or lack thereof, being proven by this.
They weren't tweets; they were private messages that were then released. It is what it is. He knows exactly what's in the PS5 console, and he said it doesn't have ML. It doesn't.

You really don't want to accept what is right before your eyes.
Sony never said the PS5 could do VRS.
Multiplatform games like Dirt that have used VRS haven't used it on PS5, even though they did on the other platforms.
VRS was added to RDNA 2 because of changes to the ROPs on RDNA 2 over RDNA 1.
Sony is using the older RDNA 1 ROPs, while the Series X is using RDNA 2 ROPs, which contain the hardware additions that make VRS possible. VRS on AMD cards is tied to DirectX 12 Ultimate and no other API.

I mean, if you still want to believe the PS5 has VRS and ML, I guess that's your right.
 
His statement is a bit oversimplified, though. "Doesn't have any ML stuff" just means no specific hardware for ML, but ML has been GPU-accelerated for some time now, even on AMD hardware older than RDNA. Just because there is no hardware specifically there to accelerate ML doesn't mean the GPU can't do it at all, and probably fast enough to make it worthwhile.
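
Concretely, inference is in the end just multiply-accumulate over arrays, which generic shader ALUs handle fine (ideally as packed FP16). A toy NumPy sketch of the 3x3 convolution layer a CNN upscaler is built from, purely to show nothing in it demands dedicated ML hardware (sizes and values are made up; a real implementation would be a compute shader, not Python loops):

```python
import numpy as np

def conv3x3_relu(image, kernels, bias):
    """Naive 3x3 convolution + ReLU in FP16: just multiply-accumulates."""
    h, w, _ = image.shape
    cout = kernels.shape[0]                    # kernels: (cout, 3, 3, cin)
    out = np.zeros((h - 2, w - 2, cout), dtype=np.float16)
    for y in range(h - 2):
        for x in range(w - 2):
            patch = image[y:y + 3, x:x + 3, :]             # 3x3xCin window
            for k in range(cout):
                out[y, x, k] = np.sum(patch * kernels[k]) + bias[k]
    return np.maximum(out, 0)                  # ReLU

# Made-up sizes: a 16x16 RGB tile through 8 filters.
rng = np.random.default_rng(0)
tile = rng.standard_normal((16, 16, 3)).astype(np.float16)
kernels = (rng.standard_normal((8, 3, 3, 3)) * 0.1).astype(np.float16)
bias = np.zeros(8, dtype=np.float16)
print(conv3x3_relu(tile, kernels, bias).shape)   # (14, 14, 8)
```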
 