General Next Generation Rumors and Discussions [Post GDC 2020]

With DLSS 2.0 being as successful as it is right now, I'm fully invested in Ampere. RDNA 2 could be 15-20 TF and still not have the power to render natively what a DLSS 2.0 upscale produces.
I think we are still waiting for the full specification of the RDNA 2 architecture. Maybe the new Radeon Image Sharpening method will be better than some would think. ;)
 
Possibly. Generally speaking, in my experience hand-written algorithms are incapable of creating detail from nothing: the anti-aliasing might be very good, but it can't reconstruct detail that was never rendered. That's a speciality of machine learning, and Nvidia holds the trained models for it. It's entirely possible that DLSS 2.0 could run on RDNA 2, and as far down as a lowly Xbox, but the trained models are owned by Nvidia. That's going to be the tough part for anyone else trying to get into the DLSS business: you would need to train up your own model to do it.

I can't think of many companies that would take this challenge on except for MS, Google, Amazon, and AMD.
 
I"m confused you state that algorithms are incabple of creating detail but when talk about how good machine learning is. Aren't they the same thing ?

Aside from that DLSS took quite a long time for Nvidia to get right. We will have to see what the other companies come up with. I am sure AMD will try and leverage their partnership with both sony and ms to improve their solution. In fact since both high end consoles use rdna 2 whatever solution it ends up being should see a lot of support.
 
I"m confused you state that algorithms are incabple of creating detail but when talk about how good machine learning is. Aren't they the same thing ?

Aside from that DLSS took quite a long time for Nvidia to get right. We will have to see what the other companies come up with. I am sure AMD will try and leverage their partnership with both sony and ms to improve their solution. In fact since both high end consoles use rdna 2 whatever solution it ends up being should see a lot of support.
Firstly, they shipped two versions of DLSS in two years. That is lightning fast.

In the general sense of the word, sure, they are all algorithms; however, the way they are designed and built separates a typical hand-written algorithm from one produced by machine learning.

To put it plainly, typical programming is:
Input data + rules = Result

The programmer is responsible for designing the rules by which the data is processed to form a result.

Machine learning is:
Results + input data = rules

During training, machine learning gives you back an algorithm that has been fit to millions of different training examples. So whatever you teach it, that is how it responds.
You show it some input X and tell it that the output needs to end up looking like Y. It tries its best to do that. Repeat this for millions of different training examples and it becomes adaptable: it approximates what it thinks you want from it and produces an output. We keep training in this way until we are satisfied.
The resulting model can grow to 500MB or larger depending on what you're doing; that's 500MB of pure values, the weights of the neurons, which determine the final output.
For a programmer to do this by hand, they would need to write 500MB worth of rules covering every single scenario, do it without branching, and handle cases the code has never seen before. Quite simply: an impossible amount of labour.
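To make the contrast concrete, here is a minimal sketch in Python/NumPy (the toy rule y = 2x + 1 and every name in it are purely illustrative, nothing to do with DLSS): the hand-written version has the rule coded directly by the programmer, while the trained version is only given inputs and desired results and recovers the rule as weights.

```python
import numpy as np

# Hand-written algorithm: the programmer supplies the rule directly.
# "Input data + rules = Result"
def handwritten_rule(x):
    return 2.0 * x + 1.0

# Machine learning: we supply only inputs and the desired results,
# and training recovers the rule (the weights) for us.
# "Results + input data = rules"
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=1000)   # input data
y = handwritten_rule(x)                 # the results we show it ("make X look like Y")

w, b = 0.0, 0.0                         # the "rules" start out knowing nothing
lr = 0.1
for _ in range(500):                    # repeat over the training examples
    err = w * x + b - y                 # how far the current output is from what we asked for
    w -= lr * np.mean(err * x)          # nudge the weights toward the desired results
    b -= lr * np.mean(err)

print(round(w, 3), round(b, 3))         # converges toward 2.0 and 1.0
```

Scale that idea up to millions of image pairs and millions of weights instead of two, and you have the general shape of what a DLSS-style training process would be doing.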
 

Some would argue that 2.0 is what they should have launched with. Aside from that, AMD has also invested in machine learning. We really don't know what will come with RDNA 2, but I wouldn't count them out without seeing the hardware and software options. Like you said, DLSS went from, let's admit it, ass to the greatest thing ever in two years with no hardware changes on the Nvidia side. We are getting new hardware from AMD, and I would assume a new version of FidelityFX. Remember, FidelityFX is hardware-agnostic, so it can be used on any hardware, while the Nvidia solution needs tensor cores. So it's quite possible that AMD has added hardware to assist FidelityFX in future editions.

It will be interesting to see. Both solutions can really help drive VR.
 
I believe another thing is that DLSS 2.0 is doing something temporal, figuring out how to combine multiple frames into one. A higher framerate means a smaller temporal change, making the processing easier and more robust (less change per frame). TAA does the same, but as a more naive implementation it also brings in more blur. Because of the temporal component, the lower resolution is not as bad as it sounds, but I believe it also affects how the engine has to render in order to provide appropriate sampling coverage over multiple frames; i.e. if nothing moves, the effective resolution becomes the same as native. A DNN is likely to be better than a human at picking what to keep and what to discard.
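As a rough illustration of that temporal idea (this is not Nvidia's actual algorithm; the 1D "scene", jitter pattern, and blend factor below are invented for the sketch), jittered low-resolution frames can be accumulated into a history buffer that approaches the native-resolution signal when nothing moves:

```python
import numpy as np

def scene(u):
    """Ground-truth 'image': a 1D signal sampled at continuous coordinates in [0, 1)."""
    return np.sin(12.0 * u) + 0.5 * np.sin(40.0 * u)

native = 512      # target resolution
render = 256      # lower internal render resolution
frames = 16       # frames to accumulate
alpha  = 0.2      # history blend factor (hypothetical value)

history = np.zeros(native)
for f in range(frames):
    jitter = (f / frames) / render                       # sub-pixel offset changes every frame
    u_low = (np.arange(render) + 0.5) / render + jitter  # jittered low-res sample positions
    low = scene(u_low)                                   # "render" the low-res frame
    # Upscale to native and blend into the accumulated history (TAA-style exponential
    # accumulation; a static scene needs no reprojection, a moving one would).
    up = np.interp((np.arange(native) + 0.5) / native, u_low, low)
    history = up if f == 0 else (1.0 - alpha) * history + alpha * up

truth = scene((np.arange(native) + 0.5) / native)
print("RMS error of accumulated result vs native:", np.sqrt(np.mean((history - truth) ** 2)))
```

The point of the sketch is only the accumulation: each frame contributes samples at slightly different positions, so over time the history sees more of the scene than any single low-resolution frame. The hard part, which DLSS hands to a trained network, is deciding how much of the history to keep or discard when things do move.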
 
At this point in time, it feels like BRiT is bringing the tears to the teardown; he keeps griping about the missing teardown. :p

Another day, and nothing about the promised teardown. It's one of few aspects I'm still curious about for the PS5.
 
Artificial intelligence, contra the science fiction writers, is not really intelligence. The final cause of consciousness/executive thought is not reducible to atoms colliding, as presupposed by scientism. AI is a hammer that, though it can strike a nail in a thousand ways based on a myriad of contexts, is still constrained to what it has been programmed for, which is hammering a nail.
AI does not think, learn, interpret or create. It does not even execute. It's simply logic gates being fired based on logic coded by a programmer. Hence the final limitation of DLSS when encountering atypical data. However, human beings are easily fooled by recurring patterns, and all DLSS is required to do is recreate a familiar pattern; human perception will do the rest. What I mean by that is that as long as the visual information replacing what was lost during upscaling fits what's expected, then it is good enough.
 
Do you even know what you're talking about? Do you even know the basics of testing the fidelity of AI?

What do you mean, AI cannot think or create or learn on its own? Are you serious? We don't have a general AI, sure, but that doesn't mean we don't have AI that can learn and create on its own.
 
I have performed machine "learning" for power system stability as part of my research work: genetic algorithms, ant colony optimisation, self-learning fuzzy logic controllers, neural networks... you name it. Machine learning is a misnomer. It is simply the folding of state space onto particular attractors, with the "learning rules"/heuristics provided by the programmer. Mapping inputs to outputs to create a highly complex control surface (an exceedingly non-linear equation, basically) ain't learning, and it certainly ain't "creation". Human thought is simply not like this.
 
You're bringing philosophy into a technical discussion about hand-written algorithms vs machine-learned ones?
 

The philosophical underpinning explains why machine learning fails when encountering the unexpected but is still good enough in the context of human perception (it can't produce a different outcome the way a hand-written algo can, but it can cover a larger part of the state space).
 
True, but you could load as much as you need right now and then, in the background, begin to load all the data most immediately needed into RAM. The issue remains the size of many modern games: in terms of their assets, they will not fit into 32GB or even 64GB of RAM, so you're potentially still running up against loading times at points.

I mean, look at Red Dead Redemption 2 - 150GB. Ignoring the RAM used by Windows and the game itself, you could squeeze about 70% of the install into RAM, but then you fast travel to the furthest point away, which isn't in RAM, and BAM! Loading screen.

And it fails when you look at how many people have enough RAM, and/or if you want to have fast travel. Why resist? Let's just move to a world where an SSD is the baseline.

Going into next gen, the data on disc is very heavily compressed; data in RAM is more than 50% bigger than on disc.
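To put rough numbers on that (using only the figures quoted above: a 150GB install and a ~1.5x expansion from disc to RAM; the RAM size and OS allowance are hypothetical), a quick back-of-the-envelope:

```python
# Back-of-the-envelope: how much of a compressed install fits in RAM once expanded?
install_gb  = 150     # on-disc size quoted above for Red Dead Redemption 2
expansion   = 1.5     # "data in RAM is more than 50% bigger than on disc"
ram_gb      = 64      # hypothetical high-end PC
reserved_gb = 16      # rough allowance for Windows plus the game's own working set

usable_gb      = ram_gb - reserved_gb
cacheable_disc = usable_gb / expansion   # how much on-disc data fits once decompressed
print(f"Cacheable install: {cacheable_disc:.0f} GB of {install_gb} GB "
      f"({100 * cacheable_disc / install_gb:.0f}%)")
# => roughly a fifth of the install, so a fast travel can still miss the RAM cache.
```

Which is the point: even a very large RAM cache holds only a modest slice of the game once the assets are decompressed, whereas a fast SSD streams straight from the compressed install.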
 
The philosophical underpinning explains why machine learning fails when encountering the unexpected but is still good enough in the context of human perception (it can't produce a different outcome the way a hand-written algo can, but it can cover a larger part of the state space).
Humans fail all the time when encountering the unexpected. I'm not seeing a lot of difference.

Deep reinforcement learning algorithms will produce different outcomes each iteration based on past performance, and they are entirely self-trained, since we don't supervise them. The only thing that doesn't change for them is the incentive.

We don't use DRL for many things, but that doesn't mean we don't have AI that can mimic learning and creation well.
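For a sense of what "the only thing that doesn't change is the incentive" looks like in code, here is a minimal sketch of reward-driven learning (a toy epsilon-greedy bandit; it has nothing to do with DLSS, and every number in it is made up):

```python
import random

# The agent is never told which arm is best, only the reward (the "incentive").
true_payout = [0.2, 0.5, 0.8]   # hidden reward probabilities of three "arms"
value_est   = [0.0, 0.0, 0.0]   # the agent's learned estimates
counts      = [0, 0, 0]
epsilon     = 0.1               # fraction of the time it explores at random

for step in range(5000):
    if random.random() < epsilon:
        arm = random.randrange(3)                        # explore
    else:
        arm = max(range(3), key=lambda a: value_est[a])  # exploit the best estimate so far
    reward = 1.0 if random.random() < true_payout[arm] else 0.0
    counts[arm] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    value_est[arm] += (reward - value_est[arm]) / counts[arm]

print([round(v, 2) for v in value_est])   # tends toward [0.2, 0.5, 0.8] with no supervision
```

Each run takes a different path to roughly the same estimates, which is the "different outcomes each iteration, same incentive" behaviour described above.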
 
I just randomly started playing MGSV again, and it really shows where fast streaming in a big open-world game could come in. This would be difficult to cache on any machine, barring a RAM disk containing the game install. Imagine if the binocular zoom could bring in full, unique 4K assets in real time: no more copy-paste textures/trees. The zoom is pretty slow, so it would lend itself very well to streaming. I know MGSV is not a technical/graphical masterpiece, but this is the stuff I wish would never happen again.



edit: It's insane to compare this to the next-gen Unreal 5 demo footage. I'm so waiting for the next-gen consoles and the jump in visual quality.
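As a sketch of how zoom-driven streaming might be wired up (purely hypothetical; the asset names, mip thresholds, and byte budgets are invented for illustration, and no real engine is being described):

```python
from dataclasses import dataclass

@dataclass
class StreamRequest:
    asset_id: str
    mip_level: int   # 0 = full 4K mip, higher numbers = smaller mips

def desired_mip(distance_m: float, zoom: float) -> int:
    """Zooming in shrinks the effective distance, so higher-detail mips get requested."""
    effective = distance_m / max(zoom, 1.0)
    if effective < 10:
        return 0     # full-resolution, unique asset
    if effective < 50:
        return 1
    if effective < 200:
        return 2
    return 3

def plan_streaming(visible, zoom, budget_bytes, mip_size_bytes):
    """Queue the closest (highest-priority) requests until the per-frame budget runs out."""
    requests = []
    for asset_id, distance in sorted(visible, key=lambda v: v[1]):
        mip = desired_mip(distance, zoom)
        cost = mip_size_bytes[mip]
        if cost <= budget_bytes:
            requests.append(StreamRequest(asset_id, mip))
            budget_bytes -= cost
    return requests

# A slow binocular zoom means several frames of budget in which to fetch the big mips.
visible = [("rock_07", 120.0), ("tree_cluster_3", 300.0), ("outpost_wall", 45.0)]
print(plan_streaming(visible, zoom=8.0, budget_bytes=64 << 20,
                     mip_size_bytes={0: 48 << 20, 1: 12 << 20, 2: 3 << 20, 3: 1 << 20}))
```

The slow zoom is what makes this tractable: the request for the full-resolution asset can be issued well before the player can actually resolve it, which is exactly the window a fast SSD fills.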
 
This would actually be a brilliant use of fast streaming, and I resent you for thinking of it before I did. :runaway: And when you think about it, lots of mechanics predicated on being able to see somewhere else could be facilitated, such as switching to cameras showing other areas.
 