Baseless Next Generation Rumors with no Technical Merits [post E3 2019, pre GDC 2020] [XBSX, PS5]

I find myself having difficulty discerning the difference between sarcastic, hyperbolic posts meant in jest and those which are meant seriously.
Silenti, let me help you out.

Hyperbolic:
Perhaps because it's just 386 pages of people arguing about the color of Schrödinger's Cat.

Sarcastic:
According to GitHub there is a tabby that lives in his neighborhood.

Serious:
Would like a poll to see how many believe this.

:cool:
 
How possible is it that MS could be incorporating some kind of special in-house machine learning technology (or one developed with AMD) where a lower-res image is reconstructed to 8K, and that might cost less performance than actually rendering native 8K?

Maybe through DirectML.
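
As a rough illustration of why reconstruction can be cheaper than native rendering, here's a back-of-envelope sketch. The 15% reconstruction overhead is an assumed placeholder, not a figure from MS, AMD, or any DirectML documentation:

```python
# Hypothetical comparison: native 8K rendering vs. rendering at a lower
# internal resolution and reconstructing to 8K with an ML upscaler.
# All numbers below are illustrative assumptions, not announced specs.

NATIVE_8K = 7680 * 4320      # ~33.2 M pixels per frame
INTERNAL_4K = 3840 * 2160    # ~8.3 M pixels per frame

# Assume shading cost scales roughly with pixel count, and assume the
# ML reconstruction pass costs a fixed fraction of a native-8K frame.
upscale_overhead = 0.15      # assumed: 15% of a native-8K frame's cost

native_cost = 1.0
reconstructed_cost = INTERNAL_4K / NATIVE_8K + upscale_overhead

print(f"native 8K cost:          {native_cost:.2f}")
print(f"4K + ML reconstruction:  {reconstructed_cost:.2f}")
print(f"estimated saving:        {1 - reconstructed_cost / native_cost:.0%}")
```

The point is only that shading ~8.3 M pixels plus a reconstruction pass can plausibly cost far less than shading ~33 M pixels natively; the real saving depends entirely on how expensive the ML pass actually is.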
 
I was thinking about it, and with Sony having developed RDNA for AMD as a side project, even with a theoretical TF disadvantage on paper, they will code so close to the metal that they'll pick up 40% efficiency just from that, vs. MS, who have to run through layers of software to touch the metal. So a 15% loss on paper vs. a 40% gain in coding. Plus it'll run cooler and cost less.
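
Taking the post's numbers at face value (both figures are the poster's own hypotheticals, not measurements), the claimed net effect works out roughly like this:

```python
# The post's own hypothetical numbers, made explicit. Both the 15% paper
# deficit and the 40% "coding to the metal" gain are assumptions from
# the post above, not measured figures.
paper_deficit = 0.15
coding_gain = 0.40

effective_ratio = (1 - paper_deficit) * (1 + coding_gain)
print(f"effective throughput vs. the rival: {effective_ratio:.2f}x")  # ~1.19x
```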
 
How possible is it that MS could be incorporating some kind of special in-house machine learning technology (or one developed with AMD) where a lower-res image is reconstructed to 8K, and that might cost less performance than actually rendering native 8K?
Technically achievable. But the business case to support it is a different animal: without knowing the costs and the number of games that may want it for “free” the way Nvidia offers it, this needs serious consideration.
 
Technically achievable. But the business case to support it is a different animal: without knowing the costs and the number of games that may want it for “free” the way Nvidia offers it, this needs serious consideration.
I am not sure I am following. Can you elaborate a bit so I can understand? :)
 
I am not sure I am following. Can you elaborate a bit so I can understand? :)
Model training and development cost money per title. The labour costs might be low, for instance, but you're tying up a lot of capacity doing that much training per title, unless you have an ideal general solution.

So MS would have to incur the labour costs to support ML up-resolution, which is expensive to do for all titles.

It also puts MS in a peculiar position, because they would now be responsible for supporting each title. They would no longer be a hands-off platform/publisher; they would be fully involved with a lot of titles, and that could be extremely costly.
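
To make the scaling concern concrete, here is a toy cost model. Every number in it is an invented placeholder; the only point is that per-title work grows with the size of the catalogue, while a general solution is closer to a one-off cost:

```python
# Toy cost model for the "per-title training" concern. Every figure is a
# made-up placeholder purely to show the shape of the argument.

titles = 500                       # assumed catalogue size
per_title_training_cost = 50_000   # assumed: GPU time + labour per title
general_model_cost = 5_000_000     # assumed: one-off cost of a generic model

per_title_total = titles * per_title_training_cost
print(f"per-title approach:  ${per_title_total:,}")
print(f"general model:       ${general_model_cost:,}")
print(f"break-even at {general_model_cost // per_title_training_cost} titles")
```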
 
Model training and development cost money per title. The labour costs might be low, for instance, but you're tying up a lot of capacity doing that much training per title, unless you have an ideal general solution.

So MS would have to incur the labour costs to support ML up-resolution, which is expensive to do for all titles.

It also puts MS in a peculiar position, because they would now be responsible for supporting each title. They would no longer be a hands-off platform/publisher; they would be fully involved with a lot of titles, and that could be extremely costly.

Supposedly Nvidia has gotten DLSS to the point where it no longer requires per-game training... so it's possible.
 
Supposedly Nvidia has gotten DLSS to the point where it no longer requires per-game training... so it's possible.
Yeah, it would appear so.
I personally hope it spreads to all titles. If it's generic and easy to transfer, then all a studio needs to do is provide an aliased image set and, bam, it should work. In theory; in practice I don't know. I also don't know whether this is a straight Nvidia advantage.

That said, if Nvidia does have a generic algorithm, it would be pretty cool if they shared or sold it.
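
For what the "provide an aliased image set" idea might look like in practice, here is a minimal data-preparation sketch. It is not Nvidia's DLSS pipeline, just the generic idea of deriving aliased low-res inputs from studio-supplied high-res frames so one model can be trained across titles; the synthetic frames and sizes are placeholders so the script runs stand-alone:

```python
# Sketch of the generic training set the post imagines: a studio supplies
# high-resolution ground-truth frames, and aliased low-resolution inputs
# are derived from them automatically, so no per-game model design is
# needed. Tiny random frames stand in for real full-resolution captures.

import numpy as np

rng = np.random.default_rng(0)

def make_training_pair(hi_res_frame: np.ndarray, scale: int = 2):
    """Return an (aliased low-res input, high-res target) pair."""
    # Naive point sampling (every Nth pixel) deliberately aliases,
    # mimicking renderer output with no anti-aliasing applied.
    lo_res = hi_res_frame[::scale, ::scale]
    return lo_res, hi_res_frame

# Stand-in for a studio-provided frame dump: 8 small RGB frames.
frames = [rng.random((216, 384, 3), dtype=np.float32) for _ in range(8)]

dataset = [make_training_pair(f) for f in frames]
lo, hi = dataset[0]
print(f"input {lo.shape} -> target {hi.shape}")  # (108, 192, 3) -> (216, 384, 3)
```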
 
Model training and development cost money per title. The labour costs might be low, for instance, but you're tying up a lot of capacity doing that much training per title, unless you have an ideal general solution.

So MS would have to incur the labour costs to support ML up-resolution, which is expensive to do for all titles.

It also puts MS in a peculiar position, because they would now be responsible for supporting each title. They would no longer be a hands-off platform/publisher; they would be fully involved with a lot of titles, and that could be extremely costly.

What if you crowd-source the training? All the Xsxes in the wild do the model training, and all the Xsses benefit from it. The more people play a game on Xsx, the better it looks on Xss.
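
What crowd-sourced training could mean in the abstract is something like federated averaging: each console computes a local update and only the averaged model travels back. A minimal sketch, entirely hypothetical and using stand-in random "gradients"; nothing like this has been announced for the Xbox consoles:

```python
# Entirely hypothetical sketch of "crowd-sourced" training as federated
# averaging: each console applies a local update to a shared model and a
# server averages the results. Local training is faked with random
# gradients so the example runs stand-alone; it also sidesteps the real
# costs (bandwidth, validation, per-title support) raised later in the
# thread.

import numpy as np

rng = np.random.default_rng(1)

def local_update(weights: np.ndarray, lr: float = 0.01) -> np.ndarray:
    """Stand-in for one console's local training step (random 'gradient')."""
    fake_gradient = rng.normal(size=weights.shape)
    return weights - lr * fake_gradient

def federated_round(global_weights: np.ndarray, n_consoles: int) -> np.ndarray:
    """Average the locally updated weights from every participating console."""
    local_models = [local_update(global_weights.copy()) for _ in range(n_consoles)]
    return np.mean(local_models, axis=0)

weights = np.zeros(16)  # toy "upscaler model" parameters
for _ in range(3):
    weights = federated_round(weights, n_consoles=1000)
print("weights after 3 federated rounds:", weights[:4])
```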
 
So this O'dium guy is pretty certain the PS5 will hit 11.5 TF all said and done, give or take 0.25 TF. I say that's a fantastic scenario if true. The GitHub leak might be true after all, for a 9.2 TF PS5 for a time that is; since then it's been redesigned to hit a maximum 11.5 TF target, all according to his source. But then again he contradicts Klee, OsirisBlack and maybe a few others, who point to a 12 TF+ target. You know what, at this point the PS5's specs have pretty much covered every figure from 8 to 14 TF. How the hell can the gap be so big :LOL:?
 
I love how the goalposts are moving for that O'dium guy. First it was a little bit over or a little bit under 11 TF. Then it was about 10.5 to 11.5 TF. Now it's 11.5 TF +/- 0.25 TF? :D

Soon, it'll be a little under or a little over 12 TF? Then 11.5 to 12.5 TF? Then 12.5 TF +/- 0.25 TF? :D

Regards,
SB
 
What if you crowd-source the training? All the Xsxes in the wild do the model training, and all the Xsses benefit from it. The more people play a game on Xsx, the better it looks on Xss.
No. It can't be crowd-sourced; that would make the technical issue harder. Training is not the issue; support and scaling the tech out to all their products is.
 