Any details on AMD Leo demo?

What do you mean?

To solve gongo's complaints:

Same here... what graphics features does the HD 7970 add? Pretty hard to tell these days... BUT the shader aliasing... jaggies... texture shimmering... you know, those IQ eyesores... still very prevalent even when I was running at 1080p... forcing 8xEQAA... morphological AA on/off... 16xAF through CCC... the chains of the bridge... the greenery... the stove... the camera edges... the door handle... the dragon's jaws... all breaking the illusion of "CGI-in-realtime" graphics...

I wonder if the day will finally come when these go away... IMHO these artifacts are still a real obstacle to reaching that "CGI-in-realtime" look...


I've been using FXAA in some games and it seems to do a very good job at reducing the jaggies from shaded surfaces and shimmering textures...
 
I've been using FXAA in some games and it seems to do a very good job at reducing the jaggies from shaded surfaces and shimmering textures...
It doesn't solve the root cause. You need to fix the shaders (prefiltering, etc.) to address undersampling of terms like specular. FXAA/MSAA won't help. SSAA (via MSAA and sample-frequency execution) will help somewhat, but it's pretty brute force.

People need to stop asking about hardware "features" these days. More relevant is that we just need more performance, so we can use better data structures and algorithms to solve these issues. The 7970 delivers pretty decently on that in my limited experience. It certainly enables you to do things like use LEAN mapping (with its fairly large increase in texture footprint, for instance) on most/all surfaces without being too worried about tanking your frame rate. It's a very fast card.
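To make the prefiltering point concrete, here's a toy numpy sketch of Toksvig-style normal-map prefiltering, one classic instance of what "fix the shaders" means for specular (the normal map and exponent here are made up for illustration, and this is a simpler cousin of LEAN mapping, not the exact technique). Averaging the normals over a filter footprint shortens the average vector wherever the surface is bumpy, and that shortening is converted into a wider (lower-exponent) specular lobe instead of letting the highlight alias:

```python
import numpy as np

def build_filtered_mip(normals, k=2, spec_power=64.0):
    # Box-filter unit normals over k x k texel footprints.
    h, w, _ = normals.shape
    avg = normals.reshape(h // k, k, w // k, k, 3).mean(axis=(1, 3))
    length = np.linalg.norm(avg, axis=-1)             # |N_avg| in (0, 1]
    # Toksvig factor: shorter average normal -> more divergence -> rougher.
    ft = length / (length + spec_power * (1.0 - length))
    filtered_normals = avg / np.maximum(length, 1e-6)[..., None]
    return filtered_normals, spec_power * ft          # per-texel exponent

# Toy 8x8 normal map: flat except for a high-frequency bumpy 4x4 patch.
n = np.zeros((8, 8, 3)); n[..., 2] = 1.0
checker = np.indices((4, 4)).sum(axis=0) % 2
n[:4, :4, 0] = np.where(checker == 0, 0.6, -0.6)
n[:4, :4] /= np.linalg.norm(n[:4, :4], axis=-1, keepdims=True)

_, power = build_filtered_mip(n)
print(power.round(1))  # bumpy texels drop from exponent 64 to a much wider lobe
```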
 
It doesn't solve the root cause. You need to fix the shaders (prefiltering, etc.) to address undersampling of terms like specular. FXAA/MSAA won't help. SSAA (via MSAA and sample-frequency execution) will help somewhat, but it's pretty brute force.

People need to stop asking about hardware "features" these days. More relevant is that we just need more performance, so we can use better data structures and algorithms to solve these issues. The 7970 delivers pretty decently on that in my limited experience. It certainly enables you to do things like use LEAN mapping (with its fairly large increase in texture footprint, for instance) on most/all surfaces without being too worried about tanking your frame rate. It's a very fast card.

Of course post-process AA helps! Not by much, but you can see it qualitatively, and so it helps. You take what you can get, for the least number of milliseconds possible. Sure, it would be nicer to prefilter and run at 64x supersampling and so on, and any research into doing that faster is most welcome, but games need to ship, and if FXAA is what you have at the moment then you may as well take it.

But further research does indeed need to go into a more "root cause" solution. Post-process AA probably doesn't have much more to offer in terms of visual quality than what's already out.

Further, there are one or two things in terms of hardware features that I'd say most people would still like to see. Eliminating the messy API stuff, if somehow possible. And much better hardware texture compression: block compression is very nice, but it can still be improved, and better quality in fewer bits means more available RAM for everyone without console manufacturers or consumers having to pony up more dough. A win for everyone, really.
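To put rough numbers on the compression point, using the standard block sizes (BC1 packs a 4x4 texel block into 8 bytes, BC7 into 16; the 4096² texture is just an example):

```python
def mip0_mebibytes(w, h, bytes_per_4x4_block):
    # Footprint of the top mip level of a 4x4 block-compressed texture.
    return (w // 4) * (h // 4) * bytes_per_4x4_block / 2**20

w = h = 4096
print(f"RGBA8 uncompressed: {w * h * 4 / 2**20:.0f} MiB")        # 64 MiB
print(f"BC1 (8 B/block):    {mip0_mebibytes(w, h, 8):.0f} MiB")  # 8 MiB, 8:1
print(f"BC7 (16 B/block):   {mip0_mebibytes(w, h, 16):.0f} MiB") # 16 MiB, 4:1
```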
 
As for the demo, I don't really see the benefit of this for a next-gen renderer. I suppose I can see that if you've got the tiled light list for deferred shading anyway, you've got the framework etc. that you could reuse for forward-rendering transparency. But forward rendering the entire thing doesn't really seem beneficial. Even if you want to do MSAA, there's already a nice production-ready technique/paper out on reducing the cost of MSAA with deferred shading.
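For anyone following along, the "tiled light list" boils down to something like this toy CPU sketch (the screen-space circle test and data layout are simplifications I made up for illustration; real implementations cull against per-tile view-space frusta with min/max depth, but the output is the same per-tile list, reusable by a deferred pass or a forward pass over transparency):

```python
import numpy as np

TILE = 16  # screen-space tile size in pixels

def build_tile_light_lists(screen_w, screen_h, lights):
    # For each tile, keep only the lights whose projected bounding
    # circle overlaps the tile's pixel rectangle.
    tiles_x, tiles_y = screen_w // TILE, screen_h // TILE
    lists = [[[] for _ in range(tiles_x)] for _ in range(tiles_y)]
    for li, (cx, cy, radius_px) in enumerate(lights):
        x0 = max(int((cx - radius_px) // TILE), 0)
        x1 = min(int((cx + radius_px) // TILE), tiles_x - 1)
        y0 = max(int((cy - radius_px) // TILE), 0)
        y1 = min(int((cy + radius_px) // TILE), tiles_y - 1)
        for ty in range(y0, y1 + 1):
            for tx in range(x0, x1 + 1):
                lists[ty][tx].append(li)
    return lists

# Three lights: (screen x, screen y, projected radius in pixels).
lists = build_tile_light_lists(1280, 720,
                               [(100, 100, 40), (640, 360, 200), (1200, 50, 30)])
print(lists[22][40])  # lights affecting the tile containing pixel (640, 360)
```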
 
Of course post-process AA helps! Not by much, but you can see it qualitatively, and so it helps. You take what you can get, for the least number of milliseconds possible. Sure, it would be nicer to prefilter and run at 64x supersampling and so on, and any research into doing that faster is most welcome, but games need to ship, and if FXAA is what you have at the moment then you may as well take it.

The problem with FXAA is that it isn't really a solution, it's a compromise. You get worse IQ in some areas (polygon edges are far worse than with MSAA, as one example) but ever so slightly better IQ in other areas (shader aliasing, for example). And the main thing you get in exchange for the overall worse IQ (IMO) than MSAA is faster rendering speed.

I've been less than impressed with FXAA in every use case I've seen where it is not also combined with MSAA. But combining MSAA and FXAA usually leads to a very large performance hit in games that allow you to do that.

Andrew Lauritzen is correct: these things need to be addressed at the application level rather than at the hardware level, although hardware obviously needs to be capable enough (in speed or resources) to facilitate better fixes.

Regards,
SB
 
But forward rendering the entire thing doesn't really seem beneficial.

I would say that being able to gracefully handle per-material BRDFs, as well as not being restricted to storing material parameters in a G-buffer, are both pretty big benefits.
 
I would say that being able to gracefully handle per-material BRDFs, as well as not being restricted to storing material parameters in a G-buffer, are both pretty big benefits.
Depends what you mean by "graceful". Branching on materials works great, and if you're concerned about high-register-pressure materials slowing down the common, quick ones then you can easily split those out into a separate pass. There's no real limit to the flexibility of doing it deferred. I don't really see a compelling need to invoke the rasterizer again just to schedule material shaders per-pixel (waaaay overkill). At most, tile classification stuff is all you need, and in most cases I doubt you even need that (benchmark it per app and see).

Not sure what the issue with storing values in a G-buffer is... it's analogous to a compiler spill/fill. In fact, a smart compiler/runtime could do it totally automatically. Sure it costs some bandwidth but again that's not an interesting question... you simply compare which uses more bandwidth - storing the data when you have it the first time around or regenerating it. There's no need for any sort of dogmatic position here, both ways produce equivalent results.

Anyways, whenever I've tested it, tiled deferred always ends up winning, even with complex materials and lots of G-buffer parameters. It just doesn't cost very much to store the BRDF parameters (of which 10-20 is plenty) and read them back in once or twice compared to re-rasterizing the scene. And of course, if your parameters are coming from constants, you can just read them directly in the deferred pass as well and avoid storing them. Frankly, I have yet to see a BRDF that is even close to problematic.

But of course you should test for each application/usage. Hence why I don't think it's really an interesting discussion topic these days. All the shader stuff and what you store - yada yada - is pretty simple and boring. The interesting question is invoking and scheduling work in user space via compute vs. the 3D pipeline/rasterizer. Both have interesting advantages and disadvantages.
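To illustrate the "branching on materials" point with a toy CPU/numpy analogue (the G-buffer layout and shader names here are invented for the example): shade the whole screen from the G-buffer once, masking or branching per material ID, rather than invoking the rasterizer again to pick shaders:

```python
import numpy as np

H, W = 4, 4
# Toy "G-buffer": albedo plus a precomputed N.L term, and a material ID.
albedo = np.random.rand(H, W, 3).astype(np.float32)
n_dot_l = np.random.rand(H, W).astype(np.float32)
material_id = np.random.randint(0, 2, (H, W))   # 0 = cheap, 1 = expensive

def shade_cheap(alb, nl):
    return alb * nl[:, None]

def shade_expensive(alb, nl):
    # Stand-in for a high-register-pressure BRDF; it only ever runs on
    # the pixels whose material ID selects it.
    return alb * np.sqrt(nl)[:, None] + 0.05

out = np.zeros_like(albedo)
for mid, shader in [(0, shade_cheap), (1, shade_expensive)]:
    mask = material_id == mid                   # the "branch" / separate pass
    out[mask] = shader(albedo[mask], n_dot_l[mask])
print(out.shape)                                # (4, 4, 3), each pixel shaded once
```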
 
Anyways, whenever I've tested it, tiled deferred always ends up winning, even with complex materials and lots of G-buffer parameters. It just doesn't cost very much to store the BRDF parameters (of which 10-20 is plenty) and read them back in once or twice compared to re-rasterizing the scene. And of course, if your parameters are coming from constants, you can just read them directly in the deferred pass as well and avoid storing them. Frankly, I have yet to see a BRDF that is even close to problematic.

Wasn't this demo, and the way they do things, less about always being better than deferred and more about allowing the use of MSAA alongside all those things without incurring a large performance hit? Or about the benefits of MSAA being rendered mostly moot in many deferred renderers?

Perhaps I'm jumping into the discussion at the wrong time. :)

Regards,
SB
 
Wasn't this demo, and the way they do things, less about always being better than deferred and more about allowing the use of MSAA alongside all those things without incurring a large performance hit? Or about the benefits of MSAA being rendered mostly moot in many deferred renderers?
Well sure, MSAA works naturally with forward rendering, but at the same time it has been demonstrated that via compute shader rescheduling you can get pretty much ideal computational efficiency on multi-frequency shading with MSAA anyways, so as long as you have DX11+, the benefit is also pretty moot. On AMD you may get some benefit from doing it forward rather than deferred simply because - as discussed earlier - there seems to be some bottleneck in rendering MSAA'd G-buffers, but at least on NVIDIA - and in theory - the hit for properly-implemented deferred MSAA is similar to forward rendering (~25% or so slower for 4x MSAA).

Now with this method - assuming you don't have a *ton* of lights - you will indeed save some memory footprint vs. deferred MSAA, but it's at the cost of additional computation (re-transforming/rasterizing the scene), so even the bandwidth argument is unclear/app-dependent. Memory footprint is much less interesting than bandwidth in the long run... even now these are 3GB cards, which is plenty.
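As a rough model of why properly scheduled deferred MSAA lands around that figure (toy arithmetic; the edge fractions are made up but typical):

```python
def shading_cost(edge_fraction, samples_per_pixel):
    # Non-edge pixels are shaded once; only pixels whose MSAA samples
    # actually differ get shaded per sample. Ignores scheduling overhead.
    return (1.0 - edge_fraction) + edge_fraction * samples_per_pixel

for edge in (0.05, 0.10, 0.25):
    print(f"{edge:.0%} edge pixels @ 4x MSAA: {shading_cost(edge, 4):.2f}x")
# 1.15x, 1.30x, 1.75x -- around 10% edge pixels matches the ~25-30% hit
```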
 
Could the image be reduced from 1080p to 480p (and upscaled back to 1080p if we want to simulate the higher res) to reduce aliasing?

I know that in screenshots that gives basically perfect image quality, but in motion do new IQ problems arise? Or would that pretty much solve the IQ issues (aside from limiting the detail)?
 
@steampoweredgod it's a bad solution.
Run a game, select 640x480 in the options and let it run fullscreen. Notice the image is not as good as setting the resolution to your monitor's native res (even on a CRT).

I know that in screenshots that gives basically perfect image quality,

Are you saying that if I have a pic at 1920x1080, reduce it to 640x480, then scale it back up to 1920x1080, I will have a picture that's better or at least as good as the original?
 
@steampoweredgod it's a bad solution.
Run a game, select 640x480 in the options and let it run fullscreen. Notice the image is not as good as setting the resolution to your monitor's native res (even on a CRT).
If done right, is this not some form of supersampling for a 480p image?

Are you saying that if I have a pic at 1920x1080, reduce it to 640x480, then scale it back up to 1920x1080, I will have a picture that's better or at least as good as the original?
The high-frequency or fine detail is going to be lost, but the upscaled image can be very, very good. Right now newer TVs are going to be upscaling to Quad HD, and with the right algorithms it will be very impressive.

Here are some comparisons of native 1080p against 1080p-720p-1080p and 1080p-480p-1080p content which originally had perfect image quality (video).

Many people have trouble telling the difference between a properly upscaled DVD and a native Blu-ray (depending on the softness or lack of fine detail, it can be very hard to tell).

Unlike film, though, games have the advantage that perfect-IQ elements like the HUD can be kept at the original resolution, preserving their fine detail. Also, any fine-detail element that does not exhibit picture-quality anomalies can be kept at the original resolution in the real-time image and merged into the final picture.

I know that in stills it seems to eliminate the picture-quality issues of real-time content (going down to 480p), but I haven't checked whether it also resolves any issues that may arise in motion.
 
I've recently watched the first three HD-remastered episodes of Star Trek TNG, and the difference from the original TV/video run was astonishing. It just shows how poorly the material was transferred for broadcast and home video back in the day. The film grain was very much present in the HD transfers, but the picture detail and clarity more than made up for it.
 
If done right, is this not some form of supersampling for a 480p image?

No, that's upscaling, as you correctly noted further on in your post. Supersampling is taking more samples to derive the color of each sample on the screen. What you are most likely referring to as supersampling is rendering the image at a higher resolution and then downsampling it to the display resolution. That isn't the only way to do supersampling, of course, but supersampling never involves fewer samples than the desired target resolution.

Hence why it generally has such a large performance hit: you're rendering twice the information with 2x, four times the information with 4x, etc.
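To make the distinction concrete, a toy numpy version (the "renderer" here is just a high-frequency test pattern I made up; note the factor is per axis, so k=4 means 16 samples per output pixel):

```python
import numpy as np

def render(w, h, scene_w=320, scene_h=180):
    # Sample a fixed continuous "scene" (a chirped stripe pattern whose
    # detail gets finer than one pixel) on a w x h grid.
    y, x = np.mgrid[0:h, 0:w]
    u = x * (scene_w / w)   # keep the scene the same regardless of res
    v = y * (scene_h / h)
    return (np.sin(0.002 * u * u + 0.3 * v) > 0).astype(np.float32)

def supersample(w, h, k):
    # Render at k times the target resolution on each axis, then average
    # each k x k block down to one output pixel: k*k samples per pixel.
    hi = render(w * k, h * k)
    return hi.reshape(h, k, w, k).mean(axis=(1, 3))

aliased = render(320, 180)            # 1 sample per pixel: stair-steps
smooth = supersample(320, 180, 4)     # 16 samples per pixel: smooth edges
print(aliased.mean(), smooth.mean())  # same scene, different sampling
```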

Regards,
SB
 
No, that's upscaling, as you correctly noted further on in your post. Supersampling is taking more samples to derive the color of each sample on the screen. What you are most likely referring to as supersampling is rendering the image at a higher resolution and then downsampling it to the display resolution. That isn't the only way to do supersampling, of course, but supersampling never involves fewer samples than the desired target resolution.

Hence why it generally has such a large performance hit: you're rendering twice the information with 2x, four times the information with 4x, etc.

Regards,
SB
I did mention two processes, the first being rendering at 1080p and taking it down to 480p; the second process would be the upscaling part, 480p to 1080p. When I mentioned supersampling I was implicitly referring to the first of the two processes. At 1080p we're dealing with 3x the pixels, so I'd assume it would be akin to 3x supersampling of 480p. Of course, you could perform 4x, 5x, or 6x supersampling of a 480p image.

The question is whether supersampling 480p at Nx would eliminate all picture-quality artifacts in motion. Once we've eliminated all artifacts, we'd upscale the image to 1080p. We could also keep all artifact-free fine-detail elements at the original Nx resolution and merge them with the final image (for example, the HUD).
 
3x supersampling (per axis) would be rendering at 5760x3240 and downsampling to 1920x1080.
A few people with Eyefinity play games at resolutions of that order (e.g. 5760x1080 across three monitors) and it needs a lot of horsepower.

As for supersampling 480p: you're going to have to upscale to 1080p (assuming a 1080p LCD), and upscaling is always worse than rendering at native res. So you would supersample to get a nice image, then ruin it by getting your GPU's or LCD's scaler to invent detail that isn't there.
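Worth pinning down the arithmetic here, since "Nx supersampling" gets used for both per-axis scale and total sample count:

```python
w, h = 1920, 1080
for n in (2, 3):
    print(f"{n}x per axis = {w * n}x{h * n} = {n * n}x the pixels")
print(f"1080p vs 640x480 (4:3):  {w * h / (640 * 480):.2f}x the pixels")  # 6.75x
print(f"1080p vs 854x480 (16:9): {w * h / (854 * 480):.2f}x the pixels")  # ~5.06x
```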

First off, I can certainly tell the difference, and it will be greater when going from 480 to 1920 (remember, most people do not sit 10 feet away from a computer monitor).
Second, it could have taken minutes per frame to upscale those movies; we have no way of knowing.
 
I did mention two processes, the first being rendering at 1080p and taking it down to 480p; the second process would be the upscaling part, 480p to 1080p. When I mentioned supersampling I was implicitly referring to the first of the two processes. At 1080p we're dealing with 3x the pixels, so I'd assume it would be akin to 3x supersampling of 480p. Of course, you could perform 4x, 5x, or 6x supersampling of a 480p image.

The question is whether supersampling 480p at Nx would eliminate all picture-quality artifacts in motion. Once we've eliminated all artifacts, we'd upscale the image to 1080p. We could also keep all artifact-free fine-detail elements at the original Nx resolution and merge them with the final image (for example, the HUD).

It's still not the same, as your target resolution in this case is still 1080p. Downsampling to 480p and then upscaling to 1080p is inevitably going to lead to a loss of information from the original source.

Now, if 480p were your target resolution then yes, the first step would be similar to supersampling, and as a 480p image it would look good. Upscaling it back to 1080p, however, will result in an image inferior to the original source.
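A toy numpy round trip shows the loss directly (box downsample by an integer factor for simplicity, nearest-neighbour upscale; a real scaler does better, but cannot recover detail the downsample discarded):

```python
import numpy as np

def scene(w, h):
    # Stand-in 1080p frame: smooth shading plus fine detail.
    y, x = np.mgrid[0:h, 0:w]
    return 0.5 + 0.25 * np.sin(0.5 * x) * np.cos(0.5 * y) + 0.25 * x / w

def box_down(img, k):
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def nearest_up(img, k):
    return np.repeat(np.repeat(img, k, axis=0), k, axis=1)

native = scene(1920, 1080)
round_trip = nearest_up(box_down(native, 3), 3)   # detour via 640x360
rms = np.sqrt(np.mean((native - round_trip) ** 2))
print(f"RMS error vs native: {rms:.4f}")          # nonzero: detail is gone
```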

Regards,
SB
 
I've just tried it
with the game Alpha Prime:
override AA,
4x supersampling,
in-game res 640x480 upscaled to 1680x1050, and it isn't good.
In fact, 1680x1050 with no AA is better.
Unfortunately, when using Fraps on a 640x480 game running at 1680x1050, the screenshot comes out at 640x480.
 
First off, I can certainly tell the difference, and it will be greater when going from 480 to 1920 (remember, most people do not sit 10 feet away from a computer monitor).
Second, it could have taken minutes per frame to upscale those movies; we have no way of knowing.
1920 is 1080p, and in the case of a DVD upscaled in real time by a Blu-ray player, it looks quite nice on my TV.

I've just tried it
with the game Alpha Prime:
override AA,
4x supersampling,
in-game res 640x480 upscaled to 1680x1050, and it isn't good.
In fact, 1680x1050 with no AA is better.
Unfortunately, when using Fraps on a 640x480 game running at 1680x1050, the screenshot comes out at 640x480.

That's not a clean multiple of 640x480 (and the aspect ratio doesn't match), unlike the 480p-to-1080p case; upscaling to odd resolutions is more troublesome and may introduce additional artifacts.

With regard to comparing it to native 1080p: of course the native image will have more fine detail. And if you could do heavy supersampling on a 1080p image it would look even better, but that's too expensive.

The idea is: would applying heavy supersampling to a 480p image create DVD-like perfect image quality? If it does, then the upscaled product will be no different from an upscaled DVD in image quality, and the absence of fine detail at 1080p will be compensated for by perfect, pre-rendered-CG-like image quality.

4x is good, but since we're dealing with 480p, higher levels of supersampling should be feasible in real time...
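On the feasibility point, the raw sample budgets compare like this (assuming cost scales with samples shaded, 16:9 480p = 854x480; bandwidth and scaler cost ignored):

```python
native_1080p = 1920 * 1080          # 2,073,600 samples with no AA
for n in (4, 8, 16):
    samples = 854 * 480 * n         # n shading samples per 480p pixel
    print(f"{n}x SSAA at 854x480: {samples:>9,} samples "
          f"= {samples / native_1080p:.2f}x a plain 1080p frame")
# 4x costs less than a plain 1080p frame; 16x costs about 3.2x as much
```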
 