Any details on AMD Leo demo?

It does use AA when it comes to computer graphics

Pixar uses something called stochastic antialiasing. It is simply supersampling with stochastic sample placement. They do not do shading on pixel/subsamples, so supersampling doesn't actually increase the shading cost (they shade vertices/micropolygons before rasterization).

So most of the shader antialiasing actually comes from the shaders themselves.
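To make that decoupling concrete, here is a minimal sketch of stochastic supersampling in Python (not Pixar's actual pipeline; the scene, sample count, and function names are invented for illustration), where the jittered visibility samples are cheap and "shading" is done once per surface:

```python
import numpy as np

rng = np.random.default_rng(42)

WIDTH, HEIGHT, SPP = 64, 64, 16  # hypothetical image size and samples per pixel

# "Shading" is done once per surface, not once per sample (micropolygon-style
# decoupling): here just one fixed color per surface.
surface_colors = {0: np.array([0.1, 0.1, 0.1]),   # background
                  1: np.array([0.9, 0.6, 0.2])}   # a slanted polygon

def visibility(x, y):
    """Cheap per-sample visibility test: which surface covers this point?"""
    return 1 if y < 0.3 * x + 10 else 0

image = np.zeros((HEIGHT, WIDTH, 3))
for py in range(HEIGHT):
    for px in range(WIDTH):
        acc = np.zeros(3)
        for _ in range(SPP):
            # Stochastic (jittered) sample placement inside the pixel.
            sx, sy = px + rng.random(), py + rng.random()
            acc += surface_colors[visibility(sx, sy)]
        image[py, px] = acc / SPP  # box-filter resolve
```

Because visibility() is far cheaper than shading, raising the sample count mostly adds coverage tests rather than shading work, which is the point being made about Pixar's approach.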
 
steampowerd ever heard the story of chicken little :D

Are you going to prove your theory with benchmarks and screenshots, or are you just going to keep repeating "supersampling a low-res frame and upscaling it produces perfect IQ"?

PS: you are aware SSAA obeys the law of diminishing returns; the biggest reduction in aliasing comes from going from no AA to 2x SSAA.
 
steampowerd ever heard the story of chicken little

This is different: we know that, while using a different rendering method, computer graphics films use AA and can produce perfect-IQ 480p, 720p, and 1080p images; this is not in contention.

There's also the very simple fact of the existence of bullshots, which you should be aware of: perfect-image-quality stills of realtime games that can be used to generate perfect-image-quality 720p or 1080p images.

The possibility of perfect image quality at 720p or 1080p is not in question.

We also know, as anyone who's seen upscaling from 720p or 1080p does, that the process produces an image that can in many cases look better than the original lower-res image, albeit in some circumstances not as good as a native higher-resolution image. In any case, the only thing such an upscaling/upconverting process needs to do is scale the image appropriately, which it can do decently enough if the original starting point is good enough; this too is not in question.

Even upscaling from 480p can compete with bad Blu-Ray encodes:
"Good upscaling algorithms can bring some benefits, just as we see upscaled DVD out of Blu-Ray players looking pretty good, in many cases giving Blu-Ray titles a run for their money." - Jeff Kilgroe

If you've any doubts, I hope the various presentations from QuadHD TVs showing 1080p upscaled to QuadHD (4K) resolution can ease such doubts.
"The picture is so good at times with 4K or 8K... From my viewing, the picture is so good from a good 4K TV that it's lifelike." - Robert Wiley

As regards current GPUs being able to handle AA for a perfect image, they can, albeit at high cost:
"This chapter describes a tiled supersampling technique for rendering images of arbitrary resolution with arbitrarily wide user-defined filters and high sampling rates. The code presented here is used in the Gelato film renderer to produce images of uncompromising quality using the GPU."
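For flavor, a minimal CPU-side sketch of the tiled approach the quote describes (not Gelato's actual code; the tile size, supersampling factor, and the box filter here stand in for Gelato's wide user-defined filters):

```python
import numpy as np

TILE, K = 32, 4  # hypothetical tile size and supersampling factor

def render_tile(x0, y0, w, h):
    """Stand-in for the real renderer: evaluates a procedural pattern
    over a w-by-h region of the K-times-supersampled image plane."""
    ys, xs = np.mgrid[y0:y0 + h, x0:x0 + w].astype(float)
    v = 0.5 + 0.5 * np.sin(0.05 * xs) * np.cos(0.05 * ys)
    return np.dstack([v, v, v])

def downfilter(tile):
    """Box-filter each K-by-K block down to one output pixel; a film
    renderer would use a wider, user-defined filter here."""
    h, w, c = tile.shape
    return tile.reshape(h // K, K, w // K, K, c).mean(axis=(1, 3))

def render_image(width, height):
    """Width/height assumed to be multiples of TILE for brevity."""
    out = np.zeros((height, width, 3))
    # Render one tile at a time so the K-times-supersampled buffer never
    # has to exist at full-image size; that is what makes arbitrary
    # output resolutions feasible with limited memory.
    for ty in range(0, height, TILE):
        for tx in range(0, width, TILE):
            hi = render_tile(tx * K, ty * K, TILE * K, TILE * K)
            out[ty:ty + TILE, tx:tx + TILE] = downfilter(hi)
    return out
```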

Are you going to prove your theory with benchmarks and screenshots, or are you just going to keep repeating "supersampling a low-res frame and upscaling it produces perfect IQ"?
The Gran Turismo 5 photo mode, whilst it may be improved further, already shows that near-perfect images can be generated in short order at 1080p.

PS: you are aware SSAA obeys the law of diminishing returns; the biggest reduction in aliasing comes from going from no AA to 2x SSAA.

For most cases this is true, but it seems there exist odd cases, involving sharp angles, sharp contrast changes, etc., wherein perceptible artifacts are noticeable even at higher levels of supersampling; otherwise there wouldn't be a need for taking it all the way up to 64x.
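The diminishing returns are easy to see numerically: a quick sketch that measures the coverage error of an ideal horizontal edge at increasing sample counts, assuming purely random sample placement (the trial counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def coverage_error(spp, trials=20000):
    """Average |estimated - true| coverage of a pixel cut by a horizontal
    edge at a random height t (true coverage = t), using spp samples."""
    t = rng.random(trials)                        # true coverage per trial
    hits = rng.random((trials, spp)) < t[:, None] # samples below the edge
    return np.mean(np.abs(hits.mean(axis=1) - t))

for spp in (1, 2, 4, 8, 16, 64):
    print(f"{spp:3d} spp: mean coverage error {coverage_error(spp):.4f}")
```

Each doubling buys a smaller absolute improvement (roughly a 1/sqrt(spp) trend for random placement), yet the error never reaches zero, which is consistent with pathological high-contrast edges still showing artifacts until very high sample counts.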
 
The Gran Turismo 5 photo mode, whilst it may be improved further, already shows that near-perfect images can be generated in short order at 1080p.
Hardly short order; the video was stop motion, aka not realtime.

If you want to prove that rendering at a low res and upscaling allows for higher levels of SSAA and produces better IQ than a game running at high res with a lower level of SSAA, you need to do the work:

1. How much more SSAA can you apply by using the lower res?
2. How does it compare to a non-upscaled frame with lower SSAA?

/me waits for steampowered to quote a piece of text/image/video instead of actually finding out if his theory is true
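For what it's worth, question 1 has a back-of-the-envelope answer; a quick sketch with illustrative resolutions and SSAA levels (assuming cost scales with shaded samples and ignoring bandwidth/VRAM limits):

```python
# Per-frame shaded sample counts (illustrative picks, not measurements).
configs = {
    "854x480 @ 8x SSAA":   854 * 480 * 8,     # ~3.3M samples
    "1920x1080 @ 2x SSAA": 1920 * 1080 * 2,   # ~4.1M samples
    "1920x1080 @ 4x SSAA": 1920 * 1080 * 4,   # ~8.3M samples
}
for name, samples in configs.items():
    print(f"{name}: {samples / 1e6:.1f}M samples")
```

At equal sample budgets, 854x480 affords roughly 5x the SSAA factor of 1920x1080; whether the upscaled result holds up visually is exactly what the screenshots would have to show.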
 
Hardly short order; the video was stop motion, aka not realtime.

/me waits for steampowered to quote a piece of text/image/video instead of actually finding out if his theory is true

Short order as in about a second, though I've not tried it and don't know how long it takes. Current GPUs are more than 10x more powerful, and should in a few years be 30x more powerful. What is 1 fps becomes 30 fps.

As regards supersampling real games, you just need to spend some time at places like NeoGAF and see what happens. Often even images with artifacts at their original high resolution appear as if out of a Hollywood CG movie when shown as a small, lower-resolution picture... and the members clearly state that it is misleading to post this moving GIF or pic that appears indistinguishable from real life.
 
Are you actually going to prove your theory, or once again quote something from someone else?
That's a bit unfair - it's not possible to totally test the "theory" given current display densities and games. The proposal is not that *uniform* supersampling scaled up will look better than native res (because that's stupid... it's like rendering at 1080p, downsampling, then re-upsampling for no reason), but rather that stochastic or jittered supersampling can look better for the same cost in processing power. Even with MSAA/SSAA overrides in the control panel, users are hardly equipped to do that test properly. Furthermore, it becomes most interesting once you've passed a certain threshold of visible pixel density (such as with 300 dpi displays, or TVs viewed at far enough distances). Those displays are not yet ubiquitous, so that makes it hard to test too.
 
He could at least do some testing; so far he hasn't even got as far as running a game.
His evidence of his theory being correct is a stop-motion video of some bullshots, with no comparison video of the same bullshots rendered at a higher res instead of upscaled with less SSAA; several screenshots from films that don't have any SSAA in them and, despite his claim that they are just as good, clearly aren't; and some quotes from random people on the internet.

He could run at low res with 2x SSAA enabled and upscaled 2x, and benchmark it.
Then run at twice the res with 2x SSAA, benchmark it, see how much of an effect resolution has on framerate, and provide screenshots so we can compare IQ.
Then run at low res with 4x SSAA enabled, benchmark it, and see whether the low res allows for higher AA and how it compares to double res with lower SSAA, in both framerate and IQ.

Will he?
I doubt it; instead he will quote someone saying "hey, I upscaled my DVD of The Matrix and it looks perfect".
 
His evidence of his theory being correct is a stop-motion video of some bullshots, with no comparison video of the same bullshots rendered at a higher res instead of upscaled with less SSAA...
There is a difference between PR bullshots and those bullshots: those are ingame Gran Turismo photomode shots, the PlayStation 3 is generating them on its hardware, and as far as I know it is generating them in a short amount of time. The video shows a 1080p final image, which can easily be upscaled to QuadHD whilst preserving its CG-ness and fine detail. A GPU orders of magnitude faster than the PS3 GPU should be able to render such 1080p shots orders of magnitude faster.

As regards upscaling: obviously upscaling a DVD won't look perfect, especially in cases of fine, small details, but there is still plenty of detail and the final product is quite good.

Regardless, I've already said it is possible to use a higher-than-480p starting resolution with a more proper aspect ratio - something similar to 720p or slightly lower, maybe a bit more. You should be aware that upscaled 720p is even harder to distinguish from native 1080p, and it has a substantial amount of fine detail.

Yeah, but if you'd scale that small image back up you'd see how horrible it really looks.
If you do a brute-force scale without any proper algorithm it will end up bad, but some of these screens can occupy up to a quarter or more of my 1080p monitor, so it's not some minuscule screenshot we're starting from.
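For anyone who wants to check the "brute force vs. proper algorithm" difference themselves, a small sketch using the Pillow library (the file names and resolutions are placeholders):

```python
from PIL import Image

# Hypothetical input: an 854x480 screenshot to be shown at 1080p.
src = Image.open("screenshot_480p.png")

# "Brute force": nearest-neighbour just replicates pixels -- blocky edges.
nearest = src.resize((1920, 1080), Image.NEAREST)

# A proper filter: Lanczos (windowed sinc) preserves edges far better.
lanczos = src.resize((1920, 1080), Image.LANCZOS)

nearest.save("upscaled_nearest.png")
lanczos.save("upscaled_lanczos.png")
```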
 
which can easily be upscaled to QuadHD whilst preserving its CG-ness and fine detail.
So you claim, without providing any proof of it.
I'll ask again: are you going to prove that rendering a game at 480p will save enough GPU horsepower (and not run out of VRAM) to allow you to double the amount of SSAA you can apply versus a game running at 1080p, while running at a similar framerate? And are you going to prove that the IQ is comparable? Or are you just going to keep repeating yourself and saying it's true because "we know it's true"?
 
If you do a brute-force scale without any proper algorithm it will end up bad, but some of these screens can occupy up to a quarter or more of my 1080p monitor, so it's not some minuscule screenshot we're starting from.
It doesn't really matter what algorithms you use; you'll still be losing details that would be there if rendered at the proper resolution. At best you'll get something similar to a blur-filtered image.
 
It doesn't really matter what algorithms you use; you'll still be losing details that would be there if rendered at the proper resolution. At best you'll get something similar to a blur-filtered image.
First, there's no such thing as a "proper resolution". Certainly when you render at relatively lower resolutions you don't get as much detail, but that's hardly a good use of resources if you can't see that detail. Don't get so jaded by your experiences with sitting a foot from a 24" monitor @ 1080p (ouch, pixel density) that you can't even imagine the trade-off on higher-density displays. Surely some of you have iPhones or TVs that you view from across the room... It's simply not going to be a good use of hardware to render/shade at that native resolution on these devices. The visual difference will be minor, and the FLOPS are better used elsewhere.
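The viewing-distance argument can be made quantitative with a pixels-per-degree estimate; a quick sketch (the DPI and distance figures are rough, commonly cited values, and ~60 ppd is the usual rule-of-thumb acuity limit):

```python
import math

def pixels_per_degree(dpi, distance_inches):
    """Pixels subtended by one degree of visual angle at this distance
    (small-angle approximation)."""
    return math.radians(1) * dpi * distance_inches

# (dpi, typical viewing distance in inches) -- rough illustrative numbers.
displays = {
    '24" 1080p monitor': (92, 24),
    'iPhone "retina"':   (326, 12),
    '50" 1080p TV':      (44, 96),
}
for name, (dpi, dist) in displays.items():
    print(f"{name}: {pixels_per_degree(dpi, dist):.0f} ppd")
```

Displays that land comfortably above ~60 ppd are past the point where individual pixels are resolvable, which is where shading every native pixel stops paying off.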
 
First, there's no such thing as a "proper resolution".
He means native resolution.

Don't get so jaded by your experiences with sitting a foot from a 24" monitor @ 1080p

Steampowered's first post, where he made the claim:
Could the image be reduced from 1080p to 480p (and upscaled back to 1080p if we want to simulate the higher res) to reduce aliasing?

Surely some of you have iPhones or TVs that you view from across the room... It's simply not going to be a good use of hardware to render/shade at that native resolution on these devices.
True, but then when you use your iPhone normally, aka right in front of your face, it's going to look bad. And also, would using supersampling be a good use of resources?

And speaking of resources, we are still waiting for steampowered to tell us whether lowering the resolution allows for a doubling of SSAA levels and how it compares in IQ; no doubt he will do it shortly after Rhys gets permission to tell us about the img PMX590 - we've only been waiting since August.
 
... and also, would using supersampling be a good use of resources?
That's the point though - even if you do supersampling (which you probably wouldn't), almost any sampling pattern is better than the uniform pattern you get when simply rendering at a higher resolution. The MSAA patterns are *far* better. So even if your per-sample cost remains the same, the final image will look better (particularly temporally, so screenshots are somewhat moot) for the same number of shaded pixels.
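A toy numerical illustration of the pattern argument (the rotated-grid offsets are representative of a 4x MSAA-style pattern, not any vendor's actual sample positions; the test edge is assumed perfectly horizontal):

```python
import numpy as np

# Per-pixel sample offsets in [0,1)^2. The ordered grid is what plain 4x
# supersampling (render at 2x2 resolution, box-downsample) gives you.
ordered = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
rotated = [(0.375, 0.125), (0.875, 0.375), (0.125, 0.625), (0.625, 0.875)]

def edge_mae(pattern, trials=20000, rng=np.random.default_rng(1)):
    """Mean |estimated - true| coverage for a pixel crossed by a
    horizontal edge at a random height t (true coverage = t)."""
    t = rng.random(trials)
    est = np.mean([[sy < ti for sx, sy in pattern] for ti in t], axis=1)
    return np.mean(np.abs(est - t))

for name, pat in (("ordered 2x2", ordered), ("rotated 4x", rotated)):
    print(f"{name}: mean edge coverage error {edge_mae(pat):.4f}")
```

Same sample count, roughly half the mean edge error, purely from the sample placement: the ordered grid only has two distinct sample heights, so it quantizes a horizontal edge into three coverage levels, while the rotated grid has four.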
 
My knee-jerk reaction would be similar to Davros's and HoHo's, but I get what you (Andrew) are trying to say regarding the sampling differences between a bilinear upscale and MSAA/SSAA. I'm still not convinced that the result is useful, however.

I guess it could be more interesting if you somehow muxed the upscale and the MSAA resolve into the same pass, or at least allowed them to be closely intertwined. The brute-force approach is of course to attempt playing a game at 800x480 +AA and then having your LCD (or your graphics card) do the upscale. But that may not be indicative of the result of a game engine specifically written to use "variable, non-linear<?> render density".

One way of thinking about it might be how video compression works, kinda... Big, relatively non-descript chunks of color or gradient that take up several hundred pixels could be converted to a few fat squares and then "upscaled"; areas of high detail might be rendered at 1:1 or even higher and then similarly scaled to fit. If you rendered everything to a final render target in linear space, it would be a distorted, contorted, almost (but not quite) twisted piece of work rather than a rectangle. You'd then "stretch" it to fit the full-screen framebuffer, but the stretch wouldn't be pure bilinear; it would have subpixel information that could possibly be reconstituted during the upscale, depending on the "regional" data being stretched.

I can't even begin to get my head around how such an engine would be written or optimized, or whether all the overhead would be worth anything. I suppose, at most, you might be able to squeeze a bit more detail into single scenes for older graphics cards that do not have the ROPs or memory bandwidth to pull off big AA. But it seems like you'd have to do a lot of computation ahead of time to calculate what parts of the output do not need the full sample rate, which seems like you'd trade a ton of CPU cycles to effectively "cull" stuff that would normally have been fed to the GPU.
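One crude way to prototype the "variable render density" idea on the CPU is to drive the per-pixel sample rate from a cheap importance heuristic; a toy sketch (the shader, heuristic, and thresholds are all made up):

```python
import numpy as np

rng = np.random.default_rng(7)

def shade(x, y):
    """Stand-in for an expensive shader: a pattern with one sharp edge."""
    return 1.0 if y < 0.3 * x + 20 else 0.2 + 0.05 * np.sin(0.3 * x)

def importance(x, y, eps=1.5):
    """Toy heuristic: local contrast from a few cheap probe evaluations."""
    c = shade(x, y)
    return max(abs(shade(x + eps, y) - c), abs(shade(x, y + eps) - c))

H, W = 64, 64
image = np.zeros((H, W))
for py in range(H):
    for px in range(W):
        # High-contrast regions get 16 samples, flat regions just 1.
        spp = 16 if importance(px + 0.5, py + 0.5) > 0.1 else 1
        pts = rng.random((spp, 2))
        image[py, px] = np.mean([shade(px + sx, py + sy) for sx, sy in pts])
```

A real engine would need that per-region decision to be nearly free - derived from the previous frame or from geometry, say - which is exactly the up-front-computation concern raised above.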
 
The brute-force approach is of course to attempt playing a game at 800x480 +AA and then having your LCD (or your graphics card) do the upscale.
... and stand a *long* way back from your monitor. I don't think anyone is making the argument here that if you can see the individual pixels it's going to look the same ;) The argument is more that the coming jump in display density will most likely go past the point where it's worth it to shade every pixel.

But yeah, you'd have to use a much better resampling filter as well. That's why DVDs/Blurays are a much better comparison since the media hardware already has really nice filters.
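For reference, the "really nice filters" in such hardware are typically windowed-sinc resamplers; a minimal 1D Lanczos sketch:

```python
import numpy as np

def lanczos(x, a=3):
    """Lanczos kernel: sinc(x) * sinc(x / a) for |x| < a, else 0."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

def upscale_1d(signal, factor, a=3):
    """Resample a 1D signal to factor-times as many samples."""
    signal = np.asarray(signal, dtype=float)
    n = len(signal)
    out = np.empty(n * factor)
    for i in range(n * factor):
        center = i / factor                    # position in source coords
        lo = int(np.floor(center)) - a + 1
        taps = np.arange(lo, lo + 2 * a)       # 2a contributing taps
        w = lanczos(center - taps)
        idx = np.clip(taps, 0, n - 1)          # clamp taps at the borders
        out[i] = np.dot(w, signal[idx]) / w.sum()
    return out

print(upscale_1d([0, 0, 1, 1], 4).round(3))  # a sharpened step, not a blur
```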
 
Yeah, you can already get some of that if you compare 720p source material to 1080p source material (console games, PC games, and movies) on HDTVs, depending on your viewing distance and screen size.

And obviously for PC screens, we're going to need some pixel density advances before we start to see something similar.

I've noted that before, when Andrew Lauritzen explained what he meant. And I absolutely agree.

The problem here is the back and forth with steampoweredgod, where he doesn't take either viewing distance or pixel density into account in his expositions. The whole back and forth started with discussion of improving the rendering quality of the Leo demo - so a PC demo, and hence everyone is assuming a PC monitor at PC viewing distances. In that case, with current 1080p displays, there's no way for what he proposes to look better than an image rendered at the native monitor resolution. You might well get rid of the aliasing, but you'll certainly lose a lot of detail as well.

Regards,
SB
 
I agree with Andrew here. This is a topic worth discussing moving forward.

Personally, back in the ancient days of CRT monitors, I quite often preferred a picture at lower than the maximum resolution but improved by AA. Back then, GPU performance was the biggest limiting factor for running AA at 1280x1024 or similar, and CRT screens had that nice property where all resolutions looked equally good. Since the LCD age came I couldn't stand the blurriness of non-native screen resolutions and was forced to play without AA most of the time. Perceived fidelity of graphics is more important than the real number of pixels pushed to the screen. It's like watching special effects in movies: if they're good enough, our eye and brain can't tell during a normal screening what was real and what was not; only when we specifically look for them might we notice. And that's quite like playing a game in slow motion with a magnifying glass over a section of the screen.


Still, with 27''+ LCDs and Eyefinity I doubt we are anywhere close to being able to stop worrying about pushing more shaded pixels onto screen(s), but as Andrew points out, it might be the right thing for QuadHD screens.
 
Aye, I've been hoping for practical high-DPI screens ever since IBM first introduced their 22" 3840x2400 screen. Unfortunately it never made much progress and was cancelled a few years later.

There are a lot of things that could be done with high-DPI screens to increase the fidelity of rendered images without rendering costs going up linearly (or worse) with the number of pixels.

Regards,
SB
 