Ripping off the veil: The mysterious PS3 hardware scaler exposed

Discussion in 'Beyond3D Articles' started by Rys, Jan 26, 2007.

  1. betan

    Veteran

    Joined:
    Jan 26, 2007
    Messages:
    2,315
    Likes Received:
    0
    So? (By the way, by definition frequency is not local in space, so edges don't have any frequencies, infinite or otherwise.)

    More redundant information: as I already said, native 1080p will have more information than any scaled one.

    Indeed

    Since I have yet to see any related objective reasoning in your posts, I will try to explain slowly and as politely as possible.

    • Most of us have two eyes
    • Eyes do see different things because of their positions.
    • For fixed distance 2d screens such as a TV, that corresponds to horizontal shifting.
    • The resulting superimposition will be equivalent to horizontal low pass filtering.
      For example, if the pixel separation is 1, that would roughly correspond to a filter of [0;0;0; 0;0.5;0.5; 0;0;0]
    • This means, higher horizontal frequencies will be suppressed more than corresponding vertical ones. In other words, we are more sensitive to vertical sharpness.
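    The superimposition claim in the bullets above can be sketched numerically (my own illustration, not from the post): averaging a scanline with a one-pixel-shifted copy of itself is convolution with the [0.5, 0.5] filter described, which wipes out the highest horizontal frequency while passing a constant untouched.

```python
def superimpose(row):
    """Average each pixel with its right-hand neighbour: convolution
    with the [0.5, 0.5] filter from the bullet list above."""
    return [0.5 * row[i] + 0.5 * row[i + 1] for i in range(len(row) - 1)]

nyquist = [1, 0] * 8   # highest horizontal frequency: alternating pixels
flat = [1] * 16        # zero frequency: a constant row

print(superimpose(nyquist))  # all 0.5: the alternating detail is gone
print(superimpose(flat))     # all 1.0: low frequencies pass unchanged
```

    Whether two real eyes actually superimpose images this way is exactly what is disputed later in the thread; the sketch only shows what such a filter would do.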

    Yet you manage to avoid any discussion related to signal processing. I am sorry, but this doesn't sound believable at all, partially thanks to your urge to disclose your education level, not to mention the overwhelmingly frequent subjective comments.

    Of course, it may be true. Not all EE PhDs are proficient in SP.
    Anyway, I would appreciate it if you could manage to keep yourself (or myself) out of the discussion, as scientific etiquette mandates. :wink:

    You are totally right, I am pretty sure Nyquist does not break under AA.
     
  2. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    213
    Location:
    Uffda-land
    I think this attempt at genteel trash talking better back off by a notch or so. :???:
     
  3. paawl

    Newcomer

    Joined:
    Feb 3, 2007
    Messages:
    113
    Likes Received:
    0
    Eye-brain horizontal and vertical resolution

    Aliasing is when unwanted out-of-band information is mapped into the band of interest, usually due to under-sampling an analog input signal.

    It is my limited understanding of computer graphics that aliasing occurs due to spatially under-sampling the game world while rendering. The game world, consisting of idealized polygons with infinitely sharp edges, has infinite spatial frequency. However, the power spectral density falls off rapidly (as 1/f^2) with increasing frequency. Most anti-aliasing techniques sample the game world at a higher rate, then apply a low-pass filter prior to display. This reduces (but cannot eliminate) the power in the aliased frequencies by a factor which is approximately equal to the factor increase in the sampling rate. For example, doubling the sampling rate (2xAA) halves the aliased power. If someone with greater knowledge of CG cares to correct me, feel free.
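    That supersample-then-filter description can be sketched in a few lines (my own toy example, not from the post): point-sample an ideal step edge at display resolution and at 2x, then box-filter the 2x samples back down.

```python
def sample_edge(n, edge_pos):
    """Point-sample an ideal step edge (0 left of edge_pos, 1 right of it)
    at n pixel centres across a unit-width scanline."""
    return [1 if (i + 0.5) / n >= edge_pos else 0 for i in range(n)]

def downsample_2x(samples):
    """Box filter: average adjacent pairs (the resolve step of 2x supersampling)."""
    return [(samples[2 * i] + samples[2 * i + 1]) / 2 for i in range(len(samples) // 2)]

aliased = sample_edge(8, 0.45)             # one sample per display pixel
aa = downsample_2x(sample_edge(16, 0.45))  # two samples per display pixel

print(aliased)  # [0, 0, 0, 0, 1, 1, 1, 1] - a hard 0/1 transition
print(aa)       # the edge pixel lands on a fractional coverage value (0.5)
```

    The aliased energy isn't eliminated, as the post says; the edge pixel just gets a coverage-weighted value instead of snapping to 0 or 1.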

    Regarding the horizontal resolution (and aliasing) of a 960x1080 buffer, most of the 480p/1080i TVs that this resolution is targeted towards (CRTs) don't have much better than 800 to 1000 vertical lines of resolution anyway, so their owners won't be missing anything. Importantly, then, any output from the console will effectively be low-pass filtered by the TV, reducing the appearance of sharp edges. I assume that this is what betan meant by, "any decent scaler will make use of interpolation (or anti-aliasing) which will hide aliasing." This isn't really anti-aliasing, because the aliased power is still there, but the edge smoothing may make it less visually objectionable.

    Regardless, the choice for people with 480p/1080i TVs isn't 960x1080 or 1280x720. Their choice is between 960x1080 and 640x480. It would be interesting to make visual comparisons between 960x1080 without MSAA and 640x480 with 4xMSAA, assuming that most games will use 4xMSAA when rendering at lower resolutions. I also don't see why the scene necessarily has to be rendered at 960x1080. Couldn't the developer render at 1280x720, then make a full-scene pass to resolve to 960x1080 for output? That would save a tiny bit of memory (450 KiB), eliminate any performance penalty, and provide effective (4/3x) anti-aliasing to make up for the lower horizontal display resolution.

    Lastly, it has often been claimed, and I have always believed, that the human eye-brain system is markedly less sensitive to horizontal resolution than to vertical resolution. However, I did some Googling last night, and if this claim is true, I couldn't find any evidence for it. Does anyone (betan?) have a reference for this?
     
  4. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,325
    Likes Received:
    93
    Location:
    San Francisco
    I don't see how this process would eliminate some performance penalty.. what penalty?
     
  5. paawl

    Newcomer

    Joined:
    Feb 3, 2007
    Messages:
    113
    Likes Received:
    0
    Any (small) penalty associated with filling 960x1080 pixels instead of 1280x720.

    It just seems (and I speak not as a game developer) like the Xbox 360 has a nice setup, where the developer targets a single resolution (1280x720, 1024x576, or whatever) and the scaler takes care of the rest.

    With the newfound ability (or permission) to perform horizontal scaling, the PS3 developer now has almost the same ability as the Xbox developer. He can always render to a 1280x720 render buffer, then resolve (or filter, map, whatever you developers call it) the render buffer to the desired output buffer resolution, e.g., 640x480, 1280x720, or 960x1080. No change in performance (neglecting the filtering), just a little more or less memory for the output buffer.

    If this is not possible or desirable, I'd be interested to know why.
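    The resolve pass described above is essentially a separable resample. A minimal 1D sketch (my own Python, not PS3/RSX code; the function name and the linear filter are illustrative assumptions) of rescaling one 1280-sample scanline to 960 samples:

```python
def resample_row(row, out_width):
    """Linearly interpolate len(row) input samples down (or up) to out_width samples."""
    in_width = len(row)
    out = []
    for i in range(out_width):
        # Map the output pixel centre back into input sample coordinates.
        x = (i + 0.5) * in_width / out_width - 0.5
        x = min(max(x, 0.0), in_width - 1)   # clamp at the scanline edges
        x0 = int(x)
        x1 = min(x0 + 1, in_width - 1)
        t = x - x0
        out.append((1 - t) * row[x0] + t * row[x1])
    return out

scanline = list(range(1280))             # stand-in for one 1280-wide rendered row
print(len(resample_row(scanline, 960)))  # 960
```

    A real GPU resolve would do this with bilinear texture filtering in both axes, but the coordinate mapping is the same idea.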
     
  6. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,325
    Likes Received:
    93
    Location:
    San Francisco
    Everything is possible, but I'm not entirely sure that you save anything, because rescaling your image from 720p to another res via the GPU is going to take some time. Maybe in some games it's a win.. but I'm doubtful
     
  7. paawl

    Newcomer

    Joined:
    Feb 3, 2007
    Messages:
    113
    Likes Received:
    0
    Really? I trust you, but I was assuming that the rescaling on the GPU would take negligible time. I guess I assumed wrong.

    I assumed that the rescaling would be like filling ~1 Mpixel (960x1080) using bilinear interpolation from a 1280x720 texture. On a GPU capable of ~1 Gpixel/s fill rate, this should take ~1 ms. One millisecond is about 3% of a 33 ms frame, well under the 12.5% extra fill cost of the larger buffer.

    Looked at another way, if you were filling 10 million pixels per frame for a 1280x720 display, and you change the render target from 1280x720 to 960x1080, then you have to fill 11.25 million pixels per frame. That's an extra 1.25 million pixels, with shaders, multi-texturing, etc. The ~1 million pixels you have to fill to do the scaling don't have any shaders and only a single texture.

    I agree that the benefit is small (because the cost of 12.5% going from 1280x720 to 960x1080 is small), but I thought that the real benefit would be in keeping your rendering resolution independent of the output resolution. The performance benefit also becomes more significant if you wanted to output 1280x1080, where you only have to fill ~1.3 Mpixels, as opposed to shading an extra 5 million pixels per frame.
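    The arithmetic in the last few paragraphs checks out (the 1 Gpixel/s figure is the poster's assumption, not a measured RSX number):

```python
native_720 = 1280 * 720   # 921,600 pixels
buf_960 = 960 * 1080      # 1,036,800 pixels

extra = buf_960 / native_720 - 1
print(f"{extra:.1%} more pixels to shade")  # 12.5% more pixels to shade

fill_rate = 1e9                       # assumed fill rate, pixels per second
scale_ms = buf_960 / fill_rate * 1e3  # time to fill the scaled output once
print(f"{scale_ms:.2f} ms")           # 1.04 ms, roughly the ~1 ms estimate
```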

    Anyway, that's just my completely unfounded estimation, as someone who's never touched a GPU with code before. I don't know enough to include the effects of caching and memory bandwidth on my estimations. I'm sure you're right. Thanks for the feedback!
     
  8. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,325
    Likes Received:
    93
    Location:
    San Francisco
    Your numbers are spot on, I just don't think that a 10% increase in pixel count is going to increase the rendering time by 10%; it just does not scale linearly most of the time (mostly sublinearly..). You should also consider that in a modern game many rendering passes just don't run at frame buffer resolution (shadow maps, post-processing effects, etc..) and they will not be affected by a slight change in frame buffer resolution.
    Moreover think about a game which is doing AA..at some point your frame buffer has to be downsampled/resolved before display or further processing..well, it's the perfect occasion to generate a frame buffer at any odd resolution you need at virtually no extra cost.
     
  9. paawl

    Newcomer

    Joined:
    Feb 3, 2007
    Messages:
    113
    Likes Received:
    0
    Really? Well, color me surprised. Does that not imply that the GPU is spending a significant portion of its time doing things other than filling pixels? What is it doing with that time? Shading vertices? Waiting on the CPU?

    Didn't know that. Thanks for the information.

    That's what I was trying to suggest, but without the AA. (I was under the impression that most titles didn't use AA at 1280x720.) Sorry if I wasn't clear. Do many PS3 developers use this technique? It seems like it would make coding, testing, and supporting multiple resolutions easier.
     
  10. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,325
    Likes Received:
    93
    Location:
    San Francisco
    Umh.. the rendering process can be limited by many different things at the same time, at any time. Rendering time is rarely a linear function of the number of pixels your process fills (an exception could be a post-processing filter)

    Motorstorm, F1, HS.. and probably many others
    Dunno, to be honest I just thought about this trick while replying to you.. :)
     
  11. paawl

    Newcomer

    Joined:
    Feb 3, 2007
    Messages:
    113
    Likes Received:
    0
    I believe you, I was just under the mis-impression that GPUs were very efficient devices, almost totally limited by resources like memory bandwidth and shader operations that are required in proportion to the number of pixels filled. I didn't think that they spent much of their time doing anything but filling pixels. Anyway, thanks again for the education.
     
  12. Mintmaster

    Veteran

    Joined:
    Mar 31, 2002
    Messages:
    3,897
    Likes Received:
    87
    Ever seen the Fourier transform of a step function, mister SP genius? What is the highest frequency it introduces? :roll:

    Who cares? That's not what I'm arguing with. You equated interpolation to anti-aliasing. That is completely wrong. You said a scaler won't introduce vertical aliasing, but that's irrelevant because the near-vertical line aliasing is due to having only 960 pixels across, not the scaler.

    This is some really stupid reasoning. Your eyes both look at the same spot, not one pixel apart. The only possible misalignment is away from the point you're looking at, i.e. in your peripheral vision, which doesn't have any resolution anyway. If you're talking about just a few degrees off the focal point where eye resolution is still high, do some trigonometry (assuming you are capable of math). Misalignment is far, far less than one pixel for any reasonable viewing distance.

    Look at this period:

    .

    Is it oblong? Do you see effects of a LPF? No you don't (unless you have some vision problems) because there is no LPF. You're full of crap.

    What is there to discuss? All a scaler can do is apply a reconstruction filter, like a sinc filter. But that only works if the samples (i.e. the framebuffer pixels) are taken from a BW-limited signal. The geometry in a 3D scene, unfortunately, is not BW limited. Signal processing can't do anything, but you naively think it can.
    Who was the one mocking my signal processing education? Oh right, that was you, the master of scientific etiquette.

    And what part of my argument is subjective? The aliasing step size from 960 pixels across is greater than that from 1280 pix across or 720 pix vertically. That is fact. You're the one making some BS argument that a scaler can magically hide this.
     
  13. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,325
    Likes Received:
    93
    Location:
    San Francisco
    Even kids know that.. all this discussion about SP is quite surreal..
    True, even though I don't believe it's a big deal in the end..
     
  14. betan

    Veteran

    Joined:
    Jan 26, 2007
    Messages:
    2,315
    Likes Received:
    0
    It doesn't take a genius really, just elementary SP.
    Let's revisit your previous claim:

    And now you give the example of the unit step function.
    First of all, the edge of the unit step function has no frequency, as I said before (locality).
    Second, the amplitude of the infinite-frequency component in its transform is 0.
    Third, as it happens, most of its energy is at 0 (constant) frequency (a Dirac delta).

    I assume what you were trying to say is that it is not bandwidth limited.

    By the way, rectangle function would be a better analogy.

    Nobody is equating anti-aliasing and interpolation, but it would be a waste of time to try to explain their relation. If you are really interested, go check the FT of cubic splines, otherwise you can imagine whatever you want.

    Honestly though, I was under the impression you were initially talking about aliasing due to scaling. That's why I said you won't notice any aliasing with any decent scaler. A scaler doesn't have to be broken to introduce aliasing, being lazy is sufficient.

    That said, independent of any aliasing due to rasterisation, there is only a fixed amount of information you can convey with 960 pixels. How efficiently you use it is irrelevant. Same goes for any correlation between lines.

    Basic math time then.

    Let's say your eye separation is 2S and from a distance (D) you're looking at a screen with width 2W and horizontal resolution 2P. Finally let's assume you are retarded and focus on a single pixel in the middle.
    That is, no shifting on that pixel.

    What about the pixels closer to the edge? Perceived difference (in pixels) would be roughly
    P*2*(a-b)/(a+b) where a and b are W*D/((W-S)^2+D^2)^0.5 and W*D/((W+S)^2+D^2)^0.5 respectively.

    (You can replace W with W*p/P to get a distribution on pixel p from the center.)

    What does this give us for real numbers (based on my monitor).

    octave:1> W=7.5;D=35;S=1.5; % inches
    octave:2> P=800;
    octave:3> a=W*D/((W-S)^2+D^2)^0.5;
    octave:4> b=W*D/((W+S)^2+D^2)^0.5;
    octave:5> P*2*(a-b)/(a+b)
    ans = 14.025

    14-pixel separation for the 800th pixel (from the center).
    By the way, you would get 1 pixel separation for roughly 0.5 inches from the center.

    That said, sane people do not focus on a single pixel while watching something, they focus on behind the screen which will result in more even distribution. That's why I said it depends on one's focus.

    Stop trying to read my thoughts, as you suck at it. Read what I wrote. Nobody is claiming you can restore what is lost during sampling to a resolution of 960. And it is definitely more than 1920. What I am saying -for the nth time- is that you already lose some information on native 1920p, meaning some information that you lose because of 960p wouldn't reach your brain anyway.
    How complex can this be?

    I wasn't really mocking your education as I don't think you have one. Yet, even if I was, after reading your lame attempts to insult me without any basis, I have every right to do so.

    You are definitely undersampling what I am saying. Please go and resample at an appropriate rate. :)
     
  15. paawl

    Newcomer

    Joined:
    Feb 3, 2007
    Messages:
    113
    Likes Received:
    0
    I think you need to carefully re-read betan's correction of your earlier erroneous assertion about the "spatial frequency of a polygon edge" being infinite. Betan is quite correct that edges don't have frequencies, because frequencies are not local in space. (Ever seen the inverse Fourier transform of a single spatial frequency, represented by a Dirac delta? Where is that frequency located in space?) Scenes which contain edges have a spectrum which is not band-limited, although the power is "zero at infinity", as it must be, for any power-limited signal.
     
  16. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,325
    Likes Received:
    93
    Location:
    San Francisco
    Guys, don't fight. Mintmaster was just trying to say that a Heaviside-like signal is not bandwidth limited; he did not say a thing about the amplitude of its highest harmonics. Even though it would be nicer not to resort to name calling and such, it's clear that both parties know what they're talking about.
    Thanks to paawl, who has shown us a nice formulation of Heisenberg's uncertainty principle.. but can we go on with the discussion, or should we start a new thread called "Fourier wars"? ;)
     
  17. paawl

    Newcomer

    Joined:
    Feb 3, 2007
    Messages:
    113
    Likes Received:
    0
    Yes, let's. I almost forgot that my original interest in this thread was the assertion that the human visual system has less resolution in the horizontal direction than in the vertical direction. I've often heard this repeated, rarely heard it challenged, and I've always believed it myself, but when I tried Googling, I couldn't find any authoritative references. Anyone have more info on this?

    I like betan's approach, showing the pixel separation away from the focal point, but I'm suspicious that this might be too simple an analysis. It's possible that the eye-brain system is sophisticated enough to transform the images from each eye to a common coordinate system. My own experience with the ear-brain system teaches me that things are rarely as simple as they seem. Ultimately, careful experimental studies were required to determine the limits of human aural perception. Anyone have a nice reference on human visual perception, specifically, perception of resolution in two dimensions?
     
  18. Mintmaster

    Veteran

    Joined:
    Mar 31, 2002
    Messages:
    3,897
    Likes Received:
    87
    And an image rasterized with aliased edges is "mostly" identical to the ideal image. "Most", unfortunately, isn't good enough.
    Why assume? I actually said "geometry is not BW-limited". I don't see why you have to nitpick at semantics. We're talking about images, so I assumed "an edge" was understood to be "an image with a polygon edge". We're talking about SP and undersampling, so I assumed "the frequency" was enough to convey "the maximum non-zero frequency component", and that "infinite" was enough to say that this frequency is unbounded.

    nAo knew what I meant, but if I needed to spell it out for you, I apologize.
    Well, while you were lost in the world of SP, everyone else was talking about 3D graphics. Look at my original post that you objected to. I said "If you draw a circle without AA...". There is no ambiguity there.
    Read my post. The eye does not have much resolution in peripheral vision. Peak resolution of around 0.5 arc minutes is restricted to about 2-4 degrees off the focal point. Here's a test: Focus (with one eye if you want to) on the first letter in this string: ASHTWOIHWKLSFDJ. Now, how far could you read without moving your eye(s) off the "A"? How many degrees is that? For me it's like 2 degrees, and after 1 degree it's already hard enough to read that I'd rather move my eye.

    Retarded? The edges of the screen are meaningless if you're looking at the centre. The centre is meaningless if you're looking at the edges. Whatever you are looking at has no misalignment, and has no horizontal LPF. I have never found something blurrier with two eyes instead of one, except when lining up a pool shot, but that's because of the huge distance difference between the cue ball and object ball. A TV screen is flat.

    Focus on a point 3m away with eyes 6cm apart, and 3 degrees off axis your misalignment is 160 microns, which is well below the eye's resolution limit and 1/7th the width of a 960x1080 pixel on a 50" 16:9 screen.

    Misalignment = D*tan{theta + atan(S/D)} - 2S - D*tan{theta - atan(S/D)}

    Draw a diagram and do some simple trig. It's much simpler than your approach. ("theta" is the angle off the focal point.)

    Using your setup, my formula shows 2 inches (3.2 degrees) off centre yields 1 pixel separation (0.0094"). 0.5 inches from the centre has only 1/15th pixel separation.
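    This claim can be checked numerically (my own sketch; the helper name is mine) by plugging betan's monitor setup into the formula above:

```python
from math import tan, atan

def misalignment(D, S, theta):
    """Mintmaster's formula: apparent horizontal offset between the two eyes'
    views of a point theta radians off the focal axis, with eyes 2*S apart,
    looking at a flat screen at distance D."""
    return D * tan(theta + atan(S / D)) - 2 * S - D * tan(theta - atan(S / D))

D, S = 35.0, 1.5     # betan's setup: 35" viewing distance, 3" eye separation
pixel = 7.5 / 800    # pixel width in inches (7.5" half-width, 800 pixels)

theta = atan(2.0 / D)  # a point 2 inches off centre (~3.3 degrees)
print(misalignment(D, S, theta) / pixel)  # roughly 1 pixel, as claimed
```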

    Huh? Sane people focus on what they're looking at. You don't need sharp focus outside of that because human eyes have poor resolution there anyway.

    BTW, you still didn't answer my question. When you look at a dot, is it horizontally blurred? Do you see a difference in your eye's ability to resolve vertical and horizontal lines here?

    O rly?

    Then why did you object to my original post? You said "any decent scaler will make use of interpolation (or anti-aliasing) which will hide aliasing." The aliasing is due to lost information, yet you think it can be hidden. Then you chastise me because I "avoid any discussion related to signal processing", even though no amount of SP will hide aliasing. I am not reading your thoughts, I am replying to what you wrote.

    And I'm telling you -for the nth time- that there is no information lost due to having two eyes. If a 1080p image is too sharp for you to see all the detail, then you either don't have very good eyes or your screen is too far away for its size.

    Only because I assumed you knew that "without AA" and "aliasing" referred to rasterization artifacts, which is completely clear in my original post.
     
  19. Mintmaster

    Veteran

    Joined:
    Mar 31, 2002
    Messages:
    3,897
    Likes Received:
    87
    No, but more visible aliasing and higher fillrate cost is the kind of deal you'd expect from a used car salesman :wink:
     
  20. jupitersj

    Newcomer

    Joined:
    Feb 9, 2007
    Messages:
    1
    Likes Received:
    0
    I have quite a simple question (yes... all of the technical knowledge is flying over my head in this thread, as I have a PhD in nothing XD)

    My TV (Sony KD-36XS955)

    is a 480i/480p/1080i Super Fine Pitch tube. It has its own internal scaler that accepts 720p and will scale it to 1080i... in fact I tend to leave my Xbox 360 on 720p in the menu, since I know of no native 1080i games and I prefer my TV's scaler over the 360's.

    If I had a PS3, could I have it output 720p and my TV would internally scale it to 1080i? Or would the PS3 not detect my TV's ability to do so, since it is not a native 720 set (99% of CRT TVs, heh), and just give me 480 anyway... even though my TV has a scaler?

    Thank you for your time ~_~
     