What rendering tricks might RSX employ?

Bobbler said:
In the case of the consoles, the picture information will/can be sent at a high resolution and the TV will scale it down (shrink it to the point that it fits on the screen). It essentially does what supersampling AA does. Find a normal-sized JPG/BMP/whatever and zoom out so it gets smaller -- the picture gets the advantage of having, essentially, all the pixels of its original size without taking up the space.

I'm sure this isn't true, as scaling on the TV screen isn't going to give a better dot pitch. There's a physical resolution on the TV. The problem with scaling the image after rendering everything is that you'd lose detail, especially with high-contrast lines (text, HUD controls, etc).

Actually, I'm sure it's something that all next-gen consoles do... they should render the image at the resolution specified by the user (SDTV vs HDTV). On my current Xbox there's a setting for that... but since I don't have an HDTV, I haven't tested it out.
 
TrungGap said:
I'm sure this isn't true, as scaling on the TV screen isn't going to give a better dot pitch. There's a physical resolution on the TV. The problem with scaling the image after rendering everything is that you'd lose detail, especially with high-contrast lines (text, HUD controls, etc).

Actually, I'm sure it's something that all next-gen consoles do... they should render the image at the resolution specified by the user (SDTV vs HDTV). On my current Xbox there's a setting for that... but since I don't have an HDTV, I haven't tested it out.

Unless I'm mistaken, dot pitch is the minimum distance between pixels, not so much the size of the pixels.

LCD monitors have a set number of pixels; CRTs and CRT-esque screens don't really -- at least not in the same way (the pixels are far smaller on CRTs on average, as far as I know -- maybe I'm mistaken? -- which is how multiple resolutions with huge ranges between the min and max are possible).

Scaling the image down is essentially what SSAA does -- you are sort of arguing against the validity of SSAA at this point. You are correct in that there is a limit to how much benefit there is, but there is a benefit.

Would I rather play a game at 480p rendered at 720p and scaled down, or one output natively at 720p? I'd take the 720p output over the 480p with a 720p 'supersample' any day. However, would I rather have a 480p image flat out, or a 480p output with an internal render of 720p? I'd take the internal render at 720p any day. I think... from my understanding of it, at least.

It doesn't lose any more quality than actually rendering at 480p or whatever the standard resolution is to begin with. You gain sharpness, but since the resolution is so low you lose some of the fine detail (regardless of whether you render internally at 480p or 1080p -- this is a problem with the resolution itself, not with scaling/supersampling).

My knowledge of TVs (LCD or CRT) inner workings sort of stops here -- Maybe someone else can throw some stuff in and give us a little clarification?

A digital camera is probably the easiest way to show the difference between scaling down to a lower res and just having a lower res. I don't have one so I can't show you :cry: Of course the pictures taken will be a bit more complex than a video game (the concept is the same though), but video games don't have the luxury of using all the atoms in the area to render each frame ;)
 
Bobbler said:
Scaling the image down is essentially what SSAA does -- you are sort of arguing against the validity of SSAA at this point. You are correct in that there is a limit to how much benefit there is, but there is a benefit.

A digital camera is probably the easiest way to show the difference between scaling down to a lower res and just having a lower res.
You can simulate that by taking a large photo and downsizing it with both bilinear filtering and pixel resizing. That'll show the image produced at lower res (pixel resize) versus the image produced at higher res and downscaled (bilinear filtered).
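If anyone wants to try it, here's a rough Pillow sketch of that comparison (the filename is just a placeholder):

```python
# Rough sketch with Pillow: compare a "pixel resize" (nearest-neighbour)
# against a "rendered high and downscaled" image (bilinear filter).
# The source filename is a placeholder for any large photo.
from PIL import Image

src = Image.open("photo.png")          # stand-in for a high-res render
target = (640, 480)                    # SDTV-ish output size

pixel_resize = src.resize(target, Image.NEAREST)   # like just having the low res
supersampled = src.resize(target, Image.BILINEAR)  # like rendering high, scaling down

pixel_resize.save("pixel_resize.png")
supersampled.save("supersampled.png")
```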

Another example is here
http://www.beyond3d.com/forum/viewtopic.php?t=23111&postdays=0&postorder=asc&start=69
 
Bobbler said:
Scaling the image down is essentially what SSAA does -- you are sort of arguing against the validity of SSAA at this point. You are correct in that there is a limit to how much benefit there is, but there is a benefit.

Actually, I'm not arguing against SSAA. The problem is not SSAA, but when to SSAA. Normally, you would render the 3D aspect of the game, then overlay it with the HUD, text and whatnot. The AA algorithm for drawing legible text is different from the AA algorithm for a general 3D scene.

Also, most TVs will not up/down-convert the resolution... A high-end HDTV might offer to upconvert for you, but it's not recommended, since it will introduce artifacts. An SDTV will NOT take 720 down to 480, so that has to be done on the console. And if it's done on the console, it should render at 480 (with AA) as opposed to 720 (with AA) and then convert to 480.
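To make the ordering concrete, here's a toy Pillow sketch of what I mean -- supersample the 3D scene only, then composite the HUD at the output resolution (sizes and content made up):

```python
# Toy sketch of "when to SSAA": downscale only the 3D scene, then
# draw HUD/text at the final output resolution so it stays crisp.
# All sizes and content here are illustrative, not real console code.
from PIL import Image, ImageDraw

scene_hi = Image.new("RGB", (1280, 720))        # pretend this is the 3D render
draw = ImageDraw.Draw(scene_hi)
draw.ellipse((200, 100, 1000, 600), fill=(90, 140, 200))  # stand-in geometry

frame = scene_hi.resize((640, 480), Image.BILINEAR)  # SSAA-style downscale

hud = ImageDraw.Draw(frame)                     # HUD drawn AFTER the downscale
hud.text((10, 10), "HEALTH 100", fill=(255, 255, 255))
frame.save("frame_480.png")
```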
 
TrungGap said:
Actually, I'm not arguing against SSAA. The problem is not SSAA, but when to SSAA. Normally, you would render the 3D aspect of the game, then overlay it with the HUD, text and whatnot. The AA algorithm for drawing legible text is different from the AA algorithm for a general 3D scene.

I see what you mean... you're worried that text/HUD might end up getting messed up since it will get shrunk as well?

This seems to be a valid concern, but I have a feeling they've thought about it (considering most of the population will be using standard definition) -- at least I sure hope they have, if they plan on rendering internally at a different resolution than the display. I guess we just wait and see how developers handle it... I'm sure it will vary -- some developers may render internally at the same resolution no matter what and then paste on correctly sized fonts/HUDs afterwards?
 
I disagree. I've used downsampled text rendered at 2x or 4x resolution to get AA'd edges. As long as you render the text at twice the size, there shouldn't be issues with it looking manky. On a PC card using MSAA, text can look fuzzed out, but not on downsampled images (not in my experience, anyway).
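Something like this Pillow sketch shows the idea (a stand-in for doing the same in a real renderer):

```python
# Render text at 2x and downsample with a bilinear filter to get
# AA'd edges -- a stand-in for doing the same in a real renderer.
from PIL import Image, ImageDraw

big = Image.new("RGB", (400, 100), (0, 0, 0))
ImageDraw.Draw(big).text((10, 30), "SCORE 12345", fill=(255, 255, 255))

aa_text = big.resize((200, 50), Image.BILINEAR)  # 2:1 downsample smooths the edges
aa_text.save("aa_text.png")
```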
 
Shifty Geezer said:
I disagree. I've used downsampled text rendered at 2x or 4x resolution to get AA'd edges. As long as you render the text at twice the size, there shouldn't be issues with it looking manky. On a PC card using MSAA, text can look fuzzed out, but not on downsampled images (not in my experience, anyway).

But... have you taken a 720p or 1080i/p image with text sized for that resolution, downscaled it to 480p, and tried to read it?

I'm not sure how well it would turn out -- usually text is a pretty delicate area on consoles.
 
Shifty Geezer said:
I disagree. I've used downsampled text rendered at 2x or 4x resolution to get AA'd edges. As long as you render the text at twice the size, there shouldn't be issues with it looking manky. On a PC card using MSAA, text can look fuzzed out, but not on downsampled images (not in my experience, anyway).

The problem isn't 2x or 4x resolution... but going from 1080 -> 480 (2.25:1) or 720 -> 480 (1.5:1). The reason you don't notice it when you downsample an image (3D or photos) is the nature of the picture. If what you're downsampling is a high-contrast line, you'll notice it, but if the image is a gradation, your eyes won't notice it as much.
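You can see the difference with a quick test image -- a Pillow sketch, purely illustrative:

```python
# Quick test: a single-pixel high-contrast line vs a smooth gradient,
# both downscaled 720 -> 480 (1.5:1). The line visibly softens; the
# gradient barely changes. Purely illustrative.
from PIL import Image, ImageDraw

img = Image.new("L", (720, 720), 0)
d = ImageDraw.Draw(img)
d.line((0, 100, 719, 100), fill=255)            # 1px high-contrast line
for x in range(720):                            # smooth horizontal gradient
    d.line((x, 300, x, 500), fill=int(x * 255 / 719))

img.resize((480, 480), Image.BILINEAR).save("downscaled_480.png")
```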
 
@ bobbler : Yes, in those cases the text would need to compensate for the downsampling. Presumably a console setting will specify output and the hardware downscales the front buffer to the display resolution at 480p, in which case the software might well need to adjust the text. I don't think there's an easy solution, and it'll reside with the devs.

@ TrungGap : I don't see downsampling being a problem at any ratio. Unless you have single-pixel lines, even a lowly bilinear filter produces suitable results from odd ratios. Especially as next gen is going to be more photo-realistic in its images, producing fewer high-contrast areas to noticeably get blurred out.
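For anyone curious what bilinear at an odd ratio actually does per pixel, here's the maths in a few lines of Python (illustrative only, not how any console does it):

```python
# What a bilinear filter does at a non-integer ratio like 720 -> 480:
# each output pixel maps to a fractional source position (x * 1.5) and
# is a weighted blend of the four nearest source pixels. Illustrative only.
def bilinear_sample(src, w, h, fx, fy):
    x0, y0 = int(fx), int(fy)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    ax, ay = fx - x0, fy - y0
    top = src[y0][x0] * (1 - ax) + src[y0][x1] * ax
    bot = src[y1][x0] * (1 - ax) + src[y1][x1] * ax
    return top * (1 - ay) + bot * ay

# 720 -> 480 means stepping through the source at 1.5 pixels per output pixel
src = [[(x + y) % 256 for x in range(720)] for y in range(720)]
out = [[bilinear_sample(src, 720, 720, x * 1.5, y * 1.5)
        for x in range(480)] for y in range(480)]
```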

It'll certainly be interesting to see how images look next-gen on the different TV types, though it'll be a while before we in the EU get to see the higher-quality HD.
 
Unless I'm mistaken, dot pitch is the minimum distance between pixels, not so much the size of the pixels.

IIRC dot pitch is the distance between the center of one pixel to the center of another pixel of the same color.
 
I'm not sure how well it would turn out -- usually text is a pretty delicate area on consoles.
Well, you can see it on console->PC ports: text is usually still sized for 480 and consequently looks really large at higher resolutions.
There are also filters that deal with stuff like text better than your standard bilinear/bicubic (unfortunately they aren't as cheap though), which could help the look of downsampled stuff some.

Personally I expect most games to use text larger than necessary for higher resolutions. Multiple text sizes may be used for different resolutions too, but don't expect that to happen often.
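Something as simple as a per-mode size table would do it (numbers made up):

```python
# Made-up example of picking a text size per output mode: one size for
# SD, smaller sizes for HD, rather than scaling a single layout.
TEXT_SIZE = {"480i": 24, "480p": 24, "720p": 16, "1080i": 14}

def font_size_for(mode):
    return TEXT_SIZE.get(mode, 24)   # fall back to the safe SD size

print(font_size_for("720p"))  # 16
```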
 
I'm sure Sony and MS will test that all the text is readable on a crappy 13-inch TV, so text will be sized appropriately.

IMO it's likely though that devs will support two text layouts, one for HD and one for SD. Assuming the worst HD set you can find is still pretty good, I'd expect smaller text in that case.
 
Some questions for you all...

I've been doing some thinking about MEMEXPORT on Xenos. I'm basically thinking of it as a band-aid for the lack of WGF 2-style efficient, on-GPU vector generation (although the need for that vector generation is somewhat negated by this functionality combined with UMA). Anyhow, I've been thinking that I wouldn't be surprised to see similar functionality in the RSX to push and pull from the XDR RAM (as we know it has the ability to). But this led me to start thinking about something potentially more interesting.

Has there been any rumor about direct export of GPU data to the SPEs' SRAM? This could be a nifty latency saver and lead to all sorts of fun post processing ideas. In fact, even prettier would be an ability of the RSX to stream directly to an SPE. Any thoughts? :?: :?: :?:
 
twotonfld said:
Has there been any rumor about direct export of GPU data to the SPEs' SRAM? This could be a nifty latency saver and lead to all sorts of fun post processing ideas. In fact, even prettier would be an ability of the RSX to stream directly to an SPE. Any thoughts? :?: :?: :?:

The Element Interconnect Bus in Cell, I think, has direct access to FlexIO. So yes, you should be able to pull things off FlexIO into the local memory on the SPEs directly, without having to go RSX->XDR->Cell. Can anyone confirm to be sure?

edit - looked it up. It seems data coming from FlexIO HAS to go through the EIB, and so the SPEs, the PPE etc. are able to "listen" for any data on the EIB, and thus any data from FlexIO. One question that remains in my mind is whether data RSX wants to send to an SPE or SPEs then has to be pushed out into XDR memory, or whether there is a way of stopping it at the EIB and preventing it from going any further. It's irrelevant as far as the SPEs are concerned - they can get the data off the EIB directly and not worry about where it goes beyond that - but it might save some write bandwidth to XDR if the data can be consumed in Cell without having to be pushed to main memory. If the SPEs take that data in, do they take it off the EIB completely, or copy it off and leave it to pass out onto XDR? In short, in order to share data with Cell, does RSX have to push the data to XDR and let Cell "snoop" for it along the way, or is there a way for Cell to take it without incurring XDR access?
 
Shifty Geezer said:
Unless you have single-pixel lines, even a lowly bilinear filter produces suitable results from odd ratios. Especially as next gen is going to be more photo-realistic in its images, producing fewer high-contrast areas to noticeably get blurred out.

I agree with you. My only hang-up is with stuff that isn't meant for AA, such as the HUD, text and whatnot (and only at odd ratios).

PC-Engine said:
IIRC dot pitch is the distance between the center of one pixel to the center of another pixel of the same color.

You're absolutely correct. It's the distance between the centers of two mask holes of the same color (color element). So on a CRT, a pixel can be formed from multiple of these color elements. The smaller the dot pitch, the higher the potential pixel density. Eh... where am I going with this? I have no idea... ignore me...

ERP said:
I'm sure Sony and MS will test that all the text is readable on a crappy 13-inch TV, so text will be sized appropriately.

IMO it's likely though that devs will support two text layouts, one for HD and one for SD. Assuming the worst HD set you can find is still pretty good, I'd expect smaller text in that case.

I don't see why console devs would develop games much differently than PC devs with regard to output resolution. The user will need to specify on the console what setup they have; then, based on that configuration, the game will render at the targeted resolution.

I hope you're correct that devs are likely to support smaller/finer text for HD. However, I don't think that will be the case, at least for the first few generations, as not enough people have HD to warrant spending time on it. OTOH, maybe you're right... since it's a text control, they really don't care as long as it flows automatically within the allotted space.

IIRC, on the DS's dual screens, only one screen can do hardware-accelerated 3D; on the other screen it's software. I'm sure some clever dev will use a similar technique with the PS3: RSX renders the main 3D screen, while Cell renders a less complex scene and sends that to RSX. So in the end, you'd have a solid high frame rate using RSX and a decent secondary screen for other things.
 
Titanio said:
twotonfld said:
Has there been any rumor about direct export of GPU data to the SPEs' SRAM? This could be a nifty latency saver and lead to all sorts of fun post processing ideas. In fact, even prettier would be an ability of the RSX to stream directly to an SPE. Any thoughts? :?: :?: :?:

The Element Interconnect Bus in Cell, I think, has direct access to FlexIO. So yes, you should be able to pull things off FlexIO into the local memory on the SPEs directly, without having to go RSX->XDR->Cell. Can anyone confirm to be sure?

edit - looked it up. It seems data coming from FlexIO HAS to go through the EIB, and so the SPEs, the PPE etc. are able to "listen" for any data on the EIB, and thus any data from FlexIO. One question that remains in my mind is whether data RSX wants to send to an SPE or SPEs then has to be pushed out into XDR memory, or whether there is a way of stopping it at the EIB and preventing it from going any further. It's irrelevant as far as the SPEs are concerned - they can get the data off the EIB directly and not worry about where it goes beyond that - but it might save some write bandwidth to XDR if the data can be consumed in Cell without having to be pushed to main memory. If the SPEs take that data in, do they take it off the EIB completely, or copy it off and leave it to pass out onto XDR? In short, in order to share data with Cell, does RSX have to push the data to XDR and let Cell "snoop" for it along the way, or is there a way for Cell to take it without incurring XDR access?

It would be great if the SPEs could consume the data and keep it from hitting the XDR. I wonder if the data can be passed on the EIB with some sort of TTL (much like an IP packet). Cell does seem to incorporate a network-centric approach; maybe the data can be packaged and addressed like a packet, and the XDR memory controller won't consume data not addressed to it - maybe something to the effect of a UDP approach to DMA...?
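Pure speculation, but here's the sort of thing I mean -- a toy header in front of each transfer saying who it's for and whether it should be written through to memory (field layout entirely made up):

```python
# Pure speculation: a toy "packet" header for bus transfers, with a
# destination ID and a write-through flag, packed the way you might
# tag DMA traffic. The field layout is entirely made up.
import struct

HEADER = struct.Struct(">BBH")   # dest_id, flags, payload_len

FLAG_WRITE_THROUGH = 0x01        # also push the data out to XDR
SPE3 = 3                         # made-up ID for one SPE

def make_packet(dest, flags, payload):
    return HEADER.pack(dest, flags, len(payload)) + payload

pkt = make_packet(SPE3, 0, b"post-process me")   # consumed on the EIB, never hits XDR
dest, flags, length = HEADER.unpack_from(pkt)
```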
 
I don't see why console devs would develop games much differently than PC devs with regard to output resolution. The user will need to specify on the console what setup they have; then, based on that configuration, the game will render at the targeted resolution.

Except users hate having to set display resolutions each time they boot a game up, and the game will still likely have to boot in a base compatibility mode to prevent users from getting locked out of their games when they take a save game set up for HD to grandma's and run it on her 13" TV. Of course, I guess you could go the GT4 route and run the menus and such @ 480i, then run the in-game at whatever mode you specify (although that's still messy if you ask me)...

Plus PC games don't have to deal with TVs either... BIG difference going from NTSC to an ATSC format (even if you're staying at the same resolution)...
 
archie4oz said:
Except users hate having to set display resolutions each time they boot a game up, and the game will still likely have to boot in a base compatibility mode to prevent users from getting locked out of their games when they take a save game set up for HD to grandma's and run it on her 13" TV. Of course, I guess you could go the GT4 route and run the menus and such @ 480i, then run the in-game at whatever mode you specify (although that's still messy if you ask me)...

Plus PC games don't have to deal with TVs either... BIG difference going from NTSC to an ATSC format (even if you're staying at the same resolution)...

Ideally, you wouldn't configure it as a per-game setting but as a global setting... as on the current Xbox. But you're right that it would need to downsample to 480 in order to prevent lockout. It's not going to look pretty, though.

In addition to having to do that... since 99.99% of SDTVs are 4:3 and the aspect ratio of HDTVs is going to be different, they will have to know the correct aspect ratio or else everything is going to look funny. Unless the game is going to be in letterbox format... damn, that would be a waste of real estate.
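The letterbox arithmetic isn't pretty either -- a quick back-of-the-envelope check (640x480 frame assumed):

```python
# Letterboxing 16:9 content on a 4:3 SDTV: how many of the 480 lines
# actually carry picture. Straightforward arithmetic on an assumed
# 640x480 frame.
width = 640                      # 4:3 frame at 480 lines
picture_h = width * 9 / 16       # 16:9 content fitted to that width
bar = (480 - picture_h) / 2

print(picture_h, bar)            # 360.0 lines of picture, 60.0-line bars
```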
 
If the game uses the hardware setting for output, it shouldn't lock out different hardware when the save game's used. e.g. I can run NWN on the PC at, say, 800x600, copy my save games to CD, stick them on a mate's PC at 1024x768, and it'll run fine.

No need to save output config in the software, I'd have thought.
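Something like this is all it should take -- the game just asks the system at boot (function and table names made up):

```python
# Made-up sketch: the game queries the console's global display setting
# at boot and sizes its render target from that -- nothing stored in saves.
OUTPUT_MODES = {"SD": (640, 480), "HD720": (1280, 720), "HD1080": (1920, 1080)}

def get_system_display_mode():
    return "HD720"               # stand-in for a real system call

width, height = OUTPUT_MODES[get_system_display_mode()]
```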

What would be annoying is if you take your console between rooms and different sets and have to go through a few minutes of fancily rendered but slow-to-access menus to change the screen res. And what if you move your hardware from an HDTV to an SDTV? You won't be able to see anything in order to change the resolution... Maybe they always boot up in SD analogue and switch to HD only in a game (or application, even :D )?
 