Running Photoshop filters on the GPU

It's certainly possible. The NV30 demos pretty much showed realtime application of blur, noise, contrast, saturation etc.

So why not? Instead of taking whole seconds for an image resize, or minutes for a radial blur, crunching these calculations in the GPU's superior floating-point units should be an order of magnitude faster.

Same thing with any media application. Encoding DivX should fly on the GPU! We've already seen some limited use with video shaders etc., but the potential in this area, using the GPU to take over all SIMD / streaming / instruction-cacheable tasks, is enormous.
 
Photoshop filters should be easy, but I don't think video encoding will work well without conditional branching in the pixel pipeline.
 
Umm... have you guys never worked with A4-sized quality color laser resolution prints? The image sizes get huge (granted, I have only done a few 1200 dpi A4 photo prints in Photoshop, taken with a 1600 dpi negative/slide scanner...), so the problem will be fitting the image that the filters will be applied to into video memory. Even 128 or 256 MB gets small with these pictures, though they'd consist of only a single layer.
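To put a rough number on that, here is a back-of-the-envelope sketch; the 4 bytes per pixel (8-bit RGBA, single layer) is an assumption on my part:

```python
# Rough memory estimate for an uncompressed A4 scan at 1200 dpi,
# assuming 4 bytes per pixel (8-bit RGBA) and a single layer.
A4_INCHES = (8.27, 11.69)
DPI = 1200
BYTES_PER_PIXEL = 4

width_px = round(A4_INCHES[0] * DPI)
height_px = round(A4_INCHES[1] * DPI)
megabytes = width_px * height_px * BYTES_PER_PIXEL / 2**20
print(f"{width_px} x {height_px} px, ~{megabytes:.0f} MB")
```

That comes out to roughly 530 MB for a single layer, well beyond a 128 or 256 MB card.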

So I'd say that video filters applied on the fly, via the PS lineup with video input on board, should be easier to implement than single-picture filters. Then there's the question of how capable the shader / programmable pixel pipeline is at doing this stuff, but time will fix that.
 
AFAIK, the main operations of (MPEG-style) video encoding/decoding are:
  • RGB <-> YUV conversion: Trivial to do in pixel shaders. You may want MRT support to store the Y, U and V data to separate buffers without needing to run multiple passes.
  • Motion compensation: Easy to do in pixel shaders with enough precision, once you have motion vectors in place.
  • Motion estimation: Requires summing over blocks of pixels to compute error functions, making it a bit inconvenient to implement in a pixel shader - requires very long shader programs to be really efficient. This task is generally by far the most computationally intensive phase of video encoding.
  • DCT/iDCT transforms: Take 8x8 blocks of pixels as both input and output, where each output datum depends on all 64 input pixels. Possible but inefficient to do in a pixel shader - MRTs may help efficiency substantially, though.
  • Quantization and Huffman/arithmetic coding: Infeasible to do in pixel shaders - require too much flow control and bit twiddling.
For very large still images, you should be able to partition the image into smaller sub-images and run the filters you want on one sub-image at a time. Deformations may be trickier, though not impossible.
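To make the "trivial vs. awkward" distinction above concrete, here is a CPU-side Python sketch of two of those steps (the BT.601 coefficients are standard; the function names and the 8x8 block size are just illustrative):

```python
# RGB -> YUV (BT.601 coefficients): each output pixel depends only on the
# corresponding input pixel, so this maps trivially onto a pixel shader.
def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v

# Motion estimation error metric: the sum of absolute differences between
# an n x n block and the block displaced by a candidate motion vector
# (dx, dy) in the reference frame. The summation over n*n pixels is what
# makes this awkward in a pixel shader.
def block_sad(cur, ref, bx, by, dx, dy, n=8):
    return sum(
        abs(cur[by + j][bx + i] - ref[by + j + dy][bx + i + dx])
        for j in range(n) for i in range(n)
    )
```

The first function is pure per-pixel arithmetic; the second has to gather and reduce 64 values per output, which is exactly the kind of loop current pixel shaders handle poorly.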
 
Let's see: AGP 8x provides 2 GB/s max, while the FSB is about 3.2 GB/s.
Photoshop filtering is really a SIMD calculation, so it should be compute-limited rather than bandwidth-limited.
 
I thought the (a) problem with doing work on the GPU that you actually want to save was getting the result back to main memory. 2 GB/s for AGP 8x downstream, so to speak, but not in the other direction.

On the other hand, if you don't need things in realtime, it might be worth the readback hit? I don't have any numbers whatsoever, just speculating. And ignoring huge pictures.
 
IIRC, ATI advertised this as a potential capability of the R300 -- even media such as .mpegs and .avis via VideoShader.
 
The AGP spec does specify a mechanism for the GPU to write data back to system memory at full AGP bandwidth, but this functionality seems to be unimplemented in most current GPUs, as it hasn't been seen as a very useful thing to have (at least up to GF4; dunno about R300/NV30).
 
ATI showcased applying a filter to a video stream at the launch. Not sure what's happening with it though.

They had a camera hooked up at the show, and the presenters on screen had various filters applied to them. Wonder if anyone took video of it.
 
MrB said:
ATI showcased applying a filter to a video stream at the launch. Not sure what's happening with it though.

They had a camera hooked up at the show, and the presenters on screen had various filters applied to them. Wonder if anyone took video of it.

You mean like the player in their SDK, right?

Had some fun with my DivX files using it :)
 
Yes, the 9700 can process video in the pixel shader. However, you don't just want to view it on the screen when you're doing photoshopping, you want to read it back into system memory again, which is slow. Could still be faster than letting the CPU do the work.
 
Xmas said:
Photoshop filters should be easy, but I don't think video encoding will work well without conditional branching in the pixel pipeline.

NV40 and R400, maybe?

MuFu.
 
so, the problem will be fitting the image that the filters will be applied to into video memory. Even 128 or 256 MB gets small with these pictures, though they'd consist of only a single layer.
3DLabs and PCI-Express... that's all I'm saying ;)
 
I thought ATI already had specific MPEG decoding features on their chips (the DVD-playback stuff). Wouldn't it be kind of pointless to step back and use shaders? Maybe in a few generations they can support that stuff only in shaders, and I agree it makes sense for DivX and other stuff that's not supported, but at the moment using shaders for MPEG decoding would be dumb.
 
Nagorak said:
I thought ATI already had specific MPEG decoding features on their chips (the DVD-playback stuff). Wouldn't it be kind of pointless to step back and use shaders? Maybe in a few generations they can support that stuff only in shaders, and I agree it makes sense for DivX and other stuff that's not supported, but at the moment using shaders for MPEG decoding would be dumb.

A few years from now it'll seem dumb to need dedicated silicon JUST to decode MPEG. ;)
 
Nagorak said:
I thought ATI already had specific MPEG decoding features on their chips (the DVD-playback stuff). Wouldn't it be kind of pointless to step back and use shaders? Maybe in a few generations they can support that stuff only in shaders, and I agree it makes sense for DivX and other stuff that's not supported, but at the moment using shaders for MPEG decoding would be dumb.
Why? As long as the shaders can do all of the calculations, then using them for MPEG decoding is the absolute best way to go. It can save quite a few transistors.
 
Because shaders would probably do it slower, and I doubt that many transistors are required for DVD playback acceleration. Actually, for that matter you can do it all on the CPU, so I guess maybe it's kind of a useless feature anyway.
 
Shaders would definitely accelerate Photoshop filters. It is up to Adobe to enable this. They should support 64-bit color at the same time to really take advantage of current graphics hardware. Once Adobe enables it, it would be nice if there were just a clean interface, like RenderMonkey or some visual editor, that allowed plugins to be written quickly. I recommend everyone interested start contacting Adobe, as they move SLOWLY! :rolleyes:
 