Is AMD going to take the AVIVO Video Converter seriously?

OICAspork

Newcomer
I was really excited about AVIVO when it was first announced... and horribly disappointed with the garbage that was initially released... then I crossed my fingers when AMD put out a press release about its recent update... and then rolled my eyes at the results:

http://www.anandtech.com/video/showdoc.aspx?i=3578

Looking at this quote from the above article:

"We do understand the rush to get software out there that takes advantage of GPU compute capability and that video transcode is the low-hanging fruit."

makes me raise an eyebrow. If video transcode is one of the easiest uses of GPGPU and ATI can't even manage that... it doesn't exactly encourage others to attempt more difficult GPGPU applications. I hope they manage to make it a worthwhile product. NVIDIA's similarly sponsored Badaboom was garbage when it launched, but seems to be a genuinely good product now.

Back to the thread subject. Does anyone know whether anyone at AMD is actually focused on AVIVO, or is it just sitting on everyone's back burner, something to poke at when there's no more pressing work to do?
 
It's not low-hanging fruit... it's bloody difficult. First you need a high-class H.264 codec to adapt (you can't just improve x264, because that would piss off some of your closed-source partners). Then you need to juggle an awful lot of data traffic while introducing far more latency into every part of that codec, which makes for huge headaches. Anand can tell me it's low-hanging fruit when he does an honest comparison with some of the x264 GUI front ends on fast presets (I'm not asking for optimal command-line settings, but not x264 at default settings either). Till then I'm calling shenanigans.
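For what it's worth, the baseline I have in mind is just x264 driven with a speed-oriented preset instead of its defaults, something like this (a plausible invocation for recent x264 builds; the bitrate and file names are made up for illustration):

    x264 --preset fast --bitrate 1500 -o out.264 input.y4m

Still nowhere near a tuned command line, but a far fairer reference point than stock settings.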
 
What I found strange is that Cyberlink Espresso also showed quality differences between AMD and NVIDIA.

I was under the impression that Espresso used Cyberlink's own implementation, in which case I'd assume they would want to maintain the same quality level across all three implementations (CPU, CUDA and Stream), and that all three would be based on the same algorithms and optimizations, as far as the differences in architecture allow.

Did I miss something here? Does Espresso just call standard AMD and/or NVIDIA libraries underneath, rather than using Cyberlink's own code? If not, what reason would Cyberlink have not to maintain the same quality level across different hardware?
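If Espresso really does just dispatch to vendor libraries, the quality split would follow directly from that: each GPU path is only as good as the library behind it. A minimal sketch of that kind of backend selection (all names invented here, not Cyberlink's actual code) might look like:

    #include <cstdio>

    enum class Backend { Cpu, Cuda, Stream };

    // Prefer a GPU backend when one is present, otherwise fall back
    // to the application's own CPU encoder.
    Backend pick_backend(bool has_cuda, bool has_stream) {
        if (has_cuda)   return Backend::Cuda;   // NVIDIA-supplied path
        if (has_stream) return Backend::Stream; // AMD-supplied path
        return Backend::Cpu;                    // in-house encoder
    }

    int main() {
        // Pretend we detected an AMD card and no NVIDIA card.
        Backend b = pick_backend(false, true);
        std::printf("selected backend: %d\n", static_cast<int>(b));
        return 0;
    }

Under that design the application only controls the CPU branch; the CUDA and Stream branches inherit whatever quality the vendor libraries deliver.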
 
Cyberlink merely uses what AMD and NVIDIA provide; AFAIR only the CPU encoder is theirs.
 