Is AMD going to take the AVIVO Video Converter seriously?

Discussion in 'PC Hardware, Software and Displays' started by OICAspork, Jun 15, 2009.

  1. OICAspork


    May 9, 2003
    Nara, The Land of the Rising Sun
    I was really excited about AVIVO when it was first announced... and horribly disappointed with the garbage that was initially released... then I crossed my fingers when AMD made a press release about its recent update... and then rolled my eyes at the results:

    Looking at this quote from the above article:

    "We do understand the rush to get software out there that takes advantage of GPU compute capability and that video transcode is the low-hanging fruit."

    makes me raise an eyebrow. If video transcode is one of the easiest uses for GPGPU and ATI can't even manage that... it doesn't really encourage others to try for more difficult implementations of GPGPU. I hope they manage to make it a worthwhile product. NVidia's similarly sponsored Badaboom was garbage when it launched, but seems to be a genuinely good product now.

    Back to the thread subject: does anyone know whether anyone at AMD is actually focused on AVIVO, or is it just on the back burner for everyone involved, something to play with from time to time when they don't have more pressing work to do?
  2. MfA


    Feb 6, 2002
    It's not low-hanging fruit ... it's bloody difficult. First you need a high-class H.264 codec to adapt (you can't improve x264, because that would piss off some of your closed-source partners). Then you need to juggle an awful lot of data traffic while introducing far more latency into every part of that codec, which makes for huge headaches. Anand can tell me it's low-hanging fruit when he does an honest comparison with some of the x264 GUI front ends on fast presets (I'm not asking for optimal command-line settings, but not x264 with default settings either). Till that time I'm calling shenanigans.
  3. Scali


    Nov 19, 2003
    What I found strange is that Cyberlink Espresso also showed quality differences between AMD and nVidia.

    I was under the impression that Espresso used Cyberlink's own implementation, in which case I'd assume Cyberlink would want to maintain the same quality level across all three implementations (CPU, CUDA and Stream), with all of them based on the same algorithms and optimizations, as far as the differences in architecture allow.

    Did I miss something here, and does Espresso just call some standard AMD and/or nVidia libraries underneath, rather than using Cyberlink's own code? If not, then what reason would Cyberlink have to not maintain the same quality levels across different hardware?
  4. AlexV


    Mar 15, 2005
    Cyberlink merely uses what AMD and nV provide; AFAIR only the CPU encoder is theirs.


  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.