Avivo Transcoding....

Mintmaster said:
Quite impressive.

How much quality do you lose with transcoding? Is it quite noticeable?

Depends on the codecs used and how you have them set. This is just doing the same calculations as you would when transcoding on the CPU, with the same levels of quality - just a lot faster because it's on the GPU. There should be no difference in quality when running the calculations on the GPU as opposed to the CPU.
 
archie4oz said:
Also can't scale, frame-rate convert, crop, or much of anything else yet...
It's only an alpha tech demo, but it could be finished up into quite a nice product. Companies like DivX or Nero could integrate this technique and see massive speedups in their transcoding performance.
 
Yeah, pretty damn sweet. We were talking about it in the "Cheng on Avivo" thread in Industry.

X1800XT seems like a more "balanced" part than G70. The "overall performance" tiara isn't the only one on the table. IQ and video processing (though it's a bit early on the latter, I'm liking where it's going so far) seem to go to X1800XT. It depends on what you really want and just how important the performance delta is for your given situation.

I'm not in the market at all right now, but if I was I'd probably go X1800XT.

Which is not to minimize (if the rumors are correct) the 512MB version of the GTX. It has been three years since NV had the "overall performance" tiara in a flat-out unquestionable way, and they deserve acknowledgement that it appears they are about to do it again. In my mind (again, I'm accepting the rumors), the 512MB GTX finally buries the last vestiges of the NV30 debacle from a "product" perspective (yes, I understand that some will never "forgive" other elements) after three years.

But I still like X1800 a bit better. YMMV.
 
Yeah, but what about encoding? Transcoding from MPEG-2 is no help if you're trying to produce video and your assets aren't MPEG-2 already.
 
DemoCoder said:
Yeah, but what about encoding? Transcoding from MPEG-2 is no help if you're trying to produce video and your assets aren't MPEG-2 already.

Well, you're Mr. Video Slut --you tell us. Is there a substantive difference in the hardware assets necessary to do both? I've been assuming that if they could transcode like mad that the encode wouldn't be much different. Not necessarily true?
 
It depends. There are two ways to do transcoding.

1) decompress MPEG-2 and then recompress with H.264
2) utilize part of the work already done by MPEG-2 and reformat or re-run some of the stages of the H.264 codec, but not all.


The problem is, ExtremeTech's test is not fair. They are comparing #1 to #2 (and also comparing DivX to H.264). There are software MPEG-2 to MPEG-4 transcoders on the market. The proper test to determine whether or not any significant hardware acceleration exists would be to compare a pure software transcoder to ATI's (as well as comparing quality). The other problem is, MPEG-2 to MPEG-4 transcoding algorithms differ in technique, so even then it would be an apples-to-oranges comparison, but at least more accurate than comparing MPEG-2 decompression followed by DivX to H.264 transcoding.

To answer your question: the hardware necessary to accelerate transcoding is not exactly the same as the hardware necessary to accelerate encoding. Transcoding is a simplified problem, and some implementations of it are little more than reformatting the syntax of the bitstream and dropping frames or resolution.
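As a rough illustration of why approach #2 wins, here's a toy sketch in Python. The stage names and relative costs are invented for illustration (not measured, and not ATI's actual pipeline): the full decode-and-re-encode path pays for a motion search from scratch, while a transcoder that reuses the MPEG-2 motion vectors skips the dominant cost.

```python
# Toy cost model contrasting the two transcoding approaches.
# Stage names and relative costs are invented for illustration only.
STAGE_COST = {
    "mpeg2_decode": 1,
    "motion_search": 8,   # typically the dominant cost of encoding
    "mode_decision": 2,
    "dct_quantize": 1,
    "entropy_code": 1,
}

def cost(stages):
    """Total relative cost of running the listed pipeline stages."""
    return sum(STAGE_COST[s] for s in stages)

# Approach 1: decompress MPEG-2, then run every H.264 stage from scratch.
full = ["mpeg2_decode", "motion_search", "mode_decision",
        "dct_quantize", "entropy_code"]

# Approach 2: reuse the motion vectors already found by the MPEG-2
# encoder, so the expensive search stage is skipped entirely.
smart = ["mpeg2_decode", "mode_decision", "dct_quantize", "entropy_code"]

print(cost(full), cost(smart))  # 13 5
```

The exact numbers are made up, but the shape of the argument holds: whatever the real stage costs are, dropping the search stage can only help.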
 
Interesting. Thanks. I would imagine if they could leverage work already done without quality impact, they would. At least from an efficiency point of view and getting it done fast, why wouldn't you? So likely #2 would make sense even if they had the hardware to do #1 efficiently. Doing more work than necessary is rarely going to be faster.

Would it be true that there is a substantial overlap in the transistor budget for doing both? Any guesses on what your extra percentage would be for adding encode transistors that don't help with transcode? I understand we're being highly theoretical here, and within a month or two we will flat out know, but this is the kind of thing we do here, so wotthehell. :smile: So, say, if 100% gives you both fully accelerated encode and fully accelerated transcode, and A% is the transistor budget for transcode only, what is B% (where A + B = 100) to add fully accelerated encode?
 
DemoCoder said:
Yeah, but what about encoding? Transcoding from MPEG-2 is no help if you're trying to produce video and your assets aren't mpeg-2 already.
I suppose it depends on whether they are using the MPEG hardware on the chip, or if they are running the calculations in a GPGPU (?) fashion. Nowhere in the article does it say this only works for MPEG-2, and it would be rather limited if that were the case. The fact that they can encode to other formats makes me think they can easily do the necessary calculations on the GPU, so there's no reason they can't do the input calculations too.

Just as other apps like VirtualDub, or DrDivx can accept inputs of many different filetypes, I would expect this new app (or other apps that use this technique) to accept any and all filetypes as input. Wouldn't be much use otherwise.
 
I don't think it can encode from raw fields to H.264. The Theater chip presents MPEG-2 to the GPU over the VIP bus. So unless the GPU can work with raw data and the capturing device sends it in with no pre-processing...

I don't think that's the case, and it certainly won't be with the non-AIW VIVO models at the very least. The hardware combo doesn't work the way it would need to for that.
 
Bouncing Zabaglione Bros. said:
It's only an alpha tech demo, but it could be finished up into quite a nice product. Companies like DivX or Nero could integrate this technique and see massive speedups in their transcoding performance.

I hope Nero will; the transcoder in their package just plain sucks, even though it's the best burning SW out there.
 
Simon F said:
That would be a miracle. H.264 is fiendishly complicated.

There are already H.264 encode algorithms, and even free libraries to do it. Why wouldn't ATI or Nvidia simply run that code on their GPUs instead of the CPU as happens now?

This is not just about transcoding video streams, this is about using the GPU as a specialised processor, for tasks like video transcoding, physics calculations, etc..
 
DemoCoder said:
To answer your question: the hardware neccessary to accelerate transcoding is not exactly the same as the hardware neccessary to accelerate encoding. Transcoding is a simplified problem, and some implementations of it are little more than reformating the syntax of the bitstream and dropping frames or resolution.

Absolutely right. Also, when transcoding from one codec to the same codec, all kinds of shortcuts can be made.

I know that Nero Recode only recodes I-frames in MPEG-2 streams; there'd be no point in recoding P and B frames. NR is pretty fscking fast, <10 minutes for an entire DVD, so I don't know how much benefit hardware assist would give here.

Originally they decoded the video stream and then re-encoded it to MPEG-2 again. Back then transcoding time was an order of magnitude slower (it could produce a better result, because the encoder could make different choices regarding the I/P/B frame mix and the number of bits per I-frame). This is exactly the same "penalty" as transcoding between two codecs. This could probably see a significant speedup.

Edit: Regarding NR, not only does it only recode I-frames, it only re-quantizes the data, so it's just symbol-decode -> re-quantization -> symbol-encode, with no DCT/iDCT involved.

Cheers
Gubbi
 
Bouncing Zabaglione Bros. said:
There are already H.264 encode algorithms, and even free libraries to do it.
I'm well aware that code exists to do it - the standard comes with a reference encoder/decoder.
Why wouldn't ATI or Nvidia simply run that code on their GPUs instead of the CPU as happens now?
Same reason that you don't currently run your word processor on the GPU. You can't just go and take any old C/C++ code, recompile and run it.
This is not just about transcoding video streams, this is about using the GPU as a specialised processor, for tasks like video transcoding, physics calculations, etc..
For some aspects of the video encode the GPU would be great (motion vector searching would be a possibility), but other parts of the process would be too painful to even bear thinking about. :cry:
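To make the GPU-friendly part concrete, here's a toy block-matching sketch (my own plain-Python illustration; the frames and candidate offsets are made up): every candidate motion vector is scored by an independent sum-of-absolute-differences (SAD), so all candidates, and all pixels within each SAD, are embarrassingly parallel work.

```python
# Toy sketch of block-matching motion search. On a GPU every candidate
# offset (and every pixel inside each SAD) is independent work, which is
# why this stage parallelizes well. Frames and offsets are made up.

def sad(block_a, block_b):
    """Sum of absolute differences between two blocks of pixels."""
    return sum(abs(a - b) for a, b in zip(block_a, block_b))

def best_motion_vector(current, reference, candidates):
    """Exhaustively score every candidate offset; each SAD evaluation
    could run in parallel on a GPU."""
    return min(candidates, key=lambda off: sad(current, reference[off]))

reference = {                 # offset -> candidate block in reference frame
    (0, 0): [10, 10, 10, 10],
    (1, 0): [10, 20, 30, 40],
    (0, 1): [40, 30, 20, 10],
}
current = [10, 21, 29, 41]    # block we want to predict
print(best_motion_vector(current, reference, list(reference)))  # (1, 0)
```

The painful parts Simon alludes to are the opposite shape: stages like entropy coding are inherently serial bit-by-bit state machines, which is exactly what GPUs of this era are worst at.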
 
Bouncing Zabaglione Bros. said:
Depends on the codecs used and how you have them set. This is just doing the same calculations as you would when transcoding on the CPU, with the same levels of quality - just a lot faster because it's on the GPU. There should be no difference in quality when running the calculations on the GPU as opposed to the CPU.


Do you also think that every 3D accelerator produces the same image quality?
If not, then you should NOW have recognized that you're talking bullshit.
 
nobody said:
Do you also think that every 3D accelerator produces the same image quality?
If not, then you should NOW have recognized that you're talking bullshit.

Do you think the image quality a chip produces when rendering games has anything to do with the results of using it as a specialised number cruncher? Do you think every piece of transcoding software running on a CPU produces the same image quality as every other?

It's one thing to take shortcuts in games when rendering an image, it's quite another to have any other result than 4 when adding 2+2.

<zorro>

I think it is you who have the shit of the bull in your face!

</zorro>
 