NV30 MPEG-2 Decode Support?

CMKRNL

Newcomer
Has anyone seen any mention of MPEG-2 or MPEG-4 decode capabilities in NV30? I know that in NV17M they added IDCT and IQ to the core. I'm not sure whether this feature is also in the standard NV17, or only in the mobile version to reduce CPU load. Does NV25 support this? Any mention of whether NV30 will? It seems to me that as we move forward, HDTV decode will become a more important aspect of any GPU, and this is a pretty critical feature to have in hardware when you're trying to do something like 1080i. Another thing that would be great to have in HW would be the ability to perform IDCT on DV sources (the coefficients are in a slightly different format, and NV17M does not support this as far as I know).

How about R300? Have they done anything beyond what's already there in the existing cores? I guess the only things really left are stream parsing, VLD, and inverse quantization. The only other thing I've seen so far is their VideoShader stuff, which appears to be some cool shader-based post-processing filters. Presumably their deblock filter does MPEG-4 or WMV8 macroblock-edge deblocking and not just a simple multi-tap filter that smooths out the entire image.

Speaking of MPEG-4, I wonder if they will have decode support for that in R300 or NV30? I know that ATI recently announced a design win for their handheld chip, and they mentioned that it can do MPEG-4 decode in HW. This is probably going to be pretty important given how popular MPEG-4 will be in the near future. The question is whether this functionality has been put into R300 as well?
 
There should be no iDCT in NV25; it should only be in NV17 (and its derivatives).

I would also like to know whether NV30 will build iDCT into the core, as DVD playback is one of my major uses for my PC.

ATI has had it since the first RADEON; let's see whether the R300 shader version of iDCT (with 10-bit color per channel) is any better than the R200's and R100's.
 
Actually, if memory serves correctly, ATI has had sub-pixel accurate motion compensation since the RagePro and IDCT since the Rage128. ATI has definitely been ahead of nVidia in this area, but it seems that nVidia is finally realizing the importance of this (probably because HDTV is now a reality). The question is to what degree they will support this in NV30, and whether any of the manufacturers will extend it to also cover MPEG-4.
 
:D

The Rage series had already slipped my mind.

In NVIDIA's case, I think they have to get the quality right rather than just adding the feature.

Hardware acceleration (motion compensation) on the NV25 is better off than on, but hardware acceleration on the ATI RADEON series is a lot better on than off.
 
DVD playback with hardware motion compensation enabled has always looked excellent on both of my GeForce cards (GeForce DDR and GF4 Ti4200).

Additionally, the GeForce4 MX is supposed to have adaptive deinterlacing. I don't think I've yet seen a DVD quality comparison between the 4MX and ATI's lineup.

Regardless, if the NV30 can put a DVD stream through the pixel shader (and it should be able to), then there's no reason why it can't do hardware iDCT, and you'd think the PS would also be powerful enough for adaptive deinterlacing.
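
By "adaptive" I mean choosing per pixel between weave (keep the sample from the other field) and bob (interpolate within the current field) depending on whether motion is detected. A minimal C sketch of the idea; the simple difference detector and the threshold here are made up for illustration, not anyone's actual algorithm:

#include <stdlib.h>

/* Per-pixel motion-adaptive deinterlace: weave where the two fields
   agree (static content), bob where they differ (motion). The
   |difference| detector and threshold are illustrative only. */
unsigned char deinterlace_pixel(unsigned char above,  /* current field, line above */
                                unsigned char below,  /* current field, line below */
                                unsigned char other,  /* opposite field, this line */
                                int threshold)
{
    int bob = (above + below + 1) / 2;    /* interpolate within the field */
    if (abs(other - bob) < threshold)
        return other;                     /* static: weave the real sample */
    return (unsigned char)bob;            /* moving: bob to avoid combing */
}

A real implementation would obviously use something smarter than a single threshold, but per-pixel select-between-two-values is exactly the kind of thing a pixel shader is good at.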
 
Enabling HW MC on my GF3 and GF4 Ti4600 always makes the picture worse.

Enabling HW MC/iDCT on my ATI RADEON/RADEON 8500 always makes the picture a lot better.

I have all of them except the GF4 MX (which I haven't considered at all).
 

> Additionally, the GeForce4 MX is supposed to have adaptive deinterlacing. I don't think I've yet seen a DVD quality comparison between the 4MX and ATI's lineup.

Yes, that makes a huge difference. I haven't seen nVidia's deinterlacing in action, but ATI has had it since the Radeon days, and it makes a world of difference on interlaced source.


> Regardless, if the NV30 can put a DVD stream through the pixel shader (and it should be able to), then there's no reason why it can't do hardware iDCT, and you'd think the PS would also be powerful enough for adaptive deinterlacing.

Well, a shader takes whatever data you put into its input textures, and the NV30 supports a variety of input formats, so you could easily feed it a texture full of DCT coefficients. The problem is that today the actual transform is done in a dedicated HW block, so a chip either has it or it doesn't.
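
Just to spell out what that dedicated block (or a shader version) has to compute, here's the textbook 8x8 inverse DCT in naive direct form. This is only a reference sketch; real decoders and hardware use fast factored versions, and the spec only requires matching this to within the IEEE 1180 accuracy tolerance:

#include <math.h>

/* Naive direct-form 8x8 inverse DCT, straight from the definition:
     f(x,y) = 1/4 * sum_u sum_v C(u) C(v) F(u,v)
              * cos((2x+1) u pi / 16) * cos((2y+1) v pi / 16)
   with C(0) = 1/sqrt(2), C(u) = 1 otherwise. Reference only; real
   implementations factor this into fast 1-D passes. */
void idct_8x8(const short coef[8][8], short out[8][8])
{
    const double pi = 3.14159265358979323846;
    for (int y = 0; y < 8; y++) {
        for (int x = 0; x < 8; x++) {
            double sum = 0.0;
            for (int v = 0; v < 8; v++) {
                for (int u = 0; u < 8; u++) {
                    double cu = (u == 0) ? 1.0 / sqrt(2.0) : 1.0;
                    double cv = (v == 0) ? 1.0 / sqrt(2.0) : 1.0;
                    sum += cu * cv * coef[v][u]
                         * cos((2 * x + 1) * u * pi / 16.0)
                         * cos((2 * y + 1) * v * pi / 16.0);
                }
            }
            out[y][x] = (short)floor(sum / 4.0 + 0.5); /* round to nearest */
        }
    }
}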

Yes, I agree the pixel shader is definitely powerful enough for adaptive deinterlacing. From that perspective it will just boil down to who has the superior algorithm and image quality.
 
> Enabling HW MC on my GF3 and GF4 Ti4600 always makes the picture worse.


How is it worse? I wonder whether NV2x supports half-pixel accurate motion compensation; if it doesn't, that could certainly make it look worse.
 
There are noticeable artifacts with MC on versus off on the GF4 Ti4600 under PowerDVD XP 4.0.

That reason alone has me frequently swapping cards (ATI R8500 and Ti4600) between watching DVDs and playing games.
 
Well, I've never used PowerDVD. Maybe that has something to do with it.

I've only used WinDVD and NVDVD. Enabling motion compensation/hardware acceleration in either one never resulted in any noticeable change in image quality.
 
With motion compensation there is only one correct answer so you would hope that doing it in software or hardware would make no difference to the displayed image.

The only reasons for differences would be shortcuts taken in either the software or hardware implementation, or some sort of post-processing that is only available after one of the two.
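
To put it concretely: the reconstruction is fully specified integer arithmetic, so any conformant implementation has to produce the same pixels (the iDCT is only pinned down to within a small tolerance, but that difference should be invisible). Roughly, with illustrative names and layout, not any particular decoder's code:

#include <stdint.h>

static uint8_t clamp255(int v)
{
    return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
}

/* Sketch of P-block reconstruction: the prediction fetched at the
   motion vector, plus the iDCT residual, clamped to 8 bits. Every
   operation here is defined exactly by the spec, so HW and SW must
   agree on the result. */
void reconstruct_block(const uint8_t *ref, int ref_stride,
                       int mv_x, int mv_y,            /* full-pel MV */
                       const int16_t residual[8][8],
                       uint8_t *out, int out_stride)
{
    const uint8_t *pred = ref + mv_y * ref_stride + mv_x;
    for (int y = 0; y < 8; y++)
        for (int x = 0; x < 8; x++)
            out[y * out_stride + x] =
                clamp255(pred[y * ref_stride + x] + residual[y][x]);
}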
 

> With motion compensation there is only one correct answer so you would hope that doing it in software or hardware would make no difference to the displayed image.


Well, proper MPEG-2 motion compensation requires half-pel accuracy and MPEG-4 requires quarter-pel accuracy. There was a time when the only one doing this correctly was ATI. After that S3 followed. I don't know what the status of nVidia is, but presumably they also do half-pel accurate MC nowadays. If not, this could be the reason for the image quality issues.
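
For reference, the half-pel part is just an average of neighboring full-pel samples with rounding that the standard fixes exactly. Here's the diagonal half-pel position as a sketch; a chip that skipped this and snapped to full-pel positions would produce exactly the kind of artifacts described above:

#include <stdint.h>

/* MPEG-2 half-pel prediction at the diagonal (h=1, v=1) position:
   the average of the four surrounding full-pel samples, with the +2
   giving the rounding the standard mandates. */
uint8_t pred_halfpel_hv(const uint8_t *ref, int stride)
{
    return (uint8_t)((ref[0] + ref[1] +
                      ref[stride] + ref[stride + 1] + 2) >> 2);
}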
 
Motion compensation looked like poop on my GeForce 256. The de-interlacing was a bob method that looked blockier than the one used in the version of WinDVD that I got free with the card.
 
> MPEG-4 requires quarter-pel accuracy.

Not all of the MPEG-4 profiles require quarter-pel motion compensation. If the chart at divx.com is reliable, the 'simpler' profiles only need half-pel motion-comp.

ATI was the first desktop VGA chip to offer hardware motion compensation (1997). Several laptop VGA chipsets, like SiliconMagic's, offered similar MPEG-2 hardware acceleration (or even more). The Rendition Verite2x00 (also 1997) was supposed to support it, but later drivers removed it entirely, for some reason.

Actually, now that I think about it, SiS's original "6326" also incorporated hardware iDCT + motion-comp. But that VGA wasn't widely available in North America.

The following year (1998), S3's Savage3D and Trident's 9880 (Blade3D) jumped on the hardware motion-comp bandwagon. I still have both VGA cards sitting in a closet, and the Trident's image quality is noticeably worse. The color fidelity seems 'washed out.'
The Savage3D is noteworthy because it was the first VGA with 'hardware DVD-subpicture support' (that's basically a fancy name for alpha-blended 2D sprites composited against the overlay and desktop).
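
In case 'subpicture support' sounds exotic: compositing a subpicture is just per-pixel alpha blending of the subpicture sprite over the decoded video. Something like this standard blend (illustrative, not S3's actual datapath):

#include <stdint.h>

/* Standard 8-bit alpha blend of subpicture over video: alpha = 0
   shows video only, alpha = 255 shows subpicture only. The +127
   rounds the division. */
uint8_t blend_subpicture(uint8_t video, uint8_t sub, uint8_t alpha)
{
    return (uint8_t)((sub * alpha + video * (255 - alpha) + 127) / 255);
}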

The rest has already been covered by other posts (Rage128, Radeon, Geforce, etc.)

The only oddball chip is the Chromatic Research MPACT. The VGA core contained a programmable RISC engine, which could accelerate substantial chunks of MPEG-2 decoding.
 