Egg on ATI's face?

Razor04 said:
Joe DeFuria said:
CyFactor said:
Why not wait for an All-in-Wonder version of the R420Pro or SE? ;)
Of course, we know nothing about the video capabilities of the R420 and its derivatives, so I certainly wouldn't rule any of them out. (And in all honesty, we don't really know how effective nVidia's video circuitry is either...)
Well, I know Tom's Hardware (yea yea, I know) did some tests and pretty much concluded that it sucked compared to ATI's video processing. When they asked NV about this, they got the line about early drivers. At least this is what I recall reading on launch day. Personally, I am going to take a wait-and-see approach on that video processor. If they can get it doing all they say it can do, it will be most impressive, but I am not getting my hopes up at this point in time.

Here's a link to what Tom's said.

Doesn't appear it's really doing anything atm.
 
DemoCoder said:
I saw this working under OpenGL on Linux. Current DX drivers probably don't have a custom DXVA/VMR component.
Hey, if they can get it working well, more power to them, as this is something I really want. :)
 
DemoCoder said:
Yes, I'll agree, as long as you're not fillrate or texture/framebuffer bandwidth limited, e.g. <1600x1200, low overdraw, not much blending, etc.

I think 1024x768 on UT2003/Q3A engine games stands the best chance. I have problems playing Day of Defeat @ 1600x1200 w/6xFSAA on some maps, and that's a Quake1/HL1 engine game.

DemoCoder,

While not all games can be played with high levels of FSAA, those that can will see image quality enhancements.

I have a lot of games from the last three years that I still play. If I can run them with all features and effects on high and play with 8x FSAA, that is a bonus and will make playing through them more enjoyable.

If speed is equal but one card can open up higher image quality enhancements when possible, that is the one I will go with.


There is no reason why NVidia couldn't put an 8x FSAA mode in the card, if only to be used on games like Quake 3 or Serious Sam 2.
 
DemoCoder said:
I saw this working under OpenGL on Linux. Current DX drivers probably don't have a custom DXVA/VMR component.

Then why not do that rather than write something for Linux?
 
anaqer said:
<OT>Anyone know why 1280x1024 even exists, whereas the far more reasonable 1280x960 has almost no support in games? :devilish: </OT>


http://arkface.com/hci/1280x1024.html

But the resolution of 1280x1024 is completely different: 1280:1024 = 1.25. Why does this resolution exist? What is the reason that manufacturers of screens and video boards include such a resolution in their products? Because 10 years ago there were some models of computer monitors, included in Silicon Graphics workstations, that had almost square (1.25) physical proportions. Yes, they were less rectangular. They looked almost square, and this resolution fit them perfectly. But not today's monitors. Damn tradition.

And lots of old cards don't support 1280x960.
AFAIK even the S3 DeltaChrome doesn't support that resolution.
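
For anyone curious about the numbers, here is a tiny, self-contained C sketch (nothing vendor-specific, just arithmetic) that reduces each resolution to its simplest ratio, which makes it obvious why 1280x1024 (5:4) looks squashed on a 4:3 monitor while 1280x960 and 1600x1200 do not:

#include <stdio.h>

/* Reduce a width:height pair to its simplest ratio and print it,
 * so common resolutions can be compared against a 4:3 (1.333) screen. */
static int gcd(int a, int b) { return b ? gcd(b, a % b) : a; }

static void show(int w, int h)
{
    int g = gcd(w, h);
    printf("%dx%-5d -> %d:%d  (%.3f)\n", w, h, w / g, h / g, (double)w / h);
}

int main(void)
{
    show(1280, 960);   /* 4:3  (1.333) - matches a standard CRT        */
    show(1280, 1024);  /* 5:4  (1.250) - squarer than the screen shape */
    show(1600, 1200);  /* 4:3  (1.333)                                 */
    return 0;
}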
 
Probably because it is easier to hack the source to libavcodec and recompile MPlayer on Linux than to work with the DX DDK. And if NVidia needs some changes to a DDK interface, it's much more difficult to wait for MS than to hack your own OpenGL extension. Ask yourself why NVidia and others have always chosen OpenGL first to expose new features. It's simply easier to do it on Linux where the amount of source code not under your control is minimized. You said it yourself with respect to ATI and video acceleration: That ATI gains a benefit from doing all their own HW (not using third party ICs) because they don't have to depend on third party code.
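
As an aside on the "hack your own OpenGL extension" point: from the application side, picking up a vendor extension is just a string check against the driver's extension list. The sketch below is purely illustrative; "GL_NV_example_video" is a made-up extension name, not a real NVidia extension, and a real player would obviously do far more than print a message:

/* Minimal sketch: how an application detects a vendor OpenGL extension
 * at runtime. Requires a current GL context, which GLUT provides once a
 * window exists. The extension name is hypothetical. */
#include <GL/glut.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutCreateWindow("ext-check");   /* creates a current GL context */

    const char *exts   = (const char *)glGetString(GL_EXTENSIONS);
    const char *wanted = "GL_NV_example_video";   /* hypothetical name */

    if (exts && strstr(exts, wanted))
        printf("%s is exposed by this driver\n", wanted);
    else
        printf("%s not found - fall back to the plain software path\n", wanted);

    return 0;
}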


Curious that your posts have taken on a critical tone with NV lately, but a defensive tone with respect to ATI.

Things that make you go hmmmm.
 
max-pain said:
anaqer said:
<OT>Anyone know why 1280x1024 even exists, whereas the far more reasonable 1280x960 has almost no support in games? :devilish: </OT>


http://arkface.com/hci/1280x1024.html

But the resolution of 1280x1024 is completely different: 1280:1024 = 1.25. Why does this resolution exist? What is the reason that manufacturers of screens and video boards include such a resolution in their products? Because 10 years ago there were some models of computer monitors, included in Silicon Graphics workstations, that had almost square (1.25) physical proportions. Yes, they were less rectangular. They looked almost square, and this resolution fit them perfectly. But not today's monitors. Damn tradition.

And lots of old cards don't support 1280x960.
AFAIK even the S3 DeltaChrome doesn't support that resolution.
More to it than that.
It's part of the SXGA spec for LCD displays - tons of LCD displays are native 1280x1024.
 
And if NVidia needs some changes to a DDK interface, it's much more difficult to wait for MS than to hack your own OpenGL extension.

The point is that others can, and have, shown video performance improvements through their various video systems straight through the DX / Windows interfaces. The performance numbers at Tom's show that to be the case, and that the interface for providing benefits from hardware-accelerated video is clearly already there. If it is merely a driver issue preventing them from getting CPU usage down, surely the effort would be better spent enabling that rather than writing a Linux / OpenGL interface for demo purposes, as Windows is the environment that 99% of the target market will be using.

You said it yourself with respect to ATI and video acceleration: That ATI gains a benefit from doing all their own HW (not using third party ICs) because they don't have to depend on third party code.

Actually that pertains to All-in-Wonder products, not the standalone parts.

Curious that your posts have taken on a critical tone with NV lately, but a defensive tone with respect to ATI.

No more or less so than ever before. There has been some rather rampant and off-base speculation in various cases that needs to be put in check now and then, and that might put some noses out of joint among those who are sensitive to it for their own reasons.

What I find curious is your own bemoaning of various fan-boys on the forum, and yet you only complain when they appear to have a particular allegiance to one company. Equally (if not more) vocal and even stupid members with an opposing allegiance appear to go completely unchecked by you.

Hmmmm, indeed.
 
Maybe he knows something WE don't, and with the constant sniping going on, maybe he feels the need to defend them, as ATI could be about to make Nvidia look very bad; in essence he might be letting you down gently.

Though I think he is just responding to those who are making out that they know everything (when he KNOWS a lot more), and cryptic responses are the best kind to quell unrest.
 
DaveBaumann said:
And if NVidia needs some changes to a DDK interface, it's much more difficult to wait for MS than to hack your own OpenGL extension.
If it is merely a driver issue preventing them from getting CPU usage down, surely the effort would be better spent enabling that rather than writing a Linux / OpenGL interface for demo purposes, as Windows is the environment that 99% of the target market will be using.

When I first got my R300, I had DX9, but there was no DX9 driver from ATI for it (except a pre-beta closed to consumers). Clearly, if ATI really cared, wouldn't they have delivered full DX9 drivers with their flagship card? Did ATI ship FullStream in 2002? Nope. These things take time. Hacking Linux for a demo/test rig is quicker than trudging through Microsoft's swamp, and I wouldn't blame NVidia's engineers if they preferred doing initial development on Linux.

NVidia shipped functional, working drivers now to get to market, and clearly they will ship better drivers in the near future that take advantage of all the hardware. The NV40 codecs require a rewrite of the standard codec path. Standard MPEG-2 cards usually only accelerate the iDCT and sometimes the motion compensation path; the NV40 requires that the CPU parts of the codec be rewritten for the NV40's VOP. It's not as simple as the 5950 path and is going to take more time.
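
To make the difference concrete, here is a rough conceptual sketch (plain C, not based on any actual driver or codec source) of which MPEG-2 decode stages stay on the CPU under a pure software codec, a classic iDCT/motion-comp card, and a fully offloaded video processor. The exact split shown for the "full" case is an assumption based on the description above, not measured behaviour of any real hardware:

/* Conceptual sketch only: which MPEG-2 decode stages stay on the CPU
 * under different acceleration schemes. The stage list is the standard
 * MPEG-2 pipeline; the three offload levels are illustrative assumptions. */
#include <stdio.h>

enum { PURE_SOFTWARE, IDCT_MOCOMP_CARD, FULL_VIDEO_PROCESSOR };

static const char *stages[] = {
    "bitstream parsing / VLD",
    "inverse quantisation",
    "inverse DCT",
    "motion compensation",
    "deblocking / post-processing",
    "colour-space conversion",
};

/* 1 = the GPU/video processor handles this stage, 0 = the CPU does. */
static const int offloaded[3][6] = {
    { 0, 0, 0, 0, 0, 0 },   /* pure software codec            */
    { 0, 0, 1, 1, 0, 0 },   /* classic iDCT + mo-comp card    */
    { 1, 1, 1, 1, 1, 1 },   /* "full" programmable video unit */
};

static void report(int scheme, const char *name)
{
    printf("%s:\n", name);
    for (int i = 0; i < 6; i++)
        printf("  %-30s %s\n", stages[i],
               offloaded[scheme][i] ? "hardware" : "CPU");
    printf("\n");
}

int main(void)
{
    report(PURE_SOFTWARE,        "Pure software decoder");
    report(IDCT_MOCOMP_CARD,     "iDCT / motion-comp acceleration");
    report(FULL_VIDEO_PROCESSOR, "Full video processor offload");
    return 0;
}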


What exactly is your original cryptic comment suggesting? Are you trying to tell me that it is not the driver's fault and that the NV40 accelerates even less than the 5950 does? So NVidia's VOP is just a big lie and in fact, the NV40 can't even offload things that the 5950 can? So the NV40 requires *MORE* of a given codec to run on the CPU than a 100% pure software decoder, or hardware that only supports iDCT in HW?

If it's not the driver or player application, then perhaps you can suggest your own theory for why the NV40 eats up more CPU than the 5950?

Actually that pertains to All-in-Wonder products, not the standalone parts.
It pertains to any source code that you do not have under your control. Having the source code to the complete player, codec library, OpenGL driver, and operating system kernel is way better than not having it.

No more or less so than ever before. There has been some rather rampant and off-base speculation in various cases that needs to be put in check now and then, and that might put some noses out of joint among those who are sensitive to it for their own reasons.

Rampant and off-base speculation like, say, that Tom's Hardware's previews show incontrovertibly that the NV40 video processor can't offload work from the CPU as well as the 5950's nearly "everything on the CPU" approach? That his FarCry screenshots show that the NV40 must be the same as the NV3x?

Exactly what are you speculating about the NV40's video processor here Dave?


What I find curious is your own bemoaning of various fan-boys on the forum, and yet you only complain when they appear to have a particular allegiance to one company. Equally (if not more) vocal and even stupid members with an opposing allegiance appear to go completely unchecked by you.

Because they don't run a website that claims to be objective. I hold you to a higher standard than them.
 
DemoCoder, I had DX9 drivers for my 9700 Pro a few weeks before DX9 was officially released to the end user.

What are you talking about?
 
I got an R300 Pro in Sept 2002. I had DX9 back then on my PC (yes, it wasn't out yet). The ATI drivers provided on ATI's partner website at that point were hardly even what I'd call beta quality. They were missing tons of functionality. Even after the MS DX9 release, several things continued to not work or were seriously bugged (from a developer perspective). You just didn't notice them because there wasn't much software that fully exercised the DX9 API back then. But if you have access to the DX9 beta forums, you can read about it.

Am I attacking ATI? No, my point is, the initial driver release, especially for a new architecture, should not be expected to be mature, or even 100% functional. Catalyst has come a long way since 2002.

Here we have Dave suggesting that NVidia's drivers aren't the problem, but hey, DXVA must be fully supported and optimized by now, and somehow the NV40 GPU is able to impose higher CPU usage when decoding than almost pure-software codecs. This, despite the fact that the launch event was demoing hands-on boxes that you could play with (they just happened to be Linux) that showed the VOP working.

All I'm saying is, give it time. The card hasn't even shipped to consumers yet.
 
I still don't understand your point.


You're saying that you wanted DX9 drivers even though DX9 wasn't out yet.


Am I attacking ATI? No, my point is, the initial driver release, especially for a new architecture, should not be expected to be mature, or even 100% functional. Catalyst has come a long way since 2002.

Well, I can agree with this.

But the thing is, Catalyst was 100% functional with a few odd bugs.

DX9 wasn't out yet. It supported DX 8.1, though, and even before DX9 came out it supported that too.

Whereas NVidia has gone the other way with regard to driver quality.



You just didn't notice them because there wasn't much software that fully exercised the DX9 API back then

I do have access, and not only that, but there were a lot of problems with DX9 in general because it wasn't even out yet.
All I'm saying is, give it time. The card hasn't even shipped to consumers yet

Well, they have less than 45 days now, don't they? Most likely half of that.

I mean, doesn't NVidia have the "golden" drivers?

Or will this just be another "bug" they intend to fix?


I can tell you I will be upset if this isn't functional on an XP box on launch day (the day I can go to a store and buy the card).
 
jvd said:
But the thing is, Catalyst was 100% functional with a few odd bugs.

No, the original drivers weren't 100% functional. Unless you think, for example, locking up my system when a game uses point sprites is "100% functional".


Whereas NVidia has gone the other way with regard to driver quality.

As far as I can tell, they both go in the same direction. Initial drivers: very immature, not optimal, and may not support 100% of all features bug-free. Later, drivers get faster, more functional, and have fewer bugs.



I mean, doesn't NVidia have the "golden" drivers?
Play the PR game with other posters in this forum. We're discussing technical issues here. The issue is: what is the more likely scenario for the high CPU load in Tom's tests, a fundamental HW limitation of the NV40 vs. the 5950, OR drivers?
 
When I first got my R300, I had DX9, but there was no DX9 driver from ATI for it (except a pre-beta closed to consumers). Clearly, if ATI really cared, wouldn't they have delivered full DX9 drivers with their flagship card?

I don't see how this analogy applies. We're talking about showing end-user benefits in the primary usage scenario. In this instance ATI was targeting a DX8 driver, as that is what consumers were using. When reviews came around they had effective DX8 drivers, which was by far the primary usage scenario when the R300 was released, and they were able to demonstrate clear performance and quality benefits to end users in reviews with the R300 platform under those conditions.

Linux is not the primary usage scenario for the video acceleration, nor have they yet shown this technology to have benefits over other technologies in the environments people are actually going to use it in. To me, that would suggest this should be the priority.

Did ATI ship FullStream in 2002? Nope. These things take time.

Can FullStream be supported through the DDK? Evidently not, seeing as it appears that third-party applications are required.

What exactly is your original cryptic comment suggesting? Are you trying to tell me that it is not the driver's fault and that the NV40 accelerates even less than the 5950 does? So NVidia's VOP is just a big lie and in fact, the NV40 can't even offload things that the 5950 can?

Please stop using your own sensitivities to put words in other people's mouths. I'm not suggesting anything; I'm questioning why they would use a Linux-based demonstration instead of putting the effort into the primary target for consumers right away, and hence have a better showing in reviews. I can see that WMV9 is hardly likely to be the highest priority for initial drivers, but I would have thought MPEG / DVD decoding would have been. Given the length of time we've heard boards have been available internally, how long does it take to support a single codec under Windows, and hence how long will it take for them to support all those they have advertised?

Excuse me if you think it's wrong to throw some questions up, but if you are going to claim that there are end-user benefits, then I think it would be beneficial to show them in an environment where most end users will actually get a benefit.
 