arken420 said:
Hey this is also my first post on this site.
[...]
Anyways, I'm tired and want to go to bed, but I hope I pissed a bunch of people off with this post and that it sparks some real technically backed-up response instead of the drabble I've been seeing posted in this forum.
Laterz
Errr, it isn't our fault if most of what we say is based on facts that have already been discussed and argued in other threads.
Plus, you've got to understand a lot of this is based on reliability. Some people here, including Dave, have proved that when they say something non-public, it's often accurate.
But to respond to a few more precise points...
Are we just supposed to take the word of everyone who works at ATI?
No, but Dio, for example, is heavily involved in writing developer-oriented papers about improving shader performance on ATI hardware. OpenGL Guy works on the D3D drivers and on the driver-side AA implementation, IIRC. And so on.
Those people generally know WTF they're talking about, and they generally have no reason to lie to us.
Same thing for the NVIDIA personnel lurking on internet forums illicitly *grins* - too bad that since they can't speak for NVIDIA, nobody ever takes them seriously. A source of mine regularly posted *correct* information on forums, generally not very juicy stuff, and everyone always said he was lying or something.
If I had said the exact same thing, everyone would have said it made sense.
Just shows you how big a part reliability plays in trusting information.
I really wish NVIDIA personnel would finally be given the authorization to post on forums. The current situation, where a small paragraph in a barely related contract forbids them from posting, is pretty lame.
The lamest part, though, is that I know for a fact this clause has already been used at least once against employees.
Nvidia doesn't have that excuse; they should have done some more research and possibly held back their product so they could have made changes to the architecture to be more competitive. But then they wouldn't have been competitive [...]
NVIDIA's original NV30 ETA: Spring 2002
Available on store shelves: Spring 2003
Trust me, it was delayed enough already.
I see that when I hear that Nvidia's product has all these new complex additions, but doesn't run well at all. I have taken a $150,000 Sun server and brought it down lower than a P2-300 in performance with some well-placed crappy code and basic assumptions that didn't pan out. I had an excuse: I had never compiled or ported any code from x86 to that architecture; I was just messing with it.
I don't quite understand what you're trying to say there. Are you saying NVIDIA's architecture is very complex to program for, but that given an ideal situation for a specific workload, you can achieve good performance?
If that's the case, let me tell you right away that even those ideal cases are generally very far behind ATI in pixel shader programs.
Sure, cases where NVIDIA beats ATI in complex programs do exist, but they're significantly rarer than the opposite - and "significantly" might in fact be an understatement *grins*
and that it sparks some real technically backed-up response instead of the drabble I've been seeing posted in this forum.
Once again, there are people on this forum who pretty much know the technical justification behind what other people are saying. And that might partly be why joining this community isn't particularly easy, as is the case with many other technical communities.
Uttar