Bob3D said:
fallguy said:Does anyone else find Deco-Rj's post hard to understand, and pretty much useless?
Looks like another nvidiot.
Deco-Rj, what's your vid card?
IgnorancePersonified said:
Hellbinder said:On another note, let's not forget that Nvidia is hacking AF in OpenGL, and quality is being affected. They are also hacking AF in several other application-specific cases, like UT. And it's apparent that their hacks reduce quality noticeably.
The silence is deafening...
So ATi gets caught doing something screwy, and you bring up nVidia's past to make everything seem O.K.?...
The Baron said:Dave, would you mind closing this bullshit thread, banning the fanboys/trolls, and then starting a nice thread where we can have a sane conversation about what is going on here?
Deco-Rj said:
Bob3D said:
fallguy said:They suck for putting up r42x hardware that they know will be replaced by sm3.0 hw as soon as Ati/tsmc can deliver.
They SUCK for throwing pennies at Valve.
[]'s
Sorry, but this is stupid.
What about all the twimtbp titles?
If anyone here is "cheating", it's Nvidia.
Last year I bought an FX5900, and I see all my friends with R3xx cards running twimtbp titles with better speed/IQ.
I really like GeForce cards/drivers, but I don't want to get "cheated" again by nVidia.
DaveBaumann said:Again, we're looking at AA cases - different architectures will do different things in different applications. Second, NV40 has a pixel shader performance hit when utilising AF; R420 does not.
How could AF affect the shader performance? I'm really curious.
Hellbinder said:It is also a little irritating how it can be explained as plain as day that Nvidia is losing performance because of the way they share the load on ALU1 in certain situations while ATi is fully parallel... yet the same insinuating questions get asked over and over and over.
Would this ALU load sharing be necessary for SM 3.0 support? I really don't know... so that's why I'm asking. If it is, I think you could hardly criticize the NV40 architecture over something like this if ATi would have to move to something similar to support it in the future.
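To make the shared-ALU point concrete, here is a minimal toy cost model; the instruction and fetch counts are made-up illustrative assumptions, not NV40 or R420 specifications. In the serialized case the shared ALU cannot issue shader math while texture filtering is in flight, so every extra bilinear sample AF adds lands directly on the cycle count; in the overlapped case the filtering is hidden until it becomes the bottleneck:

```python
# Toy cost model (illustrative assumptions only, not vendor data) for the
# effect Dave and Hellbinder describe: AF multiplies the bilinear samples
# per texture fetch, and an architecture whose shader ALU also services
# texture work pays for those samples in lost shader issue slots.

def shader_cycles(alu_ops, tex_fetches, af_level, overlapped):
    """Cycles for one shader invocation under this simplified model."""
    filter_cycles = tex_fetches * max(1, af_level)  # 1 bilinear sample/cycle
    if overlapped:
        # Filtering proceeds in parallel with ALU work (R420-like case):
        return max(alu_ops, filter_cycles)
    # The shared ALU stalls while texture filtering completes (NV40-like case):
    return alu_ops + filter_cycles

for af in (1, 2, 4, 8, 16):
    serial = shader_cycles(alu_ops=20, tex_fetches=4, af_level=af, overlapped=False)
    parallel = shader_cycles(alu_ops=20, tex_fetches=4, af_level=af, overlapped=True)
    print(f"{af:2d}x AF: serialized={serial:3d} cycles, overlapped={parallel:3d} cycles")
```

In this toy model the overlapped pipeline shows no hit at all until filtering outruns the 20 assumed ALU cycles, while the serialized one slows down at every AF step: the same pattern as "a pixel shader performance hit when utilising AF" on one architecture but not the other.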
The “Superscalar” nature of the pixel shaders should provide an extra level of flexibility that NVIDIA's driver compiler should be able to take hold of and increase the shader performance. Although NVIDIA have opted to stick with an ALU that's also used for texture address processing, this is probably no bad thing either, because as shader lengths increase the ratio of texture accesses to ALU use will drop, reducing the need for a dedicated texture address processing unit.
How could AF affect the shader performance? I'm really curious.
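The review passage's ratio argument is simple arithmetic, sketched below with an assumed fixed fetch count (all numbers are illustrative, not taken from any real shader): as the ALU instruction count grows, the share of issue slots a combined texture-address/ALU unit spends on addressing shrinks, so dedicated addressing hardware buys less and less.

```python
# Sketch of the texture-to-ALU ratio trend described in the quoted review
# text. The fixed fetch count of 8 is an illustrative assumption.

fetches = 8  # texture fetches in a hypothetical shader
for alu_ops in (8, 16, 32, 64, 128, 256):
    share = 100 * fetches / (fetches + alu_ops)
    print(f"{alu_ops:3d} ALU ops: tex/ALU ratio = {fetches / alu_ops:.3f}, "
          f"addressing work = {share:.1f}% of total issue slots")
```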
ChrisRay said:
IgnorancePersonified said:The silence is deafening...
Honestly, you wouldn't expect to hear anything specific from either company on a Sunday night, would you?
Sorry... time zones suck; it was midday Monday here.
noko said:What I get is that during the filtering stage for a texture where bilinear samples are being taken, the texture unit in the pixel pipeline, when used, prevents ALU or PS operations on the NV40. As you increase the AF, more bilinear samples are being processed, and therefore more clock cycles pass where you have no PS ops. ATI allows some parallel operation between pixel shading and texture sampling, while Nvidia doesn't. The higher the AF, the more filtering is required, the more texture ops are required, blocking PS ops for that pixel until the texture filtering is done.
Would it be needed to issue an instruction for every texture sample? I thought the sampling is taken care of in the TMU; correct me if I'm wrong.
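noko's description also implies some rough per-fetch arithmetic, sketched here under assumed (not measured) costs: trilinear filtering taking two bilinear samples per probe, N-x AF taking up to N probes along the axis of anisotropy, and one bilinear sample per TMU cycle.

```python
# Back-of-envelope version of noko's point (assumed costs, not NV40 data):
# higher AF levels decompose each texture fetch into more bilinear samples,
# and in a pipeline that cannot overlap filtering with shading, each sample
# is a cycle in which no pixel shader ops are issued.

def bilinear_samples(af_level, trilinear=True):
    probes = max(1, af_level)  # up to af_level probes along the anisotropy axis
    return probes * (2 if trilinear else 1)  # trilinear = 2 bilinear samples/probe

for af in (1, 2, 4, 8, 16):
    n = bilinear_samples(af)
    print(f"{af:2d}x AF: up to {n:2d} bilinear samples per fetch, i.e. "
          f"{n} TMU cycles blocking PS ops in a non-overlapping pipeline")
```

On the question itself: in this simple model the shader would not need an instruction per bilinear sample; one fetch instruction kicks off the TMU's filtering loop, which matches the intuition that sampling is "taken care of in the TMU". The cost shows up not as extra instructions but as cycles that dependent shader work has to wait.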