ATI's new card

maxima420

Newcomer
I'm new here. Hi, nice to meet you all. I've been following the 3D scene for some time. I was just hoping someone could help me with this:

ATI's new flagship:

It only does 24-bit, whereas the NV40 does 32. Does that mean the NV40 works about 20% harder to render each frame, unless it's running in mixed mode?

It cheats at AF and uses only bilinear instead of trilinear (excuse my spelling), and it's basically been proven that it only does trilinear when colored maps are applied...

Also, it can't do 8x AA, only 6x. The 8x temporal is crap, so in all the reviews where it's doing 8x it's not doing nearly the same work as the NV40's 8x AA...

I was wondering if anyone has a link to a review of the NV40 vs. the R420 where the NV40 is using bilinear AF and doing only 6x AA (I don't remember if it has a 6x mode) versus the R420 doing its so-called 8x AF with its 6x AA.

I was also wondering if anyone has applied a bell curve (perhaps the wrong word) on top of a graph showing the difference between the theoretical work the NV40 does in comparison to the R420.

If I'm wrong, please point it out; I'm really interested.

Also, the dual-GPU motherboard I've seen pics of in Alienware's box: do you guys think it's possibly NVIDIA-only? If so, it would give them a good market segment of designers who want the absolute most out of a machine. It would deliver the best feature set among 3D cards at the moment, and NVIDIA would be the only cards powering them...

Anyway, cheers!
 
maxima420 said:
ATI's new flagship:

It only does 24-bit, whereas the NV40 does 32. Does that mean the NV40 works about 20% harder to render each frame, unless it's running in mixed mode?
Not at all. Both ATI's FP24 and NVidia's FP32 instructions can be executed with equal effort (though FP32 certainly costs more in terms of transistors, and therefore die size, to implement). Both can execute an instruction in a single cycle; the number of instructions per cycle is actually determined by the number of ALUs in the pipeline. The difference in performance NVidia sees between running FP32 and FP16 is due to other matters relating to how NVidia implemented its registers. I personally do not know exactly what the cause is; it's been discussed by other, more knowledgeable people here, though.
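To put rough numbers on the point that per-clock throughput is set by the ALU count rather than by the precision of each ALU, here is a minimal sketch; the pipeline counts and clocks below are placeholders for illustration only, not vendor specifications:

```cpp
#include <cstdio>

// Peak shader instruction throughput, ignoring memory stalls, register
// pressure and co-issue rules. Precision (FP24 vs. FP32) does not appear
// anywhere: each ALU retires one instruction per clock regardless of its
// internal width.
double peak_ginstr_per_sec(int pipelines, int alus_per_pipe, double clock_hz)
{
    return pipelines * alus_per_pipe * clock_hz / 1e9;
}

int main()
{
    // Purely hypothetical configurations.
    std::printf("FP24 part: %.1f Ginstr/s\n", peak_ginstr_per_sec(16, 2, 500e6));
    std::printf("FP32 part: %.1f Ginstr/s\n", peak_ginstr_per_sec(16, 2, 400e6));
    return 0;
}
```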

It cheats at AF and uses only bilinear instead of trilinear (excuse my spelling), and it's basically been proven that it only does trilinear when colored maps are applied...
If anisotropic filtering is forced in the drivers, then trilinear filtering is used for the first texture stage, while bilinear filtering is used for the other texture stages. However, if anisotropic filtering is controlled by the application, the driver obeys and will apply trilinear filtering to whatever texture stages the application specifies. The latest development has been regarding the implementation of trilinear filtering itself.
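As a rough way to picture that behaviour (a hypothetical model of what was just described, not ATI's actual driver logic):

```cpp
enum class Filter { Bilinear, Trilinear };

// Hypothetical model: with AF forced in the control panel, only texture
// stage 0 gets trilinear; with application-controlled AF, the per-stage
// request from the application is honoured as-is.
Filter select_filter(bool af_forced_in_driver, int texture_stage, Filter app_requested)
{
    if (af_forced_in_driver)
        return (texture_stage == 0) ? Filter::Trilinear : Filter::Bilinear;
    return app_requested;
}
```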

Also, it can't do 8x AA, only 6x. The 8x temporal is crap, so in all the reviews where it's doing 8x it's not doing nearly the same work as the NV40's 8x AA...
It cannot do a real 8x AA and is limited to 6x, that is true. 8x can be simulated via temporal AA at 4x, which swaps between two sample patterns each frame. At high framerates (usually at the refresh rate of the monitor) the switch is barely, if at all, noticeable -- and it is certainly invisible in a moving scene -- and does indeed produce a very nice result at a cost barely more than normal 4x.

Temporal AA simulating 8x certainly is not doing as much work as NVidia's 8x. Then again, NVidia's 8x is a combination of supersampling and multisampling. I will not discuss quality, but I will certainly say that in most modern games the performance hit is extreme.
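To make the pattern-swapping idea above concrete, here is a minimal sketch; the sample positions are made up for illustration and are not the hardware's real rotated-grid patterns:

```cpp
#include <array>
#include <cstdio>

struct Sample { float x, y; };              // sub-pixel offsets in [0, 1)
using Pattern = std::array<Sample, 4>;      // a 4x AA sample pattern

int main()
{
    // Two illustrative 4x patterns; alternating them every frame is what
    // lets 4x temporal AA approximate the look of 8x at high framerates.
    const Pattern pattern_a{{{0.125f, 0.375f}, {0.375f, 0.875f},
                             {0.625f, 0.125f}, {0.875f, 0.625f}}};
    const Pattern pattern_b{{{0.125f, 0.625f}, {0.375f, 0.125f},
                             {0.625f, 0.875f}, {0.875f, 0.375f}}};

    for (int frame = 0; frame < 4; ++frame) {
        const Pattern& p = (frame % 2 == 0) ? pattern_a : pattern_b;
        std::printf("frame %d: first sample at (%.3f, %.3f)\n", frame, p[0].x, p[0].y);
    }
    return 0;
}
```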

I was wondering if anyone has a link to a review of the NV40 vs. the R420 where the NV40 is using bilinear AF and doing only 6x AA (I don't remember if it has a 6x mode) versus the R420 doing its so-called 8x AF with its 6x AA.
AFAIK, NVidia's 6x is not exposed in the current drivers for the NV40. Back on the GeForce FX, though, it was not at all impressive and was arguably worse than their 4x supersampling. The problem with comparing anything involving temporal AA is that it cannot be demonstrated in screenshots, which is where most comparisons are done. I think most reviews have commented on the effect, though.

I was also wondering if anyone has applied a bell curve (perhaps the wrong word) on top of a graph showing the difference between the theoretical work the NV40 does in comparison to the R420.
I don't think so... Then again, how can "work" really be measured?
 
Sorry if I seem like an nvidiot.

I just see these as factors that I would like to know more about.

I'm not some idiot who thinks the R420 sucks; it's a great card. I'd just like to be clear on these matters...

And no, I didn't buy stock in NVIDIA; I was a high school student then. I wish I had, though!

Anyway, any clarification would be greatly appreciated.

If you have a comment that doesn't pertain to the subject, please refrain from posting.

Thanks again.
 
If the output is the same quality, why would you care whether it's working harder or not?

Look at the benchmarks, look at the IQ comparisons, decide which performance suits your needs best, decide which features you want or need (SM3.0, power supply requirements, 3Dc, etc.) and base your purchasing decisions on that.
 
I'm new here. Hi, nice to meet you all. I've been following the 3D scene for some time. I was just hoping someone could help me with this:
Welcome. I'll offer my understanding of some issues, which is by no means perfect.

It only does 24-bit, whereas the NV40 does 32. Does that mean the NV40 works about 20% harder to render each frame, unless it's running in mixed mode?
R420's pixel shaders compute only at FP24 precision, whereas NV40's can do FP16 or FP32. ATi's vertex shaders are FP32, though. NV40 doesn't work "harder" to achieve its full precision (FP32), but FP32 does require more transistors than FP24. AFAIK, NV40 is faster with FP16 than with FP32 because the former requires less memory space and thus uses less memory bandwidth.
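As a quick back-of-the-envelope illustration of that storage difference (simple arithmetic only; this says nothing about how NV40 actually organizes its register file):

```cpp
#include <cstdio>

// Bytes needed to hold one 4-component (RGBA/vec4) value at a given
// floating-point precision.
int bytes_per_vec4(int bits_per_component)
{
    return 4 * bits_per_component / 8;
}

int main()
{
    std::printf("FP16 vec4: %d bytes\n", bytes_per_vec4(16)); // 8
    std::printf("FP24 vec4: %d bytes\n", bytes_per_vec4(24)); // 12
    std::printf("FP32 vec4: %d bytes\n", bytes_per_vec4(32)); // 16
    return 0;
}
```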

It cheats at AF and uses only bilinear instead of trilinear (excuse my spelling), and it's basically been proven that it only does trilinear when colored maps are applied...
OK, the current issue is not with AF but with trilinear (which can be applied in conjunction with AF). The issue is that ATi isn't providing the same level of trilinear filtering between MIP maps as it did with its previous high-end parts (though it appears ATi has been using this current "trylinear" method since the 9600 series), and it also isn't applying the same level or type of trilinear filtering to all textures (it alters it depending on the differences in texture patterns between MIP maps, if I understand the issue correctly). ATi's "trylinear" is still superior to bilinear in that there is some filtering occurring between MIP maps. Trylinear doesn't show an abrupt transition from one MIP map to another, as bilinear does, but neither does it appear to filter as much as "true" or previous incarnations of trilinear. Whether this is a good optimization or a simple forced trade-off of IQ for speed is still up in the air, IMO. Remember, 3D graphics is the art of cheating without getting caught. If IQ ruled supreme, we'd all still be using SSAA instead of MSAA + AF, as SSAA filters more of the scene than the other two methods combined (and at a greater cost in framerate, which is why it was essentially dropped).
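One way to picture the difference, purely speculatively (ATi has not published the actual algorithm, so the "blend band" idea below is only an assumption used for illustration): full trilinear blends across the whole gap between MIP levels, while a reduced version blends only inside a narrow band around the transition.

```cpp
#include <algorithm>
#include <cmath>

// Classic trilinear filtering blends the two nearest MIP levels using the
// fractional part of the LOD, giving a smooth 0..1 ramp across the gap.
float trilinear_blend(float lod)
{
    return lod - std::floor(lod);
}

// Speculative "reduced" trilinear: blend only inside a band (0 < band <= 1)
// centred on the MIP boundary and snap to a single level elsewhere.
// band = 1.0 reproduces full trilinear; band -> 0 degenerates toward
// bilinear's hard MIP transition.
float reduced_trilinear_blend(float lod, float band)
{
    float f     = lod - std::floor(lod);
    float start = 0.5f * (1.0f - band);
    return std::clamp((f - start) / band, 0.0f, 1.0f);
}
```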

Also, it can't do 8x AA, only 6x. The 8x temporal is crap, so in all the reviews where it's doing 8x it's not doing nearly the same work as the NV40's 8x AA...
Yes, it appears ATi maxes out at 6x MSAA because it can't combine MS + SS AA modes. nV maxes out at 4x MSAA and has to add SSAA on top of MSAA to achieve higher AA modes. I'm not sure why you think temporal AA is "crap," though...?
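On the arithmetic of those combined modes (a sketch; treating an 8x hybrid as 4x MSAA plus 2x SSAA is an assumption about how such modes are typically composed, not a vendor specification):

```cpp
#include <cstdio>

// In a hybrid mode the supersampling factor multiplies everything, including
// shading, while multisampling adds extra coverage/depth samples but shades
// each pixel only once.
int total_samples(int msaa_samples, int ssaa_factor) { return msaa_samples * ssaa_factor; }
int shading_cost(int ssaa_factor)                    { return ssaa_factor; }

int main()
{
    std::printf("4x MSAA + 2x SSAA: %d samples, %dx shading cost\n",
                total_samples(4, 2), shading_cost(2));  // 8 samples, 2x shading
    std::printf("6x MSAA alone:     %d samples, %dx shading cost\n",
                total_samples(6, 1), shading_cost(1));  // 6 samples, 1x shading
    return 0;
}
```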

I was wondering if anyone has a link to a review of the NV40 vs. the R420 where the NV40 is using bilinear AF and doing only 6x AA (I don't remember if it has a 6x mode) versus the R420 doing its so-called 8x AF with its 6x AA.
nV hasn't exposed 6x AA (mixed-mode) with NV40 yet, though I think 3DCenter.org may have used either a 6xS or an alternative 8xS mode in their X800P vs. 6800U benchmarks.

I was also wondering if anyone has applied a bell curve (perhaps the wrong word) on top of a graph showing the difference between the theoretical work the NV40 does in comparison to the R420.
Can you clarify what you mean by "theoretical work?" If you want theoretical numbers, I'm sure most reviews included every card's theoretical fillrate, etc.
 
ATi has a higher maximum sample count than Nvidia, 6x vs. 4x. Nvidia's 8x mode is a combination of 4x SSAA and 4x MSAA.

ATi's temporal AA is hardly crap... :rolleyes:

Depending on the refresh rate your monitor is capable of, it can deliver quite high FPS and vastly superior edge smoothing. There are some cases where it does not work well, but where it does work well it is quite superior: for example, simulated 18x FSAA as far as your eye can see, at very playable FPS, versus the totally unplayable mixing of 4x modes on the NV40.

Nvidia is not running FP32 in *anything*. They are hacking down to FP16 or lower in every single shader case in every single game on the shelf today. Without exception. Meaning that ATi's FP24 is delivering superior quality in EVERY SINGLE GAME on the shelf today, without exception.

The above statement is true, especially if you zoom in to like 300%... which means that for all practical purposes THEY LOOK THE SAME. :rolleyes:

AF quality: as far as I can tell from reviews, the AF quality between the cards is nearly identical. And that was BEFORE anyone discovered ATi's new method. Did their AF quality suddenly get worse than the reviews showed? No.
 
Hellbinder said:
Nvidia is not running FP32 in *anything*. They are hacking down to FP16 or lower in every single shader case in every single game on the shelf today. Without exception. Meaning that ATi's FP24 is delivering superior quality in EVERY SINGLE GAME on the shelf today, without exception.

If I can prove your above statement is false, will you stay away from this forum?
 
mikechai said:
Hellbinder said:
Nvidia is not running FP32 in *anything*. They are hacking down to FP16 or lower in every single shader case in every single game on the shelf today. Without exception. Meaning that ATi's FP24 is delivering superior quality in EVERY SINGLE GAME on the shelf today, without exception.

If I can prove your above statement is false, will you stay away from this forum?

I'd like to see this.

We only have two DX9 games out right now: Far Cry, which is seeing the 6800 as an FX and thus forcing PS 1.4, and Painkiller (which I believe is DX9).
 
jvd said:
We only have two DX9 games out right now: Far Cry, which is seeing the 6800 as an FX and thus forcing PS 1.4, and Painkiller (which I believe is DX9).

I don't think Painkiller runs any PS 2.0, but I could be wrong. There are a few others that make some use of PS 2.0: Tiger Woods, TR:AoD, Halo.
 
AlphaWolf said:
jvd said:
We only have two DX9 games out right now: Far Cry, which is seeing the 6800 as an FX and thus forcing PS 1.4, and Painkiller (which I believe is DX9).

I don't think Painkiller runs any PS 2.0, but I could be wrong. There are a few others that make some use of PS 2.0: Tiger Woods, TR:AoD, Halo.

Really, Halo runs PS 2.0?

Anyway, Tiger Woods is the last thing I'd use as a benchmark. I remember that you had to trick the game into thinking your card was an Nvidia card to get high resolutions.
 
jvd said:
Really, Halo runs PS 2.0?

Anyway, Tiger Woods is the last thing I'd use as a benchmark. I remember that you had to trick the game into thinking your card was an Nvidia card to get high resolutions.

Yes, Halo has PS 2.0. I really don't know how many PS 2.0 shaders it has, even when they are all forced to PS 2.0.

As for Tiger Woods, I was merely commenting that it makes use of PS 2.0; whether it makes a good benchmark or not wasn't really the point. HB's comment said every game on the shelf, not every game benchmarked.
 
AlphaWolf said:
jvd said:
Really, Halo runs PS 2.0?

Anyway, Tiger Woods is the last thing I'd use as a benchmark. I remember that you had to trick the game into thinking your card was an Nvidia card to get high resolutions.

Yes, Halo has PS 2.0. I really don't know how many PS 2.0 shaders it has, even when they are all forced to PS 2.0.

As for Tiger Woods, I was merely commenting that it makes use of PS 2.0; whether it makes a good benchmark or not wasn't really the point. HB's comment said every game on the shelf, not every game benchmarked.

Can you prove that Halo and Tiger Woods don't run lower shaders?
 
jvd said:
Can you prove that Halo and Tiger Woods don't run lower shaders?

What do you mean? Can I prove that Nvidia cards run PS 2.0 in those games? I really have no idea.

I was merely pointing out that there was a wider selection available for the test than just Far Cry. I would actually be very surprised if Hellbinder is not, in fact, correct with regard to Nvidia products currently on the market. I do expect that once 6800U cards begin to hit the market they will run a DX9 codepath, although I am sure they will still use mixed mode when they can get away with it.
 
AlphaWolf said:
jvd said:
Can you prove that Halo and Tiger Woods don't run lower shaders?

What do you mean? Can I prove that Nvidia cards run PS 2.0 in those games? I really have no idea.

I was merely pointing out that there was a wider selection available for the test than just Far Cry. I would actually be very surprised if Hellbinder is not, in fact, correct with regard to Nvidia products currently on the market. I do expect that once 6800U cards begin to hit the market they will run a DX9 codepath, although I am sure they will still use mixed mode when they can get away with it.

Sorry, it was mikechai who said he could prove that they run it. My bad.
 
fallguy said:
'Holy moly' is about all I can think of after reading the first post.

Well, I think everyone who participated in this discussion was very mature and helpful -- the way it should be, regardless of the post. :)

Ostsol and Pete gave the original poster very informative answers, IMO. 8)
 
Hellbinder said:
Nvidia is not running FP32 in *anything*. They are hacking down to FP16 or lower in every single shader case in every single game on the shelf today.
What?? Tell me you're just joking.
 
jvd said:
mikechai said:
Hellbinder said:
Nvidia is not running FP32 in *anything*. They are hacking down to FP16 or lower in every single shader case in every single game on the shelf today. Without exception. Meaning that ATi's FP24 is delivering superior quality in EVERY SINGLE GAME on the shelf today, without exception.

If I can prove your above statement is false, will you stay away from this forum?

I'd like to see this.

Another movie quote, this time from "Chicken Run":
"You and me both, paps." :)

And that goes for both options. :) Hellbinder proving to be right AND Hellbinder staying out of here: I just can't believe either one.

Hellbinder: careful, you are playing with fire here. Don't burn your fingers too badly over some graphics company. No one at ATI will give a damn if you make yourself a laughing stock while "defending" them.
 