Egg on ATI's face?

DemoCoder said:
You're talking about a 4x increase in bandwidth needed vs. a 2x increase in memory bandwidth.

Seriously, can you explain that?

The way I see it: twice the samples for AA, twice the bandwidth needed for AA.

I have no doubt it will be possible in some games, but I think it will be the exception, not the rule.

The more options the better, no?
 
Scarlet said:
Doomtrooper said:
Especially comparing PS 2.0 to SM 3.0 vs SM 3.0 to PS_2_b, I would be more concerned about VS 3.0.

Ehh? Would you mind explaining that comment?

He's saying the jump from vertex shader 2.0 to 3.0 is a bigger one than the jump from pixel shader 2.0 to 3.0.
 
DaveBaumann said:
With effective compression the required bandwidth isn't linear with the number of samples.

Hmm...is this a hint about upcoming 8x MSAA support for R420?
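
To put a rough (entirely made-up) number on Dave's point: for pixels fully covered by a single triangle, every MSAA sample shares one colour, so a lossless colour compressor only has to move about one sample's worth of data; only edge pixels pay the full per-sample cost. A toy sketch, assuming 4 bytes per colour sample and a made-up 10% edge-pixel share (the helper below is purely illustrative and has nothing to do with the actual R3xx/R4xx compression hardware):

def color_bytes_per_pixel(samples, edge_fraction, bytes_per_sample=4):
    # Interior pixels: all samples share one colour, so the compressor
    # only moves roughly one sample's worth of data.
    interior = (1 - edge_fraction) * bytes_per_sample
    # Edge pixels: the samples may all differ, so the full cost is paid.
    edge = edge_fraction * samples * bytes_per_sample
    return interior + edge

for samples in (2, 4, 6, 8):
    print(samples, round(color_bytes_per_pixel(samples, 0.10), 1))
# 2x: 4.4  4x: 5.2  6x: 6.0  8x: 6.8 bytes/pixel -- doubling the samples from
# 4x to 8x raises the compressed colour traffic by only ~30% in this toy model,
# which is the sub-linear behaviour Dave is hinting at.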
 
bbot said:
According to the Inquirer, Far Cry supports shader 3.0, which the R420 doesn't support. Haha. So much for ATI's efforts to de-emphasize shader 3.0.

I always love it when I come to B3D for the latest news - and then end up reading stuff that's months old :)
 
Yes, but I was using Joe's assumption (no efficiency changes). Sure, I'll grant 8:1 color compression. But 6:1 compression didn't make 6x incredibly viable on the R300 either. The number of games on which I can run 6x is severely limited.


Joe,
You're doing 2x the number of samples, and you have 2x the number of pipelines, so you need enough bandwidth to write 4x as many samples per clock. (Or 3x for the PRO.)

(Of course, you may need to loop back 2-3 times to write the additional samples, so it may not matter; it does limit your max MSAA throughput too.)
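
A quick back-of-the-envelope check of that arithmetic. The absolute pipeline counts below are my own assumption (8 pipes on R300, 16 on R420, 12 on a PRO variant) chosen to match the "2x the pipelines" framing above, not confirmed specs, and "no efficiency changes" is assumed as in the post:

# Illustrative only: samples written per clock, assuming one pixel per pipe
# per clock and no efficiency changes.
r300_pipes, r300_aa = 8, 4        # R300 with 4x MSAA (assumed pipe count)
r420_pipes, r420_aa = 16, 8       # hypothetical R420 with 8x MSAA
r420pro_pipes = 12                # hypothetical 12-pipe PRO variant

base = r300_pipes * r300_aa                 # 32 samples/clock
print(r420_pipes * r420_aa / base)          # 4.0 -> 4x the sample writes
print(r420pro_pipes * r420_aa / base)       # 3.0 -> the "3x for the PRO" case
# Memory bandwidth, by contrast, only roughly doubles -- hence the 4x vs. 2x
# gap, unless the ROPs loop back over extra clocks, which caps peak MSAA
# throughput instead.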
 
991060 said:
I'm sure R420 can beat NV40 when the latter is using 8xAA. ;)

Does anybody know why nVIDIA didn't implement 6xAA? Is it hard to implement or protected by ATi's patent?

What do you mean? NV 8xAA vs. ATI no AA?

I am rather sure that ATI beats NVidia when both use 8x AA. Actually, if I expect ONE good thing from the R420, it is a very good and fast 8xAA.
 
DemoCoder said:
You're doing 2x the number of samples, and you have 2x the number of pipelines, so you need enough bandwidth to write 4x as many samples per clock. (Or 3x for the PRO.)

Well, obviously...I know that if you want to move from being bandwidth limited to being fill-rate limited, the bandwidth is going to be inadequate to do so. I don't expect to be fill-rate limited on the R420 in 8X (if it exists) or even 6X MSAA modes.

But AFAIK, 4X MSAA is typically bandwidth limited on the R3xx. So while of course doubling the available bandwidth will not change the situation to being fill-rate limited on the R420, surely you can potentially do double the MSAA sample rate?
 
I think the reasons there is no 6x on the NV40 are severalfold: they'd probably need to loop back on the ROPs a third time, and 6x samples would require programmable sample positions, a bigger sparse grid, different compression logic to handle it, a different MSAA buffer layout, plus deeper, bigger buffers at the end.

I think they made a trade off: They decided that 4xRGMS was "good enough" and that they could use the transistors saved to implement other features or performance boosts. I happen to think they made the right decision, because with the exception of gamma correction, 4xRGMS is on par with the R300, and I don't think the higher modes deliver as much visual impact (diminishing IQ returns) compared to their cost. Obviously, people will disagree and claim 6x and 8x are monstrously better. But I think NV is betting that for most people, 4x will be good enough, especially the mid-range.

i.e. NV is betting 4x is the "sweet spot" for AA.
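
For what it's worth, here's a minimal sketch of what a "rotated/sparse grid" means in this context. The coordinates are invented for illustration and are not the real NV40 or R300 sample positions:

# Illustrative sample patterns only -- not actual hardware positions.
# "Sparse"/rotated-grid patterns put every sample on its own row and column,
# so near-horizontal and near-vertical edges get more distinct gradient steps
# than an ordered grid would give.
rgms_4x   = [(0, 1), (1, 3), (2, 0), (3, 2)]                  # 4x4 subpixel grid
sparse_8x = [(0, 5), (1, 1), (2, 7), (3, 3),
             (4, 0), (5, 6), (6, 2), (7, 4)]                  # 8x8 subpixel grid

def is_sparse(pattern):
    xs = [p[0] for p in pattern]
    ys = [p[1] for p in pattern]
    return len(set(xs)) == len(pattern) and len(set(ys)) == len(pattern)

print(is_sparse(rgms_4x), is_sparse(sparse_8x))   # True True
# More samples per pixel means a finer subpixel grid, more position storage per
# sample, and (as noted above) different compression and ROP logic -- the extra
# hardware cost being traded away when you stop at 4x.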
 
DemoCoder said:
I think they made a trade off: They decided that 4xRGMS was "good enough" and that they could use the transistors saved to implement other features or performance boosts.

Well, I'd say that's obviously what they did. The entire design (from any vendor) is an exercise in trading off...

I happen to think they made the right decision, because with the exception of gamma correction, 4xRGMS is on par with the R300...

Ahhh...but it's not going to be competing against the R300...

...and I don't think the higher modes deliver as much visual impact (diminishing IQ returns) compared to their cost.

It is quite possible that nVidia could have made the right trade-off for their architecture...and at the same time fall behind ATI.

Obviously, people will disagree and claim 6x and 8x are monstrously better.

It's not even that.

If performance is close across the board...it's the "little things" that tend to matter (and yes, blown out of proportion!). Which one has more advanced shader support...which one has "better" AA, etc.

i.e. NV is betting 4x is the "sweet spot" for AA.

But that's been the sweet spot for the past year and a half....which is the point. It's time the sweet spot was raised. :)

I personally think that it won't be until we hit 12x or 16x "sparse" type of AA at 1600x1200 that we really reach the point where the additional cost will be hard to justify given today's display technology.

Until then, it's just going to be a matter of prioritizing.
 
I'm hoping performance is close across the board, actually, because too much of a differential bifurcates the market further for developers. (Consider HL2 and the FX series.)

The issue then comes down to what extras you prefer. I might decide to go with an NV44 for HTPC because of the video processor, an R420 for gaming, and an NV40 for development.

But I expect 8xAA to be like PS3.0, something which won't be a common usage scenario, but nice to have *when* it can be used.
 
DemoCoder said:
But I expect 8xAA to be like PS3.0, something which won't be a common usage scenario, but nice to have *when* it can be used.

Well, I hope we're not getting ahead of ourselves...I'm just hoping for the time being that 8X is offered. ;)

But getting back to the original sort of topic...do you still not agree that an R420 that doubles the bandwidth of an R300 could potentially make 8X MSAA as playable on the R420 as 4X is on the R300?
 
Yes, I'll agree, as long as you're not fill-rate or texture/framebuffer bandwidth limited, e.g. <1600x1200, low overdraw, not much blending, etc.

I think 1024x768 on UT2003/Q3A engine games stands the best chance. I have problems playing Day of Defeat @ 1600x1200 w/6xFSAA on some maps, and that's a Quake1/HL1 engine game.
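
A rough idea of why the lower resolution helps: just the raw multisampled colour + Z footprint, assuming 32-bit colour and 32-bit Z/stencil per sample and ignoring compression, the front buffer, textures, and everything else (the numbers are only a ballpark sketch):

def msaa_buffer_mb(width, height, samples, bytes_per_sample=8):
    # 4 bytes colour + 4 bytes Z/stencil per sample (assumed, uncompressed)
    return width * height * samples * bytes_per_sample / (1024 * 1024)

print(round(msaa_buffer_mb(1024, 768, 8)))    # ~48 MB
print(round(msaa_buffer_mb(1600, 1200, 6)))   # ~88 MB
print(round(msaa_buffer_mb(1600, 1200, 8)))   # ~117 MB
# ...and the bandwidth that has to touch these buffers every frame scales with
# this footprint, which is why 1024x768 stands a much better chance.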
 
DemoCoder said:
I might decide to go with an NV44 for HTPC because of the video processor, but an R420 for gaming.

I'm pretty much in total agreement here, DC, and, if things are as I believe, exactly where I'm headed.....Did you mean NV40?
 
martrox said:
DemoCoder said:
I might decide to go with an NV44 for HTPC because of the video processor, but an R420 for gaming.

I'm pretty much in total agreement here, DC, and, if things are as I believe, exactly where I'm headed.....Did you mean NV40?

No, assuming the NV44 is the "cut down" version of the NV40, the NV44 would be a better deal for the HTPC than the NV40. (Think noise.)

This is also assuming, of course, that the video processor isn't cut out of the lower end part. ;)
 
Joe DeFuria said:
No, assuming the NV44 is the "cut down" version of the NV40, the NV44 would be a better deal for the HTPC than the NV40. (Think noise.)

This is also assuming, of course, that the video processor isn't cut out of the lower end part. ;)

Why not wait for an All-in-Wonder version of the R420Pro or SE? ;)
 
CyFactor said:
Why not wait for an All-in-Wonder version of the R420Pro or SE? ;)

Of course, we know nothing about the video capabilities of the R420 and its derivatives, so I certainly wouldn't rule any of them out. (And in all honesty, we don't really know how effective nVidia's video circuitry is either...)
 
Joe DeFuria said:
CyFactor said:
Why not wait for an All-in-Wonder version of the R420Pro or SE? ;)

Of course, we know nothing about the video capabilities of the R420 and its derivatives, so I certainly wouldn't rule any of them out. (And in all honesty, we don't really know how effective nVidia's video circuitry is either...)

Well, I know Tom's Hardware (yeah, yeah, I know) did some tests and pretty much concluded that it sucked compared to ATI's video processing. When they asked NV about this, they got the line about early drivers. At least this is what I recall reading on launch day. Personally, I am going to take a wait-and-see approach on that video processor. If they can get it doing all they say it can do, it will be most impressive, but I am not getting my hopes up at this point in time.
 