MSAA + HDR benchmarks ?

Humus said:
Oh, it's pretty dumb. Multisampling isn't going anywhere. It will still be the most efficient way to deal with polygon edges, and it's easy to use as well. My money is on it still being the preferred method for dealing with edge aliasing ten years from now.
I assume this opinion of yours (sullied by a willingness to bet it) is based on the fact that there exists a (fps) competition in 3D, and not on whether it really is the "preferred" method.

10 years? I'll take that bet.
 
Humus said:
My money is on it still being the preferred method for dealing with edge aliasing ten years from now.
Seconded.

(with the caveat that we may see minor variations of the technique, but still basically multisampling)
 
Humus said:
Oh, it's pretty dumb. Multisampling isn't going anywhere. It will still be the most efficient way to deal with polygon edges, and it's easy to use as well. My money is on it still being the preferred method for dealing with edge aliasing ten years from now.
Ten years is an awfully long time in this business, but I'd agree that it'll still be the preferred method 3 to 4 years from now.
But it would be cool if the shader had access to the multisample mask.

SugarCoat, MSAA with FP16 render targets is a very simple and straightforward extension of MSAA with ordinary RGBA8 render targets. It just comes at a (transistor and bandwidth) cost.
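To put a rough number on the bandwidth side of that cost, here's a back-of-envelope sketch of the raw color-buffer footprint, assuming a naive layout with no color compression (hardware of this era compresses multisampled color, so treat these as upper bounds):

```cpp
// Back-of-envelope color-buffer sizes, assuming a naive layout with no
// framebuffer compression; real hardware compresses multisampled color,
// so these are upper bounds on the storage/bandwidth cost.
#include <cstdio>

int main()
{
    const int width = 1600, height = 1200;       // 16x12, as in the benchmarks below
    const int bytesPerPixel[2] = { 4, 8 };       // RGBA8 vs. FP16 (A16B16G16R16F)
    const char* names[2] = { "RGBA8", "FP16 " };

    for (int f = 0; f < 2; ++f)
        for (int samples = 1; samples <= 4; samples *= 4)   // 1x and 4x MSAA
        {
            double mb = double(width) * height * bytesPerPixel[f] * samples / (1024.0 * 1024.0);
            printf("%s at %dx: %5.1f MB of color data\n", names[f], samples, mb);
        }
    return 0;
}
```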
 
SugarCoat said:
I see this as something ATI may have on Nvidia well into 2006. Again, though, I see anything that one company has over the other, especially something that may play a major role in future game engines, as being detrimental to us, the gamers, in the end. I hope ATI could actually let Nvidia in on how they did it (they did open 3Dc, after all) and sort of get a standard going. I just really don't want to see 3 or 4 completely different ways to do HDR + your specialty, with ATI doing half, Nvidia doing the other, and us waiting for patches for either/or on specific games.

Maybe there will only be one way? Isn't 3Dc in DirectX now? Maybe ATI has some leverage to get their standard adopted as the one, leaving NV no choice but to follow?

Also, there is the Get In The Game program ATI has. They shouldn't worry about compatibility; when I buy a card from either camp, I get what that camp has to offer. Current HDR+MSAA is a huge plus for ATI and they should exploit it. They should be working their asses off to get it working in the top 10 games that use HDR, and then do all the rest.

It would then be Nvidia's choice to either follow or do their own implementation. But would the majority of ATI users care? No.

I know it's a dim view, but it's the same as Blu-ray vs HD DVD, right? One always comes out first, the other comes out second, and they fight over which becomes the primary standard.

ATI is first out of the gate and should go out guns blazing. That way, their standard should be adopted first.
 
demonic said:
Maybe there will only be one way? Isn't 3Dc in DirectX now? Maybe ATI has some leverage to get their standard adopted as the one, leaving NV no choice but to follow?

Also, there is the Get In The Game program ATI has. They shouldn't worry about compatibility; when I buy a card from either camp, I get what that camp has to offer. Current HDR+MSAA is a huge plus for ATI and they should exploit it. They should be working their asses off to get it working in the top 10 games that use HDR, and then do all the rest.

It would then be Nvidia's choice to either follow or do their own implementation. But would the majority of ATI users care? No.

I know it's a dim view, but it's the same as Blu-ray vs HD DVD, right? One always comes out first, the other comes out second, and they fight over which becomes the primary standard.

ATI is first out of the gate and should go out guns blazing. That way, their standard should be adopted first.

This is not an ATI standard. DirectX 9 already defines how to use AA with a non-backbuffer format like FP16. That means every game out there that uses HDR could have had built-in support for HDR+AA from day one; no developer had to wait for hardware before implementing it.
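To illustrate the point, here's a minimal sketch of the D3D9 calls involved; the helper function and its fallback behavior are my own illustration, and whether a given format/sample-count combination actually works is exactly what the runtime check reports for the hardware at hand:

```cpp
// Minimal sketch of what DX9 already defines for AA on a non-backbuffer
// format such as FP16: query support, create a multisampled render target,
// then resolve it before tone mapping. 'd3d' and 'device' are assumed to
// be valid IDirect3D9 / IDirect3DDevice9 pointers created elsewhere.
#include <d3d9.h>

IDirect3DSurface9* CreateHdrMsaaTarget(IDirect3D9* d3d, IDirect3DDevice9* device,
                                       UINT width, UINT height)
{
    DWORD qualityLevels = 0;
    if (FAILED(d3d->CheckDeviceMultiSampleType(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                               D3DFMT_A16B16G16R16F,   // FP16 color
                                               FALSE,                  // fullscreen
                                               D3DMULTISAMPLE_4_SAMPLES,
                                               &qualityLevels)))
        return NULL;   // this hardware/driver doesn't expose FP16 MSAA

    IDirect3DSurface9* target = NULL;
    if (FAILED(device->CreateRenderTarget(width, height, D3DFMT_A16B16G16R16F,
                                          D3DMULTISAMPLE_4_SAMPLES,
                                          0,       // quality 0 is valid whenever the type is supported
                                          FALSE, &target, NULL)))
        return NULL;
    return target;
}

// After rendering the scene into 'target', resolve it into a single-sample
// FP16 surface for the tone-mapping pass:
//   device->StretchRect(target, NULL, resolveSurface, NULL, D3DTEXF_NONE);
```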
 
Xmas said:
Ten years is an awfully long time in this business, but I'd agree that it'll still be the preferred method 3 to 4 years from now.
But it would be cool if the shader had access to the multisample mask.

Well, there are other elements that have survived more than ten years, like texturing, though plenty of improvements have been made along the way to quality, performance and features (anisotropic filtering, swizzling in memory, compression, LOD bias, dependent reads, lookups with explicit gradients, etc.). I see multisampling as almost as fundamental to 3D rendering as texturing, and expect it to survive equally well, with lots of improvements along the road in performance, quality and programmability.
 
Humus said:
Well, there are other elements that have survived more than ten years, like texturing, though plenty of improvements have been made along the way to quality, performance and features (anisotropic filtering, swizzling in memory, compression, LOD bias, dependent reads, lookups with explicit gradients, etc.). I see multisampling as almost as fundamental to 3D rendering as texturing, and expect it to survive equally well, with lots of improvements along the road in performance, quality and programmability.
I was thinking more along the lines of a complete shift in rendering methodology, like using voxels or raytracing. But I realize that, wrt raytracing, when you cast multiple rays and calculate only one color if they all hit the same surface, that could be called multisampling, too.
 
Next week we'll see Valve's Lost Coast. I'm particularly interested in whether they really did drop FP-blending HDR (according to the Driverheaven LC article they did), or whether they've added it back to show how the X1K can handle it with MSAA.
 
Humus said:
Oh, it's pretty dumb. Multisampling isn't going anywhere. It will still be the most efficient way to deal with polygon edges, and it's easy to use as well. My money is on it still being the preferred method for dealing with edge aliasing ten years from now.

I get the feeling from those somewhat dubious statements here:

http://www.bit-tech.net/bits/2005/07/11/nvidia_rsx_interview/3.html

that NVIDIA might have been (or still is) steering developers toward combining float HDR and MSAA within their applications.
 
Only really playable at 12x10 and below, though, which really isn't that surprising, but I'm sure the game looks rather nice at that res with HDR and 4xAA.
 
The X1800s appear to lag behind the 7800s in Far Cry HDR performance. Dave's test system was pretty similar between the two reviews, with just the motherboard (and PSU) changing.

Far Cry HDR, fps (XT/GTX, XL/GT)
8x6: 91/77, 83/77
10x7: 78/77, 67/74
12x10: 52/67, 43/57
16x12: 36/47, 28/39

The 7800s are basically a res above the X1800s at the higher reses. It's interesting to see the 7800 capped (by their FP blend "fillrate?"*), whereas the X1800s scale more--or, rather, drop more. Is more MC tweaking in order?

On the plus side, it doesn't seem they drop further than when adding AA+AF to non-HDR Far Cry. Of course, that's judging only with the XL's 16x12 figure (with and without AA+AF and with and without HDR), as both the XT and XL are capped at 100 up to that res: -27% w/o HDR, -26% with.

-----
* I'm not quite sure how to interpret this. IIRC, NV40+ can do half as many blends as it has ROPs. I just realized R520 has the same number of ROPs as G70 and thus possibly the same blend/ROP ratio. Only, R520 is clocked higher than G70. So, 400-430MHz for G70, and 500-625 for R520. 90fps/77fps ~= 500MHz/430MHz, but that's the XL vs. the GTX. I can't quite square the rest. Kirk (tangentially) says FP rendering fillrate scales with bandwidth, but that doesn't square with my guess that both parts can do 8 blends per clock with the reality that the XL scores higher than the GT (at low [non-ROP-limited?] res), despite (theoretically) having the same available bandwidth. Yes, the XL's core (and thus ROPs?) is clocked higher than the GT's, but the GT is stuck at 77fps from 6x4 to 10x7, whereas the XL starts higher and drops fillrate throughout. OTOH, I see from Dave's G70 preview that the GTX w/o HDR is locked at 81fps from 6x4 to 16x12, and up to 10x7 it only loses 4fps w/HDR. Compare that with the XT, which takes a similarly slight hit from HDR immediately yet a pronounced one starting with 10x7, or one res before the GTX stumbles significantly.

Where was I? Right, in the land of confusion.
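For what it's worth, here is the footnote's back-of-envelope comparison written out as a tiny sketch; the blends-per-clock figure is the footnote's own IIRC guess, and the clocks are read from the ranges it quotes (GT/GTX at 400/430 MHz, XL/XT at 500/625 MHz), so treat all of them as assumptions rather than confirmed specs:

```cpp
// The footnote's arithmetic, written out. The 8-blends-per-clock figure is
// the footnote's own guess, and the clocks are assumptions taken from the
// ranges it quotes; the fps numbers are the 8x6 Far Cry HDR results above.
#include <cstdio>

int main()
{
    struct Part { const char* name; double clockMHz; double fps8x6; };
    const Part parts[4] = {
        { "X1800 XT", 625.0, 91.0 },
        { "X1800 XL", 500.0, 83.0 },
        { "7800 GTX", 430.0, 77.0 },
        { "7800 GT ", 400.0, 77.0 },
    };
    const double blendsPerClock = 8.0;   // assumption: identical on both architectures

    // If fps at a non-ROP-limited resolution tracked FP16 blend rate, the fps
    // ratios should roughly follow the clock ratios. They only partly do.
    for (const Part& p : parts)
        printf("%s: ~%4.1f Gblends/s theoretical, %4.1f fps at 8x6\n",
               p.name, p.clockMHz * blendsPerClock / 1000.0, p.fps8x6);
    return 0;
}
```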
 
Isn't HDR rendering supposed to be one of the X1800's strong suits? "SM3 done right" and all that.

Then why does it lose to the GTX in HDR rendering, and why does it suffer a 50% performance hit when doing HDR?

Where did all this "superior SM3" and "superior efficiency" go? Shouldn't we be seeing something more like a 25-35% performance hit for it to really be considered efficient?
 
Hellbinder said:
Isn't HDR rendering supposed to be one of the X1800's strong suits? "SM3 done right" and all that.

IIRC:

1) The X1K series supports a superset of HDR "methods" compared to the latest nVidia offerings: in other words, it's more flexible. There are lots of choices the developer can make to trade off performance and quality depending on their needs.

2) The X1K can support HDR and AA no matter what "method" is used; nVidia's hardware cannot.

3) I don't think it's ever been claimed that, given the same method, HDR would be faster or slower on one architecture than the other.
 
I suppose what we will see is Catalyst AI taking over the app via application detection and forcing the 10:10:10:10 (or whatever that format is; I don't have time to look at the moment) HDR method that the X1K series supports.

That will probably end up being ultra fast and looking identical or nearly identical?
 
Hellbinder said:
Isn't HDR rendering supposed to be one of the X1800's strong suits? "SM3 done right" and all that.

Then why does it lose to the GTX in HDR rendering, and why does it suffer a 50% performance hit when doing HDR?

Where did all this "superior SM3" and "superior efficiency" go? Shouldn't we be seeing something more like a 25-35% performance hit for it to really be considered efficient?

I am rather impressed with the numbers with 4xAA+HDR.
The R520 is a really interesting part from a tech POV; the only problem is that it's really late to the table.

One thing I wonder, Hellbinder, is why you praised ATI tech before and then just made a U-turn. Have you lost money buying their stock?
 
Hellbinder said:
I suppose what we will see is Catalyst AI taking over the app via application detection and forcing the 10:10:10:10 (or whatever that format is; I don't have time to look at the moment) HDR method that the X1K series supports.

I don't believe the driver can force this type of change without potentially serious rendering side effects, if it can be done at all. AFAIK it has to be done at the developer level.

What ATI can do is work with devs to enable new HDR paths that run faster and/or support AA along with HDR.
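To make that concrete, here's a rough sketch of what such a developer-level path choice could look like in D3D9; the preference order and the helper function are illustrative, not taken from any actual game:

```cpp
// Rough sketch of a developer-side HDR path choice in D3D9: the application,
// not the driver, picks the render-target format based on what the runtime
// reports for the installed hardware. The preference order is illustrative.
#include <d3d9.h>

D3DFORMAT PickHdrFormatWithMsaa(IDirect3D9* d3d, D3DFORMAT adapterFormat)
{
    const D3DFORMAT candidates[] = {
        D3DFMT_A16B16G16R16F,   // FP16: full HDR range; MSAA support varies by hardware
        D3DFMT_A2R10G10B10,     // 10:10:10:2 integer: less range, smaller footprint
        D3DFMT_A8R8G8B8         // LDR fallback
    };

    for (D3DFORMAT fmt : candidates)
    {
        // Usable as a render target at all?
        if (FAILED(d3d->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                          adapterFormat, D3DUSAGE_RENDERTARGET,
                                          D3DRTYPE_SURFACE, fmt)))
            continue;
        // ...and with 4x multisampling?
        if (SUCCEEDED(d3d->CheckDeviceMultiSampleType(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                                      fmt, FALSE,
                                                      D3DMULTISAMPLE_4_SAMPLES, NULL)))
            return fmt;
    }
    return D3DFMT_A8R8G8B8;     // nothing better available
}
```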
 
Hellbinder said:
Isn't HDR rendering supposed to be one of the X1800's strong suits? "SM3 done right" and all that.

Then why does it lose to the GTX in HDR rendering, and why does it suffer a 50% performance hit when doing HDR?
Are you certain the application is doing the same rendering on both cards?
Where did all this "superior SM3" and "superior efficiency" go? Shouldn't we be seeing something more like a 25-35% performance hit for it to really be considered efficient?
You can't make blanket statements about what is efficient and what is not when you don't know how the application changes its rendering between modes.
 