FP blending in R420?

Hyp-X, I totally agree with your predictions.

PS 3.0 doesn't do a whole lot for you (I'd say visually it'll do less than going from PS 1.1 to PS 1.4). Too bad this is what the focus is on when people say SM 3.0.

VS 3.0 is cool, but techniques using vertex texturing are a far more radical step than the initial plunge into pixel shaders back in 2001, and those took ages to see widespread use. The true advantages of VS 3.0 won't be used, unfortunately, for a long time. Kudos to NVidia for making it available, at least.
 
DaveBaumann said:
So what? Why do we need such insane framerates or super high resolutions anyway?

Look at the performance of something like GT2 in 3DMark03 - it's not insane, and it's still bandwidth limited enough to have large performance drops with FSAA. And that's only a relatively complex scene using DX7/DX8.1 features.
That's why I don't particularly like stencil shadowing and the many rendering passes involved. I think the performance drop associated with Doom3-style rendering is not worth the gain in image quality. I'll take HDR over that any day.

Don't you think most would agree that FarCry looks a lot better than GT2? And it's got great framerates. 3DMark03 was designed as a stress test; it doesn't have the high image-quality-to-framerate ratio that most games strive for.

HDR rendering is very flexible since the tone mapping is a post-process. One way to keep the costs and drawbacks down is to render into a normal 8-bit-per-channel buffer with some sort of cheap tone mapping (simple math, clamping, a texture lookup, whatever you want). Do this at high res with AA. Then render again into a lower-res FP16 or I16 buffer without AA, and do all your post-processing effects there. Halos and glare effects don't need as high a resolution anyway, since they're spatially low-frequency effects. Finally, blend the two together.
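The split-buffer idea above can be sketched per pixel. This is a hypothetical CPU-side sketch in Python, not anything a specific GPU or game actually does: the function names are made up, they stand in for what would really be pixel shader code, and the Reinhard-style operator is just one example of a "cheap tone mapping".

```python
# Hypothetical per-pixel sketch of the two-buffer HDR compositing idea.
# In a real engine these would be pixel shaders, not Python functions.

def tone_map(hdr):
    """Cheap tone mapping: Reinhard operator, maps [0, inf) into [0, 1)."""
    return hdr / (1.0 + hdr)

def bright_pass(hdr, threshold=1.0):
    """Keep only the over-bright energy that feeds the low-res glow pass."""
    return max(hdr - threshold, 0.0)

def composite(ldr, glow, glow_strength=0.5):
    """Blend the high-res tone-mapped pixel with the upsampled glow."""
    return min(ldr + glow_strength * tone_map(glow), 1.0)

# Per-pixel example: a very bright source (HDR intensity 8.0).
hdr_value = 8.0
ldr = tone_map(hdr_value)      # ~0.889 -- fits in an 8-bit buffer
glow = bright_pass(hdr_value)  # 7.0 of spill energy for the glow buffer
final = composite(ldr, glow)   # saturates to 1.0 -> the pixel blooms
```

Since the glow term is spatially low frequency, the `bright_pass`/`composite` work can happen at a fraction of the screen resolution without visible loss.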

HDR doesn't have to have a crippling effect on framerate, but it makes things look much more real, cinematic, and engaging. I think it's the lowest hanging fruit in the next step to realism.
 
Mintmaster said:
So it looks like most people are pretty sure there's no FP blending, but it's not confirmed yet. OpenGL guy, I was hoping you would be able to answer this. Could you try to find out?

glw, I think the accumulation buffer is sort of a copy and paste type of thing, really only suitable for a whole frame at a time. Pretty much the same limitations as doing a texture read for simulated blending.

This sucks. R300->R420 is almost as bad as GF3->GF4. You get a massive performance leap, but you only get a couple of new features (longer instruction length, 3Dc) that in my book aren't very important. I16 blending at the very least would have made HDR very usable, as a range of 1/256 to 256 gives a nearly identical result to FP16 if the source art is done right. They should do it like NVidia and only have 8 blending units (bandwidth barely even allows 4 64-bit blends per clock anyway).
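To make the 1/256-to-256 range concrete, here is a hypothetical sketch of such an I16 format as 8.8 fixed point (the exact bit split is my assumption, and the function names are made up for illustration):

```python
# Hypothetical sketch of an I16 blending format as 8.8 fixed point
# (8 integer bits, 8 fractional bits), which yields the 1/256-to-256
# range mentioned above.

SCALE = 256  # 2^8 fractional steps per unit of intensity

def i16_encode(v):
    """Quantize a non-negative intensity into a 16-bit fixed-point word."""
    return min(int(round(v * SCALE)), 0xFFFF)

def i16_decode(word):
    return word / SCALE

smallest = i16_decode(1)       # 1/256, the smallest non-zero intensity
largest = i16_decode(0xFFFF)   # 255.996..., just shy of 256

# Round-tripping any in-range value loses at most half a step (1/512):
v = 3.14159
err = abs(i16_decode(i16_encode(v)) - v)
```

With source art scaled so that mid-range intensities sit near 1.0, that range covers most of what the poster argues FP16's wider exponent range would buy in practice.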

ATI's choice is likely going to hold back HDR adoption. I would definitely get NV40 if my money were on the line, especially if they deliver on the $299 12-pipe version.

Am I mistaken or are there still shaders in Shadermark that cannot run on Nvidia hardware and do run on ATI hardware? I remember HDR not being enabled on Nvidia cards but that it did work on ATI cards?? Am I mistaken in this??

Please tell me...
 
Socos said:
Am I mistaken or are there still shaders in Shadermark that cannot run on Nvidia hardware and do run on ATI hardware? I remember HDR not being enabled on Nvidia cards but that it did work on ATI cards?? Am I mistaken in this??

Please tell me...

That's supposedly only because of the current drivers and the fact that DirectX 9.0c isn't released yet. (Or just one of those.)
 
Mintmaster said:
So it looks like most people are pretty sure there's no FP blending, but it's not confirmed yet. OpenGL guy, I was hoping you would be able to answer this. Could you try to find out?
With comments like:
This sucks. R300->R420 is almost as bad as GF3->GF4. You get a massive performance leap, but you only get a couple of new features (longer instruction length, 3Dc) that in my book aren't very important.
Why should I feel compelled to answer anything?

I know the answer, there is no "finding out".
 
We don't have any evidence that the NV40 can execute any of these features well; in fact, so far we have only seen the opposite. I see *drivers* used a lot as an excuse. How about we wait and see whether a lot of these features boasted by NV40 perform like they should, or are just checkbox features.
 
Doomtrooper said:
We don't have any evidence that the NV40 can execute any of these features well; in fact, so far we have only seen the opposite. I see *drivers* used a lot as an excuse. How about we wait and see whether a lot of these features boasted by NV40 perform like they should, or are just checkbox features.

Exactly--the extent and efficacy of NV40's PS 3.0 implementation has yet to be demonstrated, so that we can separate the PR from the actual functionality, should there be differences.
 
OpenGL guy said:
I know the answer, there is no "finding out".

Why not post it then? Or is this uber-confidential ATI information again, you know, the kind of information you'd hide from developers as to what formats and blending modes they can use?


Accusing Mint of IHV bias? He used to be a big basher of the NV3x.
 
DemoCoder said:
OpenGL guy said:
I know the answer, there is no "finding out".

Why not post it then? Or is this uber-confidential ATI information again, you know, the kind of information you'd hide from developers as to what formats and blending modes they can use?


Accusing Mint of IHV bias? He used to be a big basher of the NV3x.

No, I think he's accusing Mint of being quite rude and disrespectful for someone who is seeking an answer... kinda like you.
 
You call this

OpenGL guy, I was hoping you would be able to answer this. Could you try to find out?

Rude and disrespectful?!?!

OpenGL Guy is the one who has become rude and disrespectful. Many of his posts lately have become snide, sniping remarks at people, and I have received PMs to that effect from several forum members who were frankly surprised that an ATI employee has been acting like that in public forums.

Try again, Joe.
 
DemoCoder said:
You call this
OpenGL guy, I was hoping you would be able to answer this. Could you try to find out?
Rude and disrespectful?!?!

Please actually read what OpenGL guy wrote.
Do you really need this to be spelled out to you?
OK, hint: Ctrl+F, then type s, u, c, k, s.
 
Hyp-X said:
I don't like the "SM" approach at all. We are talking two different things:

PS2.0 vs PS3.0
Don't expect any visual difference here. PS3.0 is more flexible and it will be easier to develop on PS3.0 capable hardware - but at the end of the day (read at the optimization stage) it might turn out that most of the shaders run faster with the ps_2_a profile.
So it won't affect consumers that much - if at all.
First of all, consider that PS 3.0, with current hardware, effectively also means support for FP blending/filtering (ATI will apparently also include FP blend support on their SM 3.0 parts).

Regardless, some PS 3.0 shaders will just be infeasible to run in PS 2.0. While it may be fundamentally possible to do the same math, it would be vastly too slow to be practical. These shaders won't appear in games for some time, though, so your statement is only true in the near term.

And don't forget that PS 3.0 shaders are easier to develop, given the additional freedom available, so we may see game developers start writing advanced effects as PS 3.0 shaders without bothering to write a PS 2.0 fallback of equivalent quality (for example, when a shader exceeds one of the various PS 2.0 limits).

And I don't believe for an instant that any shaders will run faster with the ps_2_a profile than they would with the ps_3_0 profile.
 
So what? Mint's complaint "this sucks" is not an attack on OpenGL Guy personally; it's his expression of disappointment with the lack of features. This is not "rude and disrespectful" towards OGL Guy. OGL Guy fished Mint's post from an earlier context. Mint is no Chalnoth; in fact, he's the anti-Chalnoth.


As to why I got PMed with respect to his behavior, it's because in two separate threads I posted a purely technical post and got sniped by him. For example, with respect to gamma-corrected AA, I gave a method by which it could be implemented on other cards, and he started off his response with a snide remark. In an earlier post that day, he did the same thing. This generated the PMs.
 
Where is this directed at OpenGL Guy? Did you read the quote? He is saying the same thing about NV and ATI (in fact, more about NV).
 
DC...This is a place where you would best be wary..... It is in no one's favor to be doing what you are trying to do here. Let OpenGL & Mint deal with this themselves...... It is absolutely none of your concern......
 
Well, let's see how Mint reacts. As for wary, I am anonymous here and don't represent any company. I'll be the first to admit I am a real bastard sometimes. But I'm a real bastard whose behavior doesn't reflect on my company.
 
Evildeus said:
Where is this directed to Opengl Guy?
I wonder.
If someone dissed the effort of your company in such tone, how forthcoming would YOU be in helping this person, his/her previous stance towards nVidia notwithstanding?
 