FP blending in R420?

T+L on Savage 2000? HAHAHAHA. Sorry, it just brought back memories. No offense meant to the people who worked on the S2000 who are currently on the board, but man oh man did I have a bad experience with a Viper II that Jon Brittain? sent a few years back.
 
Zeno said:
I don't know what the DX spec says about it....I use OpenGL almost exclusively. NVIDIA has been very good in the past about quickly exposing features of their cards through OpenGL extensions. I expect that this one will be available as soon as the cards are.

NVIDIA state that DirectX / Windows is already ready for this. It may be a case of seeing if it can be used under DX9.0c.
 
Not sure how relevant it is but the accumulation buffer is hardware
accelerated on the R300. Very impressive it is too. :)
Seems to be 64 bit (16 bits per channel) as I notice banding after
256 passes of 32 bit RGBA. Speed is quite good too.
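For reference, this is roughly the loop I was timing - just a minimal sketch that assumes a GL context created with an accumulation buffer; N and the drawScene() call are placeholders:

    // Average N renderings of the scene via the accumulation buffer.
    glClear(GL_ACCUM_BUFFER_BIT);
    for (int i = 0; i < N; ++i)
    {
        drawScene(i);                 // render one pass into the colour buffer
        glAccum(GL_ACCUM, 1.0f / N);  // scale the colour buffer and add it into the accum buffer
    }
    glAccum(GL_RETURN, 1.0f);         // copy the accumulated result back to the colour buffer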
 
glw said:
Not sure how relevant it is but the accumulation buffer is hardware
accelerated on the R300. Very impressive it is too. :)
Seems to be 64 bit (16 bits per channel) as I notice banding after
256 passes of 32 bit RGBA. Speed is quite good too.
I believe accumulation buffer effects are done through the multisample buffers, don't quote me though!
 
FP blending was another thing that was supposed to arrive this generation, but by the looks of it ATI dropped it too. Disappointing :/ Let's just hope the R500 is as revolutionary as the R300, because if they keep this up they're going to fall behind in the feature department.
 
So it looks like most people are pretty sure there's no FP blending, but it's not confirmed yet. OpenGL guy, I was hoping you would be able to answer this. Could you try to find out?

glw, I think the accumulation buffer is sort of a copy and paste type of thing, really only suitable for a whole frame at a time. Pretty much the same limitations as doing a texture read for simulated blending.
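
What I mean by a texture read for simulated blending is roughly the following - just a sketch, where the hdr[] targets, bindRenderTarget() and drawPass() are made-up names and the FP16 textures are assumed to already exist:

    // 'Simulated blending': since you can't blend into an FP surface,
    // the previous result is bound as a texture and added in the pixel shader.
    GLuint hdr[2];        // two FP16 colour textures, ping-ponged (creation omitted)
    int src = 0, dst = 1;
    for (int pass = 0; pass < numPasses; ++pass)
    {
        glBindTexture(GL_TEXTURE_2D, hdr[src]);  // last pass's output becomes an input
        bindRenderTarget(hdr[dst]);              // made-up helper: render into the other texture
        drawPass(pass);                          // shader computes: out = tex(previous) + contribution
        std::swap(src, dst);                     // swap roles for the next pass
    }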

This sucks. R300->R420 is almost as bad as GF3->GF4. You get a massive performance leap, but you only get a couple of new features (longer shader instruction limits, 3Dc) that in my book aren't very important. I16 blending at the very least would have made HDR very usable, as a range of 1/256 to 256 gives a nearly identical effect to FP16 if the source art is done right. They should do it like NVidia, and only have 8 blending units (bandwidth barely even allows 4 64-bit blends per clock anyway).
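
To spell out the range I'm talking about - a 16-bit channel treated as 8.8 fixed point; the helper names are only for illustration:

    // I16 as 8.8 fixed point: smallest step is 1/256, largest value just under 256.
    unsigned short encodeI16(float x)   // x expected in [0, 256)
    {
        return (unsigned short)(x * 256.0f + 0.5f);
    }

    float decodeI16(unsigned short v)
    {
        return v / 256.0f;
    }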

ATI's choice is likely going to hold back HDR adoption. I would definitely get NV40 if my money was on the line, especially if they deliver on the $299 12-pipe version.
 
I would guess that the performance is more likely to be an inhibiting factor. That and you lose AA as well.
 
DaveBaumann said:
I would guess that the performance is more likely to be an inhibiting factor. That and you lose AA as well.

Performance of what? And what hardware are we talking about?
 
"This sucks. R300->R420 is almost as bad as GF3->GF4."

I'm not so sure that's a bad thing. As I recall, when the NV30 was launched the general consensus was that NVIDIA had hit one out of the park with the GF4 but struck out with the GF-FX. The GF4 was a damn fine product for its time -- the Radeon 8500, while supporting more features, was slower than the GF4 and therefore did not get the same "warm" market response the GF4 did.



Shit, the whole X800/6800 thing is sort of reminiscent of the 8500/GF4. Without getting into the old arguments, I think we can all agree the GF4 was more successful than the 8500 because it was a better performer. Judging by that, I'd wager the X800 will be the more "successful" product, as the market seems to respond better to performance than to features.
 
Mintmaster said:
This sucks. R300->R420 is almost as bad as GF3->GF4. You get a massive performance leap, but you only get a couple of new features (...)

You just can't always have both outstanding performance and revolutionary new features. Engines that use all the nifty features from the ground up are developed and used FAR beyond the shelf life of the product that introduced those features (Doom 3 vs. GeForce256 being probably the most extreme example)... so brute-force performance enhancements (pipe doubling, speed bumps, etc.) and efficiency tweaks are more important than some neophiles are willing to admit. They allow for an increased level of detail in games, with a very flat learning curve for developers, via often-looked-down-upon methods like simply scaling triangle counts, texture resolution, blending passes, etc. Just think about how far games with licensed engines have evolved from the original Q3 / UT while using essentially the same, actually very dated, feature set.

(On a side note, that's exactly why dissing id for "just" making engines is a silly thing... it allows licensees to focus development time and assets on content and gameplay while relying on tried technology.)
 
WaltC said:
Cynical? Perhaps. But then I worry that nVidia had no official demos lined up to illustrate all of this amazingly important stuff like PS3.0 and fp blending, so that reviewers could run them, analyze them, and praise them, if deserving of it. I mean, if it's so "important" and so forth, then WHERE ARE THE IN-HOUSE nVIDIA DEMOS to illustrate it???? I think a bit of empirical evidence is definitely in order.

Wasn't it "demoed" in one of the videos released pre-launch? I guess that might not meet your definition of "official" :?

I have to say that I too a) am a little worried that this feature won't be quite as easy to use as one might hope (given the back seat it seems to have been given), and b) think that in many ways it's a much more significant new feature than PS3.0, in that it pretty much makes FP rendering usable vs. unusable for the purposes it is best suited to (e.g. HDR lighting).
 
DaveBaumann said:
I would guess that the performance is more likely to be an inhibiting factor. That and you lose AA as well.

Do you lose AA? Anyway, most HDR uses I have seen use render targets where MSAA can't be used anyway.

Performance on R420 or performance on NV40? Valve punted on HDR in HL2 according to a message on HL2.net forums. I wonder what platform Valve internally develops on to test their HDR rendering performance?
 
LeStoffer said:
Performance of what? And what hardware are we talking about?

FP frame buffers. With NV40, fill-rate will be halved and twice the bandwidth will be needed. There is no AA either. I suspect that these will be the long-term limiting factors.
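
To put rough numbers on that (purely illustrative - the bandwidth figure below is an assumed example, not a quoted spec):

    // An RGBA8 write is 4 bytes per pixel; an FP16 RGBA write is 8 bytes per pixel,
    // so the same bus moves half as many pixels before any Z or texture traffic.
    const double bandwidthBytesPerSec = 35.0e9;                    // assumed example figure
    const double rgba8PixelsPerSec = bandwidthBytesPerSec / 4.0;   // ~8.75 Gpixels/s
    const double fp16PixelsPerSec  = bandwidthBytesPerSec / 8.0;   // ~4.4 Gpixels/s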
 
DaveBaumann said:
FP frame buffers. With NV40, fill-rate will be halved and twice the bandwidth will be needed. There is no AA either. I suspect that these will be the long-term limiting factors.

Okay, but is it really that bad a penalty if we're running medium-long shaders anyway? Besides the AA issue it doesn't sound that bad, but to be honest I don't know how often you'll need to use them (e.g. every frame or just now and then for special effects).
 
DemoCoder said:
Do you lose AA?

With NV40 and floating point frame buffers, yes

Anyway, most HDR uses I have seen use render targets where MSAA can't be used anyway.

Aren't they processed to a separate buffer that is not the final frame buffer?

Performance on R420 or performance on NV40?

For using the FP buffer it will halve the fillrate and require twice the bandwidth.

Valve punted on HDR in HL2 according to a message on HL2.net forums. I wonder what platform Valve internally develops on to test their HDR rendering performance?

Given it was discussed numerous times previously, they developed it with R300s. I doubt they have had a chance to change the code they already had to any other method.
 
LeStoffer said:
Okay, but is it really that bad a penalty if we're running medium-long shaders anyway? Besides the AA issue it doesn't sound that bad, but to be honest I don't know how often you'll need to use them (e.g. every frame or just now and then for special effects).

It wasn't clear to me whether the FP frame buffers could be utilised at the same time as the standard frame buffer - it seemed like it was described as an either / or situation. I don't know how they will operate together yet.
 
DaveBaumann said:
DemoCoder said:
Do you lose AA?
With NV40 and floating point frame buffers, yes

I have alternative information that says otherwise.

For using the FP buffer it will halve the fillrate and require twice the bandwidth.

Yes, but on a shader bound game, the fillrate and bandwidth hits will be partially hidden by the ALUs.
 
DemoCoder said:
I have alternative information that says otherwise.

Well, I asked David Kirk that in front of a room full of editors and he said something along the lines of "No, let's take one thing at a time". Neither Tony Tamasi, Emmett Kilgariff nor anyone else said anything contrary to that.

Yes, but on a shader bound game, the fillrate and bandwidth hits will be partially hidden by the ALUs.

That could be the case. However, the titles we see at the moment that are considered shader limited have completely free AA, indicating that there are still bandwidth limitations - and the bandwidth penalty will be for every pixel, not just those that are shaded. And with the pace of hardware development it seems like shader capabilities will keep pace, while the bandwidth issue doesn't look likely to be solved in the future unless there is a step change in bandwidth availability or bandwidth usage.
 
DaveBaumann said:
DemoCoder said:
I have alternative information that says otherwise.

Well, I asked David Kirk that in front of a room full of editors and he said something along the lines of "No, let's take one thing at a time". Neither Tony Tamasi, Emmett Kilgariff nor anyone else said anything contrary to that.

I have found the higher-level Nvidia execs not to always be reliable sources of information concerning lower-level architectural details. Kirk or Tamasi probably couldn't tell you exactly the penalty for taking a PS3.0 branch, for example. This is not a critique of them. NVidia is too big, with too many projects, for them to be intimately involved in and have perfect memory and understanding of all of them.

In fact, at the NV30 launch, they were either ignorant, deliberately obtuse, or obfuscating when asked detailed questions.
 
Yes, but there were plenty of other NVIDIA personnel there, including developer support, who I suspect would have known. Plus, the whole HDR thing definitely seemed to be Kirk's baby - this entire section of the NV40 architecture presentation was a late addition, IIRC, and wasn't featured in the notes we were given; it was a little breakout given specifically by Kirk.
 