NV35 - already working, over twice as fast as NV30?

So both R350 and NV35 will work with a true 8 pipelines (8 pixels output per clock), and the difference in theoretical single-pixel fillrate is minimal (R350 = 380 MHz, NV35 = 400 MHz). That's a difference of about 5 percent.
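To make the comparison concrete, here's a quick sketch of the arithmetic behind that "about 5 percent" claim, using the rumored clocks and pipeline counts from the post (theoretical peak only, ignoring memory bandwidth and everything else that matters in practice):

```python
# Theoretical single-texture fillrate = pixel pipelines * core clock.
# Clocks and pipeline counts are the rumored figures from the thread.
def fillrate_mpixels(pipelines, clock_mhz):
    """Peak pixel fillrate in Mpixels/s."""
    return pipelines * clock_mhz

r350 = fillrate_mpixels(8, 380)   # 8 * 380 = 3040 Mpixels/s
nv35 = fillrate_mpixels(8, 400)   # 8 * 400 = 3200 Mpixels/s
diff_pct = (nv35 - r350) / r350 * 100
print(f"R350: {r350}, NV35: {nv35}, difference: {diff_pct:.1f}%")  # ~5.3%
```

Since both parts run the same 8 pipelines, the fillrate gap is just the clock ratio, which is why a 425 or 450 MHz partner board would erase it entirely.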
Not so true after today's revelations about PowerColor's and FIC's 9800 Pro offerings...

FIC: 400/460, 256 MB

PowerColor: OVER 400/460, 256 MB...

Who knows what else ATI partners are going to offer. Given that the rumored final specs for the NV35 are 400/400... um... hard to say what the outcome will be. Especially if PowerColor and a few other ATI partners offer 9800 Pros clocked at 425 or even 450 MHz with better cooling, coupled with 460 MHz RAM.

It looks like the NV35 and 9800 Pro are going to be a pretty even match. Which makes you wonder what NVIDIA is going to do about the R390.... ;)
 
DaveBaumann said:
Where does this come from?
Remember the original PR that was talking about something "without limits?"

It would be most interesting to see what happens with the "soft 9800" mod. But, OpenGL Guy should be pretty reliable, so I'm probably wrong.

Only about 5%? That's not much. But the results I would rather see are FSAA results. It was only with FSAA that I noticed the performance problems in NWN.
 
Chalnoth said:
Remember the original PR that was talking about something "without limits?"

Geez, you're so LAME, man! You're SERIOUSLY going to tell us you STILL expect that blurb to be anything but marketing gobbledygook? Everybody ELSE on this earth does, you know! Don't you ever learn?

Stop being such a boring silly f-boi person, thank you.

*G*
 
According to a credible ATI resource, the F-buffer is an actual on-chip, on-die buffer in the R350 pipeline (which accounts for the increase in transistor count between R300 and R350). It is definitely not a software solution.
 
DaveBaumann said:
Chalnoth said:
Ichneumon said:
Well... tweaking their HyperZ implementation in R350 so that it works better with stencil ops should be something you'd be impressed by then Chalnoth... since you harped so strongly on that with the 9700..... of which the NV30 has the same problem.
That's nice, but I still need to see the results.

http://www.beyond3d.com/reviews/ati/r350/index.php?p=21#stencil
:D
Well, I don't see what's so great about 1-6% at lower resolutions and +0% at 1600×1200. Plus, what does the 9700 hack to 9800 give us? I'm puzzled :?
 
I didn't say whether it was good or not, I'm just providing the results he asked for.

FYI, that was with the 9700 running as a 9800.
 
DaveBaumann said:
During my chat with NV's Andrew and Adam at CeBIT, they said that the guys who came up with this offered it to them first, but they turned it down, so they went to ATI, who implemented it.

Very interesting piece of info there, Dave! NVIDIA might have felt that they had a better solution - or they might simply feel that whatever talent they already have is superior to anything outside.
 
AFAIK

Two weeks ago in a forum, someone claimed that he was testing the F-buffer.
He also claimed that in a 512×512 window he gets ~40 fps when shading an object with a ~500-instruction-long shader, if the object covers ~50% of the window.
With the object covering 100% of the window, fps drops to ~20; with 2%, it rises to 200+.
What was left unknown was the card he was using. He did not give a straight answer as to whether it was an R350 or an R300 softmod (probably NDA?!). He has an R300 for sure; whether he also has an R350, I don't know.
Anyway, the performance numbers he gave look impressive to me.
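Those numbers roughly fit a fill-limited model at high coverage, but not at low coverage. A quick sanity check (the coverage/fps figures are the ones reported above; the inverse-scaling model itself is my assumption):

```python
# If the shader is purely fill-limited, frame time scales with the number
# of shaded pixels, so fps should scale inversely with window coverage.
def predicted_fps(baseline_fps, baseline_coverage, coverage):
    """Predict fps at a new coverage, assuming a purely fill-limited shader."""
    return baseline_fps * baseline_coverage / coverage

# Calibrate on the reported 50%-coverage figure (~40 fps).
print(predicted_fps(40, 0.50, 1.00))  # 20.0 -> matches the reported ~20 fps
print(predicted_fps(40, 0.50, 0.02))  # 1000.0 -> far above the reported 200+
```

So at 100% coverage the scaling is exactly what you'd expect from pixel shading alone, while at 2% coverage something else (per-frame overhead, most likely) caps the frame rate well below the fill-limited prediction.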
 
What was left unknown was the card he was using. He did not give a straight answer as to whether it was an R350 or an R300 softmod (probably NDA?!).
F-buffer is a hardware buffer! There is no way R300 could have it unless it had it before the R350 and it was somehow broken.
 
Luminescent said:
F-buffer is a hardware buffer! There is no way R300 could have it unless it had it before the R350 and it was somehow broken.
Oops. Now I found the original. The man tested using "OpenGL's float_buffers" on an R300. The difference: you have to "cut" the shader manually, plus with hardware support it should be faster.
Sorry, I just remembered "important" words like 'f-buffer' but forgot "emulated via float_buffers" :(
 
Alright, chavvdarr, I see what you're saying.

In reference to the f-buffer emulation, after the programmer specifies shader data be stored in the float buffer, the entire rendering pipeline has to go through another pass to access the data, right? It is nice to know that R300 can somehow emulate the functionality of f-buffer through multipass without a major performance penalty.
 
Luminescent said:
In reference to the f-buffer emulation, after the programmer specifies shader data be stored in the float buffer, the entire rendering pipeline has to go through another pass to access the data, right? It is nice to know that R300 can somehow emulate the functionality of f-buffer through multipass without a major performance penalty.
You cannot emulate the F-buffer with float buffers, at least not the way current float buffers are implemented. Float buffers have restrictions on them that don't apply to the F-buffer.
 
OpenGL Guy, sorry if I misled. In this case, what I meant to point out through my float-buffer and F-buffer functionality comparison was that they both allow for more fragment instructions than are supported in a standard hardware pass. Float buffers do seem to require complete pipeline multipassing, yet the performance penalty is not that great (I guess it has to do with the functionality of the rendering engine). I state this not to rephrase the obvious, but to point out that, even when multipassing with float buffers, the R3xx core yields decent performance returns. I can only imagine the insignificant, infinitesimal performance penalty incurred when there is native F-buffer support (no external multipassing). So there is no reason, in light of the evidence, for someone to claim that a greater-than-usual penalty will be incurred if the F-buffer is put to use on the R350.
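To illustrate the idea being discussed (this is a toy Python sketch of the manual "cut the shader" approach, not ATI's actual implementation; the per-pass instruction limit and all names here are hypothetical):

```python
# Hypothetical per-pass instruction limit of the hardware.
MAX_INSTRUCTIONS_PER_PASS = 64

def split_shader(instructions, limit=MAX_INSTRUCTIONS_PER_PASS):
    """Manually cut a long instruction list into per-pass chunks."""
    return [instructions[i:i + limit] for i in range(0, len(instructions), limit)]

def run_multipass(instructions, pixel_state):
    """Run each chunk as a separate pass; the float buffer carries the
    intermediate value for every covered pixel between passes."""
    float_buffer = dict(pixel_state)  # pixel -> intermediate value
    for chunk in split_shader(instructions):
        for pixel, value in float_buffer.items():
            for op in chunk:
                value = op(value)        # execute this pass's instructions
            float_buffer[pixel] = value  # write back for the next pass
    return float_buffer

# 500 toy "instructions" (each adds 1) over two pixels: with a 64-instruction
# limit this takes 8 passes, and every pixel ends at its start value + 500.
ops = [(lambda x: x + 1)] * 500
result = run_multipass(ops, {(0, 0): 0.0, (1, 0): 1.0})
print(result[(0, 0)])  # 500.0
```

The point of a native F-buffer is that the hardware does this splitting and intermediate storage itself, per fragment, so the driver doesn't have to round-trip the whole pipeline between chunks.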

P.S. Now that I re-read my post, I realize I'm arguing for the heck of it. On with the show.
 
DaveBaumann said:
Remember the original PR that was talking about something "without limits?"

I don't seem to remember that intimating that the 9800's F-buffer is a software implementation.
I'm not attempting to say that, either. Just that the hardware was there in the 9700, just not yet activated. But, according to OpenGL Guy, this isn't the case either.
 