Specification of the NV35

2x shader operations (compared to the NV30)
There are some doubts about this right now... even though I've been waiting for this myself. It looks like the NV35 is simply an NV30 plus a 256-bit memory bus, which is really sad :(

[edit]And yeah, other specs are true :)
 
3dchips-fr.com
For the NV35, INTELLISAMPLE was polished and becomes INTELLISAMPLE HTC. The algorithms responsible for deciding when a texture color or a Z value should be compressed have been refined.
Moreover, INTELLISAMPLE HTC introduces improved antialiasing and anisotropic filtering techniques.
Can we have hope?! :?
 
We've heard that line before from nVidia...

"This is new...and that's new..."

...come to find out that it's just marketing rebadging stuff that older generation products can also take advantage of (be it intentional, or not).

I'll pretty much believe it when _I_ see it... nVidia has lost way too much credibility these last 12 months, and I certainly am not going to believe a word from any of their PR-laced white papers.
 
I'm a gamer, so I lack the expertise of most members here, but I've followed 3D card developments for a while now. I expect the NV35 to be what the NV30 should have been. Is that an accurate assessment?

I'm intrigued by just one feature of the NV35: the shadow accelerator. How does it work? What benefit will it bring? Does a game have to be specifically programmed to take advantage of it, or is it general-purpose enough to speed up shadow rendering in any game?
:?:
 
ZoinKs! said:
I'm intrigued by just one feature of the NV35: the shadow accelerator. How does it work? What benefit will it bring? Does a game have to be specifically programmed to take advantage of it, or is it general-purpose enough to speed up shadow rendering in any game?
:?:
Probably either just increased speed for stencil buffer writes or simply the ability to use Hierarchical-Z when the stencil buffer is in use... The 9800 already has the latter.
 
ZoinKs! said:
I'm intrigued by just one feature of the NV35: the shadow accelerator. How does it work? What benefit will it bring? Does a game have to be specifically programmed to take advantage of it, or is it general-purpose enough to speed up shadow rendering in any game?
:?:

It's probably the marketing name of the GL_NV_depth_bounds_test OpenGL extension.
Eric Lengyel said:
Yes, NV35 supports the GL_NV_depth_bounds_test extension. (It might end up being an EXT extension.) This lets you test the frame buffer depth value (not the fragment depth value) against a specific range and reject fragments where it fails. Very useful for stencil shadows, but not supported on NV30. This extension is discussed in the forthcoming book The OpenGL Extensions Guide:
http://www.terathon.com/books/extensions.html
http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/009442.html
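For anyone curious what using it looks like in practice, here is a minimal sketch of my own (assuming the extension ends up exposed as EXT_depth_bounds_test with a glDepthBoundsEXT entry point; draw_shadow_volume() is a hypothetical placeholder for the application's geometry submission):
[code]
/* Restrict shadow volume rasterization to the depth range the light can
 * actually affect. Fragments whose stored framebuffer depth lies outside
 * [zmin, zmax] are rejected before any stencil update. */
#include <GL/gl.h>
#include <GL/glext.h>

/* Entry point fetched at run time (e.g. wglGetProcAddress / glXGetProcAddressARB). */
static PFNGLDEPTHBOUNDSEXTPROC glDepthBoundsEXT;

void render_shadow_volume_bounded(GLclampd zmin, GLclampd zmax)
{
    glEnable(GL_DEPTH_BOUNDS_TEST_EXT);
    glDepthBoundsEXT(zmin, zmax);

    /* draw_shadow_volume();  -- hypothetical placeholder */

    glDisable(GL_DEPTH_BOUNDS_TEST_EXT);
}
[/code]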
 
Ostsol said:
ZoinKs! said:
I'm intrigued by just one feature of the NV35: the shadow accelerator. How does it work? What benefit will it bring? Does a game have to be specifically programmed to take advantage of it, or is it general-purpose enough to speed up shadow rendering in any game?
:?:
Probably either just increased speed for stencil buffer writes or simply the ability to use Hierarchical-Z when the stencil buffer is in use... The 9800 already has the latter.

It was a feature that John Carmack asked for, as it will increase rendering speed for shadow routines that use the stencil buffer (very important for Doom III). It was broken in both the R300 and the NV30, and is now fixed in the R350 and NV35.
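For context, here is a rough sketch of the kind of stencil work being accelerated: a generic depth-fail ("Carmack's reverse") shadow volume pass in OpenGL. This is only an illustration of the general technique, not NVIDIA's or id's actual code, and draw_shadow_volume() is again a hypothetical placeholder:
[code]
/* Color and depth writes are disabled; the stencil buffer counts how many
 * volume faces lie behind the visible surface at each pixel. Pixels that end
 * up with a non-zero stencil value are in shadow for this light. */
#include <GL/gl.h>

void stencil_shadow_pass(void)
{
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 0, ~0u);

    /* Back faces: increment stencil where the depth test fails. */
    glCullFace(GL_FRONT);
    glStencilOp(GL_KEEP, GL_INCR, GL_KEEP);
    /* draw_shadow_volume(); */

    /* Front faces: decrement stencil where the depth test fails. */
    glCullFace(GL_BACK);
    glStencilOp(GL_KEEP, GL_DECR, GL_KEEP);
    /* draw_shadow_volume(); */

    /* Restore write masks for the subsequent lighting pass. */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);
}
[/code]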
 
Just listened to the Nvidia CC. Jen Hsun mentioned that they are using first silicon for NV35, that it was *perfect* on first pass (his words not mine) and that they have been ramping since February (he mentioned small amounts of production in January as well).

Perhaps there's hope we'll see this thing fairly soon.
 
"UltraShadow" doesn't sound too intriguing anymore... :?

Based on comments by the Stalker developers, I'd come to expect more from nv35's shadow acceleration than just parity with the r350.
 
Just wait and see - as someone already mentioned. Plan for the worst and hope for the best. Or, however it goes...

Nvidia has room to work to some degree. People trip at times, let's see how they correct the NV30 and move forward. The market loves them regardless - they continue to post positive numbers - and have lost very little market share - if any.

Let's face it - not many games demand the 9800 Pro let alone one with 256MB of RAM. Ravenshield, UT2003, and whatnot run perfectly fine on a Ti4200 or 9500 Pro. Most people don't even enable AA/FSAA and leave the stuff at the default settings...

I'll wait - hope it is a good chip - then see how ATI and Nvidia dance. Something will shake out. Until then, I just need ATI to fix the damn bug so that I can enable AA/FSAA in DAoC, and I'm happy. If at E3 Epic displays a new engine and/or game, I'll consider upgrading.

Peace
 
Evildeus said:
Name FX 5900
According to Nvidia 20-80% faster than a 9800 pro @ 16*12 + 4*AF/8*AF
I assume that you mean 4*AA/8*AF. In any case, one wonders what sort of AA, and what sort of AF...
 
Evildeus said:
Name FX 5900
3 cards 299-499$
2 introduced in june one in july
GPU 450 MHz
Memory 425 MHz
256 bits bus
A bandwidth advantage, and looking like a large one...I think this will be the secret to benchmark selection. "256-bit is overkill, until we do it too". :p
I expect the $499 cards to be the benchmarks we'll see. :-?
128/256 MB of DDR/DDR-2
AA-AF ameliorations ( :? ) INTELLISAMPLE HTC
I'm reminded of some of the hacked-up RAM usage calculations I made before, for when post-processing AA might be used to offer performance increases. I wonder whether this will be a possible use of the 256 MB cards, and I'm also curious about changes for the 256 MB "mid range performance level" cards and their AA performance hit; maybe that's exposed now, or maybe it depends on the Det FX drivers or some other later driver change.

CineFX 2 ( :rolleyes: )
New shadow accelerator: UltraShadow (for DOOM 3 :))
Very Xabre-esque (can I steal that term?)...not because of the names, but because of the names instead of actual details. If this would change after launch, it would be different, but it doesn't seem likely that it will.

2x shader operations (compared to the NV30) :rolleyes:

I think this is good news for PS 2.0 compliant performance, and maybe Direct X feature exposure in the drivers, compared to the NV30.

According to Nvidia 20-80% faster than a 9800 pro @ 16*12 + 4*AF/8*AF

This would fit trading extravagant RAM usage for AA performance via the post processor, I think. Just a whacky theory for now, though.
But then the caveats would be that it only applies when you aren't using PS 2.0, have a 256 MB card with enough room left for your texture demands, and aren't using the equivalent of Application AF.

...

Further on the whacky thinking... if some aspect of AF quality depends on post processing, the lack of RAM might have limited the "new" AF quality for the NV30 as well, so you'd need a 256 MB card (and 4x AA activated) to get good-quality AF with good relative performance in a game with the texture usage level of recent titles. This theory might be even whackier, but I think it fits something that could be spun as "debugging", and maybe nVidia's NV30 not being as screwed up as it appears to be in this regard (as long as there is enough RAM).
Did 3dfx have any such aniso-like shortcut in their post-processor usage? With AF on, it would be sort of an adaptive texture supersampling, but with AF sampling criteria, and I think a dependence on AA usage to restore texture sampling quality might be the reason Aggressive AA looks so horrible and Balanced doesn't look like it measures up to Application (i.e., it might with 4x AA implemented in the post processor, which it couldn't be for the NV30 because the RAM required would have been expensive and demanded high clock speeds, making 256 MB impractical).
The only problem with this would be the price you pay in RAM usage, which might preclude getting extra detail from higher resolutions and larger textures (EDIT: oh yeah, and killing color compression effectiveness... you'd be doing supersampling and hiding the performance loss in a reduced AF performance hit and a skipped blend step, and this would make sense with a more complicated color compression scheme that keeps color compression from disappearing completely). Quake III, or something with similar demands, as a showcase application for benchmarking would fit this theory.
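To put rough numbers on the RAM side of this (a back-of-the-envelope calculation of my own, not from the original figures, assuming 32-bit color plus a 24/8 depth-stencil buffer per sample):
[code]
/* Frame buffer footprint at 1600x1200 per AA sample count. Purely
 * illustrative; double buffering, texture memory, and compression ignored. */
#include <stdio.h>

int main(void)
{
    const long width = 1600, height = 1200;
    const long bytes_per_sample = 4 /* RGBA8 */ + 4 /* D24S8 */;
    const int samples[] = { 1, 2, 4, 8 };

    for (int i = 0; i < 4; ++i) {
        long bytes = width * height * bytes_per_sample * samples[i];
        printf("%dx AA: ~%ld MB of frame buffer\n",
               samples[i], bytes / (1024 * 1024));
    }
    return 0;
}
/* Prints roughly 14, 29, 58, and 117 MB, which is why a 128 MB card gets
 * tight at high resolution with AA while a 256 MB card has room to spare. */
[/code]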

Oh, and [/rampant speculation] ;)
 
kid_crisis said:
Just listened to the Nvidia CC. Jen Hsun mentioned that they are using first silicon for NV35, that it was *perfect* on first pass (his words not mine) and that they have been ramping since February (he mentioned small amounts of production in January as well).

Perhaps there's hope we'll see this thing fairly soon.

They also said margins are decreasing primarily because of yield issues; they are not hitting their cost targets yet. Also, when specifically asked how soon after the announcement there will be cards, the reply was hundreds of thousands of units by Christmas. That doesn't sound soon to me.

Btw "perfect" is a relative term in doublespeak.
 
indio said:
Also, when specifically asked how soon after the announcement there will be cards, the reply was hundreds of thousands of units by Christmas. That doesn't sound soon to me.

Actually, he wasn't asked how soon after the announcement cards would be available. This is what he was asked:

the nv35, is it fair to say we should expect that as a high end chip and be here in time for the Christmas season?

In his answer, he did say something along the lines of them being available relatively shortly after next week (I'm guessing sometime in June, given that it took ATI two months from announcing their .13u part to shipping it). It also makes sense that he said there would be hundreds of thousands of units by Christmas, because he's just answering the question he was asked.
 
In the sense that a shader-heavy benchmark like 3DM03 at a low res like 10x7 is probably GPU-limited on cards with 256-bit memory.

BTW, how do we know the Det 50's correctly use shaders? Tech-Report showed in their 5600 256MB review that the 43.51 seems to correct the clouds in 3DM03 GT4, but the grass still doesn't look as bright as on a Radeon.
 
Pete said:
In the sense that a shader-heavy benchmark like 3DM03 at a low res like 10x7 is probably GPU-limited on cards with 256-bit memory.

BTW, how do we know the Det 50's correctly use shaders? Tech-Report showed in their 5600 256MB review that the 43.51 seems to correct the clouds in 3DM03 GT4, but the grass still doesn't look as bright as on a Radeon.
So how do you know that it's not 2x, since only GT4 uses PS 2.0/VS 2.0, which is where the issues are?

Well, that's what TT-H says; that's all I can say before the reviews next week ;)
 