Put Your GeForce FX Questions here...


Dave Baumann

OK, I've just had an offer from NV's PR to have some questions about the GFFX answered, so hopefully if we send some off we'll actually get answers. So, write down what you want to know here...

I think we've already got questions about the number of FP16/FP32 instructions per cycle and what displacement mapping support it actually carries.
 
Can he clarify the type of technique used for AA (the 6x and 8x modes)?
What's the difference between the GF FX 5800 and 5800 Ultra? ;)
 
1. Price for the two GFFX models.
2. Are they serious about their FSAA having no performance hit? (It says so in their tech papers.)
3. Why has their FSAA engine remained essentially the same?
4. Adaptive aniso: what is its average performance hit, and how much does it affect IQ?
 
Does it do tessellation in hardware? If so, where in the pipeline, and how is it programmable (i.e. what kinds of surfaces are doable, and does it sample from texture maps)?
If not, what is NVIDIA's stance now on doing HOS in hardware in general? (They touted the GF3's hardware HOS features quite a lot, and their developer site still has materials on that.)

Hmm, I just remembered something on texture filtering: is the only texture magnification filter supported still bilinear? No bicubic?
 
Will dust, hair and animal fur be a major worry with the heat-management design?

How thoroughly and how often will a card with such high air throughput need to be cleaned in a normal household, and how will this be obviated or achieved?

Will the GFFX have heat-monitoring software that detects a fan or heating failure and fully protects the card?
 
Why are the 4xS and 6xS FSAA modes D3D-only?
Will there be official support for the new FSAA modes on GF3/4 cards, and if not, why?
Will there be new multisampling-only FSAA modes?
 
f-buffers, gl2

Does it support f-buffers?
GFFX shaders can be long, but they are still not long enough, and sometimes multipass cannot help (for instance when transparency/blending is involved).

Do they plan to implement (a subset of) OpenGL 2.0 on GFFX hardware once the spec is finalized?
The GFFX will not be able to comply with the complete GL2 shading language spec, but implementing a subset of it (functionally equivalent to CgFX) would be very nice.
 
FSAA:

How big is the memory footprint of their 8x FSAA implementation, with regard to the maximum selectable resolution on 128MB cards? Will it silently fall back to lower FSAA modes (like GF3/4 do) when running out of memory?
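For a sense of scale, here's a back-of-the-envelope estimate. The storage assumptions (8 colour plus 8 Z/stencil samples per pixel at 32 bits each, no compression) are mine, not NVIDIA's documented layout:

[code]
#include <stdio.h>

int main(void)
{
    /* Assumed: 8x FSAA keeps 8 colour samples and 8 Z/stencil samples
       per pixel, 4 bytes each, with no compression. */
    const double w = 1600.0, h = 1200.0, samples = 8.0;
    double bytes = w * h * samples * (4.0 + 4.0);
    printf("8x sample buffers at 1600x1200: ~%.0f MB\n",
           bytes / (1024.0 * 1024.0));  /* ~117 MB, before textures,
                                           front buffer, etc. */
    return 0;
}
[/code]

Under those assumptions the sample buffers alone would nearly fill a 128MB card at 1600x1200, which is why the fallback behaviour matters.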

Anything about their actual FSAA implementation (sampling grids, line caches etc.) would of course be most interesting.

Thanks
 
How can the Z-compression on the GFFX achieve a 4:1 ratio, and will this kind of useful feature also be included in following products like the NV31?
 
Oops, 4 things that I want to know

1. Gigacolor (10/10/10/2) desktop support (like Parhelia)
2. Available FSAA modes on the respective APIs
3. MPEG-2 decoding acceleration (DVD, HDTV, any enhanced Motion compensation)
4. Video scaling filters employed and deinterlacing

[edit: added 2 more that I had missed, and made the points clearer]
 
1) Does NVIDIA still support the claim that the NV30 has an effective bandwidth of 48 GB/sec (assuming a 500 MHz core and memory)?

2) If so, how is that figure arrived at?
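My guess at the arithmetic, just to frame the question (this is an assumption on my part, not anything NVIDIA has published): 500 MHz DDR on a 128-bit bus gives 16 GB/s raw, and an assumed ~3:1 average compression ratio would inflate that to 48 GB/s "effective".

[code]
#include <stdio.h>

int main(void)
{
    double bus_bytes = 128.0 / 8.0;        /* 128-bit memory bus           */
    double transfers = 500e6 * 2.0;        /* 500 MHz, double data rate    */
    double raw_gbs   = bus_bytes * transfers / 1e9;  /* = 16 GB/s          */
    double effective = raw_gbs * 3.0;      /* assumed ~3:1 avg compression */
    printf("raw %.0f GB/s -> \"effective\" %.0f GB/s\n", raw_gbs, effective);
    return 0;
}
[/code]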
 
Perhaps something about their vertex shader implementation. Something like real numbers (or at least numbers different from what Tom's had, which came from who knows where; 3x GF4, maybe?) so we can compare better with other systems.
 
Re: pbuffers, gl2

_GeLeTo_ said:
Does it support pbuffers?

pBuffers have been supported from the TNT and up, so I think it's a safe bet to assume it does. But a slightly related question:

Binding a pBuffer directly as a texture with WGL_ARB_render_texture has for a long time resulted in very poor performance on GF3/4-level chips; in fact, it's much slower than using glCopyTexSubImage2D() on the pBuffer. The whole idea of this extension is to improve the performance of rendering to textures by removing the copy step, and the driver could always use glCopyTexSubImage2D() under the hood anyway, yet the performance has never really been close to what it should be, which has caused problems for me in several of my demos.

The word was that this would get fixed in newer drivers, but no word about fixes has arrived yet. I assume the problem on GF3/4-level chips is that the formats of textures and render targets don't match on those chips. Is that true? And if it is, does the GeForceFX solve this problem by supporting matching texture and render-target formats?
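For anyone not following the distinction, the two paths being compared look roughly like this (pbuffer/context setup omitted; treat it as an outline rather than code lifted from my demos):

[code]
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>   /* WGL_ARB_render_texture declarations */

/* Fetched with wglGetProcAddress() during init (not shown). */
static PFNWGLBINDTEXIMAGEARBPROC    wglBindTexImageARB;
static PFNWGLRELEASETEXIMAGEARBPROC wglReleaseTexImageARB;

/* Path 1: render into the pbuffer, then copy the result into a texture. */
void update_texture_by_copy(GLuint tex, int w, int h)
{
    /* ...render the scene in the pbuffer's context... */
    glBindTexture(GL_TEXTURE_2D, tex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, w, h);
}

/* Path 2: bind the pbuffer itself as the texture, no copy involved. */
void update_texture_by_binding(HPBUFFERARB pbuffer, GLuint tex)
{
    /* ...render the scene in the pbuffer's context... */
    glBindTexture(GL_TEXTURE_2D, tex);
    wglBindTexImageARB(pbuffer, WGL_FRONT_LEFT_ARB);
    /* ...draw using the texture, then release before the next update... */
    wglReleaseTexImageARB(pbuffer, WGL_FRONT_LEFT_ARB);
}
[/code]

Path 2 is the one that's supposed to be faster by design, yet in practice it hasn't been.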
 
Is the gamma correction (if present) hardwired (for gamma = 2.4, or whatever the usual value is), or is it programmable?

If there is gamma correction on output, is there gamma correction on input (textures)?
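To make the question concrete, this is the input/output distinction I have in mind. The 2.2 power law below is just an example value, not anything specified for the GFFX:

[code]
#include <math.h>

/* "Gamma on input": decode texels to linear space before filtering/blending. */
float decode_texel(float stored)
{
    return powf(stored, 2.2f);
}

/* "Gamma on output": re-encode the final linear value for the display. */
float encode_output(float linear)
{
    return powf(linear, 1.0f / 2.2f);
}
[/code]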

Serge
 
Oops - f-buffers

Oops, that's what happens when I post without thinking (I've used pbuffers in my apps and have a few posts in the opengl.org forums about pbuffers).
I meant f-buffers, not p-buffers.

Using f-buffers is the only easy way to handle multipass rendering when blending/transparency is involved. OpenGL 2.0 requires complex shaders that cannot fit within the hardware limits to be automatically broken into several passes, and f-buffers can be used to pass the intermediate results between them. This is not very important for games, but it would be great for hardware-accelerated offline renderers.

Here's some info on this.
http://graphics.stanford.edu/projects/shading/pubs/hwws2001-fbuffer/
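As a rough mental model (this is only a software analogy of the Stanford paper's idea, not an NV30 feature or API): an f-buffer stores one entry per rasterized fragment in rasterization order, rather than one per pixel, so a later pass can pick up each fragment's intermediate value even where several transparent fragments overlap the same pixel.

[code]
#include <stddef.h>

typedef struct { float r, g, b, a; } Frag;

typedef struct {
    Frag  *entries;   /* one slot per rasterized fragment, in raster order */
    size_t count, capacity, read_pos;
} FBuffer;

/* Pass N: append each shaded fragment's intermediate result. */
void fbuffer_write(FBuffer *fb, Frag value)
{
    if (fb->count < fb->capacity)
        fb->entries[fb->count++] = value;
}

/* Pass N+1: the same geometry is rasterized in the same order, so each
   fragment simply reads back the next stored intermediate. */
Frag fbuffer_read(FBuffer *fb)
{
    return fb->entries[fb->read_pos++];
}
[/code]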

I'll edit my previous post now
 
One more thing, not directly NV30-related, but: what's the issue with AGP upstream speeds, and is it resolved on the GFFX so it's really usable for cinematic rendering?
http://www.tech-report.com/etc/2002q3/agp-download/index.x?pg=1
[edit]
Hmm, it seems NV has already taken care of the issue, as mentioned in this recent Xabre 600 review:
http://www.tech-report.com/reviews/2002q4/sis-xabre600/index.x?pg=7
See the AGP write performance test there:
160 MB/s on the GF4 MX460 vs. 10 MB/s earlier on the GF4 Ti4600 - at least a ten-fold improvement 8)
So I'll replace the question with:
How does it feel to be second to market with a .13-micron GPU? :rolleyes:
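For reference, the sort of readback loop those AGP write numbers come from looks roughly like this (my own sketch, not the tech-report's actual benchmark; the buffer size, pixel format and timer are arbitrary choices, and a current GL context with something rendered each frame is assumed):

[code]
#include <GL/gl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define W      1024
#define H      768
#define FRAMES 100

void time_readback(void)
{
    unsigned char *buf = malloc((size_t)W * H * 4);
    clock_t start = clock();                 /* crude timer, fine for a sketch */
    for (int i = 0; i < FRAMES; ++i)
        glReadPixels(0, 0, W, H, GL_RGBA, GL_UNSIGNED_BYTE, buf);
    glFinish();
    double secs   = (double)(clock() - start) / CLOCKS_PER_SEC;
    double mbytes = (double)W * H * 4 * FRAMES / (1024.0 * 1024.0);
    printf("framebuffer readback: %.1f MB/s\n", mbytes / secs);
    free(buf);
}
[/code]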
 