Quadro FX 1000 (NV30GL) news and pic

FYI. Clocks are 300/300 for FX 1000 and 400/400 for FX 2000. They are clocked down from GeForce FX to ensure reliability in the workstation environment.

I called nVidia today to find out this exact information for another forum (should have looked here first), but the guy I talked with, Manuel, said that neither of these cards (1000/2000) will have the FX-Flow cooler, since it's not necessary.
 
Psikotiko said:
DaveBaumann said:
FYI. Clocks are 300/300 for FX 1000 and 400/400 for FX 2000. They are clocked down from GeForce FX to ensure reliability in the workstation environment.

I don't remember where, but I saw a video (I think it was German) of the GFFX giving two blue screens.

I hope it was a driver problem. :LOL:

I saw the video too. BUT it was a very, very early version of the card. When I saw the video, around November-December, it was already about a month old, I think.
 
DaveBaumann said:
FYI. Clocks are 300/300 for FX 1000 and 400/400 for FX 2000. They are clocked down from GeForce FX to ensure reliability in the workstation environment.

:oops:

That is quite a bizarre idea, although I acknowledge the fact that they probably don't need to clock it at 500MHz+ to oust the fairly unimpressive (thus far) FireGL X1. I'm sure it is far more likely that they are using a slightly less maelstromous leaf-blowing contraption on the FX 2000, or just a standard HS/fan, and therefore cannot qualify it at the same clock speeds as the consumer-level card.

MuFu.
 
Andrew said:
FYI. Clocks are 300/300 for FX 1000 and 400/400 for FX 2000. They are clocked down from GeForce FX to ensure reliability in the workstation environment.

I called nVidia today to find out this exact information for another forum (should have looked here first), but the guy I talked with, Manuel, said that neither of these cards (1000/2000) will have the FX-Flow cooler, since it's not necessary.

The Flash demo on nVidia's website states the following:

Quadro FX 1000 occupies AGP slot only
Quadro FX 2000 occupies AGP + adjacent PCI slot for airflow
 
Hmm... I guess that's what I get for calling Manual over at nVidia. He was probably the guy who fetches coffee for the tech guys.
 
Andrew said:
Hmm... I guess that's what I get for calling Manual over at nVidia. He was probably the guy who fetches coffee for the tech guys.
Well, it isn't the 'FX Flow' cooler... ;)

It covers two slots, but there's room enough to add the Genlock/Framelock option without taking up another PCI slot.
 
One question that interests me is whether there are any inherent advantages to using DDR-II memory or not. If not, then it would seem that NV30 is tuned only to use DDR-II, because you'd think it would be cheaper to use standard DDR, or even DDR SGRAM, certainly on the 300MHz version.
 
So, does all of this mean that the GFFX really is heavily overclocked to beat the 9700? Seems to me that nVidia has never in the past downclocked their Quadro cards from the GeForce version. If so, and if this has required the heatsink from hell, then just how much overclocking headroom does it have? And just how much room is there for nVidia's NV35 refresh? It's really beginning to look to me like this is going to be a very short-lived product. But, as Dennis Miller says..... Well, I could be wrong......
 
martrox said:
And just how much room is there for nVidia's NV35 refresh?

A huge amount (~20-25%). It will ultimately be mainly process-oriented, but there are a number of base gains to be had by fixing/optimising the current ASIC (NV30).

MuFu.
 
In terms of short-lived products, the GeForce 256 SDR was another short-lived product that was an excellent stepping stone for nVidia, wasn't it? And it was replaced twice in six months.

Maybe the FX will go a similar route.
 
MuFu said:
martrox said:
And just how much room is there for nVidia's NV35 refresh?

A huge amount (~20-25%). It will ultimately be mainly process-oriented, but there are a number of base gains to be had by fixing/optimising the current ASIC (NV30).

MuFu.

I just find it hard to believe, with that cooler on the GFFX, that there's much to be gained here...
 
martrox said:
MuFu said:
martrox said:
And just how much room is there for nVidia's NV35 refresh?

A huge amount (~20-25%). It will ultimately be mainly process-oriented, but there are a number of base gains to be had by fixing/optimising the current ASIC (NV30).

MuFu.

I just find it hard to believe, with that cooler on the GFFX, that there's much to be gained here...

Umm, process improvements, plus a different ASIC design on the same process (just look at R200 -> R300). There are always opportunities for improvement the second/third time around. A lot of people can't believe that there is enough left in the 0.15µm process for a major leap in clock speed from the R300 to the R350.
 
Randell said:
In terms of short-lived products, the GeForce 256 SDR was another short-lived product that was an excellent stepping stone for nVidia, wasn't it? And it was replaced twice in six months.

Maybe the FX will go a similar route.

If so, people who buy an FX will feel screwed. (I was one of the idiots who bought a 256 SDR: power-hungry [it should have had a power connector], bad drivers initially, it never really ran right in my machine, and it was not much faster than a TNT2. I replaced it with a GF2 as soon as those came out, and that ran (and still runs) like a dream.)

I think the NV30 continues to be a disaster for NVidia. Two months after the announcement, there are still no real reviews, much less cards in stores. If ATI is set to announce an R350 that matches or beats NV30 in performance, the disaster is complete.

I imagine NVidia will be marketing the hell out of CineFX cinematic rendering, and hoping you buy one of their other cards (NV3x) that feature it. At this point I don't think they expect or even intend to sell that many FX cards (what was the Inquirer figure: 100,000?)--it's just a placeholder at the top of their line until a more successful high-end card is available.

Odd to think that NVidia finds itself with a flagship card that is late, may be one-upped by the competition the moment it becomes available, and depends heavily for its success on features that are not directly supported by DirectX (then it was the T-buffer, now it is pixel shader extensions...).
 
MuFu said:
martrox said:
And just how much room is there for nVidia's NV35 refresh?

A huge amount (~20-25%). It will ultimately be mainly process-oriented, but there are a number of base gains to be had by fixing/optimising the current ASIC (NV30).

MuFu.
Any idea if the low-k process will be available at that time? :?:
 
antlers4 said:
If so, people who buy an FX will feel screwed. (I was one of the idiots who bought a 256 SDR: power-hungry [it should have had a power connector], bad drivers initially, it never really ran right in my machine, and it was not much faster than a TNT2. I replaced it with a GF2 as soon as those came out, and that ran (and still runs) like a dream.)
I agree
 
antlers4 said:
Odd to think that NVidia finds itself with a flagship card that is late, may be one-upped by the competition the moment it becomes available, and depends heavily for its success on features that are not directly supported by DirectX (then it was the T-buffer, now it is pixel shader extensions...).

There's one thing which isn't quite correct in that comparison: 99% of nVidia's shading extensions are available in DX9 if you specifically ask for them.

Dynamic branching in the VS is available. I'd guess most of the GFFX instructions are also available, since the GFFX driver directly receives the HLSL code, which has cos and sin instructions; these will simply be done with more instruction slots on the R300.
DX9 supports up to 512 PS instructions; the GFFX supports 1024 and the R300 96. (AFAIK, that's because DX9 separates constants and instructions and the GFFX doesn't, so the GFFX is able to do 1024 for both combined, while DX9 allows a maximum of 512 for either.)
FP16 (which is not supported by the R300, AFAIK) is available in HLSL as the "half" type - I'd guess the R300 would automatically give full precision to that, lowering performance.
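
To make that concrete: the instruction-slot limits being argued about here are queryable from the DX9 caps bits. A minimal sketch, assuming a DX9 SDK toolchain and an already-created IDirect3D9 object (the helper name is made up):

```cpp
// Sketch: read the PS 2.x limits from the DX9 caps.
// PrintPixelShaderLimits is a hypothetical helper; needs d3d9.h / d3d9.lib.
#include <d3d9.h>
#include <stdio.h>

void PrintPixelShaderLimits(IDirect3D9* pD3D)
{
    D3DCAPS9 caps;
    if (FAILED(pD3D->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
        return;

    // Shader model exposed by the driver, e.g. 2.0 for an R300-class part.
    printf("PS version: %lu.%lu\n",
           (unsigned long)((caps.PixelShaderVersion >> 8) & 0xFF),
           (unsigned long)(caps.PixelShaderVersion & 0xFF));

    // 96 on a baseline PS 2.0 implementation; the extended 2.x caps allow
    // up to 512, which is the DX9 ceiling mentioned above.
    printf("PS 2.x instruction slots: %u\n",
           (unsigned)caps.PS20Caps.NumInstructionSlots);
}
```

Anything past the 512-slot DX9 ceiling would presumably only be reachable through the OpenGL extensions.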

Only a little of the CineFX architecture is exclusive to OpenGL. There is some, but it's a small part of it. So saying it's the same problem as with the T-buffer simply isn't true.


Uttar
 
FP16 (which is not supported by the R300, AFAIK) is available in HLSL as the "half" type - I'd guess the R300 would automatically give full precision to that, lowering performance.

The R300 will not lower or increase performance; it will operate at 96-bit in the PS regardless, at the same rate it does operations now. Performance will only change if the source or target is of a higher or lower bit depth, but that's likely the same with the GFFX.
 
DaveBaumann said:
The R300 will not lower or increase performance; it will operate at 96-bit in the PS regardless, at the same rate it does operations now. Performance will only change if the source or target is of a higher or lower bit depth, but that's likely the same with the GFFX.

Hmm, yeah. But what I meant is lower performance compared to what's needed. Generally, if a programmer asks for "half" (or rather, partial precision), it's because full precision gives no practical advantage in that circumstance. So, compared to an implementation which supports half, you're losing performance.
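
For what it's worth, here's what asking for partial precision actually looks like from the programmer's side - a hypothetical sketch, just the DX9 HLSL "half" type compiled through D3DX (the shader and helper names are made up):

```cpp
// Sketch: a DX9 HLSL pixel shader using "half", compiled via D3DX.
// On hardware with a partial-precision path the compiler can emit
// _pp-modified ps_2_0 instructions; R300-class hardware simply runs the
// same code at its fixed internal precision. Needs the DX9 SDK (d3dx9.h).
#include <d3dx9.h>

static const char g_psSource[] =
    "sampler2D baseMap;\n"
    "half4 main(float2 uv : TEXCOORD0) : COLOR\n"
    "{\n"
    "    // FP16 is plenty for simple colour math like this.\n"
    "    half4 c = tex2D(baseMap, uv);\n"
    "    return c * 0.5h + 0.25h;\n"
    "}\n";

bool CompileHalfShader(LPD3DXBUFFER* ppCode)
{
    LPD3DXBUFFER errors = NULL;
    HRESULT hr = D3DXCompileShader(g_psSource, sizeof(g_psSource) - 1,
                                   NULL, NULL, "main", "ps_2_0",
                                   0, ppCode, &errors, NULL);
    if (errors) errors->Release(); // a real app would log the error text
    return SUCCEEDED(hr);
}
```

The point being that "half" is ordinary HLSL, not an nVidia-only extension; what the hardware does with it is up to the driver.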


Uttar
 
What you mean to say is that an alternative architecture may potentially gain performance by specifying a lower bit depth, since the R300's pixel shader processing rate is constant - it will always operate at 96 bits of precision per clock. However, you also have to be sure of the rate at which the alternative architecture actually executes 64/128-bit instructions.
 