Tech-Report blasts GeForce FX

BTW, does the R300 really do displacement mapping 100% in hardware, or is this a case where the feature is really 90% done by the drivers, with a small amount of setup by the HW? I'm having trouble visualizing the R300 creating actual vertices on-chip. It's certainly possible, but it seems like a big step to put the entire tessellation engine 100% into HW. I can more easily imagine the driver massaging the mesh and inserting placeholder data into it (i.e. doing the actual tessellation), and then having the HW operate on those vertices to move them to the correct positions with the correct normals, et al.

Any ATI engineer care to comment on the actual process? I see it kind of like how MPEG decoding evolved. First motion compensation was hardware-assisted, then iDCT was added, and others added pull-down and scaling features. Gradually, more and more of the software codec got replaced by hardware acceleration, but it didn't start out like that.
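
To make the split I'm imagining a bit more concrete, here's a rough CPU-only sketch (purely illustrative; all the names are made up, and a real vertex unit would fetch from a texture rather than call a C function): the "driver" half does the tessellation and emits placeholder vertices, and the "hardware" half only has to nudge each vertex along its normal by the sampled height.

Code:
#include <cmath>
#include <cstdio>
#include <vector>

// Placeholder vertex: position, normal, and displacement-map coordinate.
struct Vertex { float p[3]; float n[3]; float uv[2]; };

// Stand-in for sampling the displacement map (a real path would fetch a texel).
static float sampleDisplacement(float u, float v) {
    return 0.5f * (std::sin(10.0f * u) * std::cos(10.0f * v) + 1.0f);
}

// "Driver" step: uniformly subdivide one triangle, emitting placeholder
// vertices whose attributes are plain barycentric blends of the corners.
static std::vector<Vertex> tessellate(const Vertex& A, const Vertex& B,
                                      const Vertex& C, int level) {
    std::vector<Vertex> out;
    if (level < 1) level = 1;
    for (int i = 0; i <= level; ++i) {
        for (int j = 0; j <= level - i; ++j) {
            const float wa = float(i) / level;
            const float wb = float(j) / level;
            const float wc = 1.0f - wa - wb;
            Vertex v;
            for (int k = 0; k < 3; ++k) {
                v.p[k] = wa * A.p[k] + wb * B.p[k] + wc * C.p[k];
                v.n[k] = wa * A.n[k] + wb * B.n[k] + wc * C.n[k];
            }
            for (int k = 0; k < 2; ++k)
                v.uv[k] = wa * A.uv[k] + wb * B.uv[k] + wc * C.uv[k];
            out.push_back(v);
        }
    }
    return out; // index-buffer generation omitted for brevity
}

// "Hardware" step: the tiny per-vertex job a vertex unit could do -- push the
// placeholder vertex out along its interpolated normal by the sampled height.
static void displace(Vertex& v, float scale) {
    const float h = scale * sampleDisplacement(v.uv[0], v.uv[1]);
    for (int k = 0; k < 3; ++k) v.p[k] += h * v.n[k];
}

int main() {
    const Vertex A = {{0, 0, 0}, {0, 0, 1}, {0, 0}};
    const Vertex B = {{1, 0, 0}, {0, 0, 1}, {1, 0}};
    const Vertex C = {{0, 1, 0}, {0, 0, 1}, {0, 1}};
    std::vector<Vertex> mesh = tessellate(A, B, C, 8);
    for (Vertex& v : mesh) displace(v, 0.1f);
    std::printf("%zu placeholder vertices displaced\n", mesh.size());
    return 0;
}

If the split really is something like that, the part that has to live in silicon is only a small amount of per-vertex work, which is why I'd like to hear how ATI actually does it.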
 
I'm most interested in seeing a lower-clocked version of this card, perhaps 400/800MHz, without the need for such exotic cooling. If nVidia's yields are faring well enough to speed-bin a 400MHz chip with success, then they likely have a real winner, since it should cost less per chip to manufacture, and I'm assuming the PCBs will cost significantly less to produce as well (compared to the dense 9700 boards).
 
partly right... BUT

then the damn thing would not be much faster than a 9700Pro any more!
And the latter costs around $320 at the moment...

I fully expect the 500MHz part to consist of the top 10% performing chips of the whole batch. The rest will be sold as the non-Ultra versions IMHO.

Nice times to come... I need a new PC in spring, and am happy with the choices being thrown at me.
 
2B-Maverick said:
then the damn thing would not be much faster than a 9700Pro any more!
What makes you think it's going to be faster in the first place? 8)
I'd find it really funny if ATi pulled a "new Detonator" on nVidia, i.e. released new performance-enhancing drivers the day the first real NV30 benchmarks hit the web.
 
2B-Maverick said:
partly right... BUT

then the damn thing would not be much faster than a 9700Pro any more!
And the latter costs around $320 at the moment...

I fully expect the 500MHz part to consist of the top 10% performing chips of the whole batch. The rest will be sold as the non-Ultra versions IMHO.

Nice times to come... I need a new PC in spring, and am happy with the choices being thrown at me.

That's exactly my point. A product that's on par with the 9700 in performance, yet much more profitable.
 
A lower-clocked, equivalent-performance NV30 board will still sell well on brand recognition, loyalty, and the belief that the driver quality is superior. And JC will undoubtedly endorse the 5800/5800U for Doom3, both because it will be faster than the R300 in Doom3 (however small or large the difference) and because of the OpenGL driver quality.
 
There's something I forgot to ask, and I've not seen it addressed elsewhere either, that could have quite an impact in DoomIII: can the GFFX pack its stencil ops into one pass?
 
Nvidia claimed two-sided stencil in their CineFX papers, I believe, so I would be surprised if the GFFX is missing this.
 
Is it so hard to understand that there isn't going to be an overall winner this time around? That, depending on the game or application, one card is going to win, and the other will lose?

Not for me personally. ATI took such a gigantic leap in performance over the R200 with the R300 that it seemed virtually impossible to me for any IHV to crank out an accelerator that would be a lot faster while still keeping a $400-500 launch price tag.

Could be that many got too excited during the prolonged anticipation of the Uber-chip, and it's more the psychological factor that lets them down than the card itself.

I mean, come on: the R300 launched only a couple of months ago, and people can already snatch a card like that for around $300 or slightly more, while highly potent mainstream cards like the 4200s cost half that or less.

I too would like an accelerator right now that's capable of 16x-sample MSAA + 32x aniso with minimal performance cost, but I'm not willing to pay $1000+ for it.
 
DemoCoder said:
megadrive0088 said:
The thing is, GeForce FX isn't lacking in performance; the thing is clocked at 500MHz! What it's missing is FEATURES, like a new method of AA, a wide bus, etc.

A wide bus isn't a feature, it's architecture. From the point of view of the software or the end user, it doesn't matter what architecture is used to render a pixel to the screen, only that the pixel is rendered fast and correctly.

ATI and NVidia made different design choices. NVidia designed to save some transistors and go with faster memory for more bandwidth, the same as they did back when 3dfx and NVidia were making bandwidth decisions (DDR vs. SLI). This time around, ATI decided to build a wide bus rather than rely on exotic memory, while PowerVR/Kyro relies on tiling for its bandwidth.

Kudos to ATI for doing something different this time around (256-bit bus, like Parhelia and P10), but doing something different isn't equivalent to doing something better. It's laughable to call DDR2 a "brute force approach" but not a 256-bit bus. Both are very simplistic ways of increasing bandwidth, unlike something more complex and "elegant" like deferred rendering.
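
To put rough numbers on those two "simplistic" approaches (using the memory clocks that have been reported so far, so treat them as approximate), the back-of-the-envelope math looks like this:

Code:
#include <cstdio>

// Peak bandwidth in GB/s from bus width (bits) and effective data rate (MHz).
static double peakGBps(int busBits, double effectiveMHz) {
    return (busBits / 8.0) * effectiveMHz * 1e6 / 1e9;
}

int main() {
    // 9700 Pro: 256-bit bus, ~310MHz DDR (~620MHz effective)
    std::printf("R300 (256-bit, DDR):  %.1f GB/s\n", peakGBps(256, 620.0));
    // GeForce FX Ultra: 128-bit bus, ~500MHz DDR2 (~1000MHz effective)
    std::printf("NV30 (128-bit, DDR2): %.1f GB/s\n", peakGBps(128, 1000.0));
    return 0;
}

On those raw numbers the wide-but-slower bus actually comes out slightly ahead (~19.8 GB/s vs. ~16 GB/s); what matters after that is how efficiently each chip uses what it has.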


It appears that NVidia has spent more of their transistor budget on shading this time around. They see a large market not just in games, but in workstations and offline rendering. ATI made a different decision on where to spend their resources.

Neither is "better" or "correct", just different, depending on the audience or vision they are trying to fulfill.

So now we are left with the same situation as Intel and AMD: two big players with nearly equivalent performance but different architectures, Intel going for the clock-speed race, AMD doing more per clock. Again, both achieve the same thing through different means, and depending on the benchmark used, Intel wins or AMD wins.


Is it so hard to understand that there isn't going to be an overall winner this time around? That, depending on the game or application, one card is going to win, and the other will lose?

That must be the first intelligent thing I've heard about GeForceFX since its launch.
 
BoddoZerg said:
That must be the first intelligent thing I've heard about GeForceFX since its launch.

Agreed. There's too much psychological fall-out from ATi beating NV to market with the 9700. That said, however, I'm still extremely disappointed with NV30's AA support. The more advanced shading capabilities might be attractive to developers/programmers, but consumers, who will most likely never see those capabilities really supported in games within the product's lifespan, will instead be forced to 'enjoy' its inferior AA. And here I thought it was all about better pixels... apparently not (at least from the above perspective).

Edit: I just realized how closely this situation mirrors the GF2 vs. V5 battle back in '00. :rolleyes:
 
John Reynolds said:
Edit: I just realized how closely this situation mirrors the GF2 vs. V5 battle back in '00. :rolleyes:

Other than that the R300 isn't missing any key DX features, and the R300 gets to sit unopposed in the market for longer :)
 
Actually, it seems more analogous to the original GF2 versus Radeon battle. The GF2 was sorely imbalanced with a lack of memory bandwidth, but compensated for this by having unearthly core clockspeeds.

As for the AA, I'm greatly disappointed with it too. For a chip that claims to be based on 3dfx technology, there's precious little that sounds "3dfx-ish" in the NV30. No texture computer, no jittered AA, no Gigapixel tiling/deferred rendering; in fact, the most 3dfx-like thing about the GeForceFX is the endless delays and the benchmarkless paper launch. ;)

However, it's ridiculous that people are taking the specs and jumping to conclusions about the NV30's performance. You might as well take the specs for Prescott and tell us how much faster than the Pentium 4 it will be.
 
Well, from a marketing perspective, I can think of one IHV that's actually glad they didn't opt (this time around) for a deferred renderer ;)
 
John Reynolds said:
Edit: I just realized how closely this situation mirrors the GF2 vs. V5 battle back in '00. :rolleyes:

Except there's one big difference: back then, neither card was really fast enough to use AA much at all, especially not for people who are fussy about framerates.

Now we are in a situation where high-end cards are perfectly capable of using FSAA in nearly all games, often at a high setting, while still getting a high framerate. Bringing out a card with rubbish AA now is much more wasteful than it was with the GF2.
 
DemoCoder said:
Nvidia claimed two-sided stencil in their CineFX papers, I believe, so I would be surprised if the GFFX is missing this.

I believe it's a feature required in the DX9 spec.
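
For anyone curious what "packing the stencil ops into one pass" buys you: here's a minimal DX9-style sketch of the stencil state for a z-fail shadow-volume pass (device creation, disabling colour/depth writes, and the actual volume draw are assumed to happen elsewhere). Whether or not it's strictly mandated by the spec, it halves the number of stencil draws per shadow volume compared to single-sided stencil.

Code:
#include <d3d9.h>

// One-pass shadow-volume stencil update using two-sided stencil.
// Without it, the same volume needs two draws: back faces incrementing on
// depth fail, then front faces decrementing, each with the opposite cull mode.
void setupTwoSidedStencil(IDirect3DDevice9* dev)
{
    dev->SetRenderState(D3DRS_STENCILENABLE, TRUE);
    dev->SetRenderState(D3DRS_TWOSIDEDSTENCILMODE, TRUE);
    dev->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE); // draw both faces at once

    // Front (clockwise) faces: decrement on depth fail.
    dev->SetRenderState(D3DRS_STENCILFUNC,  D3DCMP_ALWAYS);
    dev->SetRenderState(D3DRS_STENCILFAIL,  D3DSTENCILOP_KEEP);
    dev->SetRenderState(D3DRS_STENCILZFAIL, D3DSTENCILOP_DECR);
    dev->SetRenderState(D3DRS_STENCILPASS,  D3DSTENCILOP_KEEP);

    // Back (counter-clockwise) faces: increment on depth fail.
    dev->SetRenderState(D3DRS_CCW_STENCILFUNC,  D3DCMP_ALWAYS);
    dev->SetRenderState(D3DRS_CCW_STENCILFAIL,  D3DSTENCILOP_KEEP);
    dev->SetRenderState(D3DRS_CCW_STENCILZFAIL, D3DSTENCILOP_INCR);
    dev->SetRenderState(D3DRS_CCW_STENCILPASS,  D3DSTENCILOP_KEEP);
}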
 