Tech-Report blasts GeForce FX

Chalnoth - what I read is a guy who is fed up with the hype that Nvidia is spreading and is trying to give a fair shake of the broom to both ATI and Nvidia. Can anyone here truly say this card is impressive? Can you say it's impressive when it's at least 6 months behind a card that does 98% of what this card does? I for one am not impressed at all. I haven't been impressed with anything from Nvidia since the GeForce (the GeForce 3 was kinda impressive, but not too much).
 
So you are saying 128-bit support is trivial? Try doing that with a 0.15 micron design. It will melt your computer...

I didn't say the 9700 is bad. All I am saying is that bashing NV30 is totally unjustified just because they are 3-4 months behind, considering the huge risk they took.

Also, there are performance-oriented parts of the NV30 design that nobody on this board is paying attention to, for example the new vertex shader array. You guys are only paying attention to the surface-level feature descriptions.

And the 9700 does 98% of what NV30 does? You gotta be kidding.
 
Hey, everyone bashed the Radeon 8500 for being later than the GeForce 3, and yet it did far more than even the GeForce 4 can do. Why can't I use the arguments that were used before? Wait, I remember them clearly: who cares what is going to happen two years or more from now, my current card won't even be in my computer by then.

I try to be fair, but you know what, this didn't do anything for me. It's like having a Radeon out and then launching a TNT2: nothing happens. Yes, 128-bit color is great; yes, going beyond DX9 is great. But guess what, ATI is doing 96-bit color and going beyond DX9 too, and they did it half a year ago.

This is so boring it's not even funny. What's worse is this thing could be a dog in performance. You notice they only compare it against the R9700 Pro in a benchmark once, and that's a custom Doom 3 level.

Please stop looking like a fanboy, or if you are one, come out and admit it like I did.
 
Uh, I could have sworn that Nvidia doesn't have the final production version of the silicon back yet. So let me see, you guys want the A0 or A1 spin of the NV30 to be benchmarked against final R300 silicon? Sounds real fair to me...
 
I didn't say the 9700 is bad. All I am saying is that bashing NV30 is totally unjustified just because they are 3-4 months behind, considering the huge risk they took.
Try about 6 months, by the projected February release. And yes, the 9700 does damn near everything the GFX does. The GFX's only notable advantages are its higher clock frequencies and its theoretical bandwidth (the jury's still out on that one).

Mind you, nVidia could still pull a rabbit out of their collective hat when they launch it, if it really blows the R9700 out of the water in independent benchmarks. But that's not looking like it's going to be the case right now.

Even going by nVidia's own claims of a 30% speed increase over the R9700, is it really worth waiting an extra 3-4 months, plus spending an extra $100-$150? It certainly doesn't look that way to most people. Sure, there will always be the "hardcore" types that have to get the latest, bleeding-edge cards, and the NV30 is going to be for them. But for everyone else, the R9700 Pro is looking more and more appetizing.
 
As it looks at the moment, the 128-bit color support of NV30 does not give visibly better graphics than the 96-bit color of the R300 and can thus easily be dismissed as 'trivial' (at least until someone shows pictures where there is a real difference). The new vertex shader array appears to deliver less clock-for-clock performance than the more traditional four vertex shaders of the R300, and is thus not that impressive. The added programmability of NV30 over R300 seems to mostly fall in a no-man's land between VS/PS 2.0 and 3.0, and is thus of somewhat limited interest too. Color compression? It's there in R300 too.
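For a rough sense of why the 96-bit vs 128-bit difference is so hard to show in pictures, here is a minimal back-of-the-envelope sketch. It assumes the commonly quoted per-component formats (R300 using an FP24 format with a 16-bit mantissa, NV30 using IEEE FP32 with a 23-bit mantissa) and an 8-bit-per-channel display; those figures are assumptions for illustration, not vendor documentation.

```python
# Rough precision comparison: assumed FP24 (16-bit mantissa) vs FP32 (23-bit
# mantissa) shader components, against the step size of an 8-bit display channel.

fp24_step = 2.0 ** -16        # smallest relative step near 1.0 for an FP24 component
fp32_step = 2.0 ** -23        # smallest relative step near 1.0 for an FP32 component
display_step = 1.0 / 255.0    # smallest step an 8-bit-per-channel display can show

print(f"FP24 step:      {fp24_step:.2e}")
print(f"FP32 step:      {fp32_step:.2e}")
print(f"8-bit display:  {display_step:.2e}")
print(f"FP24 resolution below the display step: ~{display_step / fp24_step:.0f}x")
```

On those assumed formats, even the "lesser" 96-bit path has a couple of hundred times more resolution than an 8-bit display can show, so any visible difference would need a very long chain of dependent shader operations to accumulate.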

As for what NV30 appears to lack and R300 definitely has: a 256-bit memory bus, displacement mapping, rotated/programmable grid AA, floating-point cubemaps, multiple render target support. Hummm .....

Actual performance vs R300? Core clock says one thing, memory bandwidth another. We'll see - eventually.
 
NV30 = late and (somewhat) underspecced due to feature crawl. And they sure didn't see the punch ATI threw at them with the 9700.
Anyone remember the argument NVIDIA had against 3dfx when they brought out the Voodoo5 with the T-Buffer?
They said it was a highly unbalanced part, with massive fillrate but lacking in geometry processing capabilities (because GeForce brought T&L).

Now the tables have turned and it is Nvidia with the unbalanced part. This time they poured their effort into shader performance and left out a wider memory bus, forcing them to resort to exotic, expensive, ultra-fast memory to make up for it. Guess you can't win them all. I was really hoping for it to have an impressive feature set so it would drive the competition's prices down, but alas, it is not to be. ATI may not even need to launch a refresh of the 9700 so soon, and I kinda resent that because I like to see technology advancing fast, so that we as consumers can reap the benefits from it.
 
Both are impressive cards, but as the 9700 delivered the jump significantly before the NV30, with the bulk of the useful (to gamers) features, I gotta admit to being extremely disappointed with the NV30.

So many promises seemingly unfulfilled, as of yet.
 
kid_crisis said:
...you guys want the A0 or A1 spin of the NV30 to be benchmarked against final R300 silicon?

A1 (or A01) is the stepping of the current samples. Being only the first re-spin, and given various attendees' comments about stability, that's a sensational accomplishment. If the next spin is final, then a 125M-transistor chip bedded down in two revisions is a great feat. We may not even have seen the final PCB design...
 
128-bit not useful... DX9+ irrelevant...

Hmm. Where did I hear this kind of talk before? Yup, from 3dfx fanatics (shades of 16-bit color...). While ATI barely met the DX9 spec, NV30 exceeds it. That's the key point, because only being on 0.13 allows that implementation.

Who cares about the 256-bit/128-bit argument? That's an internal design decision. If NV30's performance exceeds the 9700's, that's what counts.

NV30 unbalanced, NV30 unfulfilling... Where's the justification?
 
With 16 vs 32 bits, you could see a very clear difference once you laid a couple of translucent layers of effects on top of anything - with 96 versus 128 bits ... I'd like to see any pictures that actually demonstrate the difference. Anyone?
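To make the 16-vs-32-bit point concrete, here is a minimal sketch (the layer count, alpha, and color values are arbitrary illustrative choices): it repeatedly alpha-blends a translucent layer and rounds the result to the framebuffer precision after every pass, the way hardware of that era did.

```python
def blend_layers(layers, channel_bits=None, src=0.6, alpha=0.3, dst=0.2):
    """Blend `layers` copies of a translucent source over dst, rounding each pass
    to `channel_bits` of framebuffer precision (None = keep full float precision)."""
    value = dst
    for _ in range(layers):
        value = alpha * src + (1 - alpha) * value
        if channel_bits is not None:
            step = (1 << channel_bits) - 1
            value = round(value * step) / step   # framebuffer round-off each pass
    return value

reference = blend_layers(8)                       # no round-off, for comparison
for bits, label in [(5, "16-bit (5 bits/channel)"), (8, "32-bit (8 bits/channel)")]:
    error = abs(blend_layers(8, bits) - reference)
    print(f"{label}: accumulated error after 8 layers = {error:.4f}")
```

With a handful of layers, the 5-bit channel drifts by several display steps (visible banding), while the 8-bit channel stays within a single step; 96- or 128-bit floating point pushes the round-off so far below the display's resolution that the same experiment shows nothing.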

As for 128 vs 256 bits and balance: the fillrate/bandwidth ratio and memory compression methods of NV30 are pretty much the same as for the Radeon 9500 Pro, and we know that the 9500 Pro is horribly unbalanced: essentially 100% bandwidth limited all the time. An NV30 with a 256-bit bus would (almost certainly) have creamed the Radeon 9700 Pro by 60% or possibly even more. As of now, it's really anybody's best guess which one is actually faster.
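A quick sketch of that ratio, using the clock and bus figures that were publicly reported around the announcement (treat them as assumptions rather than official specs):

```python
# Back-of-the-envelope fillrate vs bandwidth comparison.
# pipes, core MHz, bus width (bits), effective memory MHz -- assumed figures.
cards = {
    "GeForce FX (NV30)": (8, 500, 128, 1000),   # DDR-II at 500 MHz, double data rate
    "Radeon 9500 Pro":   (8, 275, 128,  540),
    "Radeon 9700 Pro":   (8, 325, 256,  620),
}

for name, (pipes, core_mhz, bus_bits, mem_mhz) in cards.items():
    fillrate_gpix = pipes * core_mhz / 1000.0          # Gpixels/s
    bandwidth_gb = bus_bits / 8 * mem_mhz / 1000.0     # GB/s
    print(f"{name:18s}  {fillrate_gpix:.1f} Gpix/s  {bandwidth_gb:5.1f} GB/s  "
          f"{bandwidth_gb / fillrate_gpix:.1f} bytes/pixel")
```

On those assumed numbers, NV30 and the 9500 Pro both end up with roughly 4 bytes of bandwidth per pixel of fillrate, against nearly 8 for the 9700 Pro, which is exactly the imbalance described above (before whatever the color/Z compression claws back).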
 
First, you say:
sc1 said:
128-bit not useful... DX9+ irrelevant...

Hmm. Where did I hear this kind of talk before? Yup, from 3dfx fanatics (shades of 16-bit color...).
Because people are saying "who cares?" about seemingly trivial differences. Then, you say:
Who cares about the 256-bit/128-bit argument?
About something that is vital to how well the card will perform. Hmmm...
 
sc1 said:
128-bit not useful... DX9+ irrelevant...

Hmm. Where did I hear this kind of talk before? Yup, from 3dfx fanatics (shades of 16-bit color...). While ATI barely met the DX9 spec, NV30 exceeds it. That's the key point, because only being on 0.13 allows that implementation.

Who cares about the 256-bit/128-bit argument? That's an internal design decision. If NV30's performance exceeds the 9700's, that's what counts.

NV30 unbalanced, NV30 unfulfilling... Where's the justification?

If NV30 doesn't support displacement mapping, WTF are you talking about with Nvidia exceeding DX9 specs? Let's refresh what Thomas posted here:

Got a message from Doug Rogers:

No, the GeForce FX does not support displacement mapping.

-Doug Rogers
NVIDIA Developer Relations


Displacement mapping is a DX9 feature. :LOL:

Higher precision is better for sure, but would 128-bit vs 96-bit be visibly different? I wouldn't bet on it...

ATI barely met DX9 spec?? Please explain that comment. :rolleyes:
 
While ATI barely met the DX9 spec

Really?

The R300 has 32 temporary registers in the vertex and pixel shaders (64 "total"). We currently "reveal" 12 in the pixel shader (not sure about vertex shader), following DX9 recommendations. We will raise that as caps bits allow or DX9 specs change.

Barely, you say? Search for sireric's posts to learn more about the capabilities of R300.

NV30 exceeds it [DX9]

Not true: neither R300 nor NV30 meets the shader version 3.0 spec.
 
Mintmaster said:
This article is also the only one to mention that the 9700 also has colour compression (and judging by the "Update", it seems like they almost forgot as well).

Yes, I had to remind Scott about that one, hence the update.

Everybody has a blind spot on R300 AA colour compression. I think the reason is that the initial PR documentation wasn't clear at all on it. The reason I was clear on it was sireric!
 
ATI must be extreeeemely pleased with its purchase of ArtX for $400 million around April of 2000. By acquiring ArtX, ATI has brought itself to parity with Nvidia, more or less. Some would even argue that ATI has surpassed Nvidia.
 
I actually just watched the launch video from start to finish. I've been real busy, so I haven't really had a chance to digest the thing, as in times past.

I should also say that I've not always been blown away by every product launch. In fact, I wasn't exactly plopping down some cash for the GF3 when it was first announced...

Having said that, I think there's a certain amount of truth to the idea that the 9700 diminished the impact of the GeForce FX. It ushered in a lot of the technology that nVidia is just now bringing to the forefront.

Regardless, this product has a lot of punch. You can say that 128-bit is not needed... all the massive programmability is overkill and/or won't have software support for some time to come... etc. But this shouldn't take away from what these guys have developed, IMHO.

The level of programmability is pretty darn awesome. Throwing in Cg is, IMHO, a good move. 500 MHz per pipeline is pretty darn spectacular in its own right! 128-bit throughout the entire pipeline is the same...regardless of whether or not one thinks it might be indistinguishable from what the 9700 offers.

And then there's the performance. These boards weren't even final silicon yet, and the drivers will have plenty of time to attain even higher levels of performance. At the end of the day, I think the sheer power is going to be awesome.

If you're somebody who already bought a 9700, NV30 isn't going to be for you... and that would have been just as true if NV30 had come out before the 9700. Quite frankly, I could never understand people doing the ole' switcheroo on a card that was brand new...

I don't know... maybe I'm not looking at this thing right, compared to some of the naysayers, but I see a lot of positives in this product. I also believe that nVidia will likely reap benefits from what was put into NV30 down the road. Time will tell, but I think the hard work will end up paying off in the long run.
 
megadrive0088 said:
The thing is, GeForce FX isn't lacking in performance; the thing is clocked at 500 MHz! What it's missing is FEATURES, like a new method of AA, a wide bus, etc.

A wide bus isn't a feature, it's architecture. From the point of view of the software or the end user, they don't care what architecture is used to render a pixel to the screen, only that the pixel is rendered fast and correctly.

ATI and NVidia made different design choices. NVidia chose to save some transistors and go with faster memory for more bandwidth, the same as they did back when 3dfx and Nvidia were making bandwidth decisions (DDR vs. SLI). This time around, ATI decided to build a wide bus rather than rely on exotic memory. PowerVR/Kyro relies on tiling for its bandwidth.

Kudos to ATI for doing something different this time around (256-bit bus, like Parhelia and P10), but doing something different isn't equivalent to doing something better. It's laughable to call DDR2 a "brute force approach" but not a 256-bit bus. Both are very simplistic ways of increasing bandwidth, unlike something more complex and "elegant" like deferred rendering.
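As a quick illustration of how similar the two "simplistic" routes are on paper (the memory clocks here are the reported launch figures and should be read as assumptions):

```python
def peak_bandwidth_gb_s(bus_bits, effective_mhz):
    """Peak bandwidth in GB/s = bus width in bytes x effective data rate."""
    return bus_bits / 8 * effective_mhz / 1000.0

# Narrow bus + fast DDR-II vs. wide bus + slower DDR -- two ways to the same end.
print("NV30, 128-bit @ 1000 MHz effective DDR-II:", peak_bandwidth_gb_s(128, 1000), "GB/s")
print("R300, 256-bit @  620 MHz effective DDR:   ", peak_bandwidth_gb_s(256, 620), "GB/s")
```

Neither route is cleverer than the other; one trades board and pad complexity for memory cost, and on paper they land within about 20% of each other.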


It appears that NVidia has spent more of their transistor budget on shading this time around. They see a large market not just in games, but in workstations and offline rendering. ATI made a different decision about where to spend their resources.

Neither is "better" or "correct", just different, depending on the audience or vision they are trying to fufill.

So now we are left with a situation like Intel and AMD: two big players with nearly equivalent performance but different architectures, Intel going for the clock speed race, AMD doing more per clock. Again, both achieve the same thing through different means, and depending on the benchmark used, Intel wins or AMD wins.


Is it so hard to understand that there isn't going to be an overall winner this time around? That, depending on the game or application, one card is going to win, and the other will lose?
 