R420 may beat NV40 in Doom3 with anti-aliasing

Chalnoth said:
No, ATI realistically doesn't have that choice. In my view, they've made their mistake. What I want to happen now is for nVidia to sell quite a few more NV4x's than ATI sells R4xx's over the next 12-18 months. That will validate the position of pushing technology forward, which is what I want to see.

Your obvious bias shines through, Chalnoth.
 
What about the Radeon 8500? As far as I'm concerned, its problem was poor drivers and lack of multisampling AA. I don't see how this is related.
 
Albuquerque said:
Chalnoth said:
And still, that bet is the stupidest bet I've ever heard of. Of course it won't come to full fruition until their next product cycle, for the simple reason that SM3 hardware is only beginning to be available now! It's really simple: hardware must come before software. ATI's riding nVidia's coattails on this one. nVidia's paving the way for SM3 support, and possibly spending more money/receiving less this generation because of it, while ATI's claiming the performance lead by not bothering to be a first-adopter on technology.

If this decision of ATI's proves to be advantageous for them, then look out. We could very well start to see a new style of competition: let's see who can come out with new technology last. That's not the kind of competition I want to see in the 3D market.
I think we agree (in a general fashion) that ATI didn't have to put in a significant amount of work on the R420; it's largely a quad 9600 and benches in a similar fashion. But I believe your other thoughts on this matter are a little skewed. Of course, this is my opinion and we're both entitled to ours, but let me explain:

I believe ALL the major video players (previous and current) have come to the same essential point in their business cycle where it just makes more sense to extend the current hardware rather than completely rebuild it. Examples:

3DFX Voodoo 1 -> Voodoo 2
At the time, all 3DFX needed was to keep the fillrate up and they could essentially squash everyone else. The doubling of the render cores gave them free trilinear filtering, yay!

NVIDIA GeForce256 DDR -> GeForce 2
3DFX had killer features at the time (AA, free trilinear, all that cool post-processing stuff that we didn't see again until the last year or so), but NV really only needed to keep the speed up in order to appeal to the masses. Why spend all the cash on a new core when making the previous one faster would suffice?

NVIDIA GeForce 3 -> GeForce 4
The ATi part had more features, but who was really using them? By the time anyone cared about those extra features, NV could have their new architecture.

ATI 9800 -> X800
Welp, here we are again. NV has more features, but who is really using them? By the time anyone cares about those features, ATI could have their new architecture.

Neither of us has any right to put a corporate entity on any sort of "moral high ground", as they are BOTH guilty of taking some cheap shots at the public in their past dealings. Just as 3DFX pushed us ahead without any real competition, so has NVIDIA, and so has ATI. They both understand that they can't remain stagnant; they both understand that they need to cater to developers and users alike.

It just so happens that, in this business cycle, ATI decided to stick with their tried-and-true, and NV decided they needed another solution. For each of them, it was likely the best decision. You and I both would love to see a new generation of hardware EVERY business cycle, but business doesn't always work that way for obvious reasons :)

It seems you believe that NVIDIA made the better decision; I feel they made their only decision. The only question now is whether ATI made the right decision... The answer comes only with time, not with us bickering here in this (or any other) thread.
Okay, the first two were essentially architecture milking, with the important exception that the GeForces had the feature that mattered to consumers and developers at the time - T&L. FSAA is only icing on the cake; if most of the cake is missing, the icing won't do you much good.

GF3 -> GF4: the nVidia parts dominated because of PS 1.1, which became the industry-standard baseline for DX8 on and off the PC.

With the current generation, ATi is like 3dfx was: great icing features, but using last generation's cake mix.
 
Chalnoth said:
No, ATI realistically doesn't have that choice. In my view, they've made their mistake. What I want to happen now is for nVidia to sell quite a few more NV4x's than ATI sells R4xx's over the next 12-18 months. That will validate the position of pushing technology forward, which is what I want to see.
Did you hold this same view when the 8500 was outsold by the GF3 and GF4? Just curious...
I see a massive difference here. ATI made a choice to not support PS3. nVidia screwed up with the NV3x's architecture design, making it too hard to develop a compiler. I can forgive a mistake. A choice, I'm less likely to.
NVIDIA made multiple bad choices in the NV30 line (I'm not talking only hardware), which resulted in a very large overall mistake. There are a significant number of people who do not share your willingness to forgive those choices, but back to my point... You would forgive NVIDIA's many bad judgements for that entire generation of cards, but would instantly condemn ATI for not supporting a brand-new, unreleased feature set?

Again, you are conveying that SM3 is the only choice for this generation, which simply is not true. Is SM3 a good thing? Sure, just like PS 2.0 / VS 2.0 was a good thing, and PS 1.x was a good thing. Does that mean that all companies must now support it or their product is rubbish?

Until Intel starts releasing hardware-accelerated PS 2.0 compliant chipsets onto the market (hell, do they even have PS 1.x hardware out yet?), I will still firmly believe that SM3 is not the only choice for this generation. I'd love to have SM3 available from all the video manufacturers, but I'd also love to have X86-64 from all the CPU manufacturers too. Sometimes, the proper business choice isn't the one that has the absolute most features...
 
Albuquerque said:
Did you hold this same view when the 8500 was outsold by the GF3 and GF4? Just curious...
The 8500 had more problems than just being a slight underperformer. It also had poor drivers and poor FSAA performance.
 
radar1200gs said:
Okay, the first two were essentially architecture milking, with the important exception that the GeForces had the feature that mattered to consumers and developers at the time - T&L. FSAA is only icing on the cake; if most of the cake is missing, the icing won't do you much good.
I generally agree; it was architecture milking indeed. And yes, T&L mattered to the consumers and developers at the time, which extended those cards' usability. I'm not sure what you were referring to with the FSAA remark, which also leaves me confused about the icing :?: I'll just skip that part hehe...

radar1200gs said:
GF3 -> GF4: the nVidia parts dominated because of PS 1.1, which became the industry-standard baseline for DX8 on and off the PC.
I would disagree strongly; the GF3 and GF4 didn't dominate because of pixel shaders, since essentially nobody was using them during that time. They dominated because they had the raw speed for playing all the then-current games -- games with lots of fillrate needs: Quake 3 Arena, Unreal Tournament, Half Life, Counter Strike, RtCW, stuff like that. None of those needed pixel shaders; they needed raw fillrate. Pixel shaders didn't make a big debut on the PC until almost Halo's release...

radar1200gs said:
With the current generation, ATi is like 3dfx was: great icing features, but using last generation's cake mix.
I'm still not sure what the reference to icing/cake is, so I'll just skip that. ;) With the current generation, I feel that ATI is taking the "GF4" route -- they're upping the speed, milking the architecture, simply because consumers and developers still want fillrate, but are also starting to beg for high-speed pixel shading power. PS 2.0 is just now beginning to hit its prime time; consumers want to play their FarCry and HalfLife 2 and Doom3 at break-neck speeds with all the features on high. They don't need SM3 for any of this; they almost don't need all of PS 2.0 to do it.

You too seem to be saying that SM3 was the only correct choice for this generation, and again, I must strongly disagree. It's a great featureset to have, but it's far from any sort of requirement in the next year or more.
 
FSAA is expendable. It's very nice to have, but not necessary, especially if a competing graphics core has more basic functionality in it.

With 3dfx and the V4/5, they basically caught up to the TNT :oops: functionality-wise and added a few frilly bits round the edges. nVidia was moving on to T&L and laying the foundations for pixel shaders.

Same thing with the R420; it's just catching up to what NV3x had feature-wise (it still needs to work on vertex features, though), with some exotic texture formats and frilly FSAA bits round the edges. nVidia is moving on to SM3.0.
 
radar1200gs said:
FSAA is expendable. It's very nice to have, but not necessary, especially if a competing graphics core has more basic functionality in it.

With 3dfx and the V4/5, they basically caught up to the TNT :oops: functionality-wise and added a few frilly bits round the edges. nVidia was moving on to T&L and laying the foundations for pixel shaders.

Same thing with the R420; it's just catching up to what NV3x had feature-wise, with some exotic texture formats and frilly FSAA bits round the edges. nVidia is moving on to SM3.0.


There's one difference between that situation and the current one: the R420 is often faster than the NV40.
 
Ok, so I basically agree with your last statement there. FSAA really is just "icing" so long as the underlying features are pretty much the same. And obviously ATI is being a stick-in-the-mud, and they likely should have reconsidered their delayed move to SM3.

But does that make the R420 an inherently "bad" part? I would still give an emphatic "no"; the R420 is squarely where today's Joe Consumer wants to be. The business choice won over the enthusiast choice this time; that isn't always a "bad" thing. :)

Edit: Gentlemen, I will leave you now for some sleep! It's already 1:30am EST and I have grass to mow, cars to wash, and some laundry to do tomorrow! That, and the woman will kill me if I stay up half the night deliberating ATI's business models ;)
 
Really? So, then the GF4 series was bad? Edit: I say this because that is nVidia's version of the R420, so to speak. They were both mainly designed for higher performance with little advancement in features.
 
Ok, so just one more post:

Performance is no substitute for core features. See: 3dfx, Ruby.
I would surely agree, if you're lacking features that are mainstream -- please show me a mainstream SM3 application, or an announced SM3 application that has no fallback to PS 1.x (let alone PS 2.x) :)

And in reverse, featureset is no excuse for poor performance either ;)
 
IST said:
Really? So, then the GF4 series was bad?

No, it completely supported DX8.0.

It didn't support DX8.1 completely, but then again, neither did developers.

ATi's problem that generation was that a dev couldn't write a PS1.4 program and have DX allow it to run on lower PS models. (PS1.4 was radically different from 1.1, 1.2, 1.3).

It didn't help that the GF4 was faster in legacy games. That's where performance can be useful.
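To put that in concrete terms: the DX8 runtime never steps a shader down for you, so an app that wanted to use PS1.4 had to query the device caps and carry a separately written PS1.1 path on its own. A rough sketch of what that looks like (my own illustration, not anyone's actual engine code; the helper name is made up):

```cpp
#include <d3d8.h>

// Hypothetical helper: DX8 never downgrades a shader automatically, so the
// app inspects the device caps and picks among the shader sets it shipped.
int ChoosePixelShaderPath(IDirect3DDevice8* device)
{
    D3DCAPS8 caps;
    device->GetDeviceCaps(&caps);

    if (caps.PixelShaderVersion >= D3DPS_VERSION(1, 4))
        return 14;   // hand-written PS 1.4 shaders (Radeon 8500 path)
    if (caps.PixelShaderVersion >= D3DPS_VERSION(1, 1))
        return 11;   // hand-written PS 1.1 shaders (GF3/GF4 path)
    return 0;        // fixed-function fallback
}
```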
 
Albuquerque said:
Ok, so just one more post:

Performance is no substitute for core features. See: 3dfx, Ruby.
I would surely agree, if you're lacking features that are mainstream -- please show me a mainstream SM3 application, or an announced SM3 application that has no fallback to PS 1.x (let alone PS 2.x) :)

And in reverse, featureset is no excuse for poor performance either ;)

The thing is, SM3.0 will allow you to fall back, but something like PS1.4 won't.
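In practice that "fallback" means compiling one HLSL source against more than one profile, and it only succeeds if the shader fits the lower profile's limits. A minimal sketch (my own illustration, assuming a D3DX9 setup; the file and entry-point names are invented):

```cpp
#include <d3dx9.h>

// Hypothetical example: the same HLSL source compiled for ps_3_0 or ps_2_0
// depending on the device caps. The ps_2_0 compile fails if the shader
// exceeds that profile's instruction/register limits.
bool CompileForDevice(IDirect3DDevice9* device, LPD3DXBUFFER* code)
{
    D3DCAPS9 caps;
    device->GetDeviceCaps(&caps);
    const char* profile =
        (caps.PixelShaderVersion >= D3DPS_VERSION(3, 0)) ? "ps_3_0" : "ps_2_0";

    return SUCCEEDED(D3DXCompileShaderFromFile("lighting.fx", NULL, NULL,
                                               "PixelMain", profile, 0,
                                               code, NULL, NULL));
}
```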
 
Albuquerque said:
Ok, so just one more post:

Performance is no substitute for core features. See: 3dfx, Ruby.
I would surely agree, if you're lacking features that are mainstream -- please show me a mainstream SM3 application, or an announced SM3 application that has no fallback to PS 1.x (let alone PS 2.x) :)

And in reverse, featureset is no excuse for poor performance either ;)

A featureset is more than enough of an excuse for DECENT performance while the competition has higher performance. Poor performance was an NV3x and R2x0 problem, not an NV4x problem.

Then again, R420 has a decent featureset while NV40 has a better featureset. I consider R420 decent in features.

I would surely agree, if you're lacking features that are mainstream -- please show me a mainstream SM3 application, or an announced SM3 application that has no fallback to PS 1.x (let alone PS 2.x)

I think you will be a little disappointed.
http://www.beyond3d.com/forum/viewtopic.php?t=9982

At this point (things may change), I'm expecting that Splinter Cell - X will only support SM 1.1 and SM 3.0 when it comes out.

It's not certain, but it's likely. If Splinter Cell X doesn't support SM 2.0, the R420 will have to use the 1.1 version.
 
radar1200gs said:
Albuquerque said:
Ok, so just one more post:

Performance is no substitute for core features. See: 3dfx, Ruby.
I would surely agree, if you're lacking features that are mainstream -- please show me a mainstream SM3 application, or an announced SM3 application that has no fallback to PS 1.x (let alone PS 2.x) :)

And in reverse, featureset is no excuse for poor performance either ;)
The thing is, SM3.0 will allow you to fall back, but something like PS1.4 won't.
PS 1.4 allows just as much of a fallback to PS 1.x as PS 3.0 does for PS 2.0: i.e. none whatsoever. The developer has to explicitly code the shaders and app to allow the fallback.

Wrong again, Radar.

-FUDie
 