r420 may beat nv40 in doom3 with anti-aliasing

pat777 said:
R3x0 was king of shader architecture last generation, while the 59x0 had the most memory bandwidth. ATI focused on bandwidth (programmable memory controller) in this generation, while nVIDIA focused on shader architecture. I like how both companies focused on their weak points.
Which sounds kinda nice, until you realize that ATI totally ignored improvements for the rest of the architecture.
 
Chalnoth said:
pat777 said:
R3x0 was king of shader architecture last generation, while the 59x0 had the most memory bandwidth. ATI focused on bandwidth (programmable memory controller) in this generation, while nVIDIA focused on shader architecture. I like how both companies focused on their weak points.
Which sounds kinda nice, until you realize that ATI totally ignored improvements for the rest of the architecture.
Except for the improvements to the vertex shader, pixel shader and compression technologies. Oops, that would make you wrong.
 
I think "totally ignored" is a bit harsh; several components of their shader model were updated, but nothing was seriously overhauled. I think rather they hedged their bets on SM3.0 simply not coming to full fruition until their next product cycle. In reality, how many SM3 games are coming out in the next six to twelve months? I don't know of any, although we know that several developers may strap on a few additional features for owners of the new hardware (Crytek being the most obvious and only confirmable source thus far).

Time will tell if ATI made the right decision, but I think they did the best thing for their current business needs. They did their big architecture jump with the R200 -> R300 series, they seemingly wanted only to extend the architecture for a long enough period to give them time to ramp up their next big architecture jump.

Obviously NVIDIA chose to do a big architecture jump, which I feel was VERY necessary to keep them competitive. Kudos to NVIDIA for pulling it off successfully, and I wish them luck in their continuing effort.
 
Albuquerque said:
I think "totally ignored" is a bit harsh; several components of their shader model were updated, but nothing was seriously overhauled. I think rather they hedged their bets on SM3.0 simply not coming to full fruition until their next product cycle.
Yes, I did exaggerate somewhat, but the point still remains: ATI hardly advanced the rest of the architecture at all. They made an 8-pipeline part into a 16-pipeline part, streamlined a few things, and that's about it.

And still, that bet is the stupidest bet I've ever heard of. Of course it won't come to full fruition until their next product cycle, for the simple reason that SM3 hardware is only beginning to be available now! It's really simple: hardware must come before software. ATI's riding nVidia's coattails on this one. nVidia's paving the way for SM3 support, and possibly spending more money/receiving less this generation because of it, while ATI's claiming the performance lead by not bothering to be a first-adopter on technology.

If this decision of ATI's proves to be advantageous for them, then look out. We could very well start to have a new style of competition: let's see who can come out with new technology last. That's not the kind of competition I want to see in the 3D market.
 
Chalnoth said:
pat777 said:
R3x0 was king of shader architecture last generation, while the 59x0 had the most memory bandwidth. ATI focused on bandwidth (programmable memory controller) in this generation, while nVIDIA focused on shader architecture. I like how both companies focused on their weak points.
Which sounds kinda nice, until you realize that ATI totally ignored improvements for the rest of the architecture.

Just because they decided not to support SM3 doesn't mean they totally ignored all possible improvements. :rolleyes:

Chalnoth said:
If this decision of ATI's proves to be advantageous for them, then look out. We could very well start to have a new style of competition: let's see who can come out with new technology last. That's not the kind of competition I want to see in the 3D market.

You're beginning to sound like radar. SM3 is not a big jump in technology. It's rather fruitless to support a checkbox feature for the sole purpose of marketing. nVidia likes to do that, not ATI.
 
Chalnoth said:
And still, that bet is the stupidest bet I've ever heard of. Of course it won't come to full fruition until their next product cycle, for the simple reason that SM3 hardware is only beginning to be available now! It's really simple: hardware must come before software. ATI's riding nVidia's coattails on this one. nVidia's paving the way for SM3 support, and possibly spending more money/receiving less this generation because of it, while ATI's claiming the performance lead by not bothering to be a first-adopter on technology.

If this decision of ATI's proves to be advantageous for them, then look out. We could very well start to have a new style of competition: let's see who can come out with new technology last. That's not the kind of competition I want to see in the 3D market.
I think we agree (in a general fashion) that ATI didn't have to put a significant amount of work into the R420; it's largely a quad 9600 and benches in a similar fashion. But I believe your other thoughts on this matter are a little skewed. Of course, this is my opinion and we're both entitled, but let me explain:

I believe ALL the major video players (previous and current) have come to the same essential point in their business cycle where it just makes more sense to extend the current hardware rather than completely rebuild it. Examples:

3DFX Voodoo 1 -> Voodoo 2
At the time, all 3DFX needed was to keep the fillrate up and they could essentially squash everyone else. The doubling of the render cores gave them free trilinear filtering, yay!

NVIDIA GeForce256 DDR -> GeForce 2
3DFX had killer features at the time (AA, free trilinear, all that cool post-processing stuff that we haven't seen again until the last year or so) but NV really only needed to keep the speed up in order to appeal to the masses. Why spend all the cash on a new core when making the previous one faster would suffice?

NVIDIA GeForce 3 -> GeForce 4
The ATi part had more features, but who was really using them? By the time anyone cared about those extra features, NV could have their new architecture.

ATI 9800 -> X800
Welp, here we are again. NV has more features, but who is really using them? By the time anyone cares about those features, ATI could have their new architecture.

Neither of us has any right putting a corporate entity on any sort of "moral high ground", as they are BOTH guilty of taking some low strikes at the public in their past dealings. Just as 3DFX pushed us ahead without any real competition, so has NVIDIA, and so has ATI. They both understand that they can't remain stagnant; they both understand that they need to cater to developers and users alike.

It just so happens that, in this business cycle, ATI decided to stick with their tried-and-true, and NV decided they needed another solution. For each of them, it was likely the best decision. You and I both would love to see a new generation of hardware EVERY business cycle, but business doesn't always work that way for obvious reasons :)

It seems you believe NVIDIA made the better decision; I feel they made their only decision. The only question now is whether ATI made the right decision... The answer comes only with time, not with us bickering here in this (or any other) thread.
 
Well, fine, but this is the first time that a company has tried to extend the same architecture for over two years, at least since 3dfx. It's really time for ATI to move past the R3xx architecture.
 
Chalnoth said:
Well, fine, but this is the first time that a company has tried to extend the same architecture for over two years, at least since 3dfx. It's really time for ATI to move past the R3xx architecture.
I would have to disagree for this reason: in my opinion, your definition would apply to the GF3 -> GF4 transition also. They both carried the same underlying architecture, same number of pipes, same memory controllers, with only a few minor enhancements (nView, support for a higher quantity of RAM, etc.) over its GF3 bunkmate.

What I'm attempting to convey is this: It wasn't the wrong answer when 3DFX did it, it wasn't the wrong answer when NVIDIA did it. Right now, I don't think you or I or anyone else here on this forum have enough empirical evidence to prove if it's the wrong answer this time either.

Again, all my opinion, and we're both entitled. No hard feelings either way. :)
 
Chalnoth said:
Well, fine, but this is the first time that a company has tried to extend the same architecture for over two years, at least since 3dfx. It's really time for ATI to move past the R3xx architecture.

I agree, I'm always for the releasing of the best-you've-got. But you can't blame ATI for this - this is what you do when your competition is lacking.
 
Albuquerque said:
Chalnoth said:
Well, fine, but this is the first time that a company has tried to extend the same architecture for over two years, at least since 3dfx. It's really time for ATI to move past the R3xx architecture.
I would have to disagree for this reason: in my opinion, your definition would apply to the GF3 -> GF4 transition also. They both carried the same underlying architecture, same number of pipes, same memory controllers, with only a few minor enhancements (nView, support for a higher quantity of RAM, etc.) over its GF3 bunkmate.
That wasn't two years later. The GF4 was released one year after the GF3.
 
Chalnoth said:
I would have to disagree for this reason: in my opinion, your definition would apply to the GF3 -> GF4 transition also. They both carried the same underlying architecture, same number of pipes, same memory controllers, with only a few minor enhancements (nView, support for a higher quantity of RAM, etc.) over its GF3 bunkmate.
That wasn't two years later. The GF4 was released one year after the GF3.

The GF4 was on the market for a full year before nVidia released the 5800s, which means their GF3-based architecture sat on the market for a full two years. We've been over this before... would you please stop beating the same issues into the ground!!
 
Exactly, Chalnoth, so that means they extended the same tech for another year after the GF4 release!

JOHN BEAT ME TO IT!!! :cry:
 
John Reynolds said:
The GF4 was on the market for a full year before nVidia released the 5800s, which means their GF3-based architecture sat on the market for a full two years. We've been over this before... would you please stop beating the same issues into the ground!!
If process problems hadn't plagued the 5800, it would have been released no later than six months after the GeForce4 (some sources point to the FX architecture's original release date as being around the time the GF4 was released, with the GF4 delayed as "filler"). ATI has no such excuse.

Besides, I did say over two years. ATI's R3xx architecture will be the basis for their products for at least another year, meaning that architecture will be approaching 3 years old by the time it's replaced.
 
The difference, at least as I see it, is that the NV40 offers both performance and extended technology at once, much like the Radeon 9700 did; and with the X800 XT PE it got competition in performance, but not in features.
 
overclocked said:
The difference, at least as I see it, is that the NV40 offers both performance and extended technology at once, much like the Radeon 9700 did; and with the X800 XT PE it got competition in performance, but not in features.

Well, the 9700 offered something that was immediately usable, something that on previous cards was only a "toy" to play with and then turn off before actually gaming; it was the first card to deliver AA and AF together at playable framerates.

As far as I can see, neither of these cards has something comparable out of the gate; they merely ramp up speed (considerably), quality, and resolution and, in the case of NV40, add "toy" features that won't be usable for another generation.

Sure I'd take either in a heartbeat over my 9700P, but they're not enough to make me jump...not at those prices.
 
Chalnoth said:
ATI's R3xx architecture will be the basis for their products for at least another year, meaning that architecture will be approaching 3 years old by the time it's replaced.
Agreed. So now the question is, if NVIDIA's competing product is "merely" equal in current and near-future application performance, does this somehow necessitate ATI spending more money on a new architecture? Why should they rush an in-development architecture to meet a non-existent demand? Forum members can argue semantics about the viability of SM3 and floating-point frame buffers all day, but when it comes down to it, both companies are in business because they make the right business decisions.

The mindset you are conveying (of course, in my opinion) is that SM3 was the only right answer for this product cycle. It's nice that NVIDIA beat ATI to the punch on that; I'm quite happy for them. It's not unlike how ATI beat NVIDIA to the punch at PS2 -- especially if you consider PS2 at reasonable performance levels, not really attainable on the original NV30 launch products (and still questionable on the NV35 products as well.)

It's obvious from our recent past that having a single vendor with good PS2 performance was still enough merit for developers to begin PS2 games. Now that we're just starting to see those PS2 games coming out, we suddenly have NV hardware that's more than capable of performing well on those types of games.

So it might be assumed that, since we now have a single vendor with SM3 support (I'm assuming it performs well on the NV40) that developers will start coding in that direction. And by the time we start seeing those applications out, we'll have ATI hardware that's more than capable of performing well on those games too.

I guess it comes down to this: In my opinion, SM3 is not the only correct answer in this generation.
 
Mize said:
Well, the 9700 offered something that was immediately usable, something that on previous cards was only a "toy" to play with and then turn off before actually gaming; it was the first card to deliver AA and AF together at playable framerates.
That is total and utter bullshit. The first card to deliver both high-performance anti-aliasing and anisotropic filtering was the GeForce3. I, for one, rarely set my GeForce4 Ti 4200 to anything lower than 1024x768 with 2x AA and 8-degree anisotropic.

All that the R300 offered, by your argument, is exactly the same thing the NV40 and R420 offer (by your argument): higher resolutions and/or better framerates.
 
Albuquerque said:
Agreed. So now the question is, if NVIDIA's competing product is "merely" equal in current and near-future application performance, does this somehow necessitate ATI spending more money on a new architecture?
No, ATI realistically doesn't have that choice. In my view, they've made their mistake. What I want to happen now is for nVidia to sell quite a few more NV4x's than ATI sells R4xx's over the next 12-18 months. That will validate the position of pushing technology forward, which is what I want to see.

The mindset you are conveying (of course, in my opinion) is that SM3 was the only right answer for this product cycle. It's nice that NVIDIA beat ATI to the punch on that; I'm quite happy for them. It's not unlike how ATI beat NVIDIA to the punch at PS2 -- especially if you consider PS2 at reasonable performance levels, not really attainable on the original NV30 launch products (and still questionable on the NV35 products as well.)
I see a massive difference here. ATI made a choice not to support PS 3.0. nVidia screwed up with the NV3x's architecture design, making it too hard to develop a compiler for. I can forgive a mistake. A choice, I'm less likely to.
 
Chalnoth said:
No, ATI realistically doesn't have that choice. In my view, they've made their mistake. What I want to happen now is for nVidia to sell quite a few more NV4x's than ATI sells R4xx's over the next 12-18 months. That will validate the position of pushing technology forward, which is what I want to see.

I'm sure you felt the same way about the radeon 8500. :rolleyes:
 