G80 vs R600 Part X: The Blunt & The Rich Feature

A VERY small number of gamers (the enthusiasts if you will, although I don't particularly like that title) will upgrade their video card to the latest and greatest, uber-expensive, balls-to-the-wall video card.
Of course! :???:

And yes, quite a few gamers even try to plan out their video card purchases such that they will last at the very least 2 years. I have one such friend who just upgraded his 9700 Pro (from Spring 2003) to an 8800 GTX (Fall 2006). He doesn't plan on even THINKING about a new video card until Fall 2008 at the very earliest, although he's probably going to wait until 2009.
Well, imagine someone with a 6800/X800/7300GT kind of card - already 1.5 to 2.5 years old. His card is already a bit underperforming; IF he decides to upgrade, what are his options? ATM, NV cards are faster in all segments. A bit more expensive, but faster. What would one do? Buy the cheaper but slower card, or add $10-20 and buy the faster one? I know what a gamer will do. And that's the same thing I'd recommend to him, based on ATi's record in delivering over the last few years.
They must make cards that are not only good, but better than NV's, in order to get back the mind share they gained with R3xx, held with R4xx and lost with R5xx/6xx so far.
Yes, they'll sell lots of 2400/2600 in OEM deals and will make money, but NV has won the last two rounds.
IMHO
And the winner gets more attention from game devs, because it's more likely that the people who buy the game will have such a card...
 
The argumentation is bull, since the cheap bunch will also wait a year till the games land in the bargain bin, thus they'll also be ok within their own timeline.

My argumentation or his?:D
 
My argumentation or his?:D

Neither, or both; what I mean is that there are no viable arguments for either side. "Cheap" gamers don't buy any games until they land in the bargain bin anyway, thus they can use old cards a year or so longer than our usual "hardcore" gamer. Overall they're about a year behind the happenings, both SW and HW.
 
Neither, or both; what I mean is that there are no viable arguments for either side. "Cheap" gamers don't buy any games until they land in the bargain bin anyway, thus they can use old cards a year or so longer than our usual "hardcore" gamer. Overall they're about a year behind the happenings, both SW and HW.

This is me!
Oblivion looks AWESOME, guys.

*hugs his X1800*
 
LOL, see? I am t3h smartestest m4ster of the mighty Crystal Ball! :LOL: :LOL: :LOL:

EDIT: btw, I'm still on X1800 as well, since there is no fast new card in single slot format for my HTPC out there :(
 
No graphics card holds on for around two years. None. Not at the top. It simply slumps lower and lower, and you disable feature after feature after feature.

Naturally not at the top. But for acceptable performance you can generally get a card to last two years. My 9700 held for about a year and a half, only being replaced because it was a cheap PowerColor thing and its heat paste dried out. I replaced it with a then-cheap 9800 Pro before shifting to an X1800XL, which has held up nicely for about eighteen to nineteen months. I only snapped up the HD 2900 because it was so cheap, thanks to the exchange rate and the price it launched at.

So my experience has been that I can game quite happily for a shade under two years with ATi cards. Your mileage may vary.
 
:?: Games are the curve.

"The curve" would be the industry as a whole. Games are usually a year or two behind the latest and greatest in graphics technology.

And what if increasing the ALU:TEX ratio gives you diminishing returns when it comes to prettier pixels?

Eventually that will happen. I don't expect ALU:TEX to grow to infinity, but I think it'll continue to grow slowly, as it basically has ever since the Voodoo2.
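
Purely to put rough numbers on that trend, here's a quick back-of-the-envelope sketch in Python. The unit counts are the commonly quoted per-clock shader vs texture unit figures for ATI's chips, quoted from memory, so treat them as illustrative assumptions rather than exact throughput data.

# Rough ALU:TEX trend across ATI generations, using commonly quoted
# per-clock shader-unit vs texture-unit counts (figures from memory,
# for illustration only).
alu_tex = {
    "R300 (9700)":   (8, 8),    # 8 pixel shader pipes, 8 texture units
    "R420 (X800)":   (16, 16),
    "R520 (X1800)":  (16, 16),
    "R580 (X1900)":  (48, 16),  # the advertised 3:1 design
    "R600 (HD2900)": (64, 16),  # 64 superscalar shader units vs 16 samplers
}
for chip, (alu, tex) in alu_tex.items():
    print(f"{chip:15s} ALU:TEX ~ {alu // tex}:1")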

Yet from R420 to R600 bandwidth has grown twice as fast as bilinear texturing rate.

If you look at HDR data TEX rate has grown faster than bandwidth. The R600 can sample RGBA16F textures in a single cycle.
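
To attach some illustrative numbers to both claims (approximate launch specs quoted from memory for the X800 XT PE and the HD 2900 XT; treat the exact figures as assumptions, the point is only the relative growth):

# R420 vs R600, approximate launch-spec figures (assumed, from memory).
x800_bw  = 35.8   # GB/s, 256-bit GDDR3 @ 1.12 GHz effective
x800_tex = 8.3    # Gtexels/s bilinear, 16 TMUs @ 520 MHz
r600_bw  = 106.0  # GB/s, 512-bit GDDR3 @ 1.66 GHz effective
r600_tex = 11.9   # Gtexels/s bilinear, 16 filter units @ 742 MHz

print(f"bandwidth growth: {r600_bw / x800_bw:.1f}x")   # ~3.0x
print(f"bilinear growth:  {r600_tex / x800_tex:.1f}x") # ~1.4x
# i.e. bandwidth grew roughly twice as fast as bilinear rate. For FP16 the
# picture flips: R4xx/R5xx couldn't filter FP16 textures in hardware, while
# R600 filters RGBA16F at full rate, so HDR texel rate outpaced bandwidth.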

The average Joe remembers that the X1800 was slower and the X1900 was "late, but not faster". No way he'll know that one year later, now, the X1900 is "better value".
Where is the reasoning?
"I'll buy a card now, but I won't buy the fastest card ATM; instead I'll buy the slower one, hoping that after a year it'll be faster."
There is no reason in such thinking.

If one card is more futureproof, then why wouldn't he buy it, even if it's slightly slower at the moment? If he plans to keep it for a while, then there's absolutely nothing strange about that line of reasoning.
Btw, the X1900 was not late, and it was faster than the competition at the time too.

How many game-devs will "rethink" algorithms?

Game-devs do that continuously. This is not a sudden bomb that has been dropped in the middle of a dev-cycle. It's a trend that's been going on for almost as long as consumer 3D graphics has existed.

Granted all of us are older (25+ with quite a few 30+) gamers with full time jobs.

That's a pretty average gamer demographic these days. :)
 
If one card is more futureproof, then why wouldn't he buy it, even if it's slightly slower at the moment? If he plans to keep it for a while, then there's absolutely nothing strange about that line of reasoning.

That's if it's obvious that one card is more futureproof, e.g. in the good old R300 vs NV30 days. There was a staggering difference back then, too.

It's not that simple with the G80 vs R600.
 
LOL, see? I am t3h smartestest m4ster of the mighty Crystal Ball! :LOL: :LOL: :LOL:

EDIT: btw, I'm still on X1800 as well, since there is no fast new card in single slot format for my HTPC out there :(

We should have a private chat... there's this big lottery jackpot coming up, and some crystal ball voodoo would be much appreciated :D.
 
If you look at HDR data TEX rate has grown faster than bandwidth. The R600 can sample RGBA16F textures in a single cycle.
True, but the majority of textures are still DXT1. RGBA16F needs many times more bandwidth per texel (64 bpp versus DXT1's 4 bpp), yet the sample rate is the same. For the common case of sampling DXT1 textures, R600 doesn't appear to have a truly balanced tex:bandwidth ratio.
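
A minimal sketch of what that means in bandwidth terms, assuming R600's 16 samplers at 742 MHz and deliberately ignoring texture-cache reuse (so these are worst-case streaming figures, not real-world numbers):

# Bandwidth needed to feed R600's samplers at full rate for various formats,
# ignoring cache reuse (worst-case streaming, illustration only).
samplers = 16
clock_hz = 742e6
bytes_per_texel = {
    "DXT1":    0.5,  # 64-bit block per 4x4 texels
    "RGBA8":   4.0,
    "RGBA16F": 8.0,
}
for fmt, size in bytes_per_texel.items():
    gb_s = samplers * clock_hz * size / 1e9
    print(f"{fmt:8s} needs ~{gb_s:5.1f} GB/s to sustain full sample rate")
# DXT1 barely dents the ~106 GB/s available, while RGBA16F at full rate comes
# close to saturating it, which is the imbalance described above.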

If one card is more futureproof, then why wouldn't he buy it, even if it's slightly slower at the moment? If he plans to keep it for a while, then there's absolutely nothing strange about that line of reasoning.
And how do you predict which card is more futureproof without even knowing which games you're going to play 1 or 2 years in the future, and no information on how future games in general will perform on these cards?
 
True, but the majority of textures are still DXT1. RGBA16F needs many times more bandwidth per texel (64 bpp versus DXT1's 4 bpp), yet the sample rate is the same. For the common case of sampling DXT1 textures, R600 doesn't appear to have a truly balanced tex:bandwidth ratio.


And how do you predict which card is more futureproof without even knowing which games you're going to play 1 or 2 years in the future, and no information on how future games in general will perform on these cards?

I'm surprised you're asking this, as an adequate answer has already been provided by this thread: you ask _xxx_, of course :D.

Seriously now, I think that the way future games behave is dictated by which HW currently dominates. Look at how the stuff currently coming out is behaving: I don't see a heavy DB (dynamic branching) focus, I don't see procedurally generated everything, I don't see lighting being done completely through math, analytically (read: stuff that would've proven the point that the forward-looking design of the R5xx family enabled this and that and then some more).
 
"Btw, the X1900 was not late, and it was faster than the competition at the time too."

ATI planned the entire time to release the 1950 four months later, and everyone in this forum is Santa Claus.
 
"Btw, the X1900 was not late, and it was faster than the competition at the time too."

ATI planned the entire time to release the 1950 four months later, and everyone in this forum is Santa Claus.

He's right though. The X1900 wasn't late. It was right on schedule.

R520 was late. R520 should've been released during the May/June timeframe ahead of the GF7800GTX instead of being released in October months after GF7800GTX. The delay of R520 didn't delay the release schedule of the X1900 though. Afaik R580 was always scheduled 6/7 months after R520 like most of ATi's refresh GPUs and thus was released one month sooner than the competing GF7900GTX.
 
True, but the majority of textures are still DXT1. RGBA16F needs many times more bandwidth per texel (64 bpp versus DXT1's 4 bpp), yet the sample rate is the same. For the common case of sampling DXT1 textures, R600 doesn't appear to have a truly balanced tex:bandwidth ratio.

For today's workload I agree. Personally I'm a fan of the idea of having some extra cheap units that can only sample, say, RGBA8 and below. That should be much cheaper than equipping the hardware with additional full-blown texture units. But then I'm not a hardware guy, so I don't know how feasible that would be.
It should be noted though that theoretically the R600 could up to double the TEX rate in DX10 if you have a good mix of Load() and Sample() calls, since the Load() calls could be implemented with vertex fetch instructions.
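
Purely as a toy model of that "up to double" claim (the per-clock rates below are assumptions for illustration, not measured R600 figures): if the filtered Sample() path and a parallel vertex-fetch path for Load() each top out at the same rate, the best case is an even Sample/Load mix, since both paths then run flat out.

# Toy best-case fetch throughput model: a filtered Sample() path and a
# parallel Load() path, each assumed to issue 16 fetches per clock.
SAMPLE_PER_CLK = 16
LOAD_PER_CLK   = 16

def fetches_per_clock(samples_per_pixel, loads_per_pixel):
    """Best-case total fetches/clk for a shader with the given fetch mix."""
    limits = []
    if samples_per_pixel:
        limits.append(SAMPLE_PER_CLK / samples_per_pixel)
    if loads_per_pixel:
        limits.append(LOAD_PER_CLK / loads_per_pixel)
    pixels_per_clk = min(limits)  # the slower path sets the pace
    return pixels_per_clk * (samples_per_pixel + loads_per_pixel)

print(fetches_per_clock(4, 0))  # all Sample():         16.0 fetches/clk
print(fetches_per_clock(4, 4))  # even Sample/Load mix: 32.0 fetches/clk
print(fetches_per_clock(4, 8))  # Load()-heavy mix:     24.0 fetches/clk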

And how do you predict which card is more futureproof without even knowing which games you're going to play 1 or 2 years in the future, and no information on how future games in general will perform on these cards?

The average customer would just use the past as a reference. Like "my X1900 lasted me two years while my buddy had to upgrade his 7800 after a year", or something along those lines.

I guess if your interest is something other than selling hardware...

The IHVs have to lead the industry. We couldn't just sit around waiting for games to catch up or simply implement what's in the games today. That was pretty much 3DFX's strategy and that didn't work out too well.
 
He's right though. The X1900 wasn't late. It was right on schedule.

R520 was late. R520 should've been released during the May/June timeframe ahead of the GF7800GTX instead of being released in October months after GF7800GTX. The delay of R520 didn't delay the release schedule of the X1900 though. Afaik R580 was always scheduled 6/7 months after R520 like most of ATi's refresh GPUs and thus was released one month sooner than the competing GF7900GTX.

R300=9700
R350=9800
R420=X800
R480=X850?
R520=X1800
R580=X1900/1950
I'm pretty sure I have them right.
You are aware of this, right?
 
Humus said:
The IHVs have to lead the industry. We couldn't just sit around waiting for games to catch up or simply implement what's in the games today. That was pretty much 3DFX's strategy and that didn't work out too well.

We seem to be talking about two different things. You appear to be citing features, while I'm talking about ALU:TEX ratios.

While I can certainly see the benefit of being forward looking with regard to capabilities, the same does not hold true concerning design decisions in relation to performance in current games.

One might consider NV4X as a good example...
 