Two new (conflicting) Rumors

Aside from a few driver problems, it's clear that ATi has the advantage right now.

Whatever NV30 brings to the table, it's far too late to steal ATi's thunder. The 9700 is here NOW; NV30 is still the basis of rumors and speculation. And any exotic solution NV30 can use (DDR II, 0.13 micron) can also be used by ATi.

The problem for nVidia is that ATi squeezed their performance out of "inferior" tech. nVidia has always relied on cutting-edge technology to give them an advantage, especially in the 3dfx days, but now it's coming back to haunt them. I don't think NV30 will be *that* much faster than the 9700, and even if it does outperform it, the delay in production means that ATi is one step ahead and can respond with their own refresh far earlier than nVidia.

For nVidia's sake, NV30 must be clearly superior to the 9700, or they've got some major catching up to do...
 
Joe DeFuria said:
You are saying that consideration of how well each of these cards will play Doom3....STILL probably a year away at this time...gives GeForce1 SDR more bang for the buck in the long run?!

well, that was always the justification at the time - you must get a T&L card, it's more future-proof than the V5. And if anybody argued that the T&L on the GF1 was effectively useless at 10x7 and above because the card was fillrate limited - well, you were just a fanboy.

BTW, this is not an argument against progress and the need to push tech into the hands of developers so the consumer gets the graphics they demand. But whilst developers can now start developing on a DX9 board, I doubt I will see DX9 games in the lifetime of my first DX9 card purchase.
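As a rough illustration of the fill-rate point above, here's a back-of-envelope sketch in Python. The figures are approximate, commonly quoted GeForce 256 SDR numbers plus an assumed overdraw factor, so treat it as a ballpark, not a benchmark:

```python
# Rough sketch: theoretical frame-rate ceilings of a GeForce 256 SDR at 1024x768x32.
# Figures below are approximate/assumed, not measured.

BANDWIDTH_BYTES_PER_SEC = 2.7e9      # ~2.7 GB/s SDR memory (approximate)
FILL_RATE_PIXELS_PER_SEC = 480e6     # 4 pipelines x 120 MHz core (approximate)

WIDTH, HEIGHT = 1024, 768
BYTES_PER_PIXEL = 10                 # colour write + texture read + Z traffic (very rough)
OVERDRAW = 2.5                       # assumed average overdraw

pixels_per_frame = WIDTH * HEIGHT * OVERDRAW
bytes_per_frame = pixels_per_frame * BYTES_PER_PIXEL

print(f"bandwidth ceiling: ~{BANDWIDTH_BYTES_PER_SEC / bytes_per_frame:.0f} fps")
print(f"fill-rate ceiling: ~{FILL_RATE_PIXELS_PER_SEC / pixels_per_frame:.0f} fps")
```

Even with these optimistic theoretical numbers, the memory bus runs out before the pixel pipes do at 32 bit, and long before geometry throughput becomes the limit - which is the sense in which the card was fill/bandwidth bound rather than T&L bound.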
 
Bigus said:
Oh, and I'm still laughing at darkblu about the GF1 SDR playing DoomIII being a valid concern at the time of purchase... if I bought a 9700 today and it was able to even play a game five years from now, I'd consider that a bonus, but it would never enter into my purchasing decision today (probably because there's no way in hell I'd keep a videocard five years).

good for you that you're so easily amused, bigus. still, pity you didn't get my point. of course nobody at that time knew the gf1 would be capable of running d3 in particular (how you read that into my post is beyond my comprehension), but a plain look at vsa's feature set sufficed to show that it lacked anything which could take the yet-another-voodoo past the circa-'96 lighting/shading models. whereas the nv10's feature set showed potential. it "accidentally" so happened that that potential was sufficient for doom3 (3 years after the part's release), alas at minimalistic resolutions & framerates <shrug>

oh, and i'm so happy about those guys who don't have issues buying a $300 vga each year, and not expecting to be able to run titles released a year from then - i admit i can't afford such a pattern.. neither would i need to, but that's another matter.
 
NV25 said:
I don't think NV30 will be *that* much faster than the 9700, and even if it does outperform it, the delay in production means that ATi is one step ahead and can respond with their own refresh far earlier than nVidia.
But that implies that delaying NV30 means delaying NV35 (or any other refresh) too. I don't think that is necessarily the case.
 
I'd still rather have a GF1SDR than a Voodoo4 right now. (if you want to compare bandwidth)

I'd still rather have a GF1SDR than a TNT2Ultra or V3-3500 (if you want to compare products out at the same time the GF1SDR was)

I'd still rather have GF2DDR than a V5 right now (if you want to compare products out at the same time) at comparable price points.

But yes, if you had a TNT2U, it would be stupid to have upgraded to GF1SDR.

Yes, either a TNT2U or a GF1SDR is somewhat useless today, and more so going forward.
 
Joe DeFuria said:
darkblu said:
evaluating something solely on what value it has for the time being is not particularly prudent. when i buy a $300-worth product i consider it an investment. if i had compared a gf1 to a v5500 at the time, and evaluated those solely on their present value i, too, would have picked a voodoo. and i would have erred. as purely featurewise, the gf1 would be running next-spring doom3 in 320x240 at (IIRC jc's words) 20fps, whereas the voodoo would not be running it at all (hard fact). so which would be better bang for the buck?

I'm with Bigus Dickus...I'm laughing at that as well. :LOL:

:rolleyes:

You are saying that consideration of how well each of these cards will play Doom3....STILL probably a year away at this time...gives GeForce1 SDR more bang for the buck in the long run?!

joe, is it that you really can't distinguish simple (factological) supporting evidence from genuine reasoning, or have you just not had your morning coffee yet?

What about during the, what, 4 years between buying your card and Doom3 becoming available? Sorry, but your logic just doesn't seem logical at all.

<irrelevant remark>
lack or presence of logic is in the eye of the beholder.
</irrelevant remark>

When I say "present value", I don't mean "only the value for the games available at that very exact moment." As I said, you do have to consider / speculate how well each card will stack up "over the life of the card" (the time between when you buy it and when you upgrade next).

at last something which actually is to the point of the discussion at hand. and i absolutely agree with you. now, what you need to realize is that there exist people who have a slightly different idea of the expected lifespan of their videocard purchases.

Having said that...extrapolating 3+ years away, and reaching any conclusion OTHER than "I'm going to want to upgrade to a new card to play the latest games" is a bit foolish.

there's nothing foolish about seeking to maximize the lifespan of your purchases. now, if a videocard turns out to handle titles 1 year from its release - fine. if the card turns out to barely handle titles 100 years from then - even better, as this means the card may adequately handle titles 50 years past its release date. what exactly is not clear here?

Do you REALLY think buying a GeForce SDR because it might play Doom3 at some absurdly low resolution with absurdly low frame rates 4 years away is a justifiable reason for purchase?

read my reply to bigus to see what i think.
 
darkblu said:
oh, and i'm so happy about those guys who don't have issues buying a $300 vga each year, and not expecting to be able to run titles released a year from then - i admit i can't afford such a pattern.. neither would i need to, but that's another matter.
Talk about misunderstanding posts and reading into them things which are not there...

On the whole, I agree with your argument, and yes... people have different expectations for the life of a videocard. For me it's 12 to 18 months. For others it stretches to 24 or even 36 months. But to use an example of a ~four year disparity between product launch and game launch is just pushing it a bit far, IMO. That was why I found your example humorous, and irrelevant. If anyone buys any videocard expecting or even hoping it will play a game four years down the road, they will be sorely disappointed.
 
joe, is it that you really can't distinguish simple (factological) supporting evidence from genuine reasoning, or have you just not had your morning coffee yet?

1) I don't drink coffee
2) Using a 4 year time frame as some "factological support evidence" is just funny.

what exactly is not clear here?

What is not clear is that we STARTED talking specifically about Voodoo5 vs. GeForce SDR...and how you implied that the GeForce SDR is a "better buy" because it is more "future proof", as "factologically evidenced" by Doom3.

I was debunking that SPECIFIC opinion.

And now you're generalizing about "future proofness", which of course no one disagrees with in theory.

oh, and i'm so happy about those guys who don't have issues buying a $300 vga each year,

I don't know who suggested spending $300 each year. Again, in the specific case, we're talking about a $300 purchase lasting well over 3 years.

If you can't afford to spend roughly $50-100 per year on average on a video card upgrade, then you simply won't keep pace with the industry.

but a plain look at vsa's feature set sufficed to show that it lacked anything which could take the yet-another-voodoo past the circa-'96 lighting/shading models.

Again, when you look at whatever "new" features you are talking about, you also have to consider the following:

1) When will those features actually be used?
2) When they are used, will the card itself be powerful enough to run with them?

So yes, you can always say "more features is better." But the VSA vs. GeForce argument in question is not that simple, because the GeForce was not a SUPERSET of the VSA. The VSA had quality AA...which could be applied to all games: immediate support.

So, the argument was "which is better....more features that might be used by the time I need to upgrade, features that when used might not be fast enough to leave on....or a card with some other feature that improves every game I own right out of the box?"

Everyone needs to make that decision for himself.

Now, the nVidia faithful back then always said "support for new features is coming sooner rather than later." And the 3dfx faithful said "support will come later rather than sooner."

And, in my opinion, I would have to say the 3dfx faithful ended up being more accurate.
 
DemoCoder said:
Remember 3dfx's famous comment "anyone whose 32-bit performance is the same as their 16-bit performance has a crappy 16-bit implementation?" This was back when some 32-bit cards on the market were running 32-bit at the same speed as 16-bit and some people were saying this proved that "32-bit is now FREE". In reality, it meant those cards had bad 16-bit implementations.

I have to disagree with this to some extent... It could also mean that the architecture was designed to run in 32 bit, with 16 bit only as a secondary consideration. Although the 16 bit implementation may have been poor (on the original Radeon, for example), that was likely by design. At the time, hardware sites were still stupidly testing performance in 16 bit (why?), so it made the Radeon look bad. Now most sites don't even bother testing in 16 bit... Did any sites even check the Radeon 9700's 16 bit performance?

Frankly I think ATi made the right choice by giving 16 bit the boot. The bottomline is 16 bit is horribly ugly, and the only reason to run in it is if you have a really old card that just can't cut it in 32 bit in newer games or if you're playing a really old game that only runs in 16 bit.
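To make the "ugly" part concrete, here's a minimal sketch (my own illustration, not from the thread) that quantises an 8-bit-per-channel grey ramp to RGB565, the usual 16 bit framebuffer format, and measures what gets lost:

```python
# Minimal sketch of 16-bit banding: quantise 8-bit channels to RGB565 and back.

def to_rgb565_and_back(r, g, b):
    """Truncate 8-bit channels to 5/6/5 bits, then expand back to 8 bits."""
    r5, g6, b5 = r >> 3, g >> 2, b >> 3
    # Replicate the high bits into the low bits when expanding, as hardware typically does.
    return (r5 << 3) | (r5 >> 2), (g6 << 2) | (g6 >> 4), (b5 << 3) | (b5 >> 2)

ramp = [(v, v, v) for v in range(256)]                 # smooth 0..255 grey gradient
quantised = [to_rgb565_and_back(*c) for c in ramp]

distinct_levels = len({c[0] for c in quantised})       # red/blue collapse to 32 levels
max_error = max(abs(a[0] - b[0]) for a, b in zip(ramp, quantised))
print(distinct_levels, max_error)                      # -> 32 levels, up to 7 steps of error
```

A 256-step gradient collapsing to 32 (or 64 for green) visible steps is exactly the banding people object to, and it only gets worse once dithering and blending enter the picture.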

Randell said:
darkblu said:
evaluating something solely on what value it has for the time being is not particularly prudent. when i buy a $300-worth product i consider it an investment. if i had compared a gf1 to a v5500 at the time, and evaluated those solely on their present value i, too, would have picked a voodoo. and i would have erred. as purely featurewise, the gf1 would be running next-spring doom3 in 320x240 at (IIRC jc's words) 20fps, whereas the voodoo would not be running it at all (hard fact). so which would be better bang for the buck?

hmm, purchasing a $300 card is not an investment for future use in 3D graphics at the current rate of change, is it? Surely it's only prudent to consider what you get now? No one, no one, is going to be playing Doom 3 at 320x240@20fps on a Gf1DDR and think "wow, wasn't I lucky I bought this instead of a V5". Anybody I know who would invest $300 in a video card (i.e. already fairly serious about gaming) does not have anything less than a Gf2Pro in their system now, and most have at least a Gf3Ti200.

I'd also like to point out, in regard to these two posts, that by the time Doom3 comes out you'll be able to buy an R9000 for <=$50. So it's fairly unlikely someone is going to spend $50 on Doom3 only to run it at 320*240 resolution. If money is that much of a concern, you could even spend $75 and get the game plus a Radeon 7500 for $25 (or probably a GF2 Pro/Ultra equivalent?).
 
I don't fault anyone for not optimizing 16-bit today. But it will be a while before we have the bandwidth luxury of always rendering everything at 128-bit. Thus, just like in the days of expensive 32-bit rendering, having an optimized 64-bit fallback is nice.

There may come a time when 128-bit rendering effectively runs fast enough that we don't care to drop down to the 64-bit mode, but it will probably take a few generations.

Hell, in the first next-gen cards, the vast majority of rendering will probably still be 32-bit and only selected special effects will switch to higher precision.
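As a hedged sketch of why the fallback still matters (my own illustration; the pass count and per-pass value are arbitrary), here is what happens when many small contributions are accumulated at FP16, the per-channel precision of a 64-bit pixel, versus FP32, the per-channel precision of a 128-bit pixel:

```python
# Sketch: accumulate many small per-pass contributions at FP16 vs FP32.
import numpy as np

PASSES = 5000
INCREMENT = 0.0003            # e.g. a very dim light added each pass (arbitrary)

acc16 = np.float16(0.0)
acc32 = np.float32(0.0)
for _ in range(PASSES):
    acc16 = acc16 + np.float16(INCREMENT)   # result rounded to FP16 after every pass
    acc32 = acc32 + np.float32(INCREMENT)

exact = PASSES * INCREMENT
print(f"fp16: {float(acc16):.3f}   fp32: {float(acc32):.3f}   exact: {exact:.3f}")
# The FP16 accumulator gets stuck near 1.0 once each contribution falls below
# half a representable step; the FP32 one stays essentially exact at ~1.5.
```

That kind of quiet saturation is the FP16 analogue of the old 16 bit multipass artifacts - fine for a couple of passes, increasingly wrong as the passes pile up.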
 
my, you have a high opinion of yourself, don't you?

Uh, no. I think you're confusing me with Derek.

I have no idea why you have decided to prove your self-worth by attacking me, but you know, if it makes you feel important to post patronizing drivel, that's up to you.

Odd you should say that, as I am not quite sure why you took my original comments so personally in the first place. I never directed my remarks at you until you personally responded to them, and even then I am merely remarking that the opinion that "competition is good" is not universally true.
 
flf said:
Odd you should say that, as I am not quite sure why you took my original comments so personally in the first place. I never directed my remarks at you until you personally responded to them, and even then I am merely remarking that the opinion that "competition is good" is not universally true.

I agree, perhaps I took them the wrong way, but you did quote me and then say my arguments made no sense? Then told me I needed coddling? I dunno why, but I kind of took that as being directed at me :)
 
Randell said:
I agree, perhaps I took them the wrong way, but you did quote me and then say my arguments made no sense? Then told me I needed coddling? I dunno why, but I kind of took that as being directed at me

1. Upon reviewing my own posts, I'm not quite sure why I put "your arguments make no sense" rather than "your arguments depend completely upon the point of view of either the consumer or the business." Chalk it up to bad form on my part.

2. The coddling was sarcasm in response to your sarcasm. Unfortunately (or fortunately, depending upon your view), I don't use smilies, so the intent was lost. Chalk it up as my mistake also.
 
Nagorak said:
Frankly I think ATi made the right choice by giving 16 bit the boot. The bottomline is 16 bit is horribly ugly, and the only reason to run in it is if you have a really old card that just can't cut it in 32 bit in newer games or if you're playing a really old game that only runs in 16 bit.

If you think 16 bit is horribly ugly, I suggest you buy new glasses. And only comparing 16 to 32 isn't really fair either, because depending on what other features you use, that really makes a difference...
 
Nagorak said:
Frankly I think ATi made the right choice by giving 16 bit the boot. The bottomline is 16 bit is horribly ugly, and the only reason to run in it is if you have a really old card that just can't cut it in 32 bit in newer games or if you're playing a really old game that only runs in 16 bit.

Actually, 3dfx's "22bit-effective" 16bit mode was rather beautiful. PowerVR's 16bit mode was beautiful as well. It was ATI's and Nvidia's 16bit modes that were rather ugly and hideous. Not all 16bit modes were created equal.

--|BRiT|
 
DemoCoder said:
Only if you weren't doing multipass. The 22-bit filter could not protect against hideous multipass artifacts.

Perhaps, but it still looked far better than ATI's or NVIDIA's 16bit modes in all the games out at that time.

--|BRiT|
 
Joe DeFuria said:
You have to consider nVidia's "better, not faster" pixels rhetoric. It may very well be that for the "cinematic rendering" they are pushing, with all kinds of layered shader programs, a high-speed 128 bit bus may be enough. Such scenes may be "VPU computational" limited on these first gen cores, rather than bandwidth limited.

So 256 bit buses may not help NV30 (or R300) when doing heavy-duty shading. For games it's another matter...still need more bandwidth.

So if an 8 pipeline chip on a 128 bit bus without any exotic HSR is true...that may be idiotic from a gamer perspective, but not necessarily from the "Cinematic Renderer" perspective.

DemoCoder said:
It may be the case that future games will be "compute" limited, but older games aren't, and that will make NVidia look bad. It will be a rough sell for NVidia, selling the promise of "future compute-limited cinematic quality games" when their card is getting its ass handed to it by ATI on older games.

Joe + DemoCoder: Really interesting discussion here! 8)

While I don't think that nVidia will screw up royally with a 128 bit bus without some semi-HSR, I do think that they have put most of their focus on computational power (they still need a lot of bandwidth for textures/data for these hefty shaders). This might suggest that NV30 could be really fast in VS/PS-intense games, while it may not do so well in the games of today with FSAA/AF, as you point out.

nVidia have, however, always offered faster performance with a new generation since they know full well that people aren't stupid enough to buy a $400 card just to look at an increase in 3dMark 2002. But OTOH I do sense that ATI have struck a better balance between speed and future features.

In two months we will be having some hefty discussions here! (I cannot wait) :wink:
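For what it's worth, here's a rough sketch of that compute-vs-bandwidth trade-off with made-up numbers; none of these are real NV30 (or R300) specs, just plausible-looking assumptions to show how the limits flip:

```python
# Sketch: which limit binds first, shader throughput or a 128-bit bus?
# All figures are assumptions for illustration only.

CORE_CLOCK = 400e6                  # Hz, assumed
PIPES = 8                           # assumed pixel pipelines
BANDWIDTH = 16 * 1.0e9              # 128-bit bus at an assumed 1 GT/s = ~16 GB/s
BYTES_PER_PIXEL = 12                # colour + Z traffic per shaded pixel (rough)

def ceilings(shader_cycles_per_pixel):
    """Return (compute-limited, bandwidth-limited) throughput in Mpixels/s."""
    compute = CORE_CLOCK * PIPES / shader_cycles_per_pixel
    bandwidth = BANDWIDTH / BYTES_PER_PIXEL
    return compute / 1e6, bandwidth / 1e6

print("legacy 1-cycle pixel:     compute %.0f vs bandwidth %.0f Mpix/s" % ceilings(1))
print("40-cycle cinematic pixel: compute %.0f vs bandwidth %.0f Mpix/s" % ceilings(40))
```

With a long shader, the compute ceiling (~80 Mpix/s here) sits far below the bandwidth ceiling (~1333 Mpix/s), so the narrow bus isn't the bottleneck; with a one-cycle legacy pixel the picture flips, which is exactly why older games could make such a part look bad.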
 
BRiT said:
DemoCoder said:
Only if you weren't doing multipass. The 22-bit filter could not protect against hideous multipass artifacts.

Perhaps, but it still looked far better than ATI's or NVIDIA's 16bit modes in all the games out at that time.

--|BRiT|
and far worse than ANYONE'S 32bit implementation.
what's your point? that not all 16bit implementations sucked as badly?
Ok, so what?
None of 'em are as good as 32bit...
 