GeFX canned?

demalion said:
Well, I'm in a fair bit of amazement that such rumors are the majority of what they've managed to deliver successfully, and that feeling has served to numb any annoyance, I guess. :-?


"...what they've managed to deliver..."Which might be...? It's Feb 8th, and as far as I know nVidia is still shipping the same product line it was shipping six months ago. Haven't shipped anything of note since. They were going to ship a product they declared competitive with ATI's higher-end R300 products, but as we now see that has been scrapped. Supposedly, in a few weeks they'll be shipping--or rather their OEMs will be shipping--a 9500Pro-level competitor. But then that doesn't qualify as "...what they've managed to deliver successfully.."(since they haven't delivered it yet), does it? So I'm a little puzzled by your remarks here.

Or do you mean you are sufficiently impressed with their past achievements to think that has a significant impact on what their future achievements are likely to be?



Eh? No, they do have a chip that competes fairly well. It just isn't feasible to release it as a product. Also, the non Ultra still competes, just not very successfully from the standpoint of those who picture nVidia as the performance leader. The Ultra has served its purpose of providing benchmarks that can be used to show the "GeForce" at an advantage to the "Radeon"

Nope, the non-Ultra doesn't compete on the high end. And the chip in the non-Ultra is the same one that would have been in the Ultra, only not overvolted, overclocked, and dustbusted. It's gratifying to see that you are not among those unfortunate few who see nVidia as any kind of a "performance leader." Reading you say that puts some of your other remarks in context (some of them, *chuckle*.)


Hmm...well, yes there are, but why are you telling me? I'm the guy who was lambasting people for making assumptions after E3 that it was preposterous that the R300 couldn't be faster than the nv30.

I was responding to your comments which I interpreted as a linkage between nv35 and nv30, which comments seemed to state the failure of nv30 on the high end was some sort of positive message about the state of nv35 development. If you tell me that was not your intent I withdraw the criticism.

Do you think any of this is news? Well, I'm not sure what you mean by "too many heatsinks" but I'm not particularly curious... ;)
This isn't the nvnews forums, Walt, nor Rage3D; you don't have to keep pointing out things like this when no one is contesting them (at least not at such length). All I was commenting on (read the text again) was specifically that I don't see any reason at all to assume the nv35 is necessarily delayed because the nv30 was so late...hence terms like "lends validity to the rumors".

Is "nvnews/Rage3D" somehow pertinent here? I don't recall mentioning either myself. :?:

I took your remarks to mean that you felt that there was only one reason to cancel nv30 Ultra--and that was nv35 sitting in the wings. That's how I interpreted your original remarks. I was simply pointing out that there were many reasons to have cancelled the Ultra aside from whether or not anyone has expended one man-hour of work on "nv35" at this time--of which there is no proof whatever, that's all. If you're saying you never meant to imply that the Ultra was cancelled because nv35 was ready to take its place, I'll accept that and again withdraw the criticism.


Specifically, I think this leaves room for the nv35 to come out before fall (and presumably the R400...I don't think ATi has a great deal of reason to hurry the launch of that even if they could...I think the R350 is likely to compete well enough with the nv35), and opportunity to re-associate nVidia and the GeForce, for whatever amount of time, with the concept of "performance leadership".

I think ATI is going to be smart on this and not "pace itself" according to some imagined need to stay "equivalent" with nVidia. I think ATI will strive to put as much of a performance distance between itself and nVidia as it can manage, as quickly as it can manage it, without overlapping its product spread too much. It was nVidia after all which started but couldn't maintain its so-called "six-month product cycle," which was one of the things that gave 3dfx such a bad time (but 3dfx's problems were mostly self-inflicted, sad to say.) Therefore, I think it will be good for ATI to use nVidia's tactics against it to the extent it is able to do so. ATI is not trying to maintain any sort of parity--they are trying to win back marketshare and mindshare, and the only way to do that is to get those things the same way nVidia originally took them from companies like 3dfx and ATI, and that is to set a pace of development and production such that it leaves your competitors breathless and always lagging to catch up. The signs I see at ATI seem to point in this direction.

This does leave technical issues to be worked out, and I don't have the confidence in nVidia engineers that I would with ATI engineers at this point, but even just adding a 256-bit bus would help the GF FX catch up quite a bit, even before considering the other ideas the engineers may have in mind.

Of course, but this underscores the need for ATI to press ahead. They will have been shipping a 256-bit bus product about one year prior to nVidia doing it, and by late this summer they will be preparing to ship, if not shipping, their first .13 micron product. Which gives them several material advantages over nVidia--first, that TSMC will have a much better .13 micron process for them to use, and secondly that ATI's R400 will not be a brand new architecture, but based heavily on R300--whereas nVidia's first nv30 at .13 microns was a brand new architecture. I'm certain that R400 will bring some new things to the table, but I don't see them dropping such a terrific architecture anytime soon, or until they have something better to replace it. So I can see that as nVidia catches up ATI will simultaneously keep moving farther ahead. That's the idea, anyway--to keep nVidia in perpetual catchup mode, if possible. (At any rate, that's the way I would play it.)

Repeating myself...given the prior hints of the nv35 being the focus of intensive "debugging", it seems likely, in my opinion, that this info about cancelling the 5800 Ultra parts strengthens the likelihood of rumors of a May/June launch schedule. If you disagree with this, a brief reply like that at the end could have sufficed....

I only disagree with the inference that the cancellation of the Ultra has anything whatever to do with "nv35"--about which is known.....nothing. If I could always read the minds of the people whose posts I respond to perhaps I could do better--but then so could they if they could read mine...;)


Nowhere do I indicate that I disagree that the 5800 Ultra is a flawed part, and I've mentioned the flaws prior. I don't mention them again because they've been mentioned quite a few times already....

Granted--again, what I disagreed with was any linkage to the proximity of "nv35" and the cancellation of the nv30 Ultra product. That's what I meant about not wanting to see any more rumor mongering--spin, if you will--I'm kind of tired of spin. Now, if nVidia makes a statement to the effect that it cancelled the Ultra because it has a much better thing ahead in the nv35, which it plans to ship in the June/July time frame--then that moves the topic out of the rumor category. Somehow, I don't think nVidia is going to be making any more announcements about the date it will be shipping new chips, though...;) Not until it knows it can make such announcements with confidence. But especially here what I believe is that as of a couple of weeks ago nVidia had every intention of shipping the Ultra regardless of how impractical it was. Something happened in the last couple of weeks to restore their sanity over there...I'm glad of it.

Yes, yes....similar outlooks have been well established. For my part, that is why I was using terms like "sane"...the Ultra just strikes me as a computer OEM dud.

more reasons that I thought were discussed adequately, and I don't see my post contesting.

Again, it was the linkage that I felt you were making between the hypothetical "nv35" and the cancellation of the nv30 Ultra that put me on that path...;) Now that I understand you weren't linking the "nv35" to the cancellation of the nv30 Ultra, I suppose I can cheerfully withdraw the criticism, as I agree that the two have nothing to do with each other.


I think you make a good point, and I tend to agree. See above with my later post about the memory clock speed.

Yes, I saw where you'd said that in another post--unfortunately after I'd already posted...;) I think we agree that adding a little bandwidth to nv30 non-ultra isn't going to help it, but would simply drive up the cost.

Well, I've discussed this before...clocking the RAM near the same frequency as the core with a 128-bit bus is more limiting than clocking near the same frequency with a 256-bit bus. Each card having roughly the same fillrate, this indicates a situation where the GF FX architecture is much more likely to "choke" as I termed it, and I think makes it more likely to get greater returns from increasing RAM clock frequency (assuming there are no issues with such a memory clock disparity between core and RAM...I assume nVidia has their interface well in order).

Well, I'd call that simply a limitation of a 128-bit bus, but that's neither here nor there...Heh-Heh...I don't think at this point that I'd (personally) make any assumptions about any nVidia architecture and its performance beyond that associated with the GF4 series of chips...;)

But I agree the returns in performance are likely not to be deemed worth it for the increased cost, though I don't have any definite idea of the cost difference.

nVidia does, and as they aren't doing it, I guess we're right...;)
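For a rough sense of the bandwidth gap being argued over here, a quick back-of-the-envelope sketch in Python. The bus widths and memory clocks below are the commonly quoted figures for these boards and should be treated as approximate; the helper function is purely illustrative.

```python
# Back-of-the-envelope theoretical memory bandwidth:
#   bytes/sec = (bus width in bits / 8) * effective memory clock
# DDR signalling transfers data on both clock edges, so the effective clock
# is twice the nominal memory clock. All figures are approximate, commonly
# quoted specs.
def bandwidth_gb_s(bus_bits: int, mem_clock_mhz: float, ddr: bool = True) -> float:
    effective_mhz = mem_clock_mhz * (2 if ddr else 1)
    return (bus_bits / 8) * effective_mhz * 1e6 / 1e9

cards = {
    "GF FX 5800 Ultra (128-bit @ ~500 MHz DDR)":     (128, 500),
    "GF FX 5800 non-Ultra (128-bit @ ~400 MHz DDR)": (128, 400),
    "Radeon 9700 Pro (256-bit @ ~310 MHz DDR)":      (256, 310),
}
for name, (bits, clock) in cards.items():
    print(f"{name}: ~{bandwidth_gb_s(bits, clock):.1f} GB/s")
# -> roughly 16.0, 12.8 and 19.8 GB/s respectively
```

Which is the point of the exchange above: on a 128-bit bus the only lever left is the memory clock, and even a fairly aggressive clock bump closes only part of the gap to a 256-bit part running much slower RAM.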


I'm also not convinced that the RAM on the GF FX is best considered to be "DDR-II" in regards to latencies. But that's another discussion (no, really... we've had that discussion in another thread...).

I don't think it's GDDRIII, though, either, which ATI has stated it wants to use precisely because of some of the latency issues involved with DDRII. But it may well have better latencies--but neither of us knows, of course.

Now this is a brief statement of disagreement. I still don't know why you felt the majority of the first half of your reply was necessary.

One of my constant failings has been verbosity, since my high school days (quite a few moons ago.) *chuckle* I'm not as bad as I used to be, though.....;) (Believe it or not.)

To reply briefly in turn, I also don't think the nv35 will successfully compete with the R400, and I think nVidia has been focusing on getting the nv35 ready as soon as possible for quite a while (since the 9700 launch at least). I think it is the best glass of lemonade they can make from the situation, and I think they are preparing it as fast as they can.

I'm not going to try and dissuade you at all about your belief on when they are going to deliver it (because I don't have any strong opinion that it is wrong), but I do find issue with your idea of "nv35 can't arrive soon because the nv30 just arrived."

No, but I just think there are deeper problems with nv30 than just bandwidth. In fact, it may well be some of those problems which have caused nVidia to can the Ultra--we won't ever really know, probably. (Maybe some of this will come to light when the 5800's actually start to ship.) That's why I think it unlikely in the extreme that the cancellation of nv30 Ultra and "nv35" have any sort of linkage whatever, apart from the basic nVidia in-house nomenclature that has "nv35" following "nv30" at some future date.

Now that I understand your position much better I would ask that you forgive my stridency if it appeared that way--I think I've just had it with the rumor mill and the incessant propaganda and all of the rest...! Not that you were intending to engage in any of it--but I suppose I was more inclined to see it that way than I might have been at another time. Really, I suppose there is little we might disagree on here.

And I'm really tired...;) Good night!....;)
 
WaltC said:
But especially here what I believe is that as of a couple of weeks ago nVidia had every intention of shipping the Ultra regardless of how impractical it was. Something happened in the last couple of weeks to restore their sanity over there...I'm glad of it.

Perhaps they got first silicon of nv35 back and were pleased? (Complete speculation.)

Anyways, has anyone wondered what this is going to do to the nv31 and nv34 projects?

What a convoluted mess!
 
Hellbinder[CE] said:
demalion said:
Eh? No, they do have a chip that competes fairly well. It just isn't feasible to release it as a product. Also, the non Ultra still competes, just not very successfully from the standpoint of those who picture nVidia as the performance leader. The Ultra has served its purpose of providing benchmarks that can be used to show the "GeForce" at an advantage to the "Radeon"

I simply can't believe I'm reading stuff like this. Seriously.

Why not? You don't think the GF FX Ultra benchmark numbers are going to influence perception of the GF FX even running at 400 MHz?

The Ultra has *served its purpose* ??? What a bunch of unethical <bleep> nonsense. Give me a break. And you guys wonder why I think there is general favoritism among developers and internet sites. :rolleyes:

You do this a lot. You take a comment or a stance to a place that doesn't resemble where we started from.

I stated that I think the GF FX Ultra benchmark numbers would serve this purpose. I did not state that I condone this.
How did you get from one to the other?

So now it's an accepted practice for a company to overclock a product so far that they can't release it like that, and somehow it counts as a real product. Well, guess what, that crap does not fly. ATi could have done the same freaking thing to the R300 any time they wanted.

Well, ATI didn't have to...this is pretty well established.

Note that I don't see anyone making statements your above comments seem to address.
 
Mulciber said:
I think you're mystifying complicated engineering quite a bit there, aren't you Walt? *chuckle* This topic and the term "3dfx technology" are getting quite old. The engineers from former 3dfx were absorbed by a larger body of engineers and the buck stops there. Anything they had been working on 2 years ago is now obsolete. Asking if "their approach" was used in the design of the NV30 is just silly; of course it was...since nVidia and 3dfx both used immediate-mode rendering designs. As far as RGAA goes, nVidia apparently couldn't make it work without a huge tradeoff in performance. It did work quite well in the VSA architecture with two chips, but consumers expect better than a 50% drop in performance at 2x AA these days. Either they couldn't make it work, or thought they had something better, and failed. Anything that a T-buffer could have done can now be done with multiple buffers. Exactly what kind of information are you expecting them to provide? Some sweeping statement like "this particular transistor was designed with 'Mofo tech' in mind"? That's not going to happen.
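A side note on the RGAA point: the benefit of a rotated (sparse) sample grid shows up on near-horizontal and near-vertical edges, where only the projection of the sample pattern onto one axis matters. A tiny illustrative sketch follows; the sample offsets are made up for illustration and are not the actual positions any of these chips used.

```python
# Per-pixel sample offsets (x, y in [0, 1)); purely illustrative positions.
ordered_4x = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]          # 2x2 ordered grid
rotated_4x = [(0.375, 0.125), (0.875, 0.375), (0.125, 0.625), (0.625, 0.875)]  # rotated/sparse grid

def distinct_offsets_per_axis(samples):
    # A near-vertical edge sweeps across the pixel in x, so the number of
    # distinct x offsets sets how many intermediate coverage levels you get
    # on that edge (and likewise distinct y offsets for near-horizontal edges).
    return len({x for x, _ in samples}), len({y for _, y in samples})

print("ordered grid:", distinct_offsets_per_axis(ordered_4x))   # (2, 2)
print("rotated grid:", distinct_offsets_per_axis(rotated_4x))   # (4, 4)
```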


I only mentioned it because nVidia made such a point of talking about it at the product's introduction and emphasized that the letters "FX" were borrowed from 3dFX, and meant to signify a collaboration that nVidia felt, at least for PR, was significant. It would be interesting to hear from the ex-3dfx engineers who took part in the design to get their individual perspective on what was done and what "they" contributed. nVidia's said it won't even discuss what it's actually doing for 2x and QC FSAA, citing "trade secrets" and that sort of nonsense, so you're right--fat chance in Hades we'll ever hear anything from the actual engineers involved...;)


Next time you go into a frothing tirade (though no one was disagreeing with you), you might find that heat dissipates and doesn't displace. And how many heatsinks is too many exactly? I see 2 on the GFFX, but I recall there also being 3 on the GF2 Ultra, 1 for the chip and 2 for the memory.

I wouldn't call it "frothing"--possibly exasperated with what I thought at the time was more rumor and speculation ("nv30" was bad enough, but the prospect of hearing it start up about "nv35" before the first "nv30" product shipped just about had me frothing!...;)) But we discussed it and I understand where he's coming from and our disagreements are actually petty, as it turns out.

Yea, the "displacement" thing is funny, I'll agree--but every time I think of that product all I see is a giant armored heatsink stretching from one end of the card to the other, and I can "hear" the hairdryer sound that just tops it all off beautifully...!.....Arrririririghghghg!.... Thank goodness we'll be spared endless exposures to that...!...

Now, I really am hitting the sack guys!...;)
 
WaltC said:
Yea, the "displacement" thing is funny, I'll agree--but every time I think of that product all I see is a giant armored heatsink stretching from one end of the card to the other, and I can "hear" the hairdryer sound that just tops it all off beautifully...!.....Arrririririghghghg!.... Thank goodness we'll be spared endless exposures to that...!...

Well, as an American auto consumer, I always remember the saying "no replacement for displacement". I rather enjoy gargantuan heatsinks ;)

I won't understand the mindset of anyone purchasing the 5800 Ultra. I can't even stand my measly 3k rpm 80mm Panaflo, which is towards the bottom end of the list as far as noise output goes.

Moving to the 5800 non-Ultra is a decision that nVidia should have made back in November, before word of the idea even leaked. My guess is they were hoping the performance would warrant the Ultra solution, but the fact that it doesn't has made them just lower their face and admit defeat (which of course they never will do in anything more than a symbolic way).
 
Hi there,
Ratchet said:
I realize that, but you'd have to be a pretty hard core fan-boy to choose a GFFX non-Ultra over a 9700 Pro wouldn't you? I mean, the non-Ultra has literally nothing going for it compared to the 9700 Pro - definitely no speed advantage, no image quality advantage, and (for a guesstimate) no price advantage... I call it like I see it, and what I see is a DOA nV30 non-Ultra. As far as I'm concerned, the nV30 is no more.
Well, I wonder. I had an edifying encounter last night: I went to have a couple of G&Ts with some old friends, who also happen to consider themselves "hardcore gamers".

Of the four people, only one even knew who ATI was. For the other three, "3D accelerator" is the same as "NV board." When I tried to explain the Radeon 9700 to them, I got replies such as "what chipset does it use? GF4 or GF3?"

We're talking software developers, here. Not 3D or games developers, but these are still intelligent people.

Point is, I don't think that the vast masses who buy video cards are even aware of reviews, comparisons, or numbers--and if they are, they may very well still stick to the stuff they already know.

ta,
-Sascha.rb
 
nggalai said:
Point is, I don't think that the vast masses who buy video cards are even aware of reviews, comparisons, or numbers--and if they are, they may very well still stick to the stuff they already know.

So true. My brother is a network admin overseeing about 1,500 computers in the U.S.

He doesn't know anything about video cards, but he knows the name "nVidia". If someone sends him a requisition for a few hundred computers, you can be sure they'll have nVidia cards in them.
 
WaltC said:
demalion said:
Well, I'm in a fair bit of amazement that such rumors are the majority of what they've managed to deliver successfully, and that feeling has served to numb any annoyance, I guess. :-?


"...what they've managed to deliver..."Which might be...? It's Feb 8th, and as far as I know nVidia is still shipping the same product line it was shipping six months ago.

? As I said... "such rumors are the majority of...". The remainder is benchmarkable cards to reviewers, seemingly "cooked" benchmark results prior to that (and other things falling under "hype"), the "CineFX" toolsuite to developers, and maybe some Quadro FX cards.

Haven't shipped anything of note since. They were going to ship a product they declared competitive with ATI's higher-end R300 products, but as we now see that has been scrapped. Supposedly, in a few weeks they'll be shipping--or rather their OEMs will be shipping--a 9500Pro-level competitor. But then that doesn't qualify as "...what they've managed to deliver successfully..." (since they haven't delivered it yet), does it? So I'm a little puzzled by your remarks here.

Hmm...well, I'm not sure why you're puzzled, but perhaps the above clarifies.

Or do you mean you are sufficiently impressed with their past achievements to think that has a significant impact on what their future achievements are likely to be?

Hmm...I suppose you can't be expected to automatically know my full opinion with regards to nVidia, can you?
Short answer: no.

Eh? No, they do have a chip that competes fairly well. It just isn't feasible to release it as a product. Also, the non Ultra still competes, just not very successfully from the standpoint of those who picture nVidia as the performance leader. The Ultra has served its purpose of providing benchmarks that can be used to show the "GeForce" at an advantage to the "Radeon"

Nope, the non-Ultra doesn't compete on the high end.

Can I presume you're one of the people who said the 8500 didn't compete with the GF 4 Ti cards? I'd disagree. You might also state that reality > perception to consumers in general? I'd disagree again.

And the chip in the non-Ultra is the same one that would have been in the Ultra, only not overvolted, overclocked, and dustbusted. It's gratifying to see that you are not among those unfortunate few who see nVidia as any kind of a "performance leader." Reading you say that puts some of your other remarks in context (some of them, *chuckle*.)

My various comments may be confusing.
I keep my opinions about nVidia as a company and a buying choice separate from my opinion of their products technically. Namely, concerns about the former override the merits of the latter when it comes to making a purchase choice, but they don't change my opinion of the latter in the process.
This kept me from having artificially high expectations before the GF FX was delayed, and when looking at the GF FX after launch, I feel no need to join an nVidia bashing bandwagon. What I feel is impressed by what ATI has achieved and looks to be achieving in the future.
I hope that puts my comments in context.

Hmm...well, yes there are, but why are you telling me? I'm the guy who was lambasting people for making assumptions after E3 that it was preposterous that the R300 couldn't be faster than the nv30.

I was responding to your comments which I interpreted as a linkage between nv35 and nv30, which comments seemed to state the failure of nv30 on the high end was some sort of positive message about the state of nv35 development. If you tell me that was not your intent I withdraw the criticism.

Hmm...

nVidia has an enhanced chip design that has been in progress for quite a while and was originally (before nv30 delays) scheduled to be released early this year.
I specifically disagree with the idea that this design must necessarily be delayed any further due to the delay of the nv30 (as per the first sentence of my original post)...I have stipulated in the past that the primary factor in the delay of this design would be the demands of the market.

Further:
nVidia made a decision that struck me as unreasonable (noisy, etc. GF FX Ultra overclock as a consumer product).
nVidia modified that decision.

My comment:
This decision does, in my estimation, seem connected with this enhanced chip design and rumors that it could be released earlier than some had thought (i.e., in May/June), though I don't equate this (May, June) with the nv35 being "ready to go". Reasons include:
  • We've had hints of the "reliable sort" that nv35 has been in "debugging" for a while.
  • This decision came after this "debugging" process has had time to be evaluated.
  • The simple addition of a 256-bit bus is a pretty conservative baseline for the expected performance of such a chip, and easily achieved in the time allowed (and, also indicated by "hints of the reliable sort").
  • IMO, getting a product that is less unreasonable than the GF FX 5800 Ultra (as based on the nv30) is a relatively low target.

This means I think this idea factored into this announcement (EDIT: Hmm..."rumor with strong confirmation", I should say), not that I am proposing it is "the only" reason, or that this idea is guaranteed to manifest.

Do you think any of this is news? Well, I'm not sure what you mean by "too many heatsinks" but I'm not particularly curious... ;)
This isn't the nvnews forums, Walt, nor Rage3D; you don't have to keep pointing out things like this when no one is contesting them (at least not at such length). All I was commenting on (read the text again) was specifically that I don't see any reason at all to assume the nv35 is necessarily delayed because the nv30 was so late...hence terms like "lends validity to the rumors".

Is "nvnews/Rage3D" somehow pertinent here?

Yes, see below.

I don't recall mentioning either myself. :?:

What, I can't mention them until you do? :?:

You listed a long list of things that have been repeated here for quite a while, in reply to a post that did not seek to refute them. As you go on to outline, you responded as if addressing someone being unreasonable. Both these attributes seem to ignore the typical content of this particular forum and the discussions that have already taken place, and to do so without cause. Those attributes also, in my opinion, reflect the typical nature of the reply that would be more likely to be required in those other particular forums (I mentioned those two because I've read them and formed that opinion).
I don't presume that you have to agree with that opinion, but I do think the "...you don't have to keep pointing out things like this when no one is contesting them (at least not at such length)" that followed was sufficient to make the opinion clear.

I took your remarks to mean that you felt that there was only one reason to cancel nv30 Ultra--and that was nv35 sitting in the wings. That's how I interpreted your original remarks.

I simply don't understand where you got the "only one reason". I re-read my original post when replying before, and the connotations don't read that way to me. But I've re-stated them again for clarity.

Quoting the rest of your text would be repetitive, I hope any questions were answered above.

I will clarify that I recognize that May/June is not February/March, and state again that the purpose of such a launch would only need to be to release benchmarks.
 
Typedef Enum said:
Words cannot even begin to describe...

I have to say that I'm in disbelief myself. When I think about the fact that we have heard about CineFX since July 2002, this is just turning out to be a joke. :devilish:
 
LeStoffer said:
Typedef Enum said:
Words cannot even begin to describe...

I have to say that I'm in disbelief myself. When I think about the fact that we have heard about CineFX since July 2002, this is just turning out to be a joke. :devilish:
I don't think everybody is laughing at Santa Clara :oops:
 
WaltC said:
when you had people looking for "important features" like "AGP texturing" in their 3D cards and websites like Sharky's calling AGP texturing a "crucial technology." *chuckle*
S3 were in good part responsible for this: between the excellent texture unit design with free trilinear on the Savage3D and Savage4 and the limitations of a 64-bit VRAM bus, it was actually a bit faster to texture out of AGP than out of VRAM in most cases, and they demoed that quite convincingly.
 
Um, I was just wondering where this will leave the R350. Obviously, the R300 will mop the floor with the 5800 non-ultra, so the R350 would in effect be competing against the R300. Any chance ATi would consider not releasing the R350? Or would they still do it for other reasons (cost of production, yields, bragging rights, etc.)? Also, is there any chance that NV35 could be significantly closer than what we think? :?: :?: :?: Speaking of NV35, anyone know if it will have the per primitive processor (PPP)?
 
BRiT said:
It's sort of funny seeing how this news was posted on the NvNews front page, but was recently removed. Seems like someone is in denial. :rolleyes:

I'd like to get a confirmation from NVIDIA in regards to the cancellation of the GeForce FX Ultra before posting it. In addition, HardOCP asked us last August not to link to any of their stories.
 
MikeC said:
BRiT said:
It's sort of funny seeing how this news was posted on the NvNews front page, but was recently removed. Seems like someone is in denial. :rolleyes:

I'd like to get a confirmation from NVIDIA in regards to the cancellation of the GeForce FX Ultra before posting it. In addition, HardOCP asked us last August not to link to any of their stories.

Why would they ask something like that? :?:
 
Mulciber said:
He doesn't know anything about video cards, but he knows the name "nVidia". If someone sends him a requisition for a few hundred computers, you can be sure they'll have nVidia cards in them.

It's about brand recognition. Sit back and think about it for a second. While the GeForce FX Ultra may be getting blasted, NVIDIA's name is in the headlines.
 
Mulciber said:
Why would they ask something like that? :?:

Heh. It's a long story that Typedef can explain to all of you one day. Basically, they said we were stealing their news and riding their coattails.

But we're going off topic and I promise not to say anything else about it in this thread (unless I get provoked :LOL: ).
 
[off topic]

Thanks for the S3/SonicBlue info on VIA's P4 license. I didn't know the situation was so iffy (having seen quite a few VIA-based P4 mobos so far). I stand informed :)

[/off topic]

Beating a crippled horse, but what's the consensus on NV30's original target speed (before the delay etc.), 500/1000 or 400/800?

From another angle, in order to beat R300, did they bank on 500/1000, or did they expect more than what they got from 400/800 (considering JC's comment on the driver situation)?

Argh, I can't get it said clearly. Hokay. I'm trying to decide between these:

1) They couldn't get the chip to the 500 MHz they had intended, so they added the dustbuster. -- Hardware problem.

2) The intended 400 MHz surprisingly didn't give the performance they had expected (due to difficulties making the driver) so they had to dustbust it up to 500 MHz to get it to win reviews. -- Software problem.

[Edited a tpyo.]
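To put rough numbers on those two scenarios, reusing the same approximate, commonly quoted figures as the bandwidth sketch earlier in the thread (the helper below is just illustrative arithmetic, not anything nVidia has published):

```python
# Rough arithmetic on the 400/800 vs 500/1000 question; all figures approximate.
def bandwidth_gb_s(bus_bits: int, effective_mhz: float) -> float:
    return (bus_bits / 8) * effective_mhz * 1e6 / 1e9

bw_400 = bandwidth_gb_s(128, 800)     # NV30 at 400/800  -> ~12.8 GB/s
bw_500 = bandwidth_gb_s(128, 1000)    # NV30 at 500/1000 -> ~16.0 GB/s
bw_r300 = bandwidth_gb_s(256, 620)    # 9700 Pro, 256-bit @ ~310 MHz DDR -> ~19.8 GB/s

print(f"Core clock delta 400 -> 500 MHz: {500 / 400 - 1:.0%}")
print(f"Bandwidth: {bw_400:.1f} -> {bw_500:.1f} GB/s, vs ~{bw_r300:.1f} GB/s for the 9700 Pro")
```

Either way, the dustbuster buys roughly a 25% clock bump and still leaves the card well short of the R300's memory bandwidth, which is part of why the hardware-problem vs. software-problem question is hard to settle from the outside.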
 