Why did ATI ditch the R300 architecture?

Bob said:
You may want to initialize 'b', in case its previous value was inf or nan.

Also, have you tried sequences like this on NV4x/G7x? You might be surprised...

Care to elaborate on it? Inquiring minds want to know.
 
Bob said:
You may want to initialize 'b', in case its previous value was inf or nan.
True, if these values are supported at all.

Also, have you tried sequences like this on NV4x/G7x? You might be surprised...
I would expect it to use predication, so there's neither performance gain nor loss. (edit: a predicated texture read could actually result in substantial savings).
 
Deathlike2 said:
Care to elaborate on it? Inquiring minds want to know.
Under true IEEE754, Zero*Inf = NaN and Zero*NaN = NaN. Yes, you can get a result that is not Zero by multiplying Zero by some of the special numbers.
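
For instance, a quick C check of that behaviour (a minimal sketch; it assumes C99 and IEEE 754 floats, and that the compiler isn't folding the constants away under fast-math):

Code:
#include <math.h>
#include <stdio.h>

int main(void)
{
    volatile float zero = 0.0f;   /* volatile so the multiply isn't optimised out */
    float inf = INFINITY;         /* from <math.h> */
    float nan_val = NAN;

    /* Under IEEE 754, multiplying zero by inf or NaN yields NaN, not zero. */
    printf("0 * inf = %f\n", zero * inf);      /* prints nan (or -nan) */
    printf("0 * nan = %f\n", zero * nan_val);  /* prints nan */
    return 0;
}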
 
arjan de lumens said:
Under true IEEE754, Zero*Inf = NaN and Zero*NaN = NaN. Yes, you can get a result that is not Zero by multiplying Zero by some of the special numbers.

I meant his other statement. I have a pretty good idea that there are "special numbers" that are dealt with differently... I was wondering what was "surprising" about how NV4x/G7x dealt with it.

I'm almost thinking there was a performance penalty on the pseudocode Xmas was discussing... though logically, the rewritten code is very much different from the original...
 
I thought Bob works for NV, in which case I'd say he means NV40/G70 would benefit from such a conditional, too.

Sweet! There's a conditional in my reply, too.
 
Pete said:
I thought Bob works for NV, in which case I'd say he means NV40/G70 would benefit from such a conditional, too.
Well, if a is often zero in continuous areas of pixels, then NV40/G70 would benefit, too.

The problem, however, is that the compiler has no way of knowing the probability of a being zero, and the if is not free on NV40/G70. If the compiler just guessed, it could cause a significant slowdown. Although you could argue that if incrediblyComplexFunc takes 50+ cycles, 4 or so cycles for if/endif are actually not that significant any more.

It could be that NVidia did extensive empirical shader analysis and ended up with a threshold of complexity for a mul operand beyond which the probability of skipping the branch * the complexity of the branch is higher than the cost of the flow control instructions. Below that threshold, they use predication to at least save on texture bandwidth.
Or they could do shader replacement if P(a==0) is known to be high...
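
For reference, here's roughly the rewrite being discussed, as a C-style sketch rather than actual shader code. incrediblyComplexFunc is just a hypothetical stand-in for whatever expensive expression feeds the mul, and the cycle figures are only the ballpark numbers from this thread:

Code:
/* Hypothetical stand-in for the expensive operand (imagine 50+ cycles of
   math and texture reads in the real shader). */
static float incrediblyComplexFunc(void) { return 1.0f; }

/* Straight version: always pays for the expensive operand. */
float dumb_mul(float a)
{
    return a * incrediblyComplexFunc();
}

/* Branched version: skips the expensive work whenever a is zero, at the
   cost of roughly 4 cycles of if/endif overhead on NV40/G70. 'b' is
   initialised up front, per Bob's note about stale inf/NaN values.
   Note it isn't strictly equivalent under IEEE 754: if the function can
   return inf or NaN, then a * result is NaN even when a == 0. */
float dumb_mul_branched(float a)
{
    float b = 0.0f;
    if (a != 0.0f)
        b = a * incrediblyComplexFunc();
    return b;
}

Whether the branched form actually wins then depends on how often a is zero across a batch of pixels, which is exactly what the compiler can't know statically.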

Sweet! There's a conditional in my reply, too.
:LOL:
 
Ailuros said:
Was NV4x a quantum leap in performance over NV3x because of added units and/or SM3.0 support, or because the latter didn't inherit the former's weaknesses? If I draw a parallel between R2xx and R3xx, I don't think the answer is that much different in the end.

Umm, I dunno. There are weaknesses and weaknesses. Is a part "weak" because it is designed to be, or weak because it didn't work out so well? My understanding is that ATI made a business decision to go for leadership at the top end starting with R300. Shouldn't a part be judged on how well it meets --or doesn't-- the goals of its designers?
 
Xmas said:
[...] Although you could argue that if incrediblyComplexFunc takes 50+ cycles, 4 or so cycles for if/endif are actually not that significant any more.

It could be that NVidia did extensive empirical shader analysis and ended up with a threshold of complexity for a mul operand beyond which the probability of skipping the branch * the complexity of the branch is higher than the cost of the flow control instructions. Below that threshold, they use predication to at least save on texture bandwidth.
Or they could do shader replacement if P(a==0) is known to be high...
Can the compiler, while compiling "DumbMul", find out the cycle-cost of incrediblyComplexFunc?

I suspect not, as I imagine that shader compilation is always performed "piecemeal", where every function exists in a universe that consists solely of itself. Context-free compilation, as it were.

Though that does bring up how the driver might see this shader. Surely the driver would be able to build up execution statistics for each function.

Erm...

Jawed
 
geo said:
Umm, I dunno. There are weaknesses and weaknesses. Is a part "weak" because it is designed to be, or weak because it didn't work out so well? My understanding is that ATI made a business decision to go for leadership at the top end starting with R300. Shouldn't a part be judged on how well it meets --or doesn't-- the goals of its designers?

IHVs make propositions for future API drafts; the best idea usually wins, where "best" depends on the vote of the majority of the involved parties.

Do you really think that when NVIDIA's engineers conceived NV30 they were in fact aiming for a complete disaster?

Or, on the other hand, do you think that if ATI had been able to squeeze more out of R200, they wouldn't have? I've heard and read repeated rumours that R200 had some hardware bugs and was actually meant to support multisampling. Assuming there's some truth behind that, doesn't it put its comparison to R300 in a totally different light, "if" things had been different?

***edit: in what sense "leadership", anyway? ATI has for ages been the biggest player besides Intel (which I don't like counting, because calling an IGP a graphics accelerator is a bit of an oxymoron).
 
Ailuros said:
***edit: in what sense "leadership", anyway? ATI has for ages been the biggest player besides Intel (which I don't like counting, because calling an IGP a graphics accelerator is a bit of an oxymoron).

This is what I'm pointing at, a business decision to change their model from "competitive mainstream" to "big dog ready to tussle over any steak with the aim of winning":

Dave Orton said:
However, the big thing that I feel, and I came in with this conviction, was that watching us trying to target the mainstream and win was a dying model and so we said we are going to have to arc up. R300 was really the first part where we really opened up the thinking to what you can do to hit performance and hit schedule and relaxing on die size to an extent and I think it helped ATI get back in the game.

And, no, I don't think NV planned for NV30 to be a disaster. I'm saying I don't think ATI planned for R200 to be the hottest cock on the walk. This is why the delta R200-->R300 isn't as indicative as your average NV generational delta, as NV has always wanted/planned to be Big Man on Campus.
 
geo said:
This is what I'm pointing at, a business decision to change their model from "competitive mainstream" to "big dog ready to tussle over any steak with the aim of winning":

In my mind that "plan" started gradually with R100, and they built it up slowly to what turned out to be R300. That being said (and you'll have to excuse the stubbornness here), R300 would never have made equally great impressions if the gap to R200 had been smaller and if NV30 hadn't turned out to be such a disappointment, for whatever reason. That's all I'm saying, and it's in no way an attempt to diminish the value of R300, just to avoid misunderstandings.


And, no, I don't think NV planned for NV30 to be a disaster. I'm saying I don't think ATI planned for R200 to be the hottest cock on the walk. This is why the delta R200-->R300 isn't as indicative as your average NV generational delta, as NV has always wanted/planned to be Big Man on Campus.

Since that text followed an "Orton quote": I recall that David Orton wasn't willing to spend any further time/resources on R200 development and pushed for an immediate release, even though it might have needed more time.

ATI most definitely did NOT design R200 to lose the battle either. Quite frankly, one of the reasons I don't like these kinds of theories is that way too many people connect the change in ATI's success with the ArtX acquisition. I refuse to believe to any extent that ArtX were the absolute masters who came to rescue ATI's day. Where there was a will there was a way, and yes, of course, in all fairness, the addition of more engineering talent and thus resources definitely helped the final outcome.

For the future, I'd say it's unlikely we'll see such gaps between product generations for either of the two major IHVs, because I expect both to have learned from their past mistakes and to fight as hard as possible to minimize risks even further.
 
Ailuros said:
ATI most definitely did NOT design R200 to lose the battle either. Quite frankly, one of the reasons I don't like these kinds of theories is that way too many people connect the change in ATI's success with the ArtX acquisition. I refuse to believe to any extent that ArtX were the absolute masters who came to rescue ATI's day. Where there was a will there was a way, and yes, of course, in all fairness, the addition of more engineering talent and thus resources definitely helped the final outcome.

Oh, I would agree that while adding more highly-skilled engineers and IP is always a good thing, it isn't the be-all. Hence the Orton quote showing that they actually made a conscious business decision to change their model, starting with R300. The will at the top of the company to compete for the performance crown is at least as important as the engineering talent to do so. NV has always had it --ATI acquired theirs as part of the ArtX deal, and it was at least as important a part of that acquisition as the engineering talent that came with it.

And, I think, NV would agree that ATI shifted their stance. I can remember many quotes from them in the Rage/Radeon/R8500 days where NV smugly liked to observe that ATI was always shooting to beat NV's current part with their upcoming release, rather than their next part. And if you look at the history, that holds up. I recall, for instance, my Rage 128 Fury. For about a month it was the performance king, dethroning the TNT that had been available for some months. And then the first Dets came out, and it wasn't... followed quickly by TNT2, and it *really* wasn't.
 

I think, actually, quite a lot can be attributed directly to the ArtX takeover, although not necessarily from an engineering standpoint but from the top: Orton. It's been characterised to me before that ATI was traditionally a research and engineering company before they were a product company. One of the first things that Orton did was look at what was being developed and say "OK, what's that going into?" If it wasn't clear, then it either found a home or was out. He got engineering to focus on the productisation of particular technologies.

Having said that, I don't think it's a coincidence that all the desktop technologies produced since R300 have primarily been led by the California office, which I assume exists mainly due to the migration of S3 engineers and ArtX. The last desktop product led by Marlborough (whose office name is "ATI Research") was R200, and while they were the lead on Xenos, R600 as a desktop product is still being led by California (which means there was possibly a handover of the unified architecture at some point, and the result may be quite a different take on what was seen in Xenos).
 
Ailuros said:
I refuse to believe to any extent that ArtX were the absolute masters who came to rescue ATI's day.

Except with Flipper, which was, after all, ATI's introduction into consoles. Nintendo seems very satisfied with their partnership with ATI, as does Microsoft.
 
Jawed said:
Can the compiler, while compiling "DumbMul", find out the cycle-cost of incrediblyComplexFunc?

I suspect not, as I imagine that shader compilation is always performed "piecemeal", where every function exists in a universe that consists solely of itself. Context-free compilation, as it were.
Sure it can. At least the one in the driver can; the one from Microsoft can't, as it doesn't know cycle costs.
 
Xmas said:
Sure it can. At least the one in the driver can; the one from Microsoft can't, as it doesn't know cycle costs.
Well, I imagine there's lots of fun to be had in testing that. I wonder if Tridam has poked around...

Jawed
 
SugarCoat said:
Except with Flipper, which was, after all, ATI's introduction into consoles. Nintendo seems very satisfied with their partnership with ATI, as does Microsoft.

From Dave's post above:

The last desktop product led by Marlborough (whose office name is "ATI Research") was R200, and while they were the lead on Xenos,...

Flipper was in a very advanced development stage when the acquisition took place.
 
All I know is there is pretty clear evidence ATI is going in the wrong direction.

I think the Memory Controller is a prime example. It seems it isn't really needed.

And the TMUs need to be beefed up.

The fact is, Nvidia's tradeoffs are winning right now, although not by a huge margin.

ATI has a lot of stuff that sounds neat, but if it's not used right now, what good is it? As I say, people who buy these really high-end cards buy a new one every six months anyway. So what good is looking to the future?

Also, big PC game releases are growing fewer; it may be that major new releases like FEAR only come out every six months.
 
I don't think it's that much in nVidia's favor at the current time. But it may be once the G71 is released.

Anyway, I don't think ATI is going in the wrong direction at all. It may just be that they moved forward with some optimizations a little sooner than they should have.
 