Was the "ATI got Xbox2 contract" rumor true?

Oh God, here we go. Look at the actual underlying architectures and then tell me which is better suited to a time when there are unified shaders, and which will scale more easily?

It likely won't scale; it will be a new architecture by the time they get there.
 
Oh God, here we go. Look at the actual underlying architectures and then tell me which is better suited to a time when there are unified shaders, and which will scale more easily?

I'm tired of this nearsighted talk and playing of semantics that so many use in favor of ATi. For example, "playable" FP-16? Give me a break; they have the ability to call multiple precisions in the pipeline for a reason, and the fact that ATi's hardware is lacking, plain and simple, shouldn't be overlooked. I fought this battle back in 2000 with 32-bit color and I now see my faults; I was nearsighted and ignorant, and I won't make this mistake again.

PS. The cooling solution is more due to TSMC's problems with 130nm low-K dielectrics AFAIK.

I still see ATI coming out ahead. They produced the better design through better design decisions. Look forward all you want; the NV3x just keeps on sucking. They have FP24 ALL the way through the pipe, AFAIK. If you need lower precision, you just take the value and round it as needed. The NV3x seems to be a mish-mash in order to carry all the legacy support.

As for a unified shading model and all that: whoopie! It's not here (in DX), so why rush? The current stuff satisfies the current demands, and the demands to come, for a LONG time -- WRT the product's life cycle. So them not having it now is no big deal; they still have the 4xx and 5xx to go.

In the future, when this is required, ATI will most likely be at the front again. Why? Well, if Nvidia keeps on throwing dedicated hardware at the problem rather than a unified approach like ATI's, then they're hosed. Why waste time with other data types when you do NOT need to support them INTERNALLY? I see ATI having more than enough time (4xx and 5xx) to keep up with or outpace Nvidia.

This isn't like the P10, where they come out with a new arch once in a blue moon and ride it into the ground -- Nvidia does this. The P10 is a BIG leap in terms of design philosophy, but Nvidia and ATI tend to make smaller leaps more frequently, so things should balance out.

I'm more of the thinking that Nvidia was to blame for a lot of its 0.13 low-k woes. Might have something to do with a lot of excess logic for legacy support, too.
 
I'm in a state of shock. So we're going to forget how much IP reuse and building on established architectures there is in "New Architectures"? How long did the TNT architecture live on in new iterations? 4 years?

Speaking of this.. what state is the R400 in? heh.

As for ATI's going full-out FP-24, I can't see how it's a smart idea. Read the developers that B3D interviewed: they all want control over the precision. With PP in the API, why wouldn't you want total control so that you can be more efficient? Why use FP-24 everywhere if FX-12 is acceptable?

Developer laziness? Defending ATI's bad decision that yields them better performance at the developers' expense? I think Tim Sweeney's comments are best... support legacy and transition via PP, then make the leap to full-out IEEE-32.
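
For reference, since this precision fight keeps going in circles, here's a minimal sketch of what each of these formats can actually resolve. The bit layouts are the commonly cited ones (FX12 as 12-bit fixed point over [-2, 2), FP16 as s10e5, FP24 as s16e7, FP32 as IEEE 754 single); they're my assumption from contemporary discussion, not vendor documentation:

Code:
/* Minimal sketch of what each format can resolve. Assumed layouts:
 *   FX12 - 12-bit fixed point over [-2, 2), steps of 2^-10
 *   FP16 - 1 sign / 5 exponent / 10 mantissa bits
 *   FP24 - 1 sign / 7 exponent / 16 mantissa bits
 *   FP32 - 1 sign / 8 exponent / 23 mantissa bits (IEEE 754 single)
 */
#include <math.h>
#include <stdio.h>

/* Round v to a float format with 'mbits' stored mantissa bits. */
static double to_float_fmt(double v, int mbits)
{
    if (v == 0.0) return 0.0;
    int e;
    double m = frexp(v, &e);               /* v = m * 2^e, 0.5 <= |m| < 1 */
    double scale = ldexp(1.0, mbits + 1);  /* keep mbits+1 significant bits */
    return ldexp(round(m * scale) / scale, e);
}

/* Round v to 12-bit fixed point over [-2, 2). */
static double to_fx12(double v)
{
    double q = round(v * 1024.0) / 1024.0;
    return fmax(-2.0, fmin(q, 2.0 - 1.0 / 1024.0));
}

int main(void)
{
    const double v = 0.123456789;
    printf("FX12: %.9f\n", to_fx12(v));           /* 0.123046875 */
    printf("FP16: %.9f\n", to_float_fmt(v, 10));  /* ~0.1234741  */
    printf("FP24: %.9f\n", to_float_fmt(v, 16));  /* ~0.1234570  */
    printf("FP32: %.9f\n", to_float_fmt(v, 23));  /* ~0.12345679 */
    return 0;
}

Each step up the float formats buys another 6-7 mantissa bits, which is the entire substance of this argument.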

Well, if Nvidia keeps on throwing dedicated hardware at the problem rather than a unified approach like ATI's, then they're hosed. Why waste time with other data types when you do NOT need to support them INTERNALLY? I see ATI having more than enough time (4xx and 5xx) to keep up with or outpace Nvidia.

This is the best part. So, taking a substandard the industry doesn't support and running it through your entire pipeline - in a part that could very well fail to run any new apps in the post-2004 timeframe when IEEE-32 is the norm - is better than making a transitional part that can run the widely supported legacy standards quickly, transition to FP via DX9+'s PP, and then run IEEE-32 in the future?

Oh yes, makes sense to me. :rolleyes: So, what are you going to say when a game based on Unreal Engine 3 just won't run on an R300? That the short-term performance was worth it? That Unreal doesn't power good games anyway? That Sweeney is lazy? Going to be funny...

PS. Does anyone know when FP-24 was added to the API? Was it designed into DX9 from the beginning or was it an afterthought?
 
Oh yes, makes sense to me. So, what are you going to say when a game based on Unreal Engine 3 just won't run on an R300? That the short-term performance was worth it? That Unreal doesn't power good games anyway? That Sweeney is lazy? Going to be funny...

What are people saying now that Doom 3 won't run at 24-bit FP on an Nvidia GPU? It will only run at 16-bit, so all the ATI users are getting the better quality. Btw, Sweeney is being stupid. It's all about money. He will make it run on 24-bit FP cards, because if he doesn't he is neglecting the biggest DX9 card on the market, which the suits will never stand for.
 
"I can't imagine how you will actually program it," he said. "You do all these tasks in parallel, but the results of one task may affect the results of another task."

Tim Sweeney on PS3.
 
Jvd said:
What are people saying now that Doom 3 won't run at 24-bit FP on an Nvidia GPU? It will only run at 16-bit, so all the ATI users are getting the better quality.

I may be wrong, but this is what John Carmack stated in an interview with Rev:


Rev/Carmack said:
Rev: Your .plan indicates that the NV30-path that you use implements only 16-bits floating-point (FP), i.e. half precision FP, for most computation, which should be sufficient for most pixel shading. The ARB2-path does not have 16-bits FP, and so all computation are done with 32-bits FP on the NV30. With regards to the R300, there shouldn't be a difference since it is always 24-bits FP on the R300. According to your .plan, NV30 is twice as slow on 32-bits FP - that is why the NV30 is slower than the R300 on the ARB2-path, but faster on the NV30-path. The question is what sort of quality difference are we talking about (in DOOM3) for such a difference between FP formats?

Carmack: There is no discernable quality difference, because everything is going into an 8 bit per component framebuffer. Few graphics calculations really need 32 bit accuracy. I would have been happy to have just 16 bit, but some texture calculations have already been done in 24 bit, so it would have been sort of a step back in some cases. Going to full 32 bit will allow sharing the functional units between the vertex and pixel hardware in future generations, which will be a good thing.

So, running the ARB pathway, you're correct: you're not running at FP-24, but at the superior precision offered by 128-bit color (FP32 per component). My impression is that the "NV30" path is utilizing PP for much better efficiency at negligible/zero visual difference.

So hey, if an NV30 can run as much as possible at FX12 or FP-16 and have the same image quality as an R300 that's inefficiently running everything at FP-24 - more power to nVidia.
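
Carmack's framebuffer point is easy to demonstrate with a toy calculation. The quantizers below reuse the same assumed bit layouts as the earlier sketch (the helpers are mine, not anything from an API): a typical one-pass shading term lands on the same 8-bit value at all three precisions.

Code:
/* Toy demonstration of Carmack's framebuffer point: quantize the inputs
 * of a typical one-pass shading term (diffuse * texel + ambient) to
 * FP16/FP24/FP32 mantissa widths, then write to an 8-bit channel.
 * Bit layouts are assumptions (FP16 s10e5, FP24 s16e7), not vendor docs. */
#include <math.h>
#include <stdio.h>

/* Round v to a float format with 'mbits' stored mantissa bits. */
static double to_float_fmt(double v, int mbits)
{
    if (v == 0.0) return 0.0;
    int e;
    double m = frexp(v, &e);               /* v = m * 2^e, 0.5 <= |m| < 1 */
    double scale = ldexp(1.0, mbits + 1);  /* mbits+1 significant bits */
    return ldexp(round(m * scale) / scale, e);
}

/* Map [0,1] to 0..255 the way an 8-bit framebuffer channel does. */
static int to_8bit(double v)
{
    return (int)round(fmax(0.0, fmin(v, 1.0)) * 255.0);
}

int main(void)
{
    const double d = 0.63, t = 0.41, a = 0.05; /* diffuse, texel, ambient */
    const int mbits[] = { 10, 16, 23 };
    const char *name[] = { "FP16", "FP24", "FP32" };

    for (int i = 0; i < 3; i++) {
        double r = to_float_fmt(d, mbits[i]) * to_float_fmt(t, mbits[i]) + a;
        printf("%s -> %d\n", name[i], to_8bit(r)); /* prints 79 all three times */
    }
    return 0;
}

Chain enough dependent operations and the formats do diverge, but for a single pass into an 8-bit target the difference is invisible, exactly as Carmack says.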


Just to cement this point:

Rev/Carmack said:
Rev: My interpretation from your .plan :

In terms of Performance :
NV30+NV30-path is faster than NV30+ARB2
NV30+NV30-path is faster than R300+ARB2
R300+ARB2 is faster than NV30+ARB2
R300+R200-path is faster than R300+ARB2

In terms of Quality :
NV30+ARB2 is better than NV30+NV30-path
NV30+ARB2 is better than R300+ARB2

R300+ARB2 is better than NV30+NV30-path
R300+ARB2 is better than R300+R200-path

Am I correct?


John: Correct.


cybamerc said:
"I can't imagine how you will actually program it," he said. "You do all these tasks in parallel, but the results of one task may affect the results of another task."

Tim Sweeney on PS3.

What is this, some perverse reverse psychology you're trying here? Tim Sweeney is a top-tier PC developer; he knows that platform intimately. For all we know, his comment on PS3 is based on less information than Panajev can regurgitate at this point in time (as we now have a patent and more information).

So, please...
 
cybamerc said:
Vince just likes Nvidia because they've been linked to Sony and PS3. Why even bother with his fanboyish nonsense?

How little you know; I hate nVidia with a passion. They've not only taken my money by succeeding, they've made me look like an ass several times over the past 3 years. Yet I infinitely respect them for this and revere their ability to succeed and their design methodologies. I see the position ATI is in and how people are defending them, and it's reminiscent of myself and several others here circa 2000. And I've learned....

The only 'fan-boy' here is you, go look up my posts on nVidia from 3 years ago on Fool or THG. Better yet, ask Tag why she likes me so much. ;) lol. Besides, I still think nVidia will be in Xbox2.

So please refrain from making yourself look dumb, it reflects badly.
 
Vince said:
I'm in a state of shock. So we're going to forget how much IP reuse and building on established architectures there is in "New Architectures"?

What does that matter NOW?

Sometimes you are SO SILLY. Who CARES if, x years down the road, Nvidia's "sea of functional units" works better than ATI's dedicated traditional pipelines (or whatever other approach they may have developed for an upcoming iteration of products), if the chips we have NOW clearly show that ATI's approach totally and completely KICKS Nvidia's ASS all over the place??? Your NV3x series will never exceed R3xx series performance at pixel shading. Never!

You are spouting absolute, utter NONSENSE here. If you base your purchase today on how you predict an architecture will look/perform in the future, you're crazy. Or at least half crazy, hehe.

Speaking of this.. what state is the R400 in? heh.

True answer is, nobody outside ATI knows for sure, so why speculate about it? They haven't announced anything publicly.

As for ATI's going full-out FP-24, I can't see how it's a smart idea. Read the developers that B3D interviewed: they all want control over the precision.

Not all. You could argue a majority wants it, but then again you hardly have a statistically representative selection in that article, so it would mean nothing. Those guys are mainly "enthusiast" coders aiming at "enthusiast" players; ask more mainstream codeshops, places like EA, if they're really that interested in bothering with special-casing rendering paths in their engines for quirky architectures like the NV3x.

Besides, why would you need/want control if there is no speed difference from one format to the next? It just complicates things. The only reason you'd want control over the pixel format is that going flat-out full precision on current Nvidia products makes 'em run dog-slow.

It's not as if pixel formats are completely analogous to integer sizes used in a microprocessor; they're not, and you should know that.

With PP in the API, why wouldn't you want total control so that you can be more efficient? Why use FP-24 everywhere if FX-12 is acceptable?

Why not turn that silly argument on its head and ask, why go through the extra work and make another shader just for the technically inferior FX12 format if FP24 comes for free from a speed POV and doesn't require twice the work effort? You know that DX9 requires FP24 minimum. FX12 is not compliant with API specs.

Defending ATI's bad decision that yields them better performance at the developers' expense?

WHAT expense? You're calling FP24 at full speed, as compared to FP32 at half, "a bad decision"? You're stark raving mad if you seriously believe that. Nvidia is wasting transistors on features that break DX9 compatibility; you shouldn't be the one talking about bad decisions.

I think Tim Sweeney's comments are best... support legacy and transition via PP, then make the leap to full-out IEEE-32.

Sweeney's never really been up there with the big ones, he's said and done too many stupid things to really be considered an authority. It's totally unreasonable to expect an engine within the foreseeable future to REQUIRE FP32 internal precision. Shaders so complex they'll give noticeable artefacting on FP24 hardware won't be used to any large extent for a long time. Certainly not until ATI has developed FP32-capable hardware. Sweeney's certainly smoking some heavy weed if he thinks the next unreal engine will look unacceptable on current R3xx FP24 hardware, yet remain playable on NV3x FP32 hardware...
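
Back-of-envelope on that claim, assuming the commonly cited mantissa widths (FP16: 10 bits, FP24: 16, FP32: 23) and worst-case error growth of roughly n times the unit roundoff after n dependent operations - a rough model, not a hardware measurement:

Code:
/* How many dependent shader ops before accumulated rounding error
 * reaches half a step of an 8-bit framebuffer channel (1/510)?
 * Assumes unit roundoff u = 2^-(m+1) and worst-case growth n * u. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const int mbits[] = { 10, 16, 23 };
    const char *name[] = { "FP16", "FP24", "FP32" };
    const double lsb_half = 1.0 / 510.0;

    for (int i = 0; i < 3; i++) {
        double u = ldexp(1.0, -(mbits[i] + 1)); /* unit roundoff */
        printf("%s: u = %.1e, ~%.0f dependent ops to reach 1/510\n",
               name[i], u, lsb_half / u);
    }
    return 0;
}

A handful of dependent ops is enough to bite at FP16, but by this estimate you'd need shaders hundreds of instructions deep before FP24's error even reaches half a step of an 8-bit channel.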

A huge big LOL @ you and Tim for that one dude.

This is the best part. So, taking a substandard the industry doesn't support and running it through your entire pipeline - in a part that could very well fail to run any new apps in the post-2004 timeframe when IEEE-32 is the norm - is better than making a transitional part that can run the widely supported legacy standards quickly, transition to FP via DX9+'s PP, and then run IEEE-32 in the future?

What are you talking about? Again you make no sense. ATI hardware supports full FP32 frame buffers; no software will fail to run on it. Internal pipes calculate with 24-bit precision, yeah, but since that is within API specs it hardly matters. It is totally invisible to software anyway; it only sees the output, which is FP32.

You seriously think a NV3x chip is going to run your FP32-requiring "post-2004" software at a playable speed at anything other than a postage stamp sized screen resolution? You gotta be joking!

Also, there is no such thing as "DX9+". You need to stop reading those Nvidia PR pressreleases, heh heh. :LOL:

Oh yes, makes sense to me. :rolleyes: So, what are you going to say when a game based on Unreal Engine 3 just won't run on an R300?

I'd say the game's buggy, and what else is new? Sweeney-engined games always are. Unreal was a total disaster from a stability standpoint when it came out in 1998 - I never played a game that crashed that much - and it was the same thing again with Unreal 2. Worst p.o.s. in years (and what's up with those load times anyway? Did the game secretly convert my hard drive to a C64 tape deck?).

FP24 internal precision is specced in the API. FP16 (and FX12, for that matter) AREN'T. Get over it, pal; your team lost the match, stop bitching and whining. :rolleyes:


*G*
 
Grall said:
You are spouting absolute, utter NONSENSE here.

First of all, you're the one spouting nonsense: you're confusing the ability of an IHV to produce a more advanced 3D architecture - and how that reflects on their future products in a closed-box environment, as would be used in a console in two years - with the ability of an IHV to produce a card that's hardly forward-looking on an architectural level but is a very good general-purpose parallel computer utilizing yesterday's architectural layout. Last time I checked there was a difference between a more advanced architecture and raw performance, but apparently when you bring PC IHVs into the conversation people go stupid.

<checks forum again> :rolleyes:

I'm also not going to respond to the entirely PC-based parts of your argument (though I did throw in some) that don't belong here... but hey, as long as you can pimp your IHV!!

Why not turn that silly argument on its head and ask, why go through the extra work and make another shader just for the technically inferior FX12 format if FP24 comes for free from a speed POV and doesn't require twice the work effort? You know that DX9 requires FP24 minimum. FX12 is not compliant with API specs.

Why make a shader for FP-24 when you can code entirely in IEEE-32 and it'll run perfectly on tomorrow's cards that have fused shading/computational resources? We can play this game all night, but the fact remains that the NV3x supplies those who aren't enthusiasts, and those who don't upgrade every year, with the widest selection of precisions - not just for the transition, but future-proof for the greater part of the next decade at least. Can you say the same for the R300? Nope.

You need to drop this elitist enthusiast POV and realise that many of us don't care about >100 fps - we just want the damn game to run when we want to play. God, you remind me of myself; the quicker you learn, the better.

Sweeney's never really been up there with the big ones, he's said and done too many stupid things to really be considered an authority.

So the man's had how many best-selling first-party titles, with how many 3rd parties utilizing his 3D engine, and we're going to listen to you instead? Who are you again?

Also, there is no such thing as "DX9+". You need to stop reading those Nvidia PR pressreleases, heh heh. :LOL:

As in >DX9... I'm going to assume that DirectX 9 isn't the epitome of graphics and that there will, in fact, be a DX10... <insert startling revelation here>

FP24 internal precision is specced in the API. FP16 (and FX12, for that matter) AREN'T. Get over it, pal; your team lost the match, stop bitching and whining. :rolleyes:

Hmm... So I guess Microsoft put PP into DX9 why, exactly? They support the IEEE standard precision, they support FP-16 and FX-12 for the transient period, and this is done utilizing PP as per DX9. What's the problem?

My "team" - dude, get a life.
 
Well, I think what Vince might be getting at is that in a fixed platform (i.e. a console), the more flexible architecture of the NV3x would be greatly appreciated over the R3x0's more traditional/conservative design. And when it comes to PC games, since the software lags behind the hardware (by an order of magnitude), this difference between the architectures isn't really apparent.

If I had the choice between the NV3x and the R3x0 for a console that'd ship today, I think the NV3x would make more sense, simply because of the flexibility.

Someone somewhere once wrote that NVIDIA's past workstation cards were gaming cards scaled up to the needs of professionals, whereas the NV3x was most definitely a workstation card scaled down to the gamer. Interesting to think about!
 
This is confusing, but I want to try:

NV30+ARB2 is better quality than NV30+NV30-path because
NV30+NV30-path is faster than NV30+ARB2.

So, trade off between quality and speed.

NV30+ARB2 is better quality than R300+ARB2 because
R300+ARB2 is faster than NV30+ARB2.

Again, trade off between quality and speed.

R300+ARB2 is better quality than NV30+NV30-path because
NV30+NV30-path is faster than R300+ARB2.

R300+ARB2 is better quality than R300+R200-path because
R300+R200-path is faster than R300+ARB2

Same again.

Is this correct? What happened to Carmack's statement that it's going to be the same quality across the board?
 
Vince:

> .. they made me look like an ass several times over the past 3 years

I don't think you can give Nvidia all the credit for that. You do such a fine job of that yourself.

> So the mans had how many best selling first party titles with how many
> 3rd parties utilizing his 3D engine and we're going to listen to you?

What do you think about his comment on Cell?



zurich:

> the more flexible architecture of the NV3x would be greatly appreciated
> over the R3x0's more traditional/conservative design

If that is true, why aren't we seeing more software engines? A CPU offers the most flexibility, after all. Who cares about speed?



Simon F:

> I think you meant "platonic".

No, you see.... when a person says something "gay" on the internet but at the same time makes a point of not being a homosexual (although Vince used the word "heterosexual" which makes the point less clear) then it becomes funny. Or something like that.
 
I THINK people here are confusing things... melting the PC and the console worlds together... when it actually isn't like that.
We can talk about PC video cards. THEN, separately, we can talk about console structures...
but that's it...
The discussion about whether the NV3x is better/worse than the R3xx is cool and all, but this is a console forum and I think we should stick to that.
Does anyone really think that the difference between an ATI-powered Xbox2 and an Nvidia-powered one will be so noticeable??? Especially on normal TVs? Hell, even on current HDTVs... the GPU in the Xbox2 will be capable of a certain amount of STUFF, whether it is an ATI GPU or an Nvidia GPU...

See what I mean...

We can't even see the difference between FP24 and FP32 on super-high-resolution monitors; how are you going to see the difference on a TV????

Still, you WILL see the advantages of a more flexible architecture over the others... see what developers are making little old PS2 do, for a practical example.

See my point?
 
As far as the NV3x vs R300 in a console environment goes, I understand the fact that you want control over your application and might want to fine-tune it well, and the NV3x allows you to do that...

As far as the future of the R3xx is concerned, well, if they wanted internal 32-bit precision they could add it... increasing memory bandwidth (clock-rate is increasing) should allow them to upgrade to FP32 and still support FP24 (which will be faster than FP32, but not by much... bandwidth reasons). It will require re-engineering, but they can do it...
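
To put a rough number on the bandwidth side of that - a toy calculation assuming four-component render targets, which is where a wider format would actually touch memory (illustrative figures, not vendor specs):

Code:
/* Rough arithmetic on the FP24 -> FP32 bandwidth cost, assuming
 * four-component pixels. Illustrative only, not vendor specs. */
#include <stdio.h>

int main(void)
{
    const int components = 4;
    const int fp24_bits = components * 24;  /*  96 bits per pixel */
    const int fp32_bits = components * 32;  /* 128 bits per pixel */

    printf("FP24 pixel: %d bits, FP32 pixel: %d bits (%.0f%% more)\n",
           fp24_bits, fp32_bits,
           100.0 * (fp32_bits - fp24_bits) / fp24_bits);
    return 0;
}

About a third more traffic per pixel - real, but the kind of gap that rising clocks and a wider bus could plausibly absorb.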

As far as the future is concerned, well, they have competent teams (I trust the ex-ArtX team) and I think they can push in the direction of flexibility (sharing of FP resources for VS and PS... shared HW)...

Sure, the NV3x is more on track towards that than the R3xx, but ATI seems to be putting a lot of emphasis on circuit-level design, trying to make sure they use the available manufacturing process better than their competitors: this is an advantage the Alpha guys used (when they could, when they had the budget) and that Intel has been using for several years...

Vince:

> .. they made me look like an ass several times over the past 3 years

I don't think you can give Nvidia all the credit for that. You do such a fine job of that yourself.

You are talking from experience, right ?

Simon F:

> I think you meant "platonic".

No, you see.... when a person says something "gay" on the internet but at the same time makes a point of not being a homosexual (although Vince used the word "heterosexual" which makes the point less clear) then it becomes funny. Or something like that.

I know... You feel he is a hot stud and you want a piece, but he won't give it to ya... wait, was that what you just said?
 