Official: ATI in XBox Next

A few more questions.

Does ATI have a history of always going for an older fabrication process than Nvidia? Considering the disclosures made by TSMC regarding the maturity of their .13 process, would it not be more correct to change the adjective "conservative" to "prudent" in regards to ATI's decision?

With the C.O.M. of consoles being so important, does anyone think that IBM will make any part of the Xbox2 unless it helps them further offset their own R&D expenses (like the Gekko)?
 
I once had a guy who just finished flight school try to tell me that sails on sailboats don't operate according to the same laws of physics as wings on airplanes. Like objects moving through air horizontally are somehow different from objects moving vertically?!? Once he had staked out his position, nothing could convince him of his wrongheadedness.

Not to go too much off topic, but he's right. "Laws of physics" might be the wrong term to use, but I understand what he means: planes rely on pressure created by a vortex along the wing's surface for the atmosphere to "suck" the plane up into the air. It's a totally different approach from a sailboat, which just relies on its sail to catch the wind.
 
DaveBaumann said:
Actually, Vince's second option doesn't make any sense. "Physical reuse" is just that, physical - so for something to be "reused" it has to be reused on the process it was initially used on, i.e. you can only go from one 90nm process to another 90nm process; you can't go from a 90nm to a 65nm process via "physical reuse".

Um, that's exactly what was intended Dave. My theme has been lithography and the possible problems in adopting bleeding-edge lithography when the development pipeline isn't within a single entity.

Thus, it's logically derived that if a form of physical reuse is adopted (it was just an option, remember) it would be bound to a particular set of libraries. What you stated is just a subset of this.



PS. Joe, do you not have a mind of your own? This is simple: I refuse to play your semantic game of backpedaling and not stating opinions while eating other people up based on linguistics. Your ideas have little bearing and are nonexistent. If you refuse to put up, then shut the heck up.

Dave Baumann is a great guy, but I'm not asking him - which is precisely what I would have done if I wanted his opinion. I want to hear yours, and what you think ATI is going to do. Perhaps expand on the consequences (pro and con) of a third party taking over a netlist for an advanced IC and running with it. I mean, you're the ATI investor, right? Give us something to work with - you're debating merely my ideas and semantics.

Am I supposed to take that like I should be afraid, or that I was hoping you would "overlook it?"

Dude, get off yourself. I just wanted to let you know that I wasn't going to forget about the time you put in (yes, the whole 2 minutes it took you to say "maybe yes, maybe no") and that I'd continue the conversation. It was more for tact than anything else, as I know how frustrating it is to prolong an argument and have people evade points. But some people are just jagoffs.
 
Joe DeFuria said:
I think there's just as clear reasoning that says that if what you're saying is correct, then nVidia did not time the market properly, or they sacrificed performance for what is in reality a "half-step" to the "new paradigm"....which in the end makes it more of a jack of all trades, master of none type part.

Might this "half-step" benefit them toward making the "full step?" Perhaps. This is assuming that nVidia made a half-step in the right direction, and not the wrong one.

N maybe good, N maybe bad. N may or may not exist. You have just furthered this conversation a whole 0%.

Joe, look, I don't have your desire to play semantic games. I laid out what I thought 3D graphics would evolve into - I've heard similar comments from master Carmack, developers here, Gary Tarolli... it's easy to see the pattern emerging for anyone who wants to see it.

I showed how the NV3x fits into this, and how the R300 doesn't. What you're stating is semantic BS, my friend; it has no fundamental relevance to the line of reasoning that I've put forward and am supporting. We can play these games about anything, anywhere, anytime - and they never further the discussion.

Joe DeFuria said:
Thing is, Vince, nVidia was no different with respect to your arguments back in the X-Box 1 era. (nVidia having a more true fabless semiconductor model, vs. a licensing one.)

And apparently, Microsoft found out that those characteristics didn't pan out all that well, and didn't want to repeat that same mistake twice.

We don't know the specific circumstances surrounding the choice; nVidia was in arbitration with Microsoft, among other things that have manifested themselves in Microsoft's DirectX dealings in the past.

But while this may be better financially, it has no relevance to performance, which is the attribute we concern ourselves with in the Console Forum. If you wish to debate the financial aspects, then leave. May I suggest Fool.com, which several others can vouch for as a site filled with literate, intelligent and like-minded individuals who will tell you what you want to hear, Joe.

Once again, I refer you back to my posts concerning the importance of lithography and how this is a problem. This has still not been addressed - and I'd hope that if you're going to fight me on it, you can at least explain why.

Guess who... said:
Stages of nVidia fan coping:

1) Complete Denial: "Nah---this is just a bluff for Microsoft to get nVidia, the one they really want, to cave into a better deal."

2) Technical Denial: "Surely corporate politicking... nVidia clearly has both better technology and a better business model. I see no good technical reason for MS to choose ATI; thus, it's down to corporate politics."

3) No more progress beyond 2 is ever made.

< shrug > Good post, Joe; thanks for the example of that "melodramatic statements and blowing things out of proportion" you were talking about. It was really unnecessary, but I applaud you for going out of your way for the betterment of this board. Good times.
 
Cyborg said:
I once had a guy who just finished flight school try to tell me that sails on sailboats don't operate according to the same laws of physics as wings on airplanes. Like objects moving through air horizontally are somehow different from objects moving vertically?!? Once he had staked out his position, nothing could convince him of his wrongheadedness.

Not to go too much off topic, but he's right. "Laws of physics" might be the wrong term to use, but I understand what he means: planes rely on pressure created by a vortex along the wing's surface for the atmosphere to "suck" the plane up into the air. It's a totally different approach from a sailboat, which just relies on its sail to catch the wind.


Sailboats catch the wind when the wind is behind them; when the wind isn't behind them, the sail acts like the wing of a plane to move the boat forward.


 
DaveBaumann said:
I'd suggest that it's more likely that if ATI do not do the back end, they will hand over the core logic to MS in its entirety and MS will seek someone to do that. If MS have a particular fab in mind, and ATI have previously dealt with that fab, then it's probably the case that ATI would do it; if another fab is in mind, then it may well be someone else.

Exactly. The more I think about this, the more I think they'll hand off a completed GDSII to Microsoft. The thing stated earlier by WHQL and Joe about how it's just a design (AFAIK - correct me if I'm wrong, guys) is improbable. ATI has the tools, experience, set-piece infrastructure and relationship with TSMC/UM in place.

From what I've heard, the level of integration between front-end RTL guys and back-end guys doing the routing, synthesis and placement is rapidly increasing - to the point where, on advanced lithography, it's a given that a high level of synergy will be there (e.g. knowing the effects of RTL on physical implementations and designing with this in mind).

Handing off the synthesis of a 60M+ gate device to a third party just doesn't sit well with me. Granted, there are people who do this (eSilicon and others), but I've never known them to do ICs with this level of complexity.
 
Yes... it's Joe again said:
Indeed. Try this one on for size: MS is partnering with ATI for the next X-Box. And they did so because they believe that the combination of ATI's future technology (which includes performance, cost, power consumption, features, etc.) and ATI's business model is overall superior to nVidia's.

End of story. No denial required.

Alright, and this would make a great discussion at Fool.com. ATI's business model is of no concern to us here - just as IBM's, NEC's, Toshiba's or ATI's (before you and the XBox) was of no concern to us.

What is a concern in the Console Forum (as opposed to the Investment Forum) is performance and features. This, as I proposed, is highly linked to lithography and the manufacturing abilities of the respective companies. We talk of IBM's success in SOI or Toshiba's success in eDRAM because it's reflective of PS3. We talk of NEC's achievements and Nintendo for a reason. We also talked of ATI and how we don't expect Nintendo's part to be absolutely bleeding-edge for a reason - ATI is a small part of this.

The same holds for ATI/Microsoft and nVidia. If you can't see this, then you're just wasting space and limiting people's ability to suggest their own ideas, because you're arguing over semantics.

As far as we know, R400 is a DX10-based architecture. What happened to it is that ATI realized that, considering they already have the superior DX9 core, what they really need is mostly a faster core, and they don't need a new next-generation core at this time to remain competitive in the PC space. DX10 won't be useful until 2004 or 5. Technology- and feature-wise, R3xx is superior to NV3x. ATI is the "de facto standard" upon which DX9 games are built... and which, if desired, nVidia cards are "optimized for."

I think Archie covered the DX10-level question I was thinking of. But I have an OT question - how many games on the market are built utilizing the de facto standard? Just wondering, because I've seen a bunch of nVidia's "The Way It's Meant to be Played" stickers and haven't seen... well... anything for ATI. Perhaps you can PM me the answer, as this is OT. Thanks in advance.

In short: ATI does not need, at this time, to improve the characteristics of their core - just performance.

I smell a 3dfx parallel.

The same situation that nVidia was in during the DX8 era. nVidia didn't NEED the NV30 to compete with the Radeon 8500. They just needed a "faster" GeForce3 core - hence the GeForce4 Ti.

Well, this is factually incorrect. nVidia had an NV20, then the NV25 refresh. Then the new NV30, refresh NV35. New NV40...

ATI is basically going R300, R350, R3xx. I see a difference, but it must not be apparent to those with a vested interest in ATI. < shrug >

So where is the R400? It was scrapped because it wasn't a wise business move to pursue it at this time. It is smarter (less risky) to go after Loki.

We'll see when they announce NV4x and Loki, won't we?

Joe DeFuria said:
nonamer said:
First, I think you should read Vince's post, since that was what I was referring to in this whole line of talk. This whole thing was made on the assumption that ATI will continue their practice of holding back from using the latest process, and thus will stick to 90nm when 65nm is first available.

Right...and that's a rather large assumption to make if MS is licensing technology, and not buying graphics chips.

No, it's not, when you think about it objectively. But you still have yet to articulate your thoughts on ATI's agreement and its short/long-term effects on performance, quality, et al.

Thus, we really can't debate much, as you've claimed and stated nothing - basically being a jerk and not taking a firm, defined stand. Needless to say, this hasn't stopped you from attacking others like nonamer.

No, MS will not have to do "redesigning". MS's contracted engineering team will have to take ATI's core logic design and build a chip using it.

You know this how? You'd want this why?

I can't understand why people don't like ATI

I can't understand why you don't accept and love Natoma's life choices either. :rolleyes:
 
nelg said:
I am a bit confused here.
Vince said:
nVidia, as someone on B3D stated, is a semiconductor company as opposed to ATI and their current IP agreement.
How is ATI any less or more of a semiconductor company than nVidia?

Good question. In toto, ATI is in no way less of a semiconductor-oriented company, nor would I say this. But with regards to this particular agreement, it's been stated that nVidia would only accept a semiconductor-based agreement (which is probably smart overall*), whereas ATI is willing to sell IP (whatever that entails).

* It's smart overall, as this way the given company controls the output and its quality - not only in the uber-performance aspects I've been talking about thus far, but also in overall quality control. Like it or not, the ATI name is going to be associated with the IC - and all that goes with it - regardless of who does the work.
 
Well, this is factually incorrect. nVidia had an NV20, then the NV25 refresh. Then the new NV30, refresh NV35. New NV40...

ATI is basically going R300, R350, R3xx. I see a difference, but it must not be apparent to those with a vested interest in ATI. < shrug >

This year there is no difference since, at the top end, ATI will have produced R300, R350 and R360 and NVIDIA will have produced NV30, NV35 and NV38. I suspect we'll see similar parallels as time goes on.

I think Archie covered the DX10-level question I was thinking of. But I have an OT question - how many games on the market are built utilizing the de facto standard? Just wondering, because I've seen a bunch of nVidia's "The Way It's Meant to be Played" stickers and haven't seen... well... anything for ATI.

NVIDIA have a larger marketing budget for TWIMTBP; it's as simple as that. ATI are currently focusing all their resources on the engineering side (something I wish NVIDIA would do more of). TWIMTBP titles still operate within the confines of the developer's API of choice.

As for DX10, it will be being worked on now; you can rest assured of that. The IHVs - specifically ATI and NVIDIA in this instance, because those are the ones you can guarantee with a reasonable level of certainty will actually produce something - drive the content of the API just as much as MS does. MS will be fully aware by now of the likely functionality of NV50 and R500, and the API will be shaped around those. If the timing of DX10 is close to the XBox2, it wouldn't surprise me at all if the major focus for DX10 is put on ATI's next-generation part, as it clearly was for DX9.
 
Vince,

I am pretty much with you that the architecture of graphics chips will change dramatically when the distinction between fragment and vertex disappears; though, I very much doubt that NV30 is architecturally closer to that in any significant way. People like you and Uttar seem to base your assumptions (of NV30 being already designed with this in mind) solely on nVidia's implementation of what is commonly referred to as vertex shaders, where nVidia basically goes with a single large vertex unit with a large number of scalar FP pipelines, whereas R300 uses distinct building blocks, each consisting of a 4-way SIMD unit and a scalar pipeline (like PS2's VUs, btw).

To me this is basically a design decision along the lines of: "do we replicate simpler control per unit at the cost of losing flexibility in handling more unusual computing patterns (for a vertex shader, at least), or do we shift our transistor budget more towards control, sacrificing a bit of theoretical performance for better situational adaptability?" So, to speak in this forum's terminology, nVidia's approach is much more like integrating all vertex shading into a single multi-scalar processor, while ATI has something that can be thought of more as one of your CELL PEs: a smaller global vertex logic that has access to several replicated, distinct "vector processors". Yet neither principle alone (IMO) represents any significant architectural step towards the above-mentioned paradigm compared to the other (I don't want to imply that adding flow control to vertex shaders isn't a major step forward; this, as I understand it though, has not been part of our discussion).

The reason why I (and others, probably) mentioned pixel shaders, or fragment shaders, or whatever you want to call the rasterizer/renderer part of the chips, is that these are architecturally way more complex than the vertex shaders on contemporary DX9-class chips (they really are the interesting parts of DX9 chips, IMO, as the growth in vertex capabilities from DX8 to DX9 is rather small by comparison - numerous DX8-era chips are already VS 2.0 compliant). Here we're talking about by far the largest share of on-chip logic, and it is here that, IMO, R300 looks clearly better architecturally (not in terms of features).
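The trade-off described above can be made concrete with a toy utilization model (my own illustrative sketch - the unit counts, widths and workload are assumptions, not either chip's real configuration): vec4 SIMD units leave lanes idle on ops narrower than four components, while scalar pipelines can pack components freely but put more burden on scheduling.

```python
def simd_cycles(ops):
    # Each op occupies the whole vec4 unit for one cycle,
    # regardless of how many components it actually uses.
    return len(ops)

def simd_utilization(ops, width=4):
    used = sum(ops)                       # components actually computed
    return used / (simd_cycles(ops) * width)

def scalar_cycles(ops, pipes=8):
    # Scalar pipelines can pack components from different ops
    # into the same cycle (perfect packing assumed here).
    total = sum(ops)
    return -(-total // pipes)             # ceiling division

# Component widths of successive vertex-shader ops (made-up workload)
workload = [4, 3, 3, 1, 4, 2, 3, 1]

print(simd_utilization(workload))   # 0.65625 - lanes idle on narrow ops
print(simd_cycles(workload))        # 8 cycles on one vec4 unit
print(scalar_cycles(workload))      # 3 cycles on 8 scalar pipes
```

Neither number is "right" in isolation - the scalar figure assumes the scheduler can always find independent components to pack, which is exactly the control cost the post describes.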
 
PiNkY said:
where nvidia basically goes with a single large vertex unit with a large number of scalar fp pipelines

NV30's array is still Vec4 based - the only one that is fully scalar is P10.
 
NV30's array is still Vec4 based - the only one that is fully scalar is P10.

Thanks for adding this info; I guess this almost nullifies NV30's above-stated relative performance advantages when lots of scalars have to be transformed per vertex.
 
PiNkY said:
I am pretty much with you that the architecture of graphics chips will change dramatically when the distinction between fragment and vertex disappears; though, I very much doubt that NV30 is architecturally closer to that in any significant way. People like you and Uttar seem to base your assumptions (of NV30 being already designed with this in mind) solely on nVidia's implementation of what is commonly referred to as vertex shaders,

Well, the first is a good thing. The second - I understand what you're saying, but don't confuse me with Uttar. Unlike him, I don't believe that NV3x was designed with this in mind; rather, it's just a more advanced concept and progression than the basic use of discrete functional blocks you see in the R300.

The concept of a Unified Shader, in my mind (so I welcome your comments as I learn), would entail a pool of fundamental computational resources with few arbitrary bounds imposed on how they're used. Granted, some level of granularity is desirable, but I can hardly see how you'd want it at the R300 VS level.

So, if you can agree to this basic 'pure' philosophical view, where you have component blocks that are shared on a per-task basis with few arbitrary limitations, then it follows that the NV3x front end is a progression over the R300. Is it even close to this vision? Hell no (and to note, I never implied this). But I don't see how the R300 is comparable in this respect, as its basic architecture is just an [advanced] progression of earlier designs where you had these fixed-function logic blocks.

PS. I apologize for not making the distinction between FS/PS and VS earlier. I had no desire to drag fragment shading into this conversation, and I saw its inclusion as nothing more than a "well, maybe they do this - but we have this!" type of response from whql (not you), which is why I've totally avoided it. I also avoided dynamic flow control/conditional branches because I saw where the argument was going early on and didn't want to allow others to use them as arguing points about how they're not needed, not used, wasted die space, etc. I think you can understand.

Ohh, and thanks for the posts. I've enjoyed them.
 
At an operational/functional level, how does an "array of Vec4 processors for vertex processing" differ from an "array of vertex processors"?
 
DaveBaumann said:
At an operational/functional level, how does an "array of Vec4 processors for vertex processing" differ from an "array of vertex processors"?

Arrangement? Functionality? Their inherent flexibility and programmability?

Just for kicks - what's that "Vertex Processor" made out of? and how are they arranged?
 
Vince said:
Arrangement? Functionality? Their inherent flexibility and programmability?

That doesn't answer the question. Why is it that you think an array of VSs is any different from an array of Vec4 processors? Why can't multiple VSs be just as functional and capable?

Just for kicks - what's that "Vertex Processor" made out of? and how are they arranged?

It's in the review - for each VS there is one Vec3 and one scalar processor.
 
DaveBaumann said:
That doesn't answer the question. Why is it that you think an array of VSs is any different from an array of Vec4 processors? Why can't multiple VSs be just as functional and capable?

It's in the review - for each VS there is one Vec3 and one scalar processor.

You just answered your own question. I was talking (as I've stated like 10 times now) about looking forward, in a philosophical light. You tell me why, looking forward to a time of Unified Shaders, you'd want your computational resources free of arbitrary bounds (like a VS construct).

If you want to prove me wrong, then explain to me why, in a forthcoming time of Unified Shaders, the architecture is going to be made up of a plurality of R300-style VSs. I mean, this is what I'm stating and have been saying. Your talk of the current praxis is quite irrelevant, Dave.

So, to answer the original question: multiple VSs are probably just as efficient today as an array of lower-level processing units. But that says nothing of the sophistication of an architecture and is quite irrelevant.

And I still contend, as I said a few posts above, that in the coming time a unified shader will be made of basic constructs with a finer granularity than a VS. What's the level of efficiency in an R300-style VS? How often are units idle? How often are the VSs idle when the FS/PS are the bottleneck? What happens to the individual vector units' efficiency when you have a plurality of these granular units in parallel?
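The idle-unit question above can be put into numbers with a deliberately crude model (my own sketch - the unit counts and per-frame workloads are invented): with fixed VS and PS pools, the slower stage gates the frame and the other pool idles, while a unified pool throws every unit at whatever work remains.

```python
def split_time(vertex_work, pixel_work, vs_units=4, ps_units=8):
    # Fixed pools: the frame takes as long as the slower stage,
    # and the faster stage's units sit idle for the remainder.
    return max(vertex_work / vs_units, pixel_work / ps_units)

def unified_time(vertex_work, pixel_work, units=12):
    # One shared pool: total work spread across all units
    # (assumes perfect load balancing, the best case).
    return (vertex_work + pixel_work) / units

# Pixel-heavy frame: the 4 VS units finish their 40 work items in
# 10 "cycles", but the frame still takes 40 - the VS pool idles 75%.
print(split_time(40, 320))     # 40.0
print(unified_time(40, 320))   # 30.0
```

When the workload happens to match the fixed split exactly, the two designs tie - the unified pool only wins on imbalanced frames, which is the crux of the argument.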
 
I was talking (as I've stated like 10 times now) about looking forward, in a philosophical light. You tell me why, looking forward to a time of Unified Shaders, you'd want your computational resources free of arbitrary bounds (like a VS construct).

What "bounds" are there on a VS contruct though Vince? At a basical level each processor is going to process an instruction, or a vector - how is this processed differently in a "VS unit" to a Vec4 processor in an array? At a technical leve in the arcitectures of R300 and NV30 the Vertex procesor system in R300 is probably wastes less computational resources than NV30 since its units are still Vec4 based and anything that requires 1-3 operations will be wasting resource.

Having your resources "free of bounds" isn't necessarily the best way to go either - what happens when there aren't enough instructions to apportion to your array efficiently? With the bounds of VS units, you have smaller operational "chunks", which makes it easier to efficiently apportion operations to those units independently.

And for that, it's not necessarily a given that the unified route will operate in an "array" system anyway. It's highly likely that "fragment pipelines" will continue to operate in one fashion or another, so having a vertex shader array may not fit within a unified solution.

It's not as black and white as you appear to think, Vince - "array" doesn't necessarily mean inherently better or more akin to a unified route; it just means different. It's too early to say that an "array" will work when we start asking the VS and PS to share resources.
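The apportioning worry above can be illustrated with another toy model (all numbers assumed, not taken from either architecture): a single wide array must find enough independent instructions in one stream each cycle, while partitioned VS units each work on a different vertex, so cross-vertex parallelism keeps the smaller chunks fed.

```python
def wide_array_utilization(ilp, lanes=16):
    # One 16-lane array fed from a single instruction stream: it can
    # only fill as many lanes as the stream has independent
    # instructions (ILP) available that cycle.
    return min(ilp, lanes) / lanes

def chunked_utilization(ilp, lanes_per_unit=4):
    # Several 4-lane VS units, each shading a *different* vertex:
    # every unit sees the same per-stream ILP but has fewer lanes
    # to fill, so each small chunk stays busy independently.
    return min(ilp, lanes_per_unit) / lanes_per_unit

print(wide_array_utilization(4))   # 0.25 - most lanes idle at ILP=4
print(chunked_utilization(4))      # 1.0  - every small unit full
```

The wide array only catches up when a single shader program exposes enough instruction-level parallelism on its own, which is exactly the "enough instructions to apportion" condition in the post.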
 
DaveBaumann said:
What "bounds" are there on a VS contruct though Vince? At a basical level each processor is going to process an instruction, or a vector - how is this processed differently in a "VS unit" to a Vec4 processor in an array? At a technical leve in the arcitectures of R300 and NV30 the Vertex procesor system in R300 is probably wastes less computational resources than NV30 since its units are still Vec4 based and anything that requires 1-3 operations will be wasting resource.

So, all things being equal with the exception of the constructs - which would be more efficient? I mean, you're entering unrelated variables into this debate and needlessly complicating it. Perhaps I see this wrong, but we went through a similar situation in the past when we were discussing how an architecture gains efficiency by detaching the TCUs from a pipeline. It's all about allocative efficiency, and while the smallest construct isn't the most efficient, neither is something the size of an R300 VS (which is larger than anticipated - I need to read your reviews... < runs away >)

Having your resources "free of bounds" isn't necessarily the best way to go either - what happens when there aren't enough instructions to apportion to your array efficiently? With the bounds of VS units, you have smaller operational "chunks", which makes it easier to efficiently apportion operations to those units independently.

Interesting. I'd have to think about this one more, but why would this be a problem? I'm guessing - hoping - you'd have some abstraction from the actual constructs, no?

It's not as black and white as you appear to think, Vince

OK, good sentence, I like it. Heh. I need sleep; it's 5:18 and I'm getting the pounding headache I didn't expect until after I got up.
 
Vince said:
Having your resources "free of bounds" isn't necessarily the best way to go either - what happens when there aren't enough instructions to apportion to your array efficiently? With the bounds of VS units, you have smaller operational "chunks", which makes it easier to efficiently apportion operations to those units independently.

Interesting. I'd have to think about this one more, but why would this be a problem? I'm guessing - hoping - you'd have some abstraction from the actual constructs, no?

Having smaller units (vec3+scalar) is more flexible than bigger ones (vec4), so you could get higher performance; on the other hand, you need to schedule instructions to all your units, so it could be a loss too.

Dave said:
It's not as black and white as you appear to think, Vince - "array" doesn't necessarily mean inherently better or more akin to a unified route; it just means different. It's too early to say that an "array" will work when we start asking the VS and PS to share resources.

Well, since vertices are 4D (homogeneous ones, anyway) and pixels are 4D (when you consider alpha), both should fit nicely in a vec4-unit arrangement (FLOAT32 everywhere).

Cheers
Gubbi
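The observation that both data types are four-wide can be sketched with a stand-in vec4 multiply-add (pure illustration in software - no real ISA or hardware datapath is implied): the same operation serves both a vertex transform and an alpha blend.

```python
def vec4_mad(a, b, c):
    # One vec4 multiply-add: a*b + c, componentwise
    # (FLOAT32 in the hardware being discussed).
    return tuple(x * y + z for x, y, z in zip(a, b, c))

# Vertex use: one row of a 4x4 transform is a dot product, built
# from a componentwise MAD followed by a horizontal sum.
position = (1.0, 2.0, 3.0, 1.0)          # homogeneous (x, y, z, w)
row      = (2.0, 0.0, 0.0, 5.0)          # first matrix row
x_out = sum(vec4_mad(position, row, (0.0, 0.0, 0.0, 0.0)))

# Pixel use: "src over dst" alpha blend with the very same operation,
# treating the pixel as (r, g, b, a).
src, dst, a = (1.0, 0.5, 0.0, 1.0), (0.0, 0.0, 1.0, 1.0), 0.5
blended = vec4_mad(src, (a, a, a, a), tuple(d * (1 - a) for d in dst))

print(x_out)    # 7.0
print(blended)  # (0.5, 0.25, 0.5, 1.0)
```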
 