Official: ATI in XBox Next

Joe DeFuria said:
Oh, so then Sony was just "dogging it then?" I mean, they were IDIOTS for not targeting 150nm or even 130 nm for console launch! Doomed for failure PS2 is...

This is a joke. It's so dumb and ridiculous... it has to be. What you're stating is just insane. What you're saying is basically like this:


  • Why didn't Australopithecus afarensis get their electricity from fusion and travel in a Learjet?

(I want to hear an answer to this, and then I'll write about how it relates to the PS2 lithography thing)

Yes, I see that despite all of this "miracle of lithography," Sony did not create the PS2 on 150nm, or even 180nm, for product launch. Sony should have had the chance to UTTERLY DISMISS any competition, for cryin' out loud! Why the hell did Sony take the obviously wrong road and go with something as pathetic as 250nm?

Backpedaling, backpedaling.... You're wasting all of our time, and it's useless. For starters, the 250nm process was bleeding-edge considering their use of eDRAM - in fact, SCE won several awards for the GS design and implementation. But to answer your ridiculous underlying question, I think these simple thought experiments will prove useful too:

Why wasn't the 486 built with 1 billion transistors? Why didn't ARPA use DNA-based computing in ENIAC? Why didn't the NSA use quantum computing to break Soviet ciphers in the 1950s and prove Church-Turing wrong?

(want answers to these too)
 
Vince said:
This argument has descended back into rhetoric and bullshit quickly, huh?

You call this quick? :oops:

Joe DeFuria said:
And if the 65nm "that you have in mind for production" is not READY when you're done? (See Dave's post on X-box.)

Then you wait it out or cut your losses (as XBox ultimately did). What did the NV2A lose by initially targeting 130nm and then being bumped back to 150nm, as opposed to a 150nm design? Precious little.

So why didn't SONY "wait it out" and go for a more aggressive process for PS2 launch?

This is just a common-sense question based on that situation. Why are we arguing about something so simple?

Because, apparently, even the simplest alternative common sense points are lost on you.

If it makes you feel like a better man - then sure, Joe. What's that - everyone must toe the same line as you? What was that again? Oh yeah... "I can't understand why anyone wouldn't like ATI"

Vince, I suggest you take this inherent feeling of being "threatened" by ATI and see a shrink. If you can't see the blind facts that ATI shipped a 100-million-plus transistor processor at 0.15, at the same time the "market leader and your personal saviour" nVidia couldn't get one out on 0.13....what do you think happened?

Causality. It was 1998. They were saying "Get it done by embedding 4MB of eDRAM on a 250nm die." Which was an achievement for that time.

Oh, so what you're saying is lithography is NOT everything. That's what we've been telling you all along. Why put up such pointless and transparent resistance?

JoeDeFuria said:
Or are you implying that there's no time to market risks for consoles? Tell that to me again as we near 2005.

Joe, you're wasting my Goddamn time. You stated this...

Vince, learn the difference between *zero* time to market pressures, and *reduced* time to market pressures.

I ask you again, and use a number here since this is a "technical forum" after all....Are you implying that there is no (ZERO) time to market risks for consoles?

This isn't an investment forum. We're here for the performance - if you want to talk about risks and assessments then go elsewhere. This is Beyond3D, after all.

:rolleyes: Yes....delivered technology exists in a vacuum....

You've got to be fucking me. You can't be serious.

Awww....the profanity surfaces yet again? I'd check that foam coming out of your mouth...bitten by any "strange acting" squirrels or raccoons lately?

Yes, I am serious. You asked if I "plan" for random events. And I gave you my answer. Of course, we suspended disbelief for a moment by playing along with your implication that estimating fab process utility is like some random activity....

You took a comparison that was pretty simple:

You don't plan your vacation for next summer around if your otherwise healthy Aunt may or may not decide to drop dead during that period. You put your ass on the line and go for it.

Yep, and I answered you. I said I don't plan my vacation around relatives' suspected health...but that doesn't mean I'm not screwed if my aunt dies.

But to point out the obvious flaw in your analogy....no one KNOWS the "health" of the advanced process even 2 years ahead of time. You have a best guess, but certainly nothing definitive either way. CERTAINLY not assumed to be "otherwise healthy."

If I got a call from my Aunt saying that she's got a 50/50 chance she'll die during the week I'm thinking of planning my vacation...you better believe that could very well impact my plans.

What the hell does insurance have to do with anything in this debate?

Lol...you bring up some example about my Aunt dying....and you get your panties in a bunch over an insurance analogy, which is exactly an illustration of planning for these "random events" that you DEMANDED I give you an example of?

Hahahahahahaha! :D

When are you going to just STFU with these useless comments?

And when are you just going to STFU?

You are wasting my time.

Heard that before. Somehow, I'd wager that you'll put yourself through even more time-wasting...rabies drives you mad, you know.
 
Vince said:
  • If you're designing a closed box console (that has cutthroat competitors) for release in 3 years, do you target the most advanced lithography process your R&D tells you will be ready, or the process that came before?

That's it. No conditions, no ifs, ands, or buts. The R&D projected this; that's their job and their ass if they F*up. What do you choose?

Problem is, and this keeps getting said, that you simply cannot condense it like that. Unless we are looking at it from a purely technical standpoint, it's just not that simple. You can't toss out all the other factors.

Backpedaling, backpedaling.... You're wasting all of our time, and it's useless. For starters, the 250nm process was bleeding-edge considering their use of eDRAM - in fact, SCE won several awards for the GS design and implementation. But to answer your ridiculous underlying question, I think these simple thought experiments will prove useful too:

Why wasn't the 486 built with 1 billion transistors? Why didn't ARPA use DNA-based computing in ENIAC? Why didn't the NSA use quantum computing to break Soviet ciphers in the 1950s and prove Church-Turing wrong?
He isn't backpedaling, he is trying to make a point by taking your argument and stretching it to absurdity. Fact is, Sony didn't use 150nm (which they very well could have) because it would have meant too much risk (late to market, investing in proper facilities).
 
Vince said:
Joe DeFuria said:
Oh, so then Sony was just "dogging it then?" I mean, they were IDIOTS for not targeting 150nm or even 130 nm for console launch! Doomed for failure PS2 is...

This is a joke. It's so dumb and ridiculous... it has to be. What you're stating is just insane. What you're saying is basically like this:


  • Why didn't Australopithecus afarensis get their electricity from fusion and travel in a Learjet?

SAY IT WITH ME, VINCE: BECAUSE SONY WANTED A CONSOLE AT A PARTICULAR TIME, AND THUS DECIDED THAT TIME TO MARKET WAS MORE IMPORTANT THAN THE DIVINE TECHNOLOGICAL ADVANTAGES THAT WAITING A FEW MONTHS WOULD BESTOW UPON THEM

(I want to hear an answer to this, and then I'll write about how it relates to the PS2 lithography thing)

Backpedaling, backpedaling.... You're wasting all of our time, and it's useless.

BTW...You spend more time talking about your wasted time than anyone I know.

For starters, the 250nm process was bleeding-edge considering their use of eDRAM -

Oh, Vince, remember this:

http://pc.watch.impress.co.jp/docs/2003/0421/sony1_03.jpg

I see 0.18u within about a year of 0.25. Talk about backpedaling. According to you, Sony should've just "pushed back" the PS2 for a year and targeted 0.18 from the onset! IMAGINE THE POWER!

Why wasn't the 486 built with 1 billion transistors? Why didn't ARPA use DNA-based computing in ENIAC? Why didn't the NSA use quantum computing to break Soviet ciphers in the 1950s and prove Church-Turing wrong?

Simple answer to all of them: TIME TO MARKET. You make an evaluation of ALL the tools available to you, weigh their pros and cons and you make your move.

And "ALL" of the tools available doesn't just mean "what might possibly be the most advanced process available at our estimated launch time." It's also "what is DEFINITELY available."

(want answers to these too)

Yeah, I can hardly wait to see the spectacle of another rabid Vince explosion upon reading the response! Just wait for me to pop some corn, would you? Who needs consoles when we have Vince!
 
Nexiss said:
He isn't backpedaling, he is trying to make a point by taking your argument and stretching it to absurdity. Fact is, Sony didn't use 150nm (which they very well could have) because it would have meant too much risk (late to market, investing in proper facilities).

Why are these things patently obvious to, I believe, EVERYONE in this thread except Vince :?:
 
Joe DeFuria said:
So why didn't SONY "wait it out" and go for a more aggressive process for PS2 launch?

Because they had a launch window (2000) and they looked to what technology would just be coming online (e.g. 250nm and eDRAM) and went for it. As you could see from the initial GS supplies and yields, it was very advanced for that process, and until they went to 180nm it was costly. But it worked well overall and they got a better part out of it.

Vince, I suggest you take this inherent feeling of being "threatened" by ATI and see a shrink.

HA! :LOL:

If you can't see the blind facts that ATI shipped a 100-million-plus transistor processor at 0.15, at the same time the "market leader and your personal saviour" nVidia couldn't get one out on 0.13....what do you think happened?

  • I don't give a shit about nVidia or what they said, get off it. You're becoming obsessive-compulsive.
  • And IBM shipped a 180nm SOI processor with 170M transistors that clocked at over 1.4GHz internally at T.J. Watson. Pixel Fusion was 75M transistors on a 220nm process!

Give me some more time and I'll find more examples. This is no achievement - only to you fanb0ys who want arguing points in your little flamewars that make you feel better about *your* IHV. Get a new hobby.

Causality. It was 1998. They were saying "Get it done by embedding 4MB of eDRAM on a 250nm die." Which was an achievement for that time.

Oh, so what you're saying is lithography is NOT everything. That's what we've been telling you all along. Why put up such pointless and transparent resistance?

We must not be reading the same response. The GraphicSynthesizer wouldn't be the same caliber chip without its eDRAM and the lithography that made that possible. End of Story.

Vince, learn the difference between *zero* time to market pressures, and *reduced* time to market pressures.

:rolleyes: You, yourself, admitted that in a 5-year dev cycle things are NOTHING like a PC cycle with its TTM worries. You plan things out and work on it.

In fact, you were so sure of the slower pace that you said that, unlike in the PC arena, you CAN utilize licensed IP successfully and negate the problems of lithography by a separate party due to the time involved.


To refresh your memory:

What Joe said then (http://www.beyond3d.com/forum/viewtopic.php?t=7406&postdays=0&postorder=asc&start=240):
Con: whenever you have multiple parties involved which assume some sort of responsibility, with Information flowing between them, you tend to have some more issues with managing the project and getting it done. There is a high degree of cooperation involved, and it doesn't always run as smoothly as expected.

This is exactly why I've argued in the past, (usually related to IMGTEC) that I feel the IP licensing model is FINE for the console market, but I do not like it for the PC market. Console designs flip over every 5 years or so, meaning you have more time to plan and sort things out. The PC race is break-neck, and having a more true fabless semi-con model is more beneficial.

Stop backpedaling already! Unless you're going to attempt to differentiate between the added time that allows a 3rd party (with nowhere near the experience of an IBM, Intel, nVidia, ATI, etc.) to implement IMGTEC's IP in lithography, and the time necessary to design a cutting-edge chip in a single dev pipeline targeting a single process?

This is futile.

I ask you again, and use a number here since this is a "technical forum" after all....Are you implying that there is no (ZERO) time to market risks for consoles?

Of course, but we're not debating this. You've basically lost this argument and you're spinning it in this direction.

Very simple, you already proved me right:

JoeDeFuria said:
Yep, and I answered you. I said I don't plan my vacation around relatives' suspected health...but that doesn't mean I'm not screwed if my aunt dies.


Of course you don't know the future, but you don't design a bleeding-edge performance IC based around what *might* happen or what *could* happen. Just as you don't live your life afraid to go outside for fear of a flaming toilet seat falling from the heavens and killing you.
 
Wow, look at this debate go. :oops:

Anyhow, I feel like I need to make a few points:

All comparisons of the subject at hand to the R300/NV30 situation are mostly moot. .15u --> .13u was a pretty small upgrade; it was only a half-generation jump. If the gap had been bigger, like, say, .15u --> .11u, we might not be talking about the R300/NV30 situation at all. I mention this because we are talking about a full generational jump (.09u --> .065u).
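To put rough numbers on that (an idealized, back-of-the-envelope sketch of my own; real processes deviate from pure geometric scaling due to design rules, yields, and so on), ideal transistor density scales with the square of the linear feature-size ratio:

def density_gain(old_nm, new_nm):
    # Ideal transistor-density ratio: the square of the linear
    # feature-size ratio between the two process nodes.
    return (old_nm / new_nm) ** 2

for old, new in [(150, 130), (150, 110), (90, 65)]:
    print(f"{old}nm -> {new}nm: ~{density_gain(old, new):.2f}x density")

# 150nm -> 130nm: ~1.33x (the half-generation jump)
# 150nm -> 110nm: ~1.86x (a near-full node, close to 2x)
# 90nm  -> 65nm:  ~1.92x (the full generational jump in question)

That is why a .15u --> .13u move buys far less headroom than a .09u --> .065u move would.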

One issue constantly brought up is that lithography isn't everything; we have to look at the market, take timing into account, etc. OK, let's do that. 65nm will be ready by 2H2005 from all the major fabs, including IBM, TSMC, Intel, etc. (sources: 1 2 3), right at the same time as the next generation of consoles. 65nm is indeed possible and it should be ready at the right time from multiple places. 65nm should not affect timing unless something goes wrong at every fab, but even a six-month delay to 1H2006 should not be too much trouble, since it is not an 18-month delay like the Xbox 1 was relative to the PS2.
 
It's all pretty irrelevant though. Everything with PS3 and Xbox 2 is going to boil down to the quality of the games and the brand awareness that Sony and MS are going to be able to generate. It won't matter at all if the PS3 is 2x as powerful as Xbox 2 in the overall scheme of things and I doubt they will be more than 25% apart in real-world performance anyway.
 
cthellis42 said:
I'm rather expecting that to be kept in mind by them next gen--at least by Sony and MS. Sony's I can rather see coming from the inherent nature of Cell itself, as "scalability" reads "easy upgradability" to me, especially as they refine their process. Why simply reduce costs when they can offer faster speeds and market on that as well? (Especially if they can play up the ability to link with one's OTHER PS3 to make things even faster! :oops: ) Through programming depth and scalability, they'd be able to kick ahead of the competition even if they don't start ahead of them by one means or another.

MS, I think, is likely to embrace a PC-ish stance, as they ALSO like being "on top no matter what," so they may be more apt to want a GPU and CPU that will scale well, so they have an upgrade path to pursue if it's called for. (The CPU they kind of get by default, all things considered. Hehe...) But of course this may well keep the Xbox just as lossy or lossier...

Nintendo? <shrugs> A bit of an enigma right now. I can certainly see them getting more aggressive, since they've basically proclaimed a willingness to throw ALL their cash reserves into the ring (which is not insubstantial), but if they have any big plans for this upcoming generation they're hiding them well. From Sony we've seen big movements and MASSIVE cash movements since 2001; from MS and Nintendo we've seen...? <shrugs again> We've seen kind of the same approach as with the current gen, so I'm not very excited overall. MS's desires are usually easier to read, though. Nintendo's are unknown--and usually defy expectations anyway, so which WAY will they choose to defy expectations this time? Hehe...

As far as the Xbox 2 goes, Microsoft will harvest technology from the PC ecosystem, just as you suggested. This time around Microsoft has had more time to formulate a plan and comb through a lot of technology from Intel, ATI, and others. If Intel is working closely with Microsoft, I think there will be a few very interesting twists and turns during this Shakespearean play involving Microsoft and Sony.
 
nonamer said:
One issue constantly brought up is that lithography isn't everything; we have to look at the market, take timing into account, etc. OK, let's do that. 65nm will be ready by 2H2005 from all the major fabs, including IBM, TSMC, Intel, etc. (sources: 1 2 3), right at the same time as the next generation of consoles. 65nm is indeed possible and it should be ready at the right time from multiple places. 65nm should not affect timing unless something goes wrong at every fab, but even a six-month delay to 1H2006 should not be too much trouble, since it is not an 18-month delay like the Xbox 1 was relative to the PS2.

Amen! Joe must think that there is no roadmap for lithography; he keeps talking about TTM as if the developers have NO idea what's going on when they lay down their parts.

It's not like they just throw darts at a dartboard with lithography processes on it. They know what's coming online when, and the chances for success, etc. Companies like IBM or Intel are profitable because they can judge and time it right.

But, yet, Joe seems to think that you can't coordinate a specific IC to a specific launch window with a specific design. Perhaps it's because he's not doing the work himself (you can tell the quality of thought that he has), but I think it's because he just has no clue.

Vince said (http://www.beyond3d.com/forum/viewtopic.php?t=7406&postdays=0&postorder=asc&start=80):
To cause a 'borderline performance revolution' with this type of architecture relies on bleeding-edge lithography, and it requires the design team to push the process to the edge and beyond into the realm of poor yields with the understanding that future lithography will bring the yields and costs under control. It requires massive investment like STI is doing ($8 billion in total) and it requires technologies like SOI/SS, Low-K, 65/45nm and lower lithography, and other such advancements that are pushed hard.

When I saw IBM and nVidia team up and basically gain access to STI's advancements, combined with some comments I heard a while back from a little bird - I thought it was over. nVidia has the balls to push and stick with it. When 3dfx was in the corner touching itself with .25um, nVidia was on Cu utilizing 180nm and utterly destroying 3dfx in every way: performance, features, per-IC cost, yields.. it goes on and on.

And I see the same now. While nVidia is testing with 130nm Low-K dielectrics, ATI is off pissing in the wind on a 150nm process. Sure, nVidia had problems this time, but it's the exception. What's ATI going to do when nVidia is utilizing a derivative of STI/AMD's 10S or 11S (11S is Cell's 65nm process slated for 2H 2004/1H 2005 production) process at IBM? Have you fanpeople tell us that SOI isn't necessary? That the thermal or 30% performance increase seen on Power4 isn't that big of a deal? That TSMC's sub-par roadmap and execution at <100nm is adequate? Don't even get me started on UMC; are they serious about going it alone for 90nm and below when everyone else is concentrating their R&D? HA! Give me a break.

Today is the first day I can say that Sony will be alright, that if I was Okamoto or Ken, I'd be happy as a pig in shit.

PS. Check out my post here: http://www.beyond3d.com/forum/viewtopic.php?t=7188
Please note the timeline, and make note of IBM's involvement and that SCE/Toshiba are currently in production of the EE+GS@nm and have been producing the GS (eDRAM+logic) at 130nm since 2001.

I stand by this.
 
Nexiss said:
Problem is, and this keeps getting said, that you simply cannot condense it like that. Unless we are looking at it from a purely technical standpoint, it's just not that simple. You can't toss out all the other factors.

But we ARE looking at it from a technical PoV. This is Beyond3D, after all. There are other boards on the internet that concern themselves with the more financial aspects - this isn't one.

He isn't backpedaling, he is trying to make a point by taking your argument and stretching it to absurdity. Fact is, Sony didn't use 150nm (which they very well could have) because it would have meant too much risk (late to market, investing in proper facilities).

He is backpedaling, and for this reason. My initial comment indicated that you need to "push the process to the edge and beyond into the realm of poor-yields with the understanding that future lithography will bring the yields and costs under control."

The comment is time-indifferent. You can push process N in year Y, looking to stabilize yields on process N+1 in year Y+1.

Or you can push process N+13 in year Y+16, looking forward. It's insignificant, as the design team has a set launch window and can target process N for that period T - it's open-ended, and just a fundamental rule that I'm speaking of.
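To pin the rule down, here is a toy model (with entirely invented roadmap dates, purely for illustration): target the most advanced node your R&D projects to be in production by the launch window, and count on the follow-on shrink to bring yields and costs under control.

def target_process(roadmap, launch_year):
    # roadmap maps year-of-volume-production -> process node in nm.
    ready = {yr: nm for yr, nm in roadmap.items() if yr <= launch_year}
    return min(ready.values())  # smallest nm = most advanced node

roadmap = {1999: 250, 2001: 180, 2003: 130, 2005: 90}  # invented numbers
print(target_process(roadmap, 2000))  # -> 250: push it now, shrink later

The function is time-indifferent in exactly the sense described above: it behaves the same whether the launch window is next year or sixteen years out.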

Joe, on the other hand, can't comprehend this, nor is he willing to, because of some of the comments he's been caught up on concerning the question posed: What process is better, all things equal?

Thus, this is just a charade in which he's adding context to support his case that's not a general basis for study or understanding - which is what I've stated on multiple occasions I'm after.
 
Vince said:
Because they had a launch window (2000) and they looked to what technology would just be coming online (e.g. 250nm and eDRAM) and went for it.

But 180nm is BETTER, damnit! What's a few more months? Just "push it back a little bit," as I believe you said several times. They're just selling themselves short. :(

If you can't see the blind facts that ATI shipped a 100-million-plus transistor processor at 0.15, at the same time the "market leader and your personal saviour" nVidia couldn't get one out on 0.13....what do you think happened?

  • I don't give a shit about nVidia or what they said, get off it. You're becoming obsessive-compulsive.
  • And IBM shipped a 180nm SOI processor with 170M transistors that clocked at over 1.4GHz internally at T.J. Watson. Pixel Fusion was 75M transistors on a 220nm process!

And how is anything you just said relevant to the fact that someone turned out a better product, in an earlier timeframe, on 0.15 than someone else in direct competition did on 0.13?

Give me some more time and I'll find more examples. This is no achievement -

Apparently it is quite an achievement to find relevant examples...

only to you fanb0ys who want arguing points in your little flamewars that make you feel better about *your* IHV. Get a new hobby.

Honestly, do you say these things for my utter amusement? :) Does anyone else get a guffaw out of these repeated pot-kettle-black comments that are littered in almost every post you make? :D

We must not be reading the same response. The GraphicSynthesizer wouldn't be the same caliber chip without its eDRAM and the lithography that made that possible. End of Story.

The GS would be a much better caliber chip if it were designed for 0.18. Lithography is everything, remember!

You, yourself, admitted that in a 5-year dev cycle things are NOTHING like a PC cycle with its TTM worries.

I, myself, repeatedly said there ARE time-to-market worries for consoles. A longer development cycle does not NEGATE time-to-market worries. You, yourself, continue to just be hypocritical.

In fact, you were so sure of the slower pace that you said that, unlike in the PC arena, you CAN utilize licensed IP successfully and negate the problems of lithography by a separate party due to the time involved.

Vince:

Repeat after me: A longer development time does not NEGATE time-to-market pressures.

To refresh your memory....

Lol...

Yes, thanks for reaffirming that NOWHERE did I say that time to market is no longer an issue, or even a low priority.

Does this simple concept need to be spelled out to you even more?

The longer dev time with consoles lowers the risk for an IP-licensing situation. It does not mean that time to market is not highly important or even critical; it means that you have more time to work the issues that IP licensing can bring...such that the RISK OF MISSING THE CRITICAL TIME TO MARKET WINDOW is not as severe with consoles.

Stop backpedaling already!

Start reading what I wrote already!

Of course you don't know the future, but you don't design a bleeding-edge performance IC based around what *might* happen or what *could* happen.

Um, Vince....when you decide to produce a chip on a process that won't be up and running for several years, you are EXACTLY designing an IC based around what *might* happen. You'll do your damnedest to get it up and running, of course, but that is the real risk.

Just as you don't live your life afraid to go outside for fear of a flaming toilet seat falling from the heavens and killing you.

I guarantee you that the chance of the 65nm Cell fab not being ready to produce PS3 Cell chips for a 2005 launch is, um, slightly higher than being hit with a flaming toilet seat.

Though decidedly not as high a chance as this post being answered with more one-sided, hypocritical hysterics.
 
Who said it was or needed to be an order of magnitude difference? R300/NV30 is not "5 years after" the NV2x.
No, consoles that come 5 years later... are at least two orders of magnitude above their predecessors... they'll be here... likely in about 18-20 months approx...

But, VERY generally speaking, each new core generation is about "twice the difference" from the last one.

That would make the R500 generation roughly 8x that of NV2x/R200.

Peak xgpu verts is about 125-150M (IIRC) approx, and it has DirectX 8 functionality... is the R500 peak near a billion verts? Its bandwidth, how is it? Lower than... Flipper? GS? (Yes, I know, I know, those are embedded solutions.)

As is every architecture before it, and as is every architecture afterward.

As happens every generation...

100s of GBs of bandwidth... the fastest transistors and the smallest embedded memory in the whole wide world... I think the Taiwan Super Cheese Manufacturer is far from being the best... the architectures done on their fabs certainly don't carry the creme de la creme... now if you mean as happens to Intel, and the like...
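(As an aside, compounding the "about twice the difference per generation" rule of thumb quoted above is just a toy calculation, not a measured result:

factor = 1.0  # NV2x/R200 generation as the baseline
for gen in ["gen+1", "gen+2", "gen+3 (R500 era)"]:
    factor *= 2  # "about twice the difference" each core generation
    print(f"{gen}: ~{factor:.0f}x the NV2x/R200 baseline")
# Three doublings give 2**3 = 8 -- hence the "roughly 8x" figure.

so the "roughly 8x" follows from three compounded ~2x generational steps.)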
 
nonamer said:
One issue constantly brought up is that lithography isn't everything; we have to look at the market, take timing into account, etc. OK, let's do that. 65nm will be ready by 2H2005 from all the major fabs, including IBM, TSMC, Intel, etc. (sources: 1 2 3), right at the same time as the next generation of consoles. 65nm is indeed possible and it should be ready at the right time from multiple places. 65nm should not affect timing unless something goes wrong at every fab, but even a six-month delay to 1H2006 should not be too much trouble, since it is not an 18-month delay like the Xbox 1 was relative to the PS2.

As stated earlier, ATI prefer not to make major architectural changes at the same time as making a massive process change - they like to trial that process first, or trial the architecture. If (and that's a big if, since I still think there's a hell of a lot more to come on who does the back end) the part is spun off the R500 architecture then they will have already made that architectural change, hence they may then choose to target another process.
 
Vince said:
Amen! Joe must think that there is no roadmap for lithography; he keeps talking about TTM as if the developers have NO idea what's going on when they lay down their parts.

Wrong. Developers have very good ideas about what SHOULD be coming online. They just don't know exactly when, and in what capacity, it will actually happen.

It's not like they just throw darts at a dartboard with lithography processes on it.

Damn...you got me. I thought they rolled the dice though...

But, yet, Joe seems to think that you can't coordinate a specific IC to a specific launch window with a specific design.

Wrong. Joe seems to think that there are different risks associated with targeting different fab processes.

Get it?

And I see the same now. While nVidia is testing with 130nm Low-K dielectrics, ATI is off pissing in the wind on a 150nm process.

What the hell are you doing bringing the PC IHV wars into this? Oh, wait, I forgot...you're VINCE, for cryin' out loud! Set self-imposed and thoroughly asinine "standards" for others, which, of course, you yourself show the foolishness of by virtue of breaking them on a continual basis.

Oh...Come back and talk to me in a little bit about who's "experimenting" with what process.

The fact that nVidia barfed all over themselves trying to push an immature process they were informed was "immature" has no bearing on what ATI is, or is not, going to successfully get out the door in the future.

What's ATI going to do when nVidia is utilizing a derivative of STI/AMD's 10S or 11S (11S is Cell's 65nm process slated for 2H 2004/1H 2005 production) process at IBM?

Whatever it is, it will be in x-box2. :)

Have you fanpeople tell us that SOI isn't necessary?

This is where you completely lose it.

Who ever, ever, ever said that SOI, Low-K, or advanced processes aren't significant? Will it be "necessary?" That all depends on the competitive landscape, of course. It actually won't be necessary if nVidia turns out another NV30...how did you put it..."exception to the rule..."
 
Vince said:
This argument condenses very easily for someone not keeping up:

  • If you're designing a closed box console (that has cutthroat competitors) for release in 3 years, do you target the most advanced lithography process your R&D tells you will be ready, or the process that came before?

That's it. No conditions, no ifs, ands, or buts. The R&D projected this; that's their job and their ass if they F*up. What do you choose?

Quite honestly, Vince, you just crimped your central argument with this: "...do you target the most advanced lithography process your R&D tells you will be ready..."

Will be ready! You are bringing timing into what you JUST condensed the argument into. 0.13 WASN'T READY in time for NV30 and the demands nVidia wanted to put on it. 0.13 WASN'T READY in time for NV2A, and they had to backslide and change designs. Could nVidia have infused TSMC with more cash to ensure 0.13 was ready for their requirements? Perhaps, but personally I don't think they could have mustered enough in time to do so. Could Microsoft have infused nVidia and the proper partners with enough cash to make 0.13 ready for NV2A? Perhaps, but I don't think they had nearly the time allotted, and the outlay would have been way more than Microsoft wanted. The results of both were easily seen in A) chip failure or B) making concessions. Microsoft COULD INDEED have had 0.13 on NV2A and had an even more ball-busting chip, but it certainly wouldn't have been out in 2001.

Phil said:
better lithography could be the advantage, especially when your biggest competitor is going to push the process with CELL. Of course the risks are bigger, but then again, isn't that business? Especially when you want to outbid your biggest competitor; you can't hope to stand a chance if you're not willing to take risks.

I don't think this is the case even strictly from a hardware side, and I don't think you do either. Is litho the ONLY advantage CELL will have? Does litho exist within its own box, only having to do with litho? Of course not. I'm much more willing to say that OVERALL hardware design carries the weight, in which litho is of course a big part. Is CELL only a litho change, or is it pushing an entirely new chip scheme and pushing boundaries and changing expectations all over the place in hardware and software alike?

The issue most of us are taking with Vince is that he is pushing "litho uber alles," and even by some of his earlier comments he does not believe it himself. Throughout history, and certainly today, we see different hardware designs do different things with different lithos, both in plain nm and other etching technologies. Undeniably it is heavily weighted (I would not care to guess HOW much, and it would always be speculation anyway), but undeniably we also see other hardware decisions able to close the gap and even surpass the litho alone. We see that it can cause concessions or have effects that carry over and punish other areas of the hardware, or the broader marketplace... Even simply from a hardware stance, litho does NOT exist in a box, and it is NEVER the only factor contributing to the overall success of a product. Proper decisions must be made on all levels, and sometimes... Sometimes you get lucky. ;)

Deepak said:
Vince said:
Deepak said:
Thank GOD it is only a web forum otherwise.... :D

...otherwise this discussion would have ended after a page? ;)

no....I was thinking something else.... ;)

<wraps a leather strip around Joe's and Vince's left hands and gives each of them a switchblade>

Is that closer to the imagery, Deepak? ;)

And Vince, I berated Joe in the other thread, and I think most will agree with my comment here: "Cut it OUT!" You're coming on much more heavy-handed, and you two are sniping at irrelevancies, and you of late are consistently going after Joe's comments and disregarding everything else mentioned. (And heck, you seem to be making replies that I'm sure mystify more than just me. Australopithecus with Learjets? This is from OUR side of the argument. You can wish all you want, but you can't make the impossible happen. Push boundaries, certainly, but you always run the risk of it backfiring--and to minimize your chances you now have to spend BILLIONS, which neither nVidia nor ATI can do.)

Joe can come off heavy-handed as well, and you guys are in many ways chewing the same cud and spitting it in each other's mouths (suck on THAT imagery, y'all! :LOL: ), and it's getting rather annoying. I don't care WHAT your history is with him, nor do I know his history in other threads or can gauge any levels of "fanboyism," but right here--ONLY HERE--there is only one person "debating" with uncomfortable force and language.

Vince said:
I don't give a shit about nVidia or what they said, get off it. You're becoming obsessive-compulsive.

Yet the reason you hold your opinions of ATI's methods and capabilities, right now and in the future, is because you gauge their previous history and compare them with nVidia in PC chip architecture? Even just using the Xbox and the Gamecube as examples, there are many similar parallels to be drawn...

Vince said:
We must not be reading the same response. The GraphicSynthesizer wouldn't be the same caliber chip without its eDRAM and the lithography that made that possible. End of Story.

The GraphicSynthesizer wouldn't be the same caliber chip without a veritable shitload of decisions that went into its entire design and construction.

What most of us seem to be balking at, and what your most virulent remarks before seem to be saying, is that because they COULD have designed for an even smaller process, the PS2 wasn't pushing boundaries, was making foolish decisions, and should be considered "weak," like ATI's decision to keep 0.15 for R300. And you certainly don't believe that...

Litho doesn't exist simply as a process on a plate, for PC, consoles, or whatever. From other comments you've made, you don't seem to believe this EITHER, yet you have repeatedly stressed a stance with almost religious fervor that many of us balk at, because we read "Nanometers over everything!"

What might be most useful is to stop--take a deep breath and count to 10--and state just what YOU mean by "lithography," as this all just might be a matter of misunderstanding. Most of us by far seem to be reading some kind of doomsday-speak in which anyone who doesn't push things to the absolute breaking point--sometimes seriously breaking--is a fool destined to die. If what you are looking at is larger in scope than nanometer scratchings, then that is pretty much what the rest of us--Joe included--have been saying the entire time.

Lithography is the template upon which a chip designer can bring the BEST CHIP THEY CAN to light. Lithography itself is a process that is ultimately bound by enormous fabs and BILLIONS in R&D. It seems to me neither nVidia nor ATI has the capacity to push this enough, so until situations change they are both ultimately going to exist within bounds that OTHERS determine and define--and within those bounds they do the best possible job they feel they can. For that situation to change, it will take either of them teaming up with enormous players and investing BILLIONS into stretching the bounds the way THEY desire, and neither does so. Both are pikers compared to the size of IBM, Sony, and Toshiba, who ALL are working together on this ONE project! Microsoft and Intel may have the resources to compete (yet I highly doubt they have the inclination to compete on those levels), but neither nVidia nor ATI can command them.

Lithography may make the best designs even better, but it exists within bounds of TIME and RESOURCES, and it takes monstrous entities, and immense amounts of both, to sufficiently push it. And in the end, no matter how MUCH time or resources one chooses to spend, there are ultimately bounds to define for any project and concessions to be made no matter what. Within those, companies still have to show they can make the BEST POSSIBLE USE of it.
 