Competition over?

incurable said:
nonamer said:
Joe DeFuria said:
They still have just as much chance as ATi to make a decent DX10 GPU, if they're willing to put the work and effort into it.

Disagree completely.

ATI has a much bigger chance at having the "best" DX10 GPU. (Best defined as combination of time-to-market, most DX10 feature support, and LEAST amount of wasted transistors for "non DX10 supported" features.)

I'm in no way claiming that ATI can't really mess up and drop the ball. Or can't fail out of pure bad luck. Anything can happen in this industry. But to say that nVidia (or anyone but ATI) has just as much a chance at the leading DX10 GPU is pretty silly, IMO. ;)

In short: ATI clearly has the best chance...though that doesn't guarantee success.

You're essentially wrong. Intel supposedly has x86 under its control, yet AMD can pull out a better CPU with far fewer resources. In the end, you see it's all about how well they manage to do it; it has nothing to do with who controls what.
What a useless analogy :rolleyes:; x86 isn't a moving/evolving target like DirectX, which gets major features added with every new version.

cu

incurable

:LOL: It's not like they change it every 5 minutes, or even change it at the whim of a single manufacturer. You can be assured that it sticks to whatever design is best at the time.
 
DaveBaumann said:
Well, I've heard talk that NVIDIA will move to that as well (in fact, Alain Tiquet said as much when I asked him at the NV30 launch last year), however it never really seems to be featured much on customer fabs' roadmaps (not that I've studied them intently). There's a lot of fuss over 90nm next, partly because that's what Intel is shifting to - however the graphics manufacturers often seem to use intermediate steps that Intel doesn't.

NV41/NV43/NV45 are 110nm (or was it NV42 and not NV43? One of the two is 130nm), last I heard.


Uttar
 
incurable said:
Tim said:
incurable said:
What a useless analogy :rolleyes:; x86 isn't a moving/evolving target like DirectX, which gets major features added with every new version.
MMX+SSE+SSE2+SSE3(Q403)+SSE4(Q404), the instruction set does not seem that static to me.
Those extensions are not part of the x86 ISA and, btw, are usually not supported by AMD until years after their original introduction.
Whether or not SSE and SSE2 are part of the x86 ISA is debatable. I would argue that after their incorporation into non-Intel processors, they are at least implicitly part of the x86 ISA. Before then, they are only (certainly) part of the ISA of 'IA-32' processors. http://developer.intel.com/design/pentium4/manuals/

It is arguable that IA-32 processors define at least part of the x86 architecture (that part not defined by AMD in e.g. 3DNOW!).

AMD seems to get support in 'the next chip after the Intel one comes out', which is probably about as fast as they can do it.

I would say x86 is a moving target - a slow-moving target, but a moving target nonetheless.
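
For what it's worth, the practical consequence of that slow drift is that software can't assume any of those extensions exist; it has to probe for them at run time and fall back otherwise. A minimal sketch of what that probing looks like (GCC-style C using the <cpuid.h> helper; purely illustrative, the bit positions are the standard CPUID leaf-1 feature flags):

```c
#include <stdio.h>
#include <cpuid.h>   /* GCC/Clang wrapper around the CPUID instruction */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Leaf 1 returns the feature flags in EDX and ECX. */
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 1;   /* CPUID leaf 1 not available */

    printf("MMX:  %s\n", (edx & (1u << 23)) ? "yes" : "no");
    printf("SSE:  %s\n", (edx & (1u << 25)) ? "yes" : "no");
    printf("SSE2: %s\n", (edx & (1u << 26)) ? "yes" : "no");
    printf("SSE3: %s\n", (ecx & (1u << 0))  ? "yes" : "no");
    return 0;
}
```

Which bits a given chip actually reports, and when AMD's parts start reporting them, is exactly the slow-moving-target part.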
 
Joe DeFuria said:
And IBM is not accustomed to the business model that TSMC has, and what TSMC's customers typically need. You're right, it's not fair.

Joe, last time I checked, IBM's foundries were bending over backwards to entice customers over. Not to mention your obscure reference to "business model(s)" that allows you to just comment and not provide any semblance of empirical fact. Care to explain the differences to me? I genuinely have yet to hear about them (besides basic library and design differences that exist between all foundries, hell, even individual lines) and am interested.

:rolleyes: Who said I turned more options into a negative? There must be a reason for you making things up? At worst, I'm saying IBM (in the short term) doesn't appear to me to be able to offset nVidia's disadvantage of taking a back seat to DX10 development.

And why is this option exclusive to nVidia, by the way? Is ATI not permitted to go to an IBM (or Micron, or anyone else) that they feel offers the best tech?

Wow, you're really into this leap-of-faith thing, huh? Let's tackle these one at a time:

(a) Your entire post projected your negative view of nVidia's future. I count three negative comments concerning nVidia's future:

  • "Things don't look too good IMO, for nVidia..."
  • "IBM is having issues with low-k...(isn't better low-k at IBM the suppossed reason for nVidia switching the high end parts to IBM?)"
  • "I don't see nVidia having a good shot at redeeming itself"

(b) Your comments about "low-k" dielectrics were also wrong and ignorant. Yes, IBM, like much of the industry, had problems implementing SiLK. Yet there are other dielectric compounds in existence. East Fishkill, which will produce nVidia's ICs, has been verified for CVD-based production, with actual products to ship within a month IIRC. And IBM has patented a dual-phase CVD-based process that is said to lower the K value significantly. So, even at worst, they're still ahead of the foundries when it comes to R&D. Check your sources next time.

Also, you totally overlooked the positive intrinsic factors that having IBM as a foundry brings. sSOI alone is pretty neat IMHO: a 1.7-3X decrease in power requirements, ~30% performance increase. That delta is basically akin to being a process node generation ahead of the competition.

(c) I never said it was 'exclusive' or anything of that sort, nor did I ever imply it. In fact, ATI probably will shore up this end of the development pipeline soon. But until then, they're behind. And Micron still doesn't approach an IBM or Intel in terms of R&D and bleeding-edge implementation of lithography processes and technologies, last I checked.

Yes, we know, Vince..."lithography is everything!" :rolleyes:

Yes, yes it is in the performance computing sector, where computational resources via concurrency of design dominate. And what's going to be really humorous is if ATI moves (as they should) and secures additional foundry resources for production and technology reasons.

But hey bud, what else would I expect out of the guy who said (what was that line?) 'I can't imagine why someone wouldn't like ATI.' Keep downplaying lithography's importance until ATI publicly moves on it; then you can come join my side and I'll supply the drinks.
 
Where have they in their press been "downplaying lithography"? Or do you get that from them not being overly aggressive in its use and/or press statements, as nVidia was with NV30? Frankly, even this particular statement seems to me to be what ATI did for this generation as well: they approached even 150 nm aggressively, pushing more out of it than others thought they could, as well as showing efficiency of design. And their statements about 110 nm were followed with "If it was creating pressure, I don't think we would be banging our heads against the wall," which reads to me as: if they thought the process unready or impractical (as they rightly did with 130 nm from TSMC for R300's timetable), they wouldn't be pushing for it. Granted, nVidia was "confident" with NV30 as well, but considering ATi's recent track record, I give them the benefit of the doubt.

Regardless, it remains to be seen how EITHER of them does at 110 nm, whenever and wherever it shows up next generation.

Meanwhile, seriously, stop getting distracted by Joe and ignoring everything else in the process. You seem very much willing to, and Joe, you seem VERY much willing to antagonize as well. Take the random crap to PMs if you both must lock horns, but keep it out of the public threads.
 
cthellis42 said:
Where have they in their press been "downplaying lithography"?

I don't think anyone's fighting over that. I was responding to how he basically diminished all the positive effects that IBM can provide as a foundry partner by invoking a fallacious case about dielectrics vis-a-vis SiLK.

What interests me about this 110nm argument is twofold:

  • Outside of a select few RAM manufacturers, who's using it in mass production? Possible case for Micron?!?
  • With 90nm production happening now (e.g. Intel, IBM, Sony OTSS), 110nm seems... suspect
 
Yeah, but you skipped right over my question earlier about whether you even think IBM would let their high-process fabs and techniques get used for GPUs right now. Why divert their key process fabs for others' sake and/or less-than-optimal profit margins? (As obviously their capacity is not infinite.)

If ATi or nVidia or the like were intricately partnered with IBM (or even Intel) and/or purchased wholesale to be integrated in, I could certainly see it; but since neither ATi nor nVidia has total fab control or this kind of partnership, I don't think they can latch onto the absolute best for even their top-end chips. (And certainly not for the sectors where the bulk of their chip sales will come from.)

I could see this more if either decided to make the very top chips designed ONLY for that extreme end, but it seems an inefficient way to design, considering they want to lean on similar processes and architecture designs for chips going all the way down to sub-$100. They could deliver potentially HUGE chips (at huge premiums), but I think they would lose out by splitting their design resources in this manner. And I don't think they can deliver what the extreme end SHOULD get if they're keeping design considerations for a much larger range of chips the whole time.

110 seems like a proper step for them at this point, as 90 nm, while the most advanced process available right now, is also going to be very expensive and quite occupied otherwise, and would not seem to be the best option for chips as fiercely competitive as theirs, given the volumes they want, the low margins they have, and the chips they need.
 
cthellis42 said:
Yeah, but you skipped right over my question earlier about whether you even think IBM would let their high-process fabs and techniques get used for GPUs right now. Why divert their key process fabs for others' sake and/or less-than-optimal profit margins? (As obviously their capacity is not infinite.)

Do they even sell that many of their own chips, like the Power 4 and 5 processors? I'm honestly asking... I haven't seen the figures. What I do know is that Nvidia can sell a hell of a lot of GPUs... so maybe it is in IBM's best interest to divert some of their key process manufacturing capacity to someone who could actually utilize it.
 
Vince said:
(a) Your entire post projected your negative view of nVidia's future. I count three negative comments concerning nVidia's future ...

Must I correct you again?

My post projected a negative view specifically about nVidia in the high-end market until the post DX10 era.

And it projected a negative view, exactly because I have a negative view of it, and I said why I felt that way. No mysteries here.
 
cthellis42 said:
Yeah, but you skipped right over my question earlier about whether you even think IBM would let their high-process fabs and techniques get used for GPUs right now. Why divert their key process fabs for others' sake and/or less-than-optimal profit margins? (As obviously their capacity is not infinite.)
IBM runs all their subunits as separate businesses, each responsible for its own budget and profit/loss. If one unit needs the services of another unit, it solicits a bid from it as if it were a completely separate company (and indeed, outside companies are often invited to produce competing bids). IBM's chip division doesn't get any better pricing or access to IBM's foundry division than Nvidia or anyone else.

Moreover, IBM's foundry division is doing quite terribly, and lost something like a hundred million or so last quarter. They may not have infinite capacity, but they might as well. They would love to be able to turn away customers because of undercapacity, but it's not too likely anytime soon.

Vince said:
Also, you totally overlooked the positive intrinsic factors that having IBM as a foundry brings. sSOI alone is pretty neat IMHO: a 1.7-3X decrease in power requirements, ~30% performance increase. That delta is basically akin to being a process node generation ahead of the competition.
First off, "1.7-3x decrease in power requirements" is bunk by any standard. 30%, maybe.

Second, those two 30%'s are mutually exclusive: you can use SOI to give you a 30% reduction in power OR a 30% clock speed increase. But not both at the same time. (Of course this is just a trade-off; you can get a little of both at the same time, but not the full effect.)
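
To put rough numbers on why it's a trade-off (back-of-the-envelope first-order CMOS relations, not IBM's data): dynamic power and gate delay both scale with the switched capacitance,

```latex
P_{\mathrm{dyn}} \approx \alpha \, C \, V_{dd}^{2} \, f
\qquad\qquad
t_{\mathrm{delay}} \propto \frac{C \, V_{dd}}{I_{\mathrm{on}}}
```

so if SOI trims the effective C by something like 30%, you can hold the clock and pocket roughly 30% less dynamic power, or spend the faster switching on clock speed and land back near the original power budget. Splitting the difference is fine; collecting both 30%'s in full is not.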

Third, that's all pretty theoretical; in the real world, the benefits of SOI have been tougher to come by. Look at K8 for example; despite being arguably years late and having a slightly rebalanced pipeline, it doesn't seem to achieve much of anything better than what K7 achieved a year ago in terms of power consumption or clock speed. Yes, Hammer is a great processor with pretty compelling performance, but that's all to do with integrating the memory controller. So far it doesn't seem that Hammer is doing anything with SOI it couldn't have achieved nearly as effectively and a whole lot more easily with bulk Si.

And in the meantime, SOI is hard to design for. In many cases it requires all new low-level circuit design techniques; this is plausibly said to be a main cause of Hammer's long delays. (Getting a working SOI chip was the easy part; developing working high-speed circuit designs to replace some from K7 that didn't survive the transition was the hard part. Hence those 800 MHz Hammer samples from so long ago.) This may be less of a problem for GPUs, which tend to rely on design cells furnished by the foundry rather than the sort of full-custom design reserved for critical parts of a high-end CPU. Or it may not. In general, it's likely enough of a problem that Nvidia (or ATI) wouldn't risk switching over to SOI until it's quite a bit more mature than now.

And speaking of, that's perhaps the most important point: if IBM's foundry is so good that it will provide some incredible competitive advantage for Nvidia over ATI, then what's to stop ATI from using them? Answer: absolutely nothing.
 
Remember that essentially all of the K8 benchmarks released so far have been within a 32-bit environment. There's a whole lot of the K8 processor that's remaining unused...
 
Good point, Dave, and I know IBM's been having some troubles, but do you know offhand which foundries are the ones providing most of the loss? Methinks the ones they have set up for their bleeding-edge processes are the ones most likely to be running at capacity, and if not (as you say), then chances are they likely get bitten by the overall difficulty in designing to those processes. (Or, in the case of certain IHVs, it not being the most optimal choice for their needs.)
 
Dave H said:
First off, "1.7-3x decrease in power requirements" is bunk by any standard. 30%, maybe.

I think IBM published a paper on their attempts to create an accurate benchmark of SOI vs. bulk performance using the Power core; they actually achieved pretty close to the theoretical predictions. I'm tired, but when I have time tomorrow I'll look for it.

Second, those two 30%'s are mutually exclusive: you can use SOI to give you a 30% reduction in power OR a 30% clock speed increase. But not both at the same time. (Of course this is just a trade-off; you can get a little of both at the same time, but not the full effect.)

I know, and that's why I didn't say "x and y"; I phrased this wrongly. Thanks for correcting me. Instead of a comma, a slash or "or" would have been better.

As for SOI, I don't think the "Hammer" line is something you should use as a baseline for any effective increases. Power5 or Cell would be preferable IMHO. Power4, or that low-power derivative, is a better present-day parallel than Hammer. Yet I think the next generation (Power5 or Cell), which was designed with SOI in mind and with the lessons learned, will be most indicative of the potential. So, by the end of 2004 we'll have a much better picture.
 
First time posting in this part of B3D, hey everyone...

cthellis42 said:
110 seems like a proper step for them at this point, as 90 nm, while the most advanced process available right now, is also going to be very expensive and quite occupied otherwise, and would not seem to be the best option for chips as fiercely competitive as theirs, given the volumes they want, the low margins they have, and the chips they need.

I'm confused; I didn't know there was a 110nm process. I thought the industry consensus was to go 130 -> 90. The 130nm process can give transistor gate lengths of 110nm, which some companies (especially memory makers) call 110nm chips. Is there actually a separate 110nm process node?

Dave H said:
Moreover, IBM's foundry division is doing quite terribly, and lost something like a hundred million or so last quarter. They may not have infinite capacity, but they might as well. They would love to be able to turn away customers because of undercapacity, but it's not too likely anytime soon.

That's what I thought.
http://www.iht.com/articles/105249.html

"IBM cited higher costs at its new factory in East Fishkill, New York, which is moving toward full-scale production, and unexpectedly weak demand by outside customers."

First off, "1.7-3x decrease in power requirements" is bunk by any standard. 30%, maybe.

Second, those two 30%'s are mutually exclusive: you can use SOI to give you a 30% reduction in power OR a 30% clock speed increase. But not both at the same time. (Of course this is just a trade-off; you can get a little of both at the same time, but not the full effect.)

You probably know more about SOI than I do... but as far as I can tell, SOI is really the magic bullet for getting rid of diffusion capacitance, by providing an impermeable diffusion barrier. Combined with high-k (yes, high-k) and strained silicon, you can get savings of 50% or more (and the savings grow with each process node).
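
(To be concrete about where that saving comes from, with purely illustrative bookkeeping rather than anyone's measured numbers: the switched capacitance of a node decomposes roughly as

```latex
C_{\mathrm{sw}} \approx C_{\mathrm{gate}} + C_{\mathrm{wire}} + C_{\mathrm{junction}}
```

and SOI mostly attacks the junction/diffusion term, since the buried oxide sits under the source/drain regions. How much that buys you overall depends on how big that slice is in a particular design, which is part of why the quoted figures range from ~30% to 50%+.)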

Head to your local EE dept library for info on this; a good survey paper I read was in the Proceedings of the 20th ICCD, by Duarte et al. The paper is by Intel researchers, btw, but it's about SOI in general, so it still applies.

About ATI vs nVidia:

Seriously, if any of you can say who will win this, you need to start working at Morgan Stanley or Goldman Sachs. Cuz they sure as hell don't know. IMHO, looks pretty even to me.
 
nondescript said:
You probably know more about SOI than I do... but as far as I can tell, SOI is really the magic bullet for getting rid of diffusion capacitance, by providing an impermeable diffusion barrier. Combined with high-k (yes, high-k) and strained silicon, you can get savings of 50% or more (and the savings grow with each process node).
There's no such thing as an impermeable diffusion barrier. It would be interesting to know, however, exactly how SOI works...I haven't read much on it to date.

Update:
I went ahead and did a quick Google search, and came up with IBM's press release on SOI:
http://www-3.ibm.com/chips/bluelogic/showcase/soi/

The company expects SOI to eventually replace bulk CMOS as the most commonly used substrate for advanced CMOS in mainstream microprocessors and other emerging wireless electronic devices requiring low power.
This makes it seem to me like SOI is an insulating layer between layers of logic on the silicon. I could see how this would be a massive benefit. And IBM does indeed claim the following stats:

SOI technology improves performance over bulk CMOS technology by 25 to 35%, equivalent to two years of bulk CMOS advances. SOI technology also brings power use advantages of 1.7 to 3 times.
If true, this could be quite impressive. However, since I currently am not aware of the specific ratios of power consumption between different methods in modern microprocessors, I have no idea whether or not this is realistic.

Another update:
I went ahead and started reading the white paper on the page I linked above. It's amazing how incorrect the description of transistors is. But at least I see now what SOI is. It's simply using an insulator between the gate and the source/drain of the transistor (and when describing this part, I think the diagrams were incorrect...but anyway...). This dramatically reduces the current draw of a transistor, which is definitely a good thing.
 
Chalnoth said:
If true, this could be quite impressive.

What, did you think I just made up that 1.7-3X figure? ;)

SOI is pretty neat IMHO, and the next-generation architectures designed around it will show the effects of using it, as opposed to trying to baseline SOI performance on Hammer.

For example, Toshiba is working with IBM on SOI under the STI Alliance and has created an eDRAM-on-SOI cell sans a capacitor by 'virtually' creating one through manipulating the floating-body effect (FBE) inherent in SOI.

http://www.eetimes.com/issue/mn/OEG20030623S0025
 
Chalnoth said:
There's no such thing as an impermeable diffusion barrier.

Well, it's not perfect, but it prevents dopants from getting too deep in. The diffusion selectivity of Si and its oxide is one of the main reasons Si is used in the first place. It stops diffusion pretty well, and it's almost impermeable, at least to the stuff that's relevant here.

This makes it seem to me like SOI is an insulating layer between layers of logic on the silicon.

Well, there is an oxide insulator between transistors, but all the logic is on one layer. SOI refers to an oxide layer beneath the silicon the logic is on, hence silicon-on-insulator. (The oxide layer is itself on a silicon substrate.)
 
Vince said:
As for SOI, I don't think the "Hammer" line is something you should use as a baseline for any effective increases. Power5 or Cell would be preferable IMHO. Power4, or that low-power derivative, is a better present-day parallel than Hammer. Yet I think the next generation (Power5 or Cell), which was designed with SOI in mind and with the lessons learned, will be most indicative of the potential. So, by the end of 2004 we'll have a much better picture.

I agree. SOI will only reach its potential after a few more years of process tweaks and, perhaps more importantly, experience on the circuit design side. (On the other hand IIRC SOI's theoretical benefits decrease at smaller chip geometries...)

That is certainly relevant to a discussion of the theoretical potential of SOI. But it's not terribly relevant to a discussion of a hypothetical first-generation SOI part from Nvidia that might be targeted at mid '04 or earlier. I would be surprised if Nvidia could achieve anywhere near the level of expertise with the SOI process that AMD has with Hammer (which, as I said, doesn't seem to have resulted in any clear advantage over bulk Si yet), much less the theoretical potential of the technology.
 
OK, one thing I have to say about ATI and XBox2. Did the XBox1 deal help NVIDIA corner the high-end market? If so, then why was NV3x such a goof?

Rather, NVIDIA lost their crown in the high end after XBox. I'm not saying the same thing will happen to ATI, but who's to say it won't, either?
 