Competition over?

RussSchultz said:
Actually, I think Joe's point was that NVIDIA put in work, but their vision of what DX9 was going to be wasn't the one that ended up being ratified, so it was either "make do with what ya got" or "redesign and be even more late"

Would you believe that I made my last post before reading this? You're quite the prophet. ;)
 
Joe DeFuria said:
Things don't look too good IMO, for nVidia in the "high-end discrete GPU" business for the next few years. IBM is having issues with low-k... (isn't better low-k at IBM the supposed reason for nVidia switching the high-end parts to IBM?)

Whoa... You know Joe, there is more to dielectrics than SiLK (e.g. CVD low-k has been verified, with production soon at E. Fishkill). And there do exist other superior process technologies, like sSOI. This comment, IMHO, is quite obtuse when you look at what IBM could potentially offer a given company compared to what TSMC and UMC could - hell, it isn't even fair. IBM's R&D is just a "little" bit more extensive.

Not sure how you could turn a plus (e.g. more options, as in TSMC + IBM) into a negative, but I'm sure there's a reason behind it... Can you at least admit that having more options is intrinsically better than having fewer? And we'll leave the R&D/technology aspects out of it, since that argument is futile.

EDIT: Although, the prospect of a GPU on an 11S-derived process (65nm SOI) is quite appealing. IMHO, IBM has the potential to be a decisive dynamic in nVidia's "performance" sector. With the trend towards computational resources becoming paramount, process technology is vastly important, and if utilized fully, this will become apparent the moment a product based on a bleeding-edge IBM process is released.
 
Joe DeFuria said:
They still have just as much chance as ATi to make a decent DX10 GPU, if they're willing to put the work and effort into it.

Disagree completely.

ATI has a much bigger chance at having the "best" DX10 GPU. (Best defined as combination of time-to-market, most DX10 feature support, and LEAST amount of wasted transistors for "non DX10 supported" features.)

I'm in no way claiming that ATI can't really mess up and drop the ball. Or can't fail out of pure bad luck. Anything can happen in this industry. But to say that nVidia (or anyone but ATI) has just as much a chance at the leading DX10 GPU is pretty silly, IMO. ;)

In short: ATI clearly has the best chance...though that doesn't guarantee success.

You're essentially wrong. Intel supposedly has x86 under its control, yet AMD can pull out a better CPU with far fewer resources. In the end you see it's all about how well they manage to do it; it has nothing to do with who controls what.
 
Well, I certainly would not have guessed ATi would outclass nV with the 9700, let alone almost across the board. nV could still surprise us, and the FX may have more legs in it than the R300.

NV30:Willamette P4::R300:Athlon ... NV40:Northwood P4::R420:Athlon XP? It could happen. I doubt it (and hope neither ATi nor nV produce a weak upgrade), but the tables can turn with one false tumble of the silicon dice.
 
I know that the comparison is not very accurate, but when I hear about nVidia talking about diversifying I can't get 3dfx's silly TV card out of my head.

(Uh... meaning the thought of the card, not the actual card! :D )
 
horvendile said:
I know that the comparison is not very accurate, but when I hear about nVidia talking about diversifying I can't get 3dfx's silly TV card out of my head.

Speaking of which, that old VoodooTV 200 card sucked. The picture quality was so-so, and recordings would miss a frame every few frames, so you'd end up with something like 20fps. I eventually put my old ATI TV Wonder back in. Too bad they only had time for one beta driver update... and then they were gone. :(
 
First off, does anyone know where Jen-Hsun's original comments are stemming from? I'd like to read them outside of Legion's "simplification."

Though if it IS right, my main niggle is whether this is telling us something about nVidia's expectations for NV40 vs. R420 when they show up. Considering their current marketing trend, I would have expected "not only are we going to be diversifying all over the place, but we will be kicking everyone's ass again come NV40 time, to boot!" Oh, you know, something to that effect. ;) This just sounds a little "meek" by their current trend--hence the niggling feeling.

Meanwhile, you make good points Vince, but do you actually see IBM freeing up space on their top-of-the-line fabs for anyone's GPUs any time soon? I'd love to see it as well, but GPUs don't usually clear as much profit, and I imagine IBM wants to utilize their best fabs first and foremost for their own chips, and then afterwards for their tight partners and high-profit-bearing chips. nVidia is shifting some work to IBM, but I doubt anyone is pushing high-end GPUs to 11S just yet.
 
nonamer said:
Joe DeFuria said:
They still have just as much chance as ATi to make a decent DX10 GPU, if they're willing to put the work and effort into it.

Disagree completely.

ATI has a much bigger chance at having the "best" DX10 GPU. (Best defined as combination of time-to-market, most DX10 feature support, and LEAST amount of wasted transistors for "non DX10 supported" features.)

I'm in no way claiming that ATI can't really mess up and drop the ball. Or can't fail out of pure bad luck. Anything can happen in this industry. But to say that nVidia (or anyone but ATI) has just as much a chance at the leading DX10 GPU is pretty silly, IMO. ;)

In short: ATI clearly has the best chance...though that doesn't guarantee success.

You're essentially wrong. Intel supposedly has x86 under its control, yet AMD can pull out a better CPU with far fewer resources. In the end you see it's all about how well they manage to do it; it has nothing to do with who controls what.
What a useless analogy :rolleyes:; x86 isn't a moving/evolving target like DirectX, which gets major features added with every new version.

cu

incurable
 
Vince said:
This comment, IMHO, is quite obtuse when you look at what IBM could potentially offer to a given company compared to what TSMC and UMC could - hell, it isn't even fair. IBM's R&D is just a "little" bit more extensive.

And IBM is not accustomed to the business model that TSMC has, and what TSMC's customers typically need. You're right, it's not fair.

Not sure how you could turn a plus (eg. More options, as in TSMC + IBM) into a negative...but I'm sure there's a reason behind it...

:rolleyes:

Who said I turned more options into a negative? There must be a reason for you making things up... At worst, I'm saying IBM (in the short term) doesn't appear to me to be able to offset nVidia's disadvantage of taking a back seat to DX10 development.

And why is this option exclusive to nVidia, by the way? Is ATI not permitted to become a customer of IBM (or Micron, or anyone else) that they feel offers the best tech?

With the trend towards computational resources becoming paramount, process technology is vastly important...

Yes, we know, Vince..."lithography is everything!" :rolleyes:
 
incurable said:
You're essentially wrong. Intel supposedly has x86 under its control, yet AMD can pull out a better CPU with far fewer resources. In the end you see it's all about how well they manage to do it; it has nothing to do with who controls what.

What a useless analogy :rolleyes:; x86 isn't a moving/evolving target like DirectX, which gets major features added with every new version.

cu

incurable

Exactly.
 
On the subject of processes, it sounds as though ATI want to push things a little more:

http://biz.yahoo.com/rc/030926/tech_taiwan_ati_1.html

ATI Chairman and Chief Executive K.Y. Ho said the company was designing all new chips with circuits just 0.13 microns wide or less, allowing it to pack more computing power into each product than the previous generation of 0.15 technology.

"We don't have any problems with 0.13 margins," Ho told Reuters in an interview on the sidelines of Taiwan's Computex computer trade show.

"Actually, it's the opposite. We want to go into 0.13 and 0.11 very aggressively. If it was creating pressure, I don't think we would be banging our heads against the wall," Ho said.

Mmmm, 110nm
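For anyone wanting to put rough numbers on the shrink talk, here's a back-of-envelope sketch of ideal die-area scaling (purely illustrative — real process shrinks rarely achieve the full ideal scaling, so treat these as upper bounds):

```python
# Back-of-envelope die-area scaling for an ideal linear shrink.
# Area scales with the square of the feature size, so a smaller
# process lets you fit the same transistor count in less silicon
# (or more transistors in the same die area).

def area_ratio(old_nm: float, new_nm: float) -> float:
    """Ideal die-area ratio after a shrink from old_nm to new_nm."""
    return (new_nm / old_nm) ** 2

print(round(area_ratio(150, 130), 3))  # 0.15um -> 0.13um: ~0.751
print(round(area_ratio(130, 110), 3))  # 0.13um -> 0.11um: ~0.716
```

So even the "half-node" 0.13 -> 0.11 step is ideally worth a ~28% area reduction, which is where the margin appeal comes from.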
 
incurable said:
What a useless analogy :rolleyes:; x86 isn't a moving/evolving target like DirectX, which gets major features added with every new version.

MMX+SSE+SSE2+SSE3(Q403)+SSE4(Q404), the instruction set does not seem that static to me.
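Those extensions aren't just marketing names, either — software has to probe for each one at runtime. As a minimal sketch (the bit positions follow Intel's published CPUID leaf-1 feature flags; the sample register values are made up for illustration):

```python
# Hypothetical sketch: decode SIMD feature bits from the EDX/ECX values
# returned by CPUID leaf 1. Bit positions are from Intel's CPUID docs;
# the sample below imitates a chip with MMX/SSE/SSE2 but no SSE3
# (think a Northwood-era part).

FEATURES = {
    "MMX":  ("edx", 23),
    "SSE":  ("edx", 25),
    "SSE2": ("edx", 26),
    "SSE3": ("ecx", 0),
}

def decode_simd(edx: int, ecx: int) -> dict:
    """Map each extension name to whether its feature bit is set."""
    regs = {"edx": edx, "ecx": ecx}
    return {name: bool((regs[reg] >> bit) & 1)
            for name, (reg, bit) in FEATURES.items()}

sample_edx = (1 << 23) | (1 << 25) | (1 << 26)
print(decode_simd(sample_edx, 0))
# → {'MMX': True, 'SSE': True, 'SSE2': True, 'SSE3': False}
```

The point being: the feature-flag table keeps growing with every new extension, which is exactly the "not static" argument.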
 
nVidia's nForce3 has a bug, and that is why vendors are not so excited. Plus, where is the SoundStorm?

Chalnoth said:
People learn to memorize instead of to think. I'm currently the TA in a physics course for bio majors (who largely go on to pre-med and stuff), and it's just scary how resistant they are to actual thought.

Yes, this also encourages cheating, because memorizing always seems silly when you can look things up in a book. In addition, it is much harder to cheat if the questions are thought-provoking rather than regurgitation.

On a side note, it is easier on a teacher to have folks memorize and then parrot on a test; that is why it is the way it is.
 
DaveBaumann said:
On the subject of processes, it sounds as though ATI want to push things a little more:

...

Mmmm, 110nm

For what it's worth, it's not only ATi eyeing up 0.11.
 
Well, I've heard talk that NVIDIA will move to that as well (in fact, Alain Tiquet said as much when I asked him at the NV30 launch last year); however, it never really seems to be featured much on customer fabs' roadmaps (not that I've studied them intently). There's a lot of fuss over 90nm next, partly because that's what Intel is shifting to - however, it seems that the graphics manufacturers often use intermediate steps that Intel doesn't.
 
AFAIK, the plan was always for nVidia to shift to 0.11 on the next refresh (a la NV45), then go to 0.09 on whichever refresh required it (so NV55, most likely). I'd be surprised if they changed process on a next-generation part again.

That plan may have changed, however, given recent events.
 
Pete said:
WTH, Russ is correctly interpreting people's posts now? Inconceivable!

“You keep using that word. I do not think it means what you think it means.” ;)
 
Tim said:
incurable said:
What a useless analogy :rolleyes:; x86 isn't a moving/evolving target like DirectX, which gets major features added with every new version.
MMX+SSE+SSE2+SSE3(Q403)+SSE4(Q404), the instruction set does not seem that static to me.
Those extensions are not part of the base x86 ISA and, btw, are usually not supported by AMD until years after their original introduction.

cu

incurable
 