Prediction: In a year, NVIDIA buys the combined AMD/ATI

Umm. Given that the market has had a week to digest this now, AMD/ATI is a $14B company, not a $9.2B one. Any market perceived "overpricing" of ATI has already been taken out of AMD's hide. :LOL:

Intel proved you don't have to have the fastest chip to survive, didn't they? You need to be able to feed the OEMs the way they want to be fed, from bottom to top. That's what this deal has been about for AMD from the beginning, and why they approached ATI.

I know that this is an enthusiast-heavy board, people, but guess what --we aren't where all the profits are. We aren't even where most of them are. We are where the juiciest ones are, and anyone would like to have them --but they are neither necessary nor sufficient to being a successful company, and the bigger you are (or want to be) the truer that is by inescapable necessity.

What a first post! Oy.
 
geo said:
I know that this is an enthusiast-heavy board, people, but guess what --we aren't where all the profits are. We aren't even where most of them are. We are where the juiciest ones are, and anyone would like to have them --but they are neither necessary nor sufficient to being a successful company, and the bigger you are (or want to be) the truer that is by inescapable necessity.

What a first post! Oy.

Curse you Geo. Everyone knows the only thing that matters is who has the fastest single card, and it darn well better be on one pcb! ;)

This is exactly why I think AMD/ATI's road ahead is long and rocky- I don't believe America/the world has gotten over the whole "Intel Inside" mindset. "AMD inside" has a LONG way to go before overcoming that stigma.

We enthusiasts are about the only people in the world who have preferred AMD.
 
Rollo said:
Curse you Geo. Everyone knows the only thing that matters is who has the fastest single card, and it darn well better be on one pcb! ;)

This is exactly why I think AMD/ATI's road ahead is long and rocky- I don't believe America/the world has gotten over the whole "Intel Inside" mindset. "AMD inside" has a LONG way to go before overcoming that stigma.

We enthusiasts are about the only people in the world who have preferred AMD.

Odd you say that. I think the reason this can actually work is that a lot of people are finding out about AMD now; they know who AMD is, and AMD has finally made a name for itself in the market.

A perfect example is the fact that Dell is even considering AMD-based systems. That Dell is serious about such a thing already says a lot about the strides AMD has made.
 
Rollo said:
Curse you Geo. Everyone knows the only thing that matters is who has the fastest single card, and it darn well better be on one pcb! ;)

"Everyone" also knows if you don't, you should have the good grace to immediately lie down in the road and die too. :LOL:

Yet history shows it didn't kill ATI pre-9700, didn't kill NV in 2003, and didn't kill Intel pre-Conroe. It won't cause AMD to shrivel up and blow away now.
 
Skrying said:
Odd you say that. I think the reason this can actually work is that a lot of people are finding out about AMD now; they know who AMD is, and AMD has finally made a name for itself in the market.

A perfect example is the fact that Dell is even considering AMD-based systems. That Dell is serious about such a thing already says a lot about the strides AMD has made.

The reason I think this is that people AREN'T "finding out about AMD". Their market share in Q1 2006 was 21%, the first time it had been over 20% since 2001.

Having 1/5 of the market isn't exactly encouraging when you've just been forced to cut your prices in half. :(

When you consider AMD probably owned the top 10% of the market in Q1 2006, how they fared in the low to mid range is especially depressing.

It's true they've got ATI's IP now, but they aren't exactly renowned as motherboard leaders either. It's only very recently that they've even had good boards and established some parity.
 
Rollo said:
The reason I think this is that people AREN'T "finding out about AMD". Their market share in Q1 2006 was 21%, the first time it had been over 20% since 2001.

Having 1/5 of the market isn't exactly encouraging when you've just been forced to cut your prices in half. :(

You're still talking nonsense. Sure, they'd like to have more market share, and yes, Conroe will hurt them, and yes, Intel can certainly afford more bad quarters of price war than AMD, but you are assuming that AMD will do nothing to stem the tide for the next year or two -- to say nothing of AMD's continuing market share growth and a vast shortage of Conroes. That strikes me as... well... overly optimistic???
 
poopypoo said:
You're still talking nonsense. Sure, they'd like to have more market share, and yes, Conroe will hurt them, and yes, Intel can certainly afford more bad quarters of price war than AMD, but you are assuming that AMD will do nothing to stem the tide for the next year or two -- to say nothing of AMD's continuing market share growth and a vast shortage of Conroes. That strikes me as... well... overly optimistic???


You'd be wrong about that- no way I want to see AMD fail. Think about what that would mean to us as consumers of CPUs/motherboards/video cards.

There's no "nonsense" in what I've posted; I just don't think having ATI in house is going to magically transform AMD into a competitor for Intel. We'll have to see how it plays out, but any way you cut it, I don't see any growth for AMD in the near future.

I'll put it this way: they've been able to claw their way to a 20% market share with superior processors. You see them going way up from there because they got third-place motherboard maker ATI in the mix?

That is what I'd call "optimistic".
 
My thoughts on the matter...

Short-term (next 2 years): AMD will struggle somewhat against Intel, but having ATI will help them out. NV will continue doing what it does best... being profitable. Intel will revel in no longer "sucking" (in the words of JHH).

Mid-term (2-7 years out): AMD + ATI synergy occurs... the company finally becomes highly profitable. NV's PC market share will probably decline somewhat, but they are diversified enough to survive. Intel... who knows... could return to complacency but might not.

Long-term (8-10 years out): AMD achieves parity with Intel, but a new player enters the CPU market...
 
Over the next year NVidia's margins take a big hit, because:
  • D3D10 forces them to add in the features, performance and IQ they've been leaving out
  • superscalar architecture wastes die space on functional units that are often idle
  • lack of unification also costs excessive die space.
Jawed
 
Jawed said:
Over the next year NVidia's margins take a big hit, because:
  • D3D10 forces them to add in the features, performance and IQ they've been leaving out
  • superscalar architecture wastes die space on functional units that are often idle
  • lack of unification also costs excessive die space.
Jawed

G80 already has those features you mentioned Jawed,

And the last two a little bit, but not really, since a unified unit will be more complex and use more transistors than a traditional pixel pipeline + vertex shader.

Have you guys wondered whether a 64-ALU R600 will compete against a 32-pipeline chip (64 ALUs, plus 16 vertex and geometry shaders)?

I don't care if it's unified or not; the R600 will end up underpowered compared to the G80 unless the R600 has that much higher clocks, and then ATI will run into the same issues with heat and power usage. This is why I'm hoping the R600 has more than 64 shader units. I mentioned this a long time ago: ATI would be making a big mistake by going with only 64 units.
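To put rough numbers on that comparison, here's a back-of-envelope sketch of peak ALU throughput for the two hypothetical configurations. The unit counts come from the post above, but both clock speeds are pure assumptions for illustration, not leaked specs:

```python
# Back-of-envelope comparison of the two hypothetical configurations:
# a 64-unit unified R600 vs. a G80 with 64 pixel ALUs plus 16 vertex/
# geometry units. Clocks are assumed purely for illustration.

def peak_alu_ops(units, clock_mhz):
    """Peak ALU ops per second, assuming one op per unit per clock."""
    return units * clock_mhz * 1e6

r600 = peak_alu_ops(64, 750)      # 64 unified units at an assumed 750 MHz
g80 = peak_alu_ops(64 + 16, 600)  # 80 total units at an assumed 600 MHz

print(f"R600 peak: {r600 / 1e9:.1f} G ops/s")
print(f"G80  peak: {g80 / 1e9:.1f} G ops/s")
# With 80 units against 64, the unified part needs (80/64 - 1) = 25%
# higher clocks just to match peak throughput -- which is exactly the
# heat/power concern raised above.
```

The assumed clocks were chosen so the two come out equal, which makes the point concrete: a 64-unit chip needs a 25% clock advantage over an 80-unit chip just to break even on peak throughput.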
 
Don't take this too seriously ;)...
How about the opposite alternative (I know, I know... you may start laughing): what if, in the end, AMD/ATi buys out Nvidia instead?
 
Razor1 said:
G80 already has those features you mentioned Jawed,
Of course, which costs transistors. Big time.

And the last two a little bit, but not really, since a unified unit will be more complex and use more transistors than a traditional pixel pipeline + vertex shader.
I'm doubtful, because you're comparing against a pipeline that has terrible dynamic branching performance.

This is why I'm hoping the R600 has more than 64 shader units. I mentioned this a long time ago: ATI would be making a big mistake by going with only 64 units.
I seriously think that only 16 TMUs and 16 ROPs, as some are predicting, would be a serious problem. 16 double-rate ROPs might be enough, but 16 TMUs (even assisted by 16 vertex fetch units that can also function as point-sampling texture units in pixel shaders) just looks too marginal to me.

I guess R600 will grow from R580 by relatively little compared to the ~doubling in transistor count for G80 against G71. Though a lot of G80's extra transistors will be memory, I guess, and memory tends to be much denser...

If R5xx is ~35% margins on 90nm and G7x is ~45% margins, and then you increase transistor count on the former by 20% (total guess) and reduce die size with 80nm tech by 15% whilst G80's transistor count increases by 90% whilst remaining on 90nm, well...

Jawed
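A minimal sketch of that die-size arithmetic, with the baselines normalized and every percentage taken straight from the guesses above (none of these are real measured figures):

```python
# Rough sketch of the die-size arithmetic, using the assumed numbers from
# the post: R600 = R580 transistors +20% with a ~15% area saving from 80nm;
# G80 = G71 transistors +90%, staying on 90nm. Baselines normalized to 1.0.

r580_area = 1.0
g71_area = 1.0

# R600: +20% transistors, then the 80nm shrink takes ~15% off the die area.
r600_area = r580_area * 1.20 * 0.85

# G80: +90% transistors on the same 90nm process, so area scales roughly
# with transistor count (ignoring that dense memory arrays soften this).
g80_area = g71_area * 1.90

print(f"R600 relative die area: {r600_area:.2f}x R580")
print(f"G80  relative die area: {g80_area:.2f}x G71")
# Per-die cost scales at least linearly with area (yield losses make big
# dies worse than linear), so under these guesses the margin gap narrows.
```

Under those assumptions R600 lands at roughly the same die area as R580 (about 1.02x), while G80 nearly doubles versus G71 -- which is the whole basis of the margin-squeeze argument.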
 
Skrying said:
Odd you say that. I think the reason this can actually work is that a lot of people are finding out about AMD now; they know who AMD is, and AMD has finally made a name for itself in the market.

A perfect example is the fact that Dell is even considering AMD-based systems. That Dell is serious about such a thing already says a lot about the strides AMD has made.

It's true AMD has gained more popularity recently, but only in the last year or so, which isn't enough to overcome Intel's marketing on Conroe. AMD is speculated to see a decline of over 30% in the next few quarters, and with Intel being cheaper and having better processors for the price, AMD can't really even fight a price war. Intel's processors are also cheaper to produce at 65nm, and by mid next year Intel will already be producing them at 45nm; Intel has at least a full two-quarter lead on AMD in process technology and a year in core technology (maybe more, since we still don't see any plans for AMD to go to 45nm). AMD got complacent over the past year. I remember AMD stating earlier this year that Intel was two years behind and wouldn't catch up to AMD until 2008 -- well, Intel is a year ahead of AMD now.
 
Jawed said:
Of course, which costs transistors. Big time.


I'm doubtful, because you're comparing against a pipeline that has terrible dynamic branching performance.


I seriously think that only 16 TMUs and 16 ROPs, as some are predicting, would be a serious problem. 16 double-rate ROPs might be enough, but 16 TMUs (even assisted by 16 vertex fetch units that can also function as point-sampling texture units in pixel shaders) just looks too marginal to me.

I guess R600 will grow from R580 by relatively little compared to the ~doubling in transistor count for G80 against G71. Though a lot of G80's extra transistors will be memory, I guess, and memory tends to be much denser...

If R5xx is ~35% margins on 90nm and G7x is ~45% margins, and then you increase transistor count on the former by 20% (total guess) and reduce die size with 80nm tech by 15% whilst G80's transistor count increases by 90% whilst remaining on 90nm, well...

Jawed

I think the R600 will have a fairly large transistor count if it's based off Xenos tech, which is about 330 million transistors for the Xenos chip (not sure if that includes the eDRAM though). Anyway, IMO it will end up around 480 million, along with the G80; depending on the process used, that will make a bit of difference to clocks, as you said.

I'm really not sure about the dynamic branching performance of the G80; it would seem a bit much for nV to make changes there too if IQ is improved. I would think it would have to be one or the other, but the G80 is definitely getting a new AA engine, and I would think nV will stress removing (or giving the choice of removing) angle-dependent AF.

I think ATi will be going for a GPU that is more Xenos than R5xx, just because they have the unified pipes working, and given what Orton said about it. And they should be increasing the number of ROPs and TMUs if they go that way; hopefully so, because staying at 16/16 will hurt them, as it hurts the R5xx at the moment.
 
satein said:
Don't take this too seriously ;)...
How about the opposite alternative (I know, I know... you may start laughing): what if, in the end, AMD/ATi buys out Nvidia instead?

I think it unlikely that regulators would allow such a combination, whoever the "senior partner" was. Unless the landscape changes quite a bit in the interim.
 
Jawed said:
Over the next year NVidia's margins take a big hit, because:
  • D3D10 forces them to add in the features, performance and IQ they've been leaving out
  • superscalar architecture wastes die space on functional units that are often idle
  • lack of unification also costs excessive die space.
Jawed

Do you know something about G80 that you're not sharing? :smile: Or is the concept of unified shaders so romantic that it just "has" to be more efficient? I don't get the first point either. Even when Nvidia had more features and a bigger chip (NV40), their margins were better than ATi's.
 
Can someone try to compare unified vs. non-unified in terms of transistor count?

It seems to me that non-unified will need fewer transistors, although I'm not sure how branching may affect the comparison (i.e. whether making branching fast is much harder on a non-unified architecture).
 
chavvdarrr said:
Can someone try to compare unified vs. non-unified in terms of transistor count?

I don't think we know. I used to think the argument was over whether a combined PS/VS unit was enough bigger in transistors than a PS-only one that you could come to a reasonable difference of opinion about whether you were better off not unifying.

Now I'm not so sure that the PS vs. PS/VS transistor count is really the major difference --it may be that the major cost is in the caching and control logic to route this stuff around and store in-flight information while you do it.

What ought to be true is that a unified part ought to be much more agnostic to the relative balance of ps compared to vs workload that any given app is requesting, whereas developers would need to give more consideration to the "baked in" ratio provided by a non-unified part.

As a result, some people are wondering, if NV doesn't unify, whether R600 might be a killer product for ATI in the professional workstation market (since that kind of work typically carries a much greater vs workload) --unless there is somewhere else in the pipeline where they get limited.
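That workload-agnosticism argument can be sketched with a toy model: a fixed VS/PS split bottlenecks on whichever pool runs out first, while a unified pool of the same total size doesn't care about the mix. All the unit counts and workload fractions here are illustrative assumptions, not real hardware figures:

```python
# Toy model: effective throughput of a fixed VS/PS split vs. a unified
# shader pool across different vertex/pixel workload mixes.

def fixed_split_throughput(vs_units, ps_units, vs_frac):
    """Throughput of a fixed split, capped by whichever pool saturates first.

    vs_frac is the fraction of total shader work that is vertex work.
    Each side needs (its work fraction / its unit count) time per unit of
    work; the slower side limits the whole pipeline.
    """
    ps_frac = 1.0 - vs_frac
    time_per_work = max(vs_frac / vs_units, ps_frac / ps_units)
    return 1.0 / time_per_work

def unified_throughput(total_units, vs_frac):
    """A unified pool can assign any unit to any work, so throughput is
    insensitive to the vertex/pixel mix."""
    return total_units

# Compare an assumed 8 VS + 24 PS fixed split against 32 unified units.
for vs_frac in (0.10, 0.25, 0.50, 0.75):
    fixed = fixed_split_throughput(8, 24, vs_frac)
    uni = unified_throughput(32, vs_frac)
    print(f"vs_frac={vs_frac:.2f}  fixed={fixed:5.1f}  unified={uni}")
```

The fixed split only hits its full 32 units of throughput at the exact mix it was "baked in" for (25% vertex work here); at the VS-heavy mixes typical of workstation loads its effective throughput drops to a third of the unified part's, which is the workstation scenario described above.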
 
geo said:
As a result, some people are wondering, if NV doesn't unify, whether R600 might be a killer product for ATI in the professional workstation market (since that kind of work typically carries a much greater vs workload) --unless there is somewhere else in the pipeline where they get limited.

Hmmmm interesting point. That could be quite a watershed for ATi in the professional market.
 