Acert93 said: Ugh, I don't even want to contemplate that scenario... Can you imagine all the infighting/nastiness that would go on?
Oh, I would LOOOOVE that!
geo said: I know that this is an enthusiast-heavy board, people, but guess what -- we aren't where all the profits are. We aren't where even most of them are. We are where the juiciest ones are, and anyone would like to have them -- but they are neither necessary nor sufficient to being a successful company, and the bigger you are (or want to be) the truer that is by inescapable necessity.
What a first post! Oy.
Rollo said: Curse you Geo. Everyone knows the only thing that matters is who has the fastest single card, and it darn well better be on one PCB!
This is exactly why I think AMD/ATI's road ahead is long and rocky -- I don't believe America/the world has gotten over the whole "Intel Inside" mindset. "AMD Inside" has a LONG way to go before overcoming that stigma.
We enthusiasts are about the only people in the world who have preferred AMD.
Skrying said: Odd you say that. I think the reason this can actually work is that a lot of people are finding out about AMD now; they know who AMD is now, and AMD has finally made a name for itself in the market.
Perfect example is the fact that Dell is even considering AMD based systems. The fact that Dell is even serious about such a thing is already saying a lot about the strides AMD has made.
Rollo said: The reason I think this is that people AREN'T "finding out about AMD". Their market share in Q1 2006 was 21%, the first time it had been over 20% since 2001.
Having 1/5 of the market isn't exactly encouraging when you've just been forced to cut your prices in half.
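A rough sketch of why that combination stings. Only the 21% share figure and the "prices cut in half" claim come from the thread; the total market size, starting share, and selling prices below are invented purely for illustration:

```python
# Toy model: gaining share doesn't help much if selling prices are halved.
# All absolute numbers are hypothetical; only the 21% share and the
# halved prices come from the posts above.
total_units = 100_000_000          # hypothetical total CPU market (units)
old_share, new_share = 0.18, 0.21  # market share before/after (21% from the post)
old_asp, new_asp = 120.0, 60.0     # hypothetical average selling price, halved

old_revenue = total_units * old_share * old_asp
new_revenue = total_units * new_share * new_asp

print(f"old revenue: ${old_revenue / 1e9:.2f}B")   # $2.16B
print(f"new revenue: ${new_revenue / 1e9:.2f}B")   # $1.26B
print(f"change: {100 * (new_revenue / old_revenue - 1):+.0f}%")  # -42%
```

Under these made-up numbers, a three-point share gain still leaves revenue down over 40% once the price cut is factored in.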
poopypoo said: You're still talking nonsense. Sure, they'd like to have more market share, and yes, Conroe will hurt them, and yes, Intel can certainly afford more bad quarters of price war than AMD, but you are assuming that AMD will do nothing to stem the tide for the next year or two -- to say nothing of AMD's continuing market share growth and a vast shortage of Conroes. That strikes me as... well... overly optimistic???
Jawed said: Over the next year NVidia's margins take a big hit, because:
- D3D10 forces them to add in the features, performance and IQ they've been leaving out
- superscalar architecture wastes die space on functional units that are often idle
- lack of unification also costs excessive die space.
Jawed
Razor1 said: G80 already has those features you mentioned, Jawed, and the last two a little bit, but not really, since going unified will mean a more complex unit using more transistors than a traditional pixel pipeline + vertex shader. This is why I'm hoping the R600 has more than 64 shader units; I mentioned this a long time ago, and ATi would be making a big mistake by going with only 64 units.
Jawed said: Of course, which costs transistors. Big time.
I'm doubtful, because you're comparing against a pipeline that has terrible dynamic branching performance.
I seriously think that only 16 TMUs and 16 ROPs, as some are predicting, would be a serious problem. 16 double-rate ROPs might be enough, but 16 TMUs (even assisted by 16 vertex fetch units that can also function as point-sampling texture units in pixel shaders) just looks too marginal to me.
I guess R600 will grow from R580 by relatively little compared to the ~doubling in transistor count for G80 against G71. Though a lot of G80's extra transistors will be memory, I guess, and memory tends to be much denser...
If R5xx is ~35% margins on 90nm and G7x is ~45% margins, and then you increase transistor count on the former by 20% (total guess) and reduce die size with 80nm tech by 15%, while G80's transistor count increases by 90% whilst remaining on 90nm, well...
Jawed
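Jawed's guessed numbers can be turned into a quick back-of-envelope calculation. This assumes die size scales linearly with transistor count (his own caveat that G80's extra transistors are largely denser memory would soften G80's figure):

```python
# Back-of-envelope die-size scaling, using only the guesses from the post:
# R600 = R580 transistors +20%, on 80nm (~15% smaller die per transistor);
# G80 = G71 transistors +90%, still on 90nm. Die size assumed proportional
# to transistor count -- a crude simplification.
r5xx_die = 1.0   # normalized R580 die size
g7x_die = 1.0    # normalized G71 die size

r600_die = r5xx_die * 1.20 * 0.85  # +20% transistors, 80nm shrink
g80_die = g7x_die * 1.90           # +90% transistors, no shrink

print(f"R600 relative die size: {r600_die:.2f}x")  # 1.02x
print(f"G80  relative die size: {g80_die:.2f}x")   # 1.90x
```

Under those guesses, R600 stays roughly the same size as R580 while G80 nearly doubles against G71, which is the whole margin argument in one line of arithmetic.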
satein said: Don't take this too seriously...
How about the opposite alternative (I know, I know... you may start laughing): what if, in the end, AMD/ATi bought out Nvidia instead?
chavvdarrr said: can someone try to compare unified vs non-unified in terms of transistor count?
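No real transistor figures were public at the time, so any comparison can only be schematic. Here is a toy model in the spirit of Razor1's point that a unified unit costs more than either specialized unit; every per-unit cost below is invented:

```python
# Toy transistor-budget comparison, unified vs non-unified shader designs.
# All per-unit costs are invented for illustration; only the unit counts
# (R580-like 48 PS + 8 VS, rumoured 64 unified for R600) echo the thread.
PS_COST = 1.0       # relative cost of one pixel-shader pipe
VS_COST = 0.6       # a vertex shader is assumed simpler than a pixel pipe
UNIFIED_COST = 1.3  # a unified unit does both jobs, plus scheduling overhead

non_unified = 48 * PS_COST + 8 * VS_COST  # fixed-function split
unified = 64 * UNIFIED_COST               # one pool of identical units

print(f"non-unified cost: {non_unified:.1f}")  # 52.8
print(f"unified cost:     {unified:.1f}")      # 83.2
```

With these made-up weights the unified pool costs noticeably more transistors for a similar unit count, which is exactly why the trade-off only pays off if utilization improves (see the workload point below).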
geo said: Some people are wondering, as a result, whether if NV doesn't unify, R600 might be a professional workstation market killer for ATI (since that kind of work typically involves a much greater vertex shader workload) -- unless there is somewhere else in the pipeline where they get limited.
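geo's workstation point can be sketched with a tiny utilization model. The unit counts (an R580-like 8 VS + 48 PS split) are illustrative, and the model crudely assumes a unified pool always achieves perfect balance:

```python
# Why a vertex-heavy workload favours unification: with a fixed split,
# idle pixel pipes cannot help with vertex work, so the scarcer stage
# bottlenecks the whole pipeline. Unit counts are illustrative.
def throughput(vs_frac, vs_units, ps_units):
    """Relative throughput on a workload that is vs_frac vertex work.
    1.0 means the design runs as fast as a perfectly balanced unified
    pool of the same total size; the slower stage sets the pace."""
    total = vs_units + ps_units
    vs_rate = vs_units / (vs_frac * total) if vs_frac > 0 else float("inf")
    ps_rate = ps_units / ((1 - vs_frac) * total) if vs_frac < 1 else float("inf")
    return min(vs_rate, ps_rate)

for vs_frac in (0.1, 0.5):
    fixed = throughput(vs_frac, vs_units=8, ps_units=48)
    print(f"vertex fraction {vs_frac:.0%}: fixed split runs at "
          f"{fixed:.2f}x of the unified design's 1.00x")
```

On a game-like 10%-vertex load the fixed split is nearly as good (~0.95x), but at a workstation-like 50%-vertex load it collapses to ~0.29x, since the 8 vertex units starve the pipeline -- which is the scenario where a non-unified part could get hurt.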