Sound_Card
Regular
Anybody here think they would honestly buy a $600 HD 4870?
I'm missing why exactly people want the ability to bounce between "scalar" and "vector". It isn't like they need separate registers the way LRB does. Running scalar means you're throwing away 16x or 32x (or whatever) of your ALU capacity, and if you want to do that it's easy: just predicate out all but one lane of your vector. What's really needed beyond what is already there (in NVIDIA's GPUs) is branch/call by register.
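To make the predication point concrete, here's a minimal CUDA sketch (the kernel name and shapes are purely illustrative, nothing NVIDIA ships): only lane 0 of each warp does useful work, which is exactly the "throw away the other lanes" trade-off described above.

```cuda
// Illustrative only: "scalar" execution on a SIMT GPU by predicating out
// every warp lane except lane 0. The other 31 lanes still occupy the
// vector ALU; their throughput is simply discarded.
__global__ void scalar_on_simt(const float* in, float* out, int n)
{
    int warp = (blockIdx.x * blockDim.x + threadIdx.x) / 32;
    int lane = threadIdx.x & 31;

    // Only one lane per warp is live; the branch masks off the rest.
    if (lane == 0 && warp < n)
        out[warp] = 2.0f * in[warp];
}
```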
Quote: "Is RV770 MPMD, where each of the 10 clusters runs independently?"
Each MP cluster has its own program counter, which determines the MIMD (MKMD in shader terminology) level of parallelism, sort of.
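A rough way to picture the per-cluster program counter claim (a sketch only; the kernel below is hypothetical): within one launch, blocks resident on different multiprocessors can follow entirely different code paths without slowing each other down, because control flow is tracked per multiprocessor/warp rather than globally.

```cuda
// Hypothetical kernel: even- and odd-numbered blocks take different paths.
// Divergence between blocks costs nothing extra, since blocks never share
// a warp and each multiprocessor tracks its own program counter(s).
__global__ void per_block_paths(float* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    if (blockIdx.x % 2 == 0)
        data[i] = data[i] * data[i];      // "program A" on even blocks
    else
        data[i] = sqrtf(fabsf(data[i]));  // "program B" on odd blocks
}
```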
Quote: "Too bad you are 1000% wrong... I mean come on, you didn't even bring any data to back up your claim."
That's because I can't disclose it.
What you really need to know is the profit margin of each segment. Enthusiast, while being only 4% of revenue (this number seems too low, btw, because workstation being only 5% of revenue is clearly wrong for NV today), generates a massive amount of profit because of much higher margins.
So back then the $80-$250 market made up 88% of the revenue...
While the enthusiast was only 4%...
Hmmm... who was wrong?
Quote: "So who is right - the one who dumps shitty AFR cards on that market, or the one who makes special products for it while also still using shitty AFR cards as a temporary solution? Don't answer, that's a rhetorical question."
How is that a rhetorical question when you have already judged the AFR cards as shitty?
Quote: "How is that a rhetorical question when you have already judged the AFR cards as shitty?"
You have another opinion? (As in you, not AMD, who may have its reasons for bailing out of high-end GPU production.)
What you're basically saying is this: "why earn $300 when you can earn $100". Do you really want to say something like this? Because that's clearly b.s.
The bolded part isn't true at all. I know this for sure. So all the other parts of that statement are wrong as well.
Doing more with fewer resources is uninteresting? Doing things that are impossible on an AFR system is uninteresting? That's certainly an interesting point of view. Maybe we should go back to the Voodoo days, since all that flexibility and programmability is uninteresting?
Nothing is tricky about the inefficiencies of AFR. The tricky part is when you try to avoid them. And in doing so you often lose that flexibility.
However, these servers don't use middle-class CPUs to achieve that, and they certainly aren't selling in the mainstream market. Why? If anything, we're seeing the opposite process with CPUs: more cores are getting integrated into one big chip. Have you ever thought about this?
A dual-chip card will always carry some logic that is redundant in a dual-chip configuration, which means its efficiency will always be lower than that of a single chip. A single chip will always have some algorithms where it beats dual chips because of the limitations of the AFR mGPU scheme.
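As a concrete example of the kind of algorithm that trips AFR up (a sketch with hypothetical kernel and buffer names, not any real renderer): anything where frame N+1 reads what frame N produced forces a cross-GPU copy and an implicit sync, so the "other" GPU can't simply run ahead.

```cuda
#include <cuda_runtime.h>

// Hypothetical "frame": every pixel depends on the previous frame's value.
__global__ void render_frame(const float* prev, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = 0.9f * prev[i] + 0.1f;
}

int main()
{
    const int n = 1 << 20;
    float *prev[2], *out[2];
    for (int dev = 0; dev < 2; ++dev) {
        cudaSetDevice(dev);
        cudaMalloc(&prev[dev], n * sizeof(float));
        cudaMalloc(&out[dev],  n * sizeof(float));
        cudaMemset(out[dev], 0, n * sizeof(float));
    }
    for (int frame = 0; frame < 8; ++frame) {
        int dev   = frame & 1;   // AFR: even frames on GPU 0, odd frames on GPU 1
        int other = dev ^ 1;
        cudaSetDevice(dev);
        // The inter-frame dependency forces a GPU-to-GPU copy that serializes
        // against the other GPU's pending work -- exactly the kind of stall
        // AFR is supposed to avoid.
        cudaMemcpyPeer(prev[dev], dev, out[other], other, n * sizeof(float));
        render_frame<<<(n + 255) / 256, 256>>>(prev[dev], out[dev], n);
    }
    for (int dev = 0; dev < 2; ++dev) {
        cudaSetDevice(dev);
        cudaFree(prev[dev]);
        cudaFree(out[dev]);
    }
    return 0;
}
```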
So you've saved some bucks on the die, but you've spent nearly twice as much on the card as a whole. Are you in the green after that?
What if you've missed the sweet spot and even the competitor's single-GPU card is faster than your mGPU card? If you have a GPU faster than the one you're using in your mGPU card, you may be able to use it in a new mGPU card (GTX 295 is an example, although not the best one); if not -- you're truly fucked.
AMD is leaving its high end dangerously open to a possibility like that.
It's a question of having a full line-up. AMD's line-up is missing the high end at the moment. Where NV will use two chips in the Quadro/Tesla market, AMD might need to use four, with appalling efficiency and costs. That's a possibility you should think about when you're speaking of multi-CPU servers.
And can someone explain to me why ATI earns zero on all these great small GPUs while NV earns nearly the same on that big ugly GT200, now selling in cards for less than $200? I've always had a problem with that pricing argument, since it's kind of always been "assumed" that RV770 is much better for AMD than GT200/b is for NV from a pricing point of view, but in reality I'm not seeing any results of this "greatness" in AMD's balance sheets -- ATI earned less in 1Q09 than it did in 1Q08, when all they had was RV670 against G92 and G80.
Nobody has to do anything. NVIDIA is doing what they believe will earn them money. AMD is doing the same. Whose way is best -- we don't know. But what everyone should consider is that NV's way is essentially nothing more and nothing less than AMD's way plus big GPU dies for the high-end/workstation/server markets. AMD has simply left that market segment.
It's funny that you say this right after you've said why single big GPUs are necessary after all.
So who is right - the one who dumps shitty AFR cards on that market, or the one who makes special products for it while also still using shitty AFR cards as a temporary solution? Don't answer, that's a rhetorical question.
The 8800 GTX cost me 650 euros in November '06, which at today's conversion rate should be ~$840.
Quote: "Each MP cluster has its own program counter, which determines the MIMD (MKMD in shader terminology) level of parallelism, sort of."
I'm pretty sure each cluster in ATI has a program counter per thread. I wonder what's the marketing name?
David, not to contest your points (well made), but I'd say that we should not undervalue the importance of having the performance crown...
Sure, bankrupting your company to achieve it would not be a sound strategy, but having such a crown can do a lot of good for your image and help your marketing teams sell your profitable mid-range and low-end segments better than tons of TV and magazine ads ever could, IMHO (it gives customers confidence in all products sold by that manufacturer).
Quote: "Don't throw any stuff at me, but to the application it doesn't matter if you use a dual-chip GPU, two GPUs or a dual-GPU board; it's AFR through SLI/CF anyway. Whether crappy or not, it was NVIDIA that first re-introduced the whole idea after the NV40 introduction (if memory serves well)."
4-way AFR is way crappier than 2-way AFR, and if we assume that AMD's 2-way AFR is going against NV's single GPU at the high end, then we're in a position where AMD's 4-way AFR is competing against NV's 2-way AFR.
Quote: "You have another opinion? (As in you, not AMD, who may have its reasons for bailing out of high-end GPU production.)"
I agree with you, but your rhetorical question seemed to be pointed in another direction, AMD and nVidia. And you obviously know that AMD had its reasons for this.
AMD doesn't have a GPU for the high end at the moment. It's that simple.
NV's problem right now is that it doesn't have a competitive GPU for the middle of the market. We're still in a G92 vs RV770 situation, almost a year later. That's the real problem for NV, not GT200 being slow or big, or some kind of abstract strategy.