Clearly I'm out of touch. Bit embarrassing really. I was actually planning on buying the 2GB HD5870 (even though I hate the cooler, damn it's yucky, and despite the fact that the performance is worryingly low), but that's because I always buy "excess" memory.
Jawed
Hi Jawed,
My post's only purpose was to present my point of view and to see if you agree with me...
I have no technology background, and I don't think you are out of touch at all...
I just thought that, since memory capacity is doubling every 2 years while resolution is changing at a much slower pace
(in the future we will see..., but I don't think
bezeliousfinity lol will change anything for the vast majority of the market),
doesn't it make sense for game engines to use the extra memory capacity (that their customer base has) over each cycle, for a given resolution target?
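As a rough back-of-envelope sketch of what I mean (all the numbers below are just my own illustrative assumptions, not real engine budgets): the render targets for a given resolution stay roughly fixed in size, while card memory keeps doubling, so the surplus that can go to textures/assets grows every cycle.

# Illustrative only: assumed per-pixel cost and card sizes, not real figures.
bytes_per_pixel = 4 * 4  # e.g. colour + depth + a couple of render targets
resolutions = {"1920x1200": 1920 * 1200, "2560x1600": 2560 * 1600}
card_memory_mb = [512, 1024, 2048]  # roughly one doubling per ~2-year cycle

for mem in card_memory_mb:
    for name, pixels in resolutions.items():
        fb_mb = pixels * bytes_per_pixel / 2**20
        print(f"{mem} MB card @ {name}: ~{fb_mb:.0f} MB of render targets, "
              f"~{mem - fb_mb:.0f} MB left for textures/assets")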
The thing that gets me about the performance of HD5870 is that it appears AMD is basically saying "that's it, we're bandwidth limited and GDDR5 won't go much, if any, faster".
I think that AMD made a very good decision with the 256-bit controller (if you consider the positives and the potential negatives of a 512-bit controller for ATI, it was definitely a good business decision...)
I perceive the AMD slides regarding GDDR5 differently.
This is a dangerous point because I strongly believe Larrabee is considerably more bandwidth efficient. So, either R900 is a total rethink in the Larrabee direction or AMD's fucked in 18 months. I don't care how big Larrabee is (whatever version it'll be on), I want twice HD5870 performance by the end of 2010.
About Larrabee I know very little.
From what I understand, Intel wants to control the whole GPGPU direction and secure the future CPU/GPU balance in their favor...
I really don't like Intel's practices on a lot of issues, but the guys are smart and this particular time they are very strong (you can see the experiments they are doing with their current CPU product line and with the upcoming 32nm one; they can certainly afford them right now, and it is the best time for them to test ideas/models/practices in the market...)
Larrabee, from what I can understand, is not going to be competitive from a performance standpoint with the future high-end GPUs that ATI & NV can make, but it doesn't have to be.
To control the GPU market, Intel has to stay in it for at least 2 engine mini-cycles (2+2 years) and must use a pricing model, business tactics and a marketing strategy that entice partners and consumers towards their solutions...
Reason aside, I am not optimistic that Intel will achieve this; that's why my original thought when I heard about Larrabee's GPU was that Intel would implement a custom
GPU socket solution lol and develop their strategy in that direction...
The dumb forward-rendering GPUs are on their last gasp if memory bandwidth is going nowhere.
Of course if AMD can make a non-AFR multi-chip card that doesn't use 2x the memory for 1x the effective memory, then I'm willing to be a bit more patient and optimistic.
Jawed
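Just to put rough numbers on the AFR memory point (again, a simplified illustration; real drivers duplicate most, though not necessarily all, resources across the GPUs):

# Simplified illustration: under AFR each GPU holds its own copy of the data,
# so installed memory doubles but effective (unique) memory does not.
per_gpu_mb = 1024
num_gpus = 2
afr_effective_mb = per_gpu_mb                # everything duplicated on each GPU
shared_effective_mb = per_gpu_mb * num_gpus  # what a true shared-memory design could expose
print(f"AFR: {num_gpus} x {per_gpu_mb} MB installed, ~{afr_effective_mb} MB effective")
print(f"Shared memory: ~{shared_effective_mb} MB effective")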
If you read a JPR report from 1-1.5 months back, JP was saying that nearly half of the new PCs in 2012 will be sold with multi-AIB GPU solutions (with the scaling done by Lucid Tech chip solutions, according to him) (lol).
I hope his prediction turns out to be wrong.
I don't see AMD/NV liking this direction...
I think that ATI & NV have the technical capability to make homogeneous multi-core designs like SGX543MP (I am not talking about the tile-based rendering method...) (that's how I see the progression of future ATI/NV shared-memory GPU designs), so they will not need Lucid for performance scaling (why should NV/ATI lose all the money that customers are going to pay for Lucid-based solutions when that money can go directly to NV/ATI?)