AMD: Southern Islands (7*** series) Speculation/ Rumour Thread

nVidia has become too unpredictable, but the Southern Islands cards should launch in late Q3 -> Q4.
Definitely 2011, though.

In fact, I bet the first 28nm chips we'll see on shelves will be Radeons.
 
Just figured this belongs here; I accidentally posted it into the R9xx thread since I didn't remember this thread existed :D

http://hardforum.com/showthread.php?t=1612784
Mark your calendars for July 16th! AMD and HardOCP are teaming up to deliver the community a GamExperience! Tourneys will be played. Raffles and tons of free stuff will be had. Free-for-all headshots will be made. Winners will be crowned. Losers will be chastised! VIP lounge for Tourney players, plenty of Fusion, Eyefinity, and Big Screen demo stations. And yes, we will give the HardOCP community their first hands-on GamExperience with AMD's next generation unannounced hardware. This event will be open to the public in Dallas, Texas. More details coming soon.
 
Not having as successful an API as nVidia's doesn't mean they haven't "embraced" GPU compute.

AMD's architecture is still very much graphics focused. The GDS is a good example of how they've addressed compute using the least expensive and least flexible approach. I don't think ECC or a large address space are necessary or even useful. However, future game engines will lean more heavily on compute where features like generic global memory caching would be useful.
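
To make the caching point concrete, here's a minimal OpenCL C sketch (a hypothetical 1D box filter, not taken from any actual engine) where neighbouring work-items repeatedly re-read overlapping global memory. With a generic read/write cache in front of global memory that overlap is captured automatically; without one, every load goes out to memory unless the programmer manually stages the window into __local memory.

// Hypothetical OpenCL C kernel: 1D box filter over a float buffer.
// Each work-item re-reads neighbours that adjacent work-items also read.
// A generic global-memory cache (Fermi-style L1) captures that reuse
// automatically; a cache-less design needs the window staged into
// __local memory (LDS) by hand to get the same effect.
__kernel void box_filter(__global const float *in,
                         __global float *out,
                         const int n,
                         const int radius)
{
    int i = get_global_id(0);
    if (i >= n)
        return;

    float sum = 0.0f;
    for (int k = -radius; k <= radius; ++k) {
        int j = clamp(i + k, 0, n - 1);
        sum += in[j];           // overlapping global reads
    }
    out[i] = sum / (2 * radius + 1);
}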

Does STREAM SDK ring a bell?

That bell stopped ringing years ago.
 
That bell stopped ringing years ago.
... when they changed over to the OpenCL standard.

Tools and libraries would be a good start.
http://developer.amd.com/zones/OpenCLZone/pages/toolsandlibraries.aspx :?:

Edit: PS, when are we going to get some damn juicy rumors?
The only thing out so far seems to be the OP, which is admittedly pure made-up-stuff-by-some-guy-on-the-internet.
Hell, we haven't even had anyone suggest ATI will use a ROP chip with 3 or 4 SP/TEX chips on the same card yet.
 
AMD's architecture is still very much graphics focused. The GDS is a good example of how they've addressed compute using the least expensive and least flexible approach. I don't think ECC or a large address space are necessary or even useful. However, future game engines will lean more heavily on compute where features like generic global memory caching would be useful.
GDS isn't even exposed to ocl.

Caching is a must.
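
For illustration, a minimal OpenCL 1.1 sketch of a global histogram (a hypothetical kernel, not from any SDK): the kernel language only knows the __private, __local, __constant and __global address spaces, so anything shared across work-groups has to round-trip through a __global buffer with atomics. There is simply no way to name the GDS from OpenCL.

// Hypothetical OpenCL 1.1 kernel: 256-bin global histogram.
// Per-work-group partials live in __local memory (LDS); the chip-wide
// accumulation can only be expressed through a __global buffer with
// atomics, whether or not the hardware has a GDS underneath.
__kernel void histogram256(__global const uchar *data,
                           const uint n,
                           __global uint *hist,        // 256 bins, zero-initialised
                           __local uint *local_hist)   // 256 bins of LDS
{
    uint lid = get_local_id(0);
    uint gid = get_global_id(0);

    // Clear the per-work-group partial histogram.
    for (uint b = lid; b < 256; b += get_local_size(0))
        local_hist[b] = 0;
    barrier(CLK_LOCAL_MEM_FENCE);

    // Accumulate into LDS (cheap, on-chip).
    if (gid < n)
        atomic_inc(&local_hist[data[gid]]);
    barrier(CLK_LOCAL_MEM_FENCE);

    // Flush to the global histogram -- the only portable way to share
    // results across work-groups in OpenCL.
    for (uint b = lid; b < 256; b += get_local_size(0))
        atomic_add(&hist[b], local_hist[b]);
}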
 
AMD definitely has embraced the GPGPU concept -- for comparison, NV still doesn't deploy OCL v1.1 in their public drivers. It is just that AMD gives priority (read: is betting everything) to the open standard, which is still lagging behind CUDA in adoption. But the other, more fundamental problem is really the underlying GPU architecture, as was already hinted at in the discussion, of which the most pressing need for change is the memory model.
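
The version gap is easy to check from the host side; a minimal C sketch (error handling omitted for brevity) that just prints what OpenCL version a device's driver actually reports:

/* Query CL_DEVICE_VERSION -- a string like "OpenCL 1.1 ..." -- for the
 * first GPU device on the first platform. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    char version[256];

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    clGetDeviceInfo(device, CL_DEVICE_VERSION, sizeof(version), version, NULL);

    printf("%s\n", version);
    return 0;
}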
 
Seriously? You expect AMD or anyone else to implement the CUDA runtime+compiler?

Not at all. CUDA isn't the point. Someone asked what else AMD needs to do on the compute front. Fermi+CUDA represent the current state of the art in feature support. OpenCL is lagging and is hardly a benchmark for progress. Wasn't expecting such an uproar to be honest. Thought it was pretty obvious that AMD could stand to benefit from more investment in GPU compute.

GDS isn't even exposed to ocl.

Caching is a must.

Yep, exactly.
 
AMD definitely has embraced the GPGPU concept -- for comparison, NV still doesn't deploy OCL v1.1 in their public drivers. It is just that AMD gives priority (read: is betting everything) to the open standard, which is still lagging behind CUDA in adoption.

That's a pretty charitable way of phrasing it. One could also say that, having tried and failed to win significant mindshare with their own standard, AMD let Apple come up with another as plan B, which, despite having been widely hailed as the vendor independent standard that the computing world was supposedly clamoring for, hasn't exactly managed to set the market on fire so far.

Does letting your competitors run the show and placing your bets on a clone equal embracing?

But the other, more fundamental problem is really the underlying GPU architecture, as was already hinted at in the discussion, of which the most pressing need for change is the memory model.

That could be fixed if the even more fundamental problem weren't that AMD has been living off scraps for years and doesn't have the muscle to break out of this cycle of having to rely on architectures that are neither fish nor flesh (like Llano).
 
Not at all. CUDA isn't the point. Someone asked what else AMD needs to do on the compute front. Fermi+CUDA represent the current state of the art in feature support. OpenCL is lagging and is hardly a benchmark for progress. Wasn't expecting such an uproar to be honest. Thought it was pretty obvious that AMD could stand to benefit from more investment in GPU compute.

99% of CUDA is exposed in OCL 1.1, so it's hardly lagging.

Apart from caching, AMD's got everything on the hw side.
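
As a trivial illustration of how directly the core CUDA model maps onto OpenCL C, here is a SAXPY kernel (a hypothetical example, not from either vendor's SDK): work-items stand in for threads, and get_global_id() replaces the blockIdx/threadIdx arithmetic.

// Hypothetical OpenCL C counterpart to the canonical CUDA SAXPY kernel.
__kernel void saxpy(const float a,
                    __global const float *x,
                    __global float *y,
                    const uint n)
{
    uint i = get_global_id(0);
    if (i < n)
        y[i] = a * x[i] + y[i];
}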
 
That's a pretty charitable way of phrasing it. One could also say that, having tried and failed to win significant mindshare with their own standard, AMD let Apple come up with another as plan B, which, despite having been widely hailed as the vendor independent standard that the computing world was supposedly clamoring for, hasn't exactly managed to set the market on fire so far.

Does letting your competitors run the show and placing your bets on a clone equal embracing?
They weren't the first, so using anything except an industry standard would have been monumentally stupid.

That could be fixed if the even more fundamental problem weren't that AMD has been living off scraps for years and doesn't have the muscle to break out of this cycle of having to rely on architectures that are neither fish nor flesh (like Llano).
Their discrete GPUs are very good. Their APUs are state of the art at this time.
 