AMD: Sea Islands R1100 (8*** series) Speculation/Rumour Thread

Yes, but I think it means existing IP for the basic blocks (GPU, CPU cores) and custom silicon for the rest (memory controller, interconnect, etc.).

But of course, that's just a guess on my part.
No. My definition is correct. ;)
 
Yeah, I should have used the GCN 1.1 moniker. But for that matter, who's to say Orbis isn't based on GCN 2.0, which I assume we're taking to describe the (possible) new IP launching at the end of 2013 in new high-end GPUs?

I seem to remember a comment made somewhere that GCN didn't support "interrupting" the unified shader pipeline, but that it was possible in GCN 2.0. (The implication for the PS4 was that under GCN2 Sony could achieve their 14+4 weirdness through the standard unified shader array, e.g. a plain 18-CU setup rather than a 14+4 split.)

Does that make sense to anyone?
 
AMD's roadmap has promised preemption and context switching at some point in the future. I don't think the roadmap gave any finer granularity as to what version of the architecture would first offer it.

edit:
And while I don't want to dredge any of the console stuff into this thread, I don't think many of the PS4 rumors point towards Orbis being the product that introduces it, rather, that it has workarounds for the lack of preemption. The standard "rumors are incomplete and often BS" disclaimer applies.
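To make the preemption point concrete, here is a toy scheduling simulation in Python. It is purely illustrative: the task names, tick counts, and round-robin quantum are all made up, and this is not a description of how GCN hardware actually schedules work. It only shows why a run-to-completion pipeline is a problem when long compute jobs share the GPU with latency-sensitive graphics work.

```python
# Toy model: why preemption matters when graphics and compute share hardware.
# Illustration only; task lengths and the quantum are hypothetical, and this
# does not describe real GCN scheduling.

def run_to_completion(tasks):
    """Each task runs until it finishes; a long compute job blocks everything."""
    timeline = []
    for name, length in tasks:
        timeline.extend([name] * length)
    return timeline

def preemptive(tasks, quantum=2):
    """Round-robin with a fixed time slice: long jobs can be interrupted."""
    timeline = []
    queue = [[name, length] for name, length in tasks]
    while queue:
        name, remaining = queue.pop(0)
        slice_len = min(quantum, remaining)
        timeline.extend([name] * slice_len)
        if remaining > slice_len:
            queue.append([name, remaining - slice_len])
    return timeline

tasks = [("compute", 6), ("graphics", 2)]
print(run_to_completion(tasks))  # graphics waits 6 ticks before it even starts
print(preemptive(tasks))         # graphics is done after 4 ticks
```

Without preemption, the only alternatives are workarounds like the ones the rumors describe: statically reserving hardware for one kind of work, or carefully sizing jobs so nothing runs too long.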
 
AMD's roadmap has promised preemption and context switching at some point in the future. I don't think the roadmap gave any finer granularity as to what version of the architecture would first offer it.

edit:
And while I don't want to dredge any of the console stuff into this thread, I don't think many of the PS4 rumors point towards Orbis being the product that introduces it, rather, that it has workarounds for the lack of preemption. The standard "rumors are incomplete and often BS" disclaimer applies.

But an increase in the number of command queues doesn't contradict a context-switching architecture. If anything, such a large increase suggests an even greater need for the CUs to switch automatically between rendering and compute tasks.
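A toy model can illustrate the queue-count point. Everything here is hypothetical (the queue contents, the per-tick issue rule, the CU count); it is not a description of Orbis or any real GCN part. It only shows that with more front-end queues, independent render and compute streams can be issued to a shared pool of CUs concurrently instead of serialising behind one another.

```python
# Toy model of multiple command queues feeding a shared pool of CUs.
# Illustration only; the one-dispatch-per-queue-per-tick rule and the CU
# count are assumptions for the sketch, not real hardware behaviour.

def dispatch(queues, num_cus):
    """Each tick, every non-empty queue issues one item, capped at num_cus."""
    queues = [list(q) for q in queues]
    ticks = 0
    while any(queues):
        busy = 0
        for q in queues:
            if q and busy < num_cus:
                q.pop(0)   # one wavefront issued from this queue this tick
                busy += 1
        ticks += 1
    return ticks

render  = ["draw"] * 8
compute = ["kernel"] * 8

# One queue: render and compute serialise through a single front end.
print(dispatch([render + compute], num_cus=4))   # 16 ticks
# Two queues: both streams issue in parallel to the same CUs.
print(dispatch([render, compute], num_cus=4))    # 8 ticks
```

In this sketch, adding queues raises issue concurrency while the CUs stay shared, which is consistent with the idea that more queues go hand in hand with CUs flipping between rendering and compute rather than being statically partitioned.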
 
But an increase in the number of command queues doesn't contradict a context-switching architecture. If anything, such a large increase suggests an even greater need for the CUs to switch automatically between rendering and compute tasks.
There's a console forum thread concerning the rumors surrounding how the PS4 does or does not reserve CUs.
It doesn't need to be brought into this thread.
 
There have been plenty of names floating around. Pick your fav: Thebe-J, Thebes, Kryptos-O, Kryptos, WANI, Basilisk, Cipher and Carrizo. At least one of them should match the PS4 APU codename. So far it seems that most sources have bet on Thebes.

For me it goes like so:

Thebe-J = AMD internal codename during development
Liverpool APU = Sony's own name for the APU

Kryptos is what I think might be the internal AMD codename for Durango's APU.

Probably wrong though.:LOL:
 
And what can be improved out of vanilla AMD blocks?
A customer might want to make something more flexible, enhance security, etc. I'm just saying it's possible and within the structure of the semi-custom business. Of course the customer must pay for any customizations.
 
The implication of AMD winning all three platforms (especially with Xbox Next and PS4 both being GCN) is that each dollar spent on driver development has that much more bang. We should see much better Windows and Linux drivers as a result.

http://theinquirer.net/inquirer/new...tware-optimisations-to-drive-chip-performance

Whoever is responsible for delaying Sea Islands is a fiscally smart chap: the game bundling along with driver improvements should see the 7xxx series through this year easily, for much less cost than releasing totally new hardware.

I've always been somewhat surprised that the Radeon 79xx's have been lagging the GeForce 680/670's despite their much better memory bandwidth; I imagine that Catalyst 13.2 is only the beginning, and we'll see that gap close quite a bit over this year.
 
I've always been somewhat surprised that the Radeon 79xx's have been lagging the GeForce 680/670's despite their much better memory bandwidth; I imagine that Catalyst 13.2 is only the beginning, and we'll see that gap close quite a bit over this year.
Hmm? Up the resolution a bit: at 2560x1600 the 7970 already matches the 680. The 7970 GHz Edition is ahead of the 680 even if you don't go that high on resolution.
 
Obviously performance always varies from title to title, and it's not necessarily down to optimizations: some games just work better on one architecture, and others on the other.
That's why I prefer either checking specific games or an overall average (from a bigger test portfolio than Anand uses):
http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_Titan/27.html
[Chart: relative performance at 2560x1600, from the TechPowerUp Titan review]
 
Roger that. Anyhow, the Radeons seem to have more room to grow and will have more attention focused on them at the driver level from this point on.
 
Thanks for disabusing me of that notion. I'm also impressed that CrossFire gets so much closer to 2x single-card performance than SLI does this generation.
 