We seem to have a new tag: 17.3% overclock.
938.4 MHz Durango GPU clock confirmed.
That would be 1.44 teraflops. Sounds good.
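For anyone checking the math, this is where those numbers come from, assuming the rumoured 12-CU / 768-ALU Durango GPU at an 800 MHz base clock (those figures come from the leaks, not the tags):

```python
# Quick sanity check on the tag numbers -- assumes the leaked
# 800 MHz / 768-ALU Durango GPU spec.
base_clock_mhz = 800
overclocked_mhz = base_clock_mhz * 1.173       # = 938.4 MHz
alus = 768
flops = alus * 2 * overclocked_mhz * 1e6       # 2 FLOPs per ALU per cycle (FMA)
print(flops / 1e12)                            # ~1.44 teraflops
```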
Lol, I have this tongue-in-cheek theory that in-the-know mods plant true information in the thread tags, amongst the nonsense.
17.3% overclock is a new tag, I believe; it wasn't there before. 15% OC was, but not 17.3%.
Don't take it too seriously.
Haha, I didn't even notice the tags. 17.3% GPU overclock & 15% CPU overclock confirmed!!!!
Even if the GPU were overclocked 17.3%, that would still be disappointing; the rumoured GPU just doesn't cut it.
I don't know, I like the design a lot.
Are you sure that's up to Crytek and not EA?
Please use the established "double-confirmed" expression if you intend your comment to be tongue-in-cheek in nature; that cuts down on misinterpretations and thus unnecessary noise...
I don't know, I like the design a lot. A part of me is kinda actually hoping that the GPU isn't notably stronger than what people think, just to see the reaction when the exact GPU we know about now proves to be a whole lot more capable than people were ever giving it credit for.
Edit: Thanks for telling me about the tags. Now you're going to have me tag-watching in every thread of significance. I'm having a tinfoil hat built as we speak.
Durango having a tile-based design has implications beyond the console itself. I don't think Durango was designed with just a console in mind. It's a power- and bandwidth-friendly design, which means it's potentially well suited for smaller devices. If MS's phones, tablets and set-top boxes all use derivatives of the Durango design, it will allow 720 titles to be easily ported to those devices.
Furthermore, Durango, as a robust tile-based design with its DMEs and a fast, wide I/O bus with compression logic, looks like a very hardy streaming device. Before Durango finishes rendering the last tile of a frame, the vast majority of that frame can already be in transit to a peripheral device. Compressing tiles and sending them off-chip means Durango can be faster and more I/O-friendly than more conventional PC hardware when servicing other devices.
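To illustrate what that kind of overlap might look like (a toy sketch only; none of these names come from the leaks, and real hardware obviously isn't Python threads):

```python
# Toy sketch of the streaming idea above -- not actual Durango code, every
# name here is hypothetical. The point is just that compressed tiles can be
# leaving the chip while later tiles are still being rendered.
import queue
import threading
import zlib

def render_tile(tile_id):
    """Stand-in for the GPU producing one tile of the frame."""
    return bytes([tile_id % 256]) * 4096  # fake 4 KB tile

def send_to_peripheral(payload):
    pass  # placeholder: e.g. write to a socket or a USB endpoint

def stream_tiles(out_queue):
    """Stand-in for a DME compressing tiles and pushing them off-chip."""
    while True:
        tile = out_queue.get()
        if tile is None:              # sentinel: frame finished
            break
        compressed = zlib.compress(tile)  # compression logic on the bus
        send_to_peripheral(compressed)

tiles_in_flight = queue.Queue()
streamer = threading.Thread(target=stream_tiles, args=(tiles_in_flight,))
streamer.start()

for tile_id in range(64):                      # render the frame tile by tile
    tiles_in_flight.put(render_tile(tile_id))  # earlier tiles are already
                                               # in transit at this point
tiles_in_flight.put(None)
streamer.join()
```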
Durango doesn't look like it was designed to be cheaper. Its console-specific design looks like it's meant to efficiently service the broader feature set that next-gen consoles will offer as multimedia devices.
But this is how I see Durango, which doesn't mean much.
I've asked about it explicitly, and it's not designed for tile-based deferred rendering, though I'm sure you could implement a TBDR on it.
And what is your source for this? MS's own patents pretty explicitly suggest otherwise, and by the sound of it they explain a lot about the stuff we do know. Maybe the 'deferred' part is what's off? The patent below goes into detail about the procedure for rendering tile-based content and where this method's gains come from, and it seems to mesh well with what people like Arthur on GAF said months back. Additionally, the patent seems to directly allude to leaning on both the eSRAM and the display planes for this methodology.
Here is the patent link again for the new page... this link is a bit better actually, since it has the diagrams:
http://www.faqs.org/patents/app/20130063473
I read through it, and it sounds like there could be pretty considerable bandwidth and processing advantages to rendering things on a tile-depth basis, as opposed to simply using tiles to construct layers and then processing those layers on the GPU. This (new?) method could possibly explain why the eSRAM is targeted at such low latency, and the murmurs from insiders about an exceptionally efficient GPU. It may not simply be a generic GCN setup making it "efficient", as many have asserted. It sounds like the more meaningful efficiency gains may come instead from (or rather, in addition to that) the way the content layers are being processed.
Someone can correct me here, but I think in the typical approach to tile-based rendering support, even for stuff like the PRT support in AMD's recent hardware, the GPU waits around for the full layer to be stored in memory before processing it all at once. That leaves the GPU sitting there with nothing to do in the meantime, so your GPU efficiency is entirely latency-bound.
In the manner employed by the patent, instead of the GPU waiting until an entire layer's worth of tiles is ready, it handles the processing on a per-tile basis as those tiles are stored in memory. The result is that your GPU processing is bound by the latency of the eSRAM, which is reportedly extremely low (which is a good thing). At least, that's what it sounds like to me.
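A rough sketch of the contrast I mean, with entirely hypothetical names (real hardware obviously doesn't work like this; the only point is where the waiting happens):

```python
# Hypothetical illustration of layer-at-once vs. per-tile processing.
# wait_for_tile() and shade() are placeholders, not anything from the patent.

def process_layer_all_at_once(layer_tiles):
    # Conventional approach (as I understand it): wait until the whole
    # layer has landed in memory, then process it in one go. The GPU
    # idles for the entire transfer, so efficiency is latency-bound.
    buffered = [wait_for_tile(t) for t in layer_tiles]  # GPU idle here
    return [shade(t) for t in buffered]

def process_per_tile(layer_tiles):
    # Patent-style approach (my reading of it): process each tile as soon
    # as it lands in the eSRAM, so the only stall per tile is the eSRAM's
    # reportedly very low latency.
    results = []
    for t in layer_tiles:
        tile = wait_for_tile(t)      # short wait: one tile, low-latency eSRAM
        results.append(shade(tile))  # overlaps with the next tile arriving
    return results

def wait_for_tile(t):
    return t  # placeholder for a memory transfer completing

def shade(tile):
    return tile  # placeholder for the GPU's actual work on a tile
```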
The image planes seem to play a meaningful role in this process too, so maybe I'll re-read that patent tomorrow and see if it adds anything interesting.