I don't think you want to use the xbox 360 rollout as a way to instill confidence
Glad I didn't.
I often wonder just how much of an efficiency advantage GCN has... well, Tom's Hardware benched the 7730. It's 384 GCN shaders at 800 MHz. With GDDR5 the bandwidth is almost exactly the same as the 6670's as well, to remove that variable.
It performs almost exactly the same as the 480-shader, 800 MHz VLIW5 6670. So there, the improvement per flop is about 25% for GCN: 384 GCN shaders perform like 480 VLIW5 shaders.
That sounds about right to me.
It is actually ~8% faster with both having GDDR5 (it's not a perfect match in bandwidth), so it's more like +35% per flop. But that's also highly dependent on the workload. Somewhat connected to this, another interesting thing is that with restricted bandwidth (both running at 900 MHz DDR3), the advantage of the GCN-based 7730 grows to 22%. Now we are at a +52% advantage per flop for GCN. Obviously the better cache architecture helps here.
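For what it's worth, here's a minimal sketch of the per-flop arithmetic behind those numbers (nothing assumed beyond the figures already quoted: both cards run their shaders at 800 MHz, so the raw FLOPS ratio is just the shader-count ratio):

    # Per-flop comparison of the 7730 (GCN) vs the 6670 (VLIW5).
    # Both run their shaders at 800 MHz, so relative FLOPS = shader-count ratio.
    SHADERS_7730 = 384  # GCN
    SHADERS_6670 = 480  # VLIW5

    raw_flops_ratio = SHADERS_7730 / SHADERS_6670  # 0.8 -> the 7730 has 20% fewer flops

    def per_flop_advantage(measured_perf_ratio: float) -> float:
        """Per-flop advantage of the 7730 given its measured performance vs the 6670."""
        return measured_perf_ratio / raw_flops_ratio - 1.0

    print(per_flop_advantage(1.00))  # equal performance         -> 0.25  (+25% per flop)
    print(per_flop_advantage(1.08))  # ~8% faster (GDDR5)        -> 0.35  (+35% per flop)
    print(per_flop_advantage(1.22))  # 22% faster (900 MHz DDR3) -> ~0.52 (+52% per flop)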
Good corrections. I hadn't really noticed the 6670 actually has a bit less bandwidth with GDDR5.
I also spotted this at GAF, which is interesting:
...
Now, looking up the A4-5000, it's 128 shaders at 500 MHz, which would put it at 128 GFLOPS. And it beats a 7900 GTX (~RSX, actually somewhat above it, as it's clocked at 650 MHz) in that specific bench, anyway.
I think you could very roughly say GCN might be about 2x as efficient per flop as RSX/Xenos, then (yes, I know Xenos is better than RSX; it's still not even as advanced as very old AMD cards now).
That's pretty impressive. 10x on the GPU advancement (in real terms, with efficiency factored in) should be a doddle for both consoles.
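As a rough sanity check on that back-of-the-envelope math, here is a minimal sketch (the shaders x clock x 2 flops formula is the standard one; the Xenos ~240 GFLOPS and Xbox One ~1.31 TFLOPS figures are the commonly quoted ballpark numbers, and the 2x per-flop factor is just the rough guess above, not a measured value):

    # GFLOPS assuming one fused multiply-add (2 flops) per shader per clock.
    def gflops(shaders: int, clock_mhz: float) -> float:
        return shaders * clock_mhz * 2 / 1000.0

    print(gflops(128, 500))  # A4-5000 GPU: 128 shaders @ 500 MHz -> 128 GFLOPS

    # "Real terms" advancement = raw flop ratio x assumed per-flop efficiency gain.
    XENOS_GFLOPS = 240.0      # commonly quoted figure for the 360 GPU
    XBOX_ONE_GFLOPS = 1310.0  # ~1.31 TFLOPS, commonly quoted
    print(XBOX_ONE_GFLOPS / XENOS_GFLOPS * 2.0)  # ~10.9x with the ~2x per-flop guess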
While better than Egypt, I still don't think that benchmark is a good measure of ALU performance. Shader complexity is still very low compared to modern AAA PC/console games.
I would think so too. I just wanted to post the same as you did. Wouldn't one think shader complexity would actually increase GCN's advantage?
It seems like older GPUs were designed more around throughput, and shaders got more complex as time went on.
The RAM increase was put to rest, and with good reason: it wasn't possible in the short time remaining without affecting the number of units to be delivered at launch. But it was pretty easy to see for me, since they advertised 8 GB at E3; going to 12 or 16 would require higher-density memory, and I don't know if the Xbox One was already using the highest available.
And possibly a hardware change if there were no higher-density memory. A bump in clock is far easier; hell, they can even do it when the console is already out. Sony had variable speeds on the PSP, and as time went on they changed from 266 to 333.
We don't really know why they didn't do it, or even if it was ever under serious consideration. *shrug*
Glad I didn't.
However, they didn't start producing 360s till the end of September or something (69 days before launch), so the situation should be much better.
Well this article gives us a hint. To quote:
"Building a console like Xbox One entails years of planning, supply chain management, purchasing agreements for components, etc and is for the most part locked at the time you start to put development kits into the hands of developers."
This is most likely why they didn't upgrade to 12 GB: specs were locked as soon as devs received the kits, and devs indeed received devkits, I quote, "well before the consoles were officially presented".
Of course DF said the same for months but it was ignored.
Because the 360 never doubled its RAM amount before release? The PS4 didn't double its RAM amount before release? That line has been proven false multiple times.
It should also be remembered that the 360 memory bump was in the pipeline more than a year before its introduction.
I think the RAM doubling of the PS4 came about as late as one would ever want to make such a change.
8 or 12 GB doesn't matter. MS is moving to a cloud-based development and compile method. So you dev toward your target HW, then load it from the cloud onto your device.
This is most likely why they didn't upgrade to 12 GB: specs were locked as soon as devs received the kits, and devs indeed received devkits, I quote, "well before the consoles were officially presented".
Well, that would contradict the ESRAM and GPU upclocks; Penello even stated a while back that Microsoft is no stranger to last-minute changes.