But can you comment on what your creative director said?
I think AMD's technology is more efficient nowadays. I was SO happy when the first rumours said MS would go AMD this gen...
It is also interesting to note that the 7790 is an 896-SP part clocked around 1 GHz with the aforementioned 128-bit bus, and it seems to be exactly half as powerful as the Tahiti GPU. That also goes in line with the rumoured 1792-SP/256-bit part for the 8870, which should match the Tahiti line in performance if the chip is configured the same way as this excellent Bonaire.
They also say Tahiti is not as efficient a gaming chip as Bonaire, due to the other compute goodies on board, so it's truly good to see that, with their console chips and their new GPUs, AMD has figured it out. Hopefully this will be reflected in the higher-end chips which are going to come out.
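For what it's worth, here is the rough arithmetic behind that "half of Tahiti" observation, a minimal sketch using the commonly quoted specs (896 SPs at ~1 GHz for the 7790, 2048 SPs at 925 MHz for the Tahiti-based 7970); the numbers are illustrative only.

```python
# Rough theoretical-FLOPS arithmetic behind the "half of Tahiti" observation.
# Numbers are the commonly quoted specs, used here purely for illustration.

def peak_gflops(shader_cores: int, clock_ghz: float) -> float:
    """Peak single-precision GFLOPS: each shader core can issue one FMA
    (2 floating-point ops) per clock."""
    return shader_cores * clock_ghz * 2

bonaire_7790 = peak_gflops(896, 1.0)     # ~1792 GFLOPS
tahiti_7970 = peak_gflops(2048, 0.925)   # ~3789 GFLOPS

print(f"7790 (Bonaire): {bonaire_7790:.0f} GFLOPS")
print(f"7970 (Tahiti):  {tahiti_7970:.0f} GFLOPS")
print(f"ratio: {bonaire_7790 / tahiti_7970:.2f}")  # ~0.47, i.e. roughly half
```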
If you take into account the alleged number of teraflops of the X1's GPU alone, Xbox One games looked out of this world.
But the part I bolded is extremely hard to do without using a system, because many things simply are not documented.
Seemingly innocuous things like changing how frequently the DRAM in a system is refreshed can have a 5-10% performance impact.
Your assumptions can be completely flawed.
You might assume that you will be GPU bound, but later discover you are actually CPU bound, or vice versa. Historically for games it's been far more common for the CPU to be the limiting factor, not the GPU.
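As a rough illustration of how you would actually check that assumption, here is a minimal sketch that compares measured CPU frame time against GPU frame time (from your profiler and GPU timestamp queries); the helper, the timings and the threshold are all made up for illustration.

```python
# Minimal sketch of classifying a frame as CPU- or GPU-bound from timings.
# cpu_ms / gpu_ms would come from your own profiler and GPU timestamp
# queries; the helper, numbers and threshold here are hypothetical.

def bound_by(cpu_ms: float, gpu_ms: float, tolerance: float = 0.1) -> str:
    """Whichever side is clearly the longer pole in the tent is the bottleneck."""
    if cpu_ms > gpu_ms * (1 + tolerance):
        return "CPU bound"
    if gpu_ms > cpu_ms * (1 + tolerance):
        return "GPU bound"
    return "roughly balanced"

print(bound_by(cpu_ms=14.2, gpu_ms=9.8))   # CPU bound
print(bound_by(cpu_ms=6.1, gpu_ms=15.5))   # GPU bound
```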
And it's not just about hardware; there is software between you and the system that you have little or no control over.
Game software isn't a trivial demo, it's complicated, there are a lot of moving parts.
You will be ALU bound in some circumstances and at those points flops are all that matter.
If your geometry carries too many attributes, those ALUs will be massively underutilized when processing vertices.
When you are rendering shadows you will be fill or possibly bandwidth constrained.
The same when doing the first pass of a deferred renderer.
Full screen effects are probably memory limited, but could be ALU bound depending on complexity.
Non-trivial compute jobs are usually memory bound.
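To put a rough number on the bandwidth side of this, here is a back-of-the-envelope sketch of the memory traffic for a deferred first pass plus a full-screen pass at 1080p; the G-buffer layout is an assumed example, not any particular engine's setup.

```python
# Back-of-the-envelope memory traffic for a deferred first pass plus one
# full-screen pass at 1080p. The G-buffer layout (four 32-bit colour targets
# plus 32-bit depth) is an assumption for illustration, not any engine's setup.

width, height = 1920, 1080
pixels = width * height
bytes_per_pixel = 4 * 4 + 4                         # 4 colour RTs + depth

gbuffer_write_mb = pixels * bytes_per_pixel / 2**20    # ~40 MB written
fullscreen_read_mb = pixels * bytes_per_pixel / 2**20  # read back at least once

per_frame_gb = (gbuffer_write_mb + fullscreen_read_mb) / 1024
print(f"G-buffer write per frame: ~{gbuffer_write_mb:.0f} MB")
print(f"At 60 fps that's ~{per_frame_gb * 60:.1f} GB/s before textures, "
      "overdraw, shadows or anything else touches memory")
```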
Flops are a useful metric, but only in context. I just hate boiling performance down to a single number, because I don't believe that you can.
From the leaked specs it would be my best guess that PS4 in most GPU limited situations would have an advantage performance wise, and certainly it has an advantage from a development standpoint.
What I would not want to guess at is how big that advantage is in real terms. I certainly don't think it will be as apparent as the 12 vs 18 numbers would seem to indicate.
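Here is a toy model of why: a frame is only as fast as its slowest resource, so extra ALU only helps the ALU-bound portion of the frame. All of the stage timings below are invented purely to illustrate the shape of the argument.

```python
# Toy model: a frame is only as fast as its slowest resource, so scaling ALU
# throughput only shrinks the ALU-bound portion. All timings (in ms) are
# invented purely to illustrate the shape of the argument.

def frame_ms(alu_ms: float, bandwidth_ms: float, fill_ms: float, cpu_ms: float) -> float:
    # Crude model: GPU stages add up, and the CPU can still be the longest pole.
    gpu_ms = alu_ms + bandwidth_ms + fill_ms
    return max(gpu_ms, cpu_ms)

baseline = frame_ms(alu_ms=6.0, bandwidth_ms=5.0, fill_ms=3.0, cpu_ms=12.0)
more_alu = frame_ms(alu_ms=6.0 / 1.5, bandwidth_ms=5.0, fill_ms=3.0, cpu_ms=12.0)

print(f"baseline: {baseline:.1f} ms, with 1.5x the ALU: {more_alu:.1f} ms")
# 14.0 ms vs 12.0 ms -- nothing like a 1.5x framerate improvement.
```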
But the devkits do have ESRAM
Ok, I didn't know that. But does the devkit have an APU inside? Or does it simply have an ESRAM memory pool on a PCIe card or a dedicated bus?
Is the architecture of the devkit motherboard custom, or is it a PC motherboard?
I thought that before the final devkits the previous versions were PC-ish...
Thank you for the explanation.
:smile: Pillin. Agreed. It is going to be a very interesting time finding out how these new consoles can actually perform, and a much more interesting generation than the previous one in terms of ports, graphics technology, resolutions, etc.
Ok, thanks.
So it's legitimate to think that the overall performance and programming tricks that will apply to the future XBO are already achievable with the current devkits.
That's a question that has made me think, because I remember Hideo Kojima complaining about how different the PS3 devkits were compared with the final hardware. My conclusion was that the more complex an architecture is, the more difficult it is to accurately replicate it with software measures, even using powerful hardware.
If it's a fact that the XBO devkits have "almost" final silicon, or even prototype PCB layouts, then it's quite reasonable to consider the look and feel of the games shown at E3 legitimate with respect to the final XBO hardware.
If the devkit has a different architecture because of the absence of final silicon, it will be more likely that we find differences in performance, and that could make the final look of the games somewhat unexpected (even for the better).
No doubt the first batch of games running on real consoles will attract a lot of nitpicking. That will be as intellectually engaging as it is interesting.
I'm really excited about the final performance levels, and how far the consoles will be from standard PC setups.
Was I hallucinating when I read your thread, or did you just answer my question with another question? This is a known trait people have here where I live. :smile:
I expected next-generation games to look very good. That has always happened (PS1->PS2 / PS2->PS3 / Xbox->X360). Why would it be any different this time?
Some of the results are staggering taking into account such a difference in hardware specs, and even if some of the ratios in the comparison are to be expected, the 2x+ difference in power never translates into 2x the framerate; sometimes the difference is negligible.
My 5850 is a 2 TFLOPS GPU (VLIW5 like the 6870), but it can be slower than a 7750 (0.82 TFLOPS) in newer games, I think simply because it's slow at tessellation. On TessMark the 7750 is also absurdly faster than the 6870 (and the Nvidia GPUs do even better). How heavily will next-gen console games rely on tessellation?
http://anandtech.com/bench/Product/512?vs=535
(look at civ5 and batman)
Like Shifty said, it's all about what the usable and unusable levels of the hardware are, taking into account the peculiarities of each hardware's architecture.
Theoretical flops are pointless and always have been. It's why CPUs aren't measured in MIPS or flops any more. Efficiency plays a big part in any architecture, and the flops are just there as an absolute ceiling on performance. Likely the PS4 will never be using the full 1.84 TF at any point in its lifetime, because you will never be able to keep every single ALU fed 24/7, no matter what your memory bandwidth or latency is. If the operation stays entirely inside the GPU's L2 then it could be more efficient, but even then you would not reach max compute flops.
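One way to put a number on "you can never keep every ALU fed" is a roofline-style arithmetic-intensity check; the 1.84 TFLOPS and 176 GB/s figures below are the widely leaked PS4 numbers, used purely as an example.

```python
# Roofline-style check: how many FLOPs a kernel must perform per byte of
# memory traffic before the ALUs, rather than bandwidth, become the limit.
# 1.84 TFLOPS and 176 GB/s are the widely leaked PS4 figures, used as an example.

peak_flops = 1.84e12      # single-precision FLOP/s
peak_bandwidth = 176e9    # bytes/s

break_even = peak_flops / peak_bandwidth
print(f"~{break_even:.1f} FLOPs per byte needed before you are ALU bound")

# A streaming kernel doing, say, 2 FLOPs per 8 bytes touched is nowhere close:
kernel_intensity = 2 / 8
achievable = min(peak_flops, kernel_intensity * peak_bandwidth)
print(f"such a kernel tops out around {achievable / 1e9:.0f} GFLOPS "
      f"({achievable / peak_flops:.1%} of peak)")
```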
Even with the same GCN cores in the next-gen consoles, general performance depends on memory bandwidth and memory latency. An L3 cache in a CPU can increase core performance by up to 20% on its own. For a compute-driven task that has to shuttle data between the CPU and GPU, it's likely Microsoft's approach is much more efficient. The PS4, however, has 4 GCN compute units set aside just for compute tasks, according to VGleaks. This will likely result in better compute performance if they can find a way to bypass the memory bottlenecks caused by high-latency GDDR5. Communication between the CUs and the CPU modules will likely have a higher cost on the PS4 due to GDDR5 latency, but the excess resources there will allow for more performance when there is no latency bottleneck.
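The latency point can be made concrete with Little's Law: the amount of memory traffic you must keep in flight to sustain a given bandwidth grows with latency. The bandwidth and latency values below are example figures, not measurements.

```python
# Little's Law: bytes in flight = bandwidth * latency. The longer the memory
# latency, the more outstanding requests (i.e. threads/wavefronts) you need
# to keep in flight to hide it. All numbers are example figures, not measurements.

def bytes_in_flight(bandwidth_gb_s: float, latency_ns: float) -> float:
    return bandwidth_gb_s * 1e9 * latency_ns * 1e-9

gddr5_like = bytes_in_flight(bandwidth_gb_s=176, latency_ns=300)  # off-chip DRAM
esram_like = bytes_in_flight(bandwidth_gb_s=102, latency_ns=50)   # small on-chip pool

print(f"GDDR5-like pool: ~{gddr5_like / 1024:.0f} KB must be in flight to saturate it")
print(f"ESRAM-like pool: ~{esram_like / 1024:.0f} KB in flight")
```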
Xbox One is an interesting architecture, and I do think it will be easier to extract efficiency from it, at least earlier in the generation, than from the PS4 when doing GPU-oriented compute tasks. That doesn't even mean it would be as powerful at the end of the gen, because devs will likely always find new uses for the ESRAM. The PS4, however, will always have more compute units and more potential performance, and will most likely have better GPU-oriented performance in that regard. The whole story won't be told until devs can start talking about this stuff.
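On the "new uses for the ESRAM" point, the basic constraint is its 32 MB capacity; a quick sketch of what a straightforward 1080p G-buffer would occupy (the layout is an assumed example) shows why developers will have to get creative with formats and tiling.

```python
# Does a straightforward 1080p G-buffer fit in a 32 MB on-chip pool?
# The layout (four 32-bit colour targets plus 32-bit depth) is an assumed
# example, not a statement about any actual engine.

width, height = 1920, 1080
colour_targets = 4          # 4 bytes per pixel each
depth_bytes = 4

gbuffer_mb = width * height * (colour_targets * 4 + depth_bytes) / 2**20
print(f"G-buffer: ~{gbuffer_mb:.1f} MB vs 32 MB of ESRAM")
# ~39.6 MB -- it doesn't fit as-is, so developers have to slim formats,
# tile the frame, or keep some targets in main memory.
```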