Who is saying PS4 will reach 2x its theoretical performance? (Carmack's claim was in regard to DX9 and previous-gen consoles.)
Thraktor said: By the way, I think those of you saying that this is a matter of the GPU simply not being powerful enough are somewhat off-base, as I would say this has a lot more to do with architecture than raw power.
The bulk of the computational work of SVOGI consists of cone traces over the octree representing direct and indirect light sources. Unlike more traditional rendering techniques, where the GPU simply wants as much bandwidth as possible, these cone traces are entirely latency-bound. This is an issue for the PS4, which has been designed around a big pool of GDDR5, because the latency of a GPU accessing GDDR5 memory is going to be crippling to something as latency-sensitive as SVOGI. (PC GPUs, of course, have the same issue, but when you've got something as powerful as two 680s, efficiency doesn't matter that much.)
The hierarchical memory architecture of Durango and Wii U is actually much more suited to SVOGI, as the embedded pool can be used as a very low latency buffer for the octree while the cone traces are being performed. Of course the Wii U wouldn't have the GPU grunt necessary for a full SVOGI-based UE4, but Durango should potentially be much more capable of the rendering technique than PS4, despite the apparent gap in power.
In fact, I mentioned a while back that Durango looked very much like a system specifically designed to run UE4. While this might not be exactly the case, I'd feel comfortable in saying that Epic would have had a fair influence on the design of the console (as they did with the Xbox 360's RAM, and that was when they were much less influential in the industry than they are now). If Epic are actually dropping SVOGI altogether, though, this could prove a big issue for Microsoft. The big (in fact close to the only) difference between the PS4 and Durango's designs is MS's decision to put 32MB on-die with the GPU, which meant there was a bit less space on there for GPU logic, and that they could get away with using a DDR3 main pool, which (they assumed) would be larger than a GDDR5 pool. The PS4's design would be easier for developers, and more suited to current rendering techniques, but Durango's design would give it an advantage with newer, latency-bound rendering techniques like SVOGI. Therein lies the problem: if the biggest middleware engine doesn't use the technique that Durango has seemingly been designed around, where does that leave MS?
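For anyone wondering where the latency argument in that quote comes from, here's a rough sketch of a single voxel cone trace. This is not Epic's code; every name here (Vec3, VoxelVolume, sampleVoxel, and so on) is made up for illustration, and the real thing runs in a shader, but the memory access pattern is the point:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec4 { float r, g, b, a; };

// Stand-in for the prefiltered voxel mip chain built over the sparse octree.
struct VoxelVolume {
    // Returns a filtered radiance/occlusion sample at a position and mip level.
    // In a real SVO this walks node pointers from the root to find the brick,
    // i.e. a chain of dependent memory reads -- the latency-bound part.
    Vec4 sampleVoxel(const Vec3& pos, float mipLevel) const;
};

// Marches one cone through the volume with front-to-back compositing.
Vec4 coneTrace(const VoxelVolume& volume, Vec3 origin, Vec3 dir,
               float coneAngle, float maxDist, float voxelSize)
{
    Vec4 accum{0, 0, 0, 0};
    float dist = voxelSize;                     // skip self-intersection
    while (dist < maxDist && accum.a < 1.0f) {
        // Cone footprint grows with distance; pick the matching mip level.
        float radius = dist * std::tan(coneAngle * 0.5f);
        float mip = std::log2(std::max(1.0f, 2.0f * radius / voxelSize));

        Vec3 p{origin.x + dir.x * dist,
               origin.y + dir.y * dist,
               origin.z + dir.z * dist};

        Vec4 s = volume.sampleVoxel(p, mip);    // scattered, pointer-chased read

        // Standard front-to-back alpha compositing.
        float w = (1.0f - accum.a) * s.a;
        accum.r += w * s.r; accum.g += w * s.g; accum.b += w * s.b;
        accum.a += w;

        dist += std::max(radius, 0.5f * voxelSize);  // step with the footprint
    }
    return accum;
}
```

Each sample lands somewhere effectively random in a sparse structure, so caches miss and the GPU sits on the round-trip to memory, which is why a small low-latency embedded pool arguably helps here more than raw GDDR5 bandwidth.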
And why exactly is it no longer valid for DX11 and PS4/X720?
I am not saying 2 times, but let's say 1.5?
If you're saying 1.5, that means the 680 will be the same or better in every instance of a multiplatform game comparison, so you've basically lost your 'bet' already.
Nvidia, at their last presentation, said that the 8800 GTX was only 40-50% faster than PS3/X360, yet in every case the 8800 GTX has much better performance in current-gen games, up to twice as high, so where is that magic optimization?
http://livedoor.blogimg.jp/hatima/imgs/b/3/b382e1f7.jpg
I wouldn't read too much into this slide; Nvidia is only speaking about GFLOPS and it's not the best metric for comparing GPUs. Look at the comparison between NV20 and NV2A: it gives the impression that the latter is two times more powerful, and we know that's far from the truth. An 8800 GTX is about 350 GFLOPS (or 500 if you're counting the invisible dual-issue MUL) vs ~250 GFLOPS for Xenos/RSX, but in terms of ROPs and texture units the gap is bigger.
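For reference, the rough arithmetic behind those figures; this is a back-of-envelope sketch using the public specs and the usual flops-counting conventions, not a benchmark:

```cpp
#include <cstdio>

int main() {
    // 8800 GTX: 128 stream processors at a 1.35 GHz shader clock.
    double g80_madd = 128 * 1.35 * 2;  // MADD = 2 flops/clock -> ~345.6 GFLOPS
    double g80_mul  = 128 * 1.35 * 3;  // counting the dual-issue MUL -> ~518.4

    // Xenos: 48 unified ALUs, vec4+scalar MADD (~10 flops/clock), 0.5 GHz.
    double xenos = 48 * 10 * 0.5;      // ~240 GFLOPS

    // These match the ~350/500 vs ~250 figures above, and say nothing about
    // ROPs, texture rate or bandwidth -- which is exactly the post's point.
    printf("8800 GTX: %.0f (MADD) / %.0f (MADD+MUL) GFLOPS\n", g80_madd, g80_mul);
    printf("Xenos:    ~%.0f GFLOPS\n", xenos);
    return 0;
}
```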
Can we have a conclusion? Since PS4 may reach 2x of its theoretical performance, we may see the Unreal 4 Infiltrator demo run on PS4 very close to the PC demo. Someday, yes?
It's impossible to be faster than the theoretical performance. I think you mean that console games have traditionally made more efficient use of their theoretical performance.
I wonder, if DX11 cards have hardware that supports tessellation, then is it possible DX12 GPUs will feature SVO GI? Nvidia is going with stacked RAM, which should help with latency if that's one of the contributing factors to enabling the feature. It would make PC games pretty awesome too.
SVOGI acceleration strikes me as an oddly specific thing to include on a GPU. If developers want to use it, they can program shaders just like everyone else using every other lighting model out there.
http://www.videocardbenchmark.net/high_end_gpus.html
GeForce 680 vs Radeon 7970M
More like 32-35% slower, with a much more advanced memory subsystem and an 8-core CPU. It's not very fast, but show me a PC game that uses 8 cores.
Leaving aside the fact that Passmark is full of shit (they show a GTX 570 beating a GTX 590 and a GTX 670 beating a GTX 690, so they clearly have no SLI support, and their CPU bench is obviously wrong, showing nearly 100% Hyper-Threading scaling):
Even by Passmark's own numbers the GTX 680 is 1.477 times as fast, that is 47.7% faster, not 35%!
The GPU memory system is not more advanced!
Also, if you knew anything about CPUs, you'd know that an 8-core 1.6-2.0 GHz CPU with no more than half the IPC of an Ivy Bridge is not going to be faster than a 4-core IVB running at around twice the clock speed!
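To put rough numbers on that claim (the relative IPC figure here is an assumption for illustration, roughly half of Ivy Bridge per clock for Jaguar, not a measured value):

```cpp
#include <cstdio>

int main() {
    // cores * clock (GHz) * relative IPC = crude peak-throughput units
    double jaguar8 = 8 * 1.8 * 0.5;  // 8 Jaguar cores @ ~1.8 GHz, ~0.5x IPC
    double ivb4    = 4 * 3.4 * 1.0;  // 4 Ivy Bridge cores @ ~3.4 GHz, 1.0x IPC

    printf("8x Jaguar:     %.1f units\n", jaguar8);  // ~7.2
    printf("4x Ivy Bridge: %.1f units\n", ivb4);     // ~13.6
    // Even with perfect 8-core scaling, the console CPU lands at roughly half
    // the desktop quad's aggregate throughput on these assumptions.
    return 0;
}
```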
Because DX11 is much more efficient and built around multithreading. You can find examples here, or in Repi's BF3 presentation, of how they managed to decrease the number of draw calls in DX10+ compared to DX9, and how much better the multithreading works. There's more to DX11 as well, like atomics and better use of the shader cores, etc.
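For the multithreading part, this is roughly what DX11's deferred contexts look like in practice; a minimal sketch that assumes the device and immediate context already exist, with all real draw state and error handling omitted:

```cpp
#include <d3d11.h>
#include <thread>

// Record draw calls on a deferred context from a worker thread, then play
// them back on the immediate context -- the DX11 answer to the DX9 problem
// of funnelling every draw call through a single thread.
void recordAndExecute(ID3D11Device* device, ID3D11DeviceContext* immediate)
{
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);

    ID3D11CommandList* commandList = nullptr;
    std::thread worker([&] {
        // A real renderer would bind shaders, buffers and state here.
        deferred->Draw(3, 0);
        deferred->FinishCommandList(FALSE, &commandList);
    });
    worker.join();

    // Cheap playback of the pre-built command list on the main thread.
    immediate->ExecuteCommandList(commandList, FALSE);

    commandList->Release();
    deferred->Release();
}
```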
Why PS4/X720? Because they have an almost identical architecture to a current PC, whereas PS3 and X360 were completely different, especially PS3, so you had more ways to optimize the engine/assets around them.
Oh boy, I don't even know how to respond to that... If you take the 7850 (7970M) as the base for your calculations (100%), the GTX 680 has 147% of its measured performance... But if you take the GTX 680 as the 100% base, the 7850 has something like 65% of its perf. Either way, the GTX 680 IS NOT 2x faster.
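Since the two posts above are just flipping baselines, here's a quick sanity check using the Passmark ratio quoted earlier:

```cpp
#include <cstdio>

int main() {
    double ratio = 1.477;  // GTX 680 / 7970M, from the Passmark numbers above
    printf("680 vs 7970M: %.1f%% faster\n", (ratio - 1.0) * 100.0);  // 47.7%
    printf("7970M vs 680: %.1f%% of its perf\n", 100.0 / ratio);     // ~67.7%
    return 0;
}
```

So "147% of the 7850" and "roughly two thirds of the 680" describe the same gap, and neither is anywhere near 2x.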
Regarding Passmark - I don't claim it's the one and only proper benchmark in the universe... But I can literally swamp you with test results that confirm the relative performance of both GPUs - as they say, it's all on the internet.
And I am not claiming that 8 Jaguar cores are faster than 4 Ivy Bridge cores, but show me a PC game that has an i5 as a minimum requirement. It's usually an i3 or even a Core 2 Duo and the AMD equivalent, simultaneously used for various tasks and services. Therefore what I am claiming is that a Jaguar-powered console (PS4/X720), albeit less powerful, will be more capable than an i5-powered PC.
But there are still multiple configs, OS overhead, driver overhead, gimped memory. And the DX11 evolution, however great, is not enough to bring PC optimisation anywhere near console level.
And this time there will be no main disadvantage of the console environment - which was not having enough memory, which prevented multiplatform parity more than anything else.
Question about SVOGI... does anyone know how sensitive that technique is to latency? Someone on GAF is suggesting the low latency of Durango's eSRAM may help it run UE4's SVOGI better than PS4, since (he claims) SVOGI is highly sensitive to latency, of which GDDR5 is thought to have plenty compared to the eSRAM in Durango.
Any thoughts on such a claim from the tech heads here?
Here is the specific post I'm referring to, for reference:
http://www.neogaf.com/forum/showthread.php?t=531771&page=4
Light doesn't just illuminate objects and cast shadows. Just as in real life, it bounces. Willard points to a red carpet in a new room, adjusting the time of day so more sunlight enters the room - the net result being that the walls gradually become more illuminated with a red tone as more light bounces onto the surroundings. Epic is using a voxel-based approach to indirect lighting - somewhat intensive in terms of RAM but with decent performance, on high-end hardware at least. It appears a little reminiscent of the light propagation volumes developed by Crytek for its own CryEngine 3 middleware.
No one's claiming PCs are going to be as efficient as consoles. The only claim being made is that DX11 improves the situation over DX9. And since Carmack's statement was made in relation to DX9, it can't be used as a measure of PC efficiency in future games.
And regarding OS overhead, exactly how much power do you think it takes to run Windows in the background? Bear in mind Windows 8 can run quite happily on a tablet. And don't forget that, according to rumours, the next-generation consoles also dedicate a significant portion of their CPU resources to general OS and system-level operations.