Is UE4 indicative of the sacrifices devs will have to make on consoles next gen?

mighty 680 and we already see drops to very low fps, and when the action starts Jen quickly turns off the demo ;). When these compute-heavy games ship, if anything I expect 680-class hardware to lag behind Orbis because of

Well this just shows you don't really have much of a clue what you're talking about.

PCI latency,

This has no effect on graphics rendering performance and is only relevant to compute if the compute jobs are required to run synchronously with gameplay. If they are, then GPGPU isn't going to be an option on the PC anyway and these jobs will likely run on the CPU or IGP.

non-unified memory

Why is this going to slow down rendering tasks? It might make development easier but if anything, contention between CPU and GPU demands on memory will reduce performance, not improve it.

multithreaded rendering

Do you even understand the point you are trying to make there? Because I certainly don't.

lower compute dispatcher count

This is tied into your first point and, as with that point, has no effect on rendering. And if the compute work is done elsewhere than on the discrete GPU this is irrelevant anyway. Bear in mind high-end PC CPUs (the kind paired with a 680) have 2-4x the "compute" ability of the CPU in Orbis.

draw call issues, and in compute these will be exposed badly

Draw calls have always been an issue for PCs in comparison to consoles, but it's never resulted in a doubling of the raw throughput of a console GPU. If anything it's more of a CPU limitation.

GAF pc "experts" already even downplaying 8 vs 2GB, well they can try current gen pc port on card with 128MB...

2GB is to 8GB as 128MB is to 512MB. How many modern GPUs sport 512MB? The correct question to ask would have been how current-gen games run on 512MB cards. Still not great, but let's at least construct our analogies correctly. And yes, the memory size difference will be an issue for 2GB GPUs further down the line. It won't be apparent straight away though, and it also won't make up for a 2x core performance advantage.
 
According to John Carmack, a closed console environment can boost its hardware efficiency by 100%.
https://twitter.com/ID_AA_Carmack/status/50277106856370176

That quote was in the context of DX9 and also very late into the console generation. Don't expect anything like that in the first year of a console's launch using DX11 as the standard.

Just look at how early PS360 games performed next to comparable PC hardware in 2006 for evidence of that.
 
http://www.videocardbenchmark.net/high_end_gpus.html

GeForce 680 vs Radeon 7970M

More like 32-35% slower

No. On paper a stock 7970 has >2x the shader and texturing throughput of Orbis, ~70% more memory bandwidth (accounting for CPU demands on memory on the console), and admittedly only marginally greater fill rate and geometry throughput.
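For anyone who wants to check the arithmetic, here's a rough back-of-the-envelope sketch using the usual GCN peak-rate formulas. The unit counts and clocks are the commonly quoted ones, and the ~20GB/s CPU share of Orbis bandwidth is an assumption, not a confirmed figure:

```cpp
// Back-of-the-envelope GCN peak rates. Assumptions: 64 ALU lanes and 4 texture
// units per CU, 2 FLOPs per lane per clock (FMA); Tahiti (stock 7970) = 32 CUs
// @ 0.925 GHz, Orbis = 18 CUs @ 0.8 GHz.
#include <cstdio>

struct Gpu { const char* name; int cus; double ghz; };

int main() {
    const Gpu gpus[] = { {"Radeon 7970", 32, 0.925}, {"Orbis", 18, 0.800} };
    for (const Gpu& g : gpus) {
        double gflops  = g.cus * 64 * 2 * g.ghz; // peak ALU throughput
        double gtexels = g.cus * 4 * g.ghz;      // peak bilinear texel rate
        std::printf("%-12s %7.1f GFLOPS  %6.1f GTex/s\n", g.name, gflops, gtexels);
    }
    // Prints ~3788.8 vs ~1843.2 GFLOPS and ~118.4 vs ~57.6 GTex/s, i.e. the
    // ">2x" shader/texturing gap. Bandwidth: 264 GB/s on the 7970 vs roughly
    // 156 GB/s left for graphics on Orbis if the CPU takes ~20 GB/s of the
    // shared 176 GB/s pool (assumed), which is where "~70% more" comes from.
    return 0;
}
```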

A 680 is faster than a stock 7970 in benchmarks but being a different architecture we can't compare to Orbis on raw specifications.

Thus a 680 is vastly faster relative to Orbis than your comparison suggests. KKRT's "50% slower" (half as fast) is much more in line with reality. The benchmarks you've linked to are just a single example and are in no way stressing the GPUs to their maximum potential as next-generation games would. Double the resources = double the peak performance. This is a fact; if a benchmark doesn't show it then the benchmark isn't fully utilising the available hardware.

with a much more advanced memory subsystem

Much more advanced? How do you figure that? You're talking 192GB/s dedicated to graphics + 25.6GB/s dedicated to the CPU compared with 176GB/s shared between the two. Being unified has its advantages, but "much more advanced" is a pretty big stretch of the imagination.

and an 8-core CPU. It's not very fast, but show me a PC game that uses 8 cores

What on Earth does that matter? More cores are inferior to fewer cores when total power is equal. And total power is not equal: a modern Intel quad-core / AMD octo-core is considerably more powerful than the 8 Jaguar cores in Orbis.
 
http://www.videocardbenchmark.net/high_end_gpus.html

GeForce 680 vs Radeon 7970M

More like 32-35% slower, with a much more advanced memory subsystem and an 8-core CPU. It's not very fast, but show me a PC game that uses 8 cores.

The 7970M is clocked at 850MHz and has 20 CUs, compared to the 18 CUs and 800MHz of the PS4 GPU.
Also, in the test you posted the 7970M is actually slower by 55%.

Crysis 2/3 use 8 cores/threads, BF3 uses 8 cores/threads, Total War games use 8 cores, and 8 Jaguar cores have the efficiency of 2-2.5 Sandy Bridge cores clocked at 3.2GHz.
 
Don't think we should have this PS4 vs PC debate here. If someone could split it out that'd be good.

One thing that struck me after watching Infiltrator again is that it has that 'off-screen captured' look to it. You know how games can sometimes actually look better when filmed off-screen? Everything's low contrast, details a bit smudged out, but overall it seems better than when you finally see the direct-feed footage. Kind of like how your brain is being fooled when you can't see every edge and pixel in detail.
 
No. On paper a stock 7970 has >2x the shader and texturing throughput of Orbis, ~70% more memory bandwidth (accounting for CPU demands on memory on the console), and admittedly only marginally greater fill rate and geometry throughput.

A 680 is faster than a stock 7970 in benchmarks but being a different architecture we can't compare to Orbis on raw specifications.

Thus a 680 is vastly faster relative to Orbis than your comparison suggests. KKRT's "50% slower" (half as fast) is much more in line with reality. The benchmarks you've linked to are just a single example and are in no way stressing the GPUs to their maximum potential as next-generation games would. Double the resources = double the peak performance. This is a fact; if a benchmark doesn't show it then the benchmark isn't fully utilising the available hardware.



Much more advanced? How do you figure that? You're talking 192GB/s dedicated to graphics + 25.6GB/s dedicated to the CPU compared with 176GB/s shared between the two. Being unified has its advantages, but "much more advanced" is a pretty big stretch of the imagination.





What on Earth does that matter? More cores are inferior to fewer cores when total power is equal. And total power is not equal: a modern Intel quad-core / AMD octo-core is considerably more powerful than the 8 Jaguar cores in Orbis.


What you're claiming is not only a stretch of the imagination but also not true in terms of raw specs (apart from texture fillrate).

Geforce 680
Memory Bandwidth: 192.256 GB/sec
FLOPS: 3090.432 GFLOPS
Pixel Fill Rate: 32192 MPixels/sec
Texture Fill Rate: 128768 MTexels/sec

Radeon 7970M (7850) (similar to Orbis)
Memory Bandwidth: 176 GB/sec (someone please clarify if it's affected by CPU bandwidth)
FLOPS: 1761.28 GFLOPS
Pixel Fill Rate: 27520 MPixels/sec
Texture Fill Rate: 55040 MTexels/sec


And since when is real-life performance (as shown in benchmarks) no longer important? I can assure you that Orbis will be much more stressed and better utilised than any PC card.
Regarding the CPU - there is not even one PC game that takes full advantage of those powerful PC CPUs. Again - in the PS4 this fairly simple, low-power 8-core will be pushed to its limits.

As for the advanced memory in the PS4 - when I think about PC memory, how every bit of GFX data is duplicated between DDR3 and GFX memory - it's not very efficient nor advanced...

Also, in the test you posted the 7970M is actually slower by 55%.

In relation to what?
The 680 has 1.5x the performance of the 7970M, hence the 7970M has approx 66% of the 680's performance.
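For clarity, the "32-35% slower" and "50% slower/faster" figures are largely the same gap measured from different baselines; taking the ~1.5x benchmark ratio above as given:

```latex
\frac{\text{7970M}}{\text{GTX 680}} = \frac{1}{1.5} \approx 0.67
\quad\Longrightarrow\quad
\text{the 7970M is} \approx 33\%\ \text{slower, while the 680 is} \approx 50\%\ \text{faster.}
```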
 
Well this just shows you don't really have much of a clue what you're talking about.



This has no effect on graphics rendering performance and is only relevant to compute if the compute jobs are required to run synchronously with gameplay. If they are, then GPGPU isn't going to be an option on the PC anyway and these jobs will likely run on the CPU or IGP.



Why is this going to slow down rendering tasks? It might make development easier but if anything, contention between CPU and GPU demands on memory will reduce performance, not improve it.



Do you even understand the point you are trying to make there? Because I certainly don't.



This is tied into your first point and, as with that point, has no effect on rendering. And if the compute work is done elsewhere than on the discrete GPU this is irrelevant anyway. Bear in mind high-end PC CPUs (the kind paired with a 680) have 2-4x the "compute" ability of the CPU in Orbis.



Draw calls have always been an issue for PCs in comparison to consoles, but it's never resulted in a doubling of the raw throughput of a console GPU. If anything it's more of a CPU limitation.



2GB is to 8GB as 128MB is to 512MB. How many modern GPUs sport 512MB? The correct question to ask would have been how current-gen games run on 512MB cards. Still not great, but let's at least construct our analogies correctly. And yes, the memory size difference will be an issue for 2GB GPUs further down the line. It won't be apparent straight away though, and it also won't make up for a 2x core performance advantage.

1. Well, the proof is in the pudding; just look at the video.
2. Well, I said in compute tasks.
3. Much quicker access, much less copying and waiting for data, more options for practical use. Let's write an email to NVIDIA's architects, they are wasting resources on unified address space in Maxwell :D
4. "DX11+ GPUs as standard and the ability to construct draw lists over multiple threads, thus making submission/draw faster, meaning more draw calls and more stuff on screen looking better.

Now, unless MS and the IHVs do something, the PC is likely to still be restricted to single-threaded draw calls (right now DX11's multi-threaded command list construction either slows things down on AMD hardware or effectively loses you a CPU core on NV hardware), and suddenly, regardless of how fast your GPU is, the consoles are blowing you out of the water draw-call wise, meaning the PC can't match a console in 'stuff on screen'.

We are already kinda at this point; the PC is only 'saved' by having the MHz advantage, but unless something changes that advantage will go away pretty quickly.

(Right now MS are blaming the IHVs and the IHVs are blaming MS and there is crazy talk about ditching DX and going 'to the metal' which I don't think anyone has really thought about as hard as they should have done. In short, it's a bit of a mess...)"

Anything else? (A rough sketch of the DX11 deferred-context mechanism described above follows at the end of this list.)
5. And that's exactly why running both graphics and compute tasks is so slow currently.

6. A limitation is a limitation, and maybe it will be exposed even more.
7. Partially agree, but 2GB will be limiting not even from rendering performance but simply from targeting assets etc... And in 2005, 512MB of VRAM was a lot more common than 8GB is in 2013...

BONUS

1. We all know how much optimisation most PC ports get...
2. When these advanced games ship, NVIDIA will have to sell the next round of cards, and I feel that's the perfect excuse to leave the 680 behind in performance, with drivers adjusted accordingly...
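For anyone unfamiliar with the DX11 feature being argued about in point 4, here's a minimal sketch of deferred contexts / command lists. The D3D11 calls are real; everything else (function names, the omitted draw state, the existing device/immediate context) is placeholder, and error handling is left out:

```cpp
// Minimal sketch of DX11 multi-threaded command list construction:
// worker threads record on deferred contexts, the immediate context submits.
#include <d3d11.h>
#include <thread>
#include <vector>

void RecordScenePortion(ID3D11DeviceContext* deferred, ID3D11CommandList** outList)
{
    // ... set pipeline state and issue draws on the deferred context here ...
    deferred->FinishCommandList(FALSE, outList); // bake the recorded work into a command list
}

void RenderFrame(ID3D11Device* device, ID3D11DeviceContext* immediate, int workers)
{
    std::vector<ID3D11DeviceContext*> contexts(workers);
    std::vector<ID3D11CommandList*>   lists(workers, nullptr);
    std::vector<std::thread>          threads;

    for (int i = 0; i < workers; ++i)
        device->CreateDeferredContext(0, &contexts[i]);

    // Each worker records its slice of the scene in parallel.
    for (int i = 0; i < workers; ++i)
        threads.emplace_back(RecordScenePortion, contexts[i], &lists[i]);
    for (auto& t : threads)
        t.join();

    // Submission is still serialised on the immediate context, which is where
    // the driver overhead complained about in the quote shows up.
    for (int i = 0; i < workers; ++i)
    {
        immediate->ExecuteCommandList(lists[i], FALSE);
        lists[i]->Release();
        contexts[i]->Release();
    }
}
```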

That quote was in the context of DX9 and also very late into the console generation. Don't expect anything like that in the first year of a console's launch using DX11 as the standard.

Just look at how early PS360 games performed next to comparable PC hardware in 2006 for evidence of that.

Well, we heard this "similar DX level" line last time too, and quickly DX10 was needed for simple ports...

The bad performance of some console games was the result of painfully learning in-order, multithreaded CPUs and of quick PC ports, not of the GPU. And I remember good stuff like GRAW and Just Cause too, with graphical features absent in the PC versions running on hardware with more fillrate, texrate etc. (X1900 XTX...)
 
Don't think we should have this PS4 vs PC debate here. If someone could split it out that'd be good.

One thing that struck me after watching Infiltrator again is that it has that 'off-screen captured' look to it. You know how games can sometimes actually look better when filmed off-screen? Everything's low contrast, details a bit smudged out, but overall it seems better than when you finally see the direct-feed footage. Kind of like how your brain is being fooled when you can't see every edge and pixel in detail.

I agree. It's weird. Like cutting ever so close to that threshold where you can't even tell, but not quite.

Even so, it really looks quite awesome!
 
Regarding the CPU - there is not even one PC game that takes full advantage of those powerful PC CPUs. Again - in the PS4 this fairly simple, low-power 8-core will be pushed to its limits.

I have almost 100% utilization on 4 cores in Crysis 3 on my i5 2500k.

I used the Titan result by accident, my bad; still, the 7970M is faster than the PS4 GPU.
 
On PC, you can tweak the settings until you get close to what you want.

In general, for shipping games, I doubt the devs will push for 100% utilization. They would want to be inclusive and be mindful of the average install base performance. The devs need $$$ to feed their families.
 
How much benefit can we achieve with "low-level optimization" and "higher efficiency" on the PS4? For example, can we say that a PS4 with better efficiency may have performance equivalent to a PC with an i3 & GTX 670 (or some spec like this)?

The way I'd answer that problem is to find PC hardware similar to an Xbox 360 or PS3, try running a game like Crysis 3 or Battlefield 3 on it, and compare FPS. No one today uses video cards from 2006. That's what optimization gets you.
 
Question is, is Epic delaying/scrapping SVOGI just for consoles, or also for their PC titles? This video was up on YouTube for some time, disappeared, and reappeared again.

Fortnite September 2012 Gameplay: http://www.youtube.com/watch?v=GR_G8ovUjOw

The video does not have the best quality to begin with, but look closely starting from 0:05 and 1:30. I can't seem to find direct evidence of 2-bounce color bleeding, and all the lighting visible there could be faked one way or another with more or less pain, but judging from the softness of the lighting I would guess that this still has SVOGI going, and as of now this is a PC-only title (which may get a console release). This was pre-GDC, mind you.
 
[Image: ue4copyb1eb7.jpg (UE4 demo screenshot comparison)]
 
No one will make a game for the GTX 680, but PS4/X720 hardware will be the target level for the next 6-7 years.
This demo screenshot comparison doesn't make any sense.
 
Nobody was going to waste GPU resources on realtime GI anyway unless the game was otherwise very light.

I suspect "next-gen" games will drop very soon to 720p after some 1080p launch titles. So it will be very similar to high end PC performance at lower resolutions. The market is not going to care about resolutions

That quote was in the context of DX9 and also very late into the console generation. Don't expect anything like that in the first year of a console's launch using DX11 as the standard.

Just look at how early PS360 games performed next to comparable PC hardware in 2006 for evidence of that.

It will be funny when the great GTX 680 struggles to even start games in a few years...
 
Question is, is Epic delaying/scrapping SVOGI just for consoles, or also for their PC titles?

The key differentiating factor between last year's demo and this newer iteration is that the Sparse Voxel Octree Global Illumination (SVOGI) lighting system hasn't made the cut. Instead, Epic is aiming for very high quality static global illumination with indirect GI sampling for all moving objects, including characters.

http://www.eurogamer.net/articles/digitalfoundry-gdc-2013-unreal-engine-4

Not sure if it'll have a different version on PCs, but if most games are console ports like this gen I doubt we'd see SVOGI versions on PC. As for the latest PS4 UE4 demo, it doesn't even appear they implemented any indirect lighting, or it's completely missing on a bunch of elements. See the closeup shots of the lava... no red light bouncing off the stone walls like in the PC version. And the door completely lets through the light from outside even though it's closed... :rolleyes:

The PS4 version isn't done up to snuff for whatever reason. Let's hope we see a 3rd version later on.
 
I suspect "next-gen" games will drop very soon to 720p after some 1080p launch titles. So it will be very similar to high end PC performance at lower resolutions. The market is not going to care about resolutions

Judging by that Infiltrator demo, it might as well have been 720p by how blurred it is after all the temporal AA techniques and post fx blur. The rez sacrifice will probably be worth it. So yeah, near the end of the gen I expect to see games running slightly above 720p to get more eye candy.
 
What you're claiming is not only a stretch of the imagination but also not true in terms of raw specs (apart from texture fillrate).

No, what I said is absolutely correct. If you doubt it, please give examples of specifically which numbers you believe to be wrong and I will happily show you why you're mistaken.

Radeon 7970M (7850) (similar to Orbis)

You've clearly missed (or deliberately ignored) my statement: "A 680 is faster than a stock 7970 in benchmarks but being a different architecture we can't compare to Orbis on raw specifications."

If raw paper specs were directly comparable between architectures, the 7970 would be vastly faster than the 680. It clearly isn't, which is why your comparison is meaningless. Orbis and PC-based GCN GPUs use identical architectures and thus are directly comparable in terms of peak theoretical performance. Therefore comparing Orbis to the 7970, which in turn is comparable in real-world performance to the 680, is the most accurate way of comparing Orbis to the 680 - not simply comparing raw paper specs between two incomparable architectures.

And since when is real-life performance (as shown in benchmarks) no longer important?

Of course it's important. But one benchmark is not absolute proof of anything. Let me ask you this: if Orbis had exactly double everything it currently has, i.e. 16 Jaguar cores, 36 CUs, 16GB GDDR5 at 352GB/s etc., would you deny it's twice as fast as the current Orbis configuration? Of course not, so why deny that the same architecture on the PC with twice the execution units would be twice as fast if fully utilised?

I can assure you that Orbis will be much more stressed and better utilised than any PC card.

Really? So exactly the same game running on Orbis and a PC will stress Orbis far more than it would, say, a 7850? Would you care to go into more detail explaining the exact reasoning for that?

Regarding the CPU - there is not even one PC game that takes full advantage of those powerful PC CPUs. Again - in the PS4 this fairly simple, low-power 8-core will be pushed to its limits.

So just to be clear, you expect the PS4 to demonstrate more CPU performance because, despite having a weaker CPU, that CPU will be fully utilised, while a more powerful PC CPU will for some reason be so underutilised that the actual end output will be lower than the console's? So for some reason the PC game will require more CPU power, the CPU will have that power available, but it will not be used. Would you like to explain that one in more detail?

As for the advanced memory in the PS4 - when I think about PC memory, how every bit of GFX data is duplicated between DDR3 and GFX memory - it's not very efficient nor advanced...

Despite the redundancy we're still talking about 217.6GB/s compared with 176GB/s - far more if we account for the seemingly greater efficiency of NV memory bandwidth compared with AMD's. I'm still curious to understand where the advantage is supposed to be coming from.
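To spell out where those numbers come from (assuming the 25.6GB/s figure is dual-channel DDR3-1600, i.e. 2 x 12.8GB/s):

```latex
\underbrace{192\ \text{GB/s}}_{\text{GTX 680 GDDR5}}
+ \underbrace{25.6\ \text{GB/s}}_{\text{dual-channel DDR3-1600}}
= 217.6\ \text{GB/s}
\qquad \text{vs.} \qquad
176\ \text{GB/s shared (PS4)}
```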
 
When I think of Unreal 4 I think of Kismet. While the removal of SVOGI is a big bummer and whatever solution they have now does NOT mesh with the design of their tech demo, I personally hope Kismet is as powerful, quick, and useful in this iteration as they are suggesting.

 