Is UE4 indicative of the sacrifices devs will have to make on consoles next gen?

Deep Down's engine isn't available for licensing, so there's no competition with Epic.
Agni's Philosophy is the same, and I'd say Infiltrator looks a bit better as a whole so far, though Square has some nice features too.

As for too much of this and that in the Infiltrator demo, it should be visible to potential clients... They can then tune it to their preferences.

And yeah, I've been talking about the importance of linear lighting, a gamma-correct pipeline and physically correct shading for some time now, since UC2 at least ;)
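
For anyone unfamiliar, here's a minimal sketch of what "gamma correct" means in practice, assuming sRGB-encoded textures and an sRGB display (illustrative Python, single channel in [0, 1]):

[code]
# Minimal sketch of a gamma-correct (linear-light) pipeline, assuming sRGB
# textures and an sRGB display; values are single channels in [0, 1].

def srgb_to_linear(c):
    # Standard sRGB decode: remove the display encoding before doing any math.
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    # Encode back for the display only at the very end.
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

albedo_srgb = 0.5                         # what the texture stores
albedo_lin = srgb_to_linear(albedo_srgb)  # ~0.214 in linear light

lit = albedo_lin * 2.0                    # lighting, blending, filtering all happen in linear space
print(linear_to_srgb(min(lit, 1.0)))      # ~0.69, not the 1.0 you'd get doing the math in gamma space
[/code]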
 
PS4 should have more architectural advantages over PC than consoles had over PC in DX9 days (especially with GPGPU).

Why? The 360 had unified memory, plus shedloads of bandwidth to the eDRAM, and a GPU architecture that was not only well beyond the common DX9 spec when it launched but in some ways beyond DX10, to say nothing of it sporting bleeding-edge performance.

PS4 has what? Unified memory, a standard DX11 GPU, a performance target around the PC upper mid-range, and some GPGPU enhancements for which there are several possible answers in the PC space (e.g. far more powerful CPUs like Haswell, or utilising APUs like Kaveri).

Add to that the generally PC-like architecture of the new consoles, and your expectation of greater console effectiveness for a given performance level than last generation seems a little far-fetched.

Tomb Raider, with TressFX, cut PC performance almost in half. Wouldn't that likely be because the GPU had to stop processing graphics to do compute tasks? And wouldn't Onion+ and Garlic, with cache bypass and the ability to lock particular data in cache, add to that overall measure? If you say no, then why not?

As AndyTX said, no, this is not the case. Aside from anything else, we've already been told by Exophase (and other developers here) that GCN, and possibly even older architectures, can already switch between graphics and compute on the fly and do not need to stop one to start the other.

The GPGPU enhancements of PS4 and Durango will really come into play where those jobs are highly sensitive to latency, but for the types of GPGPU stuff we see today affecting graphics only - which includes TressFX - there's no reason to expect PS4 to be significantly better than a regular discrete GCN-based system.

Can you show some evidence, like the differences in the chart that member provided (DirectX 11 vs OpenGL)?

Evidence of what? DX11 having less overhead than DX9? It's common knowledge; a quick Google will reveal dozens of different sources for that information. Your very own link posted earlier in this thread is one of them.
 
... and some GPGPU enhancements for which there are several possible answers in the PC space (e.g. far more powerful CPUs like Haswell, or utilising APUs like Kaveri).
Yeah I think a lot of people miss the fact that there's likely to be more FLOPS on consumer CPUs than whatever "low-latency" GPGPU solution is available on consoles, especially if the rumors about it being a separate partition/chip/space-slice are true. This is especially true on HSW with its 2x FMA/clock/core.

That's not to say that low-latency GPGPU stuff isn't useful and desirable, but people tend to forget how much horsepower is available on modern CPUs with all of the craze around GPU computing, and game developers tend to leave it mostly dormant :S Hopefully that will change with the next-gen x86 consoles, but we'll see.
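
Rough back-of-the-envelope for the "2x FMA/clock/core" point, assuming a hypothetical 4-core Haswell-class part at 3.5 GHz (illustrative numbers only):

[code]
# Hypothetical 4-core Haswell-class CPU at 3.5 GHz; numbers are for illustration.
cores = 4
clock_ghz = 3.5

# 2 FMA units/core * 8 single-precision lanes (256-bit AVX2) * 2 flops per FMA
flops_per_clock_per_core = 2 * 8 * 2  # = 32

peak_sp_gflops = cores * clock_ghz * flops_per_clock_per_core
print(peak_sp_gflops)  # 448 GFLOPS single precision, before any real-world losses
[/code]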
 
To some maybe, but it kills all the small details for me. We already know anyone can do blur; devs should focus on showing razor-sharp images with high detail IMO :p

Nah, that looks more gamey and fake, like the original UE4 demo.

In real life you don't perceive things as razor-sharp images with high detail anyway; only the area in your focus shows high detail, everything else is blurred to some degree.

The Infiltrator demo looks far more cinematic and, as Laa-Yosh says, much more like good-quality CG (and he would know ;) )

It doesn't make sense to judge it based on blurry video stills either; in motion it's very effective.

It's definitely the most impressive next gen demo so far.
 
Deep Down was and still is impressive, especially if it was running on a PS4 kit. But Infiltrator takes the cake now for sure. You'd hope so as it runs on a GTX 680 and i7!

Infiltrator-quality games on PS4 running at 900p by 2015. Book it :cool:
 
Nah, that looks more gamey and fake, like the original UE4 demo.

In real life you don't perceive things as razor-sharp images with high detail anyway; only the area in your focus shows high detail, everything else is blurred to some degree.

The Infiltrator demo looks far more cinematic and, as Laa-Yosh says, much more like good-quality CG (and he would know ;) )

It doesn't make sense to judge it based on blurry video stills either; in motion it's very effective.

It's definitely the most impressive next gen demo so far.

I think it must be the AA they use, because even the in-focus areas have a layer of blur.
http://www.geforce.com/Active/en_US...icGamesUnrealEngine4InfiltratorDemo-00015.png
 
Deep Down, right?

Deep Down was a dark and claustrophobic demo; we didn't get to see as much of the environment and small detail as we saw in the Infiltrator demo.

So we haven't seen enough of Deep Down to make a call, though I would think Tim Sweeney is better placed to deliver something cutting-edge than Capcom.

I think it must be the AA they use, because even the in-focus areas have a layer of blur.
http://www.geforce.com/Active/en_US...icGamesUnrealEngine4InfiltratorDemo-00015.png

Yeah, it's what gets rid of the edge shimmering you see in the UE4 demo - and what makes it look like CG.

Do we know if they're running the demo at 1080p native? Or is it downsampled from some higher res like the BF4 demo?
 
It's 1080p:


http://s21.postimg.org/vg28hguh1/infiltrator10802.jpg
 
Anyone know roughly the difference in power needed to go from 720p30 to 1080p60? Using "bad math", it's 2.25x the number of pixels and 2x the frame rate, so 4.5x the power needed (I suspect resolution scaling isn't that linear).
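
Spelling out that "bad math" (a naive sketch that assumes cost scales purely with pixels drawn per second):

[code]
# Naive scaling estimate for 720p30 -> 1080p60: pixel count times frame rate.
pixels_720p = 1280 * 720     # 921,600
pixels_1080p = 1920 * 1080   # 2,073,600

resolution_factor = pixels_1080p / pixels_720p  # 2.25
framerate_factor = 60 / 30                      # 2.0

print(resolution_factor * framerate_factor)     # 4.5x, if everything scaled with pixel throughput
[/code]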
 
The Infiltrator demo looks far more cinematic and, as Laa-Yosh says, much more like good-quality CG (and he would know ;) )
Oh I agree; the problem is that some people think that's a good thing for a game. A director controls your gaze; in a game that doesn't work (hell, it doesn't work that well in 3D movies either).

PS. In the real world something I focus on is razor sharp; by not making anything sharp you're not exactly making things realistic ... my eyes are perfectly capable of blurring the part of the monitor I'm not focusing on, I don't need the game to do that.
 
I'm not so impressed by the Infiltrator demo's lighting. Outside, it really looks like something is missing; it has a cartoonish look.

What kind of problem did SVOGI have? Performance?
 
SVOGI looks "cool", but it has lots of artefacts and the GI result is physically wrong (as hell). It's good R&D from Epic anyway, but it's not ready yet. Performance must have been a problem too (it wouldn't have been so wrong if it wasn't).
 
I think it's more than 2X realistically.

You can't give a single number because you won't get 2x the FLOPS, ROPs, shaders, etc. But seeing the 8800 GTX from 2006 and how it performs at 720p in games, that comment has little worth. If anything, it might hold for a specific computation task, but not in general.

720p aside, today's console games look in the same ballpark as today's PC games, despite having 1/10th to 1/20th the power.

Tomb Raider on my 360 doesn't look like a different game than Tomb Raider on my brother's 6970 PC.

But your brother's PC runs it with a higher framerate, AA and some other extra features, at a much higher resolution. The point is that a CPU and GPU from 2006 run it better than the 360 does. Games like Crysis 1-2 also run far better, with higher detail, resolution and framerate, on such a 2006 system. The Core 2 Duo and Quad, as well as the 8800 GTX, were released in 2006. And CE3 is an engine that really milks the consoles' and PCs' capabilities and performance. So the point is that the PCs you target in your comment are hardly being pushed, while the 360 is "on all fours"! :LOL:
 
PS. In the real world something I focus on is razor sharp; by not making anything sharp you're not exactly making things realistic ...
But in photography of the real world, which is what most people are comparing screen visuals to, blur and the like make it more realistic. I guess the pursuit of photorealism is the correct term, versus a hypothetical optorealism for recreating how we see the real world. Optorealism is likely impossible without super high resolutions and framerates and 3D. As photorealism can be achieved with much less effort, and typically looks better too, it makes sense to chase the camera's view of the world. Optorealism only makes sense to me in a first-person VR setting. The moment you are disassociated from the experience with another camera view, optorealism would look very odd. Lots of movies only work with their styles because they have a fantastic approach to the image generation. If they were like really being there looking at the real world, people in weird costumes in weird lighting would be... weird. ;)
 
Laa-Yosh, are you involved in any next-gen work yet?

We're not involved in any game's development in any way. We sometimes get production assets as reference, but those are usually not the final normal-mapped poly meshes but the original high-res source art (ZBrush files, usually).

Also, the projects we create content for at the moment are all previously announced titles.

Not sure what else you were trying to ask... ;)
 
But in photography of the real world, which is what most people are comparing screen visuals to, blur and the like make it more realistic. I guess the pursuit of photorealism is the correct term, versus a hypothetical optorealism for recreating how we see the real world. Optorealism is likely impossible without super high resolutions and framerates and 3D. As photorealism can be achieved with much less effort, and typically looks better too, it makes sense to chase the camera's view of the world. Optorealism only makes sense to me in a first-person VR setting. The moment you are disassociated from the experience with another camera view, optorealism would look very odd. Lots of movies only work with their styles because they have a fantastic approach to the image generation. If they were like really being there looking at the real world, people in weird costumes in weird lighting would be... weird. ;)

Photorealism, IMO, is purely a matter of picture/image quality/fidelity, while "optorealism" for me is a visual mechanic/feature, so it's not antithetical to photorealism but rather complementary to it, and both are useful if you want to achieve "visual realism".
In this sense I think we already have a good degree of "optorealism" in video games, because mechanics/features that simulate the behaviour of the human eye in-game already exist (focus, motion blur, temporary blindness, eye adaptation, etc.), and they can indeed help create a more realistic representation in the end.
Of course "visual realism" is only possible to a certain degree, and it is only useful to a certain degree.

P.S.
I hope this is not too unintelligible.
 
Anyone know roughly the difference in power needed to go from 720p30 to 1080p60? Using "bad math", it's 2.25x the number of pixels and 2x the frame rate, so 4.5x the power needed (I suspect resolution scaling isn't that linear).
Depends where the bottlenecks are. If you're entirely pixel-limited (even at 1080p60) then sure, it's pretty linear. However, increasing resolution doesn't increase vertex or CPU work (typically, although sometimes LOD calculations for meshes are based on screen-space size).
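
A toy cost model (made-up numbers, just to illustrate the point) where only part of the frame scales with resolution:

[code]
# Toy frame-cost model: only the pixel-bound portion grows with resolution.
def frame_time_ms(pixel_ms_per_mpix, fixed_ms, megapixels):
    # fixed_ms stands in for vertex/CPU work that doesn't scale with resolution.
    return fixed_ms + pixel_ms_per_mpix * megapixels

mp_720p = 1280 * 720 / 1e6    # ~0.92 Mpix
mp_1080p = 1920 * 1080 / 1e6  # ~2.07 Mpix

# Mostly pixel-bound frame: close to the full 2.25x pixel ratio.
print(frame_time_ms(30, 3, mp_1080p) / frame_time_ms(30, 3, mp_720p))    # ~2.1x

# Frame with a big fixed (vertex/CPU) share: well under 2.25x.
print(frame_time_ms(10, 15, mp_1080p) / frame_time_ms(10, 15, mp_720p))  # ~1.5x
[/code]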
 