Is UE4 indicative of the sacrifices devs will have to make on consoles next gen?

Laa-Yosh said:
Sure there are a lot of methods with scanners, digital cameras, lightstages, LIDAR and such, but I don't yet see how a game development studio could afford it
Well, multiple dev studios around the world have their own motion-capture studios, so it's not such a stretch that we'll see light stages in some of them in the future (assuming the tech matures to that point).
Though I'm not sure whether we're talking about capturing humans or other stuff here.

sebbbi said:
I personally think Steam Survey is a very good source of data for game developers. It accurately reflects the current hardware base.
I'd have to disagree here - it reflects a certain user base, but if you're targeting a broader audience the reality tends to fall somewhere between the Unity survey and Steam for the US/Europe; for the rest of the world it's closer to Unity still. That said, even though the tail is longer than Steam indicates, the same extrapolation you mention does apply (hardware gets upgraded over the 2-3 years of development).
 
Well, multiple dev studios around the world have their own motion-capture studios, so it's not such a stretch that we'll see light stages in some of them in the future (assuming the tech matures to that point).
Though I'm not sure whether we're talking about capturing humans or other stuff here.

Only the biggest and richest developers/publishers have a mo-cap studio.
Mo-cap is and will remain reserved for AAA titles, and frankly few devs have really used the tech well.
 
You can do mo-cap with devices like Kinect - that development alone will be responsible for this becoming a more accessible feature. Lots of people are also using it to make low-budget object and person scanners.
 
Oh I remember the good old days. My first Voodoo Card had 6MB RAM. Never in my wildest dreams would I have thought that one day I'd be here talking about cards or systems using around 6GB. Madness.

Sorry - OT
 
(actually, Laa-Yosh, if you get a couple moments spare with an existing scene, as you're all raytracing these days it shouldn't be any effort to drop some low-poly 1990's game model in and render... ;))

I haven't touched our rendering stuff in years, couldn't do it even if I wanted to without asking for help :)
 
Well multiple dev-studios around the world have their own Motion-capture studios, it's not such a stretch we'll see light-stages in some in the future (assuming the tech matures to that point).
Though I'm not sure are we talking about capture of humans or other stuff here.

Yes, we also have our own mocap, a photo studio and a lot of other stuff, and may invest in more advanced tech as we also feel a push to produce more stuff in less time.

As far as I know, a lightstage is used to capture geometry, surface textures, and normal maps by comparing photographs taken under variously polarized lighting, and it can also measure reflectance and even light transport in translucent materials.
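The core of the polarized-lighting trick described above can be sketched in a few lines: a photo taken through a polarizer crossed against the light largely blocks specular reflection (leaving diffuse), while a parallel-polarized photo keeps both, so subtracting one from the other isolates the specular component. This is a minimal illustrative sketch, not lightstage software; the function name and the toy 2x2 "images" are made up for the example.

```python
import numpy as np

def separate_specular(parallel_pol, cross_pol):
    """Illustrative diffuse/specular split from a polarized image pair.

    cross_pol:    polarizer crossed against the light; specular is
                  largely blocked, so this approximates the diffuse term.
    parallel_pol: polarizer aligned with the light; diffuse + specular.
    """
    diffuse = cross_pol.astype(np.float64)
    # Specular is whatever the crossed polarizer removed; clamp noise below 0.
    specular = np.clip(parallel_pol.astype(np.float64) - diffuse, 0.0, None)
    return diffuse, specular

# Toy 2x2 "images": one bright specular highlight in the top-left corner.
parallel = np.array([[0.9, 0.3], [0.3, 0.3]])
cross    = np.array([[0.3, 0.3], [0.3, 0.3]])
diffuse, specular = separate_specular(parallel, cross)
print(specular)  # highlight isolated at [0, 0]
```

Real capture rigs refine this with many calibrated light directions to also recover normals, but the subtraction above is the basic idea.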

Still, LIDAR and such may remain a luxury. I can see some high-end Sony studio renting a helicopter and LIDARing London or some such, but we probably shouldn't expect that from smaller devs.
 
Mo-cap is and will remain reserved for AAA titles, and frankly few devs have really used the tech well.

You probably have no idea how many games are using mocap. Even Uncharted is based on mocap animations before the animator tweaks the performance.
 
You can do mo-cap with devices like Kinect - that development alone will be responsible for this becoming a more accessible feature. Lots of people are also using it to make low-budget object and person scanners.

The quality of that stuff is unacceptable.
 
You probably have no idea how many games are using mocap. Even Uncharted is based on mocap animations before the animator tweaks the performance.

Why are you assuming I have no idea how many games use mo-cap?!
Did my post offend you?
Many games use mo-cap, I know that, but it's not for everyone.
Not every developer has its own mo-cap studio, nor has every developer that used mo-cap reached Sony's quality.
 
Since a LOT of the AA titles are using mocap, either you don't recognize it or something else is wrong.

Keyframing human motion is incredibly hard and time-consuming, and if the results are even a bit off it's very noticeable and annoying. The last big title to rely on it was Halo 3, and even Bungie decided to switch to mocap after that. Renting a mocap stage from a specialized company is actually much cheaper IMHO than keyframing (manually animating) everything; it's also a lot faster and more suitable for game development. So much so that even System Shock 2 used it, as did Outcast.

Sure, it's rare to have your own studio, especially a fully equipped optical system, but that doesn't mean it's reserved for the high end. In fact, I'd have a very hard time naming any current title that does not utilize mocap for human characters.
 
Sure, it's rare to have your own studio, especially a fully equipped optical system, but that doesn't mean it's reserved for the high end. In fact, I'd have a very hard time naming any current title that does not utilize mocap for human characters.

And it's been that way for years.
That's not to say that there isn't an enormous amount of work that goes into cleaning up and tweaking the raw captured data.
 
Of course, and there is a lot of keyframe animation as well, for stuff that's not possible or feasible to capture; but mocap is now as essential a tool for most game developers as, say, digital cameras.
 
On the whole PC vs. console stuff, does anyone know why Lottes is so bullish (or comes across as such) on the capabilities of fixed hardware with low-level access?

The real reason to get excited about a PS4 is what Sony as a company does with the OS and system libraries as a platform, and what this enables 1st party studios to do, when they make PS4-only games. If PS4 has a real-time OS, with a libGCM style low level access to the GPU, then the PS4 1st party games will be years ahead of the PC simply because it opens up what is possible on the GPU.

Note this won’t happen right away on launch, but once developers tool up for the platform, this will be the case. As a PC guy who knows hardware to the metal, I spend most of my days in frustration knowing damn well what I could do with the hardware, but what I cannot do because Microsoft and IHVs won't provide low-level GPU access in PC APIs.

One simple example, drawcalls on PC have easily 10x to 100x the overhead of a console with a libGCM style API.

http://societyandreligion.com/ps4-won-xbox-720-battle-windows-8/
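Lottes' "10x to 100x the overhead" claim is easy to put in frame-budget terms with some back-of-envelope arithmetic. The per-call overhead figures below are illustrative assumptions, not measurements from any API, but they show why a thin libGCM-style interface lets a console comfortably issue call counts that would eat a PC's entire CPU frame budget.

```python
# Back-of-envelope: how per-draw-call CPU overhead eats a frame budget.
# Both overhead figures are assumptions for illustration, not measurements.

def cpu_ms_for_draw_calls(num_calls, overhead_us_per_call):
    """CPU time (ms) spent just issuing draw calls."""
    return num_calls * overhead_us_per_call / 1000.0

FRAME_BUDGET_MS = 33.3        # one frame at 30 fps
CONSOLE_OVERHEAD_US = 1.0     # assumed thin, libGCM-style API
PC_OVERHEAD_US = 20.0         # assumed driver/runtime validation cost (~20x)

for calls in (2_000, 10_000, 30_000):
    console = cpu_ms_for_draw_calls(calls, CONSOLE_OVERHEAD_US)
    pc = cpu_ms_for_draw_calls(calls, PC_OVERHEAD_US)
    print(f"{calls:>6} calls: console {console:6.1f} ms, "
          f"PC {pc:6.1f} ms ({pc / FRAME_BUDGET_MS:.0%} of a 30 fps frame)")
```

With these assumed numbers, 30k draw calls cost the console 30 ms of CPU but the PC 600 ms, which is why PC engines lean so heavily on batching and instancing to keep call counts down.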
 
"Years ahead" is hard to put in context but it's probably a bit of an exaggeration on his part. And to better qualify that I'm guessing "of a PC with similar performance parts" should be added to the end.

Anyways, it was all dissected in the UE4 thread. Still not really sure what the conclusion is.
 
"Years ahead" is hard to put in context but it's probably a bit of an exaggeration on his part. And to better qualify that I'm guessing "of a PC with similar performance parts" should be added to the end.

Anyways, it was all dissected in the UE4 thread. Still not really sure what the conclusion is.

Yes, I'm wondering whether 'years ahead of a PC with similar performance parts' equates to 20-30% better, 50-60% better, or 2x better?
 
Oh yeah, this is the UE4 thread, lol.

I don't think there's a clear answer where you can say it'd be a specific percentage. Certain operations would see bigger benefits from low-level hardware access than others, I imagine. And the gap also isn't as large as it was in the past with DX9 or DX10.

The best bet for conclusive figures would be to take an average game built for PC on roughly equivalent console hardware, port it to console with full low-level API hardware access, and compare the results.
 
The drawcalls discussion has already happened in this thread. Read back about four or five pages to see the whole thing.

http://forum.beyond3d.com/showpost.php?p=1725711&postcount=413

Yes, and games like Crysis 1-3 using 3-5k draw calls at their highest settings tells you something. Heck, even with 'ultra high' settings and instancing disabled for testing, getting into the 20-30k draw calls per frame only taxed an E8400 Core 2 Duo at 3.0GHz a little. If I remember correctly, it was around 10-15% of one core going by the CE2 profiler's CPU load, and the framerate impact was less than 1fps (from around 40). This was Crysis 1, but I'm not sure whether it was DX9 (settings override) or DX10. I'm sure I uploaded screenshots of it a long time ago on this forum.

Interesting, though, that Crytek recommended a max of 2000 draw calls on PS3 in their CE3 developer guide, for performance reasons.
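The figures in the post above (20-30k calls, ~10-15% of one core, ~40 fps on a 3.0GHz E8400) imply a rough per-call cost, which can be worked out directly. Every input is an approximation taken from the post, so treat the result as an order-of-magnitude estimate, not a benchmark.

```python
# Rough per-draw-call cost implied by the numbers quoted above.
# All inputs are approximations from the post, so this is only an estimate.

FPS = 40.0
FRAME_MS = 1000.0 / FPS       # 25 ms per frame
CORE_FRACTION = 0.15          # ~15% of one core spent issuing draw calls
DRAW_CALLS = 25_000           # midpoint of the 20-30k range
CORE_GHZ = 3.0                # Core 2 Duo E8400 clock

# CPU time spent on draw calls per frame, divided across the calls.
call_time_us = FRAME_MS * CORE_FRACTION * 1000.0 / DRAW_CALLS
call_cycles = call_time_us * 1e-6 * CORE_GHZ * 1e9

print(f"~{call_time_us:.2f} us (~{call_cycles:.0f} cycles) per draw call")
```

That lands at roughly 0.15 us (a few hundred cycles) per call, which helps explain why a doubled or halved call count barely moved the framerate in that test.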
 