*spin-off* Next Gen Gameplay and/or Graphics Differentiators

The most successful FPS franchise in history doesn't run at 60 FPS but somewhere over 45 FPS. That extra 6-8 ms per frame is huge, which begs the question: what's wrong with 40 or 45 FPS? It's as much as 50% more than 30 FPS.

It doesn't work like that. You aim to be done in 16 ms but if you miss you get 33.

The only way to "target" 24 - 28 ms without tearing to hell and back is to add an extra buffer and swap into that. And then you'll end up with uneven, juddery motion as the new image swaps in too soon or too late, but never 24 - 28 ms after the last.

I suppose you could predict how long your upcoming frames will be displayed for, then render them to be correct for 16 ms or 33 ms, but you'll still need that extra buffer and latency will increase accordingly. Better to dynamically reduce resolution, Rage style, and catch up on any slight overshoots that way.
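For what it's worth, the "Rage style" idea can be sketched very simply. This is just my own illustration, assuming the engine exposes a GPU frame-time measurement and a render-target scale (both names invented here), and is not how id actually implemented it:

Code:
#include <algorithm>

// Toy frame-time-driven dynamic resolution controller (illustrative only).
struct DynamicResolution
{
    float scale    = 1.0f;   // fraction of full render resolution
    float minScale = 0.5f;   // don't drop below half
    float budgetMs = 16.6f;  // 60 Hz frame budget

    // Call once per frame with the measured GPU time of the previous frame.
    void Update(float lastGpuMs)
    {
        if (lastGpuMs > budgetMs)              // overshot: shed pixels quickly
            scale -= 0.05f;
        else if (lastGpuMs < budgetMs * 0.85f) // comfortable headroom: claw quality back slowly
            scale += 0.01f;
        scale = std::clamp(scale, minScale, 1.0f);
        // The renderer would then size its colour/depth targets by 'scale'
        // and upscale the result to the output resolution.
    }
};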
 
That extra 6-8 ms per frame is huge, which begs the question: what's wrong with 40 or 45 FPS? It's as much as 50% more than 30 FPS.
TVs don't support 40-45 fps refresh, so you'd end up with some frames being shown twice as long and a horribly juddery result, or masses of screen-tear.

There's something to be said for flexible refresh TVs. As they can now sync down to 24 Hz, and LCDs aren't tied to the power supply frequency like CRTs were, it's certainly possible. It would be great to have games that could set a 45 fps refresh or suchlike, to allow more graceful compromises than the either/or of 60 and 30 fps. Although it'll never happen.
 
It doesn't work like that. You aim to be done in 16 ms but if you miss you get 33.

The only way to "target" 24 - 28 ms without tearing to hell and back is to add an extra buffer and swap into that. And then you'll end up with uneven, juddery motion as the new image swaps in too soon or too late, but never 24 - 28 ms after the last.

I suppose you could predict how long your upcoming frames will be displayed for, then render them to be correct for 16 ms or 33 ms, but you'll still need that extra buffer and latency will increase accordingly. Better to dynamically reduce resolution, Rage style, and catch up on any slight overshoots that way.

How does triple buffering make for a juddery mess? I think you are overstating quite a bit.
 
How does triple buffering make for a juddery mess?
Your screen is refreshing 60 times a second. A 45 fps game is going to show some frames for one refresh and others for two within that 60 Hz screen cycle, causing judder. Doesn't matter how many buffers you have; a discontinuous time differential between frames = judder (at the framerates we use, anyhow).
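To put numbers on that (my own arithmetic): 45 new frames spread over 60 refreshes means a repeating 1-1-2 cadence, i.e. two frames each held on screen for one refresh (16.7 ms) followed by one held for two (33.3 ms), even though the game rendered them a steady ~22 ms apart. That alternation between 16.7 ms and 33.3 ms display times is the judder.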
 
But getting something to run in 16 ms is hard; what about the assets that have to be thrown out because you end up unable to fit them in your time budget? Second to that is memory constraints, which have plagued devs this entire gen, with yet more assets cut or downgraded to fit in memory. IMO a distant third is art expenses, which are manageable from a project management perspective.

What about the art assets which would have to be thrown in? The biggest problem this generation has been profitability, and art is the most expensive line item in any HD game development costing. There is a difference between using better assets drawn by artists by default and trying to include additional, better assets on top. The latter doesn't apply to all SKUs, so it turns an already difficult-to-justify budgeting decision into an impossible one.

The most successful FPS franchise in history doesn't run at 60 FPS but somewhere over 45 FPS. That extra 6-8 ms per frame is huge, which begs the question: what's wrong with 40 or 45 FPS? It's as much as 50% more than 30 FPS.

A better metric may be latency because that is what you're trying to reduce with high frame rates. Both Halo and COD have low latency relative to their competition and much higher sales than average.
 
Your screen is refreshing 60 times a second. A 45 fps game is going to show some frames for one refresh and others for two within that 60 Hz screen cycle, causing judder. Doesn't matter how many buffers you have; a discontinuous time differential between frames = judder (at the framerates we use, anyhow).

And vsync makes for framerate drops, which is worse? Most games these days are not vsynced.
 
But getting something to run in 16 ms is hard; what about the assets that have to be thrown out because you end up unable to fit them in your time budget?

Save it for the sequel.

Second to that is memory constraints, which have plagued devs this entire gen, with yet more assets cut or downgraded to fit in memory.
Include it in the PC SKU.

Problem solved!
 
And vsync makes for framerate drops, which is worse? Most games these days are not vsynced.

Almost every game is vsynced most of the time; they just drop vsync if they overshoot on a frame time. They do this to get the frame out there as soon as possible, to keep the frame rate high and reduce motion discontinuity. Triple buffering doesn't stop a frame from being displayed late, and it can introduce instances of a frame being displayed "too early" relative to the previous and next frames. And it can also add latency.

I used to think triple buffering and vsync were the answer to everything, but that was because I didn't like the look of tearing and because I didn't understand what was actually causing the juddering that I was putting down solely to frame rate fluctuations.
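To make the "mostly vsynced, but drop it when you overshoot" behaviour concrete, here's a toy model (my own sketch with made-up frame times; real drivers and engines are more involved): a frame that is ready before the vblank it was aiming for waits and is shown cleanly, while a late frame is pushed out immediately as a torn frame instead of waiting a whole extra refresh.

Code:
#include <cmath>
#include <cstdio>

int main()
{
    const double vb    = 1000.0 / 60.0;                      // vblank interval, ms
    const double gpu[] = { 15.0, 18.5, 16.0, 20.3, 14.2 };   // made-up render times, ms

    double start = 0.0;                                      // frame starts rendering here
    for (double ft : gpu)
    {
        double ready  = start + ft;                          // finished rendering
        double target = (std::floor(start / vb) + 1.0) * vb; // the vblank we aimed for
        bool   tear   = ready > target;                      // missed it?
        double shown  = tear ? ready : target;               // tear: push it out now
        std::printf("render %5.1f ms -> on screen at %6.2f ms (%s)\n",
                    ft, shown, tear ? "torn" : "clean");
        start = shown;                                       // next frame begins after display
    }
    return 0;
}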
 
This is the console forum.
Most console games have PC ports. And aren't we talking about "most devs" or costs that are on an "industry-wide" scale?

And if there's no sequel because your game looks like ass...........
Huh, a game can sell like shit if the game itself is shit, too. Not everything is dependent on the graphics. We've already got a thread for this.

Anyway, if you have so much time to create a zillion shit assets, you have other problems, like project management: focusing on which assets matter, which need attention, or which just plain need to be cut early so that you aren't wasting time and money.
 
The biggest problem this generation has been profitability, and art is the most expensive line item in any HD game development costing.

FWIW I'm not sure this is actually true. It might be true in general, but art is not as dominant as a lot of people think.
Large engineering teams eat money, to the point I'd say it's EA's biggest issue.

Artists in general are paid less than engineers, and art can be outsourced/contracted. It's relatively easy to put in place processes to control art quality (which is rarely actually done), and that gives you control over headcount and quality to a point. Having said that, I wouldn't be surprised if total artist salary+overhead > total engineer salary+overhead for most projects, but I believe that's misleading (see below).

Engineering, on the other hand, is a complete crapshoot; it's as much about people management as it is about engineering ability, and processes that work are highly dependent on the makeup of the team. Very little of the work is truly isolated, and it's difficult to judge quality in isolation.
Every additional engineering feature has a cost much higher than the cost to implement it in isolation, and often that cost is difficult or impossible to estimate accurately.
When your engineering team is late, in most models you end up paying your army of artists for the wasted time, and often the wastage is compounded because art quality (technical quality) is not checked during the slippage, leading to additional cost.

I believe the right/money-saving tradeoff for engineering is to take longer with a smaller team, and to micromanage headcount for both engineering and asset creation throughout the project. I can't think of a publisher with large internal teams in a position to do that today.

I've said before here if you can't control headcount on your project, you cannot control cost or quality.
 
What if your budget isn't enough to build assets that tax already fast hardware? If art is expensive and programming tricks and techniques are relatively cheaper, then you'd expect developers to try to get the highest return for their effort.

In any case, the most successful FPS franchise in history targets 60 FPS on consoles. If nothing else, the extra performance available will tempt other developers to try to copy this performance metric.

You make the best-looking game you can that can achieve the gameplay you want within your budget, and scale it back for weaker hardware. If you want 60 fps then it's part of your gameplay target. If art isn't the strength of your team then you might want to set your game apart with better physics or world deformation or something, but I don't ever see how targeting lower hardware and just running it in 3D or at 60 fps on faster hardware would ever be a good design choice. If you think those features are important enough to your game, you would want them in the weaker hardware build as well. Again, I'm sure it's never that simple; it's just the philosophy you would want in order to achieve the best product.
 
But getting something to run in 16 ms is hard; what about the assets that have to be thrown out because you end up unable to fit them in your time budget?
Rage runs at 60 fps, is vsync locked (most of the time), has more assets than any other console or PC game so far (3 DVDs) and is on par in graphics quality with the best-looking AAA games. 60 fps is doable.
 
I didn't! Although I haven't played it. Reviews scared me off.

I hate Metacritic, and also hate myself for looking at it.

I have to wonder what a lot of games would look like during gameplay if they had used dynamic res. Surely there are many moments where there isn't a whole lot on-screen and the framerate would be fine at the maximum resolution. But then maybe the performance can drag down so much that it drops the res to lol-wtf levels, and on average it just ends up being an awful experience.

Would Call of Duty look that much worse if it were dynamically switching between 1024x600 and 960x544 during the heavier action bits? :p
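For what it's worth, 1024x600 is 614,400 pixels and 960x544 is 522,240, so that particular switch only sheds about 15% of the pixels (my own arithmetic, not a measured example); a fairly subtle difference to soak up the heavier moments.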

I still don't know why more games haven't used dynamic MSAA as we saw in earlier Capcom games (Lost Planet and Resident Evil 5). I suppose that's not feasible for any game that is deferred, but... hey I wouldn't mind it if the Gears series had up to 4xMSAA when there's absolutely nothing going on (forward renderer).
 
And vsync makes for framerate drops, which is worse?
It's the same thing! A 60 fps game looks smooth when the game renders 60 unique images and the TV displays them. The moment your TV starts displaying the same frame twice, whether it's because the game is deliberately generating 45 frames per second or because in trying to render 60 fps you have to drop 15 frames, it judders. No buffering technique can fix a varying time differential between frames. That's why the choice has always been 60 fps or 30, as the even dual-pass display of the same frame makes motion consistent even if not smooth. That's also why TVs now have a 24 fps mode, as the 24 fps format of movies cannot be reproduced smoothly at 60 Hz; instead frames of different lengths are interleaved, making the motion more jarring. In the UK it appears we had movies accelerated up to 25 fps, so that quality problem is one we managed to escape. I don't know how modern movies handle it, although I do know BRDs on modern TVs support 24 fps.
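To spell the numbers out (my own arithmetic, not from the post): 60 Hz divided by 24 fps is 2.5, so 24 fps material on a 60 Hz display has to alternate frames held for 2 refreshes (33.3 ms) and 3 refreshes (50 ms), the classic 3:2 pulldown judder. Speeding film up to 25 fps for 50 Hz PAL gives exactly 2 refreshes per frame, and a native 24 Hz mode gives a constant ~41.7 ms per frame, which is why both look consistent even if not smooth.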
 
That's impossible because different games have different demands. Magic Carpet had deformable terrain because that's all it had. The complexity of full deformation in something like Uncharted would render the visuals impossible. You can't bake gorgeous lighting into scenery and have that scenery be destroyable. You can't have deep worlds that are fully deformable without massive memory consumption and a very different game engine to one streaming known parts.

In an ideal case you'd be able to combine all aspects together, and when we have virtually unlimited processing power that'll happen. Until then there'll be lots of compromises, and in lots of games priorities will not favour interactivity because it isn't core to the experience. E.g. full dynamic skeleton simulation may work in Backbreaker, but it'd be lousy for a very tight fighter which needs to forego realism for gameplay.


function said:
Oh god no. 100,000 times no. Plus infinity 'no's.

Not only would that demand that power, memory and development time be taken away from game-specific ideas (and in a humongously, staggeringly large way), it would prevent certain types of gameplay and level/map balance from even being possible. It would be a creative disaster.

And I really wish some screen space effects would go away. And die.

Granted, you can't have everything all at once in every game.

But how about this: is there a (good) reason that every open-world game doesn't have Natural Motion when every game Rockstar has made since GTA4 uses it?

How about RPGs lacking the interactivity presented in the first-gen title Oblivion?


The destructibility of Red Faction obviously brings difficulty to game level design (and breaks all those pretty prebaked bullshit lighting effects, hi weak hardware), but there are ways to design around this aside from forcing every gamer into the designer's rat maze with indestructible wood fences.

Obviously some game types are more on the fantasy side and certain levels of realism aren't necessary or even desirable, but there are many more that are based in reality, and much of the in-game animation/physics kills that immersion far more than their "low resolution", "poor lighting", or "crappy textures" do.

The fact that some game companies DO get it (Bethesda, Rockstar) enough to include these standard feature sets in all their games, not just a single one-off title, tells me they are interested in holding themselves to a standard and in pushing the medium forward.


And on deformable terrain: not every game needs to cast a spell to create a volcano ... Obviously.

However, if tessellation is to be used as standard on terrain next gen, then implementing "blast craters" as the ground is affected by high-impact events would be relatively easy. Alter the "texture" to affect the tessellation of the affected region and voila, your gameworld no longer seems like a stale and static game from yesteryear and can now join the ranks of Magic Carpet from 1994. ;)
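As a rough sketch of that idea (my own illustration, assuming the terrain tessellation samples a simple height/displacement texture kept in a float array; all names here are invented), stamping a crater could be as simple as lowering the texels around the impact point with a smooth falloff and re-uploading that region:

Code:
#include <algorithm>
#include <cmath>
#include <vector>

// Toy height/displacement map that a terrain tessellation shader would sample.
struct HeightMap
{
    int width = 0, height = 0;
    std::vector<float> texels;                       // one displacement value per texel
    float& At(int x, int y) { return texels[y * width + x]; }
};

// Lower the displacement around the impact point with a smooth falloff,
// so the tessellated mesh dips into a bowl-shaped crater.
void StampCrater(HeightMap& map, int cx, int cy, int radius, float depth)
{
    const float pi = 3.14159265f;
    for (int y = std::max(0, cy - radius); y <= std::min(map.height - 1, cy + radius); ++y)
        for (int x = std::max(0, cx - radius); x <= std::min(map.width - 1, cx + radius); ++x)
        {
            float d = std::sqrt(float((x - cx) * (x - cx) + (y - cy) * (y - cy)));
            if (d >= radius) continue;
            float falloff = 0.5f + 0.5f * std::cos(d / radius * pi); // 1 at centre, 0 at rim
            map.At(x, y) -= depth * falloff;
        }
    // The touched region of the texture would then be re-uploaded to the GPU.
}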


As to full physics skeletal mapping in a fighter ... yeah, SF4 wouldn't work so well with that, but a boxing game without it really kills the experience. It's why Fight Night 3 looked great, but as soon as the game got going in motion, the immersion fell apart.

Proper reaction of the player in the game world and the game world around the player is something to strive for and when methods are found, they should be embraced, not shunned.
 
Proper reaction of the player in the game world and the game world around the player is something to strive for and when methods are found, they should be embraced, not shunned.

It's not about embracing or shunning; like everything else in building games, it's about tradeoffs: development time/cost, CPU/GPU cost and gameplay requirements vs perceived value.

Like graphical features, what you place high value on is likely not the same as what others do. Designers, engineers and marketing people (ugh) try to make a set of tradeoffs that make sense to them at the time. They are not always the right ones.

Assuming you don't have infinite budget and development time, you have to draw lines, and many of these are determined by schedule as much as anything.
 
It's not about embracing or shunning; like everything else in building games, it's about tradeoffs.

Tradeoffs I get, but it seems many of these features aren't even considered at the outset (they aren't the types of things one could just add at the midpoint, or later in development).

So it would come down to two things to get things to change:
1) these features being reasonable in cost to implement
2) consumers demanding these features (else face the wrath of red ink and the bargain bin)

For example: Natural Motion for characters. There aren't a lot of games these days which can justify not having it in their games when GTA4 used it way back in 2007 for every character on screen ... including those not integral to the game at hand. And they have since done this for every game they make.

The truth is this is a technology which is unfamiliar to the development team, and the bean counters have seen that they can sell a game without it and "nobody cares". So it never gets a mention.

That last "nobody cares" bit is the part I'm trying to get changed here.
 