The Next-gen Situation discussion *spawn

One is that quality is not the only area of improvement; there's also quantity. I imagine next-gen games might improve by having more stuff on screen. Again, not all genres will benefit, as not every game needs scenes with thousands of NPCs, but the ones that do will benefit.
That would be captured by the metric, too. A "missing" character would cause a much larger error than, say, coarse texturing. And in genres where being able to draw and animate more NPCs would be a huge benefit (i.e. every open-world game), or where some type of game isn't even possible because it can't really be done without thousands of AI NPCs, we're still in a regime where each hardware generation results in a "big" leap in quality.
 
cRPGs are still full of clipping, view ranges in outdoor areas are still crap and I can still count polygons in the terrain.
 
That would be captured by the metric, too. A "missing" character would cause a much larger error than, say, coarse texturing. And in genres where being able to draw and animate more NPCs would be a huge benefit (i.e. every open-world game), or where some type of game isn't even possible because it can't really be done without thousands of AI NPCs, we're still in a regime where each hardware generation results in a "big" leap in quality.

I understand your point. So yeah, by that measure, different game types would have different scores. Your methodology is really good.
 
I find it pretty crazy to think that the next generation will offer the kind of huge graphical leap in 2D graphics that we haven't seen for a long time now. The only thing that's changed in a decade is the screen resolution and the aspect ratio. Regarding that 2D game video you posted, yeah, there's probably an effect here and there that's not really possible on last-gen hardware, but "an effect here and there" is exactly the kind of marginal improvement you end up talking about when you're in the "diminishing returns" phase (I wasn't impressed by the heavy particle effects--that was completely possible last gen). The jump from Odin Sphere, Alien Hominid, and Muramasa to DET is little more than screen resolution and a few effects, nowhere near the jump from, say, Contra 1 to Contra III. Or Double Dragon II to Streets of Rage 3. Or Gradius (NES) to Thunder Force IV (Genesis).

That other game you posted uses 3D graphics.

From the footage I've seen of Crysis 3, there are areas of Crysis 2 I'd say look 75% as good. Easily. The jump from 360 Crysis 2 to PC Crysis 3 is certainly nowhere near as large as the jump from, say, the SNES version of Doom to Perfect Dark.

That kind of proves my point. Better-looking trackside grass is not the kind of dramatic, global improvement we've seen from generation to generation. Most of the global things are done correctly now. The lighting looks physically correct, there are very few visible polygon edges, there's very little texture pixelation, the overall coloration matches reality pretty well, etc. Trackside grass is a small-scale detail. It's a thing, yes, but not nearly on the scale of, say, going from nearest-neighbor sampling to bilinear filtering, or going from flat shading to Phong shading, or going from SDR to HDR lighting, or going from thousands to millions of polygons in a scene, or going from simple texturing to texture + normal + specular. If you drive around the track from a cockpit view instead of parking on the median so you can stare at the grass, the differences between a current-gen driving sim and real-life footage are, in my opinion (and depending on the game, track, and cars), not much bigger, and possibly smaller, than the difference between current-gen sims and last-gen sims.

http://www.youtube.com/watch?v=SIDE58RRzmU
http://www.youtube.com/watch?v=S9s_G1ph_vA&feature=related

Here's a problem with this discussion: you are treating the term "diminishing returns" as equivalent to "no returns." That's not what the term actually means. It doesn't mean "there is no visible improvement at all"; it means "the overall improvement you get is smaller than it was last time." And that's what you'd expect from any kind of converging sequence. Or, if you stick to Grand Prix games, consider these jumps:

Virtua Racing (Genesis) > F1 '99 (PS1) > F1 '06 (PS2) > F1 Champ. Ed. (PS3)

Impose any measure you want; the biggest jump is by far the first one, and the first game is kind of cheating by having a fancy new 3D chip on the cartridge... and if you want an even bigger jump, we could put the NES F1 Racing game at the beginning. :D
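To put the "converging sequence" remark above in concrete terms, here is a minimal sketch; the geometric form and the symbols below are illustrative assumptions, not a model of real hardware:

q_n = Q(1 - r^n), with 0 < r < 1,

where q_n is the quality after generation n and Q is the photorealistic ideal. The per-generation return is then

Delta_n = q_{n+1} - q_n = Q r^n (1 - r),

which is positive at every step but shrinks by a factor of r each generation: the returns keep coming, they just keep getting smaller, which is exactly what "diminishing returns" means.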


Well, you're intrinsically comparing 2D machines (SNES/Genesis) to later 3D ones for a few of your "huge jumps". That seems like cheating.

I'm also not sure how you're defining "largest jump" and the like. You seem to get subjective when it suits you. I bet by many metrics and numbers, like polygon counts, the first jump is not the largest.

Bottom line, say with this timeline of major consoles:

2600 > NES > SNES > PS1 > PS2 > 360

I don't see any particular jump that makes me go "wow, that one is way bigger than the rest". They were all gigantic. I think the PS2 > 360 jump is just as large as any of them. So I'm skeptical that suddenly next gen is the first exception.

And like I said, there are other ways to measure this. The never-ending controversy over the Wii U not being a big enough hardware jump, for example. The constant, desperate message-board pleas for Sony to equip the PS4 with 4GB of RAM instead of 2. The fact that devs frequently run their graphically demanding games at less than 720p, with shaky framerates, because they're desperate for a little more power to put into the visuals. How many people think it's despicable that "most games this gen aren't even 720p, harumph." How many people plead, "why isn't every game 60 FPS???!!" Countless untold pages of comments on the internet waxing eloquent over how great Uncharted 2 looks. Countless pages on the internet over how great Halo 4, an intra-generation improvement, looks. It sets people all atwitter. It matters. I don't see any evidence it matters one ounce less now than ever. Maybe it matters more? Maybe the jumps are getting bigger every gen.

Also, just look at the real-life photo versus the Forza 4 screen. There is definitely one very large jump available to make in there. Probably even two. I don't think we'll get more than halfway between the Forza screen and the photo next gen.


From the footage I've seen of Crysis 3, there are areas of Crysis 2 I'd say look 75% as good.

Maybe, but put it all together and I think Crysis 3 at 1080p, on the highest PC settings, is good enough right now to be called a generational gap. And I'm sure it will be far exceeded, given it's still basically a game coded under current-gen console constraints.

Lastly, you seem a little dismissive of resolution jumps, when they're pretty transformative on their own. You talk about those PS2 2D games, but those aren't going to be very good or pretty experiences today, for the most part, because the resolution sucks. And if you don't believe that, enjoy all the horrible 240p and 360p YouTube videos of those games like Odin Sphere lol.
 
Look at it the other way around: every increase in graphics fidelity comes at a tremendous financial cost, which seems to grow exponentially rather than linearly. How long can we sustain that?
Bigger budgets are unlikely to take any risks, so game genres stagnate; they become more refined, but still more of the same.
Is that really what you want for next gen?
Not me.

It's all very nice to want insanely powerful hardware, but that's not seeing the whole picture, or the side effects for gamers and game developers alike...

That's why I do not believe next-generation consoles will push the envelope as much as this generation did; the costs could rise too much.
 
IMO the tool business is still amateur hour; the level of automation in 3D scene generation is pitiful... eventually software will catch up to the hardware, even for smaller budgets. Also, budgets are by their very nature self-regulating: if there is no profit at a given budget, the game won't be made at that budget... capitalism compresses margins, cry me a river.
 
Also, most of the budget just goes to salaries. I doubt that more power automatically has to lead to bigger teams once a certain point has been reached.
 
Well, you're intrinsically comparing 2D machines (SNES/Genesis) to later 3D ones for a few of your "huge jumps". That seems like cheating.
A 2D machine with a 3D chip added on isn't exactly a 2D machine any more, is it? Besides, it's no more "cheating" than comparing a 32-bit machine to a 64-bit machine, or comparing a machine with a fixed-function GPU to a programmable GPU.
I'm also not sure how you're defining "largest jump" and the like.
I haven't changed the definition the entire time. It's the distance between what you've currently got and your ideal target. In an F1 game, I think it's pretty easy to define--you'd like the game to look as close as possible to actually sitting in the cockpit of an F1 car and driving around a real track.
Lastly, you seem a little dismissive of resolution jumps, when they're pretty transformative on their own.
Like anything else, the rate at which the gap between a finite-resolution image and the infinite-resolution ideal closes keeps declining, i.e., even screen resolution has diminishing returns. I'm going into my office soon; I think I might just whip up a little program to illustrate how.
 
Do you have some data on how the physics of the eyes work?
Lots of information here

Here is what I'm talking about. I took a 1338x2000 photograph and successively downsampled it by 50% of the pixels using nearest-neighbor sampling (worst of all worlds) until I was at 1/8th of the original resolution. I used the metric to quantify the difference between consecutive images in the sequence, i.e., quantifying the real gain you get with each doubling of pixels. The comparison is correctly scaled to simulate viewing each image on the same-size monitor. As you can see, the gains begin diminishing right away, and really taper off at around 1/4 resolution (670x1001). The next step up is 947x1415, but the gain is pretty small, so if this were a game-development context, it might make more sense to do something else with your pixels (of course, if you're already rendering photo-realistically at 670x1001, there's really nothing else to do other than increase the resolution).

screenshot2ah.png


Included are pics of both the original image and the source code (no, it's not efficient code).
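For anyone who wants to reproduce the idea, here is a minimal sketch of the experiment described above; the poster's actual code isn't reproduced in the thread, so the filename, the plain RMS form of the L2 metric, and the use of Pillow are assumptions:

# Successively halve the pixel count of a photo with nearest-neighbour
# sampling, scale every copy back up to the original size (same "monitor"),
# and measure the L2 difference between consecutive steps, i.e. the gain
# from each doubling of pixels.
import numpy as np
from PIL import Image

def l2_diff(a, b):
    """Root-mean-square per-pixel difference between two equal-sized images."""
    return float(np.sqrt(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2)))

ref = Image.open("photo.jpg")                 # hypothetical 1338x2000 input
w, h = ref.size

# Build the sequence: 1/1, 1/2, 1/4, 1/8 of the pixels (linear dimensions
# shrink by 1/sqrt(2) each step), each shown at the original w x h.
shown = []
for step in range(4):
    s = (1 / 2 ** 0.5) ** step
    low = ref.resize((max(1, round(w * s)), max(1, round(h * s))), Image.NEAREST)
    shown.append(low.resize((w, h), Image.NEAREST))

# The "return" of each doubling is the L2 distance between consecutive steps.
for step in range(3, 0, -1):
    gain = l2_diff(shown[step], shown[step - 1])
    print(f"1/{2**step} -> 1/{2**(step-1)} of the pixels: gain = {gain:.2f}")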
 
I'm also not sure how you're defining "largest jump" and the like.
He's doing it by comparison with a perfect image, measuring deviation. His reference to existing games was only for illustration, though that discussion has taken on a life of its own. If there were a real-world example that had closest-possible renditions on each console (let's say Laguna Seca in a racing game rendered on every console going back to the 70s), then his strategy would be a fair one for trying to get a realistic imperfection metric. Of course, if developers were set the task of matching a video clip, they could change their rendering to include specific optimisations to get as close as possible, which is different from a game engine that has to cater for a wider range of possibilities.

But then his plan wasn't really to create such a functional metric, but to use that hypothetical metric as an indicator of diminishing returns. The important point is that fearsomepirate was trying to avoid relative comparisons and find an absolute one, which I consider commendable and logical. It's quite apparent that the level of deviation is decreasing, and my view is that the rate of closing the gap on photorealism is not linear but diminishing.
 
IMO the tool business is still amateur hour; the level of automation in 3D scene generation is pitiful... eventually software will catch up to the hardware, even for smaller budgets. Also, budgets are by their very nature self-regulating: if there is no profit at a given budget, the game won't be made at that budget... capitalism compresses margins, cry me a river.

Yes, totally agree!

So much crying about the unsustainable nature of it all, but it's self-regulating. If there isn't profit, games won't be made. That simple. For this gen, that has been very far from a problem. Are there no games coming out this fall? Don't think so.

It could be that fewer triple-A games get made, but I'm not even sure a certain amount of trimming is bad. There were years when it seemed like just too many games were being made.
 
I'm also not sure how you're defining "largest jump" and the like. You seem to get subjective when it suits you. I bet by many metrics and numbers, like polygon counts, the first jump is not the largest.

I don't want to be rude at all, but there is no other way to put it: for you to say that, you simply must not have understood a good part of fearsomepirate's argument.
He described a completely objective metric and was consistent with it throughout the whole discussion.
It is about final output: the final image a game delivers, and the distance of that from reality. He is consciously not comparing polycounts or texture sizes or whatever technical features are present in synthetic rendering. On the contrary, his point is that linear increments in those features don't result in linear improvements in the final output images, hence the "diminishing returns" with every gen. There are still returns to be had, but he is right that they will always be smaller relative to the amount of technical improvement necessary to achieve them.
 
A new Crysis 2 texture pack for PC has been released.

li5Pw.jpg


h3LBb.jpg


UUObi.jpg


You reckon we'll be seeing that on the next set of consoles?

Most likely, and most likely better usage of tessellation on branches and plants; their skeletal structures have looked the same for the past 5 years.

As I mentioned a while back, the improvements on paper will knock people's socks off, but it's got to be quite a bit more to elevate the visual presentation to a generational leap. Crisp textures are one of a few things that contribute.

I have a feeling the next set of consoles won't be out till 2014, strictly because the projects coming next year are still fruitful: Metal Gear Rising and Ground Zeroes, Tomb Raider, Final Fantasy, Gears of War, God of War, The Last of Us, Crysis 3, and so forth.


The only reason Microsoft started the generation so early last time was Nvidia's costs, which led to the original Xbox being abandoned. No more consoles being sold means no more projects, so they were forced to switch for lack of a fruitful future. Right now we're actually still hearing release dates going into next year. And then there's this:
Microsoft: Xbox 360 has 'more than two years' left
http://www.gamespot.com/news/xbox-360-has-more-than-two-years-left-microsoft-6381373

I can only imagine that when the next E3 does arrive, it won't be about a sudden urge to switch because of a total abandonment in the same year... if and when a new console is in the talks that day. (Meaning Microsoft will most likely want to push through and complete the next year with the 360.)

The year after sounds more likely, and it would be the right decision to make use of that research and development time anyway.
 
I have a feeling the next set of consoles won't be out till 2014, strictly because the projects coming next year are still fruitful: Metal Gear Rising and Ground Zeroes, Tomb Raider, Final Fantasy, Gears of War, God of War, The Last of Us, Crysis 3, and so forth.


You do realize that MG Revengeance, Gears of War, God of War, Tomb Raider, and Crysis 3 are all Q1/early Q2 releases. They can't expect them to carry things through to the holiday. The way the release schedules are shaping up, I would say it almost looks certain that we will have new consoles next year.

MS/Sony have always announced big upcoming games for the following year. Outside of The Last of Us, there's nothing from Sony and nothing from MS. It all seems like a giant "wink, wink" hint of what's coming next year.
 
He's doing it by comparison with a perfect image, measuring deviation. His reference to existing games was only for illustration, though that discussion has taken on a life of its own. If there were a real-world example that had closest-possible renditions on each console (let's say Laguna Seca in a racing game rendered on every console going back to the 70s), then his strategy would be a fair one for trying to get a realistic imperfection metric. Of course, if developers were set the task of matching a video clip, they could change their rendering to include specific optimisations to get as close as possible, which is different from a game engine that has to cater for a wider range of possibilities.

But then his plan wasn't really to create such a functional metric, but to use that hypothetical metric as an indicator of diminishing returns. The important point is that fearsomepirate was trying to avoid relative comparisons and find an absolute one, which I consider commendable and logical. It's quite apparent that the level of deviation is decreasing, and my view is that the rate of closing the gap on photorealism is not linear but diminishing.


But then he says something like "the gap from game X to game Y is clearly the largest" without being objective about it, in the sense of measuring anything. He just says it as a subjective opinion, which is what he's accused me of doing at times. Can't have it both ways.

Because how would we measure it? Unless he runs his image-comparing program on it. And even then, are we sure the metrics that program measures are the ones that matter? Or that the biggest perceptual improvement to the human eye doesn't lie in the 90-100% band instead of all the others? Or whatever?

But in reality game X may have 100 polygons, game Y 900, and game Z 10,000. So game Z is the bigger objective jump by that metric.

And there's still the issue of which has better graphics: Crysis 2 or Killzone 3? If it's all objective, in this neat box, surely we can measure that?

We can't measure it. There are a million variables because it's extremely complex, right?

I don't have the answers, but the Crysis/KZ3 thought experiment pokes holes in his idea that we can objectively measure the quality of graphics perfectly. Lighting (where Crysis 2 excels) seems to me to be one big thing that's probably hard to measure by comparing still photos on monitors.
 
Lots of information here

Here is what I'm talking about. I took a 1338x2000 photograph and successively downsampled it by 50% of the pixels using nearest-neighbor sampling (worst of all worlds) until I was at 1/8th of the original resolution. I used the metric to quantify the difference between consecutive images in the sequence, i.e., quantifying the real gain you get with each doubling of pixels. The comparison is correctly scaled to simulate viewing each image on the same-size monitor. As you can see, the gains begin diminishing right away, and really taper off at around 1/4 resolution (670x1001). The next step up is 947x1415, but the gain is pretty small, so if this were a game-development context, it might make more sense to do something else with your pixels (of course, if you're already rendering photo-realistically at 670x1001, there's really nothing else to do other than increase the resolution).


Included are pics of both the original image and the source code (no, it's not efficient code).


The physics-of-the-eye thing is cute, but the top results have no info relevant to our pixel-counting discussions.

The pics: I was expecting more. I can't really glean anything from that set of 4 non-magnifiable pics. What am I supposed to be looking at?
 
The physics-of-the-eye thing is cute, but the top results have no info relevant to our pixel-counting discussions.
It's pretty straightforward. The lens of your eye projects an image onto your retina. The rods and cones in your retina sense the light and transmit the info to your brain. You cannot see details of the image that are smaller than your photoreceptors. Didn't know this was controversial.
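To put some rough numbers on that, here's a back-of-the-envelope sketch: the ~60 pixels-per-degree figure is the common 1-arcminute acuity rule of thumb, and the screen size and viewing distance are made-up examples, not numbers from this thread.

import math

def pixels_per_degree(screen_width_cm, horizontal_pixels, viewing_distance_cm):
    # Horizontal field of view subtended by the screen, in degrees.
    fov_deg = 2 * math.degrees(math.atan(screen_width_cm / (2 * viewing_distance_cm)))
    return horizontal_pixels / fov_deg

# Made-up example: a roughly 50" TV (about 110 cm wide) viewed from 2.5 m.
for label, px in [("1080p", 1920), ("4K", 3840)]:
    ppd = pixels_per_degree(110, px, 250)
    verdict = "above" if ppd >= 60 else "below"
    print(f"{label}: {ppd:.0f} px/deg ({verdict} the ~60 px/deg acuity rule of thumb)")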
The pics: I was expecting more. I can't really glean anything from that set of 4 non-magnifiable pics. What am I supposed to be looking at?
I explained it pretty thoroughly. The top left plot quantifies the returns in image quality you get by doubling the resolution of the image in terms of the L^2 norm, clearly illustrating the diminishing returns that come with increasing screen resolution.
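Concretely, the quantity plotted is presumably something of the form (the exact normalization and pairing are assumptions, since only the description above is available):

Delta_k = || I_k - I_{k-1} ||_2 = sqrt( (1/(W*H)) * sum_{x,y} ( I_k(x,y) - I_{k-1}(x,y) )^2 ),

where I_k is the copy with 1/2^k of the original pixels, scaled back up to the full W x H grid; the plotted gains Delta_k shrink as the resolution climbs back toward the original.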
 