Diminishing returns with advancing 3D hardware?

"Diminishing returns" is a quantitative idea, so it kind of implies the question...how do we quantify this? I came up with a thought experiment that I think could quantify it. Get a bunch of screenshots going all the way back to the Atari 2600, photographs, and artistic works (sketches, paintings, etc). Poll a bunch of people and ask them to rate the quality of the graphics from 1 to 10. Tell them only that "1" means "This couldn't be much worse," and "10" meaning "I'm not sure if this is a video game, a picture, or an artist's work."

Then plot each picture's rating against the year the game's target hardware was released. Dollars to donuts says you'd see a curve whose growth visibly slows some time in the early to mid 2000s. It's not worth the effort for me to do it myself, but if I were writing for an industry publication, I'd try to put together the poll.
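
If anyone ever did run it, the tabulation and plot would be trivial. A minimal sketch, assuming the poll produced one mean rating per image (the numbers below are placeholders to show the shape of the analysis, not real survey data):

```python
# Minimal sketch of the proposed poll analysis. The data are placeholders, NOT real survey results.
import matplotlib.pyplot as plt

hypothetical_ratings = [
    (1977, 1.3),   # Atari 2600-era screenshot
    (1991, 3.1),   # 16-bit era
    (1998, 5.4),   # early 3D accelerators
    (2005, 7.6),   # roughly where the post guesses the curve starts to bend
    (2013, 8.6),
    (2018, 8.9),   # ratings flatten even though the hardware keeps getting faster
]

years = [year for year, _ in hypothetical_ratings]
scores = [score for _, score in hypothetical_ratings]

plt.plot(years, scores, marker="o")
plt.xlabel("Year the game's target hardware was released")
plt.ylabel("Mean perceived quality (1-10)")
plt.title("Hypothetical diminishing-returns curve")
plt.show()
```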

It's hard to quantify diminishing returns precisely, but we can specify the processes that lead us there. The number of pixels per triangle on screen, the number of operations that go into each final screen-space pixel, and the number of screen-space pixels we have to render together define how quickly we reach a diminished rate of visual improvement. A simple doubling or even quadrupling of capability needs new techniques and methods if the image is to improve beyond just 4x the triangles, texels, and pixels.
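
A back-of-envelope version of that argument, with a toy cost model and purely illustrative numbers (no real renderer breaks down this cleanly):

```python
# Toy frame-cost model, illustrative only: cost ~ triangles * per-vertex work + pixels * per-pixel work.
def frame_cost(triangles, vertex_ops, pixels, pixel_ops):
    return triangles * vertex_ops + pixels * pixel_ops

base = frame_cost(triangles=500_000, vertex_ops=50, pixels=1280 * 720, pixel_ops=100)

# Spend a 4x jump in raw throughput on "more of the same":
# 4x the triangles and 4x the pixels (720p -> 1440p), with identical per-pixel shading.
brute_force = frame_cost(triangles=2_000_000, vertex_ops=50, pixels=2560 * 1440, pixel_ops=100)

print(brute_force / base)  # ~4.0: the whole budget is consumed without shading any pixel more richly
```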

Hence, to me each of the 3D generations (and even the steps within a generation) is about much more than just more polygons, texels, and pixels; it's also about what you can do with them. Each new console has layered on more hallmark capabilities to approximate realistic rendering.
 
It's hard to quantify diminishing returns precisely, but we can specify the processes that lead us there.

The number of pixels per triangle on screen, the number of operations that go into each final screen-space pixel, and the number of screen-space pixels we have to render together define how quickly we reach a diminished rate of visual improvement. A simple doubling or even quadrupling of capability needs new techniques and methods if the image is to improve beyond just 4x the triangles, texels, and pixels.

Hence, to me each of the 3D generations (and even the steps within a generation) is about much more than just more polygons, texels, and pixels; it's also about what you can do with them. Each new console has layered on more hallmark capabilities to approximate realistic rendering.

What you're talking about are all inputs to creating the final experience. A customer does not actually see pixels, HDR lighting, polygon transformations, etc. He sees Geralt of Rivia riding his horse through the forest. The technology is an input to creating the customer experience. Most people don't even have a clue what the technological capabilities of their computing hardware are.

From the Sega Genesis to the PS2 is 12 years. From the PS2 to the PS4 is 13 years. Did an equivalent time gap result in an equivalent jump in quality? Most people who aren't trying to count pixels and effects would look at this and say, "Obviously not."

 
What you're talking about are all inputs to creating the final experience. A customer does not actually see pixels, HDR lighting, polygon transformations, etc. He sees Geralt of Rivia riding his horse through the forest. The technology is an input to creating the customer experience. Most people don't even have a clue what the technological capabilities of their computing hardware are.

From the Sega Genesis to the PS2 is 12 years. From the PS2 to the PS4 is 13 years. Did an equivalent time gap result in an equivalent jump in quality? Most people who aren't trying to count pixels and effects would look at this and say, "Obviously not."

Funny that I was originally going to bring up GT4 in the post you quoted as a great example of photorealistic graphics despite its relative age. Polyphony knew how to max out the hardware and use visual tricks. The flip side is that they couldn't have something like a day-night cycle with reasonably correct dynamic shadows and lighting, which made it difficult to have the kind of dynamic conditions a game these days can have. They had to build each track to a fixed set of conditions. You simply couldn't turn the dial from day to night, or from sunny to dark and cloudy.

My point was that each new console generation brought capabilities needed to simulate real-life visual processes beyond just displaying polygons and texels. For example, the first three PlayStations are very easy to differentiate, even for gamers who don't know anything about the tech, because they still see the results on screen:

PS1: Rudimentary 3D with no subpixel precision. Affine texture mapping without interpolation or mipmapping, and lighting limited to geometry-based techniques (see the sketch after this list).
PS2: Subpixel-precision rendering with perspective-corrected, multilayered, mipmapped textures and a lot of pixel blending for special effects. Dynamic shadows, reflections, and lighting became a possibility, along with rudimentary texture manipulation.
PS3: Programmable shaders gave developers new ways to simulate more detail efficiently with bump, normal, and parallax mapping. Large-scale dynamic shadowing, lighting, and ambient occlusion became a thing.
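
The PS1-to-PS2 texturing jump in that list is easy to see in code. Here's a minimal sketch of interpolating one texture coordinate across a span, affine versus perspective-correct; the endpoint values are made up purely for illustration:

```python
# Sketch: affine vs. perspective-correct interpolation of a texture coordinate u
# across a span between two projected vertices. Endpoint values are illustrative only.

def affine_u(u0, u1, t):
    # PS1-style: interpolate u linearly in screen space, ignoring depth.
    return u0 + (u1 - u0) * t

def perspective_u(u0, w0, u1, w1, t):
    # Perspective-correct: interpolate u/w and 1/w linearly, then divide them back out.
    u_over_w = u0 / w0 + (u1 / w1 - u0 / w0) * t
    one_over_w = 1 / w0 + (1 / w1 - 1 / w0) * t
    return u_over_w / one_over_w

# Near endpoint at depth w=1, far endpoint at depth w=4, texture coordinate running 0..1.
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, round(affine_u(0.0, 1.0, t), 3), round(perspective_u(0.0, 1.0, 1.0, 4.0, t), 3))
# The two columns diverge toward the middle of the span; that divergence is the
# texture warping/"swimming" you see on large PS1 polygons.
```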

The PS4 and Xbox One, even with a lot of new built-in features, are from a visual standpoint really just bigger, badder versions of what we saw on the PS3 and Xbox 360. I've yet to see a game this generation that realistically couldn't have been made on the previous gen, even with visual and gameplay downgrades. You could chalk a lot of that up to increasing budgets and less risk-taking, along with the hardware not being enough of an improvement over the previous gen.
 
and 10fps on any kind of consumer hardware...
Yeah but back then that was mostly ok, especially considering it was fairly sophisticated real-time 3D.

Frame rate is another variable in the diminishing returns argument though. I feel like 1999 was the year that 60 fps became fairly commonplace.
 
Another way to look at diminishing returns is to look at the hardware side of things, which is easiest to do on the PC.

For both GPUs and CPUs, systems remain quite decent for gaming far longer than they used to. Back in the day, a really good high-end video card might last two to three years before you had to make some serious sacrifices in game quality to keep the framerates up. Today, 4+ year-old GPUs still perform quite admirably. For instance, only last week I finally upgraded my GTX 970, a nearly 4-year-old card that can still play even the newest games at pretty high resolutions (most games work well at 1440p).

Furthermore, the product cycle has slowed really dramatically. The GTX 970/980 came out in 2014, and we have seen only one new generation of video cards since then (though a new gen is slated to be released relatively soon). There used to be new generations released every year, with refreshes every six months.

For CPUs, which were always a comparatively minor factor in PC gaming, the life span is even longer. You could probably game well on an 8-year-old system today as long as you had a decent video card to go with it (top of the line 8 years ago would have been the Core i7-990X, a 6-core, 12-thread 3.46 GHz processor). It's hard to be sure of that, since people rarely benchmark such old CPUs in new games, but I bet it'd hold up just fine in most of them. Even if it isn't perfect, it's at least conceivable you could do it.

But try doing that in 2000. Eight years prior to that, Intel was at the end of the 486 life cycle (they released the 66 MHz 486DX2 in September) and was close to releasing the very first Pentium systems. I can't imagine even attempting to play a game like Deus Ex or The Operative: No One Lives Forever (both released in 2000) on a 486DX2. It might not even function at all, as it might depend on architecture changes made in the Pentium line, even if the processing power were remotely sufficient.
 
Isn't part of that due to the fact that Intel held back core counts for so long while single-threaded improvements stagnated? Now that the core wars have begun, it might force another leap in hardware requirements.
 
Frame rate is another variable in the diminishing returns argument though. I feel like 1999 was the year that 60 fps became fairly commonplace.

The Dreamcast brought in sufficiently capable yet affordable hardware that met the standard needed for arcade ports beyond just fighting games. It was the first home system with both refined 3D hardware and disc-based media, after all. The PS1 lacked the former, and the N64 lacked the latter (not to mention the DC was much more powerful).

For CPUs, which were always a comparatively minor factor in PC gaming, the life span is even longer. You could probably game well on an 8-year-old system today as long as you had a decent video card to go with it (top of the line 8 years ago would have been the Core i7-990X, a 6-core, 12-thread 3.46 GHz processor). It's hard to be sure of that, since people rarely benchmark such old CPUs in new games, but I bet it'd hold up just fine in most of them. Even if it isn't perfect, it's at least conceivable you could do it.

But try doing that in 2000. Eight years prior to that, Intel was at the end of the 486 life cycle (they released the 66 MHz 486DX2 in September) and was close to releasing the very first Pentium systems. I can't imagine even attempting to play a game like Deus Ex or The Operative: No One Lives Forever (both released in 2000) on a 486DX2. It might not even function at all, as it might depend on architecture changes made in the Pentium line, even if the processing power were remotely sufficient.

I think 1) you had a very long console cycle that let PC CPUs last much longer, 2) the current cycle did not put a premium on a high-end CPU or exotic architecture, and 3) AMD was a non-threat from 2006 to 2011. Then Bulldozer came out and AMD's non-threat status got extended to 2017. There was little reason to upgrade if you were already at the top end, and Intel milked it. Plus, Intel probably looked at the numbers and saw more reason to focus on mobile and tablets, where improvements were still needed. What we got was Westmere and its descendants, while the Atom line I guess improved, and of course Intel learned to leave handsets to ARM.

Nvidia likewise went in a similar direction, with Tegra being a not-super-successful handset and tablet processor but the progenitor of what might be their most important products in the coming AI driving revolution. And of course their bread-and-butter GPU line was refocused into a single production line of GPUs that could be binned for desktop, mobile, and servers respectively.

Basically, the focus on mobile left the desktop with marginal improvements while huge leaps in power were made in laptops. I don't completely remember how well Core 2 Quads worked for mobile, but it was with Westmere that I remember Intel first really starting to push quad cores as a more mainstream option. Even the 2.13 GHz dual-core i3-330M in my Sony Vaio from 2010 made it feel like a really snappy machine, much more so than the 2.26 GHz dual-core P8600 in my previous one. Whether it was the reintroduction of Hyper-Threading or the wider-issue core, Westmere was a big leap from Core 2. With improvements in clock and process came more capable SIMD and small refinements that brought further parity between mobile and desktop chips.
 
Core 2 was an extremely power-efficient chip back then, but I think Core 2 Quad was pretty rare for mobile. I have a feeling part of that was that having 4 cores wasn't very useful yet, and you took a hit to overall clock speed / single-thread perf because those chips didn't have Turbo functionality and had to stay within a reasonable TDP.

And yeah I was thinking of Dreamcast. Several times faster than N64 without a doubt. PC finally had some refined GPUs too with Voodoo3, G400, TNT2, etc.
 
Isn't part of that due to the fact that Intel held back core counts for so long while single-threaded improvements stagnated? Now that the core wars have begun, it might force another leap in hardware requirements.
I doubt it. Quantum limitations have come into play which make further improvements in circuit density vastly more challenging than they were in decades past. Furthermore, increases in frequency have largely stopped due to limitations which stem from electrodynamics, while increases in parallelism are only useful for a subset of processing applications.
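
One way to make the "parallelism only helps a subset of applications" point concrete is Amdahl's law: if only a fraction p of a workload parallelizes, n cores can speed it up by at most 1 / ((1 - p) + p / n). A quick sketch (the 60% figure is just an illustrative assumption):

```python
# Amdahl's law: upper bound on speedup when only a fraction p of a workload parallelizes.
def amdahl_speedup(p, n_cores):
    return 1.0 / ((1.0 - p) + p / n_cores)

# Assume (purely for illustration) that 60% of a game's frame time parallelizes cleanly.
for cores in (2, 4, 8, 16, 64):
    print(cores, round(amdahl_speedup(0.60, cores), 2))
# The speedup climbs from ~1.4x to only ~2.4x: piling on cores can't substitute
# for per-core (frequency/IPC) gains on workloads like this.
```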

It's difficult to pursue ever-denser circuits due to physical limitations, harder to get more processing power out of the density we do gain, and harder still to reach higher clocks. Hardware is being squeezed on all sides. The only way to break out of this is for an entirely new computing paradigm to come around, but any such paradigm will take a monumental amount of time and money to become even competitive with existing designs, let alone superior (likely meaning decades of well-funded R&D).

So settle in. Diminishing returns are here to stay. We are entering an era of little to no noticeable improvements in processor power, which means that new frontiers of gaming technology are coming to an end.

The good thing? The stuff today's high-end computers are capable of is nothing short of amazing. I have little doubt this level of quality will be common in games within 10 years:

We probably won't go noticeably beyond that over the following 30 years, but who cares?
 
Core 2 was an extremely power-efficient chip back then, but I think Core 2 Quad was pretty rare for mobile. I have a feeling part of that was that having 4 cores wasn't very useful yet, and you took a hit to overall clock speed / single-thread perf because those chips didn't have Turbo functionality and had to stay within a reasonable TDP.

And yeah I was thinking of Dreamcast. Several times faster than N64 without a doubt. PC finally had some refined GPUs too with Voodoo3, G400, TNT2, etc.

Mobile C2Q was definitely for larger notebooks. I was checking Wiki, and to my surprise there are apparently no 32nm Westmere quads for notebooks; they are all dual-cores :-? Weird that there was no 32nm shrink of Clarksfield. But I think it further supports my idea that Intel wanted to get dual cores up to snuff for the mainstream. At the same time, they could boost the margins on the high-end binnings, even labeling the top-end ones "i7". I'm sure quad-core Clarksfield was pretty nice, though.

I wish I still had my i3-330M Vaio. Best laptop I ever owned :cool:
 
Funny that I was originally going to bring up GT4 in the post you quoted as a great example of photorealistic graphics despite its relative age. Polyphony knew how to max out the hardware and use visual tricks. The flip side is that they couldn't have something like a day-night cycle with reasonably correct dynamic shadows and lighting, which made it difficult to have the kind of dynamic conditions a game these days can have. They had to build each track to a fixed set of conditions. You simply couldn't turn the dial from day to night, or from sunny to dark and cloudy.

You're counting effects. Consumers don't judge quality by counting up features, then comparing who has the longest list.

My point was that each new console generation brought capabilities needed to simulate real-life visual processes beyond just displaying polygons and texels.

And my point is that the impact this has on visual quality is lower and lower relative to time.

For example, the first three PlayStations are very easy to differentiate, even for gamers who don't know anything about the tech, because they still see the results on screen:

Diminishing returns does not mean "more investment leads to NO improvement." It means "more investment leads to LESS improvement." I think until you grasp the significance of that, you're talking around the subject, not really addressing it. Consumers do not count sub-pixels or read about shader pipelines. What they do is think, "Wow, the latest games on the latest PC hardware don't look like nearly as big a jump over a base-model PS4 as I remember the Dreamcast being over the PS1."
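
To put that LESS-versus-NO distinction in toy-model terms (purely illustrative, not fitted to any real perception data): if perceived quality grows roughly like the logarithm of compute, every doubling of hardware still adds something, just a smaller and smaller something relative to what you already have.

```python
import math

# Toy model only, not fitted to anything: perceived quality ~ log2(compute).
def perceived_quality(compute):
    return math.log2(compute)

prev = perceived_quality(2)
for doubling in range(2, 12):
    quality = perceived_quality(2 ** doubling)
    # Each doubling of hardware still buys the same absolute +1 step, so the
    # relative improvement keeps shrinking: LESS return every time, never NO return.
    print(f"{2 ** doubling:>5}x compute -> quality {quality:.0f} ({(quality - prev) / prev:.0%} better than the last step)")
    prev = quality
```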
 
Another way to look at diminishing returns is to look at the hardware side of things, which is easiest to do on the PC.

For both GPUs and CPUs, systems remain quite decent for gaming far longer than they used to. Back in the day, a really good high-end video card might last two to three years before you had to make some serious sacrifices in game quality to keep the framerates up. Today, 4+ year-old GPUs still perform quite admirably. For instance, only last week I finally upgraded my GTX 970, a nearly 4-year-old card that can still play even the newest games at pretty high resolutions (most games work well at 1440p).

I got a high-end gaming PC last Black Friday. I don't even know what to run on it. Divinity: Original Sin 2 looks pretty good on max settings, but you know what, so does Witcher 3 on my launch PS4. And that's a 4-year-old machine. 4 years just ain't what it used to be. Between the Nintendo 64 and the GeForce 2 is 4 years. Between the PS2 and the DX9 era is 4 years.

I mostly use it to play PS2 games in HD, to be honest. :D
 
You're counting effects. Consumers don't judge quality by counting up features, then comparing who has the longest list.

Those features are a big part of what developers can do with the given hardware to make the games you want to compare.

And my point is that the impact this has on visual quality is lower and lower relative to time.

No one is doubting you there.

Diminishing returns does not mean "more investment leads to NO improvement." It means "more investment leads to LESS improvement." I think until you grasp the significance of that, you're talking around the subject, not really addressing it. Consumers do not count sub-pixels or read about shader pipelines. What they do is think, "Wow, the latest games on the latest PC hardware don't look like nearly as big a jump over a base-model PS4 as I remember the Dreamcast being over the PS1."

I understand what diminishing returns are in the context of graphics (combat aircraft are a just-as-interesting subject for diminishing returns, BTW), but you're probably right about me dancing around the subject. As a hardware enthusiast, I want to experience the meaningful capabilities beyond what a simple screenshot can provide, and I put a lot of value on what one system realistically could do that the previous one could not.

But what happens when the people being tested don't just view screenshots, or even passive footage, but actually engage in meaningful gameplay? I think we should be looking at it more like KimB does:

I would do it in the following way:
New technologies can be categorized into three groups.
1. Technologies which allow new gameplay mechanisms. Examples include first-person gaming (e.g. the original Wolfenstein), fully 3D environments (e.g. Quake), and networked multiplayer (e.g. the original Doom).
2. Technologies that allow more in-depth games with more variation (think Donkey Kong vs. Super Mario Bros.).
3. Technologies which improve visual and audio fidelity.

That would be a very interesting test and set of plot points. You have games like Minecraft and Fortnite shifting the paradigm, and graphics, as long as they meet a meaningful threshold, are not as important as they used to be in drawing in consumers.
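
If anyone actually wanted to turn KimB's three buckets into plot points, the bookkeeping is trivial; the hard part is agreeing on the list. A sketch using a few examples already named in this thread (years approximate, categorization left as an argument for the thread):

```python
from collections import Counter

# Sketch of the bookkeeping for that test. The entries are just examples already named
# in this thread, with approximate years -- a placeholder for real data, not a dataset.
sample_innovations = [
    ("first-person gaming (Wolfenstein 3D)", 1992),
    ("networked multiplayer (Doom)", 1993),
    ("fully 3D environments (Quake)", 1996),
    ("open-ended sandbox survival (Minecraft)", 2011),
    ("100-player battle royale (Fortnite)", 2017),
]

per_decade = Counter((year // 10) * 10 for _, year in sample_innovations)
for decade in sorted(per_decade):
    print(f"{decade}s: {per_decade[decade]} innovation(s) in the sample")
```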
 
Those features are a big part of what developers can do with the given hardware to make the games you want to compare.

Right, and if you're into automotive engineering, you can give a big list of technologies that make a new mid-sized sedan in 2018 different from one in 2003. The list is huge! Yet if you took an ordinary customer, and told him to take like-new sedans from 1988, 2003, and 2018 for test drives, and asked him which two cars had the greatest jump in quality and driving experience, the vast majority of customers will tell you it's no comparison: '88 to '03 feels like different planets compared to '03 to '18.

But what happens when the people being tested don't just view screenshots, or even passive footage, but actually engage in meaningful gameplay? I think we should be looking at it more like KimB does:

You'd get similar responses. In terms of home video games, I'll bet that if we tried to categorize and enumerate innovations, we'd see explosions around the Atari 2600 and the PS1 that comparatively taper off afterward. The reason is that gameplay innovations have become increasingly marginal as well. The vast majority of innovations for at least the last ten years have been additions to previous revolutions, not revolutions themselves. I mean, we're talking about adding day-night cycles to the racing sim, not something as revolutionary as the creation of the racing sim in the first place, which happened over 20 years ago. From the PS1 to the PS4 is a 20-year leap. But 20 years before the PS1 is this:

 
Right, and if you're into automotive engineering, you can give a big list of technologies that make a new mid-sized sedan in 2018 different from one in 2003. The list is huge! Yet if you took an ordinary customer, and told him to take like-new sedans from 1988, 2003, and 2018 for test drives, and asked him which two cars had the greatest jump in quality and driving experience, the vast majority of customers will tell you it's no comparison: '88 to '03 feels like different planets compared to '03 to '18.

You'd get similar responses. In terms of home video games, I'll bet that if we tried to categorize and enumerate innovations, we'd see explosions around the Atari 2600 and the PS1 that comparatively taper off afterward. The reason is that gameplay innovations have become increasingly marginal as well. The vast majority of innovations for at least the last ten years have been additions to previous revolutions, not revolutions themselves. I mean, we're talking about adding day-night cycles to the racing sim, not something as revolutionary as the creation of the racing sim in the first place, which happened over 20 years ago. From the PS1 to the PS4 is a 20-year leap. But 20 years before the PS1 is this:


But you talk about trying to quantify this. What's the point when we already know what the general correlation is going to be? We're fighting transistor densities, material science, and even economics here as well.

"X -> Y generation didn't have as big an improvement compared to W -> X" is more important when you bring in the context of the related subjects. I think is more worth the time than quantifying diminishing returns based simply on screenshots.

We need to dig deeper to get to the meaningful stuff.
 
But you talk about trying to quantify this. What's the point when we already know what the general correlation is going to be? We're fighting transistor densities, material science, and even economics here as well.

"X -> Y generation didn't have as big an improvement compared to W -> X" is more important when you bring in the context of the related subjects. I think is more worth the time than quantifying diminishing returns based simply on screenshots.

We need to dig deeper to get to the meaningful stuff.
From a game development standpoint, the diminishing returns could be seen as a tremendous opportunity, one that could lead to significant benefits for users. Diminishing returns means a narrowing of expected hardware requirements, and it also means stability over longer periods of time in terms of the technical aspects of the field.

That means the real growth in game design over the next ten years will have to come from the creative side of the equation: coming up with new ways for users to interact with games (new ways to get gameplay ideas across, as well as new input methods). It also means aspects like story are going to become more and more important, because that's the only way new games will be able to really distinguish themselves.

On the hardware side of things, the situation is far more dire. It means that hardware manufacturers are going to find it more and more difficult over time to sell their products.
 
But you talk about trying to quantify this. What's the point when we already know what the general correlation is going to be? We're fighting transistor densities, material science, and even economics here as well.

The point is to give people a meaningful way to think about what "diminishing returns" actually means (i.e., the "return" is the customer's experience, not how realistically sweat is rendered on a warrior's skin), especially as there are apparently still people who think on-screen pixel counts and shader operations prove there's no such thing. It's important not just for winning internet arguments, but for game developers to understand the world we live in now... or rather, the world we've been living in for a long time now. Remember, Crytek bet the farm on being the premier PC developer based largely on graphics, and ultimately lost that bet.

This is that 11-year-old game:

11 years before that was Quake 1. Minecraft came out 2 years later and would run on my aging laptop.
 