120fps at 4k can still look fake

I've always thought the obsession with resolution has cost the evolution of graphics a great deal, and I mean all aspects of graphics, not just characters or skin. We need better water, shaders, polygons, textures, physical interactions, etc.


If the resolution-above-all approach continues (4K and so on), I fear the development of better visuals will be very slow indeed. I think we need to put a stop to the push for bigger monitors (and thus higher resolutions). The sizes available now are sufficient.
 
I would argue that it largely depends on the type of game you're playing.

If you're playing something like GTA, then yes, a physically accurate 720p path-tracer is probably ideal, because it will increase your immersion in the (mostly urban) environment, especially with all the light sources at night and the reflective surfaces (cars, windows, etc.). If the low resolution makes things slightly blurry, it's not the end of the world.
Patently not. There would've been uproar if the GTA V remaster had not run at full 1080p.
 
But the point is, 1080p is the new bar that games have to aim for or risk getting slated by the press and vocal minority.
 
But the press and the vocal minority have never seen a 720p game with realistic graphics, indistinguishable from real life.

It's like the chicken and the egg all over again.
 
I think the OP wants games similar to the Far Cry 3 in Real Life short by Devin Graham, Leanna Pareja, Jacob Schwarz, and Clint Jones. Even at 360p, it's more realistic than any game ever made.
 
Ugh.

Decades were spent watching television at "480i" or less, with very "realistic" results. What makes something look realistic is not tied to any single dimension such as resolution. What truly drives the perception of reality is the very, very small things your brain won't consciously observe but will absolutely notice: subtle nuances in movement (continuity of motion in clothing, hair, even skin moving over bones and muscles); small vagaries in lighting (no point lights, shadows that are genuinely soft and cast from multiple sources, driven by hundreds of light bounces that pick up all manner of color from the surrounding area); and small nuances in physical dynamics (sure, 9.8 m/s^2 isn't hard to compute, but drop a piece of paper four times in a row from standing height and it will land in four different spots).

None of these are graphics-specific, and they're not an exhaustive list of everything that makes a scene look real. Getting even ONE of them absolutely perfect would still leave your game looking fake if the rest weren't there.

Resolution? Doesn't even register on the list of things to get right, right now.
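
To illustrate the paper-drop point with a toy example (my own sketch, nothing more): the deterministic part of the fall is trivial, but the tiny, effectively random aerodynamic nudges are what make each drop land somewhere different, and that's exactly the kind of variation our brains expect to see.

```python
# Hypothetical sketch: the same "drop a piece of paper" simulated four times.
# Deterministic gravity is trivial; the perceived realism comes from the tiny
# perturbations that make every drop land somewhere different.
import random

def drop_paper(height_m=1.7, dt=0.01, seed=None):
    """Crude 2D toy model: gravity plus random per-step air 'nudges'."""
    rng = random.Random(seed)
    x, y = 0.0, height_m          # horizontal offset (m), height above floor (m)
    vx, vy = 0.0, 0.0
    g = 9.8
    while y > 0.0:
        # A random lateral gust and drag-like damping stand in for real aerodynamics.
        ax = rng.uniform(-2.0, 2.0) - 0.5 * vx
        ay = -g - 3.0 * vy        # heavy damping: paper settles toward a low terminal velocity
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return x

if __name__ == "__main__":
    for i in range(4):
        print(f"drop {i + 1}: landed {drop_paper(seed=i):+.2f} m from the start point")
```

A real cloth or paper simulation would model the aerodynamics properly, of course; the point is only that identical initial conditions plus small perturbations give visibly different outcomes, and that's what reads as "real".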
 
Well, not exactly. No one wants 480i. The point is that we're now at a level where, for 99.9% of people, resolution is fine at 1080p, given the average screen size in people's homes.

The problem is that manufacturers will want growth, and growth in TVs unfortunately is dictated by cycles of new technologies which push people to upgrade.

First widescreen, then HD, then 3D, now 4K, eventually 8K, until the futility of it all is finally understood.

We're not automatically going to want 100" TVs, which makes such high resolutions pretty much useless. But rest assured, Samsung, Sony and everyone else will be pushing those 42" 8K TVs before you know it.
 
I think you missed the point I was trying to make. The thread is focused on "looking fake", and the human perception of fake vs. real is not linked to resolution. Thus, in the context of this thread, instead of saying "no one wants 480i", the proper focus should be "no one wants it to look fake". Said a different way: resolution is not a measure of "real looking", e.g. a "real looking" scene could conceivably exist at 480i, while a wholly "fake looking" scene can just as easily be observed at 4K.

Further to the point, the same scene may conceivably look worse at a higher resolution. As a thought exercise, being able to scrutinize details on a 4K TV may lead you to perceive the image as "fake", whereas that same image on an older "SD" TV might have looked far more real. Consider an old movie that did NOT use CGI, but instead used elaborate physical set pieces. In the days of 480i televisions, you might not have been able to discern that a tumbleweed in the wilderness was actually a bunch of twisted chicken wire. However, when that movie is re-mastered from the original film source onto a 1080p Blu-ray, you can immediately tell that the tumbleweed is "fake".

I realize we're all thinking about CGI in here, but "looking fake" isn't limited to computer graphics alone.

I realize that display device and computation device manufacturers all want us to buy more, bigger and better in the form of more hertz, more pixels, more colors, etc. That really has no specific bearing on what "looks fake"; it's a purely tangential line. It might matter to someone who measures their manhood by how "much" their current home theater has of whatever dimension you care to tabulate (petapixels, megawatts, gigahertz, tachyons, GRB's, pr0nz), but that's really not the discussion, now is it?
 
(I have nothing to contribute... but I had to, since it always makes me smile when I see the title of this thread)

It will certainly take quite some time until we have enough GPU performance to render photo-realistic graphics at 120 fps + 4K :)
 
Surely we can do both.
Bigger monitors / sharper text are needed for productivity... I've been surprised by just how much I like Apple's Retina displays. So that hardware is going to be around, and people will use it for games.

Photorealism is restricted by content and game budgets (how much content can be authored and refined for the more realistic effects); ramping up resolution is easier to retrofit.

So, I agree with the OP's statement in that merely ramping up the resolution doesn't really help much if the underlying models are simple, but equally, I think it's an effect, not a cause, and it won't restrict anything.

Developers can still give users a choice, and lowering AA samples or framerate still gives leeway at a higher resolution.

It's possible that higher resolution will make more of a difference for stereoscopic & VR experiences.
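
To put very rough numbers on that resolution/AA/framerate choice (purely illustrative, not real GPU costs): shading work scales roughly with pixels × AA samples × framerate, so the same ballpark budget can be spent in quite different ways.

```python
# Illustrative only: shaded samples per second ~ width * height * AA samples * fps.
# Real costs are not this linear, but it shows how a similar rough budget can be
# spent on resolution, anti-aliasing, or framerate.
configs = [
    ("1080p, 4x AA, 60 fps", 1920, 1080, 4, 60),
    ("1440p, 2x AA, 60 fps", 2560, 1440, 2, 60),
    ("4K,    1x AA, 60 fps", 3840, 2160, 1, 60),
    ("4K,    2x AA, 30 fps", 3840, 2160, 2, 30),
]

for name, w, h, aa, fps in configs:
    samples_per_sec = w * h * aa * fps
    print(f"{name}: {samples_per_sec / 1e9:.2f} Gsamples/s")
```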
 
(I have nothing to contribute... but I had to, since it always makes me smile when I see the title of this thread)

It will certainly take quite some time until we have enough GPU performance to render photo-realistic graphics at 120 fps + 4K :)

Pessimist! The Genuine Fractal guys just need to patent a hardware implementation, done! They'll pull the stunt of fractal (depth-buffer-less) 3D reprojection too, just wait for it. :p
 
What if it were a side-scrolling game with a mix of realtime and pre-rendered stuff, à la DuckTales Remastered? Photo-realistic graphics at 4K 120 fps could be done then, right?
 
This recent post from Timothy Lottes touches on similar topics:

http://timothylottes.blogspot.com.br/2015/01/leaving-something-for-imagination.html

Lack of information can invoke the perfect reconstruction of the mind. For visuals, seems like the deepness and slope of the uncanny valley is proportional to the spatial temporal resolutions of the output device. This 4K and eventual 8K craziness, while awesome for 2D and print, has an unfortunate consequence for real-time realistic 3D: available perf/pixel tanks simultaneously as required perf/pixel skyrockets due to the increased correctness required for perceptual reality.

The industry continues to shoot itself in the foot focusing on quantity instead of quality, raising the baseline cost required for digital 3D content to climb out of the uncanny valley. (...)

(...) My personal preference is for the most extreme tradeoffs: drop view dependent lighting, go monochrome, drop resolution, drop motion blur, drop depth of field, no hard lighting, no hard shadows, remove aliasing, add massive amounts of film grain, maximize frame-rate, and minimize latency. Focus on problems which can be solved without falling into the valley, produce something which respects the limits of the machine, and yet strives for timeless beauty.
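
To put rough numbers on the perf/pixel squeeze he describes (my own back-of-the-envelope, purely illustrative): going from 1080p/30 to 4K/120 leaves about 16x less frame time per pixel, while the "perceptual reality" bar arguably demands more work per pixel, not less.

```python
# Purely illustrative: frame-time budget per pixel at different targets.
targets = [
    ("1080p @ 30 fps", 1920 * 1080, 30),
    ("1080p @ 60 fps", 1920 * 1080, 60),
    ("4K @ 60 fps",    3840 * 2160, 60),
    ("4K @ 120 fps",   3840 * 2160, 120),
]

for name, pixels, fps in targets:
    ns_per_pixel = 1e9 / (pixels * fps)
    print(f"{name}: {ns_per_pixel:.1f} ns of frame time per pixel")
```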

Any thoughts?
 
Any thoughts?
In terms of convincing the viewer that your graphics represent a realistic underlying source scene, he's not wrong (although depending on the actual framerates and the motion blur quality, you might do better with a lower framerate and more motion blur).
 
I wouldn't be happy with massive amounts of film grain; it's an effect I thoroughly dislike. And I'm not sure what he means by "go monochrome".
And resolution has its place; the res race won't end until we hit the limits of human vision (whatever that may be). It's needed for detail and also for the size of what you can see.
I do think that 3440x1440 UltraWide QHD may be a better option than 4K because of the aspect ratio.
 
I wouldn't be happy with massive amounts of film grain; it's an effect I thoroughly dislike. And I'm not sure what he means by "go monochrome".
And resolution has its place; the res race won't end until we hit the limits of human vision (whatever that may be). It's needed for detail and also for the size of what you can see.
I do think that 3440x1440 UltraWide QHD may be a better option than 4K because of the aspect ratio.

That monitor is restricted to 50 Hz over HDMI, though. That can be a better compromise than falling into the 30/60 duality fallacy, especially from the perspective that consoles are limited to HDMI.
 
I didn't realise that; thankfully HDMI isn't the only input. (Is the HDMI input an older version? I thought the latest spec could do 60 Hz.)
PS: What's your opinion: if you were given a monitor, would you choose QHD over 4K?
 
I wouldn't be happy with massive amounts of film grain; it's an effect I thoroughly dislike. And I'm not sure what he means by "go monochrome".

That has me thinking about Quake 1: the lighting was "monochrome" in that it was only shades of white, and the palette was limited, though that was simply because the game was made with 8 bpp in mind, with VGA Mode X as the lowest resolution (software rendering on the slowest Pentium and a PCI 2D graphics card).

When run with the OpenGL renderer, Quake 1 has many of the qualities the author of this rant is seeking. Framerate is very high on a modest GPU (you will saturate your monitor's resolution/refresh rate combo), latency is minimized, anti-aliasing is cheap [even 4x RGSS], and you are free of any motion blur, bloom and other shader-based distractions.
I'm not sure if "drop view dependent lighting" means going back to lightmaps (which were pretty advanced in Quake 1, actually; besides their limitations, they used a good algorithm that took its toll when "compiling" the map, and they were absolutely everywhere).

The author is also making a case against pixels as "little squares". While I won't necessarily agree on film grain, he's making the case for a perfectly anti-aliased, lowish-res rendering with 4K (or some otherwise absurdly high resolution) as the target output, i.e. some reconstruction filter that does away with one pixel == one square on the target display.
One analogue for this might be an emulated SNES game with the HQ3x filter: low-res input material and high-res output, with an interesting result (maybe the author played Street Fighter II Turbo that way with arcade sticks; that's low latency too).

We nevertheless have an opportunity to build algorithms or processing that have both anti-aliasing and a vastly higher output resolution in mind, with low latency, whether the output simulates CRT, film or something else.
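
As a toy illustration of "not one pixel == one square" (my own sketch, not anything from the blog post): the difference between treating source pixels as little squares (nearest-neighbour) and reconstructing with a smoother kernel (plain bilinear here, but a CRT- or film-like kernel would slot in the same way) when scaling a low-res render up to the display resolution.

```python
# Toy sketch (my own, not from the blog post): upscale a low-res "render" to a
# higher display resolution two ways. Nearest-neighbour treats each source pixel
# as a little square; bilinear reconstructs with a smoother (tent) kernel.
import numpy as np

def upscale(img, factor, smooth=True):
    h, w = img.shape
    # Sample positions of the output grid, expressed in source-pixel coordinates.
    ys = (np.arange(h * factor) + 0.5) / factor - 0.5
    xs = (np.arange(w * factor) + 0.5) / factor - 0.5
    if not smooth:
        yi = np.clip(np.round(ys).astype(int), 0, h - 1)
        xi = np.clip(np.round(xs).astype(int), 0, w - 1)
        return img[yi][:, xi]                      # pixel-as-square look
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    fy = np.clip(ys - y0, 0.0, 1.0)[:, None]
    fx = np.clip(xs - x0, 0.0, 1.0)[None, :]
    a = img[y0][:, x0]
    b = img[y0][:, x0 + 1]
    c = img[y0 + 1][:, x0]
    d = img[y0 + 1][:, x0 + 1]
    return (a * (1 - fy) * (1 - fx) + b * (1 - fy) * fx
            + c * fy * (1 - fx) + d * fy * fx)

if __name__ == "__main__":
    lowres = np.array([[0.0, 1.0], [1.0, 0.0]])           # 2x2 checker "render"
    print(np.round(upscale(lowres, 4, smooth=False), 2))  # blocky squares
    print(np.round(upscale(lowres, 4, smooth=True), 2))   # smooth reconstruction
```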
 