Graphical effects rarely seen this gen that you expect/hope become standard next-gen

Some self-shadowing problems are simply the result of low-poly, angular models. Normal maps make you forget it, but it's a pretty simple geometric shape in front of you most of the time, and shadow maps can't take the normal map information into account.
Even we have some trouble with this stuff, on million-polygon models, when we do close-ups with sharp raytraced shadows...
 
I would like to see more image reuse in next-generation games. Subsequent frames in games (especially 60 fps games) share lots and lots of nearly identical pixels between the previous frame and the newly rendered one. The more pixels we can reuse, the less work we have to do every frame. If we can, for example, reuse half the pixels, we can either double the frame rate or double the graphics quality. This is an area of graphics rendering that hasn't been researched enough.
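A toy sketch of the pixel-reuse idea above, with made-up names on a 1D "framebuffer" (a real implementation would work on 2D buffers with a proper change-detection pass): render only the pixels flagged as changed and copy the rest from the previous frame.

```python
# Hypothetical sketch of frame-to-frame pixel reuse. All names here are
# illustrative, not from any real engine.

def compose_frame(prev_frame, changed_mask, render_pixel):
    """Reuse unchanged pixels; call the (expensive) renderer only where needed."""
    frame = []
    for x, (old, changed) in enumerate(zip(prev_frame, changed_mask)):
        frame.append(render_pixel(x) if changed else old)
    return frame

prev = [10, 20, 30, 40]            # last frame's pixel colors
mask = [False, True, False, True]  # only pixels 1 and 3 changed
new = compose_frame(prev, mask, lambda x: x * 100)
print(new)  # [10, 100, 30, 300] -> half the pixels rendered, half reused
```

If half the mask entries are False, the expensive `render_pixel` runs for only half the screen, which is exactly the "double the frame rate or double the quality" trade-off described above.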

Reminds me of MS's Talisman. There are probably a lot of games that could benefit from reusing pixels, especially if there is little change between frames, plus various effects (object motion blur, DOF, various filters, and particle systems). I am still waiting for someone to replicate the "camera picture of a screen" post-processing filter to make games 'pop' more!
 
I believe sebbbi meant that as a jest (though it does backfire a bit, because that's how a lot of graphics was done >10-15 years ago)
 
I believe sebbbi meant that as a jest (though it does backfire a bit, because that's how a lot of graphics was done >10-15 years ago)

I won't speak for sebbbi, but IIRC he actually discussed this with Trials HD and how he did (or wished to) reuse pixels, as a huge number of pixels were unchanged between frames. I may have misunderstood him, though.

sebbbi?
 
Epic uses some kind of "temporal filter" for occlusion, they also use "computational masks" (see here from page 42 to 63).
Think of something like a deferred renderer that would pre-light a low-res framebuffer and store it, then render at standard resolution. For the next frame, the new pre-lit low-res framebuffer would be compared with the old one to set a mask; you upscale the mask to your standard resolution and use it to kill a bunch of pixels.
Does it sound reasonable (to me it sounds nice, but I know I'm missing some implications)? Such a low-res buffer could be reused for multiple effects (SSAO, motion blur, post processing, etc.), so its cost in computation and memory could be amortized over its multiple uses. What do you (members) think?
EDIT
Do you think such a low-res framebuffer could also be used for some real-time performance optimization/profiling, like dynamic resolution or AA?
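My reading of the low-res change-mask idea above, as a toy sketch with made-up names: compare two low-res pre-lit buffers, build a "changed" mask, then nearest-neighbour upscale it to full resolution so the full-res pass can early-out on unchanged pixels.

```python
# Hypothetical sketch of the low-res mask scheme; names are illustrative.

def build_mask(prev_lowres, cur_lowres, threshold=0):
    """Mark each low-res entry whose pre-lit value changed beyond a threshold."""
    return [abs(a - b) > threshold for a, b in zip(prev_lowres, cur_lowres)]

def upscale_mask(mask, scale):
    """Nearest-neighbour upscale: each low-res entry covers `scale` full-res pixels."""
    return [m for m in mask for _ in range(scale)]

prev_lo = [5, 5, 9, 9]  # last frame's pre-lit low-res buffer
cur_lo  = [5, 7, 9, 9]  # this frame's pre-lit low-res buffer
mask_hi = upscale_mask(build_mask(prev_lo, cur_lo), 2)
print(mask_hi)  # [False, False, True, True, False, False, False, False]
```

The full-res shader would then "kill" (skip shading) every pixel whose mask entry is False, as described in the post.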
 
I won't speak for sebbbi, but IIRC he actually discussed this with Trials HD and how he did (or wished to) reuse pixels, as a huge number of pixels were unchanged between frames. I may have misunderstood him, though.

sebbbi?
Yes, I experimented with a couple of techniques to reuse last frame's pixels. The frame rate boost was really nice, but I just didn't have enough time allocated to design and experiment with it enough. The image glitches were really small and unnoticeable most of the time, but in some scenes too distracting for the quality you expect in a released game.

Rendering only odd/even scanlines and moving the last frame according to the stored velocity vectors is also an interesting approach. When done inside the game, this method works better than the usual deinterlacing methods (you have exact knowledge of geometry and pixel movement instead of a guess). Stalker used a method like this, and I didn't notice any visible glitches in it. The bad thing about this method is that you still have to refresh your shadow maps at the same rate as previously (a lazy update rate actually yields even worse results compared to traditional rendering).
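A toy 1D sketch of that odd/even-scanline scheme, with made-up names (a real implementation reprojects 2D scanlines using per-pixel motion vectors): each frame renders only half the lines and fills the other half by shifting last frame's lines by their stored velocity.

```python
# Hypothetical sketch of interlaced rendering with velocity reprojection.

def interlaced_frame(frame_idx, prev_lines, velocities, render_line):
    """Render lines matching this frame's parity; reproject the rest."""
    parity = frame_idx % 2
    lines = []
    for y in range(len(prev_lines)):
        if y % 2 == parity:
            lines.append(render_line(y))  # freshly rendered scanline
        else:
            # reproject: fetch last frame's line offset by its velocity
            src = max(0, min(len(prev_lines) - 1, y - velocities[y]))
            lines.append(prev_lines[src])
    return lines

frame1 = interlaced_frame(0, ['a', 'b', 'c', 'd'], [0, 0, 0, 0],
                          lambda y: 'new' + str(y))
print(frame1)  # ['new0', 'b', 'new2', 'd'] -> even lines rendered, odd reused
```

Because the game knows the exact velocities, the reprojection is a lookup rather than the guesswork a TV deinterlacer has to do, which matches the point made above.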
 
I won't speak for sebbbi, but IIRC he actually discussed this with Trials HD and how he did (or wished to) reuse pixels, as a huge number of pixels were unchanged between frames. I may have misunderstood him, though.
:oops: I was wrong then
Though, like I said, this is how it was often done in the old days. The problem is that occasional errors would creep in, e.g. an area would not get updated when it should, since it's marked as not having changed.
 
No problem. You have a better grip on the concept and its issues; I just remember reading his posts on the topic. It is interesting how old ideas come back into play as technology progresses.
 
The implementation is improving all the time. You're never going to get it perfect the first time, and devs can't really improve on something if they don't try it first. We ARE seeing better and better self-shadowing, and it wouldn't be possible if everybody just stayed away from it instead of trying it and learning from each other's mistakes.

How do you think the game would have turned out had the devs used a FEAR or Doom 3-like shadow system (vertex shadow volumes, right?)? Sure, it would've been hard-edged, but the lighting itself can still deal with that easily, and you don't get the assload of artifacting (all that shadow jag and crawl, OMG!) that Dead Space's shadowing system had.
 
Are you talking about stencil shadows?
Well, from what I've heard here, those are not a very good option when performance is a concern.
 
What I'd like to see more of: destructible environments! I don't know if this is strictly graphics related, but I learned in this very forum that it at least increases the burden on the graphics engine and its optimization potential.

In an action game this effect really increases the immersion, in my opinion (it's funny to me when I shoot a rocket into a wooden window and the only effect is that the wood turns from brown to black :mrgreen:).
The same goes for lights!! I know I'm silly, but if I see a light, the first thing I do is shoot at it to see if it breaks and turns off... I can't remember one game where this actually happens, except where it is scripted into the mission.

But on the other hand, I don't want destructible environments to be the center of the game either (a la Red Faction), but rather a cool additional feature; see for instance Bad Company and, to some small extent, Killzone 2.
 
The same goes for lights!! I know I'm silly, but if I see a light, the first thing I do is shoot at it to see if it breaks and turns off... I can't remember one game where this actually happens, except where it is scripted into the mission.

The Splinter Cell series has plenty of them. ;)
 
Better shaders and polycounts first, plox. I particularly like Epic/UE3's emphasis on it; it makes games seem more like art than assets + tech (some devs still suck at it, though).

The best advancements we've seen are actually from racing games. None of the '09 titles look bad! :LOL:
 
60 fps, 1080p, 4xAA, stereoscopic 3D, RemotePlay

I agree on no screen tearing, better shadows

No more small buffers ever, for things like smoke/shadows/overlays/etc.
 
Order-independent transparency and proper particle systems with volumetric self-lighting/shadowing.

More common use of shader antialiasing to get rid of all the flickering in speculars and shadows.
Would it be possible to sample the shadow from different locations within a pixel/polygon to get some antialiasing in screen space as well?
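One way to read that question is as averaging a few jittered shadow tests inside the pixel footprint, in the spirit of percentage-closer filtering; a toy sketch with made-up names (a real version lives in a pixel shader sampling a shadow map):

```python
# Hypothetical in-pixel jittered shadow sampling; names are illustrative.

def shadow_factor(px, py, in_shadow,
                  offsets=((0.25, 0.25), (0.75, 0.25),
                           (0.25, 0.75), (0.75, 0.75))):
    """Average several in-pixel shadow tests instead of a single one.

    Returns 1.0 for fully lit, 0.0 for fully shadowed; edge pixels get
    fractional values, which is what softens/antialiases the shadow edge.
    """
    hits = sum(in_shadow(px + dx, py + dy) for dx, dy in offsets)
    return 1.0 - hits / len(offsets)

# Toy occluder: everything with x >= 1.5 is in shadow
lit = shadow_factor(1.0, 0.0, lambda x, y: x >= 1.5)
print(lit)  # 0.5 -> this pixel straddles the shadow edge
```

With a single centered sample the same pixel would snap to fully lit or fully shadowed, which is exactly the hard stair-stepping the post wants to get rid of.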
 