Graphical effects rarely seen this gen that you expect/hope become standard next-gen

As I understand it, it scales better at higher resolutions. The example given was 720p not being suitable for CUDA, but what about the resolutions used on PC, which range around 2-4x that pixel count?

They can also use it for a large batch of images; the higher res, the better the gain.

But would you really need to gain that much to have it run fast? The example given was a Q3.0GHz CPU taking 5ms of render time for MLAA on a 1024x1024 image.

Depends on the architecture. The speed-up may not be linear due to data dependencies and other overhead that only the implementor would know. You'd also need to integrate the whole processing into the GPU pipeline.

GOW's AA runs on an architecture where the GPU, CPU and memory are arranged for efficient sharing, so T.B. could throw the more flexible CPU cores at the image data via DMA. Perhaps future PCs will be like this, I have no idea.

If we wait a while, I suspect we will see different GPU implementations for various problem sizes. If a single CPU thread can solve the problem in 5ms, I don't think they need to throw massively many cores at it (that would introduce too much overhead), so the per-core implementation needs to be very efficient.
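Just to illustrate the overhead point, here is a toy Amdahl-style model of splitting that hypothetical 5ms single-thread pass across more cores. The serial fraction and per-core sync cost are made-up numbers, not anything measured:

```cpp
#include <cstdio>

// Toy model: time for a 5 ms single-thread MLAA pass when split across N cores,
// assuming part of the work stays serial (edge list build, etc.) and each extra
// core adds a fixed synchronization cost. All numbers are illustrative.
int main() {
    const double total_ms     = 5.0;   // single-thread time from the example above
    const double serial_frac  = 0.15;  // assumed non-parallel portion
    const double sync_cost_ms = 0.2;   // assumed per-core overhead (barriers, DMA setup...)

    for (int cores = 1; cores <= 8; cores *= 2) {
        double serial_ms   = total_ms * serial_frac;
        double parallel_ms = total_ms * (1.0 - serial_frac) / cores;
        double overhead_ms = sync_cost_ms * (cores - 1);
        double t = serial_ms + parallel_ms + overhead_ms;
        printf("%d cores: %.2f ms (speedup %.2fx)\n", cores, t, total_ms / t);
    }
    return 0;
}
```

With numbers like these, going from 4 to 8 cores actually gets slower, which is the kind of effect only the implementor can quantify.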
 
They can also use it for a large batch of images; the higher res, the better the gain.

Which would suit the PC gaming space, as 720p was outdated several years ago (no pun intended). Common resolutions are 1920x1080/1920x1200, going up to 2560x1600, and far higher with the introduction of ATI's triple-display setup. An ATI triple-monitor resolution of ~5760x1600 would not be practical with MSAA or SSAA.
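Just to put rough numbers on why brute-force AA doesn't scale to those resolutions, here's a back-of-the-envelope framebuffer cost comparison. It assumes 32-bit color plus 32-bit depth per sample and ignores the compression real hardware uses, so treat it as an upper bound only:

```cpp
#include <cstdio>

// Back-of-the-envelope framebuffer cost per resolution at 1x and 4x MSAA.
// Assumes 4 bytes color + 4 bytes depth per sample; ignores compression/tiling.
int main() {
    struct Mode { const char* name; int w, h; } modes[] = {
        {"720p",      1280,  720},
        {"1080p",     1920, 1080},
        {"2560x1600", 2560, 1600},
        {"Eyefinity", 5760, 1600},
    };
    for (const Mode& m : modes) {
        double pixels = double(m.w) * m.h;
        double mb_1x  = pixels * 8.0       / (1024.0 * 1024.0);
        double mb_4x  = pixels * 8.0 * 4.0 / (1024.0 * 1024.0);
        printf("%-10s %6.2f Mpix, ~%6.1f MB (1x), ~%6.1f MB (4x MSAA)\n",
               m.name, pixels / 1e6, mb_1x, mb_4x);
    }
    return 0;
}
```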
 
Not if they want to throw the image on a living room TV though. 720p - 1080p will probably last a long time.

For MSAA/SSAA scaling, I think it depends on the hardware vendors. They'll also need to address MLAA's shortcomings (the inability to apply AA to sub-pixel structures). Anything is possible.
 
Not if they want to throw the image on a living room TV though. 720p - 1080p will probably last a long time.

But most people on PC play on monitors. Also, nothing prevents you from connecting 3 HDTVs together either! :p

And while 720p might be unsuitable, 1080p might not be.

EDIT: Crap, wrong thread, move it over.
 
But most people on PC play on monitors. Also, nothing prevents you from connecting 3 HDTVs together either! :p

And while 720p might be unsuitable, 1080p might not be.

Cinema might be a better domain. Wives, power bill and typical living room space will come into play with 3 HDTVs.

If the idea is anywhere near feasible/practical, we should see some implementations in 1-2 years' time.

EDIT: Regarding the SPU implementation, I could not assimilate KKRT's post into this discussion because he did not cite his source:
http://forum.beyond3d.com/showpost.php?p=1433477&postcount=3818

They use all 5 SPUs if they are free for MLAA [the sixth is dedicated to color correction, as I remember], but they need only 2 SPUs to reach 60Hz [2 SPUs filter the image in less than 10ms at 720p; 5 SPUs do it in less than 4ms]. So they only 'waste' a maximum of 25% of SPU time on MLAA under 60Hz conditions.

Would be interesting to know where the overheads are (waiting, fetch latency, etc.). That's why I'm curious about the Intel numbers _after_ integrating with the GPU pipeline. The standalone CPU implementation is in a "clean room".
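For what it's worth, the quoted numbers roughly line up with a 60Hz frame budget. A quick sanity check of the "max ~25% of SPU time" claim, taking the <10ms-on-2-SPUs figure at face value:

```cpp
#include <cstdio>

// Sanity check of the "max ~25% of SPU time at 60Hz" claim, using the quoted
// "2 SPUs finish 720p MLAA in <10 ms" worst case.
int main() {
    const double frame_ms   = 1000.0 / 60.0;  // ~16.7 ms per frame at 60 Hz
    const double mlaa_ms    = 10.0;           // worst case on 2 SPUs (quoted)
    const int    spus_used  = 2;
    const int    spus_total = 5;              // SPUs available for MLAA per the quote

    double used      = spus_used * mlaa_ms;   // SPU-milliseconds spent on MLAA
    double available = spus_total * frame_ms; // SPU-milliseconds per frame
    printf("MLAA uses ~%.0f%% of available SPU time per frame\n",
           100.0 * used / available);          // ~24%
    return 0;
}
```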
 
Which would suit the PC gaming space, as 720p was outdated several years ago (no pun intended). Common resolutions are 1920x1080/1920x1200, going up to 2560x1600, and far higher with the introduction of ATI's triple-display setup. An ATI triple-monitor resolution of ~5760x1600 would not be practical with MSAA or SSAA.
I wish that was the case, but check any actual stats of resolution sizes and you'll see that even what I have, 1920x1200 (plus a second 1200x1600 monitor), is larger than what most people use.
Hell, monitors are retrograding: a couple of years ago you'd get 1920x1200, now it's 1920x1080.
 
But most people on PC play on monitors. Also, nothing prevents you from connecting 3 HDTVs together either! :p

And while 720p might be unsuitable, 1080p might not be.

EDIT: Crap, wrong thread, move it over.

Multi-monitor setups and resolutions above 1080p are niche. Multi-monitor will always stay niche, and most likely so will higher monitor resolutions, since many people don't have space for a >60" TV and homes aren't getting bigger.
However, you could do 2x1080p for 3D, and if you still have shader power to burn there's always supersampling. That said, I think the extra power will be used for more effects before programmers give up, decide they just have too much power, and render at 4K :)
 
EDIT: Regarding the SPU implementation, I could not assimilate KKRT's post into this discussion because he did not cite his source:
http://forum.beyond3d.com/showpost.php?p=1433477&postcount=3818



Would be interesting to know where the overheads are (waiting, fetch latency, etc.). That's why I'm curious about the Intel numbers _after_ integrating with the GPU pipeline. The standalone CPU implementation is in a "clean room".
http://forum.beyond3d.com/showpost.php?p=1401785&postcount=211
First paragraph
http://www.realtimerendering.com/blog/more-on-god-of-war-iii-antialiasing/

Third paragraph
http://www.eurogamer.net/articles/the-making-of-god-of-war-iii?page=4

5th post ;]
http://forums.godofwar.com/t5/God-of-War-III-Discussion/Developers-on-Forums-Right-Here/m-p/31162

Cheers :)

P.S. I'm really counting on a GT HD 1440x1080 mode with MLAA in GT5, or even higher, but I don't think Polyphony is working with the US/Europe first-party Sony studios ;\
 
What about realistic hair, strand by strand?

We've seen several examples showcasing the graphical capabilities of... erm... graphics cards, but are there any games that implement it, or plan to?

Graphical demonstration:

With respect to videogames, I don't think current technology is enough to render this PLUS the background/scene geometry, textures and shaders, characters, AI and everything else.

Do you think we will see this in the near future?
 
Hair is sort of a nightmare, especially because you need simulation to get it to work right without intersecting with everything else in the scene. Also, you pretty much need a realistic number of strands to avoid a bald-looking character, and that usually means 20-50K strands. Of course you can usually get away with simulating only a few hundred, but it's still something that movie VFX studios spend a LOT of time and money on, researching their own solutions.

Rendering them is also problematic; as far as I know we work with camera-facing strips and a special shader to get proper lighting. Then there are shadows: hair usually needs its own shadows and lights, even in offline CG, most of the time.

To be honest, I don't see even PS4-generation hardware properly dealing with hair. It'll get better, but still far from good enough...
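To make the "camera-facing strips" bit concrete, here is a bare-bones sketch of how one simulated guide strand could be expanded into a screen-facing ribbon. The math types and the function are purely illustrative, not taken from any particular engine:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 add(Vec3 a, Vec3 b)   { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, float s)  { return {a.x * s, a.y * s, a.z * s}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static Vec3 normalize(Vec3 v)     { float l = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); return mul(v, 1.0f / l); }

// Expand one guide strand (a polyline of simulated points) into two vertices per
// point, offset sideways so the resulting strip always faces the camera.
std::vector<Vec3> expandStrand(const std::vector<Vec3>& strand, Vec3 cameraPos, float halfWidth) {
    std::vector<Vec3> verts;
    for (size_t i = 0; i + 1 < strand.size(); ++i) {
        Vec3 tangent = normalize(sub(strand[i + 1], strand[i])); // along the hair
        Vec3 toCam   = normalize(sub(cameraPos, strand[i]));     // towards the viewer
        Vec3 side    = normalize(cross(tangent, toCam));         // screen-aligned offset
        verts.push_back(add(strand[i], mul(side,  halfWidth)));
        verts.push_back(add(strand[i], mul(side, -halfWidth)));
    }
    return verts; // triangle-strip vertex order
}
```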


Volumetrics are also complicated. I know that we use cached data for the rendering and it literally takes gigabytes of disc space even for a few seconds... once again, it only really works when there's a lot of it. Still, it might be feasible next gen - I think Warhawk had SPU-rendered volumetric clouds, but the resolution was several orders of magnitude below what's acceptable for stuff like smoke and dust: far too large volume elements, meaning far too soft clouds. As long as sprites are better looking, devs will stick to them...
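The "gigabytes for a few seconds" part is easy to believe once you multiply it out. Assuming a modest dense grid with a single float of density per voxel (purely illustrative numbers; real production caches store more channels and compress):

```cpp
#include <cstdio>

// Why cached volumetrics eat disc space: a dense NxNxN grid with one float of
// density per voxel, stored per frame. Numbers are illustrative only.
int main() {
    const int    n       = 256;  // voxels per axis
    const double bytes   = 4.0;  // one float per voxel
    const int    fps     = 30;
    const int    seconds = 5;

    double per_frame_mb = n * double(n) * n * bytes / (1024.0 * 1024.0);
    double total_gb     = per_frame_mb * fps * seconds / 1024.0;
    printf("%d^3 grid: %.0f MB per frame, ~%.1f GB for %d s at %d fps\n",
           n, per_frame_mb, total_gb, seconds, fps);  // ~64 MB/frame, ~9 GB total
    return 0;
}
```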
 
Not voxels, but aren't several (PC) games already doing at least semi-realistic volumetric smoke/fog/you-name-it that interacts with the surroundings?
 
What about realistic hair, strand by strand?

We've seen several examples showcasing the graphical capabilities of... erm... graphics cards, but are there any games that implement it, or plan to?

Graphical demonstration:
It'll be a while before this can be used in games. Doing fully simulated hair on top of all the other stuff that's going on in a game is not possible at the moment.

The APEX cloth physics used in Mafia 2 require very similar calculations, but far fewer of them (hair can usually be simulated with cloth solvers). And the game takes a massive performance hit when APEX is turned on.
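Since hair can usually be driven by the same kind of solver as cloth, here is a minimal Verlet-style sketch of one strand as a chain of particles with distance constraints. This is the generic technique, not APEX's actual implementation:

```cpp
#include <cmath>
#include <vector>

struct Particle {
    float x, y, z;     // current position
    float px, py, pz;  // previous position (for Verlet integration)
};

// One simulation step for a single strand: integrate, then enforce fixed segment
// lengths a few times. The root particle (index 0) stays pinned to the scalp.
void stepStrand(std::vector<Particle>& p, float segmentLen, float dt) {
    const float gravity = -9.8f;
    // Verlet integration: new position from current and previous positions.
    for (size_t i = 1; i < p.size(); ++i) {
        float vx = p[i].x - p[i].px, vy = p[i].y - p[i].py, vz = p[i].z - p[i].pz;
        p[i].px = p[i].x; p[i].py = p[i].y; p[i].pz = p[i].z;
        p[i].x += vx; p[i].y += vy + gravity * dt * dt; p[i].z += vz;
    }
    // Constraint relaxation: pull each particle back towards its rest distance
    // from its parent. The root never moves, keeping the strand attached.
    for (int iter = 0; iter < 4; ++iter) {
        for (size_t i = 1; i < p.size(); ++i) {
            float dx = p[i].x - p[i-1].x, dy = p[i].y - p[i-1].y, dz = p[i].z - p[i-1].z;
            float len  = std::sqrt(dx*dx + dy*dy + dz*dz);
            float diff = (len - segmentLen) / len;
            p[i].x -= dx * diff; p[i].y -= dy * diff; p[i].z -= dz * diff;
        }
    }
}
```

Multiply that by tens of thousands of strands (or hundreds of guides plus collision against the character) and the performance hit mentioned above is not surprising.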
 
Not voxels, but aren't several (PC) games already doing at least semi-realistic volumetric smoke/fog/you-name-it that interacts with the surroundings?

Yes, volumetric interactive smoke/fog can be seen in a few games. Dunno if it's ever been done on a console.


[embedded video - recommend watching in HD]
 
That and some other games do it with NV PhysX hardware, AFAIK. Batman, for example, at the beginning of the demo.
 
Not voxels, but aren't several (PC) games already doing at least semi-realistic volumetric smoke/fog/you-name-it that interacts with the surroundings?

AFAIK everyone uses sprites, with tricky blending where they intersect with the level geometry.
There were some pretty limited alternative approaches with Hellgate and DX10, maybe, but only good for something like two cubic meters of total volume (a serious limitation of many offline volumetric solutions as well).
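The "tricky blending" is usually the soft-particle trick: fade the sprite where its depth gets close to the depth buffer, so there's no hard line where it cuts into geometry. A rough sketch of the per-pixel math, written as plain C++ rather than actual shader code:

```cpp
#include <algorithm>

// Generic "soft particle" fade: compare the sprite fragment's depth against the
// scene depth already in the depth buffer and scale alpha down as they get close,
// so the sprite doesn't show a hard intersection line against level geometry.
float softParticleAlpha(float spriteAlpha,
                        float sceneDepth,     // linear view-space depth from the depth buffer
                        float particleDepth,  // linear view-space depth of this sprite fragment
                        float fadeDistance)   // over what distance to fade out
{
    float delta = sceneDepth - particleDepth;  // how far behind the sprite the geometry is
    float fade  = std::clamp(delta / fadeDistance, 0.0f, 1.0f);
    return spriteAlpha * fade;                 // 0 where the sprite touches geometry
}
```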
 
No more screen tearing. Use of triple buffering in all titles.
SSAO
No pop-in, or at least reduced object pop-in, for sure. I hate objects just popping up (see GTA IV).
Death to texture pop-in.
Motion blur is overrated. I could live without it.
 
SSAO is old stuff now, with most games using it these days and quite a few using more advanced versions of indirect shadowing. Though I do expect something like SSDO to become extremely common next gen.
 
Realtime GI and radiosity need to be in more games. SSS for skin and more PhysX-like interaction.

What are the advantages of real-time GI over baked?
Is there any real advantage to using real-time GI in games without day/night cycles, destruction, etc.?
I assume real-time is easier and faster for developers (no baking step), but is there something else?
 