The Order: 1886

Amazing how much they invested into this game.

To me it's still the best-looking game available, and most people I show it to say it looks like a movie.

Only a few immediately realize that Galahad's eyes are crazy and completely wrong imo... that then breaks the illusion.

Why are eyes, or rather eye movements, so difficult?
 
With and without filters:

qGJNUdS.jpg


Cz0yis5.jpg
 
God, the compression in those images makes me cringe, Sony please fix your share button :runaway:

Still, I think the PP helps them reach the intended result, the "filmic look". The second picture looks gamey.
 
God, the compression in those images makes me cringe, Sony please fix your share button :runaway:

Still, I think the PP helps them reach the intended result, the "filmic look". The second picture looks gamey.
And people didn't believe me when I told them those effects are a big part of why people are impressed with the game.
 
They are a big part but surely not the only part as we've already seen from GDC/SIGGRAPH.
 
Custom resolve just means that you use multisample load instructions in your own (compute) shader. You execute this shader instead of a normal fixed function resolve. This has been possible on PC since DirectX10.

The required functionality to do it has been available for a long time, but it hasn't been popular at all. This is probably due to the higher cost on PC. 3 years ago when I did lots of experimentation, every GPU I tried performed quite a bit worse with a custom resolve. It was especially bad on AMD GPU's, which have a high fixed overhead just from accessing an MSAA texture as an SRV. Nvidia has gotten better in recent years, and the cost is not so bad now. I'm not sure about AMD PC hardware, as I haven't tried anything recently. On consoles it's possible to do it rather cheaply, since you have access to low-level data that can accelerate the process. In fact I was able to beat a hardware resolve with my compute shader when I didn't have temporal AA enabled.

In our case we do custom resolve in the beginning of our (tiled) lighting shader. This way you never need to pay any bandwidth cost of the resolve operation (as you would if you write it to memory).
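One reason to bother with a custom resolve at all is that you can filter in a better space than the fixed-function average of raw HDR samples. A minimal sketch of that idea (Python with toy numbers; Reinhard stands in for whatever curve an engine actually uses, and none of this is the exact code from either renderer):

```python
import numpy as np

def reinhard(c):
    # Simple Reinhard tonemap; stand-in for the engine's real curve.
    return c / (1.0 + c)

def custom_resolve(subsamples):
    """Resolve an HDR MSAA buffer by filtering in tonemapped space.

    subsamples: (H, W, S) array of HDR luminance, S = sample count.
    A fixed-function resolve averages the raw HDR values, letting one
    very bright subsample dominate the pixel (edge flicker); averaging
    after the tonemap curve keeps the contribution bounded.
    """
    return reinhard(subsamples).mean(axis=-1)

# Toy example: one edge pixel with a single very bright subsample.
pix = np.array([[[20.0, 0.1, 0.1, 0.1]]])     # 4xMSAA, HDR luminance
hardware = reinhard(pix.mean(axis=-1))        # resolve, then tonemap
custom = custom_resolve(pix)                  # tonemap, then resolve
```

With the fixed-function order the bright sample pushes the whole pixel toward white; the custom resolve keeps the edge pixel much closer to its three dark samples.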

From what I understood from your presentation, you guys aren't exactly doing a typical "resolve", since you're not actually using MSAA to oversample anything. You're instead using it as a way to shade and output your G-Buffer at a lower frequency, and you then do an in-place upsample by interpolating the UV's and tangent frame to your subsample locations. Which is very clever, by the way! :)
 
Our technical AD and I were very impressed by your visual quality and consistency. You have achieved very nice results for such a small team.
The required functionality to do it has been available for a long time, but it hasn't been popular at all. This is probably due to the higher cost on PC. 3 years ago when I did lots of experimentation, every GPU I tried performed quite a bit worse with a custom resolve. It was especially bad on AMD GPU's, which have a high fixed overhead just from accessing an MSAA texture as an SRV. Nvidia has gotten better in recent years, and the cost is not so bad now. I'm not sure about AMD PC hardware, as I haven't tried anything recently. On consoles it's possible to do it rather cheaply, since you have access to low-level data that can accelerate the process. In fact I was able to beat a hardware resolve with my compute shader when I didn't have temporal AA enabled.
On Xbox 360 you couldn't access the multisampled EDRAM data without a fixed function resolve (custom resolve was not possible). I believe this was one of the (many) reasons why custom resolve based pipelines were not that popular last gen.
From what I understood from your presentation, you guys aren't exactly doing a typical "resolve", since you're not actually using MSAA to oversample anything. You're instead using it as a way to shade and output your G-Buffer at a lower frequency, and you then do an in-place upsample by interpolating the UV's and tangent frame to your subsample locations. Which is very clever, by the way! :)
In our current renderer, we actually use 8xMSAA and pack four (2x2) 2xMSAA pixels inside it. So we actually have 2xMSAA per pixel. We use a custom MSAA pattern to make all our 8xMSAA quadrants have an identical 2xMSAA sampling pattern. Without custom sampling patterns this technique produces (slight) jitter at object edges.
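The packing described above can be sketched as follows (my own illustrative reconstruction, not the shipped code; the 2xMSAA offsets are made up): one 8xMSAA "fat pixel" covers a 2x2 block of output pixels, and a programmable sample pattern places the same two offsets inside each quadrant, so every output pixel sees an identical 2xMSAA pattern and edges don't jitter per pixel.

```python
# Illustrative 2xMSAA offsets within one output pixel, in [0,1) pixel units.
TWO_X = [(0.25, 0.25), (0.75, 0.75)]

def packed_8x_pattern():
    """Return the 8 sample positions inside a 2x2 fat pixel, in [0,2)x[0,2).

    Each quadrant (= one output pixel) gets the same two relative offsets,
    which is what a custom/programmable sample pattern buys you here.
    """
    positions = []
    for qy in range(2):
        for qx in range(2):
            for sx, sy in TWO_X:
                positions.append((qx + sx, qy + sy))
    return positions

pattern = packed_8x_pattern()
```

With a default (rotated-grid) 8x pattern the four quadrants would each receive different relative offsets, which is exactly the per-pixel jitter the custom pattern removes.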

We have also experimented with EQAA. Because we pack four pixels into a single 8xMSAA pixel, we actually have fewer "unknown" samples, meaning that 2xMSAA + 2xEQAA (actually 8xMSAA + 8xEQAA shared between four pixels) produces a result closer to 4xMSAA. Your AA quality is stunning (very stable). I will certainly be looking at your implementation and adapting some ideas :)

I was about to post follow-up questions regarding your SDSM and EVSM implementations, but I noticed that you have already explained these very well in your blog. In Trials Evolution we also used a simplified SDSM implementation similar to yours (depth-only fitting). We also had problems with read-back latency (culling on the CPU side). We had to use several conservative approximations to hide the issues. This eventually led to the idea of GPU-driven rendering.

We also used EVSM with 16 bit channels (only positive moments, so two channels) and hardware MSAA in our depth maps (with Xbox 360 fixed function hardware resolve). Did you use a custom resolve for your MSAA shadow depth maps?
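For readers unfamiliar with the two-channel variant: storing only the positive exponential warp means keeping two moments of exp(c*z) and using the Chebyshev bound at lookup. A minimal sketch (the warp constant here is a guess, not the shipped value; 16-bit storage mainly constrains how large c can be):

```python
import math

C = 5.0  # warp exponent; with fp16 channels this is limited by fp16 range

def store_moments(depth):
    """Moments written to the (two-channel) shadow map for one sample."""
    w = math.exp(C * depth)
    return (w, w * w)            # E[w], E[w^2] after filtering

def shadow_test(moments, receiver_depth):
    """Chebyshev upper bound on the probability the receiver is lit."""
    m1, m2 = moments
    w = math.exp(C * receiver_depth)
    if w <= m1:
        return 1.0               # receiver at/in front of the occluder: lit
    var = max(m2 - m1 * m1, 1e-6)
    return var / (var + (w - m1) ** 2)

m = store_moments(0.3)           # occluder at depth 0.3
```

The point of the exponential warp is that filtering the moments (prefiltering, MSAA resolve of the shadow map) remains valid, which is what makes hardware MSAA in the depth maps attractive.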

One nice thing about virtual shadow mapping is that it compresses the depth range as tightly as possible based on the screen space receiver geometry (=pixel) depth values (in shadow space). Our culling pipeline produces min/max depth for each shadow page, allowing us to tightly pack the depth values. This makes 16 bit shadow maps much more useful. It would be a nice experiment to combine this with 16 bit EVSM. Currently we use single tap PCF, as that provides nice antialiasing for our 1:1 resolution matched shadows, but I really miss blurry soft shadows. Prefiltering is another advantage of variance-based techniques: since we use XOR hashing on the virtual shadow map pages (to detect whether they changed), the filtering cost is only paid when a page changes.
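The depth-range packing above amounts to quantizing each page's depths against its own min/max rather than the full shadow range. A hypothetical sketch (function names and ranges are mine, purely for illustration):

```python
def quantize_depth(d, page_min, page_max):
    """Map a shadow-space depth into the page's [min, max] range as 16 bits."""
    t = (d - page_min) / (page_max - page_min)
    t = min(max(t, 0.0), 1.0)
    return int(round(t * 65535))

def dequantize_depth(q, page_min, page_max):
    return page_min + (q / 65535.0) * (page_max - page_min)

# With a tight per-page range the worst-case error is
# (page_max - page_min) / 65535, instead of full_range / 65535.
q = quantize_depth(0.4237, page_min=0.42, page_max=0.43)
d = dequantize_depth(q, page_min=0.42, page_max=0.43)
```

This is why the per-page min/max from the culling pass makes 16-bit shadow maps (and potentially 16-bit EVSM moments) viable: the quantization step shrinks with the receiver depth range actually present in the page.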
 
Guess I'll post some of the pictures I got too; these are from my first play-through, so they're without photomode.

16428663748_6b8433d2ae_o.jpg

16615224562_7d7f2329af_o.jpg

16587509811_cdecc0283e_o.jpg

16589392415_b9f8aa8306_o.jpg

15974373143_edd4978f55_o.jpg

16402359830_e1d70fcda6_o.jpg

16403529939_5a1b3236ac_o.jpg

15967224634_62103a0bbd_o.jpg

16588183531_33204f80d6_o.jpg

16403530379_8f0e6c2d07_o.jpg

15969613793_a05ae6f776_o.jpg

15969613113_6b6b2ce917_o.jpg

16593517932_808438f0a7_o.jpg

16448500000_c90329eeaf_o.jpg

16448332968_411a474c3f_o.jpg

16428599627_81de5a7bd7_o.jpg

Game's so moody :yep2:
Those are some of the best screenshots I have seen from this game. Good choice of perspectives, details and volumes.
 
Every time I see some nice-looking open-world game and think the graphics are good enough already, seeing this game again makes me realize it's not even a contest; those open-world games still have a lot of catching up to do. It's a pity we won't see a sequel at this stage, as I can only imagine how much further the PS4 could have pushed this engine later in the cycle.
 
I decided to give it another playthrough now that we got photomode. Some shots I got from the first few areas:
2563599

2563600

2563601

2563602

2563603

2563604

2563605

2563606

2563607

2563608
 