Digital Foundry Article Technical Discussion Archive [2014]

Is this solution considered better or worse than 1080p30, especially for MP? Wouldn't a 1920i solution produce 120 fields at 60 fps (in KZ's case it would be closer to 100 fields at 50 fps)? Despite the IQ you lose from artifacting, don't you gain smoother gameplay, or does it also introduce stuttering?

IQ is less of a concern in MP, is it not?
 
First, this http://i.picpar.com/ARA.png
But at still and slowly rotating/moving views (the image I posted comes from a slowly rotating view), the temporal reprojection allows a real native 1080p image (even if of a bit lower quality than regular 1080p).

It wouldn't be real 1080p unless you didn't move at all, would it? And that never happens, because of the idle gun animation if nothing else.
 
Actually no, you get only 60 fields. Basically the pixel load is almost the same as with 1920x1080 at 30 fps; that's why they can double the fps: they sacrifice half the pixels.

Or to say it in a more optimistic way, they try to come up with double the pixels by being clever...
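To put rough numbers on that pixel-load point (back-of-the-envelope arithmetic of my own, not figures from the article):

```python
# Rough pixel-throughput comparison (back-of-the-envelope, not from DF's analysis).
full_1080p_30 = 1920 * 1080 * 30   # ~62.2 M pixels/s for 1080p at 30 fps
half_1080p_60 = 960 * 1080 * 60    # ~62.2 M pixels/s for 960x1080 at 60 fps
half_1080p_50 = 960 * 1080 * 50    # ~51.8 M pixels/s at the ~50 fps KZ MP often hits

print(full_1080p_30, half_1080p_60, half_1080p_50)
```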
 
I went back and watched some 1080p video captures DF did for multiplayer. If there's artifacting, I don't see it. I might see it if I knew what I was looking for. Maybe youtube covers up some of the artifacts.

So, is this technique like the reprojection that was linked earlier, or is it something different? Added latency is definitely a downside. Still, if it's only 1 frame at 60Hz, it could be worth the tradeoff.
 
Actually no, you get only 60 fields. Basically the pixel load is almost the same as with 1920x1080 at 30 fps; that's why they can double the fps: they sacrifice half the pixels.

Or to say it in a more optimistic way, they try to come up with double the pixels by being clever...

How much does the de-interlacing/reprojection cost? It must be fairly intensive, but cheap enough to make it worth the effort, at least in the case of Killzone.
 
How much does the de-interlacing/reprojection cost? It must be fairly intensive, but cheap enough to make it worth the effort, at least in the case of Killzone.

That's the million-dollar question really, and given that the reprojection technique still doesn't provide a consistent 60 Hz, I'd guess it's non-trivial. I hope we get a nice detailed breakdown at GDC or some other technical venue.
 
Actually no, you get only 60 fields. Basically the pixel load is almost the same as with 1920x1080 at 30 fps; that's why they can double the fps: they sacrifice half the pixels.

Or to say it in a more optimistic way, they try to come up with double the pixels by being clever...

What's the point then? The single player has no problem doing 1080p30. Vertical interlacing and going 1920i30 seems backwards.

I thought KZ MP ran in the 50 fps range. If that's fields, not frames, then why choose 1920i25 over 1080p30? That produces fewer pixels over time.
 
What's the point then? The single player has no problem doing 1080p30. Vertical interlacing and going 1920i30 seems backwards.

I thought KZ MP ran in the 50 fps range. If that's fields, not frames, then why choose 1920i25 over 1080p30? That produces fewer pixels over time.

Am I wrong in my understanding, or isn't Killzone's MP still rendering half the frame each frame and then reprojecting (or whatever) the rest? It's still rendering at 50-60 fps, but half the image for each frame is created out of information from previous frames. Correct me if I'm wrong.
 
Scott is right, it is true 60 fps. It renders a half-res frame's worth of data each frame, reprojects from the previous frame, and then blends the two to create a full-res frame. It's not really interlacing in the strict sense.
 
given that the reprojection technique still doesn't provide a consistent 60 Hz, I'd guess it's non-trivial.

We don't know that; the game could just as easily be CPU limited. Based on the developer comments we've got here on B3D about real-life scenarios, that's actually the far more likely explanation.
 
What's the point then?

The point is that they get both 60 fps and close to full 1080p resolution when it really matters (slow or no movement) from the rendering cost of ~30fps/1080p.
Pretty damn clever solution and trade off IMHO.

(but it still means that the hardware's performance advantage was completely misjudged, and the importance of actual full 1080p is overrated)
 
Secondly, the KZSF MP is native 1080p, most of the time. In fact you could say it's native 1080p during still and slowly rotating/moving view. When you are moving quickly then you'll see the interlaced artifacts like in some of the already posted screenshots.

The MP is not native 1080p; the native resolution is half of that, at 960x1080. It renders only 960x1080 pixels per frame. I see your link, so I'll do the analysis (which anyone can do, I stated the procedure) on the other shot.

But at still and slowly rotating/moving views (the image I posted comes from a slowly rotating view), the temporal reprojection allows a real native 1080p image (even if of a bit lower quality than regular 1080p).
What does this self-contradictory line even mean?

The analysis shows that this blending method does contain more detail than 960x1080, which is actually pretty cool. However, looking at the detail differences against a native 1080p synthetic comparison, the amount of detail is noticeably less, meaning that the method does not reproduce the same amount of detail (as you had claimed). I did some test comparisons: this definitely works better than 720p (though the 960x1080 has more pixels to begin with anyway), and is probably a toss-up against 900p, at 39% fewer pixels rendered.
 
Am I wrong in my understanding, or isn't Killzone's MP still rendering half the frame each frame and then reprojecting (or whatever) the rest? It's still rendering at 50-60 fps, but half the image for each frame is created out of information from previous frames. Correct me if I'm wrong.

So first, no, the MP does not use the same temporal technique as the paper (nor the Force Unleashed prototype, nor what sebbi had mentioned). That might introduce a one-frame latency, or, as sebbi had mentioned, unpredictable cost and scaling artifacts. The way I see it, the amount of missing pixels that needs to be computed between frames fluctuates depending on the scene, and while you get the camera vector to compensate for the motion of the whole scene, predicting moving objects in the scene is costly (motion analysis?), so the compute time is hard to manage.

The KZ MP probably uses something far simpler than that. The principle is to render each frame at 960x1080, treat the current frame as the 960 odd vertical lines, and combine it with the previous frame (treated as the 960 even vertical lines). It's similar to how TV panels combine a 1080i signal into a full frame (but not exactly).

There might also be some motion compensation or reprojection applied to the previous frame, or they could just judder the camera ever so slightly to achieve the same effect and get more detail into the combined 1920x1080 output. There is probably some resampling involved in the merge as well. It produces more detail than you can get out of straight upscaling; however, saying that this reproduces the same level of detail as native 1080p is just ludicrous.

Given that the number of pixels being operated on is the same every frame (it's essentially just a blend, like a motion blur), the compute cost is fairly consistent, so it's far more predictable and manageable.
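To illustrate the principle, here's a toy numpy sketch of that column-interleave combine. This is my own guess at the idea, not Guerrilla's implementation; motion compensation, reprojection and resampling are all left out.

```python
import numpy as np

H, W = 1080, 1920

def combine(current_half, previous_half, current_is_odd):
    """Weave two 1080x960 half-frames into one 1080x1920 frame."""
    full = np.empty((H, W), dtype=current_half.dtype)
    if current_is_odd:
        full[:, 1::2] = current_half    # freshly rendered odd columns
        full[:, 0::2] = previous_half   # even columns reused from the previous frame
    else:
        full[:, 0::2] = current_half
        full[:, 1::2] = previous_half
    return full

# Each frame touches the same number of pixels (one 960-column half),
# which is why the cost stays consistent from frame to frame.
prev_half = np.zeros((H, W // 2), dtype=np.uint8)
curr_half = np.ones((H, W // 2), dtype=np.uint8)
frame = combine(curr_half, prev_half, current_is_odd=True)
print(frame.shape)  # (1080, 1920)
```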
 
The point is that they get both 60 fps and close to full 1080p resolution when it really matters (slow or no movement) from the rendering cost of ~30fps/1080p.
Pretty damn clever solution and trade off IMHO.

(but it still means that the hardware's performance advantage was completely misjudged, and the importance of actual full 1080p is overrated)

Does "60 fps" denote 60 960X1080 frames per second or 60 1920X1080 frames per second where half the pixels are done with their reprojection tech. I see the point in the later but not in former.
 
Does "60 fps" denote 60 960X1080 frames per second or 60 1920X1080 frames per second where half the pixels are done with their reprojection tech. I see the point in the later but not in former.

All I am saying is that if people are going to call this 1080p then they need to call Ryse 1080p as well.
 
All I am saying is that if people are going to call this 1080p then they need to call Ryse 1080p as well.

Not really. KZ is still rendering a 1080p image; it's not upscaled from a lower resolution. If it were rendering just 900p or 720p then your point would stand. I guess it just doesn't fall under the trueHD moniker.

 
Does "60 fps" denote 60 960X1080 frames per second or 60 1920X1080 frames per second where half the pixels are done with their reprojection tech. I see the point in the later but not in former.

It's both ...

You render 960x1080 at 60 Hz and pull in information from the previous frame to fill in the other 960x1080 of the image. So it is true 60 fps, and you should get all the benefits in smoothness and responsiveness that entails. In some cases (the camera is not moving, or is moving slowly) the additional information from the previous frame should give you a boost in image quality.
 
Not really. KZ is still rendering a 1080p image; it's not upscaled from a lower resolution. If it were rendering just 900p or 720p then your point would stand. I guess it just doesn't fall under the trueHD moniker.



You're rendering 960x1080 and pulling in data from another frame to fill out the rest. I definitely would not say it is equivalent to rendering 1920x1080. You're pulling in other information to complete the frame, which is kind of what upscaling algorithms do. This method is kind of like a horizontal upscale, but you're using data from a previous frame to fill out the missing information, rather than trying to interpolate based on information in the current frame.

That's my understanding.
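To make the upscale analogy concrete, here's a tiny toy comparison (my own sketch, not any engine's actual scaler): a plain horizontal upscale interpolates the missing columns from the current frame's own neighbours, whereas the KZ-style approach fills them with columns from a previous frame.

```python
import numpy as np

# Purely illustrative toy example; edges are handled crudely.
H, W = 4, 8  # tiny "frame" so the arrays are easy to inspect

curr_half = np.arange(H * (W // 2), dtype=float).reshape(H, W // 2)   # this frame's 4 columns
prev_half = curr_half + 100.0                                          # stand-in for last frame's other columns

# (a) Horizontal upscale: missing columns interpolated from the current frame only.
upscaled = np.empty((H, W))
upscaled[:, 0::2] = curr_half
upscaled[:, 1::2] = (curr_half + np.roll(curr_half, -1, axis=1)) / 2   # average of horizontal neighbours

# (b) Temporal fill: missing columns taken from the previous frame's data.
temporal = np.empty((H, W))
temporal[:, 0::2] = curr_half
temporal[:, 1::2] = prev_half

print(upscaled)
print(temporal)
```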
 
You're rendering 960x1080 and pulling in data from another frame to fill out the rest. I definitely would not say it is equivalent to rendering 1920x1080. You're pulling in other information to complete the frame, which is kind of what upscaling algorithms do. This method is kind of like a horizontal upscale, but you're using data from a previous frame to fill out the missing information, rather than trying to interpolate based on information in the current frame.

That's my understanding.

And the analysis showed that the detail is not on par with native 1080p.

Or to put it another way, "it's true 1080p, but looks just a little blurred" :rolleyes:
...exactly what happens when you upscale an image to a higher resolution.

It seems the line between interpolation and 'true rendering' is being blurred, so to speak ;)

I can deal with blurred lines so long there's no twerking.
 