Digital Foundry Article Technical Discussion Archive [2014]

Status
Not open for further replies.
If you could keep the image quality relatively the same...
that Killzone MP 960x1080 shot doesn't look that good (it doesn't look nice and sharp). I would like to see that scene vs 1920x1080 side by side.
 
So I didn't spend much time looking for a good base Shadow Fall SP 1080p shot; I believe this is an MP bullshot, which is native 1080p if not higher. The goal is to quantify how much detail is lost between the temporal blending and native 1080p.

Procedure:
1) Rescale from 1920x1080 to 960x1080, nearest neighbor
2) Rescale back to 1920 wide, nearest neighbor
3) Diff the layers between the original and the scaled version, with contrast and brightness both turned up 100%

In the result, the more white you see, the more detail was lost in the scaling:
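The rescale-and-diff round trip can be sketched on a single greyscale scanline in plain Python. This is only a toy model of steps 1-3 (real captures are 2D RGB images, and the contrast/brightness boost is skipped here), but it shows why alternating per-pixel detail turns white in the diff while flat areas stay black:

```python
def downscale_row(row):
    """Step 1 on one scanline: nearest-neighbor 2:1 horizontal
    downscale keeps every other pixel and discards the rest."""
    return row[::2]

def upscale_row(row):
    """Step 2: nearest-neighbor 1:2 upscale duplicates each pixel."""
    out = []
    for p in row:
        out.extend([p, p])
    return out

def loss_map_row(row):
    """Step 3: per-pixel absolute difference between the original
    row and the round-tripped row; bigger values = lost detail."""
    restored = upscale_row(downscale_row(row))
    return [abs(a - b) for a, b in zip(row, restored)]

# Alternating single-pixel detail is destroyed by the round trip...
print(loss_map_row([10, 200, 10, 200]))  # [0, 190, 0, 190]
# ...while flat areas survive it unchanged.
print(loss_map_row([50, 50, 50, 50]))    # [0, 0, 0, 0]
```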

MP capture from earlier in the thread:
http://i.imgur.com/XsTegg2.jpg

native 1080p "bullshot"
http://i.imgur.com/3zuJ3jQ.jpg
 
Gates didn't say that, nor the 640k shit.

How old are you? Back in the days of Quake, 640x480 was the hi-def of gaming; you could compare it to 4K today. Getting smooth Quake at that resolution required some heavy hardware that wasn't really on the table for a lot of people. Afaik my Pentium Pro 180 (OC) was OK at 512x384.

640x480 was enough :)

The PC shops around here had endless slideshows running with GIF slides in VGA; you could see the decoding and loading as the picture was drawn.
 
How old are you? Back in the days of Quake, 640x480 was the hi-def of gaming


He misquoted, I corrected him, and you have no idea what I was talking about.

Back in my day... well, let's just say my first PC ran on an 80286 CPU with a 20MB HDD. Actually, it was an XT; I think my life of coding started there. There was no VGA; monitors were monochrome.
 
For those who are interested in re-projection, here are a couple of links on the subject.
http://advances.realtimerendering.c...nal Iterative Reprojection(Siggraph2012).pptx
http://research.microsoft.com/en-us/um/people/hoppe/proj/bireproj/

IMHO, all sorts of re-projection, shader use and decoupling shading from resolution will be one of the big things this new generation.
We also used bidirectional reprojection during Trials HD project (2009), but scrapped the technology just before the game launch (since we could achieve 60 fps without it). The technology reached 100+ fps on last gen consoles, but had some problems. The obvious downside for bidirectional reprojection is the extra frame of latency, but any technique doing a full frame reprojection (every other frame) will also result in every other frame being very heavy compared to the cheap reprojected frame between two real frames. Flipping the display to screen in the middle of the expensive (real) frame rendering causes a GPU and a CPU stall, and it's quite hard to predict when to do that so that both the stall becomes minimized and the vertical retrace is never missed accidentally. This is a hard problem to solve.

Partial reprojection and partial regeneration every frame solves both the uneven frame cost and the latency problem, but it's much harder to implement. With modern GPUs / compute shaders an efficient implementation is certainly possible, though, and I am sure some developers will try it, since it makes 1080p @ 60 fps much more achievable.
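sebbbi's point about uneven frame cost can be made concrete with a toy frame-time model (the millisecond numbers below are made up for illustration): with full-frame reprojection every other frame, the average frame time can fit the 16.7 ms / 60 fps budget even though every real frame blows through it on its own.

```python
def frame_times(n_frames, real_ms, reproj_ms):
    """Alternate an expensive fully-rendered frame with a cheap
    reprojected one, as in every-other-frame reprojection."""
    return [real_ms if i % 2 == 0 else reproj_ms for i in range(n_frames)]

times = frame_times(6, 25.0, 5.0)
average = sum(times) / len(times)  # 15.0 ms -> fits the 16.7 ms budget
worst = max(times)                 # 25.0 ms -> a real frame alone misses vsync
```

Hence the mid-frame buffer flip and the stall problem described above: the display must be flipped while the expensive frame is still rendering, and mispredicting that point either stalls the GPU/CPU or misses the retrace.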
 
he misquoted, I corrected him, and you have no idea what I was talking about.

Back in my day... well, let's just say my first PC ran on an 80286 CPU with a 20MB HDD. Actually, it was an XT; I think my life of coding started there. There was no VGA; monitors were monochrome.

Yeah, I used PUNCH CARDS on my first computer and had my own power plant to feed it power! /joke

He made a joke; he didn't quote Bill Gates, because he (Gates) never said anything about 640x480, but it goes well with the "640KB is enough" urban legend. And it was a well-executed joke, since back then 640x480 actually was "enough" in the eyes of many who were used to very low-res games.

And what an exceptionally powerful first computer; respect.
 
Yeah, I thought the other things I wrote were sometimes even more stupid so it should've been obvious that I was joking ;)
 
Yeah, I thought the other things I wrote were sometimes even more stupid so it should've been obvious that I was joking ;)
I for one applaud your joke as well executed. :yep2: The very best humour is lost on the majority of people anyhow - its exclusivity makes it all the more sinisterly amusing. :devilish:
 
So I didn't spend much time looking for a good base Shadow Fall SP 1080p shot; I believe this is an MP bullshot, which is native 1080p if not higher. The goal is to quantify how much detail is lost between the temporal blending and native 1080p.

Procedure:
1) Rescale from 1920x1080 to 960x1080, nearest neighbor
2) Rescale back to 1920 wide, nearest neighbor
3) Diff the layers between the original and the scaled version, with contrast and brightness both turned up 100%

In the result, the more white you see, the more detail was lost in the scaling:

MP capture from earlier in the thread:
http://i.imgur.com/XsTegg2.jpg

native 1080p "bullshot"
http://i.imgur.com/3zuJ3jQ.jpg

First, this http://i.picpar.com/ARA.png direct-capture pic comes from a dark10x post on NeoGAF. For your information, he works at Digital Foundry.

Secondly, the KZSF MP is native 1080p most of the time. In fact you could say it's native 1080p during still and slowly rotating/moving views. When you are moving quickly, you'll see interlacing artifacts like in some of the already posted screenshots.

But at a still or slowly rotating/moving view (the image I posted comes from a slowly rotating view), the temporal reprojection allows a real native 1080p image (even if a bit lower quality than regular 1080p).
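A toy model of that column-interlaced reconstruction, on one greyscale scanline in plain Python. This is an assumption about the broad shape of the technique; Guerrilla's actual pipeline motion-compensates the previous field rather than reusing it verbatim:

```python
def reconstruct_row(prev_field, curr_field):
    """Interleave two half-width column fields into one full-width row:
    even columns rendered last frame, odd columns rendered this frame."""
    out = []
    for even_px, odd_px in zip(prev_field, curr_field):
        out.extend([even_px, odd_px])
    return out

# Static scene: both fields sample the same image, so the full-width
# row is reconstructed exactly -> effectively native 1080p.
full = [0, 10, 20, 30, 40, 50]
assert reconstruct_row(full[0::2], full[1::2]) == full
```

Under fast motion the previous field is stale, so neighbouring columns come from two different moments in time, which is exactly the interlace combing visible in the posted screenshots.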
 
So... why not just implement a dynamic scaling algorithm akin to KZM, which didn't drop res if the scene was static/near-static :?: The biggest advantage over KZSF is none of the artefacting on transparents, biggest offender being the foliage, and zero ghosting.
 
So... why not just implement a dynamic scaling algorithm akin to KZM, which didn't drop res if the scene was static/near-static :?: The biggest advantage over KZSF is none of the artefacting on transparents, biggest offender being the foliage, and zero ghosting.

My guess is that this temporal reprojection technique also deals with temporal aliasing.

I think this technique is a variant of the temporal AA of the SP mode.

But I agree that it is a bad solution because of the artifacts and blur created when you move quickly, which you do often in MP mode...
 
"DOS=UMB, emm386.sys, himem.sys, in my day we walked to school in our bare feet!" The updated 'Yorkshire Men' sketch from Monty Python ;D

I remember the awe I felt at the first game I fired up that ran at 640x480 and glorious 16-bit colour! I missed the CGA/EGA era myself and only jumped in with the 386SX 25MHz (FPUs are for wimps!) with a then mind-blowing 4MB of RAM. Despite its crusty nature, that 4MB kept the machine relevant long past the point of usefulness, so ever since I've always overspecced RAM.
 
But at a still or slowly rotating/moving view (the image I posted comes from a slowly rotating view), the temporal reprojection allows a real native 1080p image (even if a bit lower quality than regular 1080p).
That's a good point. Static images are when you need the higher resolution the most.

So... why not just implement a dynamic scaling algorithm akin to KZM, which didn't drop res if the scene was static/near-static :?: The biggest advantage over KZSF is none of the artefacting on transparents, biggest offender being the foliage, and zero ghosting.
That's a good point. My guess is interlacing is fully consistent and predictable so will allow smoother framerates, whereas dynamic scaling is perhaps going to drop frames before the scaling kicks in if you're running double buffered with lower latency.
 
So... why not just implement a dynamic scaling algorithm akin to KZM, which didn't drop res if the scene was static/near-static :?:

Even if the scene is static, they're only computing 960x1080 pixels per frame, 60 times per second; they probably don't have enough performance for computing a full res buffer 60 times. Maybe it could go up to something like 1280x960 while standing still, but even then it wouldn't look as good as a full 1080p frame.
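The shading budgets above work out as follows (straight pixel-count arithmetic, ignoring varying per-pixel cost):

```python
budget = 960 * 1080 * 60         # pixels shaded per second: 62,208,000
full_1080p60 = 1920 * 1080 * 60  # needed for true 1080p60: 124,416,000
assert full_1080p60 == 2 * budget

# Even the suggested static-scene fallback needs ~19% more pixels
# per second than the 960x1080 budget provides:
fallback = 1280 * 960 * 60       # 73,728,000 pixels per second
```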

However, because of the temporal blending, the reprojection approach can look much better and closer to full 1080p when standing still. And even in movement, you could argue that the reprojected/blended image still contains more information compared to a single 960x1080 buffer. So it's the solution with the superior quality IMHO.

Edit: also agree with Shifty on the performance and consistency part.
You could also argue that losing IQ because of the interlacing during fast action is acceptable as you can't see the lost information at those times anyway.
 
I remember the awe I felt at the first game I fired up that ran at 640x480 and glorious 16-bit colour! I missed the CGA/EGA era myself and only jumped in with the 386SX 25MHz (FPUs are for wimps!) with a then mind-blowing 4MB of RAM.

Yeah, I had a 386SX too, 2 megs of RAM and VGA (with a 13" 640*480 monitor).
Best thing was the hard drive: double height 5.25" MFM. I think the read speed was something like 120-150 kilobytes per second.
 
Even if the scene is static, they're only computing 960x1080 pixels per frame, 60 times per second; they probably don't have enough performance for computing a full res buffer 60 times. Maybe it could go up to something like 1280x960 while standing still, but even then it wouldn't look as good as a full 1080p frame.
I actually considered posting about use of interlaced video (thinking vertically) some time back (may have even posted it!) as a possible scaling compromise. Reconstruction techniques could get quite advanced, like temporal tweening. Motion blur as a post process could greatly reduce artefacts when the interlacing would be most prominent.

I think the concept has merit similar to post FX AAs. Modern GPU architectures are going to allow a host of solutions working smarter, not harder, and I'll be pleased to see more experimentation. Hopefully crazy internet fanboy noise doesn't scare developers away from this exploration.
 
That's a good point. My guess is interlacing is fully consistent and predictable so will allow smoother framerates, whereas dynamic scaling is perhaps going to drop frames before the scaling kicks in if you're running double buffered with lower latency.

mm.. fair enough. Frame pacing should definitely be a concern. :oops: Dynamic can be pretty unwieldy... Tough decisions.

(I'd have told the artists to axe the foliage because it looks so bad alongside the rest of the graphical goodies - fewer temporal artefacts & less load! win-win. :p)
 