That Killzone MP 960x1080 shot doesn't look that good (it isn't nice and sharp). I would like to see that scene vs. 1920x1080 side by side, assuming the image quality could be kept relatively the same.
Gates didn't say that, nor the 640k shit.
How old are you? Back in the days of Quake, 640x480 was the HiDef of gaming.
We also used bidirectional reprojection during the Trials HD project (2009), but scrapped the technology just before the game launched (since we could achieve 60 fps without it). The technique reached 100+ fps on last-gen consoles, but had some problems. The obvious downside of bidirectional reprojection is the extra frame of latency, but any technique doing a full-frame reprojection (every other frame) will also make every other frame very heavy compared to the cheap reprojected frame between two real frames. Flipping the display to screen in the middle of the expensive (real) frame's rendering causes a GPU and a CPU stall, and it's quite hard to predict when to do that so that the stall is minimized and the vertical retrace is never missed accidentally. This is a hard problem to solve.

For those who are interested in reprojection, here are a couple of links on the subject:
http://advances.realtimerendering.c...nal Iterative Reprojection(Siggraph2012).pptx
http://research.microsoft.com/en-us/um/people/hoppe/proj/bireproj/
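To make the idea concrete, here's a toy 1D sketch (my own illustration, not the Trials HD or Microsoft Research implementation): the previous real frame's pixels are scattered half a step along their motion vectors to synthesize the in-between frame, leaving disoccluded holes that need a fill pass.

```python
def reproject(prev_frame, motion, width):
    """Synthesize an intermediate frame by scattering pixels of the
    previous real frame half a frame forward along their motion vectors.
    Disoccluded pixels are left as holes, then naively filled."""
    out = [None] * width
    for x, color in enumerate(prev_frame):
        tx = x + motion[x] // 2   # half-step toward the next real frame
        if 0 <= tx < width:
            out[tx] = color
    # naive hole fill: copy the nearest valid pixel to the left
    for x in range(width):
        if out[x] is None:
            out[x] = out[x - 1] if x > 0 else prev_frame[0]
    return out
```

The hole-fill pass is where real implementations get complicated; the papers above cover much smarter reconstruction than this nearest-neighbour copy.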
IMHO.
All sorts of re-projection, shader use and decoupling shading from resolution will be one of the big things this new generation.
he misquoted, I corrected him, and you have no idea what I was talking about.
Back in my days... well, let's just say my first PC ran on an 80286 CPU with a 20MB HDD. Actually, it was an XT; I think my life of coding started there. There was no VGA, and monitors were monochrome.
I for one applaud your joke as well executed. The very best humour is lost on the majority of people anyhow; its exclusivity makes it all the more sinisterly amusing.

Yeah, I thought the other things I wrote were sometimes even more stupid, so it should've been obvious that I was joking.
So I didn't spend much time looking for a good base Shadow Fall SP 1080p shot; I believe this is an MP bullshot, which is native 1080p if not higher. The goal is to quantify how much detail is lost by the temporal blending compared to native 1080p.
Procedure:
1) Rescale from 1920x1080 to 960x1080, nearest neighbour
2) Rescale back to 1920 wide, nearest neighbour
3) Diff the layers between the original and the scaled version, with contrast and brightness both up 100%
In the result, more white means more detail was lost in the scaling:
MP capture from earlier in the thread:
http://i.imgur.com/XsTegg2.jpg
Native 1080p "bullshot":
http://i.imgur.com/3zuJ3jQ.jpg
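The steps above can be sketched in code (a minimal grayscale version using plain Python lists; the actual comparison was presumably done in an image editor, and I leave out the contrast/brightness boost since it's only for visualization):

```python
def detail_loss_map(pixels):
    """pixels: 2D list of grayscale values (rows of ints 0-255).
    Simulates the halve-width / restore / diff procedure."""
    h = len(pixels)
    w = len(pixels[0])
    # 1) drop every other column (nearest-neighbour downscale to half width)
    half = [[row[x] for x in range(0, w, 2)] for row in pixels]
    # 2) scale back up: duplicate each remaining column (nearest neighbour)
    back = [[half[y][x // 2] for x in range(w)] for y in range(h)]
    # 3) absolute difference: brighter = more detail lost in the round trip
    return [[abs(pixels[y][x] - back[y][x]) for x in range(w)] for y in range(h)]
```

A flat region diffs to black (nothing lost), while high-frequency detail like alternating columns diffs to bright values, which is exactly what the white areas in the linked images show.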
So... why not just implement a dynamic scaling algorithm akin to KZM, which didn't drop res if the scene was static/near-static? The biggest advantage over KZSF would be none of the artefacting on transparents (the biggest offender being the foliage), and zero ghosting.
That's a good point. Static images are when you need the higher resolution the most.

But at a standstill or with a slowly rotating/moving view (the image I posted comes from a slowly rotating view), the temporal reprojection allows a real native 1080p image (even if at slightly lower quality than regular 1080p).
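A hedged sketch of that static-view case (my own toy version, not Guerrilla's actual code): when nothing moves, the half-width column set rendered this frame can simply be interleaved with the previous frame's column set to reconstruct the full 1920-wide image.

```python
def combine_fields(curr_cols, prev_cols, frame_parity, width):
    """Reconstruct a full-width scanline from two half-width column sets.
    curr_cols: this frame's rendered columns (even or odd, per frame_parity).
    prev_cols: last frame's columns. Valid only when the view is static."""
    out = [0] * width
    for x in range(width):
        src = curr_cols if (x % 2) == frame_parity else prev_cols
        out[x] = src[x // 2]
    return out
```

Once the camera moves, the previous columns no longer line up, which is where the motion-vector reprojection (and its artefacts on transparents) comes in.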
So... why not just implement a dynamic scaling algorithm akin to KZM, which didn't drop res if the scene was static/near-static?
I remember the awe I felt at the first game I fired up that ran at 640x480 and glorious 16-bit colour! I missed the CGA/EGA era myself and only jumped in with a 386 SX 25MHz (FPUs are for wimps!) with a then mind-blowing 4MB of RAM.
I actually considered posting about the use of interlaced video (thinking vertically) some time back (may have even posted it!) as a possible scaling compromise. Reconstruction techniques could get quite advanced, like temporal tweening. Motion blur as a post process could greatly reduce artefacts when the interlacing would be most prominent.

Even if the scene is static, they're only computing 960x1080 pixels per frame, 60 times per second; they probably don't have enough performance to compute a full-res buffer 60 times. Maybe it could go up to something like 1280x960 while standing still, but even then it wouldn't look as good as a full 1080p frame.
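For comparison, a KZM-style dynamic scaler could be as simple as adjusting render width from the last frame's GPU time (a hypothetical heuristic of my own, assuming cost scales roughly with pixel count; real engines use more careful feedback loops):

```python
def choose_width(gpu_ms, width, budget_ms=16.6, min_w=960, max_w=1920):
    """Pick the next frame's render width from last frame's GPU time.
    Cost is assumed roughly proportional to pixel count, so width
    scales with the square root of the headroom; result is clamped
    and forced even for clean half-width interlacing."""
    scale = (budget_ms / gpu_ms) ** 0.5
    w = int(width * scale)
    return max(min_w, min(max_w, w - w % 2))
```

The downside the posts above point at is exactly this feedback delay: the scaler only reacts after a frame has already run long, whereas the fixed 960-wide interlaced scheme never has that spike in the first place.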
That's a good point. My guess is that interlacing is fully consistent and predictable, so it will allow smoother framerates, whereas dynamic scaling is perhaps going to drop frames before the scaling kicks in if you're running double buffered with lower latency.