You can't have either!
good to know but still prefer a good PC port or proper remaster.
Yes, for ferns etc. alpha polygons are going to give a better result, as the polygon count needed to approximate them with geometry would be far too high. But for long grass (eg the Battlefield 2 screenshot you posted above, 2nd pic) it will look far better. I am talking about alpha masking single ferns/plants/flowers/grass blades.
Sliver? Or are you a snake person from Neptune?!
This is the B3D Console Technology Forum, not GAF. Let's keep a tiny slither of console gaming dignity.
Digital Foundry: Can you go into depth on the wins asynchronous compute gave you on the consoles and any differential there between PS4 and Xbox One?
Jean Geffroy: When looking at GPU performance, something that becomes quite obvious right away is that some rendering passes barely use compute units. Shadow map rendering, as an example, is typically bottlenecked by fixed pipeline processing (eg rasterization) and memory bandwidth rather than raw compute performance. This means that when rendering your shadow maps, if nothing is running in parallel, you're effectively wasting a lot of GPU processing power.
Even geometry passes with more intensive shading computations will potentially not be able to consistently max out the compute units for numerous reasons related to the internal graphics pipeline. Whenever this occurs, async compute shaders can leverage those unused compute units for other tasks. This is the approach we took with Doom. Our post-processing and tone-mapping for instance run in parallel with a significant part of the graphics work. This is a good example of a situation where just scheduling your work differently across the graphics and compute queues can result in multi-ms gains.
This is just one example, but generally speaking, async compute is a great tool to get the most out of the GPU. Whenever it is possible to overlap some memory-intensive work with some compute-intensive tasks, there's opportunity for performance gains. We use async compute just the same way on both consoles. There are some hardware differences when it comes to the number of available queues, but with the way we're scheduling our compute tasks, this actually wasn't all that important.
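The scheduling win Geffroy describes can be sketched with some simple arithmetic. This is not id Software's code; all pass names and millisecond costs below are made-up assumptions, chosen only to show why overlapping a bandwidth-bound pass with compute-bound work saves frame time.

```python
# Illustrative sketch of async compute scheduling (hypothetical numbers).

def serial_time(passes):
    """Frame time when every pass runs back-to-back on one queue."""
    return sum(cost for _, cost in passes)

def overlapped_time(graphics_passes, async_passes):
    """Frame time when async compute work hides behind graphics passes.

    Assumes the async work fits entirely in the compute units left idle
    by the graphics passes, so only the longer timeline determines the
    frame time.
    """
    return max(serial_time(graphics_passes), serial_time(async_passes))

# Hypothetical per-pass costs in milliseconds.
graphics = [("shadow_maps", 2.5),   # rasterizer/bandwidth bound, CUs mostly idle
            ("geometry", 5.0)]
async_compute = [("post_processing", 1.5),
                 ("tone_mapping", 0.5)]

naive = serial_time(graphics + async_compute)
overlapped = overlapped_time(graphics, async_compute)
print(f"serial: {naive} ms, overlapped: {overlapped} ms, "
      f"saved: {naive - overlapped} ms")
```

With these invented costs the saving is 2 ms per frame, matching the "multi-ms gains" the interview mentions from nothing more than scheduling the same work across the graphics and compute queues.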
Digital Foundry: Can you talk us through how the 8x TSSAA implementation works? Is it consistent between consoles and PC?
Tiago Sousa: I've always been a fan of amortising/decoupling frame costs. TSSAA is essentially doing that - it reconstructs an approximately 8x super-sampled image from data acquired over several frames, via a mix of image reprojection and a couple of heuristics for the accumulation buffer.
It has a relatively minimal runtime cost, plus the added benefit of temporal anti-aliasing to try to mitigate aliasing across frames (eg shading or geometry aliasing while moving the camera slowly). It's mostly the same implementation between consoles and PC, the differences being some GCN-specific optimisations for consoles and a couple of minor simplifications.
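The accumulation idea behind this can be sketched in a toy form. This is not the actual TSSAA code: it is a minimal 1-D exponential blend, assuming each frame samples the scene with a different sub-pixel jitter, and it omits the reprojection and rejection heuristics the interview mentions.

```python
# Toy sketch of temporal accumulation: blend each jittered frame into a
# history buffer so it converges toward the multi-frame (super-sampled)
# average. Blend weight 0.125 gives roughly 8 effective samples.

def accumulate(history, new_sample, blend=0.125):
    """Blend the current jittered frame into the accumulation buffer."""
    return [(1.0 - blend) * h + blend * s for h, s in zip(history, new_sample)]

# Hypothetical jittered samples of a 2-pixel neighbourhood, repeated.
frames = [[0.0, 1.0], [1.0, 0.0], [0.5, 0.5], [0.25, 0.75]] * 4
history = frames[0]
for frame in frames[1:]:
    history = accumulate(history, frame)

print(history)  # drifts toward the per-pixel averages of the jittered frames
```

The point of the low blend weight is the "amortising" Sousa describes: each frame pays for only one sample per pixel, while the buffer accumulates an approximation of many.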
I thought that. I wasn't quite sure what it was indicative of, if anything. Maybe that most games over the first few years of the current gen have been on tech that was fairly mature anyway?
Tech Interview: Doom
Very nice interview - probably one of the best DF articles in a while.
On async compute:
On TSSAA:
Digital Foundry: The scaler is highly effective on both PS4 and Xbox One. Can you give us your thoughts on the importance of resolution in general and its importance in terms of image quality?
Tiago Sousa: We don't use the native scaler from PS4/Xbox One, we do our own upsampling via a fairly optimal bicubic filter. It's also important to mention that the TSSAA implicitly takes into account the dynamic resolution scaling changes, mitigating aliasing occurring from resolution changes.
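The interview doesn't say which bicubic variant idTech 6 uses, so treat the following as an assumption: a sketch of one common choice, the Catmull-Rom cubic, showing how a 4-tap bicubic reconstruction differs from plain bilinear blending.

```python
# Sketch of a Catmull-Rom bicubic kernel (1-D for clarity); the actual
# "fairly optimal bicubic filter" in idTech 6 is not specified in the
# interview, so this is only a representative example.

def catmull_rom_weight(x):
    """Catmull-Rom cubic weight for a tap at distance x from the output."""
    x = abs(x)
    if x < 1.0:
        return 1.5 * x**3 - 2.5 * x**2 + 1.0
    if x < 2.0:
        return -0.5 * x**3 + 2.5 * x**2 - 4.0 * x + 2.0
    return 0.0

def upsample_1d(s, t):
    """Reconstruct a value at fraction t (0..1) between s[1] and s[2]."""
    w = [catmull_rom_weight(1.0 + t), catmull_rom_weight(t),
         catmull_rom_weight(1.0 - t), catmull_rom_weight(2.0 - t)]
    return sum(wi * si for wi, si in zip(w, s))

# Midpoint between two equal bright taps flanked by dark ones:
print(upsample_1d([0.0, 1.0, 1.0, 0.0], 0.5))  # 1.125: mild overshoot
```

The negative outer weights are what give bicubic its mild sharpening (the 1.125 overshoot above), which is part of why it upscales more crisply than the bilinear filtering a hardware scaler typically applies.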
Resolution importance is a function of eye distance to display and display area - essentially the angular resolution - and, to a degree, the individual's visual acuity. What that means is that the further you are from your display, the higher the angular pixel density. Beyond a certain distance/pixel density threshold you are essentially wasting performance that could be used to improve other things. In VR, for example, you have this tiny display in front of your face, so pushing for higher pixel density still makes sense for dealing with things like geometry aliasing.
With console gameplay, where a player typically plays at a distance of two metres or more, and your display size is a common one (say 70" or so), it starts to become a performance waste relatively quickly, particularly if we are talking about 4K. If a developer does it the brute-force way, you are essentially rasterising the same content, but literally 4x slower, for not that much of a gain. Even for desktop rendering, where users sit fairly close to the display, I can think of a myriad of approaches for decoupling resolution costs rather than just brute-force rendering.
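The angular-resolution argument can be made concrete with a few lines of math. This is a rough sketch, not from the interview: it computes pixels per degree of visual angle for the living-room setup Sousa describes, and the commonly cited ~60 px/deg acuity threshold is an outside assumption.

```python
import math

# Pixels per degree of visual angle for a 16:9 display, as a rough
# illustration of the "angular resolution" point. ~60 px/deg is a
# commonly cited 20/20 acuity threshold (assumption, not from the
# interview).

def pixels_per_degree(diag_inches, width_px, height_px, distance_m):
    """Horizontal pixels per degree at the given viewing distance."""
    aspect = width_px / height_px
    width_m = diag_inches * 0.0254 * aspect / math.hypot(aspect, 1.0)
    # Angle subtended by a single pixel, in degrees (small-angle regime).
    angle_per_px = math.degrees(math.atan((width_m / width_px) / distance_m))
    return 1.0 / angle_per_px

# 70" display at 2 m, as in the interview's living-room example.
for w, h, label in [(1920, 1080, "1080p"), (3840, 2160, "4K")]:
    print(f"{label}: {pixels_per_degree(70, w, h, 2.0):.0f} px/deg")
```

With these numbers, 1080p lands around 43 px/deg while 4K lands around 87 px/deg - well past the assumed acuity threshold - which is the "performance waste" Sousa is pointing at.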
Yep. I honestly have no idea why anyone thinks 8k will ever be necessary for the home.
I don't really agree with him though, because playing maybe 12 feet away on my 55" 1080p TV, I noticed Doom looks pretty soft/blurry on XBO (IIRC DF says it's often running not much above 720p). It could definitely use a higher resolution.
Get an Xbox Scorpio!
Get a PS4.
Get an Xbox Scorpio!
Maybe he used the flops capacitor to get one from the future with the Fusion APU powered by all the garbage posts on the forums.
Where?