*spin-off* How should we classify resolution?

I imagine developers don't talk about it and it's only PR/marketing that cares about a singular metric. The devs will talk about specific buffers with their properties.
 
width x height x multisample count

done.
Where are you getting these numbers? Front buffer, back buffer, and output can all be different. And the same scene rendered with the same size front and back buffers and output at the same resolution can still be composed of different resolution components.

Also, multisampling isn't the only form of AA anymore, and it isn't always performed on every pixel when it is used.
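To make the "devs talk about specific buffers with their properties" point concrete, here's a hypothetical sketch (all buffer names and sizes invented for illustration) of how one frame can mix several working resolutions, which is exactly why a single headline number is ambiguous:

```python
from dataclasses import dataclass

@dataclass
class RenderTarget:
    # Hypothetical descriptor: each buffer in a frame has its own properties.
    name: str
    width: int
    height: int
    samples: int  # multisample count; 1 = no multisampling

# One frame can mix many "resolutions" -- which one is *the* resolution?
frame_buffers = [
    RenderTarget("gbuffer",      1920, 800,  2),  # multisampled geometry pass
    RenderTarget("ssao",          960, 400,  1),  # half-res ambient occlusion
    RenderTarget("bloom",         480, 200,  1),  # quarter-res post effect
    RenderTarget("final_output", 1920, 1080, 1),  # what the TV receives
]

for rt in frame_buffers:
    print(f"{rt.name}: {rt.width}x{rt.height}, {rt.samples}x samples")
```

Any of those four could plausibly be quoted as "the" resolution, depending on whether you mean shading work, output signal, or something in between.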
 
So, if we ignore the different data types for the final output image for the moment, we have resolution as well as aspect ratio as variables:

The Order uses a sub-1920x1080 resolution, but it's 1:1 pixel-mapped with the inclusion of black bars.

If we are talking about rendering, it's not full HD, but if we are talking about what our TV receives, then it is full HD.
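Concretely, taking the widely reported 1920x800 render target for The Order letterboxed into a 1080p output signal, the numbers work out as:

```python
render_w, render_h = 1920, 800   # reported render resolution
output_w, output_h = 1920, 1080  # what the TV receives

# Letterboxing: the unused rows are split into top and bottom bars.
bar_height = (output_h - render_h) // 2
print(f"black bar: {bar_height}px top and bottom")

# Rendered pixels vs pixels in the output signal.
print(f"rendered: {render_w * render_h:,} of {output_w * output_h:,} pixels")

aspect = render_w / render_h
print(f"aspect ratio: {aspect:.2f}:1")
```

So the TV receives a full 1080p signal, of which about 74% is actual rendered image, in a 2.40:1 frame.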

Are folks trying to gauge the quantity of pixels pushed, to compare or marvel at GPU power, or simply trying to know if they can expect a 1:1 image on their screen?
Off topic, but that's always interested me with The Order: would it really have been more taxing if they had used a higher resolution such as 2160x900 downsampled, to improve IQ by bringing the pixel count closer to 1080p's within their 2.40:1 aspect ratio?
 
So you're asking if using
2160x900 downsampled (to what, 1920x800?)
would be more taxing than EQAA on 1920x800 (which I assume is what The Order uses).
What level of EQAA? 2xEQAA = 2 MSAA samples and 2 coverage samples.
2160x900 is about 27% more pixels than 1920x800. I would assume the downsample is more or less free, so call it roughly 27% more pixel work than 1920x800 with no AA.
So the question is whether whatever level of MSAA The Order uses (I would consider the cost of the coverage samples more or less free) hits the framerate by more than that.
I'm sure someone who knows the cost of various MSAA levels will weigh in.
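The raw pixel counts are easy to sanity-check:

```python
# Pixel counts for the two candidate render resolutions.
pixels_downsampled_source = 2160 * 900  # proposed supersampled source
pixels_native             = 1920 * 800  # reported native render target

ratio = pixels_downsampled_source / pixels_native
print(f"2160x900 shades {ratio:.3f}x the pixels of 1920x800")
# about 1.27x, i.e. roughly 27% more pixels

# For reference, against full 1080p:
print(f"vs 1920x1080: {pixels_downsampled_source / (1920 * 1080):.3f}x")
```

So if the game were purely fill-rate bound (a big assumption), the supersampled option costs about a quarter more shading work before the downsample.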
 
MSAA increases the depth/coverage sampling rate but keeps the colour shading rate the same (i.e. depth is evaluated per sample, but colour is shaded once per pixel, usually using centroid sampling).
Memory cost is multiplied by the sample count for both colour and depth buffers.
Bandwidth cost is less clear-cut, since modern hardware compresses this data.
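A back-of-the-envelope memory estimate under simple assumptions (uncompressed worst case, 4 bytes per colour sample and 4 bytes per depth sample; the configs and byte sizes here are illustrative, not taken from any actual title):

```python
def msaa_memory_mib(width, height, color_fragments, depth_samples,
                    bytes_color=4, bytes_depth=4):
    # Uncompressed worst case: storage scales linearly with per-pixel
    # sample counts, as noted above. Real hardware compresses heavily.
    pixels = width * height
    color = pixels * color_fragments * bytes_color
    depth = pixels * depth_samples * bytes_depth
    return (color + depth) / (1024 * 1024)

# 1920x800: no AA vs 4x MSAA vs an EQAA-style 2-fragment configuration
for label, frags, depth in [("no AA", 1, 1), ("4x MSAA", 4, 4), ("2f EQAA", 2, 2)]:
    print(f"{label}: {msaa_memory_mib(1920, 800, frags, depth):.1f} MiB")
```

An EQAA-style mode also needs a small extra coverage (FMask) plane not counted here, but the point stands: the big storage multiplier comes from the fragment count, not the coverage samples.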
 
That's the AA used in TO:1886

Now that the background is out of the way, it’s time to talk about what we used in The Order to combat aliasing. In the end the tools we used weren’t particularly novel or exotic, but it was still important to choose the right ingredients.

The first component was EQAA, which is a variant of MSAA available on AMD GPUs. Like Nvidia's CSAA, it essentially lets you decouple MSAA fragment storage from coverage computations so that you can more finely tune the balance between quality and performance/memory. For The Order we used 2 color fragments and 4 coverage samples, which puts the raw quality somewhere between 2x and 4x MSAA if you're using a standard hardware resolve.

To increase stability, we wrote a custom resolve shader that uses a wider, higher-order reconstruction filter instead of the typical box filter that’s used in hardware. This essentially let us smooth out the signal a bit while resampling, in order to give a result that was more stable under motion.

Finally, we combined our resolve pass with a fairly standard temporal antialiasing component. This is done primarily to reduce flickering from shader aliasing, by tracking and reprojecting surface samples from previous frames.
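The post doesn't give the exact reconstruction filter, only that it is wider and higher-order than a box. As a sketch of the idea, here is a toy weighted resolve using a Gaussian as a stand-in for whatever filter they actually used; every name and number below is invented for illustration:

```python
import math

def filter_weight(dist, sigma=0.6):
    # Stand-in reconstruction filter. The real filter in The Order is not
    # specified here; the key property is that it extends past the 1-pixel
    # footprint of a box filter, smoothing the signal during resampling.
    return math.exp(-(dist * dist) / (2.0 * sigma * sigma))

def resolve_pixel(samples):
    """Weighted resolve over a footprint wider than one pixel.

    `samples` is a list of (distance_from_pixel_center, color) pairs,
    covering this pixel's own MSAA samples *and* its neighbours' samples.
    """
    total_w = 0.0
    total_c = 0.0
    for dist, color in samples:
        w = filter_weight(dist)
        total_w += w
        total_c += w * color
    return total_c / total_w  # normalise so the weights sum to 1

# Toy 1D example: a bright central sample pulled toward dimmer neighbours.
print(resolve_pixel([(0.0, 1.0), (0.5, 0.5), (1.0, 0.25)]))
```

A plain hardware resolve would just average the samples inside each pixel; widening the footprint trades a little sharpness for much better stability under motion, which matches the goal described above.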
 
Look up the words 'mega' and 'mebi': the claim that the original definition of mega was the binary value (2^20) is wrong.

As Shifty has already said, multiple definitions for mega and giga weren't a problem until HDD manufacturers and their marketers got hold of them. As further proof, they didn't redefine the kilo prefix from 2^10 bytes to 10^3 bytes, just mega and giga. OS coders don't care what games HDD manufacturers play, so they've never changed their definitions from powers of two, hence the calls I used to field: 'your website said I was getting a 500GB drive, why is it only showing 465GB?'
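The discrepancy in that support call is pure unit conversion, nothing is missing from the drive:

```python
def advertised_to_reported_gib(advertised_gb):
    # Drive makers count decimal gigabytes (10^9 bytes); the OS divides
    # the same byte count by 2^30 and labels the result "GB" (really GiB).
    bytes_total = advertised_gb * 10**9
    return bytes_total / 2**30

# A "500 GB" drive as reported by a binary-prefix OS:
print(f"{advertised_to_reported_gib(500):.1f}")
```

Both numbers describe the same 500,000,000,000 bytes; only the divisor differs.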
 
Yes they did:
kilobyte = 1000 bytes
kibibyte = 1024 bytes

Yes, for many years people were incorrectly(*) calling 1024 bytes a kilobyte, and some people still do today (I even make the mistake sometimes).

https://en.wikipedia.org/wiki/Kilobyte

(*) perhaps on purpose, if it makes their product look better
Sort of. What happened is that certain companies started advertising storage using decimal values when it had already been established for 20+ years that binary values were used for such things. The binary-only kibi, mebi, etc. prefixes were invented at that time (1999 or so, if memory serves); they didn't really exist as a standard before then. So for years a kilobyte was 1024 bytes. Then some companies began to advertise their products using a 1000-byte kilobyte, as defined by tiny print on the box, and then a new standard was created for binary powers, but not everyone uses it.

It was 1024 for longer than it's been 1000
 
yes they did
kilobytes = 1000 bytes
kibibytes = 1024 bytes

yes for many years ppl were incorrectly(*) calling 1024 as a kilobyte
They can't have been incorrectly calling it a kilobyte when that was the only name available (kibibyte hadn't been invented)! Someone bringing in a new name doesn't update the entire vernacular lexicon, especially when it's a perfectly serviceable lexicon that doesn't need updating: when talking about binary storage, the decimal prefixes take on the nearest power-of-two value. Absolutely no-one was ever confused by this. The confusion only arose when storage companies cheated to get bigger numbers than their market expects, using their market's local terminology. And only HDD companies: floppy discs, RAM, etc. were using binary prefixes (with the same names as the decimal prefixes, but none of us were too stupid to get confused between the two).
 
They can't have been incorrectly calling it kilobyte when that was the only name available
Think about this sentence for a second; it's like calling a 'donkey' a horse and saying that's correct since the word 'donkey' hasn't been invented yet :). Yes, a donkey is close to a horse, I grant you, but it's not correct. The rest of what you say is also not correct: there were instances of people/companies using kilobyte to mean 1000 bytes before it became 'the standard'.

I repeat: no-one (myself included, even though I sometimes do) should use kilobyte = 1024 nowadays; it's like talking in inches for length (outside the US). Btw, inches as in ~2.54cm... ah, let's just call it 2 and a half cm :D
 
Think about this sentence for a second, its like calling a 'donkey' a horse and saying thats correct since the word donkey hasnt been invented yet :).
No, because a donkey and a horse aren't differentiated like that; they exist in parallel, and it would be unclear which animal you were referring to. 'Kilobyte' as a word means 2^10 bytes. It has adopted the prefix, but without any confusion whatsoever, because it's always qualified by the 'byte' root.

I repeat, noone (including myself even though I sometimes do) should use kilobyte = 1024 nowadays, its like talking in inches for length (outside the US), btw inches as in ~2.54cm ah fuck it, lets just call it, 2 and a half cm :D
No it's not. A kilobyte is an entire thing with no confusion. There are squillions of words that have evolved over time to not follow their exact component meanings, but we use them without confusion.

'Inflammable' means 'flammable'. 'Intrude' literally means 'push in'. 'Incredible' means 'not credible'. OMG! A prefix with two different meanings! How can we cope?!

Kilobytes and other -bytes are far better than these other words because there's zero confusion or arbitrary evolution. There was no need whatsoever to invent a new terminology. Use of binary is so rare that it doesn't need a whole nomenclature of its own - it's not like we're talking about kilograms and kibigrams in the same sentence. Or even kibigrams ever, because they don't exist.

Binary prefixes were invented to solve a problem created by people not following the existing, perfectly serviceable language for marketing reasons. Instead of everyone adopting a (ridiculous sounding) alternate terminology, the storage manufacturers should just have been held to the official terms.
 