davepermen
well... i'm still waiting for more detail on what they actually implemented, but the only obvious one is, of course, brilinear-style stuff. then again, it can be a small-range error in certain cases, which they try to detect (roughly along the lines of the sketch below).
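just to make it concrete, here's a little toy in C (my own illustration, not any vendor's actual hardware logic; the threshold value is invented) of how "brilinear" squeezes the trilinear blend weight so that most LOD fractions end up taking a single-mip bilinear fetch instead of blending two:

```c
#include <stdio.h>

/* Full trilinear always blends two bilinear fetches from adjacent mip levels.
 * "Brilinear" shrinks the blend band so that, for most LOD fractions, only
 * ONE mip level is fetched (plain bilinear, half the texture reads).
 * THRESHOLD is a made-up tuning value; the real detection logic is exactly
 * the part we don't know. */

#define THRESHOLD 0.15f

/* weight for the upper mip level; 0.0 or 1.0 means a single bilinear fetch */
static float trilinear_weight(float lod_frac)
{
    return lod_frac;                       /* always a two-mip blend */
}

static float brilinear_weight(float lod_frac)
{
    if (lod_frac < THRESHOLD)              /* near lower mip: bilinear only */
        return 0.0f;
    if (lod_frac > 1.0f - THRESHOLD)       /* near upper mip: bilinear only */
        return 1.0f;
    /* remap the remaining band to 0..1 so the transition stays continuous */
    return (lod_frac - THRESHOLD) / (1.0f - 2.0f * THRESHOLD);
}

int main(void)
{
    for (float f = 0.0f; f <= 1.001f; f += 0.1f)
        printf("lod_frac %.1f  trilinear %.2f  brilinear %.2f\n",
               f, trilinear_weight(f), brilinear_weight(f));
    return 0;
}
```

the error is small only inside the squeezed band; outside it you simply get bilinear, which is exactly the "certain cases" they would have to detect.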
Suspicious said:
1. They haven't used numerical optimizations that give ~3.8x speedup and introduce random one-bit errors that could also be deemed acceptable.

they won't get such a factor out of it as long as they do scanline rasterizing; it's still a once-per-pixel job. but there are ways they could, and whether they already do, we don't know. (the discussions about one-mip-level trilinear, i guess you followed them.)

Suspicious said:
2. They haven't invented a new algorithm for full trilinear filtering that works ~14.5x faster and gives the same result, as in the Chudnovsky + FFT vs. Gauss-Legendre case I mentioned.
well, the way they did it is not bad; it's rather good. it worked for over a year until somebody noticed it, and only with great effort.

Suspicious said:
Instead they "invented" an algorithm which reduces the precision of full trilinear filtering, or even falls back to bilinear in some cases, which is clearly NOT what I requested through the game or control panel settings. And I don't like having decisions taken away from me, and I guess neither do you? In case you really don't care, then let me decide what you should think about it -- even if what they did was not bad, they did it in a bad way. 8)
yes, of course it affects the output, and yes, of course we should be informed. but we've lived with that quite happily until now, so why can't we continue? we have fp24, non-IEEE-conforming INSTRUCTIONS (yes, the data format is the same, but the calculation quality is not), we have adaptive aniso, funny antialiasing. we're so loaded with fakes that adaptive trilinear replacements don't really hurt on top of it. but yes, you are being fooled, and you're allowed to cry. you're crying about something you would never have noticed (unlike brilinear at the start), but so what.
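to put a rough number on the fp24 point: ati's fp24 pipeline format is commonly described as 1 sign / 7 exponent / 16 mantissa bits, against fp32's 1 / 8 / 23. a crude C toy (my own sketch; real hardware may round rather than truncate, and the exponent range also differs) that just throws away the extra mantissa bits shows the kind of precision the pipeline already gives up before any filtering tricks:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Emulate "fp24-ish" precision by truncating an fp32 value's mantissa from
 * 23 stored bits down to 16 and looking at the relative error. This is only
 * an order-of-magnitude illustration, not a model of any real chip. */

static float truncate_mantissa(float x, int keep_bits)
{
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);
    uint32_t drop = 23 - (uint32_t)keep_bits;  /* mantissa bits thrown away */
    bits &= ~((1u << drop) - 1u);              /* zero the low mantissa bits */
    memcpy(&x, &bits, sizeof x);
    return x;
}

int main(void)
{
    float x = 3.14159274f;
    float y = truncate_mantissa(x, 16);        /* "fp24-ish" precision */
    printf("fp32:      %.9f\nfp24-ish:  %.9f\nrel error: %g\n",
           x, y, (x - y) / x);
    /* the relative error lands around a few parts per million, versus roughly
     * 1e-7 representable resolution at full fp32 for a value of this size */
    return 0;
}
```

so the pixel pipeline is already an approximation of an approximation; that's the context the trilinear fuss sits in.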
i, as a developer, don't care, and i know why: there are much more evil things going on on a gpu, so this doesn't really matter. i'm on the other side, working on raytracing, global illumination, and high-quality offline (and one day online?) renderers. gpus are just shit, loaded with such optimisations. if it's not visible, it doesn't hurt me. i never expect a gpu to "work as expected"; it is not meant to. it is meant to give the gamer the illusion of seeing something complex and realistic. that's all. scientific calculations on gpus are fine, but i would not do them if i needed real precision, on ANY gpu that exists today. there is simply no 100% definition of how each part of it really works.
that's my opinion on it. except for that russian page, there is nothing showing that it visually hurts, and there were no user complaints before the bit-by-bit comparisons. that's good enough to be a valid optimisation: it passed the "close enough to the original" test. what else do you need? user control, okay. but then again, we can't compare pixel-shading hardware at all anyway; different chips neither work identically nor do the same workload. so what? it never hurt, as long as the output looked good. with all the _pp hints, it didn't. with the premature per-app optimisations, it was cheating. now, with the shader optimizer, it is a valid optimisation, and no one bothers.