Digital Foundry Article Technical Discussion [2020]

That's not how HDR is supposed to work. The PQ EOTF has fixed nit levels. There's nothing stopping display manufacturers from adding dynamic tonemapping, and all of them actually do this now. Dolby Vision has just introduced "Dolby Vision IQ" for this exact reason: basically, the tonemapping is tied to an ambient light sensor in the TV and the image is adjusted accordingly, while maintaining the creator's intent.

A good read is Polyphony Digital's paper on HDR. Basically there's a sweet spot, like the diffuse white point. All HDR displays should be able to display this sweet spot (the 100-300 nit range). As long as the sweet spot is hit, you can go as dark in the low end and as bright in the high end as you like. The better the display, the better the range and dimension of the image.
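For anyone wondering what "fixed nit levels" means in practice, here's a minimal sketch using the SMPTE ST 2084 constants: a PQ code value always decodes to the same absolute luminance, no matter which display shows it (the code values in the loop are just illustrative).

```python
# Minimal sketch of the SMPTE ST 2084 (PQ) EOTF: a 10-bit code value decodes to
# an absolute luminance in cd/m^2 (nits), independent of the display showing it.
M1 = 2610 / 16384        # 0.1593017578125
M2 = 2523 / 4096 * 128   # 78.84375
C1 = 3424 / 4096         # 0.8359375
C2 = 2413 / 4096 * 32    # 18.8515625
C3 = 2392 / 4096 * 32    # 18.6875

def pq_to_nits(code_10bit: int) -> float:
    """Decode a full-range 10-bit PQ code value (0-1023) to luminance in nits."""
    e = (code_10bit / 1023.0) ** (1.0 / M2)
    y = max(e - C1, 0.0) / (C2 - C3 * e)
    return 10000.0 * y ** (1.0 / M1)

if __name__ == "__main__":
    for code in (0, 520, 769, 1023):   # illustrative code values
        print(f"code {code:>4} -> {pq_to_nits(code):8.1f} nits")
```

Running it, roughly 100 nits lands around code 520 and roughly 1,000 nits around code 769; what a given display does with encoded values above its own peak is where tonemapping comes in.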
 
I could do that in the future; that's not a bad idea. But perhaps it is just me: I have yet to actually be super enthralled with HDR for some reason. Perhaps it is because I just do not have an amazing HDR set.
Well, when you do... I'll leave it at that and play mysterious. Still, you don't need an awesome monitor to notice it, trust me on this one. Think of how much you love raytracing, which I can understand despite only having a graphics card that isn't RT capable (a 1080).

If YouTube is tonemapping the HDR video then yes, I guess it will never look the same as true SDR. Those are the issues of working with two colour spaces at the same time; the OS also sometimes has a hard time and odd things can happen.

I have HDR enabled on the monitor all the time (using the W10 slider to adjust and match both as much as I can), and it surprised me to see the HDR tag on the video. I think it was a first for DF.

Keep up the good work, Alex!
 
That's not how HDR is supposed to work. The PQ EOTF has fixed nit levels. There's nothing stopping display manufacturers from adding dynamic tonemapping, and all of them actually do this now. Dolby Vision has just introduced "Dolby Vision IQ" for this exact reason: basically, the tonemapping is tied to an ambient light sensor in the TV and the image is adjusted accordingly, while maintaining the creator's intent.

A good read is Polyphony Digital's paper on HDR. Basically there's a sweet spot, like the diffuse white point. All HDR displays should be able to display this sweet spot (the 100-300 nit range). As long as the sweet spot is hit, you can go as dark in the low end and as bright in the high end as you like. The better the display, the better the range and dimension of the image.

I'm looking at Calman for HDR Dolby Vision, and its workflow for LG has two modes: one for reference viewing (dark) and one for daytime viewing that has the same peak brightness but an overall higher average picture level to compensate for ambient light. I'm not sure how it works, but it's not dynamic tonemapping. Without some good solution for ambient light, HDR is basically garbage; any standard that forces you to see crushed blacks under your normal viewing conditions would be missing the point.
 
Nope, I'm the same way. I think it's because no one has quite gotten HDR "right." For me, when I see it, it's either understated to the point where it doesn't look much different from good SDR lighting, or it's overstated to make it "pop" more, but then it comes across as unrealistic. I had high hopes for HDR, but I'm still waiting for something to really impress me with how well it matches realistic lighting. And so far, none really have.

I think part of it is also that developers still haven't figured out how to properly implement the transition from dimly lit indoor areas to brightly lit outdoor areas and vice versa with HDR. I would have thought this would have been easier to do than trying to accomplish the same feat in SDR, but I guess it must be tough, because no one is getting it right, IMO.

Perhaps if HDR stabilizes (standards and such) and if it becomes more prevalent with the next generation, we'll see something truly impressive. And when I say impressive I mean something that doesn't go, "LOOK AT ME, I'M IN HDR NOW" and instead something that you look at and don't notice because it all looks like how you expect it to look in real life.

Basically, I'll consider HDR to be good if I look at a scene in a game and at no point do I think to myself, "This looks like it's in HDR." or "This looks like it's in SDR." and instead I don't notice it because it all looks like how it should look.

Regards,
SB
Depending on the game, it doesn't need to look realistic, but if it does...

Have you tried Forza Horizon 4 with HDR? You can see the difference in real time. It's night and day.

Switch HDR off and on and you are going to notice the difference in shadowing just in the paint of the car while the action is paused. In game, it's simply a different game, looks-wise.

What looks like an amazing game suddenly becomes the superlative of amazing. And once you enable HDR, going back to SDR makes the game look dull and lacking in colour depth and shadowing; it becomes something of a washed-out image (though still so good).

Forza Horizon 4 has a very easy calibration tool to set HDR up right. The sweet spot for my monitor is 1500 nits (when the FM logo disappears completely, as indicated), in HDR for Games mode. My monitor only goes up to 500 nits in HDR mode, so it isn't something out of anyone's league, yet the difference is so pronounced!
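(As an aside, that 1500-nits-on-a-500-nit-monitor result presumably comes down to the display or the game rolling off the highlights. Below is a minimal sketch of the general idea, a soft shoulder above a knee that never exceeds the panel's peak; this is a generic roll-off for illustration, not the actual curve my monitor or Forza uses.)

```python
# Generic highlight roll-off sketch (not the game's or the monitor's real curve):
# scene luminance mastered up to ~1500 nits is mapped onto a ~500-nit panel.
# Values below the knee pass through unchanged; above it, highlights are
# compressed smoothly so they never exceed the panel's peak.
def rolloff(nits: float, knee: float = 300.0, peak: float = 500.0) -> float:
    """Map scene luminance (nits) to display luminance with a soft shoulder."""
    if nits <= knee:
        return nits
    span = peak - knee
    x = (nits - knee) / span
    # Asymptotically approach `peak` as the input grows beyond the knee.
    return knee + span * (x / (1.0 + x))

if __name__ == "__main__":
    for n in (100, 300, 500, 1000, 1500):
        print(f"{n:>5} nits in -> {rolloff(n):6.1f} nits out")
```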

Also, have you tried Gears 5?

This one has an HDR tool to set the nits that you might like a lot, because the clouds look very realistic.

In this case, I followed what I see in real life, because the calibration image they use shows the sun appearing above the horizon in between the clouds as a reference.

So I started adjusting it: 500 nits looked dull, and 1000 nits looked overly bright and lost detail.

Then I found a sweet spot where the edges of the clouds were shining as they do in real life when the sun appears between them, and when I looked at the value, it turned out that the sweet spot was also 1500 nits.

The result is that playing Gears 5 is...otherworldly. :mrgreen:

You can see the blue lights of the gears' armor reflecting on the character's faces, and the colors are so intense..

When you are in the first level and enter the cave, after the ceiling of the cave collapses, what was so dark...

becomes brilliant because of the open gap in the ceiling where the outside light filters through, and you need to squint because of the luminance intensity.

It's still not like real life; maybe in a few years, when 10,000-nit monitors are available. But it certainly adds to the atmosphere of games where materials and light behave like the real world (as with raytracing), even if they are artistically more or less realistic.

cheers @Silent_Buddha
 
Hold onto your hats: Richard is discussing DLSS as possible tech for the next Switch, whenever it arrives.

Fascinating results at lower resolutions.


This is really interesting stuff, and not only DLSS for Nvidia; I believe the AMD RIS results were also very respectable, from what I recall. I would not be surprised if the PS5 went in this direction as the logical next step from checkerboarding, or used a combination of the two.
Could this be an evolved variation of AMD RIS?

Apparently, an Xbox game studio is experimenting with shipping low res textures to be upscaled in real time by an AI.

https://wccftech.com/an-xbox-game-s...-res-textures-to-be-ai-upscaled-in-real-time/

[Image: AINeuralStuff-740x368.jpg]
 
That picture, while impressive, is from something completely different and not something that can currently be done in real time. So it could be misleading if someone looks at it and thinks it's representative of what MS can accomplish in real time.

Now, it's possible that MS may be able to do something similar in real-time, but we don't know as there are no public demonstrations of it yet.

Regards,
SB
 
Perhaps in the mobile space, to use with the cloud? One can live without HDR (I have many games without it that impress me so much), even without raytracing (the only box I have yet to tick), but on a phone or tablet, where games need to look good while saving battery and storage space, this technology could be very interesting.

I think I saw that same image somewhere, a few years ago, but can't recall exactly.
 
Just because I am a stickler: AMD RIS does something vastly different from DLSS. It is merely an image sharpener, a smarter one, but nothing more. It cannot generate pixels, smooth lines, or do anything that image reconstruction like checkerboarding or DLSS will do. It actually increases aliasing.

Would not have it any other way. I know it's just sharpening, but I mentioned it because that seems to be exactly what the AI upscale on the Shield is, and it's in AMD's toolkit for Sony and Microsoft.

I seem to recall it produced reasonable results.

Did DF cover AMD RIS and its perceived quality and performance gains when AMD announced it? I seem to remember a comparison video but cannot find it now.

I had a quick Google and the top result is similar to what I remember:

https://www.techspot.com/article/1873-radeon-image-sharpening-vs-nvidia-dlss/

Bottom Line
Radeon Image Sharpening is genuinely impressive. It doesn’t require any developer implementation and it works well by sharpening the image which can be useful in a variety of situations.

After spending more time with the feature, we feel the best use case is for image downsampling with high resolution displays. A sharpened 1800p image was typically as good as a native 4K image in our testing, which means you can happily use this configuration with Navi GPUs to gain ~30% more performance for a minimal quality loss. Downsampling all the way to 1440p didn’t deliver as good results, so the sweet spot is around that 70 to 80 percent resolution scale.

The article then concludes in favor of RIS over DLSS, but those are old titles, and the newer DLSS implementations are, I believe, far better. Both will have their place, but Sony and Microsoft are on team red, so this could be something that gets adopted. Also, this is a blind driver-level application; I would assume the consoles' SDKs would allow tweaking to help dial quality and the artists' vision into the application.
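Rough pixel math behind those resolution scales, for context (my own back-of-the-envelope arithmetic, not figures from the article):

```python
# Back-of-the-envelope pixel counts for the resolution scales discussed above
# (my own arithmetic, not figures from the article).
def pixel_fraction(height: int, native: int = 2160) -> float:
    """Fraction of native pixels rendered at a given 16:9 vertical resolution."""
    return (height / native) ** 2

if __name__ == "__main__":
    for h in (2160, 1800, 1440):
        print(f"{h}p: {h / 2160:.0%} linear scale, {pixel_fraction(h):.0%} of the pixels of native 4K")
```

So 1800p pushes roughly 69% of the pixels of native 4K and 1440p roughly 44%, which is where that "decent performance gain for minimal quality loss" trade-off comes from.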

Edit: further thoughts.
Possibly going off piste here.
This works with soft images and not angular geometry.
For PS4 Pro, Mark Cerny talked about 4K geometry with lower-res shading that would eliminate the dumb upscale and its artifacts, but that may be superseded by temporal techniques now.
We know Microsoft has VRS, and this would complement it well; if it were applied intelligently with info from the game, it could use the VRS data (and more) to know where to sharpen more or less.
 
Also, this is a blind driver-level application; I would assume the consoles' SDKs would allow tweaking to help dial quality and the artists' vision into the application.
There is a conscious implementation of it in the form of Contrast Adaptive Sharpening. It's shipped under the AMD FidelityFX name in several titles, implemented in the engine itself, vendor agnostic, and it seems to apply sharpening selectively where it matters.
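For the curious, the rough idea behind contrast-adaptive sharpening looks something like the sketch below: derive a per-pixel amount from the local 3x3 contrast so flat or soft areas get sharpened harder than already-contrasty edges. This is a simplified illustration in the spirit of CAS, not AMD's actual shader.

```python
import numpy as np

def cas_like_sharpen(img: np.ndarray, strength: float = 0.2) -> np.ndarray:
    """Simplified contrast-adaptive sharpen on a 2D grayscale image in [0, 1].

    Not AMD's real CAS shader: just an unsharp mask whose per-pixel amount
    shrinks where the local 3x3 contrast (max - min) is already large.
    """
    padded = np.pad(img, 1, mode="edge")
    # Gather the 3x3 neighbourhood of every pixel as 9 shifted views.
    stack = np.stack([padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                      for dy in range(3) for dx in range(3)])
    local_min = stack.min(axis=0)
    local_max = stack.max(axis=0)
    blur = stack.mean(axis=0)                 # cheap 3x3 box blur
    contrast = local_max - local_min          # 0 = flat, 1 = hard edge
    amount = strength * (1.0 - contrast)      # sharpen flat/soft areas more
    return np.clip(img + amount * (img - blur), 0.0, 1.0)

if __name__ == "__main__":
    test = np.random.rand(8, 8)
    print(cas_like_sharpen(test).shape)
```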

I believe next gen consoles will try to lessen the impact of 4K resolution using Smart Sharpening or AI Upscaling or Variable Rate Shading or a combination of them. This is badly needed if next gen games are to increase presentation quality significantly or use RT.
 
Apparently, an Xbox game studio is experimenting with shipping low res textures to be upscaled in real time by an AI.
Why on earth would you not ship the small textures and then, once they're downloaded to the HDD, upscale them just once BEFORE the game starts, one time only?
i.e. it's apparent whoever wrote the article has no actual understanding of what they are doing.
 
Would not have it any other way. I know it's just sharpening, but I mentioned it because that seems to be exactly what the AI upscale on the Shield is, and it's in AMD's toolkit for Sony and Microsoft.

I seem to recall it produced reasonable results.

Did DF cover AMD RIS and its perceived quality and performance gains when AMD announced it? I seem to remember a comparison video but cannot find it now.

I had a quick Google and the top result is similar to what I remember:

https://www.techspot.com/article/1873-radeon-image-sharpening-vs-nvidia-dlss/

Bottom Line
Radeon Image Sharpening is genuinely impressive. It doesn’t require any developer implementation and it works well by sharpening the image which can be useful in a variety of situations.

After spending more time with the feature, we feel the best use case is for image downsampling with high resolution displays. A sharpened 1800p image was typically as good as a native 4K image in our testing, which means you can happily use this configuration with Navi GPUs to gain ~30% more performance for a minimal quality loss. Downsampling all the way to 1440p didn’t deliver as good results, so the sweet spot is around that 70 to 80 percent resolution scale.

The article then concludes in favor of RIS over DLSS, but those are old titles, and the newer DLSS implementations are, I believe, far better. Both will have their place, but Sony and Microsoft are on team red, so this could be something that gets adopted. Also, this is a blind driver-level application; I would assume the consoles' SDKs would allow tweaking to help dial quality and the artists' vision into the application.

Edit: further thoughts.
Possibly going off piste here.
This works with soft images and not angular geometry.
For PS4 Pro, Mark Cerny talked about 4K geometry with lower-res shading that would eliminate the dumb upscale and its artifacts, but that may be superseded by temporal techniques now.
We know Microsoft has VRS, and this would complement it well; if it were applied intelligently with info from the game, it could use the VRS data (and more) to know where to sharpen more or less.
I read this article when it was originally published, and I couldn't help but get stuck on the term "downsampling". Isn't it upsampling when you take an 1800p image and scale it up to 2160p, regardless of any sharpening filters? Wouldn't downsampling be the opposite: taking a higher-resolution image and reducing its sample rate down to a lower resolution?

Anyway, RIS vs DLSS is no longer a vendor vs vendor issue, since RIS's algorithm is open source and has been implemented on the driver level by nVidia as well.

Hardware Unboxed did a fairly in depth video comparing RIS vs DLSS when RIS was released. Pretty sure they did follow up videos when nVidia released their sharpening also.
 
Why on earth would you not ship the small textures and then, once they're downloaded to the HDD, upscale them just once BEFORE the game starts, one time only?
i.e. it's apparent whoever wrote the article has no actual understanding of what they are doing.

Devil's advocate says: perhaps due to expensive, hard-to-expand, limited physical storage space on the HDD?
 
I'm wondering if there's some way they could use AI to upscale a small block of a texture that would fit into cache. Basically trade ALU for VRAM bandwidth, keeping the data in cache while it's needed. So, some version of reading a block from the texture into GPU cache and then AI-upscaling it before sampling, maybe even avoiding a write back to VRAM. Or maybe with virtual texturing (tiled resources), the virtual texture stores the upscaled texture; that seems like the simplest case. I'm not sure whether the tile sizes typically map to what fits into cache. Keep the virtual texture in VRAM, but you only have to selectively upscale the tiles that you need, which would roughly fit into cache.
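Something like the flow below is what I have in mind; a very rough sketch where upscale_tile is a made-up placeholder (nearest-neighbour here) standing in for whatever ML model would actually run, and a Python dict stands in for the on-chip cache.

```python
import numpy as np

TILE = 64    # low-res tile edge, picked so a tile fits comfortably in cache
SCALE = 2    # hypothetical 2x upscale factor

def upscale_tile(tile: np.ndarray) -> np.ndarray:
    """Placeholder for the ML upscaler: here just nearest-neighbour 2x."""
    return np.repeat(np.repeat(tile, SCALE, axis=0), SCALE, axis=1)

def sample_texture(texture_lo: np.ndarray, tile_cache: dict, u: float, v: float) -> float:
    """Fetch a texel, lazily upscaling only the tile that contains it."""
    h, w = texture_lo.shape
    x, y = int(u * w * SCALE), int(v * h * SCALE)        # coords in upscaled space
    tx, ty = (x // SCALE) // TILE, (y // SCALE) // TILE  # which low-res tile
    if (tx, ty) not in tile_cache:                       # "cache miss": upscale it now
        lo = texture_lo[ty * TILE:(ty + 1) * TILE, tx * TILE:(tx + 1) * TILE]
        tile_cache[(tx, ty)] = upscale_tile(lo)
    tile = tile_cache[(tx, ty)]
    return tile[y - ty * TILE * SCALE, x - tx * TILE * SCALE]

if __name__ == "__main__":
    texture = np.random.rand(256, 256).astype(np.float32)   # low-res source texture
    cache = {}
    print(sample_texture(texture, cache, 0.4, 0.7), len(cache), "tile(s) upscaled")
```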
 
Why on earth would you not ship the small textures and then, once they're downloaded to the HDD, upscale them just once BEFORE the game starts, one time only?
i.e. it's apparent whoever wrote the article has no actual understanding of what they are doing.

With games like RDR 2 already taking up 150 GB of valuable storage space, it's possible that if nothing is done, game sizes will grow significantly for next gen consoles.

While deduplication of stored data will help, developers will still want to take advantage of the increased resources available to them.

Better compression is obviously one way to accomplish this, but it doesn't come free as more aggressive compression algos are significantly more computationally expensive than what is currently used in games.

In a similar vein, real-time AI upscaling of textures is another way to address this, but at unknown cost. And, on top of that the two approaches aren't mutually exclusive. You could combine real-time AI upscaling with better compression for even greater storage savings.
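To put rough numbers on the storage side (my own assumptions: BC7-class GPU block compression at about 1 byte per texel, and a full mip chain adding roughly a third on top of the base level):

```python
# Rough storage arithmetic under my own assumptions, not figures from the thread:
# BC7-class block compression is ~1 byte per texel, and a full mip chain adds
# roughly a third on top of the base level.
def texture_mib(edge: int, bytes_per_texel: float = 1.0, mips: bool = True) -> float:
    base = edge * edge * bytes_per_texel / (1024 ** 2)
    return base * (4 / 3) if mips else base

if __name__ == "__main__":
    for edge in (4096, 2048, 1024):
        print(f"{edge}x{edge}: ~{texture_mib(edge):.1f} MiB with mips")
```

So shipping 1K sources instead of 4K ones cuts that asset class by roughly 16x before any packaging compression, which is the kind of saving that makes the real-time upscale idea attractive despite the runtime cost.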

But, as mentioned all of this comes at a cost. So, it'll certainly be interesting to see what each company does WRT this problem.

Regards,
SB
 
Devil's advocate says: perhaps due to expensive, hard-to-expand, limited physical storage space on the HDD?
Mate, do you know how slow this image scaling is? I tested it once and a single image took about a minute.
True, my machine isn't the best, so I looked up what someone else got with an NVIDIA Tesla P40 GPU:
one image = 3 seconds.
Yet they want to do this with hundreds of images per frame at 60 fps :LOL: and on weaker hardware, no doubt, as I assume they are not restricting it to $5000 GPUs.
Sure, with any algorithm there's room for improvement, but here we are talking about orders of magnitude of improvement required.
 
I probably came across as pedantic or preachy. I was just trying to be concise out of laziness and cold hands while walking.

I cannot comment on the practicality, but HDD space is the only reason I can see you might do it. Bandwidth is cheap for most users, so I don't think they care about file sizes for distribution; we've certainly already passed 100 GB for a title, and what's a couple of extra gigs between friends?
 
Nothing wrong with pedantry, and I do agree with what you said. It's just that I fail to see how they speed this up enough to be practical.
Also, from what I've seen, for some images it upscaled OK or well, but with others it was crap.
 
Something similar to DLSS. You spend X number of hours/days/weeks training on Y number of GPUs in a server farm. Once you have an acceptable model, you apply that model to a game, and the model is executed in real time. The longer you train it, the better the results will be.

Hence,

In this case, it only works by training the models on very specific sets. One genre of game. There’s no universal texture map. That would be kind of magical. It’s more like if you train it on specific textures it works with those, but it wouldn’t work with a whole different set.

From the interview that was posted above... https://wccftech.com/an-xbox-game-s...-res-textures-to-be-ai-upscaled-in-real-time/

So, similar to DLSS, the best results will come from training an ML model for the assets in each specific game. Potentially even multiple ML models per game.

Regards,
SB
 
AI texture upscaling, technically, IS a form of data compression.

Yeah, I was trying to explain that in a way that more people might understand, i.e. that it saves space and can be combined with other compression techniques that most people are at least somewhat familiar with. In my head, as I typed that, I was going to mention that it's basically just another form of compression, but by the time I finished the paragraph I'd obviously forgotten. :) Getting old sucks. :p

Regards,
SB
 