Digital Foundry Article Technical Discussion [2024]

That's a very pro-IHV attitude that doesn't really address the concern. Throughout all of gaming's history, developers have freely shared ideas and algorithms, and that has driven iterative evolution across the entire industry. For example, Intel shared MLAA as a concept, from which an entire new branch of AA algorithms was spawned. Everything from line drawing to pathfinding to flocking to GI solutions etc. has been free to use and develop.

Irrespective of whether a dominant market player uses its considerable financial advantages to progress its product lines or not, a shift toward keeping one's ideas to oneself for competitive advantage would be seismic and would leave the industry far worse overall.
I guess I am on the proprietary side and not the GPL side.
But the fact is that the market is being pushed forward by proprietary features while the "open source" alternatives lag behind.
x86 is not exactly open source, but it is ahead of the open-source RISC-V.
DirectX isn't open source either, compared to Vulkan.
Unreal Engine 5 is not open source either.
Neither is Unity.
Most games are not open source either.

So the "fear" puzzles me 🤷‍♂️
 
Not sure if things have changed in the industry, and companies just aren't sharing like they used to. When FXAA came out, it seemed like it spread quickly, and implementations appeared all around. Now Guerrilla makes a secret and seemingly leading non-AI upscaler. We really need someone to come out and invest real time into making something open source. FSR kind of sucks. I use DLSS because it's the best option on PC, but I'd 100% prefer a solution that could run on every GPU. Not sure if it's just an oversight in the industry and most companies don't want to invest the resources. Not sure who the good Samaritan would be. Was expecting Microsoft to come out with something, but ... yah.
Is it non-AI? It's not on the regular PS5, so I assume it must leverage machine learning somehow.
 
Is it non-AI? It's not on the regular PS5, so I assume it must leverage machine learning somehow.

I mean, it could be. I'm just making a guess, but they could have trained it for that particular game. I feel like general training, if they wanted to use it across many games, could be a really big investment. Not sure if a game studio would do that, especially since Sony already offers a general solution. Could be wrong though.
 
I checked it out, and it really looks gorgeous on XSX. The game is a great example of how well the id Tech engine fits the current console.
*cries in 2060 6 GB*

I get 20-30ish fps with DLSS Performance at 1080p and the lowest settings. The Series S runs this game much better. Really frustrating.
 
I have tried the Performance tier of GeForce Now and damn, what a horrible experience that was.

I have a very recent 2.5Gb fibre connection and it's very fast and very low latency. I'm not very close to a data center (I live in Italy and the closest data center is in central Europe), but I was expecting at least some stability and a playable experience.

I fire up Hellblade 2 and pretty much no matter what I do I can't reach a locked 60 fps. It's always at 47 to 57 fps, even at 1080p with DLSS Performance. The image quality is pretty good but it stutters constantly, losing packets every 2 seconds.

Sometimes it would take so long to load that it felt like I was playing on a PS3. It's way too slow to get you into the game.
It crashed more than once, and I encountered some huge graphical glitches (one of them in the photo below).

All of this to say, cloud gaming is nowhere near being something that's for everyone.

[Two image attachments]
 
I have tried the Performance tier of GeForce Now and damn, what a horrible experience that was.

I have a very recent 2.5Gb fibre connection and it's very fast and very low latency. I'm not very close to a data center (I live in Italy and the closest data center is in central Europe), but I was expecting at least some stability and a playable experience.

I fire up Hellblade 2 and pretty much no matter what I do I can't reach a locked 60 fps. It's always at 47 to 57 fps, even at 1080p with DLSS Performance. The image quality is pretty good but it stutters constantly, losing packets every 2 seconds.

Sometimes it would take so long to load that it felt like I was playing on a PS3. It's way too slow to get you into the game.
It crashed more than once, and I encountered some huge graphical glitches (one of them in the photo below).

All of this to say, cloud gaming is nowhere near being something that's for everyone.

[Two image attachments]
This is completely the opposite of my experience with GeForce Now, which was damn near pristine. That said, it's been a while since I've tried it out.
 
This is completely the opposite of my experience with GeForce Now, which was damn near pristine. That said, it's been a while since I've tried it out.
That's what I was hoping for. I forgot to mention that I tried it on an LG C1 OLED with both wifi and a LAN cable. The app was so laggy that it was borderline hard to use. I don't like being so negative, but I had very high expectations :cry:
 
That's what I was hoping for. I forgot to mention that I tried it on an LG C1 OLED with both wifi and a LAN cable. The app was so laggy that it was borderline hard to use. I don't like being so negative, but I had very high expectations :cry:

A quick search shows other people having this problem with the native app. It's likely the C1's SoC is not fast enough. Remote game streaming does lower the requirements on the client end massively, but there is still a non-negligible amount of processing needed. Also, we normally don't think of it this way for non-interactive video, but many decoders and decoding pipelines are not designed or optimized for low latency, which matters for game streaming. Android devices, for example, can vary from ~10 ms to ~50 ms+ of decode latency across common devices/SoCs.
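To put those decode latencies in context, some quick back-of-the-envelope arithmetic (using the illustrative ~10 ms and ~50 ms figures above, not measurements of any particular device):

```python
# Rough frames-of-latency math for the decode figures mentioned above.
FRAME_TIME_MS = 1000 / 60  # ~16.7 ms per displayed frame at 60 fps

for decode_ms in (10, 50):
    frames = decode_ms / FRAME_TIME_MS
    print(f"{decode_ms} ms of decode ~= {frames:.1f} frames of added latency at 60 fps")
```

So a slow decoder alone can cost around three frames of input-to-photon latency before the network is even considered.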

Another factor is that the "Performance" tier, I believe, only gives you a 4c/8t slice of a Zen 2 server CPU (so lower clocks and slower memory than desktop) with the RTX 3060 servers. The CPU is even slower if you happen to get bumped down to the older GPUs (such as an RTX 2080 rig).
 
A quick search shows other people having this problem with the native app. It's likely the C1's SoC is not fast enough. Remote game streaming does lower the requirements on the client end massively, but there is still a non-negligible amount of processing needed. Also, we normally don't think of it this way for non-interactive video, but many decoders and decoding pipelines are not designed or optimized for low latency, which matters for game streaming. Android devices, for example, can vary from ~10 ms to ~50 ms+ of decode latency across common devices/SoCs.

Another factor is that the "Performance" tier, I believe, only gives you a 4c/8t slice of a Zen 2 server CPU (so lower clocks and slower memory than desktop) with the RTX 3060 servers. The CPU is even slower if you happen to get bumped down to the older GPUs (such as an RTX 2080 rig).
The stream was 1080p at 60 Hz, 8-bit. It would be pretty bad if a high-end 2021 TV didn't have enough power for that.
If common everyday TVs can't deliver a good cloud gaming experience, what hope does the tech have?

Also, €11 for a 4c/8t CPU and a 3060 is pretty bad value. They clearly want to upsell you to the Ultimate tier.
 
? https://dev.epicgames.com/documentation/en-us/unreal-engine/downloading-unreal-engine-source-code

Unless you are using some GNU-style definition, which doesn't seem relevant to the conversation here.

Is Unreal Engine 5 Open Source?

No, Unreal Engine 5 is not open source. It operates under a source-available license, allowing developers to access and modify the source code, but with restrictions.

Commercial products made with Unreal Engine are subject to royalties, and contributions to the source code must be approved by Epic Games.

Question: Is Unreal Engine Open Source?

Answer:

Unreal Engine, developed by Epic Games, is not open source; it is a proprietary game engine available under a source-available license. This means that while the source code of the engine can be accessed and modified by developers, there are restrictions placed on its use according to a licensing agreement.

UE5 is not open source; it is shared source, or "source available", assuming it follows the model of UE4.
https://en.wikipedia.org/wiki/Source-available_software
 
Right, those answers are with respect to the GNU-style definition. But that has little relevance here, and nothing to do with the topic of sharing knowledge/research. The relevant question in terms of knowledge sharing is: can I learn from what Unreal does and build on it myself? For which I will quote the FAQ:

Can I study and learn from Unreal Engine code, and then utilize that knowledge in writing my own game or competing engine?


Yes, as long as you don’t copy any of the code. Code is copyrighted, but knowledge is free!

I don't mean to go off on a big tangent here, but I highly disagree with the notion that Unreal is contributing at all to any lack of industry advancement and information-sharing. There are lots of things you can complain about with Epic, but Unreal pretty much defines the high road as far as that stuff goes, especially compared to hardware and platform vendors.
 
So the "fear" puzzles me 🤷‍♂️
I don't think you appreciate how we got where we are in gaming then. SIGGRAPH exists to freely share knowledge, and it's 50 years old. All the elite names in game tech have stood up and shared their research and discoveries, and learnt from others who have done so. We know exactly how id Tech 5 works because Carmack told us, freely, and people could take his research into megatextures and use it, advance it, whatever. This is exactly what Knowledge has been for hundreds of years, with scientists doing exactly the same, where Knowledge and Understanding were the aims, not Big Profits.

The free sharing of knowledge isn't at odds with commercial ventures either, and you don't have to put everything in the public domain to satisfy the principle of free knowledge. The cost to implement still has value, so an engine that takes 50 years of free CG knowledge and packages it for developers to use needn't be free to still benefit users and benefit from that past knowledge. I have a choice to either program my own implementation of something like A*, or buy a package someone else has made. I can read up on how to write my own engine, or use an existing one that has a cost because it's the work of hundreds of people over many years. Carmack sharing megatexturing as an idea didn't undervalue his company's work or products.
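To make that concrete, this is roughly what "program my own implementation of something like A*" looks like when the algorithm itself is freely shared knowledge. A minimal, illustrative sketch only; the grid representation, function name and example are mine, not taken from any particular engine or package:

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* on a 4-connected grid where 0 = walkable and 1 = blocked."""
    rows, cols = len(grid), len(grid[0])

    def heuristic(a, b):
        # Manhattan distance: admissible when every move is axis-aligned with cost 1.
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    open_set = [(heuristic(start, goal), start)]  # heap of (f = g + h, node)
    came_from = {}
    g_cost = {start: 0}

    while open_set:
        _, node = heapq.heappop(open_set)
        if node == goal:
            # Walk the parent chain back to the start to recover the path.
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue  # off the grid
            if grid[nxt[0]][nxt[1]] == 1:
                continue  # blocked cell
            new_g = g_cost[node] + 1
            if new_g < g_cost.get(nxt, float("inf")):
                g_cost[nxt] = new_g
                came_from[nxt] = node
                heapq.heappush(open_set, (new_g + heuristic(nxt, goal), nxt))
    return None  # goal unreachable

# Example: route around a wall in a small grid.
grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (0, 2)))
```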

So it's not a choice between commercial and open source. It's both. It's always been both. Scientists and engineers have always preferred knowledge and seen the value in it, and business has operated just fine with those standards. It's only suits and business people who want to lock everything behind patents to monetise it, having no interest in the wider benefits, and who cannot be reasoned with.
 
GPU vendors are still publishing papers at SIGGRAPH, so I don't think that's a big problem. We still get published ideas and algorithms, and sometimes (probably not very optimized) code. This is not new either. For example, GPU vendors don't tell us how their anisotropic filtering works. We can try to observe its behavior, but we don't have the exact algorithms, and people don't seem to have any problem with that.

So if we use DLSS as an example, the idea behind DLSS is simple. Everyone can do that. The key is those weights trained using machine learning. These weights are very valuable, and I do understand why NVIDIA doesn't want to share them.
IMHO ray tracing code is similar. Ray tracing is not a new idea. People know how it works. The key is how to optimize it to run well on specific hardware. This is also valuable, and I don't see why GPU vendors should have to open source it.
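For what it's worth, the broad structure being described is well understood even though the weights aren't. Here is a deliberately toy sketch of a temporal upscaler's outer loop; this is not DLSS or any vendor's actual algorithm, and the fixed blend weight is a stand-in for what a trained network would predict per pixel:

```python
import numpy as np

def reproject(history, motion_vectors):
    """Fetch last frame's output color at the position each pixel came from."""
    h, w, _ = history.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs - motion_vectors[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - motion_vectors[..., 1]).astype(int), 0, h - 1)
    return history[src_y, src_x]

def upscale_frame(low_res, history, motion_vectors, scale=2, blend_weight=0.9):
    """One frame of a toy temporal upscaler.

    low_res:        (h, w, 3) current frame rendered at low resolution
    history:        (H, W, 3) previous high-res output, H = h * scale, W = w * scale
    motion_vectors: (H, W, 2) per-pixel motion in high-res pixels
    blend_weight:   stand-in for the learned per-pixel blend a real ML
                    upscaler would predict from its trained weights
    """
    # Naive spatial upsample of the current frame (nearest neighbour).
    current = np.repeat(np.repeat(low_res, scale, axis=0), scale, axis=1)
    # Temporal reuse: warp last frame's output into the current frame.
    reprojected = reproject(history, motion_vectors)
    # Blend old and new samples. Real implementations also detect and reject
    # stale history (disocclusion, ghosting), and that per-pixel decision is
    # where the trained weights -- the valuable part -- do their work.
    return blend_weight * reprojected + (1.0 - blend_weight) * current
```

Everything above is common knowledge from years of TAA literature; the part that isn't shared is precisely the trained heuristic standing behind `blend_weight`.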

I understand some might think it's better if everyone shares everything. It looks like it should be the case, but it's not necessarily true, because it can quickly turn into a tragedy of the commons, where people would have no incentive to improve existing tech. For example, right now DLSS is much better than the alternatives. If NVIDIA suddenly decided to stop, published all the weights and open-sourced everything, it's quite possible that people would find the current solutions good enough and there'd be no significant improvement anymore. Or someone would take the existing weights and improve them into something that they can sell. This is not necessarily better for everyone.
 
That's what I was hoping for. I forgot to mention that I tried it on an LG C1 OLED with both wifi and a LAN cable. The app was so laggy that it was borderline hard to use. I don't like being so negative, but I had very high expectations :cry:
Maybe when L4S becomes an internet standard... I was born in and still live in a mountainous, very sparsely populated area and never thought I'd have a good internet connection. Now I have 300Mb fibre (600Mb soon), and I even had 1Gb fibre a year ago. Best of all, the speed is real.
 
GPU vendors are still publishing papers at SIGGRAPH, so I don't think that's a big problem.
The concern is the future. Specifically, the video that started this was DF's take on Guerrilla's home-grown upscaling. It's reported by DF as fabulous, but we don't know how it works. We also don't know how Insomniac's 'Temporal Injection' works. Wouldn't it be better for everyone if they shared their techniques? Is the lack of a paper indicative of a more closed future? I dunno. Guerrilla was fabulous in the past, with a landmark retrospective on Killzone's deferred renderer that I think was very informative, and maybe even transformative, for that entire generation of games - many games ended up on a deferred renderer rather than a forward renderer.

So many AA techniques were talked about openly. Ubi's checkerboarding spawned an entire new AA paradigm (that or MLAA or whatever led into the years of so many ...AA acronyms we couldn't follow them all). So how does the new AA system work? Will we get to learn, and if not, why not?
 
So if we use DLSS as an example, the idea behind DLSS is simple. Everyone can do that. The key is those weights trained using machine learning. These weights are very valuable, and I do understand why NVIDIA doesn't want to share them.
Sure, but are we then saying that the ideas behind any of these other things are not valuable? Many techniques that are simple to implement are shared freely, including basically all of the foundation of computer graphics. DLSS is a case where NVIDIA very much could still today - many years later - be profiting primarily off their ML hardware since it is likely not very practical to run on GPUs without similar hardware. Intel could likely do it of course, but I don't think the distance between DLSS and XeSS is really large enough to be a primary driver of why one would buy one or the other.

Of course it's obvious why these companies keep things proprietary; it was obvious with GameWorks and all the modern versions too. But make no mistake, it *does* slow industry progress and it *is* a replication problem for academic research. If I were reviewing an academic paper on a new AA technique, I'm not even sure I could reasonably require the authors to compare against DLSS, because it's not published and has no stable reference implementation to use. There is actually a broader reproducibility crisis that has become more acute due in part to ML and people not publishing enough training data/details. How much you care about this vs the capitalism angle is up to you, but IMO there's no real debate that the behavior explicitly hurts global academic progress.

I understand some might think it's better if everyone shares everything. It looks like it should be the case, but it's not necessarily true, because it can quickly turn into a tragedy of the commons, where people would have no incentive to improve existing tech.
This really reads as some mental gymnastics to justify it to me. We have no real examples of this that I can recall in the graphics industry and tons of counter-examples. People improve things as they need to improve them. They make progress faster if they can start as close to the state of the art as possible and not waste time reimplementing/reinventing things that have already been done. If ML/AI upscaling is truly the future of the field, we absolutely need open reference implementations, not platform-specific black boxes.
 
I'm watching this before the DF interview

He calls the PS5 GPU an RDNA 2 GPU. Wasn't this one of the things people questioned? Wasn't there some discussion on this forum about whether the PS5 was RDNA 1 or 2?
 