Digital Foundry Article Technical Discussion [2025]

GOWR, at least according to Jetpack, compiles every possible shader - no QA playtesters gathering up PSOs. What I'm wondering is: if this is so thorough, why isn't everyone doing it? Does GOWR just have a more manageable number of shaders, whereas with other games this would be a 30-minute wait? Dunno.
Yes, exactly this.

It's actually unbelievable how "console-like" in consistency Ragnarok is on PC. It's just pristinely smooth compared to most games. It feels like there are no edge cases here... it just works.

Give a player the damn option to pre-compile EVERYTHING possible. Or skip it if they want. Win/Win.
 
TBH I've never understood why there are so many shaders. Surely a handful would cover most of the materials you actually want, with parameters adjusting their look?

Isn’t it because every parameter value creates a new permutation that has to be pre-compiled? Imagine if CPUs worked this way. Sheesh.
 
GOWR, at least according to Jetpack, compiles every possible shader - no QA playtesters gathering up PSOs. What I'm wondering is: if this is so thorough, why isn't everyone doing it? Does GOWR just have a more manageable number of shaders, whereas with other games this would be a 30-minute wait? Dunno.
I think they did both.

Very early on we decided not to just have QA play the game and accumulate PSOs that way, then ship some pre-known set and hope in the wild that players don't look off into a corner. We did the full build offline and created the PSOs offline, so all of the data is known beforehand in the pipeline for us, and it took us a substantial amount of time to get that right.
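For readers curious what the loading side of that approach can look like, here is a minimal sketch, assuming the offline build emits a manifest of pipeline keys. The PsoKey layout, the pso_manifest.bin file, and createPipelineFromKey are hypothetical stand-ins for illustration, not Santa Monica's actual code:

```cpp
#include <cstdint>
#include <fstream>
#include <iostream>
#include <vector>

// Hypothetical: one entry per pipeline the offline build determined the game can ever request.
struct PsoKey {
    uint64_t vertexShaderHash;
    uint64_t pixelShaderHash;
    uint64_t renderStateHash;
};

// Stand-in for the real API call (e.g. CreateGraphicsPipelineState / vkCreateGraphicsPipelines).
void createPipelineFromKey(const PsoKey& /*key*/) { /* compile and add to the runtime cache */ }

int main() {
    // Read the manifest produced offline, then compile everything behind a
    // progress indicator before gameplay ever starts.
    std::ifstream manifest("pso_manifest.bin", std::ios::binary);
    std::vector<PsoKey> keys;
    for (PsoKey k{}; manifest.read(reinterpret_cast<char*>(&k), sizeof(k));)
        keys.push_back(k);

    for (size_t i = 0; i < keys.size(); ++i) {
        createPipelineFromKey(keys[i]);
        if (i % 100 == 0)
            std::cout << "Compiling PSOs: " << (100 * i / keys.size()) << "%\r" << std::flush;
    }
    std::cout << "\n" << keys.size() << " pipelines ready before the first frame\n";
}
```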

Shader compilation stutters seem like a problem that can be fixed rather easily if proper care is given to that step. Maybe not fully fixed, but it can be minimized enough not to be intrusive. It just baffles me that you start the game, see huge frame time spikes at the start, and the developer either didn't catch that or maybe didn't bother. Sure, it sounds like a lengthy process, but the experience is so dramatically improved that it should be a priority.
 
Isn’t it because every parameter value creates a new permutation that has to be pre-compiled? Imagine if CPUs worked this way. Sheesh.
Not exactly -- there are much worse performance tradeoffs for branching, so (particularly older) shaders are heavily annotated to create new permutations that can be organized and dispatched together, rather than one big shader with all of the code. We're moving away from this to an extent, but there are still performance tradeoffs and authoring challenges. (Sometimes when you guys post a "bad" GPU occupancy graph of a game with fewer stutters, you're probably looking at the other side of this tradeoff.)
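To make the permutation-versus-branching tradeoff concrete, here is a toy sketch (the feature names and "blob" strings are invented): a single uber-shader carries all features behind runtime branches, while the permutation approach pre-compiles one variant per feature combination, which is why the count grows as 2^N in the number of toggles.

```cpp
#include <cstdint>
#include <iostream>
#include <string>
#include <unordered_map>

// Invented artist-facing toggles for one material.
enum MaterialFeature : uint32_t {
    kNormalMap = 1u << 0,
    kEmissive  = 1u << 1,
    kAlphaTest = 1u << 2,
    kSkinning  = 1u << 3,
};

int main() {
    constexpr uint32_t kFeatureCount = 4;

    // Uber-shader route: one program; unused features still cost register
    // pressure and occupancy on the GPU even when the branches are never taken.
    std::cout << "Uber-shader: 1 program, " << kFeatureCount << " runtime branches\n";

    // Static-permutation route: one program per feature combination,
    // organized and dispatched separately. Count grows as 2^N.
    std::unordered_map<uint32_t, std::string> variants; // key = feature bitmask
    for (uint32_t key = 0; key < (1u << kFeatureCount); ++key)
        variants[key] = "blob_" + std::to_string(key);  // placeholder for a compiled shader

    std::cout << "Static permutations: " << variants.size() << " programs to pre-compile\n"; // 16
}
```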
 

New vid, and below I've timestamped an answer that I agree with to the extreme.

Eh, there's a reason why people are criticizing his approach. Framing the lack of real criticism as "outrage and anger" skirts the root of people's criticism, I think. Richard says "I find it hard to get angry about a product you don't have to buy," but in the same vein finds it easy to get excited about a product you don't have to buy (RTX 4090, RTX 3080, etc.)... Looking at the whole segment on value in the RTX 5080 review, the MSRP was used to compare the value of products... Unfortunately, we knew the MSRP was a lie, so that comparison doesn't hold water. Even more frustrating is that in today's market, consumers are faced with the reality of determining value based on actual market prices, and this should be kept in mind when reviewing a product... On that front, DF often misses the mark.

Regardless, I'm not even sure if DF buys their GPUs in the first place, as surely they receive units for review well ahead of release. So when Nvidia offers no real improvement in $/frame vs. the options available on the market today, I can see how he would be apathetic. I'm not even going to delve into the fact that DF has gotten early access to GPU previews since Ampere, which is undoubtedly due to their generally positive coverage of the products. If you watch all the DF Nvidia GPU review videos since the RTX 2000 series and weigh whether each is positive or negative, you'll find that the reviews skew towards the positive end. I mean, look at the RTX 3080 Ti review video title, "Big Performance, Big Price," on a product that jacked up the price by 71% and the performance by 12% over the 3080, which launched at $699. In fact, I think this is three generations in a row where their preview coverage has served as part of the start of the marketing cycle for new GPU releases. This access is often not given to other, more experienced and frankly more robust GPU reviewers.

I like Digital Foundry for all sorts of content, but GPU reviews are not one of them, as there is a visible conflict of interest. Then again, I don't give much credence to the opinions of those who do not pay for their products; this isn't really specific to DF but applies to a whole bunch of reviewers in general.
 
Eh, there's a reason why people are criticizing his approach. Framing the lack of real criticism as "outrage and anger" skirts the root of people's criticism, I think. Richard says "I find it hard to get angry about a product you don't have to buy," but in the same vein finds it easy to get excited about a product you don't have to buy (RTX 4090, RTX 3080, etc.)... Looking at the whole segment on value in the RTX 5080 review, the MSRP was used to compare the value of products... Unfortunately, we knew the MSRP was a lie, so that comparison doesn't hold water. Even more frustrating is that in today's market, consumers are faced with the reality of determining value based on actual market prices, and this should be kept in mind when reviewing a product... On that front, DF often misses the mark.

Regardless, I'm not even sure if DF buys their GPUs in the first place, as surely they receive units for review well ahead of release. So when Nvidia offers no real improvement in $/frame vs. the options available on the market today, I can see how he would be apathetic. I'm not even going to delve into the fact that DF has gotten early access to GPU previews since Ampere, which is undoubtedly due to their generally positive coverage of the products. If you watch all the DF Nvidia GPU review videos since the RTX 2000 series and weigh whether each is positive or negative, you'll find that the reviews skew towards the positive end. I mean, look at the RTX 3080 Ti review video title, "Big Performance, Big Price," on a product that jacked up the price by 71% and the performance by 12% over the 3080, which launched at $699. In fact, I think this is three generations in a row where their preview coverage has served as part of the start of the marketing cycle for new GPU releases. This access is often not given to other, more experienced and frankly more robust GPU reviewers.

I like Digital Foundry for all sorts of content, but GPU reviews are not one of them, as there is a visible conflict of interest. Then again, I don't give much credence to the opinions of those who do not pay for their products; this isn't really specific to DF but applies to a whole bunch of reviewers in general.
I don't agree with your basic premise that there is a conflict of interest.

With regards to the specific points you bring up on the way to that assumption:
- We may have to deal with scarcity for a while. We'll see how it goes. You can't say the MSRP is a lie just 4 days after launch.
- I see no reason to assume DF gets cards significantly earlier than other reviewers, if at all.
- The price increase on the 3080Ti is right there prominently in the title of the video. DF generally hits the same beats as other reviewers. The negatives are invariably mentioned.
- But then they do more. They go deeper into image quality detail than anyone else. I'm not sure what you're looking for from 'more robust GPU reviewers' but I think they're actually showing others a thing or two.
- I don't know how anyone could seriously consider it important that reviewers should pay for products out of their own pocket. For all sorts of reasons. You want reviewers to only review products they'd buy? How would they know what to get? You want to force reviewers to pay for bad products? Have you thought about this, like at all?
 
Eh, there's a reason why people are criticizing his approach. Framing the lack of real criticism as "outrage and anger" skirts the root of people's criticism, I think. Richard says "I find it hard to get angry about a product you don't have to buy," but in the same vein finds it easy to get excited about a product you don't have to buy (RTX 4090, RTX 3080, etc.)... Looking at the whole segment on value in the RTX 5080 review, the MSRP was used to compare the value of products... Unfortunately, we knew the MSRP was a lie, so that comparison doesn't hold water. Even more frustrating is that in today's market, consumers are faced with the reality of determining value based on actual market prices, and this should be kept in mind when reviewing a product... On that front, DF often misses the mark.

Regardless, I'm not even sure if DF buys their GPUs in the first place, as surely they receive units for review well ahead of release. So when Nvidia offers no real improvement in $/frame vs. the options available on the market today, I can see how he would be apathetic. I'm not even going to delve into the fact that DF has gotten early access to GPU previews since Ampere, which is undoubtedly due to their generally positive coverage of the products. If you watch all the DF Nvidia GPU review videos since the RTX 2000 series and weigh whether each is positive or negative, you'll find that the reviews skew towards the positive end. I mean, look at the RTX 3080 Ti review video title, "Big Performance, Big Price," on a product that jacked up the price by 71% and the performance by 12% over the 3080, which launched at $699. In fact, I think this is three generations in a row where their preview coverage has served as part of the start of the marketing cycle for new GPU releases. This access is often not given to other, more experienced and frankly more robust GPU reviewers.

I like Digital Foundry for all sorts of content, but GPU reviews are not one of them, as there is a visible conflict of interest. Then again, I don't give much credence to the opinions of those who do not pay for their products; this isn't really specific to DF but applies to a whole bunch of reviewers in general.
DF are like the elite of technology coverage, and it's usually only nVidia who offers something different. I can't discern the complicated from the simple.

But ideally there should be more competition offering alternatives. If you can't go with AI, use something different that offers equal or better results, or bring a new technology to the table like nVidia did with G-Sync and FG, which are now universal.

As for DF reviews, it might depend on the product. I loved DF's coverage of XeSS with examples; that's why I got an Arc.

DF reviews also show the power consumption of several cards. When I see a GPU in their reviews with a greatly optimised power-consumption/performance ratio, that gets my attention, provided it has good technologies to support that efficiency (i.e. DLSS, XeSS, FG, etc.).
 
I don't agree with your basic premise that there is a conflict of interest.

With regards to the specific points you bring up on the way to that assumption:
- We may have to deal with scarcity for a while. We'll see how it goes. You can't say the MSRP is a lie just 4 days after launch.
AIBs have come out to say that the current MSRP is too tight and have claimed it's "equivalent to a charity". Source. In fact, I expect pricing to be highly elevated for the whole generation, as Nvidia will tightly control the supply. The margins they gain by redirecting those chips to the datacenter are far higher than from selling them to gaming customers.
- I see no reason to assume DF gets cards significantly earlier than other reviewers, if at all.
Yeah, comments from other reviewers would contradict this...
- The price increase on the 3080Ti was right there in the title. DF generally hits the same beats as other reviewers. The negatives are invariably mentioned.
I don't quite agree. The title suggests an equivalence between the price increase and the performance increase which we know is not true. A more correct title would be "Big Price, Small Performance Increase".
- But then they do more. They go deeper into image quality detail than anyone else. I'm not sure what you're looking for from 'more robust GPU reviewers' but I think they're actually showing others a thing or two.
Yes, they do, but others test more games, examine the history and pricing of the x80 series, and do more comprehensive testing - like GN's work on cooling dynamics both inside and outside of a case, 12VHPWR testing to ensure reliability, transient testing, etc.
- I don't know how anyone could seriously consider it important that reviewers should pay for products out of their own pocket. For all sorts of reasons. You want reviewers to only review products they'd buy? How would they know what to get? You want to force reviewers to pay for bad products? Have you thought about this, like at all?
Yes, I've seriously thought about it, and there are lots of reviewers who pay for their products. Some, even after receiving a review sample, will purchase the product and give it away for free to their userbase. It's a lot different when you have skin in the game. As it stands, there's an uneasy dynamic between reviewers and manufacturers. To continue receiving products, the manufacturer has to be satisfied with your review. We saw this play out with HUB, where they were blacklisted by Nvidia for a period of time. We've seen this with other manufacturers who only send products to people likely to give favorable reviews, to control the narrative. There's a lot of money at stake with these products, and you can't have people torpedoing your sales with bad reviews. So yes, reviewers should pay for their products to address the coercive nature of this relationship.
 
AIBs have come out to say that the current MSRP is too tight and have claimed it's "equivalent to a charity". Source. In fact, I expect pricing to be highly elevated for the whole generation, as Nvidia will tightly control the supply. The margins they gain by redirecting those chips to the datacenter are far higher than from selling them to gaming customers.
You can't call it a lie yet. No matter what rumors you've been listening to or what you expect.
Yea, comments from other reviewers would contradict this....
Who is saying DF is getting preferential treatment? And also, what could be the motivation for such claims?
I don't quite agree. The title suggests an equivalence between the price increase and the performance increase which we know is not true. A more correct title would be "Big Price, Small Performance Increase".
It isn't actually a case of correct or not correct. It's about emphasis and tone. Either way, they thought it was worth highlighting in the actual title itself.
Yes, they do, but others test more games, examine the history and pricing of the x80 series, and do more comprehensive testing - like GN's work on cooling dynamics both inside and outside of a case, 12VHPWR testing to ensure reliability, transient testing, etc.
Different choices, then, and some of those end up in standalone articles, not the review - appropriately so. But that's just more in line with what you're looking for, not necessarily 'more robust'. DF spends more of their time on image quality, and it's good someone does. I reckon there are more than enough videos that actually take the time to speak a list of percentage differences in 25+ games out loud.
Yes, I've seriously thought about it, and there are lots of reviewers who pay for their products. Some, even after receiving a review sample, will purchase the product and give it away for free to their userbase. It's a lot different when you have skin in the game. As it stands, there's an uneasy dynamic between reviewers and manufacturers. To continue receiving products, the manufacturer has to be satisfied with your review. We saw this play out with HUB, where they were blacklisted by Nvidia for a period of time. We've seen this with other manufacturers who only send products to people likely to give favorable reviews, to control the narrative. There's a lot of money at stake with these products, and you can't have people torpedoing your sales with bad reviews. So yes, reviewers should pay for their products to address the coercive nature of this relationship.
Yeah, no, this isn't sustainable at all. HUB doesn't buy the 50 motherboards in their line-up either. That'd be insane. But go ahead and give extra props to reviewers who buy their stuff if you feel like it; it's meaningless to me. The integrity of the reviews themselves and the reputation of the media organization are the skin in the game.
 
You can't call it a lie yet. No matter what rumors you've been listening to or what you expect.
I can call it a lie because I can't purchase the product at MSRP. When Apple launches their latest MacBook, iPhone, etc. and lists the MSRP, I can purchase it at MSRP on day 1, if not shortly thereafter. The same is true for a whole host of other manufacturers.
Who is saying DF is getting preferential treatment? And also, what could be the motivation for such claims?
Now you're changing the content of the comment. You said "I see no reason to assume DF gets cards significantly earlier than other reviewers, if at all". However, other reviewers, specifically smaller ones, have indicated that they received review samples late, leading to a rushed review cycle for them. Some didn't even receive samples, even though Nvidia indicated that they might. Meanwhile, DF has GPUs early enough to release a preview weeks in advance...
It isn't actually a case of correct or not correct. It's about emphasis and tone. Either way, they thought it was worth highlighting in the actual title itself.

Different choices, then, and some of those end up in standalone articles, not the review - appropriately so. But that's just more in line with what you're looking for, not necessarily 'more robust'. DF spends more of their time on image quality, and it's good someone does. I reckon there are more than enough videos that actually take the time to speak a list of percentage differences in 25+ games out loud.
It's certainly more robust by definition, as other reviewers cover far more areas of testing than just image quality, FPS, and light power testing.
Yeah, no, this isn't sustainable at all. HUB doesn't buy the 50 motherboards in their line-up either. That'd be insane. But go ahead and give extra props to reviewers who buy their stuff if you feel like it; it's meaningless to me. The integrity of the reviews themselves and the reputation of the media organization are the skin in the game.
I do not recall framing my comments to account for what's meaningful to you. You're free to judge by your own metrics, as you are your own individual. I'm merely pointing out what is obvious, and I understand if it's an uncomfortable topic for you to discuss. As for HUB, they've taken a different approach, which is to play chummy with the AIBs. It's why you always see sponsored content from AIBs on their channel. The benefit of that is that it solves their issue with major product reviews like the GPU and CPU. The downside is that it makes the reviews of products from their AIB sponsors subject to the same coercive relationship I referenced earlier. Every reviewer has their bias/slant/conflict of interest. The smart ones inform the audience so the audience can apply the appropriate mental offset to interpret their comments/praise/criticism. The not-so-smart ones pretend it doesn't exist.
 
Amen. It baffles me how people can be so angry about new technologies when the alternative is … nothing.
That's the issue with AMD: they are under huge pressure, and Intel to a lesser extent, too. AMD is nowhere near nVidia in many aspects. The best thing Intel did was opening XeSS to everyone's GPUs. But other than that, they are like a very little nVidia, a duck against a peregrine falcon, no chance.

If any of them can come up with power-efficient, huge gains in raster, to give some extra life to rasterization and faster base framerates, without needing a transformer the size of your room to run a game, or come up with a whole new way to make a GPU...

They can take some time; gamers have a huge backlog of timeless titles to play in the meantime (Company of Heroes 2, Monster World, etc.), and even demanding modern games run decently well on AMD and Intel hardware as long as you don't enable path tracing.
 
I can call it a lie because I can't purchase the product at MSRP. When Apple launches their latest MacBook, iPhone, etc. and lists the MSRP, I can purchase it at MSRP on day 1, if not shortly thereafter. The same is true for a whole host of other manufacturers.
For GPUs this has been happening for as long as I can recall - since long before any AI or crypto craziness, long before NVIDIA had 90% market share. IDK why it's done this way when, like you said, many other tech products are generally available when they launch. But by your logic, many if not most GPU launch reviews were a lie.

That said I'm not feeling too confident about Blackwell availability but it's much too early to draw any conclusions.
 
For GPUs this has been happening for as long as I can recall - since long before any AI or crypto craziness, long before NVIDIA had 90% market share. IDK why it's done this way when, like you said, many other tech products are generally available when they launch. But by your logic, many if not most GPU launch reviews were a lie.
Not at all; many reviewers accounted for current real-world pricing of competing products when calculating cost per frame. The 5090, and certainly the 5080, will not be available at MSRP anytime soon for most people.
That said I'm not feeling too confident about Blackwell availability but it's much too early to draw any conclusions.
You shouldn't be feeling confident at all. After EVGA left, it was clear to see the game Nvidia was playing. They started introducing competing reference cards while slowly scaling up their internal manufacturing capabilities. They started to attack the margins of AIBs by charging them in a way that makes the MSRP hard to hit.

Look, it's fine if Nvidia wants to squeeze AIBs out and become like Apple; however, they need to commit. Apple can launch and sell 20 million iPhones in a year. If you want an iPhone, MacBook, or AirPods at launch, you'll get one pretty much right away. They launch an iPhone every year and get it right every year. None of this paper-launch stuff like Nvidia, who only ships like 700 5090s to Micro Center for all of the USA. Nvidia has 18-24 months between product releases, and they get it wrong every single time.
 
You shouldn't be feeling confident at all. After EVGA left, it was clear to see the game Nvidia was playing. They started introducing competing reference cards while slowly scaling up their internal manufacturing capabilities. They started to attack the margins of AIBs by charging them in a way that makes the MSRP hard to hit.
Not sure if you are new to the GPU scene, but reference cards are nothing new; in fact, all IHVs have made them when introducing new products. IHV reference models have always been seen as setting the MSRP price floor. AIBs can choose to price their products as they please, anywhere at or above the MSRP.

AIBs want to earn as much as they can, and it's not because Nvidia is attacking their margins. Why do you think you see such a wide price range for AIB products if they are all getting the same "expensive" memory from Nvidia at prices much cheaper than the open market? It's not because their margins are tight; rather, they want to charge a price based on the value they place on their product.

I feel very confident. You should feel less paranoid about your perception of what Nvidia is doing and maybe be more concerned about AIBs pricing their products based on market demand/hype, and whether that is a bad thing for a corporation to do.
 
Not sure if you are new to the GPU scene, but reference cards are nothing new; in fact, all IHVs have made them when introducing new products. IHV reference models have always been seen as setting the MSRP price floor. AIBs can choose to price their products as they please, anywhere at or above the MSRP.

AIBs want to earn as much as they can, and it's not because Nvidia is attacking their margins. Why do you think you see such a wide price range for AIB products if they are all getting the same "expensive" memory from Nvidia at prices much cheaper than the open market? It's not because their margins are tight; rather, they want to charge a price based on the value they place on their product.

I feel very confident. You should feel less paranoid about your perception of what Nvidia is doing and maybe be more concerned about AIBs pricing their products based on market demand/hype, and whether that is a bad thing for a corporation to do.
What makes you so sure that AIBs' margins aren't tight?

https://videocardz.com/newz/nvidias...ure-on-board-partners-msrp-feels-like-charity

This is like the tenth time I have heard this story in the last few years.

I have a pretty nuclear take tho: I think AIBs should not even exist anymore. Nvidia has the resources to make all the cards they want, and the reference cards are pretty much the best.
 
Not at all; many reviewers accounted for current real-world pricing of competing products when calculating cost per frame. The 5090, and certainly the 5080, will not be available at MSRP anytime soon for most people.
I'm talking about the launch reviews that are written before official availability. It would be impossible to use real-world pricing when the card being reviewed can't be purchased. You could compare the MSRP of the new card to the street prices of existing cards, and I do like it when reviewers do this.
 
Yeah, I figured. This has been a problem since the days of UE3 as far as I'm aware.
I mean shader JIT stutter was a problem even before DX12/Vulkan. In fact it was one of the main reasons that motivated making PSO creation explicit in those APIs because in DX11 and previous there was basically no way for an application to do anything about it even if it wanted to. Obviously there have been downsides to the amount of generality/leeway that was exposed in the newer APIs, but it is not an issue that is unique to them (although the name "pipeline state object" is from them).
TBH I've never understood why there are so many shaders. Surely a handful would cover most of the materials you actually want, with parameters adjusting their look?
The thing that makes it hard to manage is the combinatorial explosion of conceptually separate things that are required by GPUs to be baked down into unique inline programs. For instance in the Fortnite case, you might have a few materials on your fancy new skin (of which there are at least a few hundred/thousand), and for each of those you need a version for a few different rendering situations (base pass, z prepass, shadow, etc). You may need a few versions to handle differences in input (vertex streams, texture layouts and encoding, virtual textures vs. not virtual) and output formats (gbuffer layouts and BRDFs), and then you probably have a few factors related to scalability settings. And of course, the ugliest bit of all is that the graphics pipeline state you need to represent (say, render target formats, blend modes, rasterizer winding orders, depth modes, etc) further multiplies into this matrix so that the driver could have the flexibility to bake shader code based on those things, even if in practice it never does (but a future theoretical driver might). You can see how it's not very hard to get to hundreds of thousands of potential permutations when you multiply these together.
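To put rough numbers on that multiplication, a back-of-the-envelope sketch follows; every factor count below is made up purely for illustration, not taken from Fortnite or any particular engine.

```cpp
#include <cstdint>
#include <iostream>

int main() {
    // Illustrative counts only - real engines differ widely.
    const uint64_t materials      = 500; // authored materials / skins
    const uint64_t renderPasses   = 4;   // base pass, depth prepass, shadow, velocity
    const uint64_t vertexFormats  = 3;   // static, skinned, extra streams
    const uint64_t outputLayouts  = 2;   // gbuffer layouts / BRDF variants
    const uint64_t qualityLevels  = 3;   // scalability settings that change shader code
    const uint64_t pipelineStates = 8;   // RT formats, blend modes, depth modes the driver may specialize on

    const uint64_t permutations = materials * renderPasses * vertexFormats
                                * outputLayouts * qualityLevels * pipelineStates;
    std::cout << permutations << " potential pipelines\n"; // 288000 with these made-up numbers
}
```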

Aside: that last bit is probably the main case where you can argue that DX12/Vulkan went too far and should have been more aggressive on nailing down state that IHVs are not allowed to use to affect shader compilation. Recall however that at the time a big goal of these APIs was to work on previous hardware that was never designed with those constraints in mind. It's hard to perfectly guess how much longer it would have taken for these APIs to be accepted (if they ever were at all) if they had not worked on existing hardware, but I think the concern was probably valid, even if it's retrospectively a thorn.

Now in practice, the artists are not creating separate materials for even those fancy new skins or whatever - they are messing with parameters in some conceptual big base material. But because of how GPUs work, you are often punished heavily in performance if you include extra code - even if it is never called - or parameters in a shader, hence needing to separate out many of these parameters into static permutations.

As I noted in another thread - in modern games for the part of the engine that is sensitive to shader explosions you should not think about BRDFs and so on... it's really the code that is effectively doing procedural generation by combining textures, doing some math, and providing a variety of artist tweakable parameters that is the hard to handle bits, but it is also the part that gives significant power to the artists and allows a lot of material variety. Obviously it's a PITA as graphics engineers and I would love if for instance Unreal didn't have quite as powerful a material editor, but on the other hand who can say how much more restricted and graphically uniform games would be if it didn't. 🤷‍♂️

Honestly though I'm sort of curious why no one has done a "fossilize" type thing for Windows yet. Obviously it's ideal if games gather their own PSO lists, but there's also no reason not to crowd-source it so that the damage is minimized across the player base even for games that don't.
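As a rough sketch of what a Fossilize-style crowd-sourcing layer might do with those captures (the file names, hash type, and mergeCaptures helper are invented for illustration): each player's session appends the pipeline hashes it actually hit, and a merge step de-duplicates them into a seed list other machines can precompile from before they ever render a frame.

```cpp
#include <cstdint>
#include <fstream>
#include <iostream>
#include <set>
#include <string>
#include <vector>

// Invented: in a real capture layer this would hash the full pipeline creation
// parameters intercepted at the API boundary.
using PipelineHash = uint64_t;

// Merge many per-player capture files into one de-duplicated seed list.
std::set<PipelineHash> mergeCaptures(const std::vector<std::string>& captureFiles) {
    std::set<PipelineHash> merged;
    for (const auto& path : captureFiles) {
        std::ifstream in(path, std::ios::binary);
        for (PipelineHash h{}; in.read(reinterpret_cast<char*>(&h), sizeof(h));)
            merged.insert(h); // duplicates across players collapse here
    }
    return merged;
}

int main() {
    // Each file is whatever one player's session happened to encounter.
    const std::vector<std::string> captures = {"player_a.bin", "player_b.bin", "player_c.bin"};
    const auto seedCache = mergeCaptures(captures);
    std::cout << "Seed cache: " << seedCache.size()
              << " unique pipelines for other machines to precompile\n";
}
```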
 
I mean shader JIT stutter was a problem even before DX12/Vulkan. In fact it was one of the main reasons that motivated making PSO creation explicit in those APIs because in DX11 and previous there was basically no way for an application to do anything about it even if it wanted to. Obviously there have been downsides to the amount of generality/leeway that was exposed in the newer APIs, but it is not an issue that is unique to them (although the name "pipeline state object" is from them).

The thing that makes it hard to manage is the combinatorial explosion of conceptually separate things that are required by GPUs to be baked down into unique inline programs. For instance in the Fortnite case, you might have a few materials on your fancy new skin (of which there are at least a few hundred/thousand), and for each of those you need a version for a few different rendering situations (base pass, z prepass, shadow, etc). You may need a few versions to handle differences in input (vertex streams, texture layouts and encoding, virtual textures vs. not virtual) and output formats (gbuffer layouts and BRDFs), and then you probably have a few factors related to scalability settings. And of course, the ugliest bit of all is that the graphics pipeline state you need to represent (say, render target formats, blend modes, rasterizer winding orders, depth modes, etc) further multiplies into this matrix so that the driver could have the flexibility to bake shader code based on those things, even if in practice it never does (but a future theoretical driver might). You can see how it's not very hard to get to hundreds of thousands of potential permutations when you multiply these together.

Aside: that last bit is probably the main case where you can argue that DX12/Vulkan went too far and should have been more aggressive on nailing down state that IHVs are not allowed to use to affect shader compilation. Recall however that at the time a big goal of these APIs was to work on previous hardware that was never designed with those constraints in mind. It's hard to perfectly guess how much longer it would have taken for these APIs to be accepted (if they ever were at all) if they had not worked on existing hardware, but I think the concern was probably valid, even if it's retrospectively a thorn.

Now in practice, the artists are not creating separate materials for even those fancy new skins or whatever - they are messing with parameters in some conceptual big base material. But because of how GPUs work, you are often punished heavily in performance if you include extra code - even if it is never called - or parameters in a shader, hence needing to separate out many of these parameters into static permutations.

As I noted in another thread - in modern games for the part of the engine that is sensitive to shader explosions you should not think about BRDFs and so on... it's really the code that is effectively doing procedural generation by combining textures, doing some math, and providing a variety of artist tweakable parameters that is the hard to handle bits, but it is also the part that gives significant power to the artists and allows a lot of material variety. Obviously it's a PITA as graphics engineers and I would love if for instance Unreal didn't have quite as powerful a material editor, but on the other hand who can say how much more restricted and graphically uniform games would be if it didn't. 🤷‍♂️

Honestly though I'm sort of curious why no one has done a "fossilize" type thing for Windows yet. Obviously it's ideal if games gather their own PSO lists, but there's also no reason not to crowd-source it so that the damage is minimized across the player base even for games that don't.
Exactly. Why is there not more of a push from people within the industry for this? Studios could use the functionality to crowdsource PSOs so much more quickly. Before the game is launched, deploy a build for the team to play with the specific purpose of building a cache, collecting as many PSOs as possible; then release the game and have the community very quickly fill in any remaining cache entries that were missed. Windows needs this functionality! IMO you can't really call yourself a gaming-oriented OS if you don't have it. Not after SteamOS. It would be incredible to have DX9-12 games run through a layer which does this, and the community would very quickly solve this issue for a huge library of titles. It's the way to go about fixing this issue for games already released.

Question though: DX12/Vulkan PSOs were meant to solve the problems with compilation stuttering - knowing all the state ahead of time and being able to switch states more quickly - but that was only ever going to work if PSOs were created at load time, right? They had to have known this. Was the expectation from the beginning that developers would create all their PSOs offline? I think more attention should have been put on this issue right from the start. It's quite concerning that it's a decade later and we're still waiting for some devs to figure this stuff out.

At this point, I want the Fossilize "solution" because we simply can't rely on devs to a) do it in the first place or b) do a thorough job of it, and I'd like the community to at least be able to mitigate it.
 