Digital Foundry Article Technical Discussion [2022]

I don't think so. Wouldn't it be smarter to replace the Series S and X with an SOC of a similar transistor count to the Series X but using a newer architecture? If the Series X is 15B transistors at 7nm and, say, the replacement is 3nm with 15B transistors, it should be cheaper to manufacture, and since it's all new tech you take advantage of the uplift in specs. ...
Major problem here is that the cost per transistor is almost the same. It doesn't get cheaper like before. So you could shrink the size of the APU, but that might also increase error rates. With a bit of luck you might still get the per-console price down because the cooling solution can be smaller, the power supply can be smaller, ... but the development costs for such a shrink ... I don't know whether it is ever worth it if the price of the APU stays the same.
Also it might get a bit tricky to connect the smaller chip to the GDDR6 memory: as the chip is shrunk it might get too small, so they might have to develop a new memory connection for it (which also costs).

In the past, shrinking the chip was always cheaper. Well, except for the original Xbox, as Nvidia didn't pass the shrink advantage on to MS in terms of the per-chip price. But that has changed over the last few years. Shrinking will no longer give you a cheaper chip, and that is a huge bummer for a console shrink.

Also, newer architectures almost never give you more performance per transistor. They instead give more performance per clock, with more transistors used to get faster. So a newer architecture with the same transistor count normally wouldn't work (except if they did something really wrong with the current transistors ... and I don't think so).

But I guess with the current generation we will see a more iterative console cycle, as the main bottleneck (the CPU) is now gone and they can build on that.
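To put rough numbers on that (all figures below are invented, just to illustrate the relationship between wafer price, die size and yield), a quick sketch of why a shrink no longer automatically means a cheaper chip:

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Standard approximation for gross dies per wafer (ignores scribe lines)."""
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(wafer_cost: float, die_area_mm2: float,
                      defects_per_mm2: float, wafer_diameter_mm: float = 300) -> float:
    """Wafer cost divided by yielded dies, using a simple Poisson yield model."""
    gross = dies_per_wafer(wafer_diameter_mm, die_area_mm2)
    yield_rate = math.exp(-defects_per_mm2 * die_area_mm2)
    return wafer_cost / (gross * yield_rate)

# Hypothetical numbers: a ~360 mm^2 console APU on a mature node vs. the same
# design shrunk to ~200 mm^2 on a newer node whose wafers cost almost twice as much.
old = cost_per_good_die(wafer_cost=9000, die_area_mm2=360, defects_per_mm2=0.001)
new = cost_per_good_die(wafer_cost=17000, die_area_mm2=200, defects_per_mm2=0.001)
print(f"mature node: ~${old:.0f}/good die, newer node: ~${new:.0f}/good die")
```

With the old pricing model (wafer cost roughly flat between nodes) the smaller die would have been dramatically cheaper; once the wafer itself nearly doubles in price, most of that saving disappears, which is the point being made above.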
 
Major problem here is that the cost per transistor is almost the same. It doesn't get cheaper like before. ...
And as 5nm and 3nm become the commercial cutting edge and big customers like Apple transition away from 7nm, the freed-up capacity, combined with the 7nm facilities under construction coming online, will make 7nm much cheaper. There are two parts to fab costs: the first is materials and process, and the second is capacity and availability, i.e. supply and demand.

As supply of 7nm exceeds demand it becomes cheaper. Demand for 5nm and 3nm will be high for some years.
 
Major problem here is that the cost per transistor is almost the same. It doesn't get cheaper like before. ...

It would depend on how popular the node they settle on is. Right now 7nm is expensive because everyone is on it; now everyone is transitioning to 6nm and then 5nm. In 2024 the bulk of production may be on 3nm.

I'm also not proposing a shrink. I'm proposing a new console design based around the new process node. That way you get more than just a smaller APU.

I am also wondering if we will see a separation of CPU and GPU in the future. I found it interesting that for the AM5 socket AMD has introduced a two-northbridge setup. Perhaps it will be cheaper for MS or Sony in the future to simply design a multi-chip system.

We could end up with a chip with the CPUs, 3D cache and perhaps some custom silicon, and then a chip with the GPU and a bunch of Infinity Cache.

It will be interesting to see what happens over the next few years.
 
Major problem here is that the cost per transistor is almost the same.

It isn't the same; it's more expensive per transistor to go down a node now, and it's only going to get worse.

It would depend on how popular the node they settle on is. Right now 7nm is expensive because everyone is on it; now everyone is transitioning to 6nm and then 5nm. In 2024 the bulk of production may be on 3nm.

That's part of it, but it's also that it's becoming exorbitantly expensive for a fab to move to a smaller node now. It was already stupendously expensive before, but it's getting downright absurd. So a significantly larger chunk of the cost a customer pays to move down a node is simply the fab recouping the cost of building out that smaller node. That cost is increasingly going to be amortized over a longer period of time, with a higher cost per transistor; otherwise most tech companies wouldn't be able to afford to have chips made on the smaller nodes.

That's the main reason cost per transistor will continue to increase relative to the previous node even once bulk production starts. The fact that there's large demand is the only reason fabs are moving to these nodes; if there wasn't, they wouldn't do it, because the cost is becoming even more prohibitive than it was in the past. Fabs can no longer afford to speculatively move to a smaller node and hope there will be enough customers to fill capacity. They need capacity booked prior to moving to a new node now.
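A toy example of that amortisation effect (the fab cost, wafer volume and write-off period are all made-up figures; only the structure of the argument matters):

```python
def capex_per_wafer(fab_capex: float, wafers_per_month: float, years: float) -> float:
    """Capital cost that must be recovered per wafer, ignoring interest and opex."""
    return fab_capex / (wafers_per_month * 12 * years)

# Hypothetical: a $20B leading-edge fab vs. a $5B older-node fab,
# both shipping 50k wafers a month and written off over 5 years.
print(f"leading edge: ${capex_per_wafer(20e9, 50_000, 5):,.0f} per wafer")
print(f"older node:   ${capex_per_wafer(5e9, 50_000, 5):,.0f} per wafer")
```

Every wafer on the new node carries several times the capital surcharge before a single transistor is counted, which is why the per-transistor price no longer falls the way it used to.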

Regards,
SB
 
I don't think you can say "use FSR 2.0" on AMD. This was my trip to Alfheim with FSR 2.0 Performance, TAAU@65% and DLSS Performance - watch in HDR:
Scene 1
https://drive.google.com/file/d/10UQ-d3vWxl2QBRazWMa7dpZoyz65vrkf/view?usp=sharing
https://drive.google.com/file/d/1etq1-U7UWQFE2Zf0lf3Sg5VlMi3qZVu7/view?usp=sharing
https://drive.google.com/file/d/1xgiJYyb72FZjUdkqSfr2pZLv_lsp5VEs/view?usp=sharing

Scene 2
https://drive.google.com/file/d/12gPqNziYnv6WubZU7i3KIpDVPOE2tWfZ/view?usp=sharing
https://drive.google.com/file/d/1WVM2B36aW1o05_zEtDHNz2YuW2wnzYLw/view?usp=sharing
https://drive.google.com/file/d/1ipEEYpD9DHZ3jWhWQkRGD-wkeTgD56yz/view?usp=sharing

These differences are so obvious that I don't even need to name the settings...

GoW is a huge game with different assets and styles. FSR 2.0 fails so often, has so many problems, and its lows are so low that the in-game TAAU is the way to go.
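For reference, this is roughly what those modes render internally, assuming a 4K output and the usual 50%-per-axis scale for both Performance modes (the 65% figure for TAAU is taken from the post above):

```python
def internal_res(out_w: int, out_h: int, scale: float) -> tuple[int, int]:
    """Per-axis scale factor -> internal render resolution."""
    return round(out_w * scale), round(out_h * scale)

modes = {"FSR 2.0 Performance": 0.50, "DLSS Performance": 0.50, "TAAU @ 65%": 0.65}
for name, scale in modes.items():
    w, h = internal_res(3840, 2160, scale)
    print(f"{name}: {w}x{h} ({scale * scale:.0%} of the output pixels)")
```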
 
I don't think you can say "use FSR 2.0" on AMD. This was my trip to Alfheim with FSR 2.0 Performance, TAAU@65% and DLSS Performance ...

Who would use performance mode on either one? It looks like arse for both FSR 2.0 and DLSS 2.x. Sure, FSR 2.0 performance is quite a bit worse than DLSS 2.x performance, but again, both look like arse. :p Do I want to look at arse or worse arse? Neither. :D

Use the quality mode or nothing, IMO. And for me, in most titles that support either, it's "native" (is anything truly native anymore?) rendering with reduced settings, as that generally, but not always, avoids the temporal artifacts that both FSR and DLSS quality modes can introduce. But for people who are less annoyed by those artifacts than I am, either one provides a good alternative to a game's standard rendering.

Regards,
SB
 
I don't think you can say "use FSR 2.0" on AMD. This was my trip to Alfheim with FSR 2.0 Performance, TAAU@65% and DLSS Performance ...
I think I am just being a bit diplomatic when I say "use FSR on AMD" or "FSR 2 is a viable alternative to TAAU" - I won't just spend a whole video only saying negative stuff. I think we at DF try to allow for the fact that some people have different priorities and preferences than we do. FSR 2 has some advantages over TAAU for sure, but in exchange you have to deal with ghosting or fizzle.
Perhaps some people want that over the more lacklustre detail and static-view flicker that TAAU produces.

I personally would use TAAU if I had to (though I would never have to, since I would always have an RTX card even if I did not work in games journalism), but I personally have a bigger bias against movement instability than the general gaming populace - the general gaming populace seems to love oversharpening and even turning off TAA for maximum instability!!!

So when I say use FSR 2 on AMD, I am trying to channel the will that I read in the comments. The same way, when I make recommended/optimised settings, I try to channel the preferences of a fictional performance-minded player that is just a conglomeration of ideas I read in DF video comments. Sometimes for optimised settings I make decisions about "optimised visuals" that are below my personal threshold for quality, but I imagine a fictional viewer would find them fine, as they have different preferences than me.
 
John has a little Starfield video coming out. I like this tweet from him. I'd agree that the Starfield presentation is how game presentations should be done.
They did this for Fallout 4 as well. A bit better, because they had more to show off - that demo was closer to launch.
 
So if I summarize what they were doing before and after:

- Before, they were naively using the previous half-resolution (raw) frame together with the current raw image to reconstruct, without motion vectors (which that method doesn't need). But the result was awful. I am actually very surprised they shipped the game with such a basic and amateurish combination of TAA + CBR.
- After the patch, they finally decided to use the fully anti-aliased (and full-resolution) previous frame to reconstruct the image, but then had to use motion vectors to get rid of the ghosting. In other words, what many third-party PlayStation developers have been doing since 2017 (like in Dark Souls Remastered, released in 2018), as Sony actually has libraries in its SDK that show how to do exactly that (with the ID buffer etc.) when using CBR on the Pro.

It's like their engine didn't evolve for 5 years on that matter. Very odd. Finally, I think they could have upped the maximum resolution available on PS5 (to 4K, even if only for a PS5 Pro). That version is still very much a patched PS4 Pro version.
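For anyone wondering what "using the previous full-resolution frame with motion vectors" actually involves, here is a very simplified numpy sketch of the reprojection and history-clamp step - not the game's actual code, just a generic illustration of the technique described above:

```python
import numpy as np

def neighbourhood_bounds(img: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Per-pixel min/max of the current frame over a 3x3 window (edge-padded)."""
    p = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    stack = np.stack([p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                      for dy in range(3) for dx in range(3)])
    return stack.min(axis=0), stack.max(axis=0)

def reproject_history(history: np.ndarray, motion: np.ndarray) -> np.ndarray:
    """Fetch last frame's colour at the location each pixel moved from.
    history: (H, W, 3) previous resolved frame; motion: (H, W, 2) pixel offsets."""
    h, w, _ = history.shape
    ys, xs = np.indices((h, w))
    src_y = np.clip(np.rint(ys - motion[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs - motion[..., 0]).astype(int), 0, w - 1)
    return history[src_y, src_x]

def resolve(current: np.ndarray, history: np.ndarray, motion: np.ndarray,
            blend: float = 0.125) -> np.ndarray:
    """Blend the new samples with the reprojected, clamped history frame."""
    prev = reproject_history(history, motion)
    lo, hi = neighbourhood_bounds(current)
    prev = np.clip(prev, lo, hi)  # reject history that no longer matches the scene
    return blend * current + (1.0 - blend) * prev
```

Reprojecting with motion vectors (instead of just reading the previous frame in place), plus the clamp against the current frame's local neighbourhood, is what stops the reuse of old frames from turning into ghosting.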
 
It's like their engine didn't evolve for 5 years on that matter. Very odd. Finally, I think they could have upped the maximum resolution available on PS5 (to 4K, even if only for a PS5 Pro). That version is still very much a patched PS4 Pro version.
Still IMO the best-looking game, and the difference between H2 and H1 is big (the opposite of GoW Ragnarok vs. GoW 1, at least in the trailer I saw, or the TLoU 1 remake vs. TLoU 2).
 
So if I summarize what they were doing before and after: ... It's like their engine didn't evolve for 5 years on that matter.
Another way to think about it is that they are using more frame history. Before, it was just the previous raw frame (like SMAA T2x). With it now being the previous frame + AA continuously, it is effectively a minimum of probably around 4 previous frames. So it is more accumulation-style TAA than T2x-style. Accumulation style is a lot more modern, so you are 100% right on the money there.
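The "around 4 previous frames" intuition can be made a bit more precise: with an exponential accumulation blend, the weight of older frames decays geometrically, so the effective history length depends on the blend factor (the values below are just typical-looking examples, not the game's actual parameters):

```python
import math

def frames_for_weight(alpha: float, coverage: float = 0.9) -> float:
    """With out = alpha*current + (1-alpha)*history, the last n frames carry
    1 - (1-alpha)**n of the total weight; solve for n at a given coverage."""
    return math.log(1.0 - coverage) / math.log(1.0 - alpha)

for alpha in (0.5, 0.25, 0.125):
    print(f"blend {alpha}: ~{frames_for_weight(alpha):.1f} frames hold 90% of the result")
```

A blend around 0.5 gives you roughly 3-4 frames of meaningful history, versus the strict 2 frames of a T2x-style resolve; smaller blend factors pull in far more.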
 
This is one where I can't say I agree with Alex's take on the 1-minute and 5-minute trailers. He's concerned about the 1-minute trailer being comprised of visuals from replays, with RT reflections and RTGI, being passed off as "in-game"... but what racing game doesn't use replays to show off their game in trailer form? I don't remember him having a problem with GT7's trailer, which states "mix of gameplay and in-game cinematics"... the only difference here being that they're split into two different trailers. I think it's pretty easy to tell which shots are gameplay and which are replays... I mean, there's an entire 5-minute trailer dedicated to the actual visuals you see during gameplay... how is it misleading people when you show both sides?

I guess I just don't understand why all the fuss for Forza, but nothing for Gran Turismo 7 and the way they presented their game.
 
This is one where I can't say I agree with Alex's take on the 1-minute and 5-minute trailers. ... I guess I just don't understand why all the fuss for Forza, but nothing for Gran Turismo 7.
Scroll back in time and you will hear me making similar comments about GT7 only having RT reflections in its replays. I find it detestable that racing games are the only genre to do this - it makes no sense to have better graphics, full stop, in a non-gameplay mode.
 