Digital Foundry Article Technical Discussion [2023]

As a 4070ti owner I don't think it's poor value at all.

It cost less than double what the 3060ti it replaced cost, and in RT games it offers 2x the performance.
smh

Well at least we know why you're so intent on defending Nvidia's pricing, since you've gotta try and justify it to yourself.

It's depressing that people have forgotten that we're supposed to get significant leaps in performance per dollar every generation, though. That's literally the most exciting thing about a new generation of GPUs from a consumer perspective. It's why even people who only bought like $250-400 GPUs could still be excited about parts like a 980Ti, 1080Ti and 3080, because they knew that such performance would be attainable in their own price range soon enough.

Now we get a $600 part with the same performance as a $700 part from two and a half years ago. Do we now have to wait another two years before we can get RTX3080 performance for $500? How can you not see the issue here? :/
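To put a rough number on that, here's a quick perf-per-dollar comparison at MSRP. It assumes the 4070 simply matches a 3080 in raster (the claim above); the exact ratio varies by game and resolution.

# Rough performance-per-dollar comparison at MSRP.
# Assumes the 4070 matches the 3080 in raster performance (the claim above);
# real results vary by game and resolution.
def perf_per_dollar(relative_perf, price_usd):
    return relative_perf / price_usd

rtx_3080 = perf_per_dollar(1.00, 700)   # late-2020 baseline
rtx_4070 = perf_per_dollar(1.00, 600)   # same performance, $100 cheaper

print(f"perf/$ gain after ~2.5 years: {rtx_4070 / rtx_3080 - 1:.0%}")   # ~17%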
 
You seem to have a massive problem with Nvidia.
I've not owned anything except Nvidia parts for a decade.

This isn't it man. We're just being reasonable. All while you have to tell yourself that the bog standard midrange part you bought for $800 was a totally reasonable price and purchase.
 
smh

Well at least we know why you're so intent on defending Nvidia's pricing, since you've gotta try and justify it to yourself.
Why do I need to justify how I spend my own money?

If I had purchased a 7900XT it would have been the same situation.
It's depressing that people have forgotten that we're supposed to get significant leaps in performance per dollar every generation, though.
You do know we're not in the 2000s now, right?
That's literally the most exciting thing about a new generation of GPUs from a consumer perspective.
Well it's not; new technology and features are.
It's why even people who only bought like $250-400 GPUs could still be excited about parts like a 980Ti, 1080Ti and 3080, because they knew that such performance would be attainable in their own price range soon enough.
Those days are gone.
Now we get a $600 part with the same performance as a $700 part from two and a half years ago.
Or, for $100 more than someone paid for an RTX 2070, you get a 2x (or more) performance increase.

der8auer made a good point in his video: the 4070 consumes ~120W less than a 3080, and depending on your country that's a good chunk of power savings.

How can you not see the issue here? :/

Because it's a non-issue; the days of 2-3x performance increases with each generation at a reasonable price are long gone.

If you don't like the price of a product and don't see any value in it then don't buy it.

No one forces anyone to buy anything.
 
All while you have to tell yourself that the bog standard midrange part you bought for $800 was a totally reasonable price and purchase.

The only person who decides if the price I paid for my 4070ti was reasonable is me.

If you don't like the fact that I paid that much for a GPU because I saw value in doing so then that's your responsibility to change your feelings and not my responsibility to pander to them.
 
The idea that the days of getting good improvements in performance per dollar are 'long over' is just a straight up lie. Even the last generation of Nvidia parts alone brought sizeable improvements in performance per dollar. This is not just a terrible argument, it's just straight up dishonest and you know it. The only reason we aren't getting it with Lovelace is cuz of Nvidia's pure and extortionate greed.

And you don't have to justify how you spend your own money, but that's exactly what you're doing here in trying to suggest that Nvidia's prices are fine. You are trying to justify your own purchase, because you'd otherwise have to admit you bought something that is very overpriced. If you didn't think you needed to justify it, you'd be able to admit that it's overpriced, stop defending it, and just say you bought it anyways cuz you could.
 
The idea that the days of getting good improvements in performance per dollar are 'long over' is just a straight up lie.
So explain the last few years.

The HD6950 released in 2010 was barely faster than the HD5870 that was released in 2009.

R9 390 released in 2015 is barely any faster than the R9 290 released in 2013.

History is littered with situations like the 4070 vs 3080.

Even the last generation of Nvidia parts alone brought sizeable improvements in performance per dollar.
My 4070ti offers the same performance-per-dollar ratio as the 3060ti it replaced did.

So what's the problem?
This is not just a terrible argument, it's just straight up dishonest and you know it. The only reason we aren't getting it with Lovelace is cuz of Nvidia's pure and extortionate greed.
Those are your feelings.
And you don't have to justify how you spend your own money, but that's exactly what you're doing here in trying to suggest that Nvidia's prices are fine.
Looking at AMD's prices they are fine.
You are trying to justify your own purchase, because you'd otherwise have to admit you bought something that is very overpriced.
Overpriced compared to what exactly?
If you didn't think you needed to justify it, you'd be able to admit that it's overpriced, stop defending it, and just say you bought it anyways cuz you could.
Again, overpriced compared to what exactly?
 
The idea that the days of getting good improvements in performance per dollar are 'long over' is just a straight up lie.
We've been through this, but here's another well-researched article. I doubt evidence will cause you to change your imaginary outrage-fueled narrative, but whatever.


Here's a slide from Marvell's investor presentation linked in that article:
https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/505a30e4-733d-49e4-86ea-f074a170373a_684x630.png


Note that every fabless IHV negotiates different pricing for itself from a foundry, so this data may not be exactly representative for what, for example, Nvidia would have had to pay. That chart above likely plots the cost/gate at a similar relative point in time in each technology's development timeline. Foundries will reduce the price of a tech node over time, and yields improve as well. And so if an IHV thinks they can make a competitive product with an older/cheaper node, they will do so. But they can only buck the trend for so long.

You can find tens of articles and papers on what's happening. Here are some:

The numbers will be slightly different (again, the exact numbers are buried in contracts), but the trends are the same. In the academic computer architecture community, nearly every paper mentions this trend as part of their introduction. It's well-known, and it's very irritating to see uninformed "opinions" to the contrary.

Without the foundries providing a generational $/transistor improvement, the only scaling fabless IHVs can provide are from clock speed, architectural improvements and clever algorithms.

Even the last generation of Nvidia parts alone brought sizeable improvements in performance per dollar.
Yes, that's because they used an older Samsung node that was way cheaper. That's why we got a xx102-based xx80 (non-ti).

For the Ampere->Ada move we're seeing the multiplicative effect of the fact that they went from a hungry competitor (Samsung) to the king of the hill (TSMC), AND that they went from an old, cheap node to a leading edge node. Everything we're seeing is the net impact of these cost increases. So even though the process node gives massively increased transistors-per-mm^2, that comes with a massively increased price-per-mm^2. So again, the only scaling that's possible came from clock speed, architectural improvements and clever algorithms.

If there's a silver lining here, it's that based on what I've seen so far in public documents TSMC isn't increasing 3nm price _that_ dramatically. Hopefully Samsung ups their foundry game at some point, and maybe Intel will become a viable foundry as well. Some competition will help, but let's not jump on the corporate vilification bandwagon and start attacking TSMC here. They are solving amazingly hard problems that everyone else seems to be struggling to do.
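To make the $/mm^2 vs transistors/mm^2 point concrete, here's a toy cost-per-transistor calculation. Every number in it is an illustrative placeholder (real wafer pricing is buried in contracts, as noted above); the shape of the result is what matters: a big density gain can be almost entirely eaten by a big wafer price increase.

# Toy model: $ per million transistors = wafer price / (dies per wafer * Mtransistors per die).
# All numbers are illustrative placeholders, NOT actual foundry pricing.
def dollars_per_mtransistor(wafer_price_usd, die_area_mm2, mtransistors_per_mm2,
                            wafer_area_mm2=70_000):        # ~300 mm wafer, edge loss ignored
    dies_per_wafer = wafer_area_mm2 / die_area_mm2         # yield ignored for simplicity
    mtransistors_per_die = die_area_mm2 * mtransistors_per_mm2
    return wafer_price_usd / (dies_per_wafer * mtransistors_per_die)

# Hypothetical "older, cheaper node" vs "leading-edge node" for the same 300 mm^2 die:
old_node = dollars_per_mtransistor(wafer_price_usd=6_000,  die_area_mm2=300, mtransistors_per_mm2=45)
new_node = dollars_per_mtransistor(wafer_price_usd=17_000, die_area_mm2=300, mtransistors_per_mm2=125)

print(f"old node:          ${old_node:.4f} / Mtransistor")
print(f"leading-edge node: ${new_node:.4f} / Mtransistor")
# Density went up ~2.8x, the wafer price went up ~2.8x, so $/transistor barely moved.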
 
Don't forget that the lower power consumption contributes to a better price/performance, too. AD104 is half the size of GA102, but the process costs a lot more (3x+). But with 120W less power (even 150W against the 12GB 3080), the power savings here in Germany are around 66€ over two years at an average of 2h/day playtime. Inflation is ~15% since the end of 2020, so the inflation-adjusted MSRP, with the higher power costs included, would be 904€ for the 3080 12GB (729€ at launch).

And nVidia has no competition at the moment. It is an improvement of around 37% for the 4070. Nothing special, but the additional 2GB VRAM is a plus, too.
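For what it's worth, that arithmetic checks out. A quick sketch, assuming ~0.38 €/kWh for German household electricity (your tariff will differ) and the 2h/day over two years stated above:

# Sanity check of the power-savings and inflation figures above.
power_delta_kw = 0.120                  # 4070 vs 3080, while gaming
hours = 2 * 365 * 2                     # 2 h/day for 2 years
price_per_kwh = 0.38                    # assumed German household rate, EUR

savings = power_delta_kw * hours * price_per_kwh
print(f"electricity saved over 2 years: ~{savings:.0f} EUR")            # ~67 EUR

msrp_3080_12gb = 729                    # EUR at launch
inflation = 0.15                        # ~15% since end of 2020
adjusted = msrp_3080_12gb * (1 + inflation) + savings
print(f"inflation- and power-adjusted 3080 12GB price: ~{adjusted:.0f} EUR")   # ~905 EUR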
 
Not to be pedantic, but to illustrate whether the cost of chips is going to come down, you should look at price per wafer by node. Since gaining more computational power requires more gates, and therefore a bigger chip by area, the cost of the wafer and the actual defect yield will give a better idea of whether chips are going up or down in price. The larger the chip, the higher the chance of a defect, which means the bin of fully working chips per wafer is going to cost significantly more than a defect-filled bin.

If you look at price per gate, then you would see that our wattage and chip price should be stable or dropping, but we see both constantly going up - and that’s the result of increased chip size and yield challenges.
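As a sketch of that point, here's the textbook Poisson yield model with made-up defect density and wafer price (not any foundry's real numbers); the cost of a fully working die grows much faster than linearly with die area:

import math

# Poisson yield model: fraction of defect-free dies = exp(-defect_density * die_area).
# Wafer price and defect density are made-up illustrative values.
def cost_per_perfect_die(wafer_price_usd, die_area_cm2, defects_per_cm2,
                         wafer_area_cm2=707):              # ~300 mm wafer
    dies_per_wafer = wafer_area_cm2 / die_area_cm2         # edge losses ignored
    yield_fraction = math.exp(-defects_per_cm2 * die_area_cm2)
    return wafer_price_usd / (dies_per_wafer * yield_fraction)

for area in (1.0, 3.0, 6.0):                               # small, mid-size, big die (cm^2)
    cost = cost_per_perfect_die(15_000, area, 0.1)
    print(f"{area:.0f} cm^2 die: ~${cost:,.0f} per fully working die")
# 6x the area costs ~10x per perfect die here, which is why big chips
# depend so heavily on selling partially disabled (binned) parts.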

The real issue is that even if price manages to stabilize or drop through better manufacturing, wattage continues to increase in a smaller surface area. The laws of physics will eventually kick in here, because no amount of cooling will be able to reasonably contain that many W/cm^2.

The costs are going to need to pour into better ways of doing work with our silicon, or into novel ways to separate the chips to bring cooling requirements down.

That being said: I agree with you in saying foundries should not be vilified. The challenge around cost is entirely different.
 
At relevant settings the 4070Ti can actually play regardless of VRAM? What games are those?
2 games off the top of my head. RE4 from what I saw in someone's youtube videos will actually use more than 12gb at 4k. I'll try to find the video. I also believe TLOU does as well in the same video.
So you're expecting the 4070Ti to play these "next gen raytracing/path tracing games" at native 4K and only be hindered by its VRAM?
I expect the VRAM to be a limiting factor long before performance becomes a limiting factor on the 4070/TI. Secondly, the 4070ti is $800. If it can't play games at 4k, it's not a high end gpu and shouldn't be priced at $800.
Why does this matter? What matters is the overall memory performance of the card, not the width of the bus. The 4070Ti obliterates many 256-bit GPUs, handily beats the 320-bit 3080 and even edges out the 384-bit 3090.

Judging a GPU's worth by the width of its memory bus is even worse than judging it based on TFLOPs.
It matters in this case because the card offers a 13% increase in bandwidth over the 3070. Also, a 256-bit bus would allow them to offer 16GB of VRAM on the card, and the side benefit of that would be an increase in bandwidth. With regards to the 4070ti obliterating other GPUs, well, that's easy to do when you run at a much higher clock speed. Other than SER, which is used in one game, it's easy to argue that the majority of the gains come from the ridiculous 700-900MHz increase in clock speed alone. Is Ada actually faster clock for clock than Ampere? I don't know if that's actually been tested. Thankfully, now that the 4070 is out, someone can test it against the 3070 by limiting the clock and memory to find out.
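For reference, that 13% falls straight out of bus width times memory data rate (using the published memory specs), and a 256-bit bus at the same 21Gbps would indeed have added bandwidth on top of the extra capacity:

# Memory bandwidth in GB/s = (bus width in bits / 8) * data rate in Gbps.
def bandwidth_gbs(bus_bits, gbps):
    return bus_bits / 8 * gbps

rtx_3070    = bandwidth_gbs(256, 14)   # GDDR6  @ 14 Gbps -> 448 GB/s
rtx_4070_ti = bandwidth_gbs(192, 21)   # GDDR6X @ 21 Gbps -> 504 GB/s
hypo_256bit = bandwidth_gbs(256, 21)   # hypothetical 256-bit Ada card -> 672 GB/s

print(f"4070 Ti vs 3070:              +{rtx_4070_ti / rtx_3070 - 1:.1%}")   # +12.5%
print(f"hypothetical 256-bit vs 3070: +{hypo_256bit / rtx_3070 - 1:.1%}")   # +50.0%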
 
It's obvious now the target platform was PC and the consoles only ports that needed important patches to be in a decent state.
It is a weird situation, as I would say the opposite was the case for Village, which felt super "consolised" in terms of the attention to detail on that PC version when it launched back then (the super low quality RT, the non-functioning AA (still lol), and the fact that each time you killed an enemy the game would stutter).

AA now being bad on consoles in a big way, though, makes me wonder whether Capcom is no longer doing visual QA in the same way as they maybe did in the past, or is implementing changes - for example, checking whether visual effects ARE DIFFERENT between builds. I know some devs I have visited even implement entire automated systems to check each build for whether shader visuals have changed.
 
If you look at price per gate, then you would see that our wattage and chip price should be stable or dropping, but we see both constantly going up - and that’s the result of increased chip size and yield challenges.
Yes, mathematically yields should go down as densities increase, but the issue is somewhat mitigated by the fact that GPUs are amenable to heavy floorsweeping. Instead, the first-order cost driver is the increasing capital expenditure for building a new fab. That in turn is due to the R&D expense of developing the node itself and building the manufacturing equipment. All of that is encapsulated in the cost/gate metric.
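To illustrate the floorsweeping point with the same toy yield model as above (all numbers still made up): if most defective dies can be salvaged as cut-down SKUs, the cost per sellable die becomes far less sensitive to die size than the perfect-die numbers suggest.

import math

# Same Poisson yield sketch as earlier, extended with floorsweeping:
# a defective die can often still be sold as a cut-down SKU.
def cost_per_sellable_die(wafer_price_usd, die_area_cm2, defects_per_cm2,
                          salvage_rate, wafer_area_cm2=707):
    dies = wafer_area_cm2 / die_area_cm2
    perfect = math.exp(-defects_per_cm2 * die_area_cm2)
    sellable = perfect + (1 - perfect) * salvage_rate      # salvaged bins count too
    return wafer_price_usd / (dies * sellable)

big_die_cm2 = 6.0                                          # hypothetical large GPU die
print(f"no salvage:  ~${cost_per_sellable_die(15_000, big_die_cm2, 0.1, 0.0):,.0f} per sellable die")
print(f"80% salvage: ~${cost_per_sellable_die(15_000, big_die_cm2, 0.1, 0.8):,.0f} per sellable die")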

The real issue is that even if price manages to stabilize or drop through better manufacturing, wattage continues to increase in a smaller surface area. The laws of physics will eventually kick in here, because no amount of cooling will be able to reasonably contain that many W/cm^2.
Agreed. But there's some headroom to improve this on the architecture side. It's not a silver bullet that will keep on giving, but there are a bunch of one-time optimizations that will keep things going for a while.
 
2 games off the top of my head. RE4 from what I saw in someone's youtube videos will actually use more than 12gb at 4k. I'll try to find the video. I also believe TLOU does as well in the same video.

I have both (well the demo on RE4 anyway) along with a 12GB GPU and I can assure you neither game comes even remotely close to 'obsoleting' it based on VRAM.

Putting aside the fact that TLOU should not be used as a benchmark for anything given how much of a technical mess it is, it's also not particularly playable on a 4070Ti at native 4K ultra settings, which is pretty much what you'd need to break 12GB. I can get the game comfortably below 12GB by turning on DLSS Quality and reducing the environment textures from Ultra to High with literally no visible difference in texture quality. And that's with the game supposedly reserving 2.5GB of VRAM for the rest of the system, which according to @yamaci17 it's not actually doing anyway (and is enormous overkill for what most people actually need).

RE4 is a classic case of hyper-inflated VRAM requirements for no appreciable gain. You can max everything out while staying under 12GB by simply reducing shadow quality from Max to High with no appreciable image quality loss. I suspect its AMD-sponsored roots are at play there.

I expect the VRAM to be a limiting factor long before performance becomes a limiting factor on the 4070/TI. Secondly, the 4070ti is $800. If it can't play games at 4k, it's not a high end gpu and shouldn't be priced at $800.

It's highly likely that even the $1600 4090 won't be able to play "next gen raytracing/path tracing games" at native 4K, and don't even get me started on the $1000 7900XTX, so I really don't see why you're expecting that of the lower end cards. Upscaling has already been targeted squarely at games with RT because the performance just isn't there to use it at high native resolutions. Try playing CP2077 at native 4K with path tracing enabled on a 4090 and see what happens.

It matters in this case because the card offers a 13% increase in bandwidth over the 3070.

No, this doesn't matter in the slightest. It's a far faster card than the 3070 by much more than 13% in every scenario. Ada obviously has significant changes over Ampere (largely the heavily increased caches) which make it much less reliant on VRAM bandwidth than previous architectures.

I mean come on, the 7900XTX literally has less bandwidth than the Radeon VII. Are you suggesting that makes it a worse GPU somehow?
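A rough way to see why the raw bus/bandwidth numbers mislead here: a large on-die cache absorbs part of the memory traffic, so DRAM only has to serve the misses. The hit rates below are purely hypothetical, just to show the shape of the effect; only the DRAM bandwidth figures are the published specs.

# Toy model: if a fraction of memory requests hit the on-die cache, the DRAM
# only serves the misses, so the sustainable request bandwidth is roughly
# dram_bandwidth / (1 - hit_rate). Hit rates here are hypothetical.
def sustainable_traffic_gbs(dram_gbs, cache_hit_rate):
    return dram_gbs / (1 - cache_hit_rate)

ampere_3080 = sustainable_traffic_gbs(760, 0.20)   # 760 GB/s DRAM, small L2, assumed 20% hit rate
ada_4070_ti = sustainable_traffic_gbs(504, 0.50)   # 504 GB/s DRAM, big L2, assumed 50% hit rate

print(f"3080-style:    ~{ampere_3080:.0f} GB/s of request traffic")
print(f"4070 Ti-style: ~{ada_4070_ti:.0f} GB/s of request traffic")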

Also, a 256-bit bus would allow them to offer 16GB of VRAM on the card, and the side benefit of that would be an increase in bandwidth.

Hey, no-one's saying that more VRAM wouldn't have been nice, of course it would. But that would also have made it more expensive, and the 12GB it does have is not even remotely going to obsolete it for the remainder of this generation. There may be the odd corner case where some minor compromises have to be made where they otherwise wouldn't if it had 16GB, but my money is on those being very few and far between, and very minor in nature. Nothing like the launch issues with Forspoken, for example, which meant 8GB GPUs had to suffer PS3-like textures at any setting (which has been resolved now for the record, just like most of the recent 8GB hobbling issues).

With regards to the 4070ti obliterating other GPUs, well, that's easy to do when you run at a much higher clock speed. Other than SER, which is used in one game, it's easy to argue that the majority of the gains come from the ridiculous 700-900MHz increase in clock speed alone. Is Ada actually faster clock for clock than Ampere? I don't know if that's actually been tested. Thankfully, now that the 4070 is out, someone can test it against the 3070 by limiting the clock and memory to find out.

Again, why does this matter in the slightest? If the redesigned architecture allows it to run at a much higher clock speed which in turn results in much higher performance, along with much improved performance per watt to boot, why does it matter that they achieved it that way?
 
It's obvious now the target platform was PC and the consoles only ports that needed important patches to be in a decent state.

The PC's interlaced mode has actually regressed in this game compared to Village; FSR is OK, but a DLSS mod provides significantly better image quality, and there are still problems with TAA - just that consoles now have them too. RT still renders reflections with no simple resolution upgrades to take advantage of PC RT hardware. There are bizarre texture management issues that can result in worse quality textures at points on higher settings than lower.

It's generally well performing on the PC because the RE Engine performs well on PC (especially in an era where we're holding our breath for new releases just to get the bare minimum like shader compilation), but it's obviously not flawless. FSR is better than their awful interlaced mode, sure, but just based on pure rendering performance at equivalent settings, it falls pretty much in line with most titles, in that a PS5 will slightly outperform my 3060, but with DLSS I can outpace it - so it really isn't that much of an outlier in terms of optimization. It seems like there's plenty of minor QA critique to go around on all platforms.

I believe RE titles sell well on PC, but compared to the entire installed console base? I highly doubt this game was designed for the PC first and foremost, with the consoles ported as an afterthought.
 
Secondly, the 4070ti is $800. If it can't play games at 4k, it's not a high end gpu and shouldn't be priced at $800.

The '70' class Nvidia GPUs have always been 1440p GPUs (with the '60' class for 1080p and the '80' class and above for 4K), so what are you even doing talking about 4K performance for a '70' class Nvidia GPU?

Can we apply this same silly logic to the 7900XT? Because unlike the 4070ti, the '900' class for AMD GPUs is a 4K GPU.
 
Nothing like the launch issues with Forspoken, for example, which meant 8GB GPUs had to suffer PS3-like textures at any setting (which has been resolved now for the record, just like most of the recent 8GB hobbling issues).

Perhaps they have been 'resolved' in some fashion, but not with RT even at 1080p.

[attached screenshot]


Hogwarts has slower loading textures at even medium, and more intermittent stuttering. Again, RT is not helping the usage here, but again - it's 1080p, and it's a prime selling point of RTX cards. Maybe it's not indicative of more optimized titles to come down the pike, sure - neither title was exactly an exemplary release, to put it mildly. But if they've improved with patches, it seems pretty marginal.
 
Ray tracing is a no-go from this point forward with an 8 GB budget, even at 1080p, in most cases. That's pretty much a given... I would be glad with, or rather would settle for, decent textures at 1440p (I hope it doesn't come to 1080p). The problem is getting N64 textures with regular rasterization. I personally made my peace with that. To be honest, I've played a lot of ray traced titles from 2019 to 2022 gracefully on my 3070. On top of that, occasional path tracing mods like Half-Life/Quake etc. were a great boon and I had a great time with them, and such stuff will keep coming and I'm grateful for that too. I hope the Half-Life 2 RTX Remix mod is also playable with 8 GB of VRAM/a 3070.

Yes, it is sad that despite being capable, VRAM will stop the 3070 from being a competent ray tracing card going forward for future games. But I cannot ignore the value I've gotten so far either. It even let me enjoy Cyberpunk with path tracing, albeit at around 30 FPS at 1440p/DLSS Balanced, but it was enough to explore/drive/walk/run around.

It was a pleasure having such high quality textures alongside a path-traced renderer at 1440p/upscaled on an 8 GB budget, however. Puts things into a funny perspective indeed. And makes you question what they did right and what others did wrong in terms of texture management.

[attached screenshots]

I still don't regret getting the 3070 and I still think it will serve me another good 2-3 years as long as I'm not being forced into N64-texture territory. I'll still keep enjoying Reflex+DLSS and that's fine by me. It sucks, because a bit more VRAM would've made it a legendary card, but it is what it is. Still a champ in my eyes tho. I will keep waiting for the eventual 16 GB 70 card. 4070 won't fool me.

Frame generation requires VRAM. Next-gen textures will require more and more VRAM. Ray tracing requires VRAM, and more complex ray tracing requires even more VRAM. It may do well enough for 2 years, but I simply don't want to experience the same thing I did with the 3070. Good luck with that.
 
I will keep waiting for the eventual 16 GB 70 card. 4070 won't fool me.

By the time 12GB actually becomes any kind of even moderate issue, the next gen consoles will be on the horizon by which time 16GB will land you right back in the same position.

You will not be in any kind of crippling situation with a 12GB GPU for the rest of this generation, but if it gives you peace of mind, just get a 7900XT now. Although on balance, I suspect you will see worse performance in those heavily Ray Traced games that you suspect will stress the VRAM. And without the option of frame generation at that.
 