NVidia Ada Speculation, Rumours and Discussion

I do not choose my platform - gamers do.

Right, then there's no reason to be all-negative about the platform. It's the largest single 'system'.

My prediction is that the PC moves towards APUs in general. The Steam Deck, Rembrandt etc. are the first signs, but it will come to desktop as well.
NV looks like they're already preparing for this. Their consumer focus seems to be much more on cloud gaming and content creation. At least that's how it looks to me.

But that's just me, and we shall see.

Nah. APUs will be a part of the consumer market, more so than before perhaps. Laptops have generally taken that approach for ages already. APUs currently have too many restrictions, especially performance-wise but also in flexibility. A big APU to match, say, a 3080 Ti and 12900K combo (for example) is going to be just as expensive if not more so, with the classic APU drawbacks like performance and zero upgrade path.

Everyone is adding focus to cloud and online gaming, even your platform holder, perhaps more so than NV. Cloud and streaming/services are going to be the end for your platform at some point, no idea when, but it's ultimately going to be the bigger market (unfortunately) and might take over the evil PC sometime after that as well. It is what it is, but perhaps nothing to think about at the moment...

Yeah, the product itself is clearly awesome, but pricing and marketing are what dictate people's expectations of how awesome it should be, and I think this is where NVIDIA have fallen hard, tbh. That 4080 12GB should have been marketed as a 4070. If the price were then even $100 over the 3070, I suspect people would still see that as a good deal. I think even a $150 increase would have been accepted with a few grumblings. But whacking it up by $400 and then renaming it a 4080 to trick people into thinking they're getting better value than they are is where Nvidia have gone wrong, IMO. But hey, maybe they simply had no choice on the pricing and people will buy it anyway, in which case what I'm saying doesn't matter.

For my part though, as a pretty serious PC hardware enthusiast of decades who's been itching to get my hands on a new GPU for the last few years and has been waiting on this with bated breath, I'm put off. I was ready to put my money down as soon as these were available before the announcement, but now I'll wait and pray that AMD release something more compelling. Perhaps that may also have a knock-on effect on Ampere prices, bringing that performance level's pricing in line with what we'd expect following a new generation launch. If that were to happen then I'd be tempted by a high-end Ampere.

Indeed. The high prices are a mirror and result of the market as it is today. Literally no competition in the dGPU/gaming space for NV, a large backlog of RTX 3000 hardware, TSMC premiums, a global recession, supply issues, etc. I think/hope that AMD will start offering some competition on either performance or price, or both if they can. Intel entering the market is a good thing as well.
When the RTX 3000 backlog has cleared and mid/low-range Ada GPUs release, along with some competition from AMD and a more mature TSMC process, things might cool down regarding prices.

Anyway, if Ada is such a small improvement over Ampere as console gamers say, then that puts the RTX 3000 series in a good spot, seeing what they go for and how they perform. A 3080/Ti could be a much cheaper upgrade while still offering enough performance. Remember the baseline is 5700 XT/6600 XT level. It's just that hardware has been advancing quite fast in such a short time (at a cost).
Heck, a 3060 Ti is already above the baseline. The 3090 Ti/4090 are just extremely capable. Like with Apple: how many really need the 60/70k M1 Ultra when the vanilla M1 or M1 Air is already above the baseline?

Edit: I wonder what Ada in laptops will look like. Last time with Ampere you could get high-end performance (3070 level) in a $1,500 total package, all the while dGPUs were quite expensive.
 

One particularly impressive aspect of DLSS3 vs DLSS2 is its ability to scale at very high framerates. I guess there must be a limit to that though. It'd be interesting to see what the cost is in terms of ms to create the new frame.

Also what happens if it misses the window? Clearly it's not an exact doubling from all the examples we've seen, which means some pairs of frames must be getting a new AI-generated frame and others not. Isn't that going to lead to uneven frame pacing?
 
Is it only me who watches the 2-year-old CP2077 running at 22fps on the still-unreleased most powerful GPU in the world, and thinks that maybe RT is still 10 years away?

The same game with a different set of developers might run at 44fps.

The game made from the ground up with RT in mind might also run better.

There are too many things that can affect performance, and I don't feel these cross-gen games with ray tracing 'bolted on' best reflect it, to be honest.

Metro Exodus Enhanced Edition managed to have better RTGI and a performance bump so there's room to improve techniques and optimisations.
 
One particularly impressive aspect of DLSS3 vs DLSS2 is its ability to scale at very high framerates. I guess there must be a limit to that though. It'd be interesting to see what the cost is in terms of ms to create the new frame.

Also what happens if it misses the window? Clearly it's not an exact doubling from all the examples we've seen, which means some pairs of frames must be getting a new AI-generated frame and others not. Isn't that going to lead to uneven frame pacing?
Upscaling costs 1ms from 1080p -> 2160p on a 3090. Even at 5ms, DLSS 3 should be fast enough to always be faster than rendering the next frame.
 
One particularly impressive aspect of DLSS3 vs DLSS2 is its ability to scale at very high framerates. I guess there must be a limit to that though. It'd be interesting to see what the cost is in terms of ms to create the new frame.

Also what happens if it misses the window? Clearly it's not an exact doubling from all the examples we've seen, which means some pairs of frames must be getting a new AI-generated frame and others not. Isn't that going to lead to uneven frame pacing?
And the poor bastards on 2060-tier cards who actually need it to go from 30-40fps to 60+ are left in the cold. What's the point of 200fps in that game?
 
One particularly impressive aspect of DLSS3 vs DLSS2 is its ability to scale at very high framerates. I guess there must be a limit to that though. It'd be interesting to see what the cost is in terms of ms to create the new frame.

Also what happens if it misses the window? Clearly it's not an exact doubling from all the examples we've seen, which means some pairs of frames must be getting a new AI-generated frame and others not. Isn't that going to lead to uneven frame pacing?
What do you mean, "Also what happens if it misses the window"?

The optical flow block tracks pixel movement and collects speed data, thereby compensating for artifacts and helping build a new frame from scratch. If it works slowly, the new frame will simply be generated more slowly.
Read this and you'll understand everything about DLSS 3.
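To make the idea a bit more concrete, here's a toy sketch (Python/NumPy, my own illustration rather than anything NVIDIA has published) of flow-based frame interpolation: given two rendered frames and a per-pixel flow field, warp the first frame partway along the motion vectors to synthesize an in-between frame. The real pipeline layers game motion vectors, the Ada optical flow accelerator and a neural network on top of this, and the function name and array shapes here are just assumptions for illustration.

```python
import numpy as np

def interpolate_frame(frame_a, frame_b, flow_ab, t=0.5):
    """Toy flow-based frame interpolation (illustration only).

    frame_a, frame_b : (H, W, 3) uint8 arrays, two consecutive rendered frames
    flow_ab          : (H, W, 2) per-pixel motion from frame_a towards frame_b,
                       (dx, dy) in pixels -- assumed to be given; this is the
                       part the optical flow hardware is responsible for
    t                : temporal position of the generated frame (0.5 = midpoint)
    """
    h, w, _ = frame_a.shape
    ys, xs = np.mgrid[0:h, 0:w]

    # Push every pixel of frame_a forward along a fraction t of its motion
    # vector (forward splatting; overlapping pixels simply overwrite each other).
    xf = np.clip(np.rint(xs + t * flow_ab[..., 0]).astype(int), 0, w - 1)
    yf = np.clip(np.rint(ys + t * flow_ab[..., 1]).astype(int), 0, h - 1)

    mid = np.zeros_like(frame_a)
    mid[yf, xf] = frame_a[ys, xs]

    # Pixels nothing landed on stay black; fill them with a plain blend of the
    # two source frames (a real interpolator does something far smarter here).
    holes = mid.sum(axis=-1) == 0
    blend = (1.0 - t) * frame_a.astype(float) + t * frame_b.astype(float)
    mid[holes] = blend[holes].astype(frame_a.dtype)
    return mid
```

The point of the sketch is only to show why a slow optical flow step just delays the generated frame rather than breaking anything: the warp is an independent post-process on two frames that already exist.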
 
Right, then there's no reason to be all-negative about the platform. It's the largest single 'system'.
I'm not negative - I'm sad.
PC is my personal platform of choice, and until there is some open alternative, it will remain that.

Nah. APUs will be a part of the consumer market, more so than before perhaps. Laptops have generally taken that approach for ages already.
No, sadly not.
I want some Rembrandt laptop, because that's almost a Series X/S. I'm sure those are enough specs for 60 fps next-gen games. It's the first proper APU to play games on.
Now go and try to find a laptop with a 6800U.
There are a few, but ALL of them also have a dGPU, e.g. a 3060M.
Why should I pay for a dGPU which is only slightly more powerful than the built-in iGPU?
This is just stupid. And this is what makes me sad about the PC platform. They fail to offer attractive products in general. It's not just NV that feels completely out of this world.
Cloud and streaming/services are going to be the end for your platform at some point, no idea when, but it's ultimately going to be the bigger market
Yeah. And you know what? I can't wait for it.
Imagine: all those dumb power-fantasy games for teens, retro pixel kiddy crap, and 200h RPG nonsense go away.
Maybe, just maybe, game devs will realize: billions of grown-ups still use computers, and might deserve to get some games for their age as well. It might be good business.
Wishful thinking, probably. But still - more platforms mean better offers for their users, eventually. Maybe cloud is for the good.

The high prices are a mirror and result of the market as it is today.
People blame NV for being greedy, or the current crisis, the economy, Putin, whatever.
But those aren't the main reasons GPUs are too expensive.
The main reason is: people fail to lower their expectations.
They cannot accept that tech progress at some point has to slow down, that Moore's Law has an end.
We are all guilty. Devs were lazy and based their progress on the next generation's HW improvements. Gamers took the bait of an ever-increasing realism promise, e.g. path-traced games.

And now we see it does not work, and point fingers at everybody else. Just not at ourselves, the true origin of the failure.
So it will take a whole lot of time until we seriously realize: the APU is the saviour of gaming.
But after that, it will be all fine. \ :D /

The same applies to humanity as a whole, btw. If we fail to replace envy with simple needs, we will all die.
 
I'm looking forward to the 4080 10GB for $700, the 4080 8GB for $600 and the 4080 6GB for $500 to flesh out the "midrange" cards.

Yeah, the 4080 8GB would basically be a 4070 Ti/Super due to the lower memory bandwidth.

Maybe this time Nvidia will use memory size to divide the tiers, hahahaha.
 
So, the RTX 4080 (12GB) has the same memory interface width as the GTX 1060 3GB? Nice move. OK, obviously the transfer rate will be much higher, but it still has to make do with roughly half the transfer rate of the TFLOPS-wise similarly configured RTX 3090 Ti. That's probably gonna be a rude awakening in some applications.
 
Upscaling costs 1ms from 1080p -> 2160p on a 3090. Even at 5ms, DLSS 3 should be fast enough to always be faster than rendering the next frame.

Yeah, that's a good point actually. Taking a native frame rate of 50fps / 20ms for example, and a frame generation cost of 5ms, then (I think!) it would look something like this:

50fps native
40fps with frame generation cost
double that to 80fps

= 60% performance boost
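For what it's worth, here's that same back-of-envelope arithmetic in a few lines of Python (the 5ms generation cost is just the figure assumed above, not a confirmed number, and the model is my own simplification):

```python
def framegen_fps(native_fps, gen_cost_ms=5.0):
    """Crude model: every rendered frame pays the frame-generation cost,
    and one AI-generated frame is displayed per rendered frame."""
    native_ms = 1000.0 / native_fps
    rendered_fps = 1000.0 / (native_ms + gen_cost_ms)  # real frames after overhead
    return rendered_fps, 2 * rendered_fps              # plus one generated frame each

rendered, displayed = framegen_fps(50)
print(rendered, displayed)  # 40.0 80.0 -> a 60% boost over 50 fps native
```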

What do you mean, "Also what happens if it misses the window"?

I meant: what if the time between frames 2 and 3 is not long enough to generate the AI intermediate frame between frames 1 and 2? But I see now I was thinking about it wrong, because the time between frames 2 and 3 just gets longer to accommodate the extra frame generation work.

Also, didn't someone post some crazy-high esports benchmarks earlier, which would suggest the generation time is very low, or else you'd start seeing negative performance from turning on DLSS 3?
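Following that thought through with the same crude model as above (my own arithmetic, not anything NVIDIA has stated): frame generation only turns into a net loss once its per-frame cost exceeds the native frame time, which is exactly why very high esports framerates would be the first place to show it.

```python
GEN_COST_MS = 5.0  # assumed cost per generated frame, same figure as above

for native_fps in (30, 60, 200, 500):
    native_ms = 1000.0 / native_fps
    displayed_fps = 2000.0 / (native_ms + GEN_COST_MS)  # two frames shown per render
    gain = displayed_fps / native_fps - 1.0
    # Break-even is where the generation cost equals the native frame time,
    # so the higher the base framerate, the thinner the margin gets.
    print(f"{native_fps:>4} fps native -> {displayed_fps:6.1f} fps displayed ({gain:+.0%})")
```

So if those 200+ fps DLSS 3 numbers are real, the actual generation cost has to be well under 5ms.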
 
So, the RTX 4080 (12GB) has the same memory interface width as the GTX 1060 3GB? Nice move. OK, obviously the transfer rate will be much higher, but it still has to make do with roughly half the transfer rate of the TFLOPS-wise similarly configured RTX 3090 Ti. That's probably gonna be a rude awakening in some applications.
Yeah, effective bandwidth is the big unknown at the moment, and its deficit may explain some scaling issues compared to Ampere.
 
I suppose the best test for DLSS 3.0 is going to be taking 20/30fps to ~60fps.

These 200+ frame rates in games like Spider-Man are all well and good, but with a frame rate already high before DLSS 3, it would, I imagine, be difficult to feel anything that doesn't quite feel right with DLSS 3.

Whereas we're all very good at feeling the difference between 20/30fps and 60fps, so I feel that's where we'll notice if something doesn't quite feel like it should with DLSS 3.
 
Yeah, effective bandwidth is the big unknown at the moment, and its deficit may explain some scaling issues compared to Ampere.
Currently, 24 Gbps is the highest announced speed for G6X, with that announcement dating from April this year. Going by that, a 192-bit card can reach 576 GB/s. That's less than a 3070 Ti, which has less than half the TFLOPS. Yes, caching can get you an extra mile or two, and SER will maybe have an effect as well (lessening the burden on the caches, so they can better serve other consumers). But all in all, I'm afraid, it's too little.

edit: Seemingly, many vendors like KFA2, MSI or Zotac will continue to use 21 Gbps memory, so even less than calculated above: 504 GB/s.
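For reference, those peak figures are just bus width times per-pin data rate, divided by 8 to go from bits to bytes; a quick sketch (the 3070 Ti line uses its published 19 Gbps / 256-bit configuration):

```python
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak theoretical bandwidth in GB/s: per-pin data rate (Gbit/s) times
    the number of pins (bus width), divided by 8 to convert bits to bytes."""
    return bus_width_bits * data_rate_gbps / 8

print(peak_bandwidth_gbs(192, 24))  # 4080 12GB with 24 Gbps G6X  -> 576.0 GB/s
print(peak_bandwidth_gbs(192, 21))  # same card with 21 Gbps parts -> 504.0 GB/s
print(peak_bandwidth_gbs(256, 19))  # RTX 3070 Ti, 19 Gbps / 256-bit -> 608.0 GB/s
```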
 
Whereas we're all very good at feeling the difference between 20/30fps and 60fps, so I feel that's where we'll notice if something doesn't quite feel like it should with DLSS 3.

Yes, but will it "feel" like 60 fps or will it still "feel" like 20/30 fps but look like 60 fps? The main input and game processing loop would still be at the lower raw rate, right?
 