I do not choose my platform - gamers do.
My prediction is that PC gaming arrives at APUs in general. The Steam Deck, Rembrandt, etc. are the first signs, but it will come to desktop as well.
NV looks like they're already preparing for this. Their consumer focus seems to be much more about cloud gaming and content creation. At least that's how it looks to me.
But that's just me, and we shall see.
Yeah, the product itself is clearly awesome, but pricing and marketing are what dictate people's expectations of how awesome it should be, and I think this is where NVIDIA have fallen hard tbh. That 4080 12GB should have been marketed as a 4070. If the price were then even $100 over the 3070, I suspect people would still see that as a good deal. I think even a $150 increase would have been accepted with a few grumblings. But whacking it up by $400 and then renaming it a 4080 to trick people into thinking they're getting better value than they are is where Nvidia have gone wrong IMO. But hey, maybe they simply had no choice on the pricing and people will buy it anyway, in which case what I'm saying doesn't matter.
For my part though, as a pretty serious PC hardware enthusiast of decades who's been itching to get my hands on a new GPU for the last few years and has been waiting on this with bated breath, I'm put off. I was ready to put my money down as soon as these were available before the announcement, but now I'll wait and pray that AMD release something more compelling. Perhaps that may also have a knock-on effect on Ampere prices, bringing that performance level's pricing in line with what we'd expect following a new generation launch. If that were to happen, then I'd be tempted by a high-end Ampere.
Is it only me who watches the two-year-old CP2077 running at 22 fps on the still-unreleased most powerful GPU in the world and thinks that maybe RT is still 10 years away?
> One particularly impressive aspect of DLSS 3 vs DLSS 2 is its ability to scale at very high framerates. I guess there must be a limit to that, though. It'd be interesting to see what the cost is in terms of ms to create the new frame. Also, what happens if it misses the window? Clearly it's not an exact doubling in all the examples we've seen, which means some pairs of frames must be getting a new AI-generated frame and others not. Isn't that going to lead to uneven frame pacing?

Upscaling costs 1 ms from 1080p -> 2160p on a 3090. Even at 5 ms, DLSS 3 should be fast enough to always be faster than the next frame.
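The "limit" intuition above can be made concrete with some back-of-the-envelope arithmetic. A minimal sketch, using the hypothetical 5 ms generation cost from the post (not a measured or official figure): an interpolated frame is only useful if it can be produced within half the interval between two rendered frames, so a fixed per-frame cost stops fitting once the base framerate climbs high enough.

```python
# Hypothetical frame-generation budget check. The 5 ms cost is the
# figure floated in the post above, not an official number.

def frame_interval_ms(fps: float) -> float:
    """Time between two consecutively rendered frames, in milliseconds."""
    return 1000.0 / fps

def generation_fits(fps: float, gen_cost_ms: float) -> bool:
    """True if an interpolated frame can land midway between two rendered
    frames, i.e. the generation cost is under half the frame interval."""
    return gen_cost_ms < frame_interval_ms(fps) / 2

for fps in (60, 120, 240):
    print(f"{fps} fps: interval {frame_interval_ms(fps):.2f} ms, "
          f"5 ms generation fits: {generation_fits(fps, 5.0)}")
```

Under that assumption, a 5 ms cost fits comfortably at a 60 fps base (8.33 ms of slack) but already misses the window at 120 fps, which is exactly why the scaling behaviour at very high framerates is interesting.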
> One particularly impressive aspect of DLSS 3 vs DLSS 2 is its ability to scale at very high framerates. I guess there must be a limit to that, though. It'd be interesting to see what the cost is in terms of ms to create the new frame. Also, what happens if it misses the window? Clearly it's not an exact doubling in all the examples we've seen, which means some pairs of frames must be getting a new AI-generated frame and others not. Isn't that going to lead to uneven frame pacing?

And the poor bastards on 2060-tier cards, who actually need it to go from 30-40 fps to 60+, are left in the cold. What's the point of 200 fps in that game?
> Well, I took relative performance to include ray tracing performance, since they said "up to" 2x faster.

Wow, Nvidia comparing the 3080 to the 2080 and not the 2080 Ti? That's impossible, I am told.

Edit: It's in my post above.
> Wow, Nvidia comparing the 3080 to the 2080 and not the 2080 Ti? That's impossible, I am told.

Wow, Nvidia comparing cards which launched at the same price? How does that work?
> One particularly impressive aspect of DLSS 3 vs DLSS 2 is its ability to scale at very high framerates. I guess there must be a limit to that, though. It'd be interesting to see what the cost is in terms of ms to create the new frame. Also, what happens if it misses the window? Clearly it's not an exact doubling in all the examples we've seen, which means some pairs of frames must be getting a new AI-generated frame and others not. Isn't that going to lead to uneven frame pacing?

What do you mean, "Also what happens if it misses the window"?
> Right, then there's no reason to be all-negative about the platform. It's the largest singular 'system'.

I'm not negative - I'm sad.
> Nah. APUs will be a part of the consumer market, more so than before perhaps. Laptops have generally taken that approach for ages already.

No, sadly not.
> Cloud and streaming/services are going to be the end for your platform sometime, no idea when, but it's ultimately going to be the bigger market.

Yeah. And you know what? I can't wait for it.
> The high prices are a mirror and a result of the market as it is today.

People blame NV for being greedy, or the current crisis, the economy, Putin, whatever.
I'm looking forward to the 4080 10Gb for $700, 4080 8Gb for $600 and 4080 6Gb for $500 to flesh out the "midrange" cards.
> So, the RTX 4080 (12GB) has the same memory interface width as the GTX 1060 3GB? Nice move. OK, obviously the transfer rate will be much higher, but it still has to make do with roughly half the transfer rate of the TFLOPS-wise similarly configured RTX 3090 Ti. That's probably going to be a rude awakening in some applications.

Yeah, effective bandwidth is the big unknown at the moment, and its deficit may explain some scaling issues compared to Ampere.
> Yeah, effective bandwidth is the big unknown at the moment, and its deficit may explain some scaling issues compared to Ampere.

Currently, 24 Gbps is the highest announced speed for G6X, with the announcement dating from April this year. Going by that, a 192-bit card can reach 576 GB/s. That's less than a 3070 Ti with less than half the TFLOPS. Yes, caching can get you an extra mile or two, and SER may have an effect as well (lessening the burden on the caches so they can better serve other consumers). But all in all, I'm afraid it's too little.
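The 576 GB/s figure follows directly from the per-pin data rate and the bus width: peak bandwidth = rate in Gbps × bus width in bits / 8 bits per byte. A quick sketch of that arithmetic (the comparison line uses the RTX 3090 Ti's public 21 Gbps × 384-bit spec):

```python
# Peak memory bandwidth from per-pin data rate and bus width.

def peak_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s: per-pin rate (Gbps) times bus width (bits),
    divided by 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

print(peak_bandwidth_gbs(24, 192))  # 24 Gbps G6X on a 192-bit bus -> 576.0
print(peak_bandwidth_gbs(21, 384))  # RTX 3090 Ti: 21 Gbps x 384-bit -> 1008.0
```

That also squares with the "roughly half the 3090 Ti" observation above: 576 / 1008 is about 57%.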
We're all very good at feeling the difference between 20/30 fps and 60 fps, so I feel that's where we'll notice if something doesn't quite feel like it should with DLSS 3.
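There's simple arithmetic behind why low-framerate differences are so easy to feel: frame time falls off non-linearly with framerate, so 30 -> 60 fps removes far more milliseconds per frame than an equal jump higher up the curve. A small sketch:

```python
# Frame-time savings between framerate pairs: the same "doubling"
# saves far fewer milliseconds at high framerates than at low ones.

def frame_time_ms(fps: float) -> float:
    """Duration of one frame in milliseconds at the given framerate."""
    return 1000.0 / fps

for low, high in [(30, 60), (60, 120), (120, 240)]:
    saved = frame_time_ms(low) - frame_time_ms(high)
    print(f"{low} -> {high} fps: each frame is {saved:.1f} ms shorter")
```

Going from 30 to 60 fps shaves about 16.7 ms per frame, while 120 to 240 fps shaves only about 4.2 ms, which is why artifacts or pacing hiccups at low base framerates are the ones we're most likely to feel.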