AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)

I was under the assumption that inline was the ideal method of calling for rays going forward, since that way you wouldn't need separate draw calls for RT. Nvidia shouldn't perform any worse by going with inline; it just may not perform better than it currently does.

Not quite. There are pitfalls with inline raytracing. Essentially it makes sense for simple RT scenarios but may not be ideal if you’re doing something more convoluted where dedicated shaders may be better optimized by drivers/hardware.

https://github.com/microsoft/DirectX-Specs/blob/master/d3d/Raytracing.md#inline-raytracing

The motivations for this second parallel raytracing system are both the any-shader-stage property as well as being open to the possibility that for certain scenarios the full dynamic-shader-based raytracing system may be overkill. The tradeoff is that by inlining shading work with the caller, the system has far less opportunity to make performance optimizations on behalf of the app. Still, if the app can constrain the complexity of its raytracing related shading work (while inlining with other non raytracing shaders) this path could be a win versus spawning separate shaders with the fully general path.

It is likely that shoehorning fully dynamic shading via heavy uber-shading through inline raytracing will have performance that depends extra heavily on the degree of coherence across threads. Being careful not to lose too much performance here may be a burden largely if not entirely for the application and its data organization as opposed to the system.
 
The difference being that the current and upcoming >180 million 9th-gen console userbase will have >12GB of available VRAM and RDNA2 levels of RT performance, not 8/10GB of available VRAM with RTX 30 levels of RT performance.

RT performance will just be superior on NV hardware (or RDNA3/4) as opposed to consoles. Things do scale pretty well already. That, or RT will simply be missing altogether in more demanding games.
 
4GB high-end cards like the R9 290 and GTX 980 had pretty bad performance from 2015 onwards.

Not at console settings, which is the context of your statement. We've had this discussion before and weren't able to find any games that didn't run much better on the 4GB 980 than the PS4/XBO. Therefore it stands to reason that sufficiently powerful 8GB GPUs should be able to do the same this generation, although as mentioned before the SSDs might change that equation.
 
It is almost too easy to spot an "AMD raytracing" title these days... raytraced shadows and not much more.
I guess that is the price for having too little hardware raytracing acceleration in their SKUs.

If you look at PS5 Spiderman, then not really. We are seeing around RTX2080(Ti) RT performance in most situations, so delivering games with a full set of effects should be possible, but they will obviously run a lot faster on the RTX30x0 series. So seeing only RT shadows in use is more to do with performance positioning than anything else. Besides, at least for WoW I don't see too much need for full RT reflections, wrong type of game. Dirt on the other hand would be a much better pick, as cars do tend to be glossy :)
 
If you look at PS5 Spiderman, then not really. We are seeing around RTX2080(Ti) RT performance in most situations, so delivering games with a full set of effects should be possible, but they will obviously run a lot faster on the RTX30x0 series. So seeing only RT shadows in use is more to do with performance positioning than anything else. Besides, at least for WoW I don't see too much need for full RT reflections, wrong type of game. Dirt on the other hand would be a much better pick, as cars do tend to be glossy :)

The reflections are running at 1/2 the FPS at 1/4 the resolution, so not a stellar example, that one.
 
8th gen consoles bumped the RAM sizes by 16x, from 512MB to 8GB while staying with HDDs as the main storage.

9th gen consoles bumped the RAM sizes only by 2x, from 8 to 16GB, while moving to ultra-fast SSDs for storage.
It doesn't matter if RAM sizes were bumped by 2x or 16x. What matters is that consoles are used as the baseline for VRAM usage on all major engines because they're a much larger source of income to publishers than gaming PCs.
For the 8th-gen consoles the baseline was placed at around 6GB of RAM available for the GPU for rendering at 1080p. For the 9th-gen we're now looking at more than 12GB for rendering at 1440-2160p.


I imagine that no cards with 8+ GB VRAM will ever have any issues running multiplatform games with console level IQ.
If your goal with $500-700 graphics cards is as low as "not having issues" with the IQ of $500 consoles with GPUs that have half the processing throughput, then you've proven my point.



Devs will have to be careful and "creative" to support the cards with less than 16 GBs
Devs would have to be careful and creative with their PC ports, but not all of them will be. Also, I'm not saying 16GB is the minimum, as 12GB might be more than enough for most performance targets for a while (consoles don't have much more than that available for the GPU).
For example, this might put the 12GB 3060 oddly positioned against the 8GB 3060 Ti for higher resolutions in the long run.



Not at console settings, which is the context of your statement. We've had this discussion before and weren't able to find any games that didn't run much better on the 4GB 980 than the PS4/XBO.
Don't start skewing my statements. I made no performance comparison claims between the GTX980 and the 2013 consoles.
I don't know who set out to look for games that ran "much better" on the 2013 $400 consoles with 1.3/1.8TFLOPs than nvidia's $550 card from 2014 with 5TFLOPs, but if they did then it was a pretty stupid quest to start with.

The 4GB cards (R9 290, GTX 980/970, Fury) aged badly from 2015 onwards, on the resolutions they were marketed to work at (1440p and 4K), as typical VRAM occupancy at 1440p rose quite drastically between games released up to 2015 and what we had by 2017.
At Ultra Preset,
1440p Battlefield 4: 2280MB
1440p Battlefield 5: 5490MB

4K Battlefield 4: 2988MB
4K Battlefield 5: 6990MB

The practical difference is that while the Fury X was beating the 4GB 290X by almost 40% in Battlefield 4 at 1440p back in 2015, in Battlefield 5 it was losing by 24% to the 390X (the same card as the 290X but with 8GB of VRAM).
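
For reference, here's a quick back-of-the-envelope check of those figures against a nominal 4GB frame buffer. The occupancy numbers are the ones quoted above; the flat 4096MB budget is an assumption (it ignores driver reservations and the difference between allocation and actual use), so treat it as a rough sketch only:

```python
# Rough sketch: compare the Ultra-preset VRAM occupancy figures quoted above
# against a nominal 4GB (4096MB) frame buffer, as on the R9 290X / GTX 980 / Fury.
# The flat 4096MB budget is an assumption and ignores driver/OS reservations.

VRAM_BUDGET_MB = 4096

occupancy_mb = {
    ("Battlefield 4", "1440p"): 2280,
    ("Battlefield 5", "1440p"): 5490,
    ("Battlefield 4", "4K"): 2988,
    ("Battlefield 5", "4K"): 6990,
}

for (game, res), used in occupancy_mb.items():
    headroom = VRAM_BUDGET_MB - used
    verdict = "fits" if headroom >= 0 else "spills over"
    print(f"{game} @ {res}: {used} MB used, {headroom:+d} MB vs 4GB -> {verdict}")
```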

So don't worry, the VRAM usage spike due to 8th-gen consoles didn't just affect your precious GTX980. Perhaps it affected AMD cards even more.
 
If you look at PS5 Spiderman, then not really. We are seeing around RTX2080(Ti) RT performance in most situations, so delivering games with a full set of effects should be possible, but they will obviously run a lot faster on the RTX30x0 series. So seeing only RT shadows in use is more to do with performance positioning than anything else. Besides, at least for WoW I don't see too much need for full RT reflections, wrong type of game. Dirt on the other hand would be a much better pick, as cars do tend to be glossy :)

The ray tracing in MM PS5 doesn't scream 2080/Ti in any imaginable way.

It doesn't matter if RAM sizes were bumped by 2x or 16x. What matters is that consoles are used as the baseline for VRAM usage on all major engines because they're a much larger source of income to publishers than gaming PCs.
For the 8th-gen consoles the baseline was placed at around 6GB of RAM available for the GPU for rendering at 1080p. For the 9th-gen we're now looking at more than 12GB for rendering at 1440-2160p.

Last time, consoles had 8GB of GDDR5, while 2GB was common on PC (2013). This time around, all GPUs basically match or exceed what the consoles have available as VRAM.
Consoles are used as a baseline because they're the lowest common denominator. I also doubt they're the highest source of income anyway, especially not when comparing platform for platform. Combining Switch, PS and XB etc. then maybe, but even then not for all games.

If your goal with $500-700 graphics cards is as low as "not having issues" with the IQ of $500 consoles with GPUs that have half the processing throughput, then you've proven my point.

A gamer doesn't have to lower everything to console settings to reduce VRAM usage.

The 4GB cards (R9 290, GTX 980/970, Fury) aged badly from 2015 onwards,

In that case, so did the consoles. Multiplat games started to suffer more and more as time went on, to the point where we landed at CP2077.
I remember DF mentioning in a CoD analysis that 'it's really getting time for a new gen', etc.


Those are stupid comparisons to begin with. 1440p and 4K were kind of unheard of for the 2013 consoles. Someone getting a 3060 or even a 3080 now isn't looking at 8K ultra five years down the line.

So don't worry, the VRAM usage spike due to 8th-gen consoles didn't just affect your precious GTX980. Perhaps it affected AMD cards even more.

A 3GB 7950, a 2012 GPU, outperformed both base consoles through the whole generation. A 7870 tags along fine too for the most part, bad ports aside.
 
Using console PRICE as an argument is doomed to fail, as some generations the hardware was sold at a LOSS... and recouped through the higher cost of games on consoles... just to be nit-picky.
So the PRICE of the hardware for a console is nothing but a fallacy...
 
Using console PRICE as an argument is doomed to fail

Yes you have a lot of money to spend and price isn't important to you nor should it be important to anyone else.
You told us as much already.

Though you conveniently forgot to mention the compute throughput disparity and year of release because those wouldn't help your argument.
 
Yes you have a lot of money to spend and price isn't important to you nor should it be important to anyone else.
You told us as much already.

Though you conveniently forgot to mention the compute throughput disparity and year of release because those wouldn't help your argument.

Again, hardware being sold at a loss while games are more expensive cannot be ignored.

Trying to hide the real cost is a fallacy...like it or not.
 
It doesn't matter if RAM sizes were bumped by 2x or 16x. What matters is that consoles are used as the baseline for VRAM usage on all major engines because they're a much larger source of income to publishers than gaming PCs.
Let's just ignore the fact that at the moment of the PS4/XBO launch a typical PC GPU had 1-2GB of VRAM, which is 1/8-1/4 of what the consoles had and which actually had a profound effect on PC GPU VRAM sizes, while 8GB GPUs now are already at 1/2 of the new consoles. It doesn't matter (c)
We should also ignore the fact of the radical storage change in consoles, which leads to games needing less VRAM to achieve the same IQ on the new consoles than they needed on the previous ones.
Anything which goes against the push for the oh-so-needed 16GB must be dismissed and ignored. No thinking or evidence needed.

For the 8th-gen consoles the baseline was placed at around 6GB of RAM available for the GPU for rendering at 1080p.
That "baseline" was and still is 2GBs. Any 4GB GPU has no issues with rendering like 99,9% of current gen games in 1080p at console level of settings.

For the 9th-gen we're now looking at more than 12GB for rendering at 1440-2160p.
More than 16 even I'd say cause why not while we're just pulling stuff out of various places?
 
Again, hardware being sold at a loss while games are more expensive cannot be ignored.

And the need to pay to be able to play online for most games... Yes, upfront the PC is going to be more expensive, much more so if you want high-end at launch.
But you also get more. With, say, a 6800/XT you get double the capacity of what the PS5 delivers in raw GPU power, and probably better RT too, as it scales with TF down the range. In the case of Ampere, you get much, much more rasterization power (like RDNA2 dGPUs do), but also ray tracing and reconstruction tech.
When talking GPU, the consoles are closer to 2018/last-gen GPUs (Turing) in both rasterization (around a 2080 at best for the PS5) and RT, which sits around 2060 performance.
You pay less, you get less. It's all about what you want, which is going to differ for everyone.

Same for the CPU: it's basically a generation behind (Zen 2), and much lower clocked at that. Also, no serious system is going to be built with just 16GB of total RAM; there's also system RAM aiding things. Any RDNA2 card today already sits at 16GB of VRAM (with Infinity Cache), whereas the PS5 seems rather bandwidth limited (like the Pro was, actually).
Yes, the SSD is high-end, but it's already slower than the fastest PC drives before compression, until DirectStorage hits and we sit much higher.

Price wars usually don't find their way into technical discussions, but from time to time they do, as seen here.

Let's just ignore the fact that at the moment of the PS4/XBO launch a typical PC GPU had 1-2GB of VRAM, which is 1/8-1/4 of what the consoles had and which actually had a profound effect on PC GPU VRAM sizes, while 8GB GPUs now are already at 1/2 of the new consoles. It doesn't matter (c)

Yes, what I pointed out before: VRAM was rather limited across midrange products back then, though 3GB and 6GB 7970 GPUs already existed in 2012. The first Titan (2013) had 6GB too. Anyway, even with 2GB, I'm quite surprised how well those held up, the AMD GCN GPUs of that time. Most PS4 games actually never went much above 3GB of VRAM usage. In no way will games on PS5 consume anywhere close to 16GB for just the GPU.

That "baseline" was and still is 2GBs. Any 4GB GPU has no issues with rendering like 99,9% of current gen games in 1080p at console level of settings.

Exactly. But tottentranz of course came with 1440p/4K Ultra max settings on GPUs launched in the same timeframe as the 2013 base consoles.
 
We should also ignore the fact of the radical storage change in consoles, which leads to games needing less VRAM to achieve the same IQ on the new consoles than they needed on the previous ones.
No, you shouldn't. Most gaming PCs won't have the same up-to-6GB/s I/O of the Series X, much less the up-to-22GB/s I/O of the PS5.
Since PC game devs can't make games for the PC that require a >3GB/s NVMe, you can count on VRAM allocation on PC dGPUs being larger than on consoles.
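
To put those throughput figures in perspective, here's a tiny illustration of how long it would take to (re)stream a chunk of asset data at the rates mentioned in this thread. The 10GB working-set size is a hypothetical example, not a quoted figure; the point is simply that slower I/O means more data has to stay resident in VRAM:

```python
# Rough illustration of the I/O argument above: time to (re)stream a block of
# asset data at the throughput figures mentioned in the thread.
# The 10GB working-set size is a hypothetical example, not a quoted number.

WORKING_SET_GB = 10.0  # hypothetical amount of data to bring back into VRAM

io_rates_gb_per_s = {
    "~3 GB/s NVMe (common PC baseline)": 3.0,
    "Series X (up to ~6 GB/s)": 6.0,
    "PS5 best case (up to ~22 GB/s)": 22.0,
}

for label, rate in io_rates_gb_per_s.items():
    seconds = WORKING_SET_GB / rate
    print(f"{label}: {seconds:.2f} s to stream {WORKING_SET_GB:.0f} GB")
```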


Anything which goes against the push for the oh-so-needed 16GB must be dismissed and ignored.
Yes, you need to somehow negate the VRAM advantage of the RX6800 series over the RTX3070/3080.
Let me guess, you think RT performance is of the utmost importance for long-term GPU performance?



More than 16 even I'd say cause why not while we're just pulling stuff out of various places?
Pulling out of what places?
The Series X has 16GB total, out of which 2.5GB are allocated for the OS. The PS5 should be similar.

That leaves 13.5GB for games, of which the majority is to be used by the GPU. Therefore, 12GB should be sufficient for getting VRAM parity with most multiplatform titles in the long run. 8GB should not.
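
To make the arithmetic explicit, here's a minimal sketch of that budget. The 16GB total and ~2.5GB OS reservation are the figures given above; the CPU-side shares in the loop are purely hypothetical, just to show where 8GB, 12GB and 16GB cards land against the remainder:

```python
# Minimal sketch of the console memory budget argument above.
# Figures from the thread: 16GB total, ~2.5GB reserved for the OS.
# The CPU-side shares below are hypothetical, not official numbers.

TOTAL_GB = 16.0
OS_RESERVED_GB = 2.5
game_budget_gb = TOTAL_GB - OS_RESERVED_GB  # 13.5GB left for the game

for cpu_side_gb in (2.0, 3.0, 4.0):  # assumed CPU-side (non-GPU) share
    gpu_side_gb = game_budget_gb - cpu_side_gb
    print(f"CPU-side {cpu_side_gb:.1f}GB -> roughly {gpu_side_gb:.1f}GB GPU-visible")
    for card_vram_gb in (8, 12, 16):
        verdict = "parity or better" if card_vram_gb >= gpu_side_gb else "falls short"
        print(f"  {card_vram_gb}GB card: {verdict}")
```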
 
No, you shouldn't. Most gaming PCs won't have the same up-to-6GB/s I/O of the Series X, much less the up-to-22GB/s I/O of the PS5.

I highly doubt SSDs will account for, and replace, VRAM to an extent where image quality matters. The 22GB/s figure isn't anything to go by either in most use cases.
Also, in a PC you usually have large amounts of system RAM, which the consoles lack (only GDDR6 RAM). Take an RDNA2 16GB GPU with anything between 16 and 32GB of DDR4: the system RAM is much faster and far less latency-intrusive than the SSD found in the PS5.

Most gaming PCs won't have the same I/O speed, but due to system RAM they might not even need it right now. Anyway, DirectStorage and RTX IO promise numbers way higher than what the PS5 does. Talking about right now, CP2077 sure does show some impressive SSD streaming on PC: basically no load times and ultra-fast streaming when speeding through the big city. The studio even mentioned that because it was designed around fast I/O, older hardware suffers.

That leaves 13.5GB for games, of which the majority is to be used by the GPU. Therefore, 12GB should be sufficient for getting VRAM parity with most multiplatform titles in the long run. 8GB should not.

Maybe; 8GB does seem a bit low indeed. At the same time, that 8GB isn't as bandwidth limited as the consoles are, so it can be filled much faster. More RAM definitely doesn't directly mean better. A 256MB 9600 XT wasn't performing better than a 128MB 9700 Pro in the vast majority of games either. What I want to say is, it's not solely about memory amount when gauging performance, especially not between different architectures, and even more so not between different architectures in a console vs. a PC.
Like noted above, their setups are a whole lot different. On the consoles, the 16GB is shared between everything basically, both in amount and bandwidth.

We can point to current benchmarks, but then we'll be told that's invalid and to 'wait for later games'. On the other hand, when comparing Ampere to PS5/RDNA2, current games do matter :p
 