AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)

Really weird that WCCFTech is covering this stuff before other outlets. Are they trying to be seen as a serious outfit?

Hmmm SAM and DLSS in the same graph. That’s sure to rattle some nerves.

He said he's trying to make that his niche, filling the gaps other sites aren't covering.
 
December 18, 2020
As first benchmarks from the ComputerBase community show, an AMD Ryzen 7 2700X (test), based on the older Zen+ architecture, also benefits from AMD Smart Access Memory (SAM).

In Assassin's Creed Valhalla (test), community member "Strugus" achieved about 15 percent more frames per second with an AMD Radeon RX 6800 graphics card and Re-Size BAR enabled than without the feature.
AMD Smart Access Memory: Zen+ and Zen 2 also support full VRAM access - ComputerBase
 
Given how well Q2RTX still runs on the Radeon RX 6800, I tried it on a GTX 1080 just for giggles. At Q2-esque resolutions it's borderline playable in single-player mode. At default quality and 1024x768 I'm getting around 30-ish fps, which is basically more than what my rig achieved when I first played through Q2 before the turn of the millennium.

Just wish AMD would enable shader-based DXR/VKRT for RX 5700 and Vega as well. Could spark more interest in RT and desire to upgrade for Radeon users.
 
Heck, that's better than when I first played it. 640x480 on a Voodoo Rush with decent performance. I could do 800x600, but the performance just wasn't there for competitive play. Ah the memories. I was finally happy with performance at 1024x768 once I got a Voodoo 2.

Regards,
SB
 
You needed two Voodoo 2s for 1024x768, didn't you?
 
Trying to remember, it's been so long. For competitive play yeah, you'd need SLI to be able to hit 60 FPS. But you could do 30 FPS at 1024x768, IIRC.

With a single card, I would lower the resolution down to 640x480 for competitive play.

This was when I was still willing to do 30 FPS for single-player games. By the time Unreal Tournament came around, 30 FPS was no longer good enough for me, even for single-player modes.

Regards,
SB
 
I seem to recall 800x600 was the max resolution for one card (even the 12 MB versions), and two cards enabled 1024x768, but like you say, it was a long time ago.

Can you connect old D-Sub or component CRTs up to modern graphics cards?
 
OK, your memory is better than mine. I just had a look and that's correct, you needed SLI to even do 1024x768.

Quake II Benchmarks - Voodoo² Accelerator Review September 1998 | Tom's Hardware

So it looks like what I was remembering as the Voodoo Rush was actually Quake 2 on a single V2 card. I must have gotten the V2 around when Q2 came out and got the timing confused when thinking back.

Regards,
SB
 
Same here, I was on a Voodoo2 for a while. You also had to think about color depth, as it could impact performance on those (sounds strange these days :p ).
1024x768 was a beastly resolution back then lol.
 
I was a Riva 128 guy and then went to the S3 Savage and Riva TNT. I never had a Voodoo myself, but I built a few PCs with all the versions over the years. The Voodoo2 was really good for its time, but then nVidia and ATi started cranking up the pressure.
 
Regarding the 800x600 limit on a single card: the V2 had a fixed 4 MB for the frame buffer. That would fit two 16-bit frames (standard double buffering) and a 32-bit depth buffer, all at 800x600 resolution.
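A quick back-of-the-envelope check of that budget, assuming the layout described above (two 16-bit color buffers plus a 32-bit depth buffer); the Python below is purely illustrative arithmetic, not real driver or hardware code:

# Frame-buffer budget check for the layout described in the post.
BYTES_PER_PIXEL = 2 + 2 + 4            # front + back color (16-bit each) + 32-bit depth
FRAME_BUFFER_BYTES = 4 * 1024 * 1024   # 4 MB of frame-buffer memory per Voodoo2

for width, height in [(800, 600), (1024, 768)]:
    needed = width * height * BYTES_PER_PIXEL
    verdict = "fits" if needed <= FRAME_BUFFER_BYTES else "does not fit"
    print(f"{width}x{height}: {needed / 2**20:.2f} MB ({verdict} in 4 MB)")

# 800x600 needs about 3.66 MB and fits on one card; 1024x768 needs about 6.00 MB and
# does not, which is why two cards in SLI (each storing only half the scanlines) were needed.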
 
This is one of the reasons I'm suggesting ditching HW tracing. You can do what you want in compute and forget the API restrictions.

The other is that the baseline requirement isn't Nvidia cards; it isn't even the PS5/SX. It's the Series S that all high-end titles must include. Core features have to run on there, the console where Watch Dogs Legion looks like a PS2 game thanks to how low-res the raytracing is. Call of Duty doesn't even enable raytracing.

That's why HW raytracing is potentially too costly even for devs who think it's a good idea.
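To make the "do it in compute" suggestion a bit more concrete, here is a minimal sketch of the kind of intersection test you would hand-roll in a compute shader instead of going through the fixed API. It's written in Python purely for readability, and the function name and parameters are illustrative, not from any real engine or API.

def ray_hits_box(origin, direction, box_min, box_max):
    """Classic slab test: does the ray origin + t*direction (t >= 0) hit the AABB?"""
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        d = direction[axis]
        if abs(d) < 1e-12:
            # Ray is parallel to these slabs: it misses unless the origin lies between them.
            if not (box_min[axis] <= origin[axis] <= box_max[axis]):
                return False
            continue
        t0 = (box_min[axis] - origin[axis]) / d
        t1 = (box_max[axis] - origin[axis]) / d
        if t0 > t1:
            t0, t1 = t1, t0              # keep t0 as the nearer slab crossing
        t_near = max(t_near, t0)         # latest entry over all axes
        t_far = min(t_far, t1)           # earliest exit over all axes
        if t_near > t_far:
            return False                 # entry after exit: the ray misses the box
    return True

# Example: a ray along +X from the origin hits a unit box centred at (5, 0, 0).
print(ray_hits_box((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (4.5, -0.5, -0.5), (5.5, 0.5, 0.5)))  # True

In a compute-based traverser you would run tests like this against your own BVH layout, which is exactly the flexibility the fixed-function API path doesn't give you.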
My issue with it is efficiency. I see no way that it can scale down in the future to mass-market prices and power draws, and that doesn't improve as users keep moving to ever more mobile devices. In which case it is reduced to being a cool effect you can switch on to see how it looks. And making lighting function well with RT will be work and effort spent on top of the other path(s).
What is done now to reduce RT costs to reasonable levels basically results in exchanging one set of (relatively minor) artifacts for another set of (relatively minor) artifacts, at the cost of die size, cost, and power. I can see Nvidia's interest in this, but not consumers'. Nor can I see "bigger, more expensive, higher power draw" being a recipe for the future.

In the past, these concerns were solved by improvements in lithography, but the days when a decade's worth of lithographic improvement delivered a factor of fifty higher performance along with a reduction in silicon area are gone. The last decade only delivered roughly a factor of four at iso-power and die size, and the future outlook is even grimmer. (Here is a recent piece that analyses the Apple A14 from a silicon perspective, and that's bad enough. HP silicon faces even steeper challenges.)

This time, lithography won’t come to the rescue.
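For a sense of scale on those two numbers, a quick, illustrative annualization (my own arithmetic, not taken from the linked analysis):

# Rough annualization of the decade-scale scaling factors mentioned above.
for label, factor in [("factor of fifty per decade", 50.0), ("factor of four per decade", 4.0)]:
    per_year = factor ** (1 / 10) - 1
    print(f"{label}: roughly {per_year * 100:.0f}% per year")
# factor of fifty per decade: roughly 48% per year
# factor of four per decade: roughly 15% per year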
 