> Yeah, it was the 60/256b part.
Instead, they chose the much more profitable option.
> Instead, they chose the much more profitable option.
Yeah.
> Radeon 6900XT Liquid Cooled is official.
Too bad it's SI-only, unless you hunt for bulk SKUs.
> Looks like 18Gbps GDDR6 modules exist after all.
If those are genuine 18Gbps parts then woohoo, it only took 3.5 years since the announcement.
Yeah, it was the 60/256b part.
Way more than that for maximum funny.
> 6800 would've been better for efficiency, not so much for absolute performance.
Absolute performance too, given it was a 150W part by itself.
> Will RDNA3 clock even higher than 3GHz, or is it an IPC improvement?
Both.
Will RDNA3 clock even higher than 3GHz, or is it an IPC improvement?
> It's also going to be harder to keep wide and slow architectures busy given the regression in target resolutions. Upscaling tech will be all the rage for this generation.
This was where my head was going; glad I'm not the only one thinking it. Here's hoping FSR (or whatever) is sufficiently competitive with DLSS.
> I really doubt that we'll see any kind of resolution regression on PC in the future. 1080p is the most popular resolution on PC even today.
The point is, until cards can actually run RT at these enormous resolutions at 60+ FPS consistently, most users will be running at reduced resolutions with something like DLSS or FSR on top to upscale. 1080p is certainly a significant resolution regression (one quarter of the pixels) compared to 4K.
> The point is, until cards can actually run RT at these enormous resolutions at 60+ FPS consistently, most users will be running at reduced resolutions with something like DLSS or FSR on top to upscale. 1080p is certainly a significant resolution regression (one quarter of the pixels) compared to 4K.
4K isn't a typical resolution for PC gaming right now, 1080p is. Thus it's not that we'll see a regression of resolution but more of a prolongation of the 1080p-1440p range into the future thanks to reconstruction techniques. So a "wide and slow" GPU will work about the same as it does now, probably a bit better due to games which will target native 4K anyway (4K displays should become more widespread on PC over the next few years) or use more complex shading.
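For reference, a quick sketch of the pixel math behind that "one quarter" figure; the resolution list is just the common PC targets, added here for illustration:

```python
# Pixel counts behind the "one quarter" comparison of 1080p vs 4K.
resolutions = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K": (3840, 2160),
}

pixels = {name: w * h for name, (w, h) in resolutions.items()}
for name, count in pixels.items():
    print(f"{name}: {count:,} pixels ({count / pixels['4K']:.0%} of 4K)")
# 1080p: 2,073,600 pixels (25% of 4K)   <- exactly one quarter
# 1440p: 3,686,400 pixels (44% of 4K)
# 4K:    8,294,400 pixels (100% of 4K)
```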
I guess his point was that not a lot of people play in 4K.
That's true. There's a huge gulf right now between what reviewers are doing and how PC gamers actually play games. So I should rephrase my earlier comment to say there will be a regression in the "internal" resolution that games are benchmarked at.
RDNA2 would probably have to be clocked at around 5GHz and have 30Gbps GDDR6 to match Nvidia's performance in RT.
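A rough back-of-envelope on what that hypothetical would mean, purely illustrative; the 16Gbps / 256-bit / ~2.25GHz stock baseline for Navi21 is an assumption here:

```python
# Rough scaling implied by the "5GHz core + 30Gbps GDDR6" hypothetical versus an
# assumed stock Navi21 setup (256-bit bus, 16Gbps GDDR6, ~2.25GHz game clock).
BUS_WIDTH_BITS = 256

def gddr6_bandwidth_gbs(gbps_per_pin: float, bus_bits: int = BUS_WIDTH_BITS) -> float:
    """Peak theoretical bandwidth in GB/s for a GDDR6 configuration."""
    return gbps_per_pin * bus_bits / 8

stock_bw = gddr6_bandwidth_gbs(16.0)   # 512 GB/s
hypo_bw = gddr6_bandwidth_gbs(30.0)    # 960 GB/s
print(f"memory bandwidth: {stock_bw:.0f} -> {hypo_bw:.0f} GB/s ({hypo_bw / stock_bw:.2f}x)")
print(f"core clock:       2.25 -> 5.00 GHz ({5.0 / 2.25:.2f}x)")
```

In other words, the hypothetical amounts to roughly 1.9x the memory bandwidth and 2.2x the core clock of the assumed baseline.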
> Which brings up an interesting possibility for the RDNA2 refresh.
No such thing.
> The RDNA2 chips can overclock close to 3GHz with enough voltage, but the stock Navi21 cards usually run at <2.3GHz.
I'm still not convinced these clocks are 'real' (as in the card actually fully utilizes and benefits from them); it just seems that the GPU clock goes sky high when there is some sort of bottleneck somewhere other than the CUs themselves. What I mean is, XTXH cards are not locked to the silly 2150MHz memclock, yet they are not leaps and bounds ahead of the regular 6800 XT/6900 XT cards at 4K even when extremely overclocked and fine-tuned in terms of both core clock and memory clock. I wish I could get one at a non-eye-gouging price; it would certainly be interesting to test, especially with water cooling.
> I'm still not convinced these clocks are 'real'
They are; I've had my boys torture two whole 6800XTs with +15% PLs and not.
> especially with water cooling.
Done.
> XTXH cards are not locked to the silly 2150MHz memclock
Memclk does nothing for N21.
> Memclk does nothing for N21.
So you mean we are basically bound by the IC clock, or maybe the ROPs? It'd be nice to be able to tweak the cache clock if that's possible.
> Memclk does nothing for N21.
In games, most probably, but it could be of some use in specific memory-bound benches (like Fire Strike GT2, Time Spy GT2, some GPGPU stuff).
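For anyone who does get an XTXH on water, a minimal sketch of the kind of memory-bound GPGPU test where memclk scaling could show up. It assumes a PyTorch build with ROCm (or CUDA) support; the buffer size and iteration count are arbitrary choices:

```python
import torch

def copy_bandwidth_gbs(n_bytes: int = 1 << 30, iters: int = 50) -> float:
    """Time repeated device-to-device copies and return effective bandwidth in GB/s."""
    dev = torch.device("cuda")          # ROCm builds also expose the "cuda" device
    # Buffer well above N21's 128MB Infinity Cache, so the copy stays DRAM-bound.
    src = torch.empty(n_bytes, dtype=torch.uint8, device=dev)
    dst = torch.empty_like(src)

    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    dst.copy_(src)                      # warm-up
    torch.cuda.synchronize()

    start.record()
    for _ in range(iters):
        dst.copy_(src)
    end.record()
    torch.cuda.synchronize()

    seconds = start.elapsed_time(end) / 1000.0
    # Each copy reads n_bytes and writes n_bytes.
    return (2 * n_bytes * iters) / seconds / 1e9

if __name__ == "__main__":
    print(f"~{copy_bandwidth_gbs():.0f} GB/s effective copy bandwidth")
```

Running this at a few different memclk offsets would show whether the copy rate tracks memory clock at all, or whether something else is the bottleneck.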