RDNA4

It's low-hanging fruit like this that gives Nvidia the market share it has. AMD's HW encoding quality is really bad with both AVC and AV1: encoding anything at Twitch's standard 6 Mbit settings, for example, with HQ/CBR/VBR, any barely moving image comes out as a bad mix of blurry and blocky with no detail preservation whatsoever. I'm not even a streamer, but sometimes you want to record something for a guide or to share a funny clip, and even simple things like this make me uneasy about going with AMD next round, since little to nothing has been done to improve their encoding quality; instead they'd rather waste a few million chasing the overblown "AI" bubble.
AMD aren't really spending any more than the other players on integrated consumer AI applications, since they're counting on partners like Microsoft to do most of the heavy lifting for them. Even if that fails to amount to anything, they still have a fallback plan: expose a programmable interface for their XDNA NPUs so software developers can write arbitrary programs for them. And who really cares what they do with their Instinct products, since those are siloed off from their other product segments?

If you're looking for higher-quality/higher-data-rate live video encoding, integrated video engines aren't the answer, since they're balanced around performance and a low hardware footprint. A modern high-end CPU or an ASIC is more suitable for that purpose ...
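For anyone who wants to test this themselves, here is a rough sketch of the comparison being discussed, assuming an ffmpeg build with AMD's AMF encoder available. Exact option names can differ between ffmpeg versions, so treat the flags as a starting point, and the input filename is just a placeholder:

# Rough comparison of AMD hardware (AMF) vs. CPU (x264) encoding at a
# Twitch-style 6 Mbit/s CBR target. Assumes ffmpeg is on PATH and was
# built with AMF support; "input.mp4" is a placeholder capture file.
import subprocess

SOURCE = "input.mp4"          # placeholder: your own recording
RATE   = ["-b:v", "6M", "-maxrate", "6M", "-bufsize", "12M"]

# Hardware encode via AMD AMF (rate-control option names may vary by build).
subprocess.run(["ffmpeg", "-y", "-i", SOURCE, "-c:v", "h264_amf",
                "-rc", "cbr", *RATE, "out_amf.mp4"], check=True)

# CPU encode via x264 at a slower preset as a quality reference point.
subprocess.run(["ffmpeg", "-y", "-i", SOURCE, "-c:v", "libx264",
                "-preset", "slow", *RATE, "out_x264.mp4"], check=True)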
 
It's low-hanging fruit like this that gives Nvidia the market share it has. AMD's HW encoding quality is really bad with both AVC and AV1: encoding anything at Twitch's standard 6 Mbit settings, for example, with HQ/CBR/VBR, any barely moving image comes out as a bad mix of blurry and blocky with no detail preservation whatsoever. I'm not even a streamer, but sometimes you want to record something for a guide or to share a funny clip, and even simple things like this make me uneasy about going with AMD next round, since little to nothing has been done to improve their encoding quality; instead they'd rather waste a few million chasing the overblown "AI" bubble.
I really doubt most consumers know about any of these things, let alone care about them enough for that to be the reason Nvidia has the market share it has now.
 
These improvements seem minor, though, and are unlikely to provide much of a performance gain.
There are others, though, I presume, hidden in that image.
 
What is the 64B RT node? I thought BVH data was currently 64B and moving to 128B to match the cache line size? Am I missing something?
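For what it's worth, a back-of-the-envelope sizing makes the 64B vs 128B question concrete. The layout below is purely hypothetical (a generic 4-wide box node, not AMD's actual format), but it shows how the two sizes map onto half or a full cache line:

# Purely illustrative sizing of a hypothetical 4-wide BVH box node;
# the real RDNA node layout isn't public, so the fields are guesses.
CACHE_LINE = 128                     # bytes per cache line

child_aabb = 6 * 4                   # min/max xyz as 32-bit floats = 24 B
children   = 4                       # a 4-wide (BVH4) node
pointers   = 4 * 4                   # one 32-bit child reference each

node_bytes = children * child_aabb + pointers   # 96 + 16 = 112 B payload
print(node_bytes, "B payload -> fits one", CACHE_LINE, "B node per cache line")
print("a 64 B node packs", CACHE_LINE // 64, "nodes per cache line")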
 

3.4 GHz. Right on track for slightly >4070 Ti Super level performance, adjusted for Frontiers of Pandora at 1440p. If this could just launch before/during the holiday season for <= $599 it'd do so well. Don't wait for CES, AMD; that's when Blackwell gets shown off, and the price/performance advantage won't be nearly as much :cry:
RDNA4 will fight Blackwell, not Ada. And unfortunately for AMD, Nvidia will increase the clocks too (albeit not the biggest perf boost factor).
 

3.4 GHz. Right on track for slightly >4070 Ti Super level performance, adjusted for Frontiers of Pandora at 1440p. If this could just launch before/during the holiday season for <= $599 it'd do so well. Don't wait for CES, AMD; that's when Blackwell gets shown off, and the price/performance advantage won't be nearly as much :cry:
A few months ago I was expecting it this year, but if they are planning on CES then they need to get it out before CNY.
Some are suggesting Nvidia may not have anything but the top two chips until the middle of 2025, and they won't be less than $1k ($999 xx70 Ti).
RDNA4 will fight Blackwell, not Ada. And unfortunately for AMD, Nvidia will increase the clocks too (albeit not the biggest perf boost factor).
Eh, ~10%, to ~2.8 GHz. They can maybe push it to tickle 3 GHz but likely won't break it.
They are doubling down on wide & slow, at ~765 mm².
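To put rough numbers on the wide-vs-fast trade-off: assuming throughput scales with unit count times clock (a big simplification that ignores IPC and memory changes), a toy model looks like this. Every figure is a placeholder, not a spec:

# Toy model: relative throughput ~ execution-unit count x clock.
# All numbers are placeholders to illustrate the trade-off,
# not actual specs for Ada, Blackwell or RDNA4.
def rel_perf(units, clock_ghz, base_units, base_clock_ghz):
    return (units / base_units) * (clock_ghz / base_clock_ghz)

# e.g. 30% more units at only ~10% higher clock ("wide & slow") ...
print(rel_perf(units=1.30, clock_ghz=2.8, base_units=1.0, base_clock_ghz=2.5))
# ... vs the same unit count pushed to a hypothetical 3.4 GHz ("narrow & fast")
print(rel_perf(units=1.00, clock_ghz=3.4, base_units=1.0, base_clock_ghz=2.5))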
 
A few months ago I was expecting it this year, but if they are planning on CES then they need to get it out before CNY.
Some are suggesting Nvidia may not have anything but the top two chips until the middle of 2024, and they won't be less than $1k ($999 xx70 Ti).

Eh, ~10%, to ~2.8 GHz. They can maybe push it to tickle 3 GHz but likely won't break it.
They are doubling down on wide & slow, at ~765 mm².

It is the middle of 2024 though. We are 7 months into the year, almost 8.
 

3.4ghz. Right on track for slightly >4070tiS level performance, adjusted for Frontiers of Pandora 1440p. If this could just launch before/during the holiday season for <= $599 it'd do so well. Don't wait for CES AMD that's when Blackwell gets shown off, the price/performance advantage won't be nearly as much :cry:
This would be a 50ish% improvement in rasterization and much more in RT. Seems highly unlikely in the current landscape.
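As a sanity check on claims like that, the implied uplift is just a ratio of two performance figures; the numbers below are placeholders, not benchmark results:

# Percent uplift implied by two performance figures (fps, index, etc.).
# The example values are placeholders, not real benchmark numbers.
def uplift(new, old):
    return (new / old - 1.0) * 100.0

print(f"{uplift(90.0, 60.0):.0f}% faster")   # 90 vs 60 fps -> 50% faster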
 
We don't need to wait to know you are saying this in bad faith. Since when is "core for core" the same as IPC?

You think you are better than Kepler? You got caught lying, unnecessarily and on purpose.
That's exactly what it means, doesn't it? Kepler pushed that 40% faster figure pretty much as a fact right until the very end, and it's obvious he got it wrong.
 
That's exactly what it means, doesn't it?
It means the performance per clock, usually measured on a single core but not necessarily.
So chip/core A at X GHz vs chip/core B at X GHz (the actual clock is irrelevant, it just needs to be the same for both).
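In code form the distinction is easy to see: perf per clock falls out when you normalise two results to the same clock, whereas "core for core" only fixes the core count and lets clocks differ. A small sketch with made-up numbers:

# Made-up numbers to illustrate perf-per-clock vs. "core for core".
def perf_per_clock(score, clock_ghz):
    # Normalise a benchmark score by clock: higher = more work per cycle.
    return score / clock_ghz

chip_a = {"score": 100.0, "clock_ghz": 2.5, "cores": 8}
chip_b = {"score": 110.0, "clock_ghz": 3.0, "cores": 8}

# Same core count ("core for core"): B wins outright, 110 vs 100 ...
# ... but per clock, A actually does more work per cycle:
print(perf_per_clock(chip_a["score"], chip_a["clock_ghz"]))  # 40.0
print(perf_per_clock(chip_b["score"], chip_b["clock_ghz"]))  # ~36.7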
 