AMD Execution Thread [2024]

For you, yes. I think hardware enthusiasts who talk about this stuff online, much like gamers online in general, have a huge tendency to think they're more representative of the overall market than they really are, though.

You also seem to have timed it well and got a bit lucky too. Remember, the plan was originally NOT to support Zen 3 on earlier AM4 motherboards. Plus, the entire market isn't all buying into the earliest generation of a given platform, which lessens the scope for upgrading. Most people simply won't want to upgrade to a CPU that's just one architectural generation ahead, given the mere 20-25% gains you'll typically get under good circumstances for the extra $250+ cost.

It's a nice option to have for sure, but it doesn't inherently make an Intel CPU 'very poor value' as the other person claimed. That's just a terribly hyperbolic claim.
I don’t think it’s hyperbolic at all. Unless you buy a very low-end CPU, it’s very rare that you will ever have a worthwhile upgrade option without purchasing a new motherboard at the absolute minimum. Then we get into whether or not they knowingly release products with the potential for huge performance degradations should their security vulnerabilities be uncovered.

Can you point out a single area where Intel provides some type of value compared to the competition?
 
I don’t think it’s hyperbolic at all. Unless you buy a very low-end CPU, it’s very rare that you will ever have a worthwhile upgrade option without purchasing a new motherboard at the absolute minimum. Then we get into whether or not they knowingly release products with the potential for huge performance degradations should their security vulnerabilities be uncovered.

Can you point out a single area where Intel provides some type of value compared to the competition?
Again, the vast majority of people do not upgrade their CPU every 2-3 years, or even every 3-4 years. There's usually just not a lot of need for it. It's a luxury. And remember, when we're talking value, you are still having to buy new CPUs to upgrade. That's additional cost. The actual value proposition of upgrading your CPU frequently is not terribly great.

Having to buy a new motherboard and RAM for a new CPU is just the typical CPU upgrade experience, because so much will have changed by the time most people get around to wanting/needing to do it. Heck, even if you don't need a new motherboard, upgrades for WiFi, USB options, PCIe, M.2 slots, etc. can still be tempting for a lot of people who find keeping up with the latest stuff important.

You're certainly allowed to feel that upgradeability on the same platform is very important to you, but to make sweeping statements like saying Intel CPUs are 'very poor value' all because they don't offer more significant upgrade options is extremely hyperbolic.

And no, I'm not going to extend this into a more generic brand warrior debate about strengths of Intel options.
 
A very interesting and insightful post from an ex-AMD engineer about the history of AMD, ATI and the fight with Intel and NVIDIA.



So now that Nvidia has far outstripped the market cap of AMD and Intel, I thought this would be a fun story to tell. I spent 6+ yrs @ AMD engg in the mid to late 2000s helping design the CPU/APU/GPUs that we see today. Back then it was unimaginable for AMD to beat Intel in market cap (we did in 2020!) and for Nvidia to beat both! In fact, AMD almost bought Nvidia, but Jensen wasn’t ready to sell unless he replaced Hector Ruiz of AMD as the CEO of the joint company. The world would have looked very different had that happened. Here’s the inside scoop of how & why AMD saw the GPU oppty, lost it, and then won it back against the backdrop of Nvidia’s far more insane trajectory, & lessons I still carry from those heady days.

I joined AMD right when their stock price was ~$40, and worked on the 1st dual-core architecture (single die, two cores) AMD-X2. Our first mistake -- and AMD insiders won’t like me saying this -- was made here.

We were always engineering-led and there was a lot of hubris around “building a pure dual-core” -- single die split into two separate CPU cores w/ independent instruction & data pipes, but shared cache, page tables etc. Even the fabs didn’t yet have the processes ready.

While we kept plodding on the “pure dual-core”, Intel, still smarting from the x64 defeat, just slapped two 1x cores together, did some smart interconnects, & marketed it as “dual core”. The joke at AMD was that Intel’s marketing budget was > our R&D (true fact). Customers ate it up.

We did launch a “true” dual core, but nobody cared. By then Intel’s “fake” dual core already had AR/PR love. We then started working on a “true” quad core, but AGAIN, Intel just slapped 2 dual cores together & called it a quad-core. How did we miss that playbook?!

AMD always launched w/ better CPUs but was always late to market. Customers didn’t grok what was fake vs real dual/quad core. If you did `cat /proc/cpuinfo` and saw cpu{0-3}, you were happy. I was a proud engineer till then, but then I saw the value of launching 1st & fast. MARKET PERCEPTION IS A MOAT.

Somewhere between this dual→quad journey, AMD acquired ATI, the Canadian GPU company. Back in 2006, acquiring a GPU company did not make a lot of sense to me as an engineer. The market was in servers & client CPUs, and GPUs were still niche. We so didn’t want a GPU company that the internal joke was AMD+ATI=DAMIT.

But clearly, someone at AMD saw the future. We just saw it partially. We should have acquired Nvidia - and we tried. Nvidia – for those who remember – was mostly a “niche” GPU maker for hardcore gamers; they went hard on CUDA while AMD was a big believer in OpenGL. Developers preferred OpenGL vs CUDA given the lock-in with the latter. Jensen clearly thought very long term and was building his “Apple” strategy of both HW and SW lock-in. He refused to sell unless he was made the joint company’s CEO to align with this strategy. AMD blinked and our future trajectories splintered forever.

ATI was a tough nut - we didn’t really “get” them; they saw the world very differently. We wanted to merge GPU+CPU into an APU, but it took years of trust- and process-building to get them to collaborate. Maybe if we had Slack... but we only had MSFT SharePoint.

While all this APU work was going on, AMD totally missed the iPhone wave. When we built chips for the PC, world was moving to laptops, when we moved to laptops, world moved to tablets, & when we moved there, world moved to cell phones.

We also missed the GPU wave trying to introduce a fundamentally better but also fundamentally new architecture: APUs (CPU+GPU on the same die - we love “same die everything” I guess). CATEGORY CREATION IS HELL but “if you’re going through hell, keep going”. We hesitated and...

...2008 crisis happened & we were totally caught with our pants down. After that, AMD basically lost the market to pretty much everyone: Intel, ARM, Nvidia. I learned the hard way how SUPERIOR PRODUCTS LOSE TO SUPERIOR DISTRIBUTION LOCK-INS & GTM.

Huge respect for Nvidia though. They were just one of the little boys back then - we lost some GPU sales to them but never thought of them in the same league as ARM/Intel. They stuck to their guns, and the market came to them eventually when AI took off. BELIEF IN YOUR VISION and an unrelenting and dogged pursuit of your goals is a HIGHLY UNDERRATED SKILL. Most give up, Jensen just kept going harder.
 
AMD is starting a 3-to-5-year plan to become a software-first company.

AMD is making changes in a big way to how they are approaching technology, shifting their focus from hardware development to emphasizing software, APIs, and AI experiences. Software is no longer just a complement to hardware; it's the core of modern technological ecosystems, and AMD is finally aligning its strategy accordingly.

AMD has "tripled our software engineering, and are going all-in on the software." This not only means bring in more people, but also allow people to change roles: "we moved some of our best people in the organization to support" these teams

AMD commented that in the past they were "silicon first, then we thought about SDKs, toolchains and then the ISVs (software development companies)." They continued, "Our shift in strategy is to talk to ISVs first...to understand what the developers want enabled."

The old AMD would just chase speeds and feeds. The new AMD is going to be AI software first.

 
The former is a "matrix multiply" to transcode regular textures into universally hardware-supported formats. The latter is proprietary and incompatible with existing hardware. They're very different solutions (apart from both using multiply-adds).
Both are the former, and both try to do the same thing really. The only difference is in how the ML part is being fit into the modern h/w pipeline. I don't see anything preventing Nvidia from modifying their approach to provide the exact same results as AMD's NTBC.
In any case, this will only get traction if it's adopted as a DX compression standard, no matter who designs what now.
 
Both are the former, and both try to do the same thing really.
That's not what is described in the papers.
The former is a transcoder into existing block compression formats, from disk, at load/stream time. It works with existing hardware BC decoders and filtering. It reduces storage sizes only.
The latter is custom-coded texture data, with its own decoder and (arguably problematic) filtering implementation. It reduces storage and memory sizes.
 
I don't see anything preventing Nvidia from modifying their approach to provide the exact same results as AMD's NTBC.
NVIDIA's solution is direct texture compression using small overfitted neural networks. It achieves compression ratios, at comparable signal-to-noise, higher than those provided by JPEG (many times higher than current advanced BC compression), but it is not supported by current texture hardware and requires the use of stochastic filtering. It's truly amazing what you can achieve with such direct neural compression. Take a look here, for example - https://c3-neural-compression.github.io/
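
For intuition, here is a minimal sketch of the core idea, under the assumption that one tiny network is overfit per texture; it is not NVIDIA's actual NTC pipeline (which uses quantized feature grids and in-shader decoding), and the network sizes and training loop below are arbitrary:

```python
# Minimal sketch of direct neural texture compression: overfit a tiny MLP to one
# texture so that its weights effectively *are* the compressed texture.
# Illustrative only -- NVIDIA's NTC additionally uses quantized feature grids,
# positional encodings, and in-shader decoding; a plain coordinate MLP like this
# mostly captures low-frequency content.
import torch
import torch.nn as nn

H = W = 256
# Stand-in "texture": a simple procedural RGB pattern (swap in a real image tensor).
ys, xs = torch.meshgrid(torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing="ij")
texture = torch.stack([xs, ys, (xs * 6.28).sin() * 0.5 + 0.5], dim=-1)   # (H, W, 3)

coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)    # UV inputs, (H*W, 2)
targets = texture.reshape(-1, 3)                         # RGB targets, (H*W, 3)

# Tiny decoder; its parameter count is the "compressed size".
model = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3), nn.Sigmoid(),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):        # "overfitting" is the point: one net per texture
    idx = torch.randint(0, coords.shape[0], (4096,))
    loss = nn.functional.mse_loss(model(coords[idx]), targets[idx])
    opt.zero_grad(); loss.backward(); opt.step()

params = sum(p.numel() for p in model.parameters())
print(f"decoder params: {params} vs raw texel values: {H * W * 3}")
```

At render time such a decoder has to be evaluated per texel fetch, which is exactly why hardware filtering no longer applies directly and stochastic filtering is needed.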

In contrast, AMD has demonstrated 2x compression (on top of the BC compression) using a neural network that predicts the parameters of the simple BC4 block-compression format instead of directly predicting the texture. They have not shown encoders for the more advanced BC formats yet. When compared to the common LZMA and DEFLATE compression with RDO tuning for BC-compressed textures (used for storage), the compression ratios of NTBC are comparable to those of LZMA/DEFLATE. Signal-to-noise ratios and speeds are similar to the current production pipelines used on consoles and PC. While the current advancements seem promising, they are not earth-shattering. NTBC appears to be at an early stage; it may take a few years before it becomes viable for production.
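
To make the distinction concrete, here is a toy sketch of the "predict BC parameters instead of texels" direction: a small network maps each 4x4 block to BC4 endpoints, and quality is measured after a standard BC4-style palette reconstruction, so the output stays decodable by existing hardware. This is only an illustration of the concept, not AMD's actual NTBC encoder; the network, block data, and training setup are made up for the example.

```python
# Toy sketch of the "predict BC parameters, not texels" idea behind NTBC.
# A tiny net maps each 4x4 single-channel block to two BC4 endpoints; texels are
# then reconstructed via the ordinary 8-entry interpolated BC4 palette (index
# ordering simplified), so the output remains hardware-decodable.
import torch
import torch.nn as nn

blocks = torch.rand(1024, 16)   # stand-in 4x4 blocks (e.g. a roughness/height map)

net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2), nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(500):
    endpoints = net(blocks)                           # (B, 2) predicted BC4 endpoints
    e0, e1 = endpoints[..., :1], endpoints[..., 1:]
    w = torch.linspace(0, 1, 8)
    palette = e0 * (1 - w) + e1 * w                   # (B, 8) interpolated BC4 palette
    # Per-texel palette index, as a BC4 encoder would store it (3 bits per texel).
    idx = (blocks.unsqueeze(-1) - palette.unsqueeze(1)).abs().argmin(-1)
    recon = torch.gather(palette, 1, idx)             # standard BC4-style reconstruction
    loss = nn.functional.mse_loss(recon, blocks)      # gradient reaches the net via the palette
    opt.zero_grad(); loss.backward(); opt.step()

print("final per-block MSE:", loss.item())
```

Because the result is plain BC data, existing samplers and filtering work unchanged; the neural part only needs to beat conventional encoders plus LZMA/DEFLATE on storage size to pay off.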
 
The former is a transcoder into existing block compression formats, from disk, at load/stream time. It works with existing hardware BC decoders and filtering. It reduces storage sizes only.
Which is very obviously a part of what Nvidia's solution does.

The latter is custom-coded texture data, with its own decoder and (arguably problematic) filtering implementation. It reduces storage and memory sizes.
Which you can add on top of what AMD's NTBC does instead of outputting your regular BC formats to the GPU h/w.

All the benefits stated for NTBC are present in Nvidia's solution too; "downgrading" it to run in between storage and modern-day texturing h/w would essentially make it the same as NTBC.

And honestly, we should be hoping that there is some sort of feature convergence, as we are already too far down the route of IHV-exclusive rendering features.
 
AMD's solution is a transcoding step to reduce disk storage requirements. It works on all texture types and is easily implemented into existing pipelines, especially those that already use Oodle Kraken. This is very good for teams with a custom engine as a free alternative to Oodle Kraken, if they don't mind a little loss in quality.

Nvidia's solution goes one step further. It actually lowers the VRAM requirements, which other solutions did not do until now. But it has a lot of disadvantages. It relies on all textures (diffuse, normal, roughness, etc.) used by a material being compressed together to get the results in the paper, since it exploits similarities between the textures for the compression. But because in real-world scenarios textures are usually shared between materials, you would see diminishing returns. Most engines also allow switching textures at runtime, so it will be very hard to fit Nvidia's solution into existing pipelines. I actually don't see any widespread use of Nvidia's solution any time soon, even if only used as a transcoder like @DegustatoR mentions. It just places too many constraints on how you use your textures, while the trend in the industry is to soften those constraints.
 
NVIDIA's solution is direct texture compression using small overfitted neural networks. It achieves compression ratios, at comparable signal-to-noise, higher than those provided by JPEG (many times higher than current advanced BC compression), but it is not supported by current texture hardware and requires the use of stochastic filtering. It's truly amazing what you can achieve with such direct neural compression. Take a look here, for example - https://c3-neural-compression.github.io/

In contrast, AMD has demonstrated 2x compression (on top of the BC compression) using a neural network that predicts the parameters of the simple BC4 block-compression format instead of directly predicting the texture. They have not shown encoders for the more advanced BC formats yet. When compared to the common LZMA and DEFLATE compression with RDO tuning for BC-compressed textures (used for storage), the compression ratios of NTBC are comparable to those of LZMA/DEFLATE. Signal-to-noise ratios and speeds are similar to the current production pipelines used on consoles and PC. While the current advancements seem promising, they are not earth-shattering. NTBC appears to be at an early stage; it may take a few years before it becomes viable for production.

Sony Santa Monica shipped GoW Ragnarok with neural-net compressed, or rather uprezzed, normal maps. So I'm not sure it's that far away; NTBC is a different branch, but somewhat similar in concept.

The practicality of using neural nets to upscale textures seems the most promising approach here. It costs little at runtime, and only costs runtime for those who can most afford it (anyone wanting higher settings anyway); it helps solve the fundamental problem of game sizes getting too big (virtualized textures already solve texture use in RAM, so who cares about compressing RAM size); and it doesn't require any dedicated silicon like Nvidia's solution would.

Considering Ubisoft has been toying with similar concepts, and there's other research into texture magnification with upscalers anyway, that seems like the way to go.
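
As a rough sketch of that magnification route (not any shipped title's implementation; the texture, network size, and scale factor are arbitrary assumptions), the idea is simply to ship a lower-resolution texture and let a small network add back detail on top of a cheap upsample:

```python
# Sketch of "ship low-res textures, magnify with a small net": a tiny conv net
# learns a residual on top of bilinear upsampling. Purely illustrative -- not the
# GoW Ragnarok or Ubisoft approach; sizes and data are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in full-res "texture" (ground truth) and the half-res version on disk.
ys, xs = torch.meshgrid(torch.linspace(0, 1, 128), torch.linspace(0, 1, 128), indexing="ij")
hi = torch.stack([xs, ys, (xs * 12.0).sin() * 0.5 + 0.5]).unsqueeze(0)   # (1, 3, 128, 128)
lo = F.avg_pool2d(hi, 2)                                                 # (1, 3, 64, 64)

upscaler = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
opt = torch.optim.Adam(upscaler.parameters(), lr=1e-3)

for step in range(500):
    base = F.interpolate(lo, scale_factor=2, mode="bilinear", align_corners=False)
    pred = base + upscaler(base)     # the net only has to predict the missing detail
    loss = F.mse_loss(pred, hi)
    opt.zero_grad(); loss.backward(); opt.step()

print("reconstruction MSE:", loss.item())
```

In a game this inference would presumably run at load/streaming time to populate the higher mips, which is why the runtime cost stays small.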
 
 