AMD Execution Thread [2024]

BTW it's not like there's a hard 'consensus' among AMD employees as to what will be the one dominant indirect lighting technique in the near/extended future ...

Of course but I was referring to the “AMD” that manifests as new graphics chips.

There's one who's considering improving reflective shadow maps (one-bounce diffuse only/light leaking/streaking), others are working on sparse distance fields (shadowing only/no material data) or two-level radiance caching (prohibitively expensive data-structure updates), and the most popular AAA PC/console game engine appears to be settling on signed distance fields with surface cards ...

All interesting compute-based approaches that aim to reduce the cost of light transport by lowering precision vs. triangle tracing. Reflective shadow maps seem extremely hacky. We've seen a few surfel-cache-based papers now, but afaik Lumen is the only actual implementation. Hope to see more innovation there. Or maybe per-pixel GI will soon be a solved problem.
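For anyone curious what the "shadowing only" distance-field idea looks like in code, here's a minimal CPU-side sketch of soft shadows ray-marched through a signed distance field. The one-sphere scene, the step limits, and the softness constant k are purely illustrative assumptions, not any particular engine's implementation.

```cpp
// Minimal sketch: soft shadows via sphere tracing a signed distance field.
// The scene (one sphere), the step counts, and the softness constant k are
// illustrative assumptions, not any particular engine's values.
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float length(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// Scene SDF: distance to the nearest surface (a unit sphere at the origin).
static float sceneSDF(Vec3 p) { return length(p) - 1.0f; }

// March from a surface point toward the light; the closest miss distance
// encountered along the ray controls the softness of the penumbra.
static float softShadow(Vec3 origin, Vec3 lightDir, float k) {
    float shadow = 1.0f;
    float t = 0.02f;                       // small offset to avoid self-shadowing
    for (int i = 0; i < 64 && t < 20.0f; ++i) {
        float d = sceneSDF(add(origin, mul(lightDir, t)));
        if (d < 1e-4f) return 0.0f;        // ray hit an occluder: full shadow
        shadow = std::min(shadow, k * d / t);
        t += d;                            // sphere tracing: step by the SDF value
    }
    return shadow;                         // 0 = fully shadowed, 1 = fully lit
}

int main() {
    Vec3 groundPoint{1.5f, -1.0f, 0.0f};   // a point on the ground near the sphere
    Vec3 lightDir{-0.577f, 0.577f, 0.577f};
    std::printf("soft shadow factor: %.3f\n", softShadow(groundPoint, lightDir, 8.0f));
}
```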

Even AMD's competitor realizes that it's unsustainable to keep integrating unused hardware (extraordinarily so on the same process technology), which is why they've been looking for more use cases like texturing (which wasn't put into practice) and especially denoising/radiance caching for ray tracing ... (higher-quality temporal upscaling alone may not turn out to be a good enough justification for them to keep dedicating hardware to it)

Are you referring to tensor use cases? Yeah, it's pretty wasteful to have tensor cores on gaming chips just for DLSS. I had assumed ML-based denoising would be a thing by now. However, from a business and marketing perspective it's still been a massive win. Maybe individual tensor cores will shrink as chips get bigger. There's no need to scale them further just for upscaling, as resolutions aren't increasing significantly anytime soon.
 
AW2 doesn't have geometry detail on any level similar to UE5; it's still traditional geometry, just pushed to what current console hardware can handle.
I think this is done on purpose. Alan Wake 2's world is dynamic (just like Control's), full of destructible props and physics, and this doesn't align with Nanite. Nanite is fantastic for static objects and environments, but it doesn't work with dynamic ones. That's why UE5 games featuring similarly dynamic worlds will also not rely on Nanite.
 

For AW2 it was a time limitation for Remedy: they wanted to get something like Nanite in, but they lacked the time to implement it in their engine within the contractual release time frame for AW2.

Thus, they'll be looking into it more fully for their next title.

NOTE - this doesn't mean they won't run into something that prevents them from implementing a Nanite-like geometry system, but it's what they are aiming for, as it's key to a more believable world presentation.

Regards,
SB
 
One of the Remedy devs said they could push a lot more geometry detail in the terrain using their system but were bottlenecked by VRAM budgets. That of course doesn't say anything about their non-terrain meshes. Remember that Nanite is more than just dense geometry: the geometry is also virtualized, akin to virtual textures, so if parts of a mesh are hidden or don't need high detail, they can choose not to stream those parts in and only stream the parts that are actually drawn. It's unclear how far Remedy's tech goes. I hope they do a presentation on it and make it public. The more studios experiment with this tech and share their results, the better it will become in the end in every engine.
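As a rough illustration of that virtualization idea (emphatically not Remedy's or Epic's actual implementation), here's a small sketch where a mesh is split into clusters and only visible clusters whose coarse representation would show a noticeable error on screen get requested for streaming. The cluster layout, the error metric, and the thresholds are all assumptions made up for the example.

```cpp
// Rough sketch of the "virtualized geometry" idea: the mesh is split into
// clusters, and only visible clusters whose coarse representation would show
// a noticeable error on screen are requested from storage. The error metric
// and thresholds are illustrative assumptions, not Nanite's or Remedy's
// actual scheme.
#include <cstdio>
#include <vector>

struct Cluster {
    float distanceToCamera;   // camera distance this frame, in world units
    bool  potentiallyVisible; // survived frustum/occlusion culling
    float geometricError;     // world-space error of keeping only the coarse LOD
    bool  residentInVram;     // full-detail data already streamed in?
};

// Project a world-space error to an approximate pixel error (pinhole model).
static float screenSpaceError(const Cluster& c, float focalLengthPx) {
    return c.geometricError * focalLengthPx / c.distanceToCamera;
}

// Decide which clusters to stream in: hidden clusters, already-resident
// clusters, and clusters whose coarse LOD is within the pixel-error budget
// stay on disk.
static std::vector<int> selectClustersToStream(const std::vector<Cluster>& clusters,
                                               float focalLengthPx,
                                               float maxPixelError) {
    std::vector<int> toStream;
    for (int i = 0; i < static_cast<int>(clusters.size()); ++i) {
        const Cluster& c = clusters[i];
        if (!c.potentiallyVisible || c.residentInVram) continue;
        if (screenSpaceError(c, focalLengthPx) > maxPixelError)
            toStream.push_back(i);
    }
    return toStream;
}

int main() {
    std::vector<Cluster> clusters = {
        {10.0f,  true,  0.05f, false},  // close and visible: coarse LOD too blurry
        {400.0f, true,  0.05f, false},  // far away: coarse LOD is fine
        {10.0f,  false, 0.05f, false},  // culled: never streamed at all
    };
    for (int i : selectClustersToStream(clusters, 1500.0f, 1.0f))
        std::printf("stream cluster %d\n", i);
}
```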
 
It's a weird fuckup that behaves weirdly but it's still passable 'nuff.
Anyway, strixes soon(tm).
Zen 5 is gonna be shown somehow at Computex in June. This might include the mighty AI APU™ - Strix. Real-world benchmarks (actual laptops) would still be months away. RTG powerpoint slides have zero cred.

As for the ATi/AMD failed gens:
* R600 was late, broken, hot, very large, and super hyped - AMD even held a fancy launch event in Tunisia
* Vega was repeatedly delayed, came after a generation with no high end (Polaris), ran hot, was large, used fancy HBM, and was hyped for months at Raja's special events (also Poor Volta).
* RDNA3 came after the success of RDNA1-2 with enough R&D and development time, got pre-launch hype slides, used fancy chiplet tech, used TSMC 5nm, and was hyped with doubled FLOPS.
 
Not following that closely, but this guy is coming off as really aggressive. What's with the threats and ultimatums? AMD doesn't owe him anything.

Why didn’t he think through these dependencies before starting a business?
They're mad because they claimed that they (along with AMD) could steal 80% of the revenue from Nvidia by making their software work on gaming Radeon, but they didn't realize that the bug was in the hard-locked firmware and even AMD isn't interested in fixing it.
Here you can read the tiny corp's ultimate vision:
 

Oh wow, dude seems quite sure of himself. His whole premise is bizarre though. Developing enterprise-quality software that runs on a 7900 XTX is a nice hobby project. Even if he is somehow able to write better drivers and SDKs than AMD can muster (good luck with that), how does that make a dent in the hyperscale or edge inference markets where AI lives today?

If running AI on gaming Radeon was the path to success then CDNA 3 wouldn’t exist.
 

I don't even understand how somebody who isn't involved in driver and hardware development at AMD/Intel/NV, and hasn't built up the required years of experience, can make such statements in the first place. Using these products to write shader/matrix calculations through the interfaces these vendors provide doesn't qualify somebody to create the "driver".
 
Real-world benchmarks (actual laptops) would still be months away
Strix isn't Computex (it lives in a separate MS AI PC™ event together with other parts), and the actual devices (albeit a limited number of them) are Q3.
RTG powerpoint slides have zero cred.
Oh but those are AMD client pptware, not Gaming.
Very Different Things.
RDNA3 came after success of RDNA1-2 with enough R&D and development time, got pre-launch hype slides, used fancy chiplet tech, used TSMC 5nm, was hyped with doubled FLOPS.
It doesn't do anything fancy with chiplets and the real issue lies with Cac (or so it seems).
Also AMD FAD stuff isn't "hype slides", don't be silly.
They're guidelines for people like me to hold shares (or buy moar).
Not following that closely but this guy is coming off real aggressive
geohot has a few (too many) screws loose after Sony sued him into oblivion.
 

Vega was a decent upgrade over Polaris, but not enough. Its clocks were set to around 1.5-1.6GHz with an 'up to' modifier, while Nvidia cards were doing at least 1.8GHz and close to 2GHz actual on non-FE cards.

RDNA3 has been stuck at the same clocks as the later RDNA2 chips. The doubled FLOPS, together with the higher rumored die sizes, made the rumors go wild, so people expected 6900XT-level performance even from the 6nm RDNA3 chips, instead of where it has actually landed.
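Quick back-of-the-envelope on that "doubled FLOPS" point, using rounded public specs (treat the shader counts, clocks, and the dual-issue factor as approximations):

```cpp
// Back-of-the-envelope check of the "doubled FLOPS" claim, using approximate
// public specs (shader counts and clocks rounded; treat these as assumptions).
// Peak FP32 = shader ALUs * 2 ops per FMA * (dual-issue factor) * clock.
#include <cstdio>

static double peakTflops(int shaders, double clockGhz, int issueWidth) {
    return shaders * 2.0 * issueWidth * clockGhz / 1000.0;
}

int main() {
    // RDNA2 flagship (6950 XT-class): 5120 shaders, ~2.3 GHz, single-issue FP32.
    double rdna2 = peakTflops(5120, 2.3, 1);
    // RDNA3 flagship (7900 XTX-class): 6144 shaders, ~2.5 GHz, dual-issue FP32.
    double rdna3 = peakTflops(6144, 2.5, 2);
    std::printf("RDNA2 ~%.1f TFLOPS, RDNA3 ~%.1f TFLOPS (%.1fx on paper)\n",
                rdna2, rdna3, rdna3 / rdna2);
    // The paper ratio is ~2.6x, yet clocks barely moved and the extra dual-issue
    // lanes are hard to keep fed, which is why real game gains came in far below
    // what the FLOPS hype suggested.
}
```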
 
Yes they did; AMD was first with a DX11 GPU (the 5870), and they marketed Tessellation hard with Dirt 2, Aliens vs Predator, and STALKER.


They even made fun of NVIDIA for focusing on PhysX and not talking enthusiastically about Tessellation and DX11 during the 6-month period of Fermi delays.


Once Fermi was released and was revealed to be a Tessellation monster, AMD withdrew from the scene in silence and tried to cheat their way out with special driver hacks that reduce Tessellation factors and lessen the performance impact on their hardware.

After that they struggled hard with it for too many generations; they only got good enough with GCN4 and Vega.

The worst thing about the HairWorks and tessellation-related GameWorks effects is that AMD wouldn't have suffered as much if they had fixed their driver back when it was still the 290X series, and not when they rebranded it as the 390.

HardOCP tested a couple of games with those GameWorks effects enabled and the 390X was much faster than the 290X.

 
Just curious. Looking at this chart, can we assume that AMD is steadily making >$1b from console SoCs every quarter?
Because it does not seem to be affected by the cryptocurrency boom or PC market downturn.
[Attachment: chart.png]
 
Since the graph starts from Q2 '21, unfortunately you can't, because AMD shuffled different categories together over this period.

Before Q2 '14, dGPUs were listed under Graphics and Visual Solutions (which included Radeon, Radeon Pro, and console SoCs).


Then after Q2 '14, they were listed under AMD's "Computing and Graphics" segment, which included Ryzen, Radeon, and even Instinct data center GPUs, while Epyc and console SoCs were listed under "Enterprise, Embedded and Semi-Custom"; this continued until Q1 2022!


Then, since Q2 2022, they are listed under "Gaming" alongside console SoCs; Instinct GPUs and Epyc moved to "Data Center", Ryzen to "Client", and Xilinx to "Embedded".
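To summarize the reshuffling in one place, here's roughly how the same products got re-bucketed across the reporting eras (period boundaries approximate, taken from the above):

```cpp
// Summary of how AMD's reported segments regrouped the same products over
// time, as described above (period boundaries approximate).
#include <cstdio>
#include <string>
#include <utility>
#include <vector>

// One reporting era: era label -> list of (segment, products in that segment).
using Segment = std::pair<std::string, std::vector<std::string>>;
using Era = std::pair<std::string, std::vector<Segment>>;

int main() {
    std::vector<Era> eras = {
        {"before Q2 '14", {
            {"Graphics and Visual Solutions", {"Radeon", "Radeon Pro", "console SoCs"}}}},
        {"Q2 '14 - Q1 '22", {
            {"Computing and Graphics", {"Ryzen", "Radeon", "Instinct"}},
            {"Enterprise, Embedded and Semi-Custom", {"Epyc", "console SoCs"}}}},
        {"Q2 '22 onward", {
            {"Data Center", {"Instinct", "Epyc"}},
            {"Client", {"Ryzen"}},
            {"Gaming", {"Radeon", "console SoCs"}},
            {"Embedded", {"Xilinx"}}}},
    };

    // Console SoCs sit in a differently composed bucket in each era, which is
    // why the chart can't be read as a like-for-like console revenue series.
    for (const auto& [era, segments] : eras) {
        std::printf("%s\n", era.c_str());
        for (const auto& [segment, products] : segments) {
            std::printf("  %s:", segment.c_str());
            for (const auto& p : products) std::printf(" %s", p.c_str());
            std::printf("\n");
        }
    }
}
```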

 
Luckily, I found that they quietly disclosed the year-ago revenue restated under the rearranged business categories in the Q2 '22 report, so I added it to the chart. 😉
[Attachment: q2_21.jpg]
 
Back in 2023 they disclosed that Sony was 16% of their revenue in 2022, which came to $3.8B for the year. With Xbox as well, that's easily over a billion per quarter for consoles.
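A quick sanity check of that math (AMD's FY2022 revenue was roughly $23.6B and the 16% Sony share was disclosed; the Xbox contribution below is just an illustrative guess, not a disclosed number):

```cpp
// Quick sanity check of the console revenue math. AMD's FY2022 revenue
// (~$23.6B) and the 16% Sony share are from public disclosures; the Xbox
// contribution is a placeholder guess, not a disclosed figure.
#include <cstdio>

int main() {
    double fy2022RevenueB = 23.6;              // AMD full-year 2022 revenue, $B
    double sonyShare = 0.16;                   // Sony disclosed as 16% of revenue
    double sonyB = fy2022RevenueB * sonyShare; // ~3.78 -> the quoted ~$3.8B
    double sonyPerQuarterB = sonyB / 4.0;      // ~0.94
    double assumedXboxPerQuarterB = 0.3;       // illustrative assumption only
    std::printf("Sony: ~$%.2fB/year, ~$%.2fB/quarter\n", sonyB, sonyPerQuarterB);
    std::printf("With an assumed Xbox contribution: ~$%.2fB/quarter\n",
                sonyPerQuarterB + assumedXboxPerQuarterB);
}
```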
 