AMD: Navi Speculation, Rumours and Discussion [2019-2020]

My theory on why AMD packed the RT cores into the TMUs. First: I found it strange that the UE5 Nanite demo used only 768MB of RAM. My thinking: if your polygons are smaller than a pixel, you don't need a big texture for that pixel; it's enough for the polygon to carry a single colour. There's only one way this can fail: if you get really close to an object, you have to be sure the polygons are still smaller than a pixel at that distance.

Sorry, 768MB refers to the streaming pool data. If the streaming pool is 768MB, can the RAM hold more MB than the streaming pool?
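A quick back-of-envelope for that reasoning (purely illustrative; the resolution and byte counts below are my own assumptions, not figures from the demo):

Code:
# If every visible triangle is roughly a pixel in size, storing one colour
# per triangle instead of sampling a texture costs about one colour per
# screen pixel. All numbers here are assumptions for illustration.
width, height = 2560, 1440          # assumed demo resolution
bytes_per_colour = 4                # RGBA8 per triangle
visible_triangles = width * height  # ~one sub-pixel triangle per pixel
colour_mb = visible_triangles * bytes_per_colour / (1024 ** 2)
print(f"per-frame colour data: ~{colour_mb:.0f} MB")   # ~14 MB
# The failure case from the post: move close enough and triangles grow
# past a pixel again, at which point real texture detail is needed.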
 
My theory on why AMD packed the RT cores into the TMUs. First: I found it strange that the UE5 Nanite demo used only 768MB of RAM. My thinking: if your polygons are smaller than a pixel, you don't need a big texture for that pixel; it's enough for the polygon to carry a single colour. There's only one way this can fail: if you get really close to an object, you have to be sure the polygons are still smaller than a pixel at that distance.

Sorry, 768MB refers to the streaming pool data. If the streaming pool is 768MB, can the RAM hold more MB than the streaming pool?

Not all their polys were pixel sized. Many were reasonably large in fact. They are still using textures.
 
My theory on why AMD packed the RT cores into the TMUs. First: I found it strange that the UE5 Nanite demo used only 768MB of RAM. My thinking: if your polygons are smaller than a pixel, you don't need a big texture for that pixel; it's enough for the polygon to carry a single colour. There's only one way this can fail: if you get really close to an object, you have to be sure the polygons are still smaller than a pixel at that distance.

Sorry, 768MB refers to the streaming pool data. If the streaming pool is 768MB, can the RAM hold more MB than the streaming pool?
Yes. The streaming pool only holds the textures needed for processing; the remaining memory is still needed to build buffers etc.
 
The latest RedGamingTech video is interesting, because it's based on one of his best sources (sorry, can't share a link from this device). But I wonder how a big cache will impact temps and power draw...

 
Can we please refrain from posting every single speculative click-bait video? Everyone knows where to find those YouTube channels and Twitter feeds if they want to see such low-quality material.
 
Well, RGT's information about Vega 20 / Radeon VII was correct. If I remember correctly, they were also the first source to report the mainstream-only Navi 1x line-up in 2019 and the high-end Navi 2x line-up in 2020. Ditto for Navi 2x supporting ray tracing (before anyone else). Their track record isn't bad (unlike AdoredTV or Moore's Law Is Dead), and this thread is called AMD: Navi Speculation, Rumours and Discussion. If this is low-quality material, can you recommend something of higher quality, please? News from a source with quite a good track record gets deleted, the local leaker gets banned. What's the purpose of this thread?
 
We're censoring RedGamingTech's videos now?
Why? They have an excellent track record!
They exclusively broke the Radeon VII specs, with pictures, weeks before the announcement.

What else are we supposed to comment on in the speculation and rumours thread? Only tweets from data miners?


And why was Bondrewd banned? If his predictions come true, is B3D going to be known as the forum that bans legitimate leakers?
 
And why was Bondrewd banned?
Already specified in the thread.

It doesn't matter whether they're true or not when all they do is continually drag the discussion down to one-liners and prevent any sort of meaningful discussion from happening. They were a pure noise generator. https://forum.beyond3d.com/posts/2153555/



As to the other aspect of these videos: sure, if you put out a new video every single day with new rumours, you're bound to get something right one time out of hundreds. Everyone only remembers what they got right and overlooks where they were entirely wrong or simply made things up.
 
Here's a quick rundown of the video:

*60% performance-per-watt uplift
*No HBM2
*Not using a 512-bit bus; lower bus width
*128MB of Infinity Cache on the GPU, which helps make up for the lower memory bandwidth of GDDR6
*Clock frequency similar to the PS5's
*80 CUs for the top SKU
*6700, 6800, 6900 SKUs
*6700 will compete with the 3070
*6800 will compete with the 3080
*6800 XT will compete with the 3080 Ti
*6900 will compete with the 3090 but will be faster than a 3080 Ti; not sure if it will beat a 3090
*No word on whether they will undercut Nvidia's pricing
*Hybrid ray tracing (AMD patent)
*Up-sampling handled via lower-precision operations
*Decompression: unknown at this time


I don't know what to make of the 128MB cache. I would have called it most likely inaccurate, but I didn't realise he had a good track record with the VII.

Still, I don't think the SKUs and their positioning in the stack look correct. More likely a top 6900 XT will compete with the 3080 Ti and a cut-down 6900 will compete with the 3080.

I still think the bus width has to be 384-bit or higher. 512-bit seems unlikely; HBM2 is still a possibility.
 
128MB isn't very much, all things considered. Intel's old Crystal Well IGP had 128MB of L4 for something like 1/20th the performance. It would be interesting to see how it works out.
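For a feel of how a large on-die cache could "make up for" a narrower GDDR6 bus, here's a toy effective-bandwidth model. The DRAM bandwidth, cache bandwidth and hit rates are all invented for illustration; none of these figures come from the rumour itself.

Code:
# Toy model: effective_bw = hit_rate * cache_bw + (1 - hit_rate) * dram_bw
# All figures below are hypothetical.
dram_bw = 512    # GB/s, e.g. a 256-bit GDDR6 bus at 16 Gbps
cache_bw = 2000  # GB/s, assumed on-die SRAM bandwidth

for hit_rate in (0.3, 0.5, 0.7):
    effective_bw = hit_rate * cache_bw + (1 - hit_rate) * dram_bw
    print(f"hit rate {hit_rate:.0%}: ~{effective_bw:.0f} GB/s effective")
# Even a 50% hit rate on a 2 TB/s cache would more than double the
# apparent bandwidth of the narrow bus in this crude model.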
 
So I saw this rumor on a website that is totally not RedGamingTech, claiming the following:

- Big Navi's higher efficiency comes mostly from an improved power-clock curve (i.e. clocks are much higher at iso power), with the PS5 used as the reference point for GPU clocks.
- Power efficiency isn't +50% over Navi 10, it's +60%
- There's no HBM in any of the upcoming cards
- There's GDDR6 on a bus narrower than 512-bit
- There's 128MB of Infinity Cache to compensate for not having GDDR6X or HBM
- They're considering not undercutting Nvidia at similar performance levels (undecided)




As to the other aspect of these videos: sure, if you put out a new video every single day with new rumours, you're bound to get something right one time out of hundreds. Everyone only remembers what they got right and overlooks where they were entirely wrong or simply made things up.
RedGamingTech doesn't claim exclusive, accurate news every day. Feel free to point out what they got wrong on the occasions they published a video similar to this latest one.
 
As to the other aspect of these videos: sure, if you put out a new video every single day with new rumours, you're bound to get something right one time out of hundreds. Everyone only remembers what they got right and overlooks where they were entirely wrong or simply made things up.

That's the unfortunate state of things. The amount of nonsense being pumped out by techtubers claiming they have 'sources' is ridiculous.
 
The claims match up with what is technically possible at the higher end, so it's not like it's impossible. But the bandwidth claims are just plain weird. It's been tried before, so the exaggerated "it's not possible!" reaction is utterly silly: Intel does it, and the Xbox 360 and Xbox One did it. It would be interesting to see how it performs, even hypothetically; there is a lot of bandwidth spent reading and writing back and forth to swap the numerous render pipeline stages in and out, and to read back previous frames for TAA. After all, the Xbox One's problem wasn't necessarily the idea but the small size and low bandwidth of its ESRAM versus the PS4's GDDR5. Doesn't mean I'd really buy it.

Regardless, as others have pointed out, even if AMD beats a 3090 by a bit, and presumably the other cards down the line as well, DLSS 2.0 is a nice temporary advantage for Nvidia that a decent number of consumers will probably take into account when buying this year, and it could easily extend into next year too. If I were at AMD, I'd try to counter it with something vaguely similar, but in the form of an open-source SDK. Sure, that would mean devs would have to add it themselves if they didn't sign up for AMD's partner program. But it would likely be worth it: it could be cross-platform, including the consoles that are launching anyway. Something like this (ironically done partly by Intel) could work; RDNA2 has 8x-rate INT4 support, right? https://creativecoding.soe.ucsc.edu/QW-Net/

The results are generally excellent: none of DLSS 2.0's "cranked the sharpening filter up too far" look or its disappearing pixel-edge artifacts. And with the hardest work already done, it could be modified to do upsampling instead of just TAA: change the fixed frame-sample count to a sliding, motion-based one and apply all the other best-practice stuff. It would be damned smart on AMD's part, but obviously I have no idea whether they've done anything like that or ever will.
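If the rumoured 80 CU part really has 8x-rate INT4 dot products, the raw throughput available for a QW-Net-style network would look roughly like this. Every figure here is an assumption taken from the rumours above, not a confirmed spec:

Code:
# Rough INT4 throughput estimate under the rumoured figures (assumptions).
cus = 80                        # rumoured top SKU
clock_ghz = 2.2                 # "similar to PS5" per the rundown above
fp32_ops_per_cu_per_clk = 128   # 64 FMAs per clock, counted as 2 ops each
int4_rate_multiplier = 8        # assumed 8x-rate INT4

int4_tops = (cus * fp32_ops_per_cu_per_clk * int4_rate_multiplier
             * clock_ghz * 1e9) / 1e12
print(f"~{int4_tops:.0f} INT4 TOPS")   # ~180 TOPS with these assumptions
# Plenty of headroom on paper for a small per-frame inference network;
# the open question is whether AMD ships the software to use it.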
 
128MB isn't very much, all things considered. Intel's old Crystal Well IGP had 128MB of L4 for something like 1/20th the performance. It would be interesting to see how it works out.
Crystal Well was off-chip, and it only delivered about 50GB/s in each direction. I get the feeling they're talking about on-chip eSRAM for Navi, similar to the Xbox One's 32MB, though this time possibly with higher bandwidth.
I don't know what they could be using the 128MB cache for, though. To replace the L2 entirely? To make it an L3?
Is 128MB enough for the framebuffers of a card that targets 4K?

I remember @Bondrewd claiming AMD was looking into ways to get these new cards into mobile, and that's why AMD couldn't push very wide VRAM buses. This could be a way to get more out of lower external bandwidth.


In the Road to PS5 talk, Cerny made mention of "generous amounts of ESRAM" for the I/O subsystem. Maybe there's embedded RAM for an I/O system in these cards as well.
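On the "is 128MB enough at 4K" question, here's a crude render-target sizing exercise. The target list and formats are my own assumptions; real engines vary a lot:

Code:
# Crude 4K render-target sizing; the target list and formats are
# assumptions, real engines vary a lot.
width, height = 3840, 2160
bytes_per_pixel = {
    "G-buffer (4x RGBA8)": 16,
    "depth": 4,
    "HDR colour (RGBA16F)": 8,
    "TAA history (RGBA16F)": 8,
    "motion vectors (RG16F)": 4,
}
total_mb = sum(bytes_per_pixel.values()) * width * height / (1024 ** 2)
print(f"~{total_mb:.0f} MB of render targets at 4K")   # ~316 MB with these picks
# A full deferred working set doesn't fit in 128MB at 4K, so such a cache
# would have to keep only the hottest targets resident and let the rest
# spill to GDDR6.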
 
RedGamingTech also claims that RDNA2 produces logic problems beyond 2.23GHz, supposedly backed up by someone from Sony working on the PS5.

What logic problems could arise from high frequencies?
 
RedGamingTech also claims that RDNA2 produces logic problems beyond 2.23GHz, supposedly backed up by someone from Sony working on the PS5.

What logic problems could arise from high frequencies?

Crossing your 0s and 1s... Perhaps signal-integrity issues, where the differential between the two levels becomes impossible to determine accurately?
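"Logic problems" at high clocks usually means timing closure: every logic path, plus setup time, skew and jitter, has to settle within one clock period, and that budget shrinks fast. The numbers below are just the reciprocal of the clock, nothing chip-specific:

Code:
# Clock period vs frequency: the slowest logic path (plus setup time,
# clock skew and jitter) has to fit inside one period.
for ghz in (1.8, 2.0, 2.23, 2.5):
    period_ps = 1e3 / ghz   # 1 / f, in picoseconds
    print(f"{ghz:.2f} GHz -> {period_ps:.0f} ps per cycle")
# 2.23 GHz leaves ~448 ps; pushing further shrinks the margin until some
# path somewhere misses setup and the logic silently computes wrong results.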
 
*60% performance-per-watt uplift
*No HBM2
*Not using a 512-bit bus; lower bus width
*128MB of Infinity Cache on the GPU, which helps make up for the lower memory bandwidth of GDDR6
*Clock frequency similar to the PS5's
*80 CUs for the top SKU
*6700, 6800, 6900 SKUs
*6700 will compete with the 3070
*6800 will compete with the 3080
*6800 XT will compete with the 3080 Ti
*6900 will compete with the 3090 but will be faster than a 3080 Ti; not sure if it will beat a 3090
*No word on whether they will undercut Nvidia's pricing
*Hybrid ray tracing (AMD patent)
*Up-sampling handled via lower-precision operations
*Decompression: unknown at this time

I tried watching the vid but it was painful.

There is at most a 20% performance gap between a 3080 and a 3090, so it would be silly for AMD to have three cards in that same performance range. That tidbit certainly seems to be nonsense.

The cache stuff is really intriguing. If AMD can successfully boost performance with a massive cache, it could change the trajectory of graphics architectures going forward. Big cache + high clocks... maybe RDNA2 is really Zen 3! :D
 
RedGamingTech also claims that RDNA2 produces logic problems beyond 2.23GHz, supposedly backed up by someone from Sony working on the PS5.

What logic problems could arise from high frequencies?
Vega would start culling things it shouldn't at extreme overclocks; possibly related?
 