Or in the other dimension.
?not "RBE".
That's 16 bytes per pixel at 3840x2160. That's so much you will be able to see it from space. Godlike. If I was NVidia I would be scared.
Chip yield should be helped massively, too.
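As a sanity check on the "16 bytes per pixel" figure above, here is the straight division of the rumored 128 MB (treated as MiB, an assumption) by the 4K pixel count, ignoring any tiling, compression or metadata overhead:

```python
# Sanity check: rumored 128 MB of on-die cache divided by the number of 4K
# pixels. No tiling, compression or metadata overhead is accounted for.
cache_bytes = 128 * 1024 * 1024      # 128 MiB (assumption: binary megabytes)
pixels_4k = 3840 * 2160              # 8,294,400 pixels

print(f"{cache_bytes / pixels_4k:.1f} bytes of cache per 4K pixel")   # ~16.2
```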
If this is true, then yes, this is a whole new era of graphics performance. This is XB360 on alien technology. This finally answers my doubts about the consoles targeting 120 fps for 4K TVs.
No need for a 384-/512-bit or HBM memory system.
Are render targets these days typically 16, 32 or 64 bits per pixel? Maybe AMD is planning to “disrupt 4K gaming” by keeping primary buffers on chip and saving a whole load of ROP bandwidth.
A lot of lower precision ones with a couple of higher precision ones. Some titles require 50+ different targets per frame.
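To put the "keep primary buffers on chip" idea in numbers, here is a rough 4K footprint for a few common target formats; the format mix and the 128 MB capacity are illustrative assumptions, not anything confirmed for RDNA 2 or any particular game:

```python
# Rough 4K footprints for a few common render-target formats (bytes per pixel),
# versus a hypothetical 128 MB on-die cache. The format mix is an illustrative
# assumption, not a known RDNA 2 or game configuration.
PIXELS_4K = 3840 * 2160

targets = {
    "HDR color (RGBA16F)":    8,
    "Albedo (RGBA8)":         4,
    "Normals (RGB10A2)":      4,
    "Depth/stencil (D32S8)":  5,   # frequently padded to 8 in practice
    "Motion vectors (RG16F)": 4,
}

total_mb = 0.0
for name, bpp in targets.items():
    mb = bpp * PIXELS_4K / 2**20
    total_mb += mb
    print(f"{name:24s} {mb:6.1f} MB")

print(f"{'Total':24s} {total_mb:6.1f} MB vs. a 128 MB cache")
```

Even this trimmed-down set overflows 128 MB, so at most the fattest couple of targets could stay resident at once under these assumptions.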
What are you talking about, there isn't a single overall performance number for this in the whole paper.
To be taken with a grain of salt, as with all leaked benchmarks.
AMD’s Next-Gen Radeon RX 6000 Series ‘RDNA 2’ Graphics Cards Alleged Benchmarks Leak Out
https://wccftech.com/amd-next-gen-radeon-rx-6000-series-rdna-2-graphics-cards-benchmarks-leak/
How many of those are concurrently being written?
You missed the important part of the title: As Fast As A Stock RTX 2080 Ti
Hmm. I don’t know if it matters. I think what matters is how much fill-rate you require to complete the full frame. I don’t think games will usually max out the full rate for the whole second.
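A rough way to see that per-frame versus per-second distinction, with made-up but plausible numbers for overdraw and bytes written per pixel:

```python
# Estimate ROP write traffic for one 4K frame and the average bandwidth that
# implies at 60 fps. Bytes per pixel and overdraw are made-up, plausible values.
pixels = 3840 * 2160
bytes_per_pixel = 16      # assumed total across the targets being written
overdraw = 3.0            # assumed average writes per pixel

gb_per_frame = pixels * bytes_per_pixel * overdraw / 1e9
print(f"ROP writes per frame: ~{gb_per_frame:.2f} GB")
print(f"Average write bandwidth at 60 fps: ~{gb_per_frame * 60:.0f} GB/s")
# Bursts inside a frame run far above this average, which is where peak
# fill-rate (and a fast place to absorb the writes) matters more than the mean.
```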
Isn't the 3070 supposed to be that fast also?
Latest RedGamingTech video is interesting, because it's based on one of his best sources (sorry, can't share from this device). But I wonder how a big cache will impact temps and power draw...
Nice! Since I am not in a hurry to decide on my future graphics card (the 1080 is performing okay for now), this info has been the most valuable so far regarding AMD.
Edit: I am still missing a DLSS-like solution, but it seems like performance per watt has improved a lot, and given the fact that I want to play all games at 165 fps at native 1440p and I have a limited power budget (a 550W PSU; yeah, I thought about it before buying it and knew what I wanted), things are getting more interesting by the day.
It's supposedly not N21 but N22, but who knows, it's wild west out there.
Another reason AMD should have planned for an earlier announcement. Now it's almost two months of rumors, and the majority seem to be negative.
You know, the rumor does seem weirder and weirder the more you think about it. The obvious thing staring one in the face is: if 128 MB of cache is so magical, why doesn't the PS5 or XSX use it, and why did the latter go for a giant 320-bit bus if the cache is so useful? And of course the XSX shouldn't need a 320-bit bus if somehow a 20+ teraflop RDNA2 chip only needs 256. I mean, what would you even do with 128 MB, fit the world's thinnest 4K g-buffer?
Doesn't the XSX APU have 76 MB of ESRAM that nobody has clarified where it comes from?
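For the bus-width part of that argument, raw GDDR6 bandwidth just scales with width times per-pin data rate; the rates below are assumed typical values, not confirmed card specs:

```python
# Raw GDDR6 bandwidth = (bus width in bits / 8) * per-pin data rate in Gbps.
# The data rates below are assumed typical values, not confirmed specs.
def gddr6_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

print(f"320-bit @ 14 Gbps (XSX-like): {gddr6_gbs(320, 14):.0f} GB/s")   # 560
print(f"256-bit @ 16 Gbps:            {gddr6_gbs(256, 16):.0f} GB/s")   # 512
```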
Not that that was even the point; I was just showing that other researchers can produce potentially realtime AI upscaling and that it'd be a smart thing for AMD to do. Just going through their numbers, it looks like it should be possible in under 3 ms on their "unnamed high end GPU", as compared to U-Net, other than a few hiccups they had ideas for solving but never got to.
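A back-of-envelope on the "under 3 ms" feasibility: FLOPs for a small convolutional upscaler evaluated at the 1080p input resolution, divided by GPU throughput. The layer sizes and the 20 TFLOP/s figure are invented for illustration; they are not the paper's architecture or hardware:

```python
# FLOP estimate for a toy 3-layer convolutional upscaler run at 1080p, emitting
# 2x2 output subpixels per input pixel. Layer sizes and GPU throughput are
# illustrative assumptions, not the paper's numbers.
in_pixels = 1920 * 1080

# (in_channels, out_channels, kernel_size); last layer emits 3 channels x 4 subpixels
layers = [(3, 16, 3), (16, 16, 3), (16, 12, 3)]

flops_per_px = sum(2 * cin * cout * k * k for cin, cout, k in layers)
total_flops = flops_per_px * in_pixels

gpu_tflops = 20.0   # assumed sustained throughput of a high-end GPU
print(f"{flops_per_px} FLOPs/px, ~{total_flops / 1e9:.1f} GFLOP/frame, "
      f"~{total_flops / (gpu_tflops * 1e12) * 1e3:.2f} ms ideal at {gpu_tflops:.0f} TFLOP/s")
```

Real kernels run well below peak, but even a several-fold hit over this ideal number would still land in the low milliseconds under these assumptions.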
If the BVH traversal logic is indeed handled by the CUs, it's not a given the texture units fetch BVH and triangle data. Perhaps the CUs do it and send everything to the texture units to accelerate intersections.
Unknown at this time, but AMD's hybrid RT patent has the SIMD pass a BVH pointer and ray origin and direction data to the hardware in the texture block. The RT hardware returns a payload containing intersection test results and pointers to the next set of BVH nodes.
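A toy software model of that split, where everything (names, structures, the intersection stub) is illustrative rather than AMD's actual interface: the shader-side loop owns the traversal stack, and a stand-in function plays the role of the fixed-function tester in the texture block.

```python
# Toy model of the hybrid split described above: the "shader" loop owns the
# traversal stack, while intersect_node() stands in for the fixed-function
# tester in the texture block. Names and structures are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Node:
    aabb_hit: bool = True                          # stand-in for a real box test
    children: list = field(default_factory=list)   # child Nodes (empty for leaves)
    triangles: list = field(default_factory=list)  # leaf triangle ids

def intersect_node(node, ray_origin, ray_dir):
    """Stand-in for the texture-block hardware: returns intersection results
    for this node plus the child nodes the shader should consider next."""
    if not node.aabb_hit:
        return None, []
    hit = node.triangles[0] if node.triangles else None   # fake triangle test
    return hit, node.children

def trace(root, ray_origin, ray_dir):
    """Shader-side loop: traversal order and the stack live in SIMD code."""
    stack, closest = [root], None
    while stack:
        hit, next_nodes = intersect_node(stack.pop(), ray_origin, ray_dir)
        if hit is not None:
            closest = hit             # a real version would compare hit distances
        stack.extend(next_nodes)      # the shader decides what to push or cull
    return closest

# Tiny usage example: a two-level BVH with one leaf triangle.
root = Node(children=[Node(triangles=["tri0"]), Node(aabb_hit=False)])
print(trace(root, (0, 0, 0), (0, 0, 1)))   # -> 'tri0'
```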
128MB huh? Perhaps this is why on this diagram it says "Color/Depth":
Microsoft has used its own particular wording for hardware blocks before, going by its naming convention for compute units in the current gen. The coloration of the color/depth blocks is also green versus the blue cache blocks in the diagram. It seems like an omission to not note a 128MB collection next to the 4MB L2, for example. As great as it might be to have a massive frame buffer on-die, it seems like something could be done to make more use of it than closing the ROP memory loop on-die. For example, the geometry engine and binning rasterizers would likely have a decent idea of how many screen tiles may reasonably be needed in a given time window, and that could leave much of the cache available for something other than ROP exports.
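Rough numbers for the kind of budgeting being suggested there; tile size, per-pixel footprint and the resident tile count are all assumptions purely for illustration:

```python
# If the binning rasterizers only need some working set of screen tiles resident
# at once, the remainder of a 128 MB cache could serve other clients. All of
# the figures below are assumptions for illustration.
CACHE_MB = 128
tile_pixels = 64 * 64          # assumed 64x64 screen tiles
bytes_per_pixel = 12           # assumed color + depth footprint per pixel
tiles_resident = 512           # assumed tile working set in a time window

tile_mb = tile_pixels * bytes_per_pixel * tiles_resident / 2**20
print(f"Resident tile set: ~{tile_mb:.0f} MB; "
      f"~{CACHE_MB - tile_mb:.0f} MB left for non-ROP data")
```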
The context seems to be that the cache is meant to make up for not having an extremely wide GDDR6 bus or HBM. A more modest GPU might have bandwidth needs low enough to be satisfied with a regular GDDR6 bus, without incurring a significant die cost that a console may not be able to justify.
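One way to quantify "cache instead of bus width": every hit is traffic the DRAM bus never sees, so a fixed bus effectively looks wider by 1/(1 − hit rate). The 512 GB/s figure (256-bit GDDR6 at 16 Gbps) and the hit rates are assumptions:

```python
# Every cache hit is traffic the DRAM bus never carries, so for a fixed DRAM
# bandwidth the bus "looks" wider by a factor of 1 / (1 - hit_rate).
# 512 GB/s (256-bit GDDR6 @ 16 Gbps) and the hit rates are assumptions.
dram_gbs = 512.0
for hit_rate in (0.0, 0.25, 0.5, 0.75):
    print(f"hit rate {hit_rate:.0%}: bus behaves like ~{dram_gbs / (1 - hit_rate):.0f} GB/s")
```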
There's no way DRAM latency is lower than a cache hit, since you need to check the cache before going to DRAM.
The comparison I was thinking of was between a more conventional hierarchy and one with a 128MB additive cache layer.
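The latency trade-off in that exchange in one line of arithmetic: with an added level, misses pay the lookup cost on top of DRAM latency while hits skip DRAM entirely. The latencies and hit rate below are round-number assumptions, not RDNA 2 figures:

```python
# Average memory access time with an extra (additive) cache level. A miss pays
# the lookup cost on top of DRAM latency; a hit avoids DRAM entirely. All of
# the numbers are round assumptions, not measured RDNA 2 latencies.
dram_ns = 300.0         # assumed effective DRAM access latency
lookup_ns = 40.0        # assumed cost of checking the extra level
hit_rate = 0.5

baseline = dram_ns
with_level = lookup_ns + (1 - hit_rate) * dram_ns   # hit: 40 ns, miss: 340 ns
print(f"no extra level: {baseline:.0f} ns on average")
print(f"with extra level at {hit_rate:.0%} hits: {with_level:.0f} ns on average "
      f"(misses: {lookup_ns + dram_ns:.0f} ns)")
```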