Bondrewd
Veteran
Ian (or was it Ian?) poking at AMD for the die name is exactly that. They can put whatever they want under this name.
Radeon RX 5500 is based on Navi 14 with a confirmed die size of 158 mm². Now that's a nice coincidence: double that is 316 mm². So it's not Navi 10 (which is 251 mm²).
Most likely they're upgrading their texture units to accelerate RT.

A slight side note to the topic at hand.
What would be the likeliest way for AMD to introduce dedicated RT silicon to Navi? I get the distinct feeling that AMD is working heavily at making compute units capable enough to perform RT competitively. Which makes sense: compute units are more flexible. But does there come a point where a chiplet design with dedicated RT cores makes more sense, at least in the short term? In my mind, proper RT-capable hardware will have to achieve basic RT (whatever that can be considered as being) without meaningfully impacting the framerate, i.e. at a minimum offering performance and visual fidelity equal to rasterisation, preferably at equal or near-equal price/performance. With a chiplet, AMD could offer RT and simply lop it off when necessary in the short term, before pushing out a monolithic design with yet more RT-capable compute units.
In a way I'm sure that the future lies in AMD's preferred compute approach. It doesn't break (hopefully) backwards compatibility, it offers great flexibility in software implementations, and means no dead silicon. But until then...
Patent was published this year, but it doesn't say when it was applied for nor when the development began.

Interesting solution. Seems like it's a balancing act: shader-neutral but limited fixed-function RT with massive buffers vs. programmable RT through texture units, impinging on available shader resources depending on the RT solution's complexity.
So on one hand you'd need to dedicate silicon to buffer space and risk resources idling or hitting a wall on available fixed-function hardware. On the other, more texture and shader units to offset the RT load for programs running raster and shader work concurrently (but similarly more performance if the program doesn't demand a 50/50 split between RT and raster work either way).
Again, very interesting. But considering this patent was filed this year, how realistic is it that it will make its way into Navi at all? Sounds like a complete redesign of the shader silicon to me, with all that entails in time-consuming development and testing. Would it even make it in time for a potential Navi 2?
There are intermediate solutions. Nvidia's method likely uses more dedicated hardware and some amount of buffer space, but unlike how AMD's patent characterizes the situation, there are possible implementations where storage is not scaled to the level needed to buffer the full depth of BVH traversal. Nvidia's patents allow for a unit with some amount of buffering of context data, like previously visited nodes that it needs to go back to in order to perform more checks or traverse to other child nodes. However, in the event that this context storage is exceeded, the system doesn't break down. The purpose of tracking already-visited nodes is to avoid backtracking or redundant accesses, which, while undesirable, doesn't break the algorithm. Accepting a hopefully low level of backtracking in more complex traversals can allow for more modest storage requirements.
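To make that trade-off concrete, here's a toy sketch (mine, not anything from the Nvidia or AMD documents) of the idea that losing buffered traversal context costs only redundant work, not correctness. It runs a point query against a tiny 1-D BVH with no traversal stack at all, in the spirit of stackless parent-pointer traversal: deferred subtrees are reached by walking back up parent links, trading extra node visits for constant storage. All names and the 1-D "boxes" are illustrative.

```python
FROM_PARENT, FROM_SIBLING, FROM_CHILD = range(3)

class Node:
    """Toy BVH node over 1-D intervals; leaves carry a primitive id."""
    def __init__(self, box, left=None, right=None, prim=None):
        self.box, self.left, self.right, self.prim = box, left, right, prim
        self.parent = None
        for child in (left, right):
            if child is not None:
                child.parent = self

def overlaps(box, x):
    return box[0] <= x <= box[1]

def query_stackless(root, x):
    """Collect primitives whose box contains x, storing no traversal
    stack: finished subtrees are 'popped' by backtracking up parent
    links -- the redundant accesses the post mentions."""
    found = []
    if not overlaps(root.box, x):
        return found
    if root.prim is not None:               # degenerate single-leaf tree
        return [root.prim]
    node, state = root.left, FROM_PARENT
    while True:
        if state == FROM_CHILD:             # climbing back up
            if node is root:
                return found
            if node is node.parent.left:    # left subtree done -> visit sibling
                node, state = node.parent.right, FROM_SIBLING
            else:                           # right subtree done -> keep climbing
                node, state = node.parent, FROM_CHILD
        else:                               # descending (FROM_PARENT / FROM_SIBLING)
            hit = overlaps(node.box, x)
            if hit and node.prim is not None:
                found.append(node.prim)
                hit = False                 # a processed leaf acts like a miss
            if hit:                         # intersected internal node: go deeper
                node, state = node.left, FROM_PARENT
            elif state == FROM_PARENT:      # miss as a left child -> try sibling
                node, state = node.parent.right, FROM_SIBLING
            else:                           # miss as a right child -> climb
                node, state = node.parent, FROM_CHILD
```

A real short-stack unit would sit between the two extremes: buffer the last few deferred nodes and only fall back to this kind of backtracking when the buffer overflows.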
It's unclear how much would be redesigned. Some of the elements would sit beside the existing path, which doesn't disrupt as much. Other possible implementations could also rely on what the TMU does as part of its existing functionality. Part of its job for texture formats or forms of multi-tap filtering is generating multiple target addresses based on a base pointer sent by the shader. There are some parallels between reading the contents of a compressed or formatted data location and processing it into a final result, and reading in a node's data, doing some comparisons to known values or some additional math, and then sending a formatted outcome to the SIMD. AMD has research into using limited precision for structures holding geometry that is close together, saving space and hardware for calculating intersections. That may also overlap with the level of ALU complexity of a TMU.
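As a rough illustration of that "fetch a node, do a few comparisons, hand back a formatted result" step, here is the standard slab test for ray/box intersection — the flavour of small, fixed-function-friendly arithmetic a TMU-adjacent unit could apply to a fetched BVH node. This is a generic textbook sketch, not the patent's actual datapath; all names are mine.

```python
def ray_box_intersect(origin, inv_dir, box_lo, box_hi):
    """Slab test: intersect a ray (origin, per-axis 1/direction) with an
    axis-aligned box.  Returns (hit, t_near).  Per axis it's just a few
    multiplies, min/max ops and one final compare."""
    t_near, t_far = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_lo, box_hi):
        t1, t2 = (lo - o) * inv, (hi - o) * inv   # entry/exit along this axis
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far, t_near
```

`inv_dir` is the precomputed per-axis reciprocal of the ray direction (a common trick so the inner loop needs no divides); handling of zero direction components is omitted here for brevity.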
The document lists a filing date in December of 2017.
AFAIK there's little you can do with shaders while processing RT for a given frame, so using shader resources shouldn't matter much.
ATI is back!
ArtX is back!!
The article states that AMD designed Navi for consoles, which isn't the case.
We could argue that this is indeed the case.
The parties involved have known for years that this would be the GPU generation used in the next video game consoles, and both Sony and Microsoft participated in the development to make sure that their needs are met. We have been hearing about this for years, specifically about Navi being cooked for Sony.
I don't think it's wrong to think this way; it's understandable if AMD started to design their GPUs more to satisfy these big customers.
It has been discussed before. Microsoft and Sony deal with the semi-custom department at AMD. The semi-custom guys get their hands on the IP blocks when they're ready to be implemented. Of course MS & Sony, just like any other AMD customer, can and do give input on what they would like to see, and it could affect architecture design choices, but they're not really involved in it.

This has been discussed before, I think. AMD didn't design Navi just for Sony or MS; more likely they designed a gaming GPU which is being customized for each or both consoles.
You have been hearing conspiracy theories about Navi being for Sony, etc.