Plus one. I prefer Intel's "uncore" term.
They don't do much outside of exploiting their generic patents for rumble motors. But they have patents for dual axis force feedback.

Immersion also announces that Sony Interactive Entertainment will license IMMR's advanced haptics patent portfolio for Sony's gaming and VR controller.
Another possibility is to use Navi GPU chiplets, each with 36 CUs, with 4 redundant. Lockhart would use one chiplet and Anaconda would use two.
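For what it's worth, the back-of-envelope on that split is simple enough (quick sketch; the 36 CUs / 4 spares figures are just the assumption above, nothing confirmed):

```python
# Quick arithmetic on the hypothetical chiplet split above.
# 36 CUs per chiplet with 4 spares is an assumption from the post, not a leak.

CUS_PER_CHIPLET = 36
REDUNDANT = 4
ACTIVE = CUS_PER_CHIPLET - REDUNDANT  # 32 usable CUs per chiplet after salvage

for name, chiplets in [("Lockhart", 1), ("Anaconda", 2)]:
    print(f"{name}: {chiplets} chiplet(s) -> {chiplets * ACTIVE} active CUs")

# Lockhart: 1 chiplet(s) -> 32 active CUs
# Anaconda: 2 chiplet(s) -> 64 active CUs
```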
* What can Sony or Microsoft provide to developers on making multi-GPU scaling more effective in the console space, especially when Nvidia, AMD and Microsoft for decades haven't provided a 100% effective solution within the PC gaming space?

Think you've answered it here.
A console, not a pc.
A closed box with set components to target.
Everything is more stable and set from hardware to drivers.
That's huge in regards to this from the get-go.
Also, the actual architectural implementation would be different. It wouldn't have multiple memory pools, etc.
Not really. Even a multi-chiplet GPU design would still succumb to multi-path rendering scaling issues, even in a closed-box system (console). These issues don't magically disappear, especially when synchronization, scheduling, frame interpolation, MCFI, and all manner of post-processing effects can be adversely affected by the slightest hiccups, such as one chiplet being more sensitive than the other to temperature or voltage changes (even in an APU design), which can throttle the whole pipeline/render, leading to more screen tearing, major framerate drops, and banding that looks like an interlaced resolution is being used.

Even if temperature and voltage changes weren't a concern, rendering stalls (i.e., micro-stutters, etc.) are pretty much a given when two chips are trying to schedule/coordinate/interpolate/render between each other. The headaches of such a design simply aren't going away just because of a closed-box design.
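To make the frame-pacing part concrete, here's a toy model of two chiplets alternating frames where one occasionally runs slow. Every number is invented purely for illustration, but it shows why the average framerate can look fine while the frame-to-frame intervals (what you actually perceive) turn uneven:

```python
# Toy model of alternate-frame rendering across two GPU chiplets.
# All timings are invented for illustration; nothing here is measured hardware.

FRAME_BUDGET = 16.0   # ms, the CPU submits a new frame every 16 ms (60 fps target)
NORMAL = 16.0         # ms, nominal render time for a frame
THROTTLED = 24.0      # ms, the occasional slow frame when chiplet 1 dips clocks

present_times, last_present = [], 0.0
for i in range(12):
    gpu = i % 2                               # even frames -> chiplet 0, odd -> chiplet 1
    cost = THROTTLED if (gpu == 1 and i % 4 == 1) else NORMAL
    finish = i * FRAME_BUDGET + cost          # submitted at i*16 ms, done `cost` ms later
    last_present = max(finish, last_present)  # frames must present in order
    present_times.append(last_present)

deltas = [round(b - a, 1) for a, b in zip(present_times, present_times[1:])]
print(deltas)  # uneven gaps (24, 8, 16, 16, 24, ...) are what reads as micro-stutter
```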
It does sound a bit like decoupling the front-end of the GPU, and the upshot is that GPUs care less about latency than CPUs. That would have some interesting implications for PCs, although I think the elephant in the room is dealing with the bandwidth required between hypothetical GPU chiplets (i.e. interconnects). I could be mistaken, but AMD doesn't seem to have as good an analogue to Intel's EMIB, at least not without resorting to TSVs.
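Rough numbers on why the interconnect is the elephant in the room. Everything below is an assumption picked for illustration (how fat a G-buffer is, the fps target, the link ballparks), not a spec:

```python
# Back-of-envelope on the chiplet interconnect question. Every number is an
# illustrative assumption; real G-buffer layouts and link speeds vary widely.

width, height, fps = 3840, 2160, 60
gbuffer_bytes_per_pixel = 32   # e.g. albedo + normals + depth + motion vectors, assumed

per_frame_gb = width * height * gbuffer_bytes_per_pixel / 1e9
print(f"~{per_frame_gb * fps:.1f} GB/s just to move one G-buffer between chiplets each frame")

# For scale (rough ballparks, not spec sheets):
#   Infinity Fabric link between a Zen 2 chiplet and the IO die: tens of GB/s
#   GDDR6 on a 256-bit bus at 14 Gbps:                           ~448 GB/s
```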
Wouldn't the chiplet design resolve the multi GPU rendering problem instead?
I mean... Can the IO chip virtualise the GPU so that 2 GPU chiplets appear as 1? All the scheduling logic (and a unified L2/L3 cache) would be in the IO chip, and the chiplets would just have the CU array and a small L1 cache; same for the CPU chiplets with their L1 cache...
In fact I wonder if the chiplet design would not also help to realise the full HSA vision with a fully unified memory pool.
One IO controller overmind chip to rule them all (CPU and GPU)... see the rough sketch below.
Is it not a practical solution ?
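For the sake of argument, here's what that "overmind" front-end could look like as a toy scheduler: one queue pretending to be a single GPU and handing work to whichever chiplet frees up first. Purely a sketch under those assumptions, and it deliberately ignores the hard parts (cache coherency, sync points, shared bandwidth), which is exactly what the rest of the thread is arguing about:

```python
# Toy sketch of an IO-die front-end: one work queue presented as a single GPU,
# greedily dispatching workgroups to whichever chiplet is free first.

from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Chiplet:
    free_at: float = 0.0                       # when this chiplet finishes its current work
    name: str = field(compare=False, default="")

def dispatch(workgroups, chiplets):
    """Greedy scheduler that would live on the hypothetical IO die."""
    heap = list(chiplets)
    heapq.heapify(heap)
    schedule = []
    for wg_id, cost in workgroups:
        c = heapq.heappop(heap)                # pick the earliest-free chiplet
        start = c.free_at
        c.free_at = start + cost
        schedule.append((wg_id, c.name, start))
        heapq.heappush(heap, c)
    return schedule

work = [(i, 1.0 + (i % 3) * 0.5) for i in range(8)]   # made-up workgroup costs
for wg, chip, start in dispatch(work, [Chiplet(name="chiplet0"), Chiplet(name="chiplet1")]):
    print(f"workgroup {wg} -> {chip} starts at t={start:.1f}")
```

The dispatch itself is the easy bit; whether the IO die could keep a unified L2/L3 coherent across chiplets without stalling them is the open question.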
Render farms, as I understand it, still have a lot of different challenges when it comes to texture sizes etc. GPUs need to work off their own memory pools at the moment, and the textures get so large they will max out memory quite quickly. There's a slew of challenges that render farms have to tackle that real-time doesn't, and vice versa. That being said, it's why we still see CPU rendering being part of the equation in that industry.

Even multi-million-dollar render farms with specialized I/O chips or boards coordinating between hundreds of GPUs still exhibit these issues. That's why multipass rendering is such a thing within film and CGI/offline workloads: not just for visual improvement (i.e., clarity, additional detail, etc.), but more so for rendering any missed data or frames, which can [does] occur in multi-GPU setups. And these tasks (schedule/coordinate/interpolate/render) become more complex within a real-time environment (videogames), which has no multipass equivalent to help guard against such stalls/errors.
And don't get me wrong, I love AMD just like the next person, but seriously, if Nvidia hasn't resolved any of these issues that come along with multi-GPU designs and scaling, I can't picture AMD resolving them within an APU design housing multiple GPU chiplets.
First of all, thanks, kind of feedback I was hoping for.

3 - As stated, power consumption would be expected to be bigger than having a single GPU with double the CUs. Also, according to AMD, you would not have the same performance as a single GPU. That is why AMD is not using this solution.
Not sure I understand double cost on dead area.

1 - 8 non-working CUs per chip (double cost on dead area). Double memory controllers, double buses, double Infinity Fabric, double everything. Basically you would have two GPUs, with the cost of two full GPUs and the power consumption of two full GPUs. I guess this would kill the first advantage.
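Back-of-envelope on that cost question, with completely made-up numbers (defect density, die sizes, and the duplicated PHY/fabric overhead are all assumptions, and packaging/assembly isn't counted at all):

```python
# Rough "two chiplets vs one big die" comparison using a simple Poisson yield model.
# All figures are invented for illustration; no real die sizes or defect rates implied.
import math

DEFECTS_PER_CM2 = 0.1   # assumed defect density

def die_yield(area_mm2):
    # simple Poisson yield model: yield = exp(-D * A)
    return math.exp(-DEFECTS_PER_CM2 * area_mm2 / 100.0)

candidates = [
    ("monolithic 72-CU die", 360, 1),                      # mm^2, hypothetical
    ("36-CU chiplet + duplicated PHY/fabric", 180 + 30, 2) # mm^2, hypothetical
]

for label, area, count in candidates:
    y = die_yield(area)
    cost = count * area / y   # silicon cost scales roughly with area / yield per good part
    print(f"{label}: {count} x {area} mm^2, yield {y:.2f}, relative silicon cost {cost:.0f}")
```

Under these invented numbers the yield advantage of the smaller dies gets roughly eaten by the duplicated uncore area, and that's before the assembly cost raised further down.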
Render farms, as I understand it, still have a lot of different challenges when it comes to texture sizes etc. GPUs need to work off their own memory pools at the moment, and the textures get so large they will max out memory quite quickly. There's a slew of challenges that render farms have to tackle that real-time doesn't, and vice versa. That being said, it's why we still see CPU rendering being part of the equation in that industry.
We're at a point where the APIs support mGPU much better, with more freedom and creativity; we need a baseline like a console to move it forward properly in engines.
yea, I don't see it either

I get all this. And as I stated before, the novelty of an APU housing multiple GPU chiplets sounds interesting; however, it doesn't make any sense when there are valid issues against such a design. If Sony or Microsoft (along with AMD) have resolved these issues and can show the thermal/wattage advantages over a discrete GPU/CPU design, then more power to them. But I don't see it happening.
Depends on the amount of work and difficulties in implementing something.

I think MS would want a solution that required no additional developer overhead. There's always going to be consideration about allocating work when some of your execution units and memory come with additional penalties.
These are also some of the reasons I dismissed it a while ago.

yea, I don't see it either
I'm not big on the chiplet idea. Assembly and chip costs are too high for a product that has to be dirt cheap.