> Sounds like an oxymoron to me. If something is "that" much better how can it not be a flop?
Especially if it never materialized into an end-user product after 7 years of development.
Keep in mind the first 5 years were just R&D; only after 2017 did they put it into silicon. They had 10nm and 7nm test chips, and performance looked alright, albeit nothing special.
I have to admit, what is quite opaque to me is just what Samsung gets out of this deal.
For starters, Samsung doesn't want to depend on Qualcomm as much as they do, which is part of the reason why they keep funding S.LSI throughout the years. Same thing with Huawei and HiSilicon, AFAIK.
What can AMD bring to the table that ARM/Qualcomm/et al. cannot in this market segment?
> Why would the AMD RTG provide better IP for ultra low power GPUs than AMD's own low-power GPU group that they sold to Qualcomm, that kept working on the problems and which has lived a successful and relatively well funded life since?
I guess (and hope) the latest Adreno 6xx GPUs have little to nothing in common with the ~12-year-old AMD Z430 / Adreno 200 that was sold to Qualcomm back in 2009.
> So what exactly can AMD's RTG bring to the table that would provide a decisive advantage over players who have had long experience designing for mobile already?
Both RTG and nVidia have very close relationships with game and application developers, and they both offer development optimization tools for their GPU architectures.
> For starters, Samsung doesn't want to depend on Qualcomm as much as they do, which is part of the reason why they keep funding S.LSI throughout the years. Same thing with Huawei and HiSilicon, AFAIK.
You have a point, but the optimum for Samsung is to hold their own IP, not shift IP provider. We'll see how the newest Mali performs; I hope Nebuchadnessar will graciously provide us with data eventually.
So in this context, AMD's first advantage is they're not Qualcomm.
As for ARM, the logical conclusion should be that their Mali GPUs haven't kept up with Adreno in performance or efficiency.
> I guess (and hope) the latest Adreno 6xx GPUs have little to nothing in common with the ~12-year-old AMD Z430 / Adreno 200 that was sold to Qualcomm back in 2009.
And the one that has evolved to provide optimum power/performance/area characteristics for mobile applications is...?
Just like RDNA has very little in common with the DX10 Terascale 1 GPUs of that time.
Both architectures have evolved in parallel and should be very different at the moment.
> Both RTG and nVidia have very close relationships with game and application developers, and they both offer development optimization tools for their GPU architectures.
I think you overstate this. Unity/Unreal Engine and so on are probably more important when it comes to the look of the games (and some are far beyond PS2 level).
Switch and Tegra/Shield-optimized AAA games seem to have shown that if Android is ever going to step up its game on decent ports from PC and consoles, most devs need these tools. Otherwise they're stuck with 6th-gen (PS2, Xbox) era-looking games.
> And the one that has evolved to provide optimum power/performance/area characteristics for mobile applications is...?
It's the one that Samsung can't implement in their own SoCs. None of this would be happening if Qualcomm had Adreno IP for sale.
> I think you overstate this. Unity/Unreal Engine and so on are probably more important when it comes to the look of the games (and some are far beyond PS2 level).
Not the look of the games, but rather performance. They should also have the capability of bringing down the clocks and power consumption when sufficient performance is reached, and stuff similar to AMD's Chill.
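To make that Chill-style idea concrete, here's a toy sketch of a frame-rate-targeted governor in Python. The clock steps, target and headroom margin are all invented for illustration; real Adreno/Mali/RDNA drivers don't necessarily work this way.

```python
# Toy sketch of a Chill-style governor: if the target frame rate is being hit
# with some slack, drop the GPU clock a step to save power; if it's being
# missed, raise it. All names, clock steps and margins below are invented.

GPU_CLOCKS_MHZ = [200, 300, 400, 500, 600, 700]   # hypothetical DVFS steps
TARGET_FPS = 60
HEADROOM = 1.15                                    # only downclock with ~15% slack

def next_clock_index(current_idx: int, measured_fps: float) -> int:
    """Pick the next DVFS step based on the measured frame rate."""
    if measured_fps >= TARGET_FPS * HEADROOM and current_idx > 0:
        return current_idx - 1        # plenty of slack: save power
    if measured_fps < TARGET_FPS and current_idx < len(GPU_CLOCKS_MHZ) - 1:
        return current_idx + 1        # missing the target: spend power
    return current_idx                # close enough: hold

# Example: at the top clock we measure 82 fps, so the governor steps down.
idx = len(GPU_CLOCKS_MHZ) - 1
idx = next_clock_index(idx, measured_fps=82.0)
print(GPU_CLOCKS_MHZ[idx], "MHz")                  # -> 600 MHz
```

The point is simply that the win comes from driver/governor behaviour tuned for a frame-rate target, not from raw peak throughput.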
> What can AMD bring to the table that ARM/Qualcomm/et al. cannot in this market segment?
10 years more IP is a hell of a lot...
> Nvidia, even though they've thrown silicon at the problem with their Tegras, don't really impress particularly with their efficiency in mobile space at Maxwell/Pascal tech level (which is quite competitive with anything out of AMD in desktop space in terms of power efficiency). Intel... well... (*cough*). So what exactly can AMD's RTG bring to the table that would provide a decisive advantage over players who have had long experience designing for mobile already? Is it simply mostly about dodging patent litigation?
Maybe. AMD owns approximately half of all graphics IP in theory.
Well, it is in plain writing: "As part of the partnership, Samsung will license AMD graphics IP and will focus on advanced graphics technologies and solutions that are critical for enhancing innovation across mobile applications including smartphones."
Possible leak of an AMD ULP SoC, Ryzen C7:
4 things that set me back on this leak:
1 - Samsung's name not appearing anywhere, and AFAIK AMD can't make any chips that compete with Samsung in the ultra low power market. Also, 5nm TSMC, so definitely not Samsung?
2 - 4 CUs at 700MHz means 358 GFLOPs (see the quick arithmetic check after this list). This is supposed to be 45% faster than an Adreno 650? Sounds hard to believe.
3 - MediaTek 5G modem? Again, not from Samsung.
4 - Real time raytracing on a 358 GFLOPs GPU?
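For what it's worth, the arithmetic behind point 2 checks out on paper. Here's the quick sanity check in Python, assuming the usual 64 FP32 ALUs per CU and 2 FLOPs per ALU per clock (one FMA); the 4 CU / 700MHz / 45% figures come from the leaked slide, not from anything verified.

```python
# Back-of-envelope FP32 throughput for the rumoured 4 CU / 700 MHz GPU.
# Assumes 64 shader ALUs per CU and 2 FLOPs per ALU per clock (one FMA);
# the CU count and clock are taken from the leak, nothing here is confirmed.
cus = 4
alus_per_cu = 64
flops_per_alu_per_clock = 2           # a fused multiply-add counts as 2 FLOPs
clock_hz = 700e6

gflops = cus * alus_per_cu * flops_per_alu_per_clock * clock_hz / 1e9
print(f"Peak FP32 throughput: {gflops:.1f} GFLOPs")   # ~358.4 GFLOPs
```

So the slide's 358 GFLOPs figure is at least internally consistent; whether that would really land 45% ahead of an Adreno 650 is the part that's hard to believe.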
> It also spells Gauguin wrong.
Yeah, I bet it's a poor fake.
> 4 - Real time raytracing on a 358 GFLOPs GPU?
Fake slide aside, I can't figure out what theoretically speaks against implementing RT on a hypothetical 358 GFLOP GPU which is meant for a low power mobile SoC.
Nothing, of course. Theoretically.
The question is whether it's a sensible idea or not.
My problem with RTRT in PC space is that although it makes sense for nVidia to attack the rendering market and let gamers foot the bill, the technology as such is costly, both in die area (money) and power. And while it may suit a producer of add-in graphics solutions, it is really tone deaf in the PC market as a whole, which has been shrinking consistently since 2011 and which gravitates away from desktop systems in favour of laptops (two thirds of the market), with the laptop segment itself gravitating towards lighter units with longer battery life.
RTRT fits this like a foot in a glove. It specifically emphasises aspects of PCs that the general market finds undesirable, and very justifiably so.
This is even more true in the mobile space. Why would I want a technology that makes my device cost more and draw more power, while only being useful to produce slightly more physically correct aspects of game rendering that my mind does its best to discard anyway?
Who wouldn't take a cheaper device with longer battery life instead, or spend those gates in better places for more generally applicable computing power?
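To put a rough number on the mobile RT question above, here's the kind of ray budget even a very modest hybrid approach implies. The resolution, frame rate and rays-per-pixel are arbitrary illustrative picks, not measurements or claims about any real (or rumoured) GPU.

```python
# Back-of-envelope ray budget for "minimal" hybrid ray tracing on a phone.
# 1080p at 30 fps with a single ray per pixel is an arbitrary illustrative
# scenario; real workloads and real hardware budgets will differ.
width, height = 1920, 1080
fps = 30
rays_per_pixel = 1                    # e.g. one shadow or reflection ray

rays_per_frame = width * height * rays_per_pixel
rays_per_second = rays_per_frame * fps
print(f"{rays_per_frame / 1e6:.1f} Mrays per frame, "
      f"{rays_per_second / 1e6:.0f} Mrays per second")   # ~2.1 / ~62
```

Even at that bare minimum you're sustaining tens of millions of rays per second on top of the rasterised frame, which is where the die area and power cost comes in; whether that trade is worth making on a phone is the "sensible idea" question, not whether it's theoretically possible.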