LG "NUCLUN" SoC - Any info?

Thread starter: Deleted member 13524

Deleted member 13524 (Guest)
http://www.lgnewsroom.com/newsroom/contents/64743

Here's what the press release says:
NUCLUN (pronounced NOO-klun) was designed using ARM® big.LITTLE™ technology for efficient multi-tasking capabilities. The AP employs four 1.5GHz cores (ARM Cortex-A15) for high performance and four 1.2GHz cores (ARM Cortex-A7) for less intensive processing. The number of performing cores can be adjusted based on the requirements of the task for maximum processing power or maximum energy savings. NUCLUN is designed to support the next generation of 4G networks, LTE-A Cat.6, for maximum download speeds of up to 225Mbps while retaining backward compatibility with current LTE networks.

For now, all I can see is that it has a 4+4 big.LITTLE arrangement of Cortex-A15 + Cortex-A7 cores. The max clock speed seems a bit low for the A15 cluster, which is odd given that the chip is going into a fairly high-end handset.
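For what it's worth, once the phone is in hand, the cluster split and those max clocks can usually be read straight out of cpufreq sysfs. A minimal sketch, assuming the standard Linux /sys/devices/system/cpu layout (not verified on this particular device):

# Group CPUs by advertised max frequency to reveal the big.LITTLE clusters,
# e.g. 4x 1.5GHz (A15) + 4x 1.2GHz (A7) on NUCLUN. Assumes the standard
# cpufreq sysfs layout; offline cores may not expose a cpufreq directory.
import glob
from collections import defaultdict

clusters = defaultdict(list)
for path in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/cpuinfo_max_freq")):
    cpu = path.split("/")[5]             # e.g. "cpu0"
    with open(path) as f:
        max_khz = int(f.read().strip())  # cpufreq reports kHz
    clusters[max_khz].append(cpu)

for max_khz, cpus in sorted(clusters.items(), reverse=True):
    print(f"{max_khz / 1_000_000:.1f} GHz cluster: {', '.join(cpus)}")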
"Supporting" LTE doesn't say nothing about the modem being embedded or external (or does it?), and there's no word about the GPU or memory bandwidth.
 
That LTE-A Cat 6 modem can only be from Qualcomm or Intel, right? Intel's modem would be the XMM 7260, but I'm not aware of any shipping products that are using it right now.

EDIT: I just read that it uses an Intel X-Gold 726 modem, which is the one from Intel I just mentioned. That means it's an external modem chip. The GPU is an undisclosed IMG PowerVR design, probably something Rogue-based.
 
Very odd to use A15 when A17 is available. Looks like it was a very long development with bad timing...
A single cluster of 4x Cortex-A53 would have been a better choice!
 
Ah, so as not to confuse anyone with my comment: I was talking about the LG G3 Screen phone, which is the first LG phone to use their own SoC. That phone uses this LG NUCLUN SoC in combination with Intel's LTE-A modem, at least according to the article I read. The same article mentions the use of an IMG PowerVR-based GPU as well. It's this article: http://tweakers.net/nieuws/99230/lg-presenteert-eigen-mobiele-soc.html (sorry, it's in Dutch). I consider that site to be pretty trustworthy, BTW.

@xpea: A cluster of A53s wouldn't exactly be high end....
 
Driver overhead is astonishingly high in those results; it's around 1/8th the performance of the iPhone 5s with the same GPU.
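Back-of-the-envelope only, since Kishonti doesn't publish the exact workload: as I understand it, the Driver Overhead test is CPU-bound draw-call/state-change submission, so an fps gap maps roughly onto call throughput. A sketch with placeholder numbers (the calls-per-frame figure and the fps values below are made up for illustration, not actual GFXBench internals or scores):

# Rough illustration of what an ~8x gap in a CPU-bound driver-overhead test
# would imply for draw-call throughput. All numbers are placeholders.
CALLS_PER_FRAME = 1000               # assumed state-changing draw calls per frame
fps_iphone_5s   = 160.0              # placeholder score
fps_odin        = fps_iphone_5s / 8  # "around 1/8th the performance"

for name, fps in [("iPhone 5s", fps_iphone_5s), ("NUCLUN/Odin", fps_odin)]:
    calls_per_s = fps * CALLS_PER_FRAME
    print(f"{name:12s}: {fps:6.1f} fps -> ~{calls_per_s:,.0f} calls/s "
          f"(~{1e6 / calls_per_s:.1f} us of CPU time per call)")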
 
Any ideas why the lower precision render quality is so much higher on the LG device than on the iPhone 5s, and why it doesn't fluctuate between lower precision and higher precision?

To give you a reason to ask a question no one can possibly answer, and one that wouldn't have any reasonable importance even if it could be answered. Is that PSNR-itis curable?
 

The iPhone 5s's G6430 should be clocked at or above 450MHz, while the GPU in the Odin should be in the 300MHz-or-above region. Driver overhead scores, though, are worse on Odin than even on the Intel SoCs with Rogue GPUs I've seen.

Other than that it uses the same driver as the Meizu MX4 here: http://gfxbench.com/device.jsp?benchmark=gfx30&os=Android&api=gl&D=Meizu+MX4&testgroup=overall

(MT6595, G6200 @ 600MHz, 76.8 GFLOPS FP32)
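For reference, peak FP32 on these Rogue parts is just clusters × 64 FLOPs per cluster per clock × clock, which is where the 76.8 GFLOPS figure comes from. A quick sketch of the arithmetic; the Odin line is pure guesswork, since LG hasn't disclosed which Rogue it is or how many clusters it has:

# Series6 "Rogue": 16 pipelines/cluster x 2 FP32 FMAs x 2 FLOPs per FMA
# = 64 FP32 FLOPs per cluster per clock.
FLOPS_PER_CLUSTER_PER_CLOCK = 64

def peak_gflops(clusters, clock_mhz):
    return clusters * FLOPS_PER_CLUSTER_PER_CLOCK * clock_mhz / 1000.0

print(peak_gflops(2, 600))  # G6200 @ 600MHz (Meizu MX4)            -> 76.8
print(peak_gflops(4, 450))  # G6430 @ ~450MHz (iPhone 5s)           -> 115.2
print(peak_gflops(2, 300))  # hypothetical 2-cluster Rogue @ 300MHz -> 38.4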
 
To give you a reason to ask a question no one can possibly answer, and one that wouldn't have any reasonable importance even if it could be answered. Is that PSNR-itis curable?

My point is that, for whatever reason, the LG SoC GPU appears to be doing more work in comparison to the iPhone 5s SoC GPU based on the differences in GFXBench-measured render quality.
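Since nobody outside Kishonti knows exactly what the render quality test compares, the following is only a guess at the mechanics: a PSNR-style comparison of the rendered frame against a reference image. A minimal sketch of such a metric (the real test's reference image, colour space and weighting are unknown):

# Minimal PSNR sketch: compare a device-rendered frame against a reference
# render. This is a guess at how an image-fidelity score like GFXBench's
# render quality result might be built; the actual test is not documented.
import numpy as np

def psnr(rendered, reference, max_value=255.0):
    """Peak signal-to-noise ratio in dB between two uint8 RGB frames."""
    diff = rendered.astype(np.float64) - reference.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")              # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)

# Toy example: a reference frame and a copy with small per-pixel errors.
rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(1080, 1920, 3), dtype=np.uint8)
noise     = rng.integers(-2, 3, size=reference.shape)
rendered  = np.clip(reference.astype(np.int16) + noise, 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(rendered, reference):.1f} dB")  # higher = closer to the reference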
 
My point is the LG SoC GPU appears to be doing more work in comparison based on the differences in GFXBench-measured render quality.

While none of us has a clue what the test does? There isn't a point, and if you don't mind, stop pestering every second discussion with your PSNR nonsense unless you can come up with some viable data to actually discuss what the test does and how.
 
While none of us has a clue what the test does? There isn't a point, and if you don't mind, stop pestering every second discussion with your PSNR nonsense unless you can come up with some viable data to actually discuss what the test does and how.

What is it about data analysis that you don't understand? If data points exist, then they should be analyzed and discussed, especially when there is something unusual in the data. If the GPU architecture is exactly the same between the LG SoC and the iPhone 5s SoC, then why in the world would GFXBench measure a significant difference in render quality? Given that the LG SoC delivers much lower frame rates than the iPhone 5s for what is supposed to be the exact same GPU architecture, then based on the limited data we have so far, the gap can only be explained by some combination of lower GPU clock frequency, higher driver overhead, and higher-quality rendering on the LG SoC.
 
What is it about data analysis that you don't understand? If data points exist, then they should be analyzed and discussed, especially when there is something unusual in the data.

Then why don't you enlighten us as to what exactly the two quality tests in the GFXBench 3.0 suite do, and how? Without knowing what the test does, you're analyzing, or rather attempting to analyze, thin air.

If the GPU architecture is exactly the same between the LG SoC and the iPhone 5s SoC, then why in the world would GFXBench measure a significant difference in render quality?
Without an answer to the above, neither you nor I can answer your own question.

Given that the LG SoC delivers much lower frame rates than the iPhone 5s for what is supposed to be the exact same GPU architecture, then based on the limited data we have so far, the gap can only be explained by some combination of lower GPU clock frequency, higher driver overhead, and higher-quality rendering on the LG SoC.
Higher quality rendering for what exactly?
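For context on the precision angle: the "lower precision" score comes from one of GFXBench 3.0's two render quality tests, presumably the one that exercises mediump/lowp shader paths. A GPU or driver that silently evaluates mediump at full FP32 precision would land closer to the reference and score higher; whether that is what Odin's driver does is pure speculation. The underlying effect is easy to show on the CPU, with np.float16 standing in for a mediump ALU:

# Illustration only: the same accumulation run at FP16 vs FP32 precision,
# standing in for a shader executed at mediump vs highp. Hardware that
# promotes mediump to FP32 tracks the FP32 result, hence a higher
# render-quality score. Whether Odin actually does this is speculation.
import numpy as np

x = np.linspace(0.0, 1.0, 10_000, dtype=np.float32)

def accumulate(values, dtype):
    acc = dtype(0)
    for v in values:
        acc = dtype(acc + dtype(v) * dtype(0.1))  # result rounded to `dtype` after every op
    return float(acc)

ref = accumulate(x, np.float64)  # "ground truth"
f32 = accumulate(x, np.float32)
f16 = accumulate(x, np.float16)
print(f"float64 reference: {ref:.4f}")
print(f"float32 error:     {abs(f32 - ref):.6f}")
print(f"float16 error:     {abs(f16 - ref):.6f}")  # markedly larger rounding error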
 