Next-Gen iPhone & iPhone Nano Speculation

That's an impressive amount of work that went into the Apple A6 custom SoC. It also obscures direct comparisons, as little is known at this point other than that it is based on the ARMv7 superset and can run from 800 MHz to 1200 MHz in its current implementation. Still waiting on Anand's take on the A6 in his forthcoming iPhone 5 review.

I especially like the following quote from ChipWorks, "This phone is full of brand new components…best Apple release since the first iPhone."

They sure do like to "play" with their "toys" heh.
 
I wonder whether the fact that they are now running their own custom CPU, with their own microcode in the SoC, gives them some new opportunities to make hacking more difficult.
 
NV and AMD improved by as much from 40G to 28HP because, amongst other reasons, a 32nm node at TSMC was coincidentally cancelled. Do you think the distance between Fermi and Kepler would have been as big if TSMC hadn't cancelled 32nm? It's not like foundries cancel process nodes on a regular basis either.

I don't think the advances would have been as big going from 40nm to 32nm, no. But only shifting one process node instead of 1.5 would have still allowed a pretty major advantage.

I don't yet know what 22nm w/FINFETs vs 32nm at Samsung (presumably?) will be like competitively but I still expect it'll mean a lot for GPUs.

As for process advantages, they are certainly a point, yet not a defining factor that would save any sort of architecture IMHO. Larrabee would also have had Intel's process advantage and I didn't see any process liferaft for it either. Even better, Knights-whatever (sorry, I've lost track of all the noble codenames there....) for HPC only is/will be competitive based on its own architectural merits, and the most important role Intel's manufacturing could play there would be to better control the power consumption characteristics of the design itself.

Other than that, did Intel roll it out to the respective customers before Kepler, and on top of that does it have only advantages against Kepler, or some disadvantages too?

Yes, of course the design matters. Larrabee was never going to cut it for GPUs and Intel realized this pretty early on. In that regard their Gen family has already proven its viability far beyond Larrabee. In perf/area I'd say it's doing very well against even Trinity, and perf/area is going to remain an important metric; more so as Intel pushes area utilization further with Haswell.
 
I don't think the advances would have been as big going from 40nm to 32nm, no. But only shifting one process node instead of 1.5 would have still allowed a pretty major advantage.

28nm vs. 40nm still allows higher chip complexities and frequencies than 32 vs. 40nm. See your PM.

I don't yet know what 22nm w/FINFETs vs 32nm at Samsung (presumably?) will be like competitively but I still expect it'll mean a lot for GPUs.

Unless I'm missing something, the only other GPUs manufactured at Samsung are blocks within SoCs, both their own and Apple's. I don't see any small form factor SoCs from Intel under 22nm yet. What's Medfield manufactured on again?

Yes, of course the design matters. Larrabee was never going to cut it for GPUs and Intel realized this pretty early on. In that regard their Gen family has already proven its viability far beyond Larrabee. In perf/area I'd say it's doing very well against even Trinity, and perf/area is going to remain an important metric; more so as Intel pushes area utilization further with Haswell.

Design matters, just like drivers/compilers and many other factors. Intel is definitely improving with its latest and upcoming GPUs, but Intel's manufacturing advantage is more of a tool to let improved sw and hw for their GPUs shine better. If the hw and drivers didn't catch up, there's not a lot any manufacturing process could do to save them, and that's all I'm saying.
 
Unless I'm missing something, the only other GPUs manufactured at Samsung are blocks within SoCs, both their own and Apple's. I don't see any small form factor SoCs from Intel under 22nm yet. What's Medfield manufactured on again?

Medfield's done on Intel's low power/SoC optimized 32nm process. They won't release 22nm until Silvermont.

I brought up Samsung's process because I thought the theory was that Apple would deploy an ARM laptop with one of their SoCs. This would include whatever CPU is in A6 (or a successor) and, as far as I could follow the speculation, Rogue GPU IP. Possibly using more area than what might be put in a tablet SoC. Or perhaps using the exact same SoC as what's deployed in tablets.

The reason why I compare this against Intel's 22nm is because Apple would be choosing to use this SoC in a laptop instead of Haswell.

Design matters, just like drivers/compilers and many other factors. Intel is definitely improving with its latest and upcoming GPUs, but Intel's manufacturing advantage is more of a tool to let improved sw and hw for their GPUs shine better. If the hw and drivers didn't catch up, there's not a lot any manufacturing process could do to save them, and that's all I'm saying.

I don't think it's fair to demote process to "tool" instead of just another part of the design. Given that IB does pretty well against Trinity, and IMG has hardly proven their driver levels to be up to what AMD is offering (particularly in feature set) I wouldn't peg them to have a driver advantage vs Haswell for an upcoming laptop. Samsung's kind of a given for such an SoC, at least in the short term. It'd be a lot of work for Apple to port their CPU design somewhere else.
 
Medfield's done on Intel's low power/SoC optimized 32nm process. They won't release 22nm until Silvermont.

I brought up Samsung's process because I thought the theory was that Apple would deploy an ARM laptop with one of their SoCs. This would include whatever CPU is in A6 (or a successor) and, as far as I could follow the speculation, Rogue GPU IP. Possibly using more area than what might be put in a tablet SoC. Or perhaps using the exact same SoC as what's deployed in tablets.

The reason why I compare this against Intel's 22nm is because Apple would be choosing to use this SoC in a laptop instead of Haswell.

Why not Samsung 28nm then for a hypothetical low end Apple notebook/PC SoC? Not that I'd expect Apple to make any such move all that soon, but yes, the process advantage for Intel is definitely there, yet a wee bit smaller than you're estimating since Haswell isn't shipping in actual products yet.

Besides, Intel isn't as aggressive with process nodes for its small form factor stuff, probably for their own reasons.

I don't think it's fair to demote process to "tool" instead of just another part of the design. Given that IB does pretty well against Trinity, and IMG has hardly proven their driver levels to be up to what AMD is offering (particularly in feature set) I wouldn't peg them to have a driver advantage vs Haswell for an upcoming laptop. Samsung's kind of a given for such an SoC, at least in the short term. It'd be a lot of work for Apple to port their CPU design somewhere else.

Intel Clovertrail (SGX544MP2@533MHz) under win8 on an upcoming tablet: http://www.fudzilla.com/home/item/28840-intel-shows-us-clover-trail-performance-on-a-tablet

As for Rogue, it can go all the way up to DX11.1; it just needs to be a GC6230 or 6430. Possible weaknesses of the custom CPU design worry me most for such a hypothesis, more than other factors like the GPU.

No mortal can ever really know what Apple is really cooking. However, for the time being I'm not completely rejecting the thought that Apple might eventually dip a toe into the cold water with some humble Air stuff as some sort of experiment and see how it works from there. Besides the CPU headache, there's also the probably even bigger headache for Apple of securing more foundry resources than they'll have at any time in the future, when their i-gear volumes might be even higher.

Which of those big firms isn't silently dreaming of entering higher end markets? Qualcomm would be a reasonable candidate, and Lord knows what they're up to if the rather funky rumors attached to them are for real. At least they've admitted that they might consider investing in their own foundry in the less foreseeable future.

By the time Intel enters those fields/markets and might shrink the market share of any Apple, Qualcomm or whoever else, one reasonable reaction would be to fight back outside the small form factor market. Look at Intel from the opposite side: Medfield is the first somewhat decent attempt at a smartphone SoC, after how many years and several attempts exactly? What if I compare Medfield against Apple's A6 or Qualcomm's MSM8960? Success can come only after a reasonable amount of time, with a lot of patience, step by step.

If ever in the case of Apple; it's just a thought.
 
Why not Samsung 28nm then for a hypothetical low end Apple notebook/PC SoC? Not that I'd expect Apple to make any such move all that soon, but yes, the process advantage for Intel is definitely there, yet a wee bit smaller than you're estimating since Haswell isn't shipping in actual products yet.

What's the availability of Samsung 28nm? Have any future products using it been announced? Isn't even the quad core Exynos 5 slated for 32nm?

Besides, Intel isn't as aggressive with process nodes for its small form factor stuff, probably for their own reasons.

We're still talking about Haswell - do you not expect ULV Haswell to be released until long after desktop Haswell?

If you really want to fixate on Atom it's true that it has lagged behind process-wise, but Intel plans to reduce that gap, completely eliminating it by 14nm.

Intel Clovertrail (SGX544MP2@533MHz) under win8 on an upcoming tablet: http://www.fudzilla.com/home/item/28840-intel-shows-us-clover-trail-performance-on-a-tablet

Not sure I understand the relevance.. are you trying to show driver quality from this? Windows 8 doesn't have very high requirements in and of itself..

As for Rogue it can go all the way up to DX11.1 it just needs to be a GC6230 or 6430. The possible CPU custom design weaknesses worry me most for such a hypothesis, unlike other factors like the GPU.

Sure, and SGX545 can handle DX10 but Intel still launched Cedar Trail without it. Therefore I don't have absolute faith in DX11 drivers being ready on demand for Rogue, while with Haswell we know for sure it'll be there.

No mortal can ever really know what Apple is really cooking. However, for the time being I'm not completely rejecting the thought that Apple might eventually dip a toe into the cold water with some humble Air stuff as some sort of experiment and see how it works from there. Besides the CPU headache, there's also the probably even bigger headache for Apple of securing more foundry resources than they'll have at any time in the future, when their i-gear volumes might be even higher.

Which of those big firms isn't silently dreaming of entering higher end markets? Qualcomm would be a reasonable candidate, and Lord knows what they're up to if the rather funky rumors attached to them are for real. At least they've admitted that they might consider investing in their own foundry in the less foreseeable future.

The big difference here being that Apple wouldn't be entering that market but changing their offerings for it. They could make as much as a couple hundred dollars more per sale if they made the SoC, but that's assuming they can really sell such a thing for the same price (I'm skeptical).

I'm really not saying the whole Apple thing can't happen, there just isn't an awful lot of evidence right now that it will. And I prefer not to entertain Charlie's "I told you so" ;p
 
What's the availability of Samsung 28nm? Have any future products using it been announced? Isn't even the quad core Exynos 5 slated for 32nm?

No, it's not available yet, but could be in early '13.

We're still talking about Haswell - do you not expect ULV Haswell to be released until long after desktop Haswell?
Which isn't going to appear in products before Q1 '13, or have I missed something?

If you really want to fixate on Atom it's true that it has lagged behind process-wise, but Intel plans to reduce that gap, completely eliminating it by 14nm.
Of course they will, but it still won't change the fact that by that time Intel will have quite a number of SFF attempts on its shoulders, over quite a long timeframe.

Not sure I understand the relevance.. are you trying to show driver quality from this? Windows 8 doesn't have very high requirements in and of itself..
According to many users and websites, Windows is quite sluggish on Cedar Trail. I thought you were referring to that.

Sure, and SGX545 can handle DX10 but Intel still launched Cedar Trail without it. Therefore I don't have absolute faith in DX11 drivers being ready on demand for Rogue, while with Haswell we know for sure it'll be there.
Lord knows what exactly happened with the 545. Are there even SoCs available with the promised 640MHz for the GPU, or just the 400MHz variants? If the hw were problematic, how exactly would we find out, for example? Years ago, if I had told someone that Intel would improve its drivers and software to the Ivy Bridge and/or Haswell level, I'm not so sure they would have taken it all that seriously. In Cedar Trail's case the drivers were delivered by IMG and it's their responsibility for whatever went wrong.

The big difference here being that Apple wouldn't be entering that market but changing their offerings for it. They could make as much as a couple hundred dollars more per sale if they made the SoC, but that's assuming they can really sell such a thing for the same price (I'm skeptical).
If they ever go that far, I'd say they'd make damn sure that they won't put their customers at a disadvantage and that the overall gain over the years is much bigger than a couple hundred $. Otherwise it would be a rather dumb investment.

I'm really not saying the whole Apple thing can't happen, there just isn't an awful lot of evidence right now that it will. And I prefer not to entertain Charlie's "I told you so" ;p
http://semiaccurate.com/2012/03/28/qualcomm-has-imagination/ Where's the "I told you so" here? :devilish:
 
The latest reports put the A6 at 1.3GHz per the new Geekbench. Considering the idiotic frequency readout on Android, I wouldn't bet on this being vastly accurate. I've never owned an iPhone; do they have some Linux-like sysfs interfaces?
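
For reference, this is roughly what those Linux-style interfaces look like on Android; a minimal sketch using the standard Linux cpufreq sysfs paths (iOS exposes no equivalent public interface, which is why the clock has to be inferred from benchmarks instead):

```python
# Minimal sketch: how Android/Linux expose the current CPU clock via sysfs.
# These cpufreq paths are standard on Linux; note the value is a snapshot of
# the DVFS state at read time, not necessarily the sustained clock, which is
# one reason benchmark-reported frequencies can be misleading.

def read_cpu_freq_khz(cpu: int = 0) -> int:
    path = f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_cur_freq"
    with open(path) as f:
        return int(f.read().strip())  # reported in kHz

if __name__ == "__main__":
    khz = read_cpu_freq_khz(0)
    print(f"cpu0 current clock: {khz / 1000:.0f} MHz")
```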
 
I've actually been waiting and expecting to learn of reports of a 1.3 GHz top speed for the A6 CPU (not that a figure derived from a read-back value instead of an actual measurement is definitive by itself, of course). For the puzzle of deducing the clock speeds of the different processors in the A6, 1.2 and 1.3 GHz each had its own significance.

Just about everything short of an actual sighting of a 1.3 GHz value already made me suspect that 1.3 GHz could be reached and is the actual maximum for the CPU, which in turn is consistent with the expected relation to the GPU clock speed most strongly suggested by the measured fill rate.

So, a 325 MHz 543MP3 it strongly appears to be, then.
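
To make the fill-rate deduction concrete, here's the back-of-envelope arithmetic as a sketch. Both input figures are assumptions for illustration, roughly in line with contemporary GLBenchmark results and the commonly cited SGX543 throughput, not confirmed specs:

```python
# Back-of-envelope check of the 325 MHz SGX543MP3 deduction.
# Assumed inputs: fill rate of ~1.95 GTexels/s (about what GLBenchmark
# showed for the iPhone 5 at the time) and 2 pixels per clock per core
# (the commonly cited SGX543 figure).

measured_fill = 1.95e9   # texels/s (assumed benchmark figure)
cores = 3                # SGX543MP3
pixels_per_clock = 2     # per core (assumed)

clock_hz = measured_fill / (cores * pixels_per_clock)
print(f"implied GPU clock: {clock_hz / 1e6:.0f} MHz")  # -> ~325 MHz
```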
 
No, it's not available yet, but could be in early '13.

Could it be ready at a competitive time for Haswell, or the next MBA?

Which isn't going to appear in products before Q1 '13, or have I missed something?

So 28nm @ Samsung MAY allow a product to come out around the same time. I'll be optimistic and give you 28nm for this hypothetical laptop SoC. I don't think Apple is very aggressive with using new nodes with new designs though, hence A5X not being pushed onto 32nm. But a straight A6 shrink might see release on it.

I don't know Apple's exact release schedule, but the laptops seem to have been pretty well aligned with when Intel releases new processors; I don't think they'll be aching for a new generation before Haswell is out.

Of course they will, but it still won't change the fact that by that time Intel will have quite a number of SFF attempts on its shoulders, over quite a long timeframe.

What's SFF..?

According to many users and websites, Windows is quite sluggish on Cedar Trail. I thought you were referring to that.

Two different comparisons can yield two totally different reactions depending on what they're actually comparing.. And Windows 7 vs Windows 8 is hardly apples to apples no matter what software you're looking at running in them. I expect that Win8 has a more optimized compositor, since it IS targeting things like Tegra 3, but either way what it does in Metro is still pretty different.

Lord knows what exactly happened with the 545. Are there even SoCs available with the promised 640MHz for the GPU, or just the 400MHz variants?

Does the N2800 not have the advertised 640MHz? It is available in products.. if no one's really looking into it, it's because no one really cares, when competing AMD products have much better GPU performance. Intel's Atom netbook class is a dying breed, and all the more reason why it'll be hard for ARM SoCs to break ground here.

If the hw were problematic, how exactly would we find out, for example? Years ago, if I had told someone that Intel would improve its drivers and software to the Ivy Bridge and/or Haswell level, I'm not so sure they would have taken it all that seriously. In Cedar Trail's case the drivers were delivered by IMG and it's their responsibility for whatever went wrong.

I don't think something was wrong with the SGX545 implementation at a hardware level, or at least I don't know of any evidence that suggests that..

Sure, who knows what'll happen years from now, but I thought we were projecting things more like 8 months ahead. Now granted, when you just refer to "Rogue" that should be applicable to the next several years of what you get from IMG, but when I make my comments I'm specifically looking at first generation Rogue: what we could call the most immediate next generation of IMG's IP. Intel's driver and hardware improvement for Gen graphics definitely didn't happen overnight in one generation; I don't think any single generational increase was a massive shocker, although still recognizably aggressive. Executing full DX11 on Mac OS X in such a short span would be quite impressive. Not saying it can't happen, just that with everything else equal I have more confidence that Haswell will deliver the driver levels.

If they ever go that far, I'd say they'd make damn sure that they won't put their customers at a disadvantage and that the overall gain over the years is much bigger than a couple hundred $. Otherwise it would be a rather dumb investment.

Why would it ever grow beyond a couple hundred dollars per unit, or did you think I was saying something else? That doesn't say anything about number of units sold. Surely using a custom CPU in A6 doesn't save Apple more than a tiny amount on their product BOMs even after they've recovered the R&D overhead, so they must think it'll help them sell more units or sell at a higher price. I don't know if either can be true for Apple laptops (I'd be really surprised, given how much they currently sell for even compared to iPads). $200 BOM savings from using your own processor instead of Intel's expensive ones is nice, but I'm being extremely optimistic here; Apple probably pays much less for Intel CPUs than their list prices.
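
Just to spell out the arithmetic behind that skepticism, here's a rough sketch; every figure below is a hypothetical placeholder for illustration, not an actual Apple number:

```python
# Rough sketch of the economics argued above. All figures are
# hypothetical placeholders, not actual Apple numbers.

bom_saving_per_unit = 200    # $, the (extremely optimistic) saving vs. an Intel CPU
units_per_year = 15e6        # hypothetical MacBook-class annual volume
rd_overhead = 500e6          # hypothetical CPU design cost to recover

annual_saving = bom_saving_per_unit * units_per_year
break_even_years = rd_overhead / annual_saving
print(f"annual BOM saving: ${annual_saving / 1e9:.1f}B, "
      f"R&D break-even in ~{break_even_years:.2f} years")
```

Even with generous placeholders the per-unit saving says nothing about whether such laptops would sell at the same price or volume, which is the actual point of contention here.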

Apple has transitioned their desktop OSes to different CPU ISAs before, but they've never had a long-term plan to support multiple simultaneously. I think that'd be a new challenge for them, and kind of a pain in the ass. For an ARM MBA to work, at selling prices even a bit below Intel ones, it's going to have to be more than just an iPad with a keyboard (especially considering you'll be able to buy laptop-conversion style keyboards for iPad). It's going to have to run at least some major portion of currently MacOS exclusive software, and doing it with x86 emulation is out of the question.

http://semiaccurate.com/2012/03/28/qualcomm-has-imagination/
Where's the "I told you so" here? :devilish:

What, you expect Charlie to post saying he was wrong? Without at least giving some major excuse that absolves him? ;)
 
Could it be ready at a competitive time for Haswell, or the next MBA?

When did Samsung's 32nm Exynos4 ship and have they been inconsistent with their releases lately?

So 28nm @ Samsung MAY allow a product to come out around the same time. I'll be optimistic and give you 28nm for this hypothetical laptop SoC. I don't think Apple is very aggressive with using new nodes with new designs though, hence A5X not being pushed onto 32nm. But a straight A6 shrink might see release on it.

Note that it's still on a hypothetical level since I don't expect Apple to make such a move any time soon. I wouldn't suggest that the current custom core would be a good idea for a higher performance SoC.

What's SFF..?

Small form factor; I'm usually too bored to write out the whole thing...

Two different comparisons can yield two totally different reactions depending on what they're actually comparing.. And Windows 7 vs Windows 8 is hardly apples to apples no matter what software you're looking at running in them. I expect that Win8 has a more optimized compositor, since it IS targeting things like Tegra 3, but either way what it does in Metro is still pretty different.

Take it merely as a first indication that there are no signs of worry yet for IMG GPU IP on Windows. Frankly, if they still had problems with Windows drivers it would destroy quite a high number of their deals, from anything SGX544 up to Rogue. They already have 10 licensees for the latter, and ST-Ericsson's A9600 for example shouldn't be just DX10 IMO.

Does the N2800 not have the advertised 640MHz? It is available in products.. if no one's really looking into it, it's because no one really cares, when competing AMD products have much better GPU performance. Intel's Atom netbook class is a dying breed, and all the more reason why it'll be hard for ARM SoCs to break ground here.

I've read, though I'm not sure, that there isn't a 640MHz variant available. No idea if it's true. If it is, though, the story might be more complicated than it looks at first glance.

I don't think something was wrong with the SGX545 implementation at a hardware level, or at least I don't know of any evidence that suggests that..

Is there any indication anywhere that suggests what really was/is going on?

Sure, who knows what'll happen years from now, but I thought we were projecting things more like 8 months ahead. Now granted, when you just refer to "Rogue" that should be applicable to the next several years of what you get from IMG, but when I make my comments I'm specifically looking at first generation Rogue: what we could call the most immediate next generation of IMG's IP. Intel's driver and hardware improvement for Gen graphics definitely didn't happen overnight in one generation; I don't think any single generational increase was a massive shocker, although still recognizably aggressive. Executing full DX11 on Mac OS X in such a short span would be quite impressive. Not saying it can't happen, just that with everything else equal I have more confidence that Haswell will deliver the driver levels.

GC6x00 cores are DX10.1 and GC6x30 cores DX11.1 IMO, and that IP can scale at the moment up to 1 TFLOP. The IP is at least available; a higher performance CPU ISA might be available (v8), but I'd say it's fairly impossible they've already gotten far enough with it to have a window for an early 2013 release. Under the most ideal conditions I wouldn't expect Apple to have a higher performance custom CPU before NV's Denver (20nm, which I wouldn't expect before H2 '14), and even that sounds optimistic.

Why would it ever grow beyond a couple hundred dollars per unit, or did you think I was saying something else? That doesn't say anything about number of units sold. Surely using a custom CPU in A6 doesn't save Apple more than a tiny amount on their product BOMs even after they've recovered the R&D overhead, so they must think it'll help them sell more units or sell at a higher price. I don't know if either can be true for Apple laptops (I'd be really surprised, given how much they currently sell for even compared to iPads). $200 BOM savings from using your own processor instead of Intel's expensive ones is nice, but I'm being extremely optimistic here; Apple probably pays much less for Intel CPUs than their list prices.

Remember, we're merely debating the possibility of Apple moving upwards with their SoCs in the future, not anything solid or granted.

Apple's A6 custom CPU initiative is ideal for them because, on one side, ARM based cores are outstanding for SFF integration, and because when designing such a CPU they have the luxury of tailoring the hw as well as possible to their own iOS needs. The first part isn't valid for a hypothetical higher performance notebook or whatever other SoC, since Intel has a clear advantage here with its CPUs and processes. For the latter part, it's hard to tell if they could achieve anything there with a weaker CPU.

What, you expect Charlie to post saying he was wrong? Without at least giving some major excuse that absolves him? ;)

It was merely one example showing that no one can always be right, and in all fairness that doesn't apply to Charlie only, but to everyone who does a similar job and of course to us users, who can never have a complete picture of things unless we're insiders. No names mentioned, but there's far worse out there.

In fact I wish there were a more technically oriented website out there dealing with SFF market affairs. B3D started out in a sense with Arun's excellent write-ups, but then IMG went and snatched him and we're back to zero :p
 
Two excellent articles comparing the iPhone 5 screen to both the iPhone 4 and the Galaxy S3...

The new Retina display comes factory calibrated to hit the full sRGB certification/standard... in fact, Brian from AnandTech states that the new display is the best factory-calibrated display he has tested in the last few years... including full televisions and PC monitors, with only a $20,000 set comparable...

The other site compares it against the Galaxy S3 and iPhone 4S... and comes to a similar conclusion... amazing... although that article says it can still improve a bit.

The Galaxy S3 suffers from average sunlight legibility... average brightness, terrible colour accuracy compared to the iPhone 5... and slightly worse colour shifts...

Wow, this AMOLED HD is awesome... I can't really believe the hullabaloo over this new screen... has anyone round here got one who can confirm??

http://www.displaymate.com/Smartphone_ShootOut_2.htm
http://www.anandtech.com/show/6334/iphone-5-screen-performance

Edit: another interesting part of that DisplayMate article is the power consumption numbers.

Interestingly, the iPhone 5 consumes more power than the older iPhone 4... although that's to be expected given the larger, more saturated screen...

The Galaxy S3, with a roughly 45% larger screen and a different technology, consumed just under twice the power of the iPhone 5 (1.3W)... despite being significantly dimmer...
The article suggests that AMOLEDs are very young in their evolution and are currently about 4x less efficient than IPS LCDs at the same brightness and pixel density...

Saying that... AMOLEDs seem to look a lot brighter than they are due to the amazing contrast ratio and eye-popping oversaturated colours... so they can often get by with lower brightness... hence when you compare the two screens, the AMOLED is competitive with the IPS considering it's bigger, carries more pixels, and is not as bright.
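
As a sketch of the normalization behind that efficiency claim: the comparison boils down to watts per unit of screen area per nit. The power figures below follow the post (1.3W for the S3, just under half that for the iPhone 5), the areas are computed from the 4.0"/4.8" diagonals, and the brightness values are assumed ballpark placeholders rather than the article's exact measurements:

```python
# Panel efficiency sketch: watts per cm^2 per nit (lower = more efficient).
# Powers follow the post above; brightness values are assumed placeholders.

def efficiency(power_w: float, area_cm2: float, brightness_nits: float) -> float:
    return power_w / (area_cm2 * brightness_nits)

iphone5 = efficiency(power_w=0.7, area_cm2=44.1, brightness_nits=550)  # 4.0" IPS LCD
gs3     = efficiency(power_w=1.3, area_cm2=63.5, brightness_nits=300)  # 4.8" AMOLED

print(f"AMOLED uses ~{gs3 / iphone5:.1f}x the power per cm^2 per nit here")
```

With these placeholder inputs the gap comes out around 2-3x; normalizing for pixel density as well, with the article's exact measurements, is presumably how it arrives at the ~4x figure quoted above.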

It would be great to see Samsung calibrate their AMOLEDs to the full sRGB standard, use full RGB subpixels for better picture clarity and colour reproduction, and perhaps integrate the touch controller like the Sharp and LG panels... (or do they do that already?)
 
Two excellent articles comparing the iPhone 5 screen to both the iPhone 4 and the Galaxy S3...

The new Retina display comes factory calibrated to hit the full sRGB certification/standard... in fact, Brian from AnandTech states that the new display is the best factory-calibrated display he has tested in the last few years... including full televisions and PC monitors, with only a $20,000 set comparable...

The other site compares it against the Galaxy S3 and iPhone 4S... and comes to a similar conclusion... amazing... although that article says it can still improve a bit.

The Galaxy S3 suffers from average sunlight legibility... average brightness, terrible colour accuracy compared to the iPhone 5... and slightly worse colour shifts...

Wow, this AMOLED HD is awesome... I can't really believe the hullabaloo over this new screen... has anyone round here got one who can confirm??

http://www.displaymate.com/Smartphone_ShootOut_2.htm
http://www.anandtech.com/show/6334/iphone-5-screen-performance

Confirm what? I can tell you the gamut is very clearly increased to the naked eye and the sunlight visibility is great. The colors look as good as my 95% gamut Clevo laptop screen.
 
Yes, confirm that... have you handled the HTC One X and the Galaxy S3 screens? If so, how do they compare?

Yes. I don't think the average person would notice a difference in quality versus the One X. I haven't done an extensive side-by-side, but on paper it says the gamut is better.

As for the S3, it's nice but oversaturated for my taste. A person with good eyesight can still detect the PenTile too. The One X clearly beats it, and by extension the new iPhone screen does as well.
 
As for the S3, it's nice but oversaturated for my taste. A person with good eyesight can still detect the PenTile too. The One X clearly beats it, and by extension the new iPhone screen does as well.
That's the bit that bothers me: no one out there has tried the "natural" mDNIe profile on the S3. All this talk about saturation and gamut is absolutely useless if they just compare the default setting of the screen.
 
That's the bit that bothers me: no one out there has tried the "natural" mDNIe profile on the S3. All this talk about saturation and gamut is absolutely useless if they just compare the default setting of the screen.

That's what a vast majority of users will see. It's Samsung's job to make the factory calibration the best one.
 