And here's an iPhone 7 Plus for comparison. Not much higher than the 7 Plus for single core, but looks like a roughly 2/3 higher multicore score?

Is iOS optimized for multi-core operations?
That's a weird question. What do you have in mind? Generally speaking, iOS uses extra resources quite well, leveraging the GPU, the image processor, and now the AI coprocessor, for instance.
From this video: https://developer.apple.com/videos/play/fall2017/604/ - the A11 GPU's texture filtering rate is two times greater than the A10's. PowerVR Furian, maybe?
What are you fishing for?
There aren't any Apple employees here leaking company secrets, unfortunately. We only have access to the information that is in the public domain. The phone hasn't been available for independent experimentation or investigation. Since you have access to Apple developer material you should be well aware of this.
 
Legit Geekbench result according to John Poole (of Primate Labs). Link
Comparing closed source benchmark results across different architectures with no real idea of frequency or power-consumption during the benchmark, or of working set sizes, probably makes me a moron, but I feel like the integer score (~4700) especially is potentially impressive. It seems to have perf comparable to a 2017 MacBook Pro 13" on this benchmark. Does GB4 take any pains to measure steady-state performance, or is it just measuring max-turbo performance for everyone?
 
6-8GB RAM would encourage software bloat and would risk making apps run like crap on just about all other iOS devices.

Besides, refreshing DRAM that hardly ever goes used for most people is a drain on battery power.

The reason you stick more memory in a phone is to reduce app tombstoning and restarts; it saves power. It also improves the user experience.

Cheers
 
Does GB4 take any pains to measure steady-state performance, or is it just measuring max-turbo performance for everyone?

They don't. To add insult to injury, the individual subtests are so short (<1s) that processors, especially older ones, don't rev up until the subtest is over.

With SpeedStep and turbo enabled, my Westmere Xeon at home never gets above 2.8GHz in a GB4 run, on a 3.46GHz base / 3.73GHz turbo processor!
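
If you want to eyeball the ramp yourself, here's a minimal sketch: it hammers one core with fixed-size chunks of integer work and prints throughput over the first few seconds, so a slow governor shows up as a climbing curve. It's plain Python, so the absolute numbers mean nothing (interpreter overhead dominates) and the chunk size is arbitrary; only the shape of the curve matters.

import time

def spin(n):
    x = 0
    for i in range(n):
        x ^= i * 2654435761  # cheap integer work to keep the core busy
    return x

CHUNK = 1_000_000
t0 = time.perf_counter()
while time.perf_counter() - t0 < 3.0:   # watch the first ~3 seconds
    start = time.perf_counter()
    spin(CHUNK)
    end = time.perf_counter()
    print(f"t={end - t0:5.2f}s  {CHUNK / (end - start) / 1e6:5.1f} Mops/s")

On a slow-ramping machine the throughput keeps rising well past the point where a sub-second subtest would already be finished.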

Cheers
 
The reason you stick more memory in a phone is to reduce app tombstoning and restarts; it saves power.
It'd have to save many millions of times the power of refreshing 8GB of RAM to be worth it. How often do apps have to restart for average users, and is it meaningful in the big scheme of things? Doesn't the battery last all day anyway for most of us? (Not counting, here, the incessantly scrolling corner-case maniacs who can't keep their hands off their phones no matter what they're doing or what situation they're in...)

It also improves the user experience.
User experience is just fine on my 2GB iP7... :p
 
Comparing closed source benchmark results across different architectures with no real idea of frequency or power-consumption during the benchmark, or of working set sizes, probably makes me a moron, but I feel like the integer score (~4700) especially is potentially impressive. It seems to have perf comparable to a 2017 MacBook Pro 13" on this benchmark. Does GB4 take any pains to measure steady-state performance, or is it just measuring max-turbo performance for everyone?
The working set sizes are the same for all platforms and detailed in the white paper. Also companies licensing the benchmark have full source code access.

They don't. To add insult to injury, the individual subtests are so short (<1s) that processors, especially older ones, don't rev up until the subtest is over.

With SpeedStep and turbo enabled, my Westmere Xeon at home never gets above 2.8GHz in a GB4 run, on a 3.46GHz base / 3.73GHz turbo processor!

Cheers
That's an issue with SpeedStep and turbo on your machine. The tests are plenty long enough to represent proper interactive workloads. Mobile devices ramp up with response times as low as 10ms.
 
It'd have to save many millions of times the power of refreshing 8GB of RAM to be worth it.
LPDDR4 refresh power is between 0.6 and ~2 mW/GB, so the difference between 3GB and 6GB comes to roughly 2-6mW.

Your iPhone 7 has a 7.45Wh battery (1960mAh @ 3.8V); going from 3GB to 6GB would eat 0.048 to 0.144Wh per day, or roughly 0.6 to 1.9% of your battery capacity - in the noise.
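
If you want to check the arithmetic, here it is as a few lines of Python. The 2-6mW delta and the 1960mAh @ 3.8V battery are the figures from above; the rest is just unit conversion.

# extra LPDDR4 refresh power for 3GB more RAM, rounded figures from above
extra_mw = (2, 6)                 # mW
battery_wh = 1.960 * 3.8          # Ah * V = ~7.45Wh

for mw in extra_mw:
    wh_per_day = mw / 1000 * 24   # W * h = Wh per day
    pct = 100 * wh_per_day / battery_wh
    print(f"{mw} mW extra -> {wh_per_day:.3f} Wh/day = {pct:.1f}% of the battery")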

Cheers
 
That's an issue with SpeedStep and turbo on your machine. The tests are plenty long enough to represent proper interactive workloads. Mobile devices ramp up with response times as low as 10ms.

But Geekbench isn't an interactivity benchmark, if it was, the choice of subtests would be downright bizarre (LLVM code generation and ray tracing). Geekbench is a single/multi threaded CPU benchmark.

The individual subtests are less than a second each, with a two-second pause between them, "to avoid thermal issues". This skews results in two ways:
1. Lowers benchmark scores for systems with a slow ramp of operating frequency (>1s on Westmere, but still 110ms on Haswell CPUs).
2. Artificially inflates benchmark scores for systems with limited power-dissipation capability (every mobile platform out there). In effect, average power is one third of the actual power draw while doing work.
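
To put rough numbers on point 2: with ~1s of work followed by a ~2s pause, the duty cycle is about a third. The power figures below are invented purely to illustrate the effect, not measurements of any real SoC.

work_s, pause_s = 1.0, 2.0        # per-subtest run time and cool-down pause
active_power_w = 3.0              # hypothetical SoC power while a subtest runs
idle_power_w = 0.3                # hypothetical near-idle power during the pause

avg_w = (active_power_w * work_s + idle_power_w * pause_s) / (work_s + pause_s)
print(f"duty cycle: {work_s / (work_s + pause_s):.0%}")              # ~33%
print(f"average power: {avg_w:.2f}W vs {active_power_w}W sustained")

A device that can't dissipate 3W continuously has no trouble averaging ~1.2W, so the burst-pause pattern lets it post scores it couldn't sustain.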

Cheers
 
But Geekbench isn't an interactivity benchmark, if it was, the choice of subtests would be downright bizarre (LLVM code generation and ray tracing). Geekbench is a single/multi threaded CPU benchmark.

The individual subtests are less than a second each, with a two-second pause between them, "to avoid thermal issues". This skews results in two ways:
1. Lowers benchmark scores for systems with a slow ramp of operating frequency (>1s on Westmere, but still 110ms on Haswell CPUs).
2. Artificially inflates benchmark scores for systems with limited power-dissipation capability (every mobile platform out there). In effect, average power is one third of the actual power draw while doing work.

Cheers
True, but then that is an issue with all benchmarking, isn't it?
You always have to understand the tool.
I wouldn't say that pausing between subtests inflates the scores of mobile devices; they really do perform at the level shown under these conditions. And those conditions (bursty workloads) are pretty relevant for many devices. If you make runtimes really long (say half an hour or so, to ensure thermal equilibrium is reached) you end up with the same problem at the other end of the spectrum: does this really represent the workload you want to model? Add to that that running in a thermally throttled setting generally makes the results less repeatable. Devices don't reach thermal equilibrium equally fast, so just how long/short should a benchmark run be to be "fair"? Is the device held in hand? Is the weather warm?

Having short runs both helps repeatability and simply makes the benchmarking more convenient. Plus, it is probably a better model for actual use than the other repeatable extreme, the thermal-equilibrium run. Not a good model for 24/7 server use, but then, what iPhone does that....

Geekbench 4 is actually a pretty good benchmark for what it is. You just have to be aware of its limitations.
(An issue cropped up with the earliest Geekbench runs, for instance, where a subtest was scheduled to a slower core because the tasks run at normal rather than maximum priority. Results always have to be checked for sanity.)
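
For what it's worth, when rolling your own microbenchmarks on Linux you can rule that particular failure mode out by pinning the process to a known-fast core and raising its priority before timing anything. A rough sketch, not how Geekbench does it; core 4 is an arbitrary example, and raising priority needs privileges:

import os, time

os.sched_setaffinity(0, {4})          # Linux-only: pin to core 4 (pick one that exists)
try:
    os.nice(-5)                       # raise priority; requires privileges
except PermissionError:
    pass                              # fall back to normal priority

start = time.perf_counter()
sum(i * i for i in range(5_000_000))  # stand-in workload
print(f"pinned run: {time.perf_counter() - start:.3f}s")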
 
But Geekbench isn't an interactivity benchmark, if it was, the choice of subtests would be downright bizarre (LLVM code generation and ray tracing). Geekbench is a single/multi threaded CPU benchmark.

The individual subtests are less than a second each, with a two-second pause between them, "to avoid thermal issues". This skews results in two ways:
1. Lowers benchmark scores for systems with a slow ramp of operating frequency (>1s on Westmere, but still 110ms on Haswell CPUs).
2. Artificially inflates benchmark scores for systems with limited power-dissipation capability (every mobile platform out there). In effect, average power is one third of the actual power draw while doing work.

Cheers
On a Galaxy S8 / A73 the shortest subtest is 1.3s and the longest one 8s.

Show me a real-world sustained workload scenario where loads are longer than that period. There is none. You don't do compiling or encoding on mobile devices. The tests selected in GB4 are a very fair selection of various workloads that would be representative of the real world. The LLVM test, for example, might be seen as a representation of runtime compilation, which does happen.

On mobile devices you don't care what the performance is for a constant 15-minute workload; you care about performance when you open/load an app or open a website. This is what 95% of use-cases are about. It's also the way to measure the peak performance of the chip and architecture.

If Westmere is that slow to ramp then it's also utterly shit, and it's misrepresented by all the JS benchmarks out there, because that performance will never be reached in the real world either. The benchmark also has to be realistically usable on lower-end devices, and the lowest common denominator here already takes several minutes to go through a benchmark run. The workload complexity is absolutely fine and it's the best jack-of-all-trades benchmark out there. But of course there are people who'll request SPEC2006 on a phone without realizing that it takes hours to run.
 
This new Apple GPU should be roughly in the same performance range as the Intel GPUs in the MacBook line, and at about 30-40% of the Intel GPUs in the MacBook Pro line. I wonder if we'll ever see Apple supplying their own GPUs for their own devices, except in the cases where people are using the AMD or Nvidia options.
 
On a Galaxy S8 / A73 the shortest subtest is 1.3s and the longest one 8s.

Show me a real-world sustained workload scenario where loads are longer than that period. There is none. You don't do compiling or encoding on mobile devices. The tests selected in GB4 are a very fair selection of various workloads that would be representative of the real world. The LLVM test, for example, might be seen as a representation of runtime compilation, which does happen.

On mobile devices you don't care what the performance is for a constant 15-minute workload; you care about performance when you open/load an app or open a website. This is what 95% of use-cases are about. It's also the way to measure the peak performance of the chip and architecture.
Wouldn't most 3D game sessions or augmented reality apps, which are extremely popular at the moment, count as sustained workload scenarios well past 8 seconds? An 8-second test does not represent what a very large number of users do on phones. Those users would care about what the performance is for a constant 15-minute workload.
 
6-8GB RAM would encourage software bloat and would risk making apps run like crap on just about all other iOS devices.

Besides, refreshing DRAM that hardly ever goes used for most people is a drain on battery power.
I'm not sure about that. Apple creates these legacy software issues in the first place by cheaping out on RAM in what is routinely the most expensive phone bar boutique brands.
They offer years' worth of software updates, which is commendable, but on phones with 512MB and 1GB of RAM it has caused no end of issues.
The idea that they would be averse to industry-standard amounts of RAM because they cheaped out on their old phones is daft.
I think the whole "more RAM = less battery life" idea has been disproven: the OnePlus 5 has 8GB, offers respectable battery life in line with phones with 3-4GB of RAM, and has a moderately sized battery.
Most of the RAM will go unused, but the stuff you do use is more efficient to keep in RAM than to keep loading the app from cold. In no way do I think 8GB is useful for a smartphone in 2017, but history tells us future software gobbles up RAM faster than forward-thinking projections suggest. Plus, if a $450 smartphone has 8GB, I want at least 6GB in a $1000 one. Apple has been pulling these tricks for years, intentionally setting their phones up to run slow in the future = buy a new iPhone due to lock-in.
 
I really wonder about that 4K video recording capability.

I haven't used it at all on my 6S Plus because that was the first implementation of 4K video on iOS.

Now if I record 4K60 videos, will I be able to Airplay those videos to Apple TV 5 (announced today) connected to a 4K TV?

But does it even record HDR? Actually, only the iPhone X has HDR rendering (not recording), while the iPhone 8, which has the same A11 SoC, does not render or record HDR?
Good questions, not sure about that. The display is likely a Samsung AMOLED (or LG?) and supports HDR10; not sure if that's full 10-bit or the mobile version (8-bit?). But I would think it would be major kudos to Apple if it could record HDR; no matter, 4K at 60fps is pretty mind-blowing by itself.
Not sure AirPlay would have the bandwidth for that, but I don't know the specs in any case.
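
Back-of-the-envelope, bandwidth itself is probably not the problem. A quick sanity check, assuming roughly 400MB per minute for 4K60 HEVC (an approximate figure; the real bitrate varies with the scene) and a conservative ~200Mbit/s of usable 802.11ac throughput:

video_mb_per_min = 400                         # assumed 4K60 HEVC recording rate
video_mbit_s = video_mb_per_min * 8 / 60       # ~53 Mbit/s
wifi_mbit_s = 200                              # conservative usable 802.11ac throughput
print(f"4K60 HEVC stream: ~{video_mbit_s:.0f} Mbit/s")
print(f"Wi-Fi headroom: ~{wifi_mbit_s / video_mbit_s:.1f}x")

So raw bandwidth should be fine; whether AirPlay and the new Apple TV actually pass a 4K60 stream through is a separate question.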
 
I looked at the spec pages. It doesn't record video in HDR but of course has HDR photo modes.

The iPhone X has an HDR display, the iPhone 8 does not, though there's no reason they can't have HDR on an LCD. It won't be as good as OLED, but it would be better than an LCD with no HDR.
 
Wouldn't most 3D game sessions or augmented reality apps, which are extremely popular at the moment, count as sustained workload scenarios well past 8 seconds? An 8-second test does not represent what a very large number of users do on phones. Those users would care about what the performance is for a constant 15-minute workload.
3D games are not CPU-heavy, and there's no workload which maxes out even what are now mid-range devices. AR apps are hopefully mostly GPU-accelerated.
 
I looked at the spec pages. It doesn't record video in HDR but of course has HDR photo modes.

The iPhone X has an HDR display, the iPhone 8 does not, though there's no reason they can't have HDR on an LCD. It won't be as good as OLED, but it would be better than an LCD with no HDR.
HDR photo modes use multiple exposures. That is not possible when recording video, and the native dynamic range of the sensor is likely modest. That said, you can still map the tones you do capture to an HDR format if you want.
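
To make the multiple-exposure idea concrete, here's a toy exposure-fusion sketch in the spirit of Mertens-style fusion: merge bracketed shots with weights that favour well-exposed pixels. Purely illustrative, not Apple's pipeline; fuse_exposures and its inputs (aligned frames normalised to [0, 1]) are invented for the example.

import numpy as np

def fuse_exposures(shots):
    # shots: aligned float images of shape (H, W, 3), values in [0, 1], shot at different exposures
    weights = []
    for img in shots:
        # favour pixels near mid-grey, i.e. neither blown out nor crushed
        w = np.exp(-((img.mean(axis=-1) - 0.5) ** 2) / (2 * 0.2 ** 2))
        weights.append(w)
    weights = np.stack(weights)
    weights /= weights.sum(axis=0, keepdims=True) + 1e-8   # normalise per pixel
    return sum(w[..., None] * img for w, img in zip(weights, shots))

# toy usage with random stand-in "exposures"
dark, mid, bright = (np.clip(np.random.rand(4, 4, 3) * s, 0, 1) for s in (0.3, 1.0, 1.8))
print(fuse_exposures([dark, mid, bright]).shape)   # (4, 4, 3)

A video stream only gives you one exposure per frame, which is exactly why this trick doesn't carry over to recording.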

Do you want reasons to buy, or to abstain? ;-)
 