So, AnandTech does their usual well-rounded reviews and then they drop this nitty-gritty. The A5X has a quad-channel 128-bit memory controller, although he concludes that only the SGX543MP4 has full access to it. Does that mean part of RAM is effectively unusable for the CPU?
Admittedly hairsplitting, but: ...MULTI CORE Rogue!!.... http://www.pocketgamer.co.uk/r/Multiformat/PowerVR+Series+6/news.asp?c=39223
2 dedicated MCs that are not even connected to the CPU? Why would you want to do that for an SoC? Seems like an incredible waste of potential BW when you're not doing GPU-intensive tasks.
But even if it is as they say it is, it's weird that CPU perf doesn't increase one bit. You'd expect at least some improvement if the GPU traffic has been off-loaded to a different MC? That would make for an interesting separate benchmark.
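For what it's worth, the "interesting separate benchmark" would basically be a streaming copy over a buffer much larger than the caches, run once with the GPU idle and once under heavy GPU load. A rough sketch of the idea (numpy is only to show the shape of the test and the buffer size is an arbitrary pick; an actual measurement on the iPad would need a native, STREAM-style benchmark):

```python
# Rough sketch of a CPU memory-bandwidth test: stream a buffer far larger
# than the L2 cache and time the copy. Run it with the GPU idle and again
# under heavy GPU load to see whether CPU-visible bandwidth changes.
import time
import numpy as np

N = 32 * 1024 * 1024              # 32M floats ~ 128 MB, well past any cache
src = np.ones(N, dtype=np.float32)
dst = np.empty_like(src)

start = time.perf_counter()
np.copyto(dst, src)               # read src + write dst
elapsed = time.perf_counter() - start

bytes_moved = 2 * src.nbytes      # count both the read and the write
print(f"~{bytes_moved / elapsed / 1e9:.1f} GB/s effective copy bandwidth")
```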
Anand mentioned the bottleneck in Cortex A9 designs is the L2 cache controller. I'm not sure of the details, but he said the A15 corrects things.
Yeah, I did read something like that before; it's getting difficult to pick out just what is a core and what isn't.
As Anand's review states, the A5X has 12.8GB/s of bandwidth... and Sammy states the Exynos 5250 also has 12.8GB/s... With Sammy of course making both, does that mean we will see a similar design in the Exynos (quad channel? or LPDDR3?)
As Pressure mentioned, it's not likely the GPU and CPU have different connections to the 4 memory controllers. Instead, all 4 memory controllers are attached to the GPU. Attaching the memory controllers to the GPU and having the CPU hang off the GPU in the embedded space seems like a very console-like design. I'm guessing the interface between the GPU and CPU is 64-bit, which explains why the memory bandwidth the CPU sees is unchanged.
As an aside, wasn't the + in the Vita's SGX543MP4+ for the GPU having its own memory controller? Of course, the CPU doesn't share the Vita GPU's memory. Does anyone know the Vita's GPU memory bandwidth, for comparison?
The A5X achieves 12.8GB/s of bandwidth by using LPDDR2-800 and 4 32-bit memory controllers, while the Exynos 5250 appears to reach 12.8GB/s of bandwidth using LPDDR3-1600 and 2 32-bit memory controllers. Apple presumably did it their way because it's easier to just double the existing memory controllers than design a new one for LPDDR3. As Anand mentions, you need a sufficiently large die with a sufficiently large perimeter to have the space to put 4 memory controllers along the edge. With the Exynos 5250 being 32nm and Apple being known for their unusually large dies, it's safe to assume the Exynos 5250 will be quite a bit smaller than the A5X, meaning there likely isn't enough room to place 4 memory controllers.
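The arithmetic behind those numbers is just channels × bus width × effective data rate. A quick sanity check (the A5 line assumes its interface was the 2×32-bit LPDDR2-800 setup these posts imply, which is also why a 64-bit path at the same data rate would leave the CPU at the old 6.4GB/s, as guessed above):

```python
# Peak bandwidth = channels * bus width (bytes) * effective data rate.
def peak_bw_gbps(channels, bus_bits, megatransfers_per_s):
    return channels * (bus_bits / 8) * megatransfers_per_s * 1e6 / 1e9

print(f"A5   (2 x 32-bit LPDDR2-800) : {peak_bw_gbps(2, 32, 800):.1f} GB/s")   # 6.4
print(f"A5X  (4 x 32-bit LPDDR2-800) : {peak_bw_gbps(4, 32, 800):.1f} GB/s")   # 12.8
print(f"5250 (2 x 32-bit LPDDR3-1600): {peak_bw_gbps(2, 32, 1600):.1f} GB/s")  # 12.8
```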
But don't both the CPU and the GPU have access to the full 1GB of memory, and therefore need to make use of all memory controllers? Can you actually separate the memory controllers into CPU-focused and GPU-focused functions?
Needless to say, throwing the CPU requests into this mix can be pretty detrimental. I can easily see having separate, asymmetrical memory controllers being beneficial in this case: the ones servicing the CPUs can send requests as soon as they get them, minimizing latency, while the ones servicing the GPU can do a lot of request combining and queuing, using bandwidth efficiently.
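To make the request-combining argument concrete, here's a toy model (completely made-up timing numbers, and nothing to do with how the A5X's controllers are actually built): a latency-oriented controller issues each request the moment it arrives, while a throughput-oriented one queues a batch and groups requests by DRAM row, so each row activation is amortised over many transfers.

```python
# Toy model: in-order (latency-oriented) vs. batched-by-row (throughput-
# oriented) servicing of a mixed CPU+GPU request stream. Costs are
# illustrative only, not real LPDDR2 timings.
from collections import defaultdict

ROW_ACTIVATE_COST = 10   # hypothetical cycles to open a new DRAM row
TRANSFER_COST = 1        # hypothetical cycles per request once the row is open

def service_in_order(requests):
    """Issue each request immediately, in arrival order."""
    cycles, open_row = 0, None
    for row, _addr in requests:
        if row != open_row:
            cycles += ROW_ACTIVATE_COST
            open_row = row
        cycles += TRANSFER_COST
    return cycles

def service_batched(requests):
    """Queue the batch and issue all requests to the same row back to back."""
    by_row = defaultdict(list)
    for row, addr in requests:
        by_row[row].append(addr)
    reordered = [(row, addr) for row, addrs in by_row.items() for addr in addrs]
    return service_in_order(reordered)

# A GPU walking a framebuffer linearly, interleaved with scattered CPU reads --
# roughly the mix a fully shared controller would have to absorb.
gpu_stream = [(addr // 64, addr) for addr in range(0, 4096, 4)]
cpu_stream = [((addr * 37) % 64, addr) for addr in range(256)]
mixed = [req for pair in zip(gpu_stream, cpu_stream) for req in pair]
mixed += gpu_stream[len(cpu_stream):]

print("issued in arrival order:", service_in_order(mixed), "cycles")
print("batched by row         :", service_batched(mixed), "cycles")
```

Obviously a single shared, smart controller could do the same reordering; the point is just that the two kinds of traffic want opposite scheduling policies, which is what makes asymmetrical controllers plausible.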
When the resolution gap was relatively small because the iPhone had moved up to its high pixel density display yet the iPad still hadn't, the two product lines could reasonably share the same SoC. Now that the iPad has moved up to its own high density display, I assume it'll get the "X" variant of each SoC going forward.
So, the next iPad will get an A6X while the iPhone will get the A6. While a small possibility exists that this year's iPhone gets the 32/28nm G64xx-based A6, it's more likely that it gets a process-shrunk A5 this year, as speculated in the AnandTech review. The high-powered, tablet-focused A6X should then debut with the 2013 iPad, followed by the phone-targeted A6 in the 2013 iPhone.