Next-Gen iPhone & iPhone Nano Speculation

Fibble has been a fun, action-based puzzle game these past few days on my iPad 2. Crytek lists some of the technical details:

Gamma correct rendering pipeline (lighting and post processing effects are calculated in linear space)
HDR rendering (adaptive range)
Tone mapping
Film-like post-process effects: Vignetting & Film grain
Camera motion blur
Depth of Field
Irradiance volume based lighting for dynamic objects
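For anyone curious what the first item on that list actually buys you, here's a minimal sketch of gamma-correct lighting, assuming the common gamma-2.2 approximation of sRGB. It's an illustration of the general technique, not Fibble's actual shader code:

```python
# Minimal sketch of gamma-correct lighting, assuming a gamma-2.2 approximation
# of sRGB. Decode texture values to linear light, do the lighting math there,
# then re-encode for display.

def srgb_to_linear(c, gamma=2.2):
    return c ** gamma

def linear_to_srgb(c, gamma=2.2):
    return c ** (1.0 / gamma)

stored = 0.5  # a mid-grey as stored in an sRGB texture

# "Wrong": attenuate the light directly on the encoded value.
naive = stored * 0.5

# "Right": decode, attenuate in linear space, re-encode.
correct = linear_to_srgb(srgb_to_linear(stored) * 0.5)

print(round(naive, 3), round(correct, 3))  # 0.25 vs ~0.365 -- visibly different greys
```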

http://www.youtube.com/watch?v=QZpcrMIcUeU

The iPad 2 hasn't dropped any frames in my time with it, so it's obviously been nicely polished. The real-time cut scenes really show off some of the high quality post processing effects.

The game could benefit from some strong image anti-aliasing and a little more detail in the models and textures, but it's quite impressive overall, especially the lighting. And while I'm not a big fan of DoF and motion blur, they're nice as subtle touches within the levels at times.
 
Pretty pricey for a puzzle game. Does it produce graphics as good as Infinity Blade?

Or almost as good graphics but more game content?
 
Where Fibble excels with its lighting and cinematic camera effects, UE3 games like Infinity Blade II can boast some exceptional model and texture detail, using offset, specular, and normal maps to great effect. Both can be equally impressive; I tend to prefer the overall visual balance in Infinity Blade II myself.

Fibble is actually a little light on content in its current launch state, but I've still gotten enough play and enjoyment out of it to justify the price already.

A few shots I captured of Fibble:

[screenshot attachments]
 
One of the users at NeoGAF.com mentioned that his company just released a game called Gunman Clive.

http://www.youtube.com/watch?v=PMDfq8PLfLM

I was impressed with the video, so I gave it a shot. Plays fluidly, and the controls are tight considering it uses an on-screen control pad.

The game has a decent amount of variety in its platforming challenges and enemy attacks across a fair number of levels. Highlighted by its sketchbook art style and fitting Old Western background music, it's a gem of a game.
 

Stunning;)

Originally Posted by ltcommander.data
The A5X achieves 12.8GB/s of bandwidth by using LPDDR2-800 and 4 32-bit memory controllers while Exynos 5250 appears to reach 12.8GB/s of bandwidth using LPDDR3-1600 and 2 32-bit memory controllers. Apple presumably did it their way because it's easier to just double the existing memory controllers than design a new one for LPDDR3

So, that means the A6 should get the same setup... 12.8GB/s using two 32-bit LPDDR3 channels...
The A6X, however, if they follow the same design layout, would have a console-beating 25.6GB/s! That's 4 x 32-bit LPDDR3 @ 1600MHz... a quad Rogue... at least 2 A15s + an A7... 1-2GB of RAM @ 32nm HKMG... maybe they will start sticking in some dedicated video memory when they get to that stage, and enable OpenGL ES 3.0 (Halti) and OpenCL with iOS 6... goodbye Wii U...
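For anyone who wants to check the numbers, the bandwidth arithmetic is simple enough to sketch out (the four-channel LPDDR3 "A6X" line is of course pure speculation):

```python
# Peak memory bandwidth = effective transfer rate x bus width x channel count.
def bandwidth_gbps(transfers_mt_s, bus_bits, channels):
    return transfers_mt_s * (bus_bits / 8) * channels / 1000.0

print(bandwidth_gbps(800, 32, 4))    # A5X: LPDDR2-800, four 32-bit channels -> 12.8 GB/s
print(bandwidth_gbps(1600, 32, 2))   # Exynos 5250: LPDDR3-1600, two 32-bit channels -> 12.8 GB/s
print(bandwidth_gbps(1600, 32, 4))   # speculative "A6X": LPDDR3-1600, four channels -> 25.6 GB/s
```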
 
True multi-core (where each duplicated core could work independently if it were by itself) almost inherently implies overhead/redundancy; the benefit of using it is that the GPU IP designers don't have to spend the R&D to design a custom core with that specific number of pipelines.

While PowerVR's approach does allow for true multi-core, they first try to offer custom cores with a scaled number of ALU constructs (call them pipelines in Series5 and clusters in Series6, I suppose) for each of the most common performance levels they anticipate their customers wanting. I'm guessing the G64xx is just one core with four ALU clusters rather than a quad-core GPU, and the G62xx is a single core too but with two clusters.

I can only imagine the A6X using an actual multi-core Rogue if IMG didn't already have a custom core available with the required number of ALUs/TMUs.
 

Yea, I suppose... but they will be going to a new architecture... aside from the ALUs/TMUs, efficiency will play a part...
 
True multi-core (where each duplicated core could work independently if it were by itself) almost inherently implies overhead/redundancy; the benefit of using it is that the GPU IP designers don't have to spend the R&D to design a custom core with that specific number of pipelines.

While PowerVR's approach does allow for true multi-core, they first try to offer custom cores with a scaled number of ALU constructs (call them pipelines in Series5 and clusters in Series6, I suppose) for each of the most common performance levels they anticipate their customers wanting. I'm guessing the G64xx is just one core with four ALU clusters rather than a quad-core GPU, and the G62xx is a single core too but with two clusters.

Obviously none of us have any details on Rogue yet, but from the sound of the few sparse details announced so far, it could be that each cluster contains 2 TMUs, meaning it's one of those things where you ask yourself whether there's any significant difference between calling it multi-core or multi-cluster. Compared to Series5XT, most likely yes, since the geometry ratings announced for the A9600 are quite high; I suspect some serious advancements in the geometry department in Series6, which would also have been a reasonable design goal considering DX11/tessellation.

Another blank spot is what the ALUs actually look like. The only thing I can figure out with backwards speculative math from the A9600 numbers is, in typical desktop marketing parlance, 160 SPs divided over the four compute clusters (40/cluster) of the GC6400. Let's call them "cores", since counting ALU lanes as cores is quite fashionable in marketing lately :devilish:
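To show the kind of backwards math I mean, here's a sketch assuming the usual 2 FLOPs (one MADD) per ALU lane per clock; the GFLOPS figure and clock below are placeholder assumptions, not announced specs:

```python
# Back-of-the-envelope: how many ALU lanes would a quoted GFLOPS figure imply?
# Assumes 2 FLOPs (one MADD) per lane per clock; both inputs are hypothetical.
quoted_gflops = 210.0      # placeholder figure for an A9600-class Rogue
assumed_clock_ghz = 0.65   # placeholder clock

lanes = quoted_gflops / (2 * assumed_clock_ghz)
print(round(lanes))                      # ~162 -> call it 160 "SPs"
print(round(lanes) // 4, "per cluster")  # spread over four clusters -> ~40 each
```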

I can only imagine the A6X using an actual multi-core Rogue if IMG didn't already have a custom core available with the required number of ALUs/TMUs.

No idea; the relevant announcement about GC6200/GC6400 gave me the impression that both were available at the same time.
 
Yeah, quite interesting.

Nearly half the size at 69mm² compared to 122mm².
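As a quick sanity check, ideal area scaling from 45nm to 32nm predicts roughly half the area, and the reported die sizes land close to that:

```python
# Ideal (linear-dimension) area scaling from 45nm to 32nm vs. the reported die sizes.
a5_45nm_mm2 = 122.0
a5_32nm_mm2 = 69.0

ideal_scale = (32 / 45) ** 2
print(round(ideal_scale, 2))                # ~0.51 -> an ideal shrink would be ~62mm2
print(round(a5_32nm_mm2 / a5_45nm_mm2, 2))  # ~0.57 -> the shrink actually achieved
```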

Perhaps they did try an SGX543MP2 at 500MHz for the new iPad before realizing it needed a 128-bit memory controller?

The 70% larger battery combined with this 32nm A5 would have given nice battery life, even though the backlight would still eat most of it.

That could make it suitable for an iPod Touch refresh.
 
A 32nm A5 for a new iPod Touch does make the most sense. I wonder what this all means for their iOS support cycle, though? Currently, each generation of device gets three iOS versions, equivalent to three years of support. With iOS 6, the 2009 iPhone 3GS and 3rd-gen Touch will presumably get dropped and the A4 will be the minimum. For iOS 7, the A4 will presumably be dropped, leaving the 2011 non-refresh iPod Touch with only 2 years of software support. Given Apple refers to these as the iPod Touch 2010, it isn't unexpected, and they would consider it similar to buying an iPhone 4 now that the iPhone 4S is released.

The question is what happens with iOS 8, which will likely drop the A5. Does the 2012 32nm A5 iPod Touch get dropped with the other A5 devices, solidifying a 2-year OS support period for the iPod Touch? Does it retain iOS 8 support, maintaining the 3-year support period while the other A5 devices get arbitrarily dropped? Or do all A5 devices, 45nm and 32nm, remain supported, meaning iPhones and iPads move to a 4-year support cycle?
 
Looks like Sammy's 32nm HKMG process is getting a test run on the Apple TV's A5...
http://www.anandtech.com/show/5740/apple-tv-a5-soc-is-32nm-harvested-dualcore-a5

Apple's revision of the iPad 2 is also using this new process, so battery life gains will be interesting to look out for...

Yep, I would very much like to know how the battery life is on that thing. I just bought the new iPad, and even though the screen is great, the battery situation is quite a step backwards. The new pad has shorter battery life and it takes over 50% longer to charge. It is also noticeably heavier, and you notice each of those things in use. I was debating whether I should buy a used iPad 2 or the new one, and I'm not regretting my choice, but it's not all roses. Even the screen causes some issues, because you run into blurry upscaled stuff quite often, but still, the display is very nice.
 

Yea, sometimes you can't just 'big battery' your way out of it! :smile:
Maybe they could have included some specialised upscaling hardware...
 
Considering that the new iPad is roughly the same size as the iPad 2 but with a 70% larger battery, while at the same time maintaining roughly the same battery life, I think it's safe to assume that the new iPad consumes roughly 70% more power than the iPad 2.

From the thermal image, the ambient temperature looks to be around 21~22°C. That means the hottest point of the iPad 2 is around 6~7°C above ambient, while the new iPad is 12~13°C above, so that's roughly 85%~100% more heat. Not much more than the estimated 70%.
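Putting rough numbers on that (the midpoint temperatures are my own reading of the thermal shots, so treat them as approximations):

```python
# Rough heat/power comparison between iPad 2 and the new iPad from the figures above.
ambient_c = 21.5
ipad2_hotspot_c = ambient_c + 6.5    # ~6-7 C above ambient
ipad3_hotspot_c = ambient_c + 12.5   # ~12-13 C above ambient

# Using the temperature rise above ambient as a crude proxy for heat dissipated.
heat_ratio = (ipad3_hotspot_c - ambient_c) / (ipad2_hotspot_c - ambient_c)
print(round(heat_ratio, 2))   # ~1.92 -> roughly 85-100% more heat

# Battery-based estimate: ~70% more capacity drained over the same runtime.
power_ratio = 1.70
print(power_ratio)            # ~70% more average power draw
```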

It may be possible to spread the heat more evenly to reduce the hot spot, but it looks like the new iPad is already quite evenly heated.

Furthermore, I wouldn't put all the blame on the A5X. Yes, it's possible that on a better process the SoC could be more power efficient, but the elephant in the room is still the new screen and the backlight it requires. In general, a screen with smaller pixels blocks more light, so it needs a stronger backlight to maintain the same brightness. However, macro images from Anandtech here show that the iPhone 4S's screen may actually be more efficient than the new iPad's. So maybe there's still some hope.

Sharp's new IGZO panel will hopefully fix that. It is supposed to consume only 20% of the power of the current new iPad display.
 
I was anticipating the new display so much, but my enthusiasm has been muted by the modest SoC improvements and the fact that they had to put in a bigger, heavier battery which takes longer to charge.

I may still pick it up at some point, but these rumors of a new iPad later this year instead of next spring aren't helping.
 
The new iPad is a lesson in brute force. A non-IGZO display with double the power draw. A huge SoC on an old process. A huge GPU with its successor technology months away. If this iPad had been built next year, it would be totally different, and the battery would be much closer to iPad 2 size.
 

Well, the 4th generation iPad sure has some things going for it then.
  • 32nm or lower lithography
  • Lower power panel (still at 2048 x 1536 pixels)
  • New generation of GPU and CPU for vastly improved performance
  • Most likely better battery life
 
I wonder, though, if in an effort to improve the power consumption of the "Retina display" they cut back on the LEDs, improving efficiency but diminishing the image quality of the display.
 