Tegra 3 officially announced; in tablets by August, smartphones by Christmas

And on top of that, doesn't the Transformer Prime use a higher resolution display than the iPad 2? To maintain relatively high frames per second at relatively high resolution all with relatively low power consumption is very nice. I don't see how one could be disappointed with GPU performance after seeing video like this: http://www.youtube.com/watch?v=2U2r3yKg0Ng and this: http://www.youtube.com/watch?v=beW44983Rx8
GL Benchmark does offscreen benchmarking, so all devices render the same scene at the same resolution. All those games would run just fine on year-old PowerVR GPUs; in fact, some of them, like Shadowrun, already do. The problem for some people is that, for a 2012 SoC, Tegra 3 has little GPU power, making it possibly the slowest of the 2012 chips.
 
A SGX543MP2 is perfectly capable of driving just about any content on a 1280x720 screen. I would prefer for them to focus more on lowering the power consumption.

Please define what you mean by "any content", because I don't think you really mean "any content".

Otherwise, the PSVita wouldn't need a SGX543MP4 with dedicated video memory for a 960*540 screen.
 
Please define what you mean by "any content", because I don't think you really mean "any content".

Otherwise, the PSVita wouldn't need a SGX543MP4 with dedicated video memory for a 960*540 screen.

I believe the correct nomenclature is "SGX543MP4+" for the graphics part in the PS Vita.

One would think Sony is aiming for PS3 kind of visuals on that 960 x 540 resolution screen at 60 FPS or something along those lines.

But I digress. I find the biggest let-down of the Tegra 3 to be its GPU performance, and that will be the Achilles' heel of the platform.

It will still take ICS to make the animations seem smooth. Overall I still think a package like the Apple A5 is more desirable.
 
Quite honestly, I think -- without some sort of new use-case for phones -- that we've hit the saturation point for performance needed.
Just one more year and we'll have enough performance to run 1:1 console ports at 720p. As a game developer I would really appreciate that, since it would make porting (high quality) console games to mobile phones a lot more straightforward. You cannot spend millions of dollars to develop a single game just for mobile phones... But if it were possible to port those games at 1:1 quality (graphics, physics, gameplay, AI, online features, etc.), we could see a massive improvement in the quality of games on mobile devices.
 
Agreed, although I think my 'good enough' point is probably higher than yours.

So when is that "good enough" point reached for a significant number of consumers? Now? 28nm? 20nm?
My iPad 2 already has slightly better performance than the 17" lamp-style iMac my elderly mother uses for all her computing and web-surfing needs. By 20nm we are down to the same gamers and video transcoders who have justified all the advancements on the desktop for the better part of the last decade.

People may want to connect to a bigger screen for a lot of reasons, and to other input devices for efficiency of input, obviously. Connecting to a bigger screen is basically the issue, since input can already be taken care of via Bluetooth if need be. Most people just don't/won't need a laptop/desktop computer system for the sake of its computing power, just for the sake of backing up their mobile units. And that task can be taken over by the cloud (and possibly dedicated copy stations/disks).

My contention is this - the upcoming generation of mobile devices will have sufficient computing resources to be the sole computing device of their owners. Inertia (on many fronts) is the only thing keeping x86 PCs from a much sharper decline in volume than what we are seeing.
 
I wonder whether it'll be a case of OSes starting to bloat as SoCs get more powerful, mimicking what happened on the desktop.

Also, there are probably many application areas still waiting for the right performance point before they can become mature technologies in handsets. In a few years' time we might be laughing at the limitations of Siri, and I know that TI is big into gesture recognition using the front-facing camera; the latter might be something that can benefit from GPU power via OpenCL.

As smartphones are one of (possibly THE) major leading-edge segments driving innovation, it might not be good for the industry overall if there is a maturity point perhaps only two or so years away from now.
 
As smartphones are one of (possibly THE) major leading-edge segments driving innovation, it might not be good for the industry overall if there is a maturity point perhaps only two or so years away from now.

Oh, you should never underestimate the power of inertia as a driving force!
But it's a good idea to recognize it for what it is.
 
Please define what you mean by "any content", because I don't think you really mean "any content".

He probably means any current or upcoming tablet/smartphone-targeted content.

Otherwise, the PSVita wouldn't need a SGX543MP4 with dedicated video memory for a 960*540 screen.

The PSVita is a handheld console, meaning that it not only has to handle console ports, as Pressure already mentioned, but also has to last probably at least half a decade. Are there any =/>5-year design cycles in the smartphone/tablet markets? More like yearly, I'd say.

In terms of pure paper specs, the upcoming TI OMAP5 and Apple's next SoC exceed the theoretical performance of the 543MP4+ in the PS Vita. The former will drive =/>720p screens; for the latter, rumors about a very high resolution screen have been circulating forever.

But to come back to metafor's actual point: the PSVita SoC is manufactured on Samsung's 45nm process. Take a guess how large it is, and whether you could cram something that consumes a handful of watts into a smartphone today.
 
Just one more year and we'll have enough performance to run 1:1 console ports at 720p. As a game developer I would really appreciate that, since it would make porting (high quality) console games to mobile phones a lot more straightforward. You cannot spend millions of dollars to develop a single game just for mobile phones... But if it were possible to port those games at 1:1 quality (graphics, physics, gameplay, AI, online features, etc.), we could see a massive improvement in the quality of games on mobile devices.

Are there such ports being worked on or contemplated?

Yeah, given the prevailing ASPs of mobile games, nobody will spend millions on a mobile game.

People may want to connect to a bigger screen for a lot of reasons, and to other input devices for efficiency of input, obviously. Connecting to a bigger screen is basically the issue, since input can already be taken care of via Bluetooth if need be.

That would be the one trend that would support continued development of more powerful SoCs: more people relying on their mobile devices as their main computing platform, which they could hook up to bigger displays and keyboards or a game controller for general computing or entertainment purposes.
 
What would it take to pair a bluetooth controller to a device?

Could a third party write the libraries and profiles for a common BT controller like the DS3?

Or would there have to be support at the OS level only? Towards the end of the PS2's life, some companies came out with peripherals and announced free SDKs that game developers could use to integrate support for them -- I think it was a headset and maybe some kind of keyboard on a controller.

Of course, Apple could put the kibosh on any game app that tried to use such tools, or maybe they don't allow full access to the Bluetooth stack on iOS?

You would think Google would welcome all comers, though developers and publishers have been more wary of console-quality games on Android.
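
For what it's worth, on a Linux-based stack like Android's much of the heavy lifting sits in the kernel's Bluetooth HID and input layers: once a controller pairs and registers as a standard HID joystick (the DS3 in particular reportedly needs an extra USB pairing step first), it shows up as an ordinary input device that userspace can read. Here is a minimal sketch, assuming a hypothetical device node /dev/input/js0 and permission to open it -- game-specific button mapping would still have to sit on top of this:

```c
/*
 * Hedged sketch: read events from a Bluetooth gamepad once the kernel has
 * registered it as a joystick device. Assumes a Linux-style input stack and
 * the device node /dev/input/js0 (both are assumptions, not a statement
 * about what iOS or any particular Android build actually exposes).
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <linux/joystick.h>

int main(void)
{
    int fd = open("/dev/input/js0", O_RDONLY);
    if (fd < 0) {
        perror("open /dev/input/js0");
        return 1;
    }

    struct js_event e;
    while (read(fd, &e, sizeof(e)) == sizeof(e)) {
        /* Mask off JS_EVENT_INIT so the synthetic start-up events are
         * handled the same way as real presses/moves. */
        switch (e.type & ~JS_EVENT_INIT) {
        case JS_EVENT_BUTTON:
            printf("button %u %s\n", e.number, e.value ? "down" : "up");
            break;
        case JS_EVENT_AXIS:
            printf("axis %u -> %d\n", e.number, e.value);
            break;
        }
    }

    close(fd);
    return 0;
}
```

Whether iOS or a stock Android build would ever let a third-party app reach that layer is, of course, exactly the question being asked.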
 
But then they'd be sued by Apple for copying their ideas ;) (sorry, couldn't resist - I do feel it is noteworthy that the iPhone 4S has the slowest Cortex-A9 implementation to date despite their Intrinsity acquisition [though it's not clear whether they have used it yet] - and that it comes with the battery life benefit you'd expect). I don't think bigger screens hurt battery life though, since they allow you to fit a correspondingly bigger battery.

You'd think so. But as with laptops, somehow the battery size just doesn't scale with screen size. And looking at comparable Android phones with differing screen sizes, that seems to have remained the case. I agree that they should be able to pack bigger batteries into bigger-screened phones, but alas...

I'm mostly thinking of WiFi web browsing here. I think the 'good enough' level is higher than you claim because everyone has a desktop computer and so experiences on a daily basis how much faster it can be. And web pages are still increasing slightly in complexity (even before considering Flash) so a bit of future proofing can't hurt.

But I obviously agree that the initial ~2X improvement over my iPhone 4S that you're describing (0.8GHz->1.5GHz) would be a lot more visible than the further ~2x improvement I'm describing (1.5GHz->2GHz & +50% IPC). This kind of thing always suffers from diminishing returns.

I really don't know of many websites whose load times are slower than near-instantaneous due to something other than the network. Engadget may be the most complex example I can think of, and most of the time that has to do with the response time from the ad servers.

WebGL games may be another example but again, that's more of a GPU limitation, which I agree, can always be better.

I think it's also very important that 'good enough' depends not just on performance but also pricing. The CPU is a small part of the bill of materials (as long as it doesn't force you to do anything too fancy to dissipate its heat) so it makes sense to overspec it - not just for marketing reasons, but also because that small difference in user experience really is worth those extra few dollars in the high-end. You could make a point that there aren't enough killer features on the horizon to make ultra-high-end smartphones compelling in the 20nm generation (as opposed to upper mid-range smartphones) and that might be true, but it's not a CPU-specific problem.

My point isn't so much cost as in per-die cost. It's more of cost as in opportunity cost. That is, instead of making such a high-powered CPU on a high-powered process with a beefy power distribution grid to support it, one could have designed a light, low-power, CPU and used the rest of the die area for more GPU pipelines.

Active power is always a problem though - I remember that one of the first Snapdragon phones had a user setting for maximum clock speed, so you could make your own trade-off between speed and power consumption. I wouldn't be very surprised if we saw the same thing again in the Cortex-A15 generation, with the default setting not being the maximum.

I think those kind of things will be more automated in the future, with something similar to how Nehalem regulates its core freq/voltage settings.
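
On Linux-based devices that knob already exists in a crude form: the cpufreq layer exposes a per-core maximum frequency in sysfs, and those early "max clock" user settings were essentially writing to it. A rough sketch of that, assuming a standard cpufreq sysfs layout and root access (the path and value are illustrative, not tied to any particular phone):

```c
/*
 * Hedged sketch of the "user-selectable maximum clock" idea: cap the CPU's
 * top frequency through the standard Linux cpufreq sysfs interface.
 * Assumes root access and the usual cpu0 path; real phones may name or
 * gate these files differently.
 */
#include <stdio.h>

#define MAX_FREQ_PATH "/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq"

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <max_freq_in_kHz>\n", argv[0]);
        return 1;
    }

    FILE *f = fopen(MAX_FREQ_PATH, "w");
    if (f == NULL) {
        perror("fopen " MAX_FREQ_PATH);
        return 1;
    }

    /* cpufreq takes the value in kHz, e.g. 1000000 caps the core at 1.0 GHz. */
    fprintf(f, "%s\n", argv[1]);
    fclose(f);
    return 0;
}
```

The "more automated" version is really just the governor making that speed-vs-power trade-off itself, based on load and thermal headroom, instead of asking the user.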
 
Please define what you mean by "any content", because I don't think you really mean "any content".

Well no, not "any content" as in go-wild-with-ray-tracing. But for titles that can be released on a mobile platform -- that is, you're targeting a ~4.3" screen -- I don't think developers are that hard-pressed for more compute power. I could be wrong, of course.
 
The Dark Meadow is a pretty UE3 game for iOS, and I adored the smoothness of the rock-solid 60 fps on the iPad 2, especially as the player is constantly whipping the camera view around to explore the environments.

Moderate CPU advances are fine, but focusing on the evolution of the GPU will be very much appreciated for future SoC development, even if it were only for games.
 
My point isn't so much cost as in per-die cost. It's more of cost as in opportunity cost. That is, instead of making such a high-powered CPU on a high-powered process with a beefy power distribution grid to support it, one could have designed a light, low-power, CPU and used the rest of the die area for more GPU pipelines.

I'd expect as much from a company like NVIDIA, whose core business is actually graphics. My guess is that it's easier and cheaper that way. Granted, NV is not the only one carrying the same architecture over several SoCs (from Tegra 1 to 3), but that base architecture doesn't, to me at least, suggest a whole lot of resources being spent in the GPU direction.
 
I'd expect as much from a company like NVIDIA, whose core business is actually graphics. My guess is that it's easier and cheaper that way. Granted, NV is not the only one carrying the same architecture over several SoCs (from Tegra 1 to 3), but that base architecture doesn't, to me at least, suggest a whole lot of resources being spent in the GPU direction.

I highly doubt Nvidia deliberately chose to de-prioritize the GPU. I would be shocked if they knowingly designed an uncompetitive product.

I speculate that their forecasts assumed much weaker GPU competition, so they chose instead to keep the chip small to save on costs. About a year and a half ago they realized they needed a full GPU architecture refresh to compete, which requires a multi-year lead time to get to market. They needed something else to sell, and ARM provides turn-key solutions to put down more CPU cores in much less time than a GPU refresh.

The result is that the "CPU is dead" company sings the virtues of CPU speed. Sell what you have, downplay what you don't.
 
It'll be interesting to see if they come up with a bombshell GPU in the future. They certainly have significant graphics technology to leverage against everyone else. They seem to be getting pretty serious about games on these devices, as shown by Tegra Zone, for example. But I'm sure there is a limit, as with desktop IGPs, where it won't make sense to add more GPU power because few users would appreciate it and so the value is nil.
 
I highly doubt Nvidia deliberately chose to de-prioritize the GPU. I would be shocked if they knowingly designed an uncompetitive product.

I speculate that their forecasts assumed much weaker GPU competition, so they chose instead to keep the chip small to save on costs. About a year and a half ago they realized they needed a full GPU architecture refresh to compete, which requires a multi-year lead time to get to market. They needed something else to sell, and ARM provides turn-key solutions to put down more CPU cores in much less time than a GPU refresh.

The result is that the "CPU is dead" company sings the virtues of CPU speed. Sell what you have, downplay what you don't.

Well yes and no. Architectures like SGX had been announced and analyzed years ago and IMG wasn't the only one that went multi-core. I even think that ARM was the first to announce multi-core GPU IP before Series5XT.

http://imgtec.com/News/Release/index.asp?NewsID=449

First announced in March 2009; a tad less than 3 years ago.

I won't say that CPU power is unnecessary - rather the exact opposite. The only thing I'd personally wish for is a finer balance between CPU and GPU resources.

It'll be interesting to see if they come up with a bombshell GPU in the future. They certainly have significant graphics technology to leverage against everyone else. They seem to be getting pretty serious about games on these devices, as shown by Tegra Zone, for example. But I'm sure there is a limit, as with desktop IGPs, where it won't make sense to add more GPU power because few users would appreciate it and so the value is nil.

I'm less worried about any competitor's hw in that regard; what might become a major headache for them down the line is rather sw, support and devrel. The resources and experience are already there at NV, and I'd dare to claim at this point that they're the only embedded contender at the moment that doesn't face any Win8 driver considerations.
 