PowerVR Series 6 now official

http://www.imgtec.com/News/Release/index.asp?NewsID=412

Dated 24/11/2008, and the actual deal might have been closed even earlier than that. At that time (and it's not much different with console deals) IHVs sell what they have available then, not what they'll have later on their roadmap. No one buys designs "on paper" these days.

Aren't 3 years way too much time to decide on such an important and fast-evolving component as the GPU?
AFAIK, the X360 took little more than 2 years from nothing to the final product on store shelves (it must be some kind of record, though, so not the best example).
 
Aren't 3 years way too much time to decide on such an important and fast-evolving component as the GPU?
AFAIK, the X360 took little more than 2 years from nothing to the final product on store shelves (it must be some kind of record, though, so not the best example).

How do you think a SoC gets developed? In a couple of weeks? NV's T2 sampled in late 2008 and went into production in Q1 2010, and that doesn't even count the time the SoC was under development before it was sent to the fab.
 
How do you think a SoC gets developed? In a couple of weeks?
I have friends at companies developing custom SoCs with specs similar to smartphones' (SoCs for arcade/casino systems), and I'm pretty sure they do it in a lot less than 3 years.

Sure, the power/heat constraints are completely different, but we're talking an ARM architecture with OpenGL ES 2.0 GPUs.

Nonetheless, I was just asking whether that wasn't too much time for a decision like that. I wasn't judging any decision.

NV's T2 sampled in late 2008 and went into production in Q1 2010, and that doesn't even count the time the SoC was under development before it was sent to the fab.
Well, execution time will have to get a lot faster than that if they don't want to lose miserably to Intel's top-notch execution (assuming Intel actually focuses its efforts on bringing Atom to handhelds).
 
I have friends at companies developing custom SoCs with specs similar to smartphones' (SoCs for arcade/casino systems), and I'm pretty sure they do it in a lot less than 3 years.

Sure, the power/heat constraints are completely different, but we're talking an ARM architecture with OpenGL ES 2.0 GPUs.

Nonetheless, I was just asking whether that wasn't too much time for a decision like that. I wasn't judging any decision.


Well, execution time will have to get a lot faster than that if they don't want to lose miserably to Intel's top-notch execution (assuming Intel actually focuses its efforts on bringing Atom to handhelds).

Note that in the case of Nvidia, they design the graphics IP as well as the SoC itself, which surely helps reduce the design time.
 
Aren't 3 years way too much time to decide on such an important and fast-evolving component as the GPU?

Maybe the companies designing the GPUs are designing them to be competitive with what will be on the market in 3 years' time, rather than with what is on the market now?
 
Maybe the companies designing the GPUs are designing them to be competitive with what will be on the market in 3 years' time, rather than with what is on the market now?

My doubts are not about GPU design, but rather about SoC design.
If the NGP's SoC had its specs decided too soon, weren't they risking losing the performance advantage too soon as well?

Truth be told, the NGP is coming to market just some 6 months before A15 solutions are out there.
 
My doubts are not about GPU design, but rather about SoC design.
If the NGP's SoC had its specs decided too soon, weren't they risking losing the performance advantage too soon as well?

Truth be told, the NGP is coming to market just some 6 months before A15 solutions are out there.
The SoC of the NGP is a very aggressive design. The quad-core A9 consumes 1 W @ 800 MHz in 40 nm CMOS. Even in 28 nm CMOS, a quad-core A9 may still consume 0.7 W at the same frequency (not to mention 0.8~0.9 W if it targets 1 GHz or above). Few handheld devices can provide 0.7~1 W just for the CPU.

I don't know how much power a quad-core A15 consumes, but it is reasonable to assume it consumes more than a quad-core A9.
Therefore it's not a proper choice for a 2011 release.
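For what it's worth, the 40 nm → 28 nm estimate above is consistent with first-order CMOS dynamic power scaling, P ≈ C·V²·f. A rough sketch, where the capacitance and voltage figures are my own illustrative assumptions rather than measured A9 numbers:

```python
# First-order CMOS dynamic power: P ~ C * V^2 * f.
# All capacitance/voltage figures below are illustrative assumptions,
# not measured values for any real Cortex-A9 implementation.

def dynamic_power(c_rel, volts, freq_ghz):
    """Relative dynamic power for effective capacitance c_rel
    (arbitrary units), supply voltage, and clock frequency."""
    return c_rel * volts * volts * freq_ghz

# Normalize so the 40 nm quad-core A9 @ 0.8 GHz matches the quoted 1 W.
scale = 1.0 / dynamic_power(c_rel=1.0, volts=1.1, freq_ghz=0.8)

# Assume 28 nm trims effective capacitance ~20% and allows ~1.05 V.
p28 = scale * dynamic_power(c_rel=0.8, volts=1.05, freq_ghz=0.8)
print(round(p28, 2))  # ~0.73 W, in the same ballpark as the 0.7 W above
```

Leakage, uncore power, and binning make real numbers messier, but it shows how a ~0.7 W figure at 28 nm is plausible.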
 
Truth be told, the NGP is coming to market just some 6 months before A15 solutions are out there.

6 months is what TI claims, but they're a leading licensee and it sounds optimistic.

Sony probably figures that they must not miss this December for launch. Giving a holiday season to a competitor can be highly damaging, especially when that competitor already holds the cards.

Remember, developers need to have something close enough to the NGP very far in advance in order to get launch and near-launch titles out. This is more involved than the announced A15 SoCs, where the device manufacturer needs samples of the SoC, but not an entire device prototype, many months in advance.
 
The SoC of the NGP is a very aggressive design. The quad-core A9 consumes 1 W @ 800 MHz in 40 nm CMOS. Even in 28 nm CMOS, a quad-core A9 may still consume 0.7 W at the same frequency (not to mention 0.8~0.9 W if it targets 1 GHz or above). Few handheld devices can provide 0.7~1 W just for the CPU.

I don't know how much power a quad-core A15 consumes, but it is reasonable to assume it consumes more than a quad-core A9.
Therefore it's not a proper choice for a 2011 release.

On the other hand, a 2 GHz dual-core A15 should be both faster and easier to develop for than a 4-core A9 @ ~1 GHz. I don't know about the power envelopes, though.
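The "faster and easier to develop for" intuition can be illustrated with a toy Amdahl's-law comparison of the two options. The parallel fraction p is an assumption (real game workloads vary widely), and this deliberately ignores the A15's higher per-clock performance, which would widen the gap further:

```python
# Amdahl's-law sketch: 2 cores @ 2.0 GHz vs 4 cores @ 1.0 GHz.
# The parallel fraction p is an assumed workload property.

def throughput(cores, clock_ghz, p):
    """Relative throughput: clock speeds up everything, extra cores
    only the parallel fraction p of the work."""
    return clock_ghz / ((1 - p) + p / cores)

for p in (0.5, 0.8, 0.95):
    dual_a15 = throughput(cores=2, clock_ghz=2.0, p=p)
    quad_a9 = throughput(cores=4, clock_ghz=1.0, p=p)
    print(f"p={p}: dual/quad ratio = {dual_a15 / quad_a9:.2f}")
```

Even at p = 0.95 the higher-clocked dual-core comes out ahead in this toy model, and it only needs two threads kept busy rather than four.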
 
I have friends at companies developing custom SoCs with specs similar to smartphones' (SoCs for arcade/casino systems), and I'm pretty sure they do it in a lot less than 3 years.

Sure, the power/heat constraints are completely different, but we're talking an ARM architecture with OpenGL ES 2.0 GPUs.

Well then, challenge your friends with their magic wands to develop a handheld console in far less than 3 years; I'm sure Sony or Nintendo would be interested.

Of course there are power constraints, and just because it's an "ARM architecture" with an "OGL_ES 2.0 GPU" doesn't mean all measures are equal either. Unless you're trying to convince me that NV, for instance, finds it as difficult (or as easy) to get a low-end chip back from the fab as a high-complexity high-end chip. GF100 should probably ring a bell here.

Do you see SimonF's sarcastic comment above? Don't trust me, ask him, since he's one of the most experienced engineers out there, and I think (but am not sure) that his roots go way back to INMOS.

Well, execution time will have to get a lot faster than that if they don't want to lose miserably to Intel's top-notch execution (assuming Intel actually focuses its efforts on bringing Atom to handhelds).

Wake me up when Intel finally gets back in the game. Up to now they haven't managed to get even one embedded SoC into a smartphone, and if they do this round, it'll be what, one design, and in what quantities?

With GMA500 they got design wins in fewer than 90 devices or so, and with GMA600 you can still count the announced devices on one hand today. Let Intel finally understand graphics better and they might eventually get a chance. One of the likeliest reasons GMA600 is failing today is the horrendous drivers of GMA500.
 
On the other hand, a 2 GHz dual-core A15 should be both faster and easier to develop for than a 4-core A9 @ ~1 GHz. I don't know about the power envelopes, though.
A dual-core A9 consumes only 0.5 W @ 800 MHz but 1.9 W @ 2 GHz in 40 nm CMOS. I can't imagine how to utilize an even more powerful dual-core A15 in the NGP, considering that awful power consumption...
 
Well then, challenge your friends with their magic wands to develop a handheld console in far less than 3 years; I'm sure Sony or Nintendo would be interested.

www.samsung.com
www.htc.com

You can forward these companies' websites to Nintendo and Sony. They both make handhelds in less than 3 years. I know of a few more, in case you're interested. I'll provide the names free of charge.

(I thought it'd be more fun if we both use sarcasm)

Of course there are power constraints, and just because it's an "ARM architecture" with an "OGL_ES 2.0 GPU" doesn't mean all measures are equal either.
Let me check again...
Yeah, I said just that.


Unless you're trying to convince me that NV, for instance, finds it as difficult (or as easy) to get a low-end chip back from the fab as a high-complexity high-end chip. GF100 should probably ring a bell here.

Do you see SimonF's sarcastic comment above? Don't trust me, ask him, since he's one of the most experienced engineers out there, and I think (but am not sure) that his roots go way back to INMOS.
I'm not trying to convince anyone of anything. I'll state this again: I was just asking.
See Exophase's reply? That was eloquent enough and mostly resolved my doubts.



Wake me up when Intel finally gets back in the game. Up to now they haven't managed to get even one embedded SoC into a smartphone, and if they do this round, it'll be what, one design, and in what quantities?

With GMA500 they got design wins in fewer than 90 devices or so, and with GMA600 you can still count the announced devices on one hand today. Let Intel finally understand graphics better and they might eventually get a chance. One of the likeliest reasons GMA600 is failing today is the horrendous drivers of GMA500.

Well, Intel won't be back in the game because Intel never entered this game in the first place.
I can wake you up when a Medfield handheld is formally announced or enters the market. Just give me a phone number and I'll do it. I promise ;)

Nonetheless, wasn't that Moorestown demo of Quake 3 running at ~100 fps a sign that the awful driver issue was being addressed?
 
Aren't 3 years way too much time to decide on such an important and fast-evolving component as the GPU?
AFAIK, the X360 took little more than 2 years from nothing to the final product on store shelves (it must be some kind of record, though, so not the best example).
It was actually closer to 1.5 years from when Microsoft signed IBM and ATI to launch, though a lot of legwork was done before that. Plus, Microsoft cut it way too close and paid a price in the end. 3 years is probably a good time frame if you want a lot of custom work done. Projecting performance targets out 3 years isn't a big deal; Moore's Law has pointed the way for much longer than that.

Mobile devices probably need to be ready earlier than a living room console too since you can't as easily hook a controller up to a PC for game testing.
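The Moore's Law point can be made concrete with a toy projection. The 24-month doubling period and the 100M-transistor starting budget below are illustrative assumptions, not figures from any real design:

```python
# Toy transistor-budget projection: assume the available budget doubles
# every 24 months (a rule of thumb, not a guarantee).

def projected_budget(budget_now, months_out, doubling_months=24):
    """Transistor budget projected months_out into the future."""
    return budget_now * 2 ** (months_out / doubling_months)

# A hypothetical 100M-transistor budget today, projected 36 months out.
future = projected_budget(100e6, 36)
print(f"{future / 1e6:.0f}M transistors")  # ~283M
```

Which is why a team starting in 2008 could plausibly size a chip for the process that would be in volume production around 2011.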
 
www.samsung.com
www.htc.com

You can forward these companies' websites to Nintendo and Sony. They both make handhelds in less than 3 years. I know of a few more, in case you're interested. I'll provide the names free of charge.

(I thought it'd be more fun if we both use sarcasm)

You can, however, understand the concept of higher vs. lower chip complexity. A smartphone SoC isn't on the same level as a handheld-console SoC.

Let me check again...
Yeah, I said just that.

See above.

Well, Intel won't be back in the game because Intel never entered this game in the first place.

Intel has been trying from day 1 to enter the smartphone market, but in vain so far.

I can wake you up when a Medfield handheld is formally announced or enters the market. Just give me a phone number and I'll do it. I promise ;)

SoC introduction vs. integration success are two totally different chapters. GMA600 is anything but a bad SoC, rather the contrary. The reason it hasn't seen any integration success is mostly Intel's strategic decisions so far, and there's no guarantee that will change significantly either.

Nonetheless, wasn't that Moorestown demo of Quake 3 doing ~100fps a sign that the awful driver issue was being addressed?

See again above. I'm anything but against Intel. There are just a couple of recent facts that should make Intel itself look over its shoulder.

Intel has a manufacturing process advantage that one would be a fool not to acknowledge; albeit a very bad example for this case, did that manufacturing process save their Larrabee project?

Back to the embedded space: a good SoC needs a good balance between software and hardware. Whether Intel's insistence on MeeGo will bring them anywhere remains to be seen.

I should note that Intel is most likely among the first batch of top-tier Series 6 licensees, so it's not really off topic.

I don't always agree with Charlie, and specifically not with all of his points in the following rant: http://www.semiaccurate.com/2011/02/15/atom-dead-strangled-slowly-intel

...rant aside, there are a couple of things in that article that are food for thought, especially if you try to keep an open mind.
 
Do you see SimonF's sarcastic comment above? Don't trust me, ask him, since he's one of the most experienced engineers out there, and I think (but am not sure) that his roots go way back to INMOS.
Sarcastic? Me?
No, I've not worked for Inmos, but have used transputers.
 
Sarcastic? Me?
No, I've not worked for Inmos, but have used transputers.

I apologize then, since I must have confused you with someone else. But I can call you one of the fathers of the Dreamcast GPU, right?

I'll take the sarcasm back too. Would cynical work? ;)
 
Though he made defining contributions to the feature support and the PowerVR VQ texture compression of the Dreamcast, he laughed off the suggestion that he was the "Father of the Dreamcast" when someone jokingly put it to him back on a message board called the Dreamcast Technical Pages. He probably thinks that word is a little too strong! I think the company was well over a hundred strong back then.
 