ARM Mali-400 MP

A bit of digging around and I found an article that caught my attention in 2005:

http://www.hardocp.com/article.html?art=ODAyLCwsaGVudGh1c2lhc3Q=

Now Falanx wasn't part of ARM back then and of course they were marketing their own IP. In any case, even back then I didn't see anything weird in some of the marketing claims, but if we're going to get into marketing hyperbole let's just pick two incidents:

When questioned about Mali200 performance we were told that Mali200 could be offering much more efficient integrated graphics by the second half of 2006 that would be "on par" with the current add-in board graphics processing units of today.

 
It's likely that ARM's priorities were different to Falanx's in terms of roadmap and while the claim was probably true at the time (remember we were only really seeing Gen 1 of "proper" shader architecture on the PC, i.e. DX9c/SM2 and not reg. combiners and Mali200 was full shader programmable) attacking that higher end market was probably a low priority for ARM.

I guess it's one of the things they (Falanx) traded for financial stability. They may want to revisit that in the face of Intel's charge into ARM's space with Atom though (ARM have a big software and performance hill to climb there on the CPU side). A symmetrically (both VS and FS) scalable core would have given them the ability to scale much more easily. Mali 400 was rooted in a good idea, but like I said it feels like a rush job.

Speaking of Atom, I'd have to feel a bit concerned if I was PowerVR as well... Intel have their own GPU play with LRB now; how long before they get that to a point where it could be scaled into an Atom core? They don't have to worry about the inconvenience of other people's fabrication tech, they can leverage every last inch of their own. That, coupled to the coming of heterogeneous computing, means that without a proper CPU play PowerVR (and Nvidia on the desktop for that matter) could be out in the cold.

Makes you wonder if they'll get bought out along the way. Come to think of it, that does beg the question: why weren't they bought when Falanx, BitBoys and Hybrid were snapped up? AMD and ARM certainly had the financial resources, and they're a publicly traded company. They look a little expensive based on revenues (and the management is probably arrogant enough to demand a huge premium for a controlling interest).

Like we discussed earlier though there isn't always technical reasoning to these things - ATI buying BitBoys for portable seemed crazy (G40 was not the best tech by any means), but if you just think about it from a "buying the bodies" point of view it makes total sense.

However I digress; this is a graphics tech forum, not an investment one.
 
I know that you know what I'm getting at. Apart from that, why another thread? It's perfectly fine here and I don't feel it's off topic either.




3D history is full of such stories, and not just one 3D graphics company or just one scandal. You've obviously been around as long as I have, and I doubt you're that much older than me. The difference being I'm a simple user with no vested interests whatsoever nor ulterior motives. Now that's truly material for another thread.

In all seriousness, you've prompted me to start work on an (already pretty long) article which discusses the relative merits of different approaches and how that pans out for the user in terms of performance.

I'm also looking at releasing some work I did on designing a more robust benchmark for GPUs. It's based on the premise that it's not a case of "one figure to rule them all" but a balance of curves, like a dyno readout for a car.

Might take a while, but there seems to be a need for something like that... I've actually been approached about writing up some of this as a book, but a book sounds like a lot of work to me. Would be a bit niche as well, hardly likely to be a best seller!

I could throw some Jackie Collins moments in to spice it up a bit I guess - "His attempts to get ambient occlusion to light the contours of her ample bosom amused her. She knew she had to have him...."
 
Speaking of Atom I'd have to feel a bit concerned if I was PowerVR as well... Intel have their own GPU play with LRB now, how long before they get that to a point where it could be scaled into an Atom core?
I could scale anything down to the size of a single transistor, doesn't mean it's a good idea. Intel's arguments for why they should in theory have lower bandwidth per frame are the same as those of PowerVR and Falanx; they don't have anything magical on their side to go even farther than that.

Furthermore they have some disadvantages, such as the apparent lack of real compression hardware; this will bite them when, for example, reading back from a shadowmap. They could try to improve things slightly through even more clever software, however doing any form of compression on a programmable x86 system will always take more power than dedicated hardware. Therefore it is hard to see how they could be competitive on power with handheld hardware (I'm not willing to make a clear statement versus PC hardware), and this is just one reason among many.

They don't have to worry about the inconvenience of other peoples fabrication tech
In Kucinich's eternal words to Neel Kashkari: "That statement that you just made, you will hear about for the rest of your career." ;) (I kid, but I wish it was so simple)

they can leverage every last inch of their own. That, coupled to the coming of heterogeneous computing, means that without a proper CPU play PowerVR (and Nvidia on the desktop for that matter) could be out in the cold.
CPU? Is that the thing I buy along with the rest of my groceries at Walmart? Or was that Walmarm or perhaps Ibmart? :)

Makes you wonder if they'll get bought out along the way.
Well Apple is certainly putting them at the very core of their massively lucrative handheld business, so if things go south for any reason they'll always have a helping hand on that side... Not that I think it will, their design pipeline seems extremely solid and they surely must have been investing in longer-term R&D for some time.

However I digress; this is a graphics tech forum, not an investment one.
Feel free to discuss that in the 3D & Semiconductor Industry forum, I'm sad it has died so badly in recent months... :(
 
TheArchitect said:
"Come to think of it that does beg the question why weren't they bought when Falanx, BitBoy's and Hybrid were snapped up?"

We can hardly view much of what AMD/ATI did in the mobile graphics space as authoritative, given their quick exit.

ARM bought Falanx; why not IMG? That is indeed a really intriguing question. Was Falanx available for a comparative song because of the dearth of design wins? In buying IMG, would ARM be closing so many doors to potential graphics licensees (Intel and Apple spring to mind) that the company they bought would no longer be worth what was paid?

I'd also be interested to know the exact order of events. Did IMG halt the co-sell arrangement they had with ARM (where ARM would sell IMG IP and get a share of the royalties) because ARM bought Falanx, or did ARM buy Falanx because IMG halted the agreement?
 
I could scale anything down to the size of a single transistor, doesn't mean it's a good idea. Intel's arguments for why they should in theory have lower bandwidth per frame are the same as those of PowerVR and Falanx; they don't have anything magical on their side to go even farther than that.

Furthermore they have some disadvantages, such as the apparent lack of real compression hardware; this will bite them when, for example, reading back from a shadowmap. They could try to improve things slightly through even more clever software, however doing any form of compression on a programmable x86 system will always take more power than dedicated hardware. Therefore it is hard to see how they could be competitive on power with handheld hardware (I'm not willing to make a clear statement versus PC hardware), and this is just one reason among many.

Now you see, you are applying technical reasoning and I got told off for that ;)

Independence from a third-party vendor who could be at risk of being purchased by someone with different goals to yours (such as consolidation) is a valid reason to pursue your own home-grown tech in this instance.

You are also comparing the scant details released about the current LRB platform with a potential, yet-to-be-designed, PPA-optimised mobile version. So yes, at the moment that is the case, but you've gotta think they'd realise that before building a mobile version.

In Kucinich's eternal words to Neel Kashkari: "That statement that you just made, you will hear about for the rest of your career." ;) (I kid, but I wish it was so simple)

You have to admit though, AMD and Nvidia's reliance on foundries has not exactly paid off recently. Maybe that's them paying for the folly of pushing unproven tech on unproven fabrication and playing around in the margins of its characterisation.

CPU? Is that the thing I buy along with the rest of my groceries at Walmart? Or was that Walmarm or perhaps Ibmart? :)

But when the world goes heterogeneous and it's all on the same die... what then? AMD, ARM and Intel will more than likely pull in that direction (or if they have any sense they will; it plays to their strengths and position to do that). Closer coupling between the GPU and CPU is inevitable for lots of reasons (not least of which is the albatross that is PCIe on the PC, and distributed non-coherent caching on mobile).

Well Apple is certainly putting them at the very core of their massively lucrative handheld business, so if things go south for any reason they'll always have a helping hand on that side... Not that I think it will, their design pipeline seems extremely solid and they surely must have been investing in longer-term R&D for some time.

Feel free to discuss that in the 3D & Semiconductor Industry forum, I'm sad it has died so badly in recent months... :(

Apple still have other options at the moment - Mali, Vivante, etc. If there is another buying spree or consolidation move then this may happen, but Apple like to stay somewhat independent of any one tech vendor in my experience (hell, they bought their own ARM design team pretty much).

If I was a betting man I'd actually say TI has more invested in PowerVR than Apple from that standpoint. TI's revenue stream is probably worth a lot more to PowerVR $ for $ as well.
 
TheArchitect said:
"Come to think of it that does beg the question why weren't they bought when Falanx, BitBoy's and Hybrid were snapped up?"

We can hardly view much of what AMD/ATI did in the mobile graphics space as authoritative, given their quick exit.

I'll give them credit for winning Qualcomm though. That's a big chunk of the mobile units shipped. However, there was a lot of channel conflict between the two from what I heard, which basically didn't bode well (rumour was the ATI sales force were encouraging Qualcomm customers to take a cheap non-GPU-enabled Qualcomm part and then add an adjunct from ATI, because they'd get more revenue and the sales guys' numbers would look good so they'd get their bonus).

The rapid exit was, I suspect, a co-opetition issue; enabling partners who they would/could eventually be competing against wouldn't have sat well with AMD management post-merger.

ARM bought Falanx; why not IMG? That is indeed a really intriguing question. Was Falanx available for a comparative song because of the dearth of design wins?

There may be some truth to that, but not from that perspective though. PowerVR is part of the Imagination Technologies group of companies, which comes with a whole host of baggage. No doubt the Imagination majority shareholders would have wanted to offload the whole lot in one go, making it less attractive to ARM, who don't need a DAB radio factory or a DSP. Some of the video stuff might have been useful, but ARM hasn't got a great track record with taking on other people's architectures (they didn't do much with that Philips offshoot they bought, which was a shame, it looked like it had promise).

In buying IMG, would ARM be closing so many doors to potential graphics licensees (Intel and Apple spring to mind) that the company they bought would no longer be worth what was paid?

I've had a quote from ARM's CTO repeated to me before - it goes something like "Now we've established we are whores, it's just a matter of dealing with the price". Basically, they'll sell to anyone if the price is right and the customer has some morality. So the door to Intel, Apple etc. would still be open I think.

Very much in the spirit of Sir Robin, a great character from the industry. The man's a legend!

I'd also be interested to know the exact order of events. Did IMG halt the co-sell arrangement they had with ARM (where ARM would sell IMG IP and get a share of the royalties) because ARM bought Falanx, or did ARM buy Falanx because IMG halted the agreement?

I expect we will never know the truth; the press statement was fairly nondescript. Just like Hollywood: Vegas wedding, no-fault quickie divorce. Wonder if PowerVR signed a pre-nup?
 
It's likely that ARM's priorities were different to Falanx's in terms of roadmap and while the claim was probably true at the time (remember we were only really seeing Gen 1 of "proper" shader architecture on the PC, i.e. DX9c/SM2 and not reg. combiners and Mali200 was full shader programmable) attacking that higher end market was probably a low priority for ARM.
The only thing that has changed for Mali 200 since ARM acquired Falanx is that it got slower and bigger than Falanx originally claimed. Incidentally, for someone who claims to know so much about graphics I'm surprised that you don't know that SM3.0 was commonplace when Mali200 was first announced, and shaders were well beyond the first gen of such architectures.

...

Speaking of Atom, I'd have to feel a bit concerned if I was PowerVR as well... Intel have their own GPU play with LRB now; how long before they get that to a point where it could be scaled into an Atom core? They don't have to worry about the inconvenience of other people's fabrication tech, they can leverage every last inch of their own. That, coupled to the coming of heterogeneous computing, means that without a proper CPU play PowerVR (and Nvidia on the desktop for that matter) could be out in the cold.
If you weren't so tied up in putting PowerVR down, you would have looked at how LRB works, thought about how exactly you scale that down to produce something that is competitive in the performance, power and area space PowerVR sells into, and realised that it isn't entirely sensible.

Makes you wonder if they'll get bought out along the way. Come to think of it, that does beg the question: why weren't they bought when Falanx, BitBoys and Hybrid were snapped up? AMD and ARM certainly had the financial resources, and they're a publicly traded company. They look a little expensive based on revenues (and the management is probably arrogant enough to demand a huge premium for a controlling interest).
What a ridiculous comment; at the time ARM purchased Falanx I think IMG were valued at over £200M on the open stock market, compared to a few million quid for the other two. That is the answer to your question.

Like we discussed earlier though there isn't always technical reasoning to these things - ATI buying BitBoys for portable seemed crazy (G40 was not the best tech by any means), but if you just think about it from a "buying the bodies" point of view it makes total sense.

However I digress; this is a graphics tech forum, not an investment one.

Yes you have digressed, and I think it's time to come clean about exactly who you are rather than claiming to be a neutral observer.

Having done a little search on LinkedIn involving the keywords ARM, ex-employee and architect, and coupled to a rather familiar approach to negative marketing, I'm pretty certain I know who you are. The question is, do you have the integrity to come clean?

John.
 
Independence from a third-party vendor who could be at risk of being purchased by someone with different goals to yours (such as consolidation) is a valid reason to pursue your own home-grown tech in this instance.
Obviously, but then again someone the size of Intel has the leverage to negotiate contracts that make such things of relatively little concern in the short and mid term. There are certainly plenty of advantages to doing things in-house, but the real question is whether they're worth the extra costs and especially the risk of simply designing an inferior solution. And then what, you've just spent $50M in R&D, do you just throw it in the garbage bin? Very few companies have the guts to do that... :)

You are also comparing the scant details released about the current LRB platform with a potential, yet-to-be-designed, PPA-optimised mobile version. So yes, at the moment that is the case, but you've gotta think they'd realise that before building a mobile version.
The real question then is what technical advantages such a solution could have over the competition. In theory, you have all the cost of maximum flexibility along with the cost of fixed-function hardware. Therefore, for such an approach to be superior, the base 'cores' must be supremely executed and *more* efficient per transistor than the competition's shader processors. This is far from impossible, but I think you'll have to agree that it's normal for me to be very skeptical that it is the most likely outcome...

You have to admit though, AMD and Nvidia's reliance on foundries has not exactly paid off recently. Maybe that's them paying for the folly of pushing unproven tech on unproven fabrication and playing around in the margins of its characterisation.
I disagree, it has paid off just fine. RV670 or G80 are textbook examples of how a great foundry relationship can go. Yes, there are hiccups from time to time, but that is not necessarily related to the foundry model! (*cough* AMD/65nm/Barcelona *cough*)

But when the world goes heterogeneous and it's all on the same die... what then? AMD, ARM and Intel will more than likely pull in that direction (or if they have any sense they will; it plays to their strengths and position to do that). Closer coupling between the GPU and CPU is inevitable for lots of reasons
In the specific case of Larrabee, the inclusion of a MIMD aspect on each core makes a single-chip solution with OoOE cores rather redundant in my mind. By definition, the OoOE cores are only useful for tasks that are not highly parallel; therefore, the data transit in an optimized software algorithm should not be a real problem.

Ideally everything would always be a single chip, but ideally I'd also have all of my DRAM on-chip. Sadly a little thing called 'reality' tends to get in the way of such things happening, at least in successful products :devilish:

Apple still have other options at the moment - Mali, Vivante, etc. If there is another buying spree or consolidation move then this may happen, but Apple like to stay somewhat independent of any one tech vendor in my experience (hell, they bought their own ARM design team pretty much).

If I was a betting man I'd actually say TI has more invested in PowerVR than Apple from that standpoint. TI's revenue stream is probably worth a lot more to PowerVR $ for $ as well.
Historically you would be correct. But TI will never license PowerVR's VXD or VXE; Apple, however, did. My guesses for their upcoming SoCs are:
0) 90nm/ARM11/MBXLite/In-House or Samsung Audio&Video
1) 65nm/ARM11/SGX520/VXD330/In-House Audio&ISP
2) 45nm/Cortex-A9/SGX540/VXD380/VXE280/In-House Audio&ISP
*If* this is correct, it's pretty clear that the amount of PowerVR IP in future Apple products is extremely high. And when you depend on 3 separate pieces of IP from a company, it becomes less desirable to switch to someone else for just one of the three...

I've had a quote from ARM's CTO repeated to me before - it goes something like "Now we've established we are whores, it's just a matter of dealing with the price"
What is it with UK semiconductor CTOs that makes them so cool? I really like all I've seen/heard from the top technical guys from ARM, CSR, Icera, etc. - I especially liked this story: http://www.electronicsweekly.com/bl...ctor-blog/2008/07/the-late-simon-knowles.html - as Ailuros once said, maybe it's the tea! :)

JohnH said:
If you weren't so tied up in putting PowerVR down, you would have looked at how LRB works, thought about how exactly you scale that down to produce something that is competitive in the performance, power and area space PowerVR sells into, and realised that it isn't entirely sensible.
To TheArchitect's credit, he was likely thinking of a much longer-term horizon than you are; say, 22nm or so... It would make little sense to scale Larrabee down to less than 1 core/16 ALUs, but such a level of performance is perfectly sensible for handhelds in that technology generation. Therefore the real question is more what its real efficiency is, and that's quite a debate in itself to say the least!

Of course if you are willing to criticize Larrabee's overall efficiency publicly/on the record and start a catfight about it here, be my guest - I'm all for good television! ;) :D
JohnH said:
Having done a little search on LinkedIn involving the keywords ARM, ex-employee and architect, and coupled to a rather familiar approach to negative marketing, I'm pretty certain I know who you are. The question is, do you have the integrity to come clean?
Hah! This is getting pretty hot - just a quick comment from a moderation POV: please don't force people to come clean about their RL identities publicly, or bring RL animosities in if they exist. However in certain circumstances, I would obviously find it entirely appropriate/sensible (and certainly desirable from your POV) to come clean in private via PMs.

In case PMs are disabled for you because of your low post count, please just let me know and I'll take care of it.
 
That would certainly seem to be the case, Arun; PMs are not enabled due to low post count it would seem (must be a high threshold, how many posts do you need?!?).
 
It's likely that ARM's priorities were different to Falanx's in terms of roadmap and while the claim was probably true at the time (remember we were only really seeing Gen 1 of "proper" shader architecture on the PC, i.e. DX9c/SM2 and not reg. combiners and Mali200 was full shader programmable) attacking that higher end market was probably a low priority for ARM.

20 GFLOPs out of 2-3 square millimeters at 90nm? It didn't and doesn't sound realistic to me. Incidentally, the article above at HardOCP went public merely 5 days after Eurasia/SGX had been announced. My only other point was that marketing can play nasty tricks from all sides.
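Just to put a rough number on why that claim sounds optimistic to me (note the ~200MHz core clock below is purely my own assumption for a 90nm embedded part, not a figure from the article):

Code:
/* Back-of-the-envelope check of "20 GFLOPs from 2-3mm^2 at 90nm".
   The 200MHz core clock is an assumed figure, not from the article. */
#include <stdio.h>

int main(void)
{
    double gflops = 20e9;   /* claimed throughput, FLOPs per second */
    double clock  = 200e6;  /* assumed core clock in Hz             */

    double flops_per_clock = gflops / clock;       /* = 100            */
    double madd_units      = flops_per_clock / 2;  /* 1 MADD = 2 FLOPs */

    printf("%.0f FLOPs/clock -> roughly %.0f parallel FP MADD units\n",
           flops_per_clock, madd_units);
    return 0;
}

Fifty-odd parallel FP MADD units, plus everything that has to surround them, is a lot to squeeze into 2-3 square millimeters of 90nm silicon.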

I guess it's one of the things they (Falanx) traded for financial stability. They may want to revisit that in the face of Intel's charge into ARM's space with Atom though (ARM have a big software and performance hill to climb there on the CPU side). A symmetrically (both VS and FS) scalable core would have given them the ability to scale much more easily. Mali 400 was rooted in a good idea, but like I said it feels like a rush job.
I've no idea what the human resources look like at ARM's graphics department these days. Had it not grown significantly since the Falanx days though, I have the feeling that they might need some serious reinforcements.

Speaking of Atom, I'd have to feel a bit concerned if I was PowerVR as well... Intel have their own GPU play with LRB now; how long before they get that to a point where it could be scaled into an Atom core? They don't have to worry about the inconvenience of other people's fabrication tech, they can leverage every last inch of their own. That, coupled to the coming of heterogeneous computing, means that without a proper CPU play PowerVR (and Nvidia on the desktop for that matter) could be out in the cold.
Let's separate IP from fabless semiconductor markets when it comes to graphics, otherwise we'll get lost in the long run.

Even if IMG wanted to, or could (add whatever feels more comfortable), try to touch the CPU market, it would end up with conflicting interests with their most significant partners. The above most certainly is a consideration, but you seem to forget that "now" when it comes to LRB more likely means a first sample release somewhere in early 2010, and it's not like it takes a snap of your fingers to scale down from the high end into the small form factor either. At least for the lifetime of SGX, IMG is safe; and here's the trick for the next generation: they need to secure other markets with SGX already, to build on in order to further stabilize themselves with their next generation. One good example would be the handheld console gaming market, for which I can see only two contenders for the next generation of consoles; one of them has already been announced indirectly.

Makes you wonder if they'll get bought out along the way. Come to think of it, that does beg the question: why weren't they bought when Falanx, BitBoys and Hybrid were snapped up? AMD and ARM certainly had the financial resources, and they're a publicly traded company. They look a little expensive based on revenues (and the management is probably arrogant enough to demand a huge premium for a controlling interest).
AMD had the financial resources back then, but we all know how their story with the PDA/mobile market ended. Besides, there's quite a difference in cost between, say, Bitboys and IMG.

So despite your digression, as you put it (and I'm more than well aware after all these years what Beyond3D stands for), shall we cut a bit deeper into things and get a wee bit more specific?

You don't strike me as someone who doesn't have anything to do on a professional level with the markets debated here, nor do I think that you never ever had any direct or indirect ties with ARM. Prove me wrong, expose your identity, and I will have the dignity to make a full public apology.

That said:

I'll give them credit for winning Qualcomm though. That's a big chunk of the mobile units shipped. However, there was a lot of channel conflict between the two from what I heard, which basically didn't bode well (rumour was the ATI sales force were encouraging Qualcomm customers to take a cheap non-GPU-enabled Qualcomm part and then add an adjunct from ATI, because they'd get more revenue and the sales guys' numbers would look good so they'd get their bonus).

The rapid exit was, I suspect, a co-opetition issue; enabling partners who they would/could eventually be competing against wouldn't have sat well with AMD management post-merger.
Ironically, there was somebody in the past who worked for said company above and had very similar reasoning to yours. He was very vocal back then about how ATI/NVIDIA and others would recap for the 2nd generation, and all they have now is that they're left dry with very little to battle with.

Didn't I mention before that some IHVs cannibalized prices to gain some deals? Just don't tell me that it wasn't the case here. I myself had been fooled in the past, when reading of a "mini-Xenos", into thinking that the result would be as impressive as its larger Xbox 360 brother; it turned out it had very little in common after all. A second case of the usual marketing stunts, with the other participants being "innocent" in the usual marketing circus.

There may be some truth to that, but not from that perspective though. PowerVR is part of the Imagination Technologies group of companies, which comes with a whole host of baggage. No doubt the Imagination majority shareholders would have wanted to offload the whole lot in one go, making it less attractive to ARM, who don't need a DAB radio factory or a DSP. Some of the video stuff might have been useful, but ARM hasn't got a great track record with taking on other people's architectures (they didn't do much with that Philips offshoot they bought, which was a shame, it looked like it had promise).
PowerVR is the heart of IMG, and their other subdivisions have been created over the years based on the experience collected from the first. It's a whole web of interconnected patents, and I don't think it would make any sense for IMG to sell off PowerVR, especially since the latter is the subdivision with the highest revenue.

ARM had an agreement with IMG to sell MBX IP. IMG during the process saw that they could channel/sell their IP themselves too and decided to carry on by themselves for the 2nd generation. Meaning that for the first generation, as long as ARM was selling MBX on IMG's behalf, IMG might have been in the "good books" for some, and afterwards became the rotten company that has flooded the market during the decades of its existence with lies, smoke and mirrors.

If I had ever been employed by ARM for relevant matters I too might have similar feelings today, but I'd never go public with them, and certainly not in that sense.

I've had a quote from ARM's CTO repeated to me before - it goes something like "Now we've established we are whores, it's just a matter of dealing with the price". Basically, they'll sell to anyone if the price is right and the customer has some morality. So the door to Intel, Apple etc. would still be open I think.
And I happen to have heard from someone who works at a large company that enquired about a license that they nowadays get some extra IP virtually for free if they also buy a CPU. Does that also sound familiar?

There's a constant price war, and beyond, for any contenders in this market. When one of them, though, tries to give their IP away practically for free just because at least some market penetration is necessary after all, then some folks shouldn't point fingers. I wonder, do they have any mirrors by any chance?

I expect we will never know the truth; the press statement was fairly nondescript. Just like Hollywood: Vegas wedding, no-fault quickie divorce. Wonder if PowerVR signed a pre-nup?
You mean to say you weren't there? I start to feel terrible now.....*snicker*
 
To TheArchitect's credit, he was likely thinking of a much longer-term horizon than you are; say, 22nm or so... It would make little sense to scale Larrabee down to less than 1 core/16 ALUs, but such a level of performance is perfectly sensible for handhelds in that technology generation. Therefore the real question is more what its real efficiency is, and that's quite a debate in itself to say the least!

Of course if you are willing to criticize Larrabee's overall efficiency publicly/on the record and start a catfight about it here, be my guest - I'm all for good television! ;) :D


ROFL :D *now it's my turn to fetch popcorn if it ever gets that far...*
 
...
To TheArchitect's credit, he was likely thinking of a much longer-term horizon than you are; say, 22nm or so... It would make little sense to scale Larrabee down to less than 1 core/16 ALUs, but such a level of performance is perfectly sensible for handhelds in that technology generation. Therefore the real question is more what its real efficiency is, and that's quite a debate in itself to say the least!

What level of performance do you think will be acceptable for handhelds in the timescales of 22nm? Obviously I can't say much, but I think people may be surprised. Wrt LRB, I think the approach being used fits the high end, where the scale of the devices being built is close to a couple of orders of magnitude greater than typical handheld graphics cores; the drop to 22nm just about gets you one order of magnitude, so personally I'm not convinced that a 10x scaled-back LRB would be competitive.
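To make the scaling point a bit more concrete, here's the back-of-the-envelope version (ideal scaling assumed; real shrinks buy you noticeably less than this):

Code:
/* Rough density gain from shrinking to 22nm, assuming ideal scaling
   (area goes with the square of the linear feature size). Real
   processes deliver less than this. */
#include <stdio.h>

int main(void)
{
    const double start[] = { 65.0, 45.0, 32.0 }; /* starting nodes, nm */
    const double target  = 22.0;                 /* target node, nm    */

    for (int i = 0; i < 3; i++) {
        double gain = (start[i] / target) * (start[i] / target);
        printf("%2.0fnm -> %2.0fnm: ~%.1fx density\n", start[i], target, gain);
    }
    /* Prints roughly 8.7x, 4.2x and 2.1x - i.e. about one order of
       magnitude at the very best, nowhere near the ~100x gap between
       a high-end LRB and a typical handheld graphics core. */
    return 0;
}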

Of course if you are willing to criticize Larrabee's overall efficiency publicly/on the record and start a catfight about it here, be my guest - I'm all for good television! ;) :D
Nah, not something I feel inclined to debate, although I will say I do like their approach.

Hah! This is getting pretty hot - just a quick comment from a moderation POV: please don't force people to come clean about their RL identities publicly, or bring RL animosities in if they exist. However in certain circumstances, I would obviously find it entirely appropriate/sensible (and certainly desirable from your POV) to come clean in private via PMs.

In case PMs are disabled for you because of your low post count, please just let me know and I'll take care of it.

Hey, I'm not going to out anyone on a public forum, but given the nature of some of the PowerVR slamming being done I think it's fair that people should be aware of current/recent affiliations so that the correct level of salt can be taken with stated opinions.

John.
 
Hold on a minute, how many embedded systems do you know that actually have enough room to store a 1920x1080x32 texture (8 MB for the top MIP level), let alone have the need to zoom into it by nearly 16x?????
i believe the whole iphone family has a hefty chunk of memory dedicated to video, but i can't quote any figures..

Well I suppose, viewing JPG stills maybe with some zoom, but then you can do a partial decode on them to limit the source texture size so it's not a problem.
i take it you have not been acquainted with apple's doings on the iphone: kinetic scrolling of surfaces, 'core animation' fx over those, including, but not limited to, auto-rotating views, etc.
a scenario where a jpg is viewed in portrait, zoomed in, and then the device is rotated to landscape and the view follows suit smoothly is a trivial use case on the iphone.

actually, right now, i have a rather intriguing GraphViz-produced 1,985 x 1,064 png on my ipod touch, and i entertain myself by rotating the device and watching the view rotate, like a good texture on a quad would.. left, then right. then left again. nice.

ed: i'm not suggesting the png surface is 2k x 2k, but it's fairly high resolution, nevertheless.
 
i believe the whole iphone family has a hefty chunk of memory dedicated to video, but i can't quote any figures..

That possibly isn't for performance reasons; some systems require static allocation of the Gfx memory pool for various reasons (OS security/stability, SoC implementation constraints, etc.).

i take it you have not been acquainted with apple's doings on the iphone: kinetic scrolling of surfaces, 'core animation' fx over those, including, but not limited to, auto-rotating views, etc.
a scenario where a jpg is viewed in portrait, zoomed in, and then the device is rotated to landscape and the view follows suit smoothly is a trivial use case on the iphone.

actually, right now, i have a rather intriguing GraphViz-produced 1,985 x 1,064 png on my ipod touch, and i entertain myself by rotating the device and watching the view rotate, like a good texture on a quad would.. left, then right. then left again. nice.

ed: i'm not suggesting the png surface is 2k x 2k, but it's fairly high resolution, nevertheless.

Yeah, I've seen the iPhone do its thing, very nice (still the best browser experience on a mobile device at the moment, mind you I haven't got my hands on a G1 yet, which apparently gives the iPhone a run for its money). Like I said though, you can do clever things with a JPG because it's constructed of macroblocks.

When looking at the entire image you are downsampling to the screen size, so I would have thought large amounts of sub-pixel accuracy are not an issue (you are mostly sampling groups of whole pixels to create a single pixel). Planar rotation when you tilt the device is done on the screen-sized version most of the time, so again no sub-pixel accuracy issues. And when zoomed in you only need to access the macroblock groups from the JPG that your zoomed target area is made of, limiting the U,V range and subsequently the requirement for high levels of sub-pixel precision.

(btw - if this isn't the way they are doing it I call dibs on the IP :D )

I'm not sure how you'd do it on a PNG file, but probably by taking a texture sub-image, which again would limit the section you're accessing when zoomed to a "manageable precision" (I know Apple desktop solution providers have to supply a number of required extensions, perhaps an extension attaching a sub-region of a texture to a texture is one of them).
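For what it's worth, something along these lines is the sort of thing I mean - purely an illustrative sketch, not a claim about how Apple actually do it; glTexSubImage2D is standard OpenGL ES, everything else here is a made-up name:

Code:
/* Illustrative sketch only: upload just the zoomed-in crop of an already
   decoded image into a sub-region of a modest-sized texture, so the U,V
   range you sample stays small and the sub-pixel precision demands stay
   manageable. Assumes `tex` already has storage (glTexImage2D) of at
   least crop_w x crop_h. Names here are invented for the example. */
#include <stddef.h>
#include <GLES/gl.h>

void upload_visible_region(GLuint tex,
                           const unsigned char *decoded, /* full RGBA image */
                           int img_width,                /* in pixels       */
                           int crop_x, int crop_y,       /* zoom origin     */
                           int crop_w, int crop_h)       /* zoom extent     */
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    /* ES 1.x has no GL_UNPACK_ROW_LENGTH, so copy the crop row by row
       into the texture's sub-region instead of in one call. */
    for (int row = 0; row < crop_h; ++row) {
        const unsigned char *src =
            decoded + ((size_t)(crop_y + row) * img_width + crop_x) * 4;
        glTexSubImage2D(GL_TEXTURE_2D, 0,
                        0, row,        /* destination offset in the texture */
                        crop_w, 1,     /* one row at a time                 */
                        GL_RGBA, GL_UNSIGNED_BYTE, src);
    }
}

The point being the rasteriser only ever samples a texture the size of the zoomed region, never anything close to the full image.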
 
What level of performance do you think will be acceptable for handhelds in the timescales of 22nm? Obviously I can't say much, but I think people may be surprised. Wrt LRB, I think the approach being used fits the high end, where the scale of the devices being built is close to a couple of orders of magnitude greater than typical handheld graphics cores; the drop to 22nm just about gets you one order of magnitude, so personally I'm not convinced that a 10x scaled-back LRB would be competitive.

Albeit it's not that relevant, I doubt NV's APX2500 has that much in common with their current G8x/GT2x0 high-end architectures either. It's not a given that if Intel, purely hypothetically, wanted to continue from point X in the PDA/mobile market with their own technology, they couldn't. And no, whatever Intel decides in the future doesn't necessarily have to make sense either, since we've seen weirder things happen.


Nah, not something I feel inclined to debate, although I will say I do like their approach.

But you could answer a quick OT question for a layman: given the nature of the hardware, did they have an alternative choice?


Hey, I'm not going to out anyone on a public forum, but given the nature of some of the PowerVR slamming being done I think it's fair that people should be aware of current/recent affiliations so that the correct level of salt can be taken with stated opinions.

As I said in my former post, I've seen another similar attempt here in the past. I could eventually understand that guy, because he was amongst the ones who convinced his company to get license X, and his past efforts didn't bring the results he had wished for.

Anyway if you want to chew on some further material, here you go:

http://www.design-reuse.com/article...ntages-of-the-mali-graphics-architecture.html

When I first heard of the original Mali I said back then that if a small IP company can integrate single-cycle 4xMSAA then it's high time we see it in standalone GPUs too, and we did in 2006, albeit many before that considered that it didn't make "sense".

Two things about the above, apart from the long list of obvious misconceptions: when it takes 4 cycles for 16xMSAA, someone might as well enable 4x supersampling instead. Before anyone says that 16xMSAA delivers far better polygon edge/intersection AA than 4xSSAA (and of course be right), I'll say that with the latter you usually also get a -1.0 LOD offset (2x2 ordered-grid supersampling halves the texel footprint per sample, hence the mip selection shifting by one level), which is close to 2xAF quality.

The 2nd thing is something that might be neglected by most if not all IHVs in this market: albeit I understand the importance of having 2x, 4x or more multisampling on devices with small screens, it does sound to me like none of them has the power to add at least some amount of anisotropic filtering.

In my head, multisampling compared to supersampling is a sort of "performance optimization", yet the former should also come with at least 2xAF to be comparable. And yes, I understand that an advanced analytical AF algorithm along with the required TMU strength is a tough nut to crack for now, when die area is so limited.

To me though it takes a bit more than good polygon edge/intersection antialiasing (which covers what, nowadays, 5-10% of the total screen space?) plus blurry bilinear filtering for the majority of the screen.
 
When looking at the entire image you are downsampling to the screen size, so I would have thought large amounts of sub-pixel accuracy are not an issue (you are mostly sampling groups of whole pixels to create a single pixel). Planar rotation when you tilt the device is done on the screen-sized version most of the time, so again no sub-pixel accuracy issues. And when zoomed in you only need to access the macroblock groups from the JPG that your zoomed target area is made of, limiting the U,V range and subsequently the requirement for high levels of sub-pixel precision.

I can't really comment on the 3d pipeline stuff (I am a video rather than 3d guy), but I can say that accessing a subset of blocks in a jpeg is not that straightforward. You would generally decode the whole thing and then view sub-sections of it, and then it is just a really big image.

CC
 