For example, LG’s H13 is the first SoC to feature PowerVR Series6 graphics; aiming at high-end smart TVs, this platform offers over 100 GFLOPS of compute performance, while providing a comfortable fillrate for 4K resolutions.
http://withimagination.imgtec.com/i...-connected-home-took-center-stage-at-ces-2013
While I had read that the LG H13 contains a dual-cluster Rogue, I was under the impression that LG integrated it at relatively modest frequencies. The claimed >100 GFLOPS would suggest anything but a modest frequency, so what exactly am I missing here?
PowerVR Series6 GPU cores are designed to offer computing performance exceeding 100GFLOPS (gigaFLOPS) and reaching the TFLOPS (teraFLOPS) range, enabling high-level graphics performance from mobile through to high-end compute and graphics solutions.
Well, 554 is 12.8 GF per core @ 200 MHz. Go quad core and up to 400 MHz and you're at 102.4 GF.
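For what it's worth, that scaling is purely linear in core count and clock; here's a quick Python sanity check, taking the 12.8 GFLOPS/core @ 200 MHz figure above at face value:

# SGX554 peak-GFLOPS scaling: linear in cores and clock.
# Assumes the 12.8 GFLOPS/core @ 200 MHz figure quoted above.
GFLOPS_PER_CORE_AT_200MHZ = 12.8

def sgx554_gflops(cores, mhz):
    return GFLOPS_PER_CORE_AT_200MHZ * cores * (mhz / 200.0)

print(sgx554_gflops(1, 200))  # 12.8
print(sgx554_gflops(4, 400))  # 102.4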
It would be a pretty wild marketing trick; however, didn't they add somewhere "with the same set of functionality"? Baseline Series6 is DX10, and the only Series5 member above DX9 L3 is the SGX545. All I've seen are wild claims that Series6 is 20x as efficient as Series5.
Look above. It isn't too hard to estimate that in order to reach/exceed 100 GFLOPS for a dual-cluster G6200 and 200 GFLOPS for a quad-cluster G6400 you'd need >=600 MHz. However, the smallest 554 implementation is on 32nm, and that's in a mobile device. In a TV environment I see no problem running the 554 at 400 MHz. So the question is: how good is Rogue? One "cluster" = two 554 cores isn't so outlandish, is it?
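Here's the arithmetic behind that estimate as a small Python sketch. Note the per-cluster FLOPs/clock rate is just back-solved from the >=600 MHz guess above; IMG hadn't published Rogue's actual rate:

# Clock needed to hit a GFLOPS target for a given cluster config.
# flops_per_clock_per_cluster is an assumption, not a published spec.
def required_mhz(target_gflops, clusters, flops_per_clock_per_cluster):
    return target_gflops * 1000.0 / (clusters * flops_per_clock_per_cluster)

# ~83 FLOPs/clock per cluster reproduces the >=600 MHz figure:
print(required_mhz(100, 2, 83.3))  # G6200 -> ~600 MHz
print(required_mhz(200, 4, 83.3))  # G6400 -> ~600 MHz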
However, the 100 GF figure is regurgitated from their own pressers:
IMG adds a single-cluster G6100 Rogue to the family.
http://www.imgtec.com/News/Release/index.asp?NewsID=730
Marketing claims it's the smallest implementation of OpenGL ES 3.0.
Much more is revealed in the blog posting.
http://withimagination.imgtec.com/index.php/powervr/powervr-g6100-small-is-powerful
Of note: for the first time we have a frequency/core performance comparison between Series5 and Series6. The graph suggests that a G6100 @ 300 MHz has 50% more "graphics and compute GFLOPS performance" than an SGX544MP2 @ 250 MHz. (I'm seeing around the low 20s for the SGX544MP2 vs. the low 30s for the Rogue.)
Can anyone give an interpretation of what the X and Y axes are supposed to mean in the overall core map on that blog? Why is the G6100 rightmost? Why is the G6200 leftmost, and only ever so slightly higher than the G6100? Or is there no logic to the relative placement whatsoever? Is the Y axis just the timeline of product announcements?
Also, and it's a bit off-topic, I don't know why they continue to include the SGX520 in such diagrams; my understanding is that it was never implemented in production silicon.
I'm getting 18 GFLOPS with the 9th FLOP per ALU lane counted.
2 cores * 4 ALUs * 9 FLOPs/ALU * 0.25 GHz = 18.0 GFLOPS
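The same calculation in Python, with the blog graph's claimed 50% Rogue advantage applied on top (the 9 FLOPs/ALU counting is the assumption above, not a published figure):

# SGX544MP2 peak estimate, counting the 9th FLOP per ALU lane.
cores, alus, flops_per_alu, ghz = 2, 4, 9, 0.25
sgx544mp2 = cores * alus * flops_per_alu * ghz
print(sgx544mp2)        # 18.0 GFLOPS @ 250 MHz

# Applying the graph's "50% more" claim for a G6100 @ 300 MHz:
print(sgx544mp2 * 1.5)  # 27.0 GFLOPS implied for the Rogue

That lands below the low 30s eyeballed from the graph, so the graph's SGX544MP2 baseline is presumably a bit higher than 18.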
I don't see anywhere that a question was asked. I could have sworn I'd answered the above, but never mind.
I updated the Wikipedia page. Can someone please check my work?
http://en.wikipedia.org/wiki/PowerVR#Series_6_.28Rogue.29
Under "config core", what does the 2nd digit stand for? TMUs?
I have no idea. I simply assumed 1 based on the others.
That page is all over the place for many descriptions of the cores (not just Rogue). Just talking about Rogue, the compute rates are wrong, but because we've never formally described what the hardware can do in public I can't correct it.
I'll see what I can do about that.