PowerVR Series 6 now official

I just looked at the first post in this thread and it's 3 years old!

Whoa, it looks like it'll take well over 3 years for IMG to get a GPU family from announcement to a shipping product.

Was this delay really planned or were they counting on ST-Ericsson to bring a SoC to market earlier this year?
 
Rogue started shipping early this year... in TVs, which is maybe a bit less noticeable.

K-
 
Interesting, I didn't think they were using such high-performance cores for these smart TVs or Blu-ray players.

But for what, fancy menus and overlays? A core like that seems like overkill for things like Netflix.
 
Nobody knows, and if there are people who do know, you won't get an answer from them. Asking about it is pretty much pointless.

I think one very important factor is power dissipation. We have almost zero information about it, so we can't tell whether a given GPU consumes too much power for smartphones. This is very strange, because we know the power dissipation of most PC GPUs.

Without power figures we can't even compare which mobile GPU is more power-efficient. How can we call a GPU "better" without such important data? For example, is a mobile GPU with 300 GFLOPS at 5 W "better" than another with 50 GFLOPS at 0.5 W?
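
Just to make the arithmetic concrete, here's a minimal sketch (using only the hypothetical numbers above, nothing vendor-specific):

```python
def gflops_per_watt(gflops: float, watts: float) -> float:
    """Peak arithmetic throughput per watt of power dissipation."""
    return gflops / watts

# The two hypothetical GPUs from the question above:
gpu_a = gflops_per_watt(300, 5.0)  # 60 GFLOPS/W: far higher peak throughput
gpu_b = gflops_per_watt(50, 0.5)   # 100 GFLOPS/W: clearly more efficient

print(f"GPU A: {gpu_a:.0f} GFLOPS/W, GPU B: {gpu_b:.0f} GFLOPS/W")
```

Neither is "better" in the abstract: A wins on absolute throughput, B on efficiency, and without published power figures we can't even place real cores on that axis.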
 
http://withimagination.imgtec.com/index.php/powervr/powervr-gpu-the-mobile-architecture-for-compute

Here's one simple example that goes against your assumption: the PowerVR G6x30 GPUs add incremental area for features such as lossless image compression. For render targets, this provides a typical 2:1 compression rate, but it can be much higher, depending on the frame being compressed. The idea of adding more silicon in this case is to actually save on power consumption by reducing memory bandwidth.

It's been previously reported that lossless compression was added to the G6x30 over the G6x00, and IMG seems to officially confirm this in a reply to someone's question in that article. They claim a 2:1 compression rate or better for render targets.
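
To get a feel for the scale of the saving, here's a rough sketch (the workload figures are my own assumptions, not IMG numbers): a single 1080p RGBA8 render target, written once and read back once per frame at 60 fps.

```python
# Assumed workload: 1080p RGBA8 render target, one write plus one read
# per frame at 60 fps. Memory traffic is a big slice of mobile GPU power,
# so halving it is a genuine power saving.
WIDTH, HEIGHT = 1920, 1080
BYTES_PER_PIXEL = 4        # RGBA8
ACCESSES_PER_FRAME = 2     # one write, one read-back
FPS = 60

uncompressed = WIDTH * HEIGHT * BYTES_PER_PIXEL * ACCESSES_PER_FRAME * FPS
compressed = uncompressed / 2.0  # IMG's "typical 2:1" rate for render targets

print(f"uncompressed: {uncompressed / 1e9:.2f} GB/s")  # ~1.00 GB/s
print(f"compressed:   {compressed / 1e9:.2f} GB/s")    # ~0.50 GB/s
```

Roughly half a gigabyte per second of DRAM traffic saved on a single render target, which is why spending a little extra silicon on the compressor pays for itself in power.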
 
Based on the info released, 6XT doesn't appear to raise the upper performance ceiling over Series 6 anywhere near as much as 5XT did over Series 5.

Was a single-core SGX 5XT really more than 50% faster than an SGX5 in, for example, GLBenchmark?
 
If memory serves, clock for clock it was rated by IMG at 40% (I'm just too bored to dig out the initial 5XT announcements). Just for the record's sake, a 543 against a 540 has 2.5x the FLOPs/clock/ALU, 1.75x the triangle throughput, and twice the Z-check units (at the same frequency).
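
The gap between those per-unit ratios and the ~40% overall rating makes sense once you weight each unit by how much of a frame it actually occupies. Here's an Amdahl-style sketch (the workload mix is my own assumption, purely illustrative):

```python
# Per-clock unit ratios of a 543 over a 540, as quoted above; "other"
# covers everything the new core doesn't speed up (bandwidth, texturing...).
SPEEDUP = {"alu": 2.5, "setup": 1.75, "z": 2.0, "other": 1.0}

def effective_speedup(time_share: dict) -> float:
    """Amdahl-style estimate: old-core time share of each unit divided by
    that unit's speedup, summed, then inverted."""
    return 1.0 / sum(share / SPEEDUP[unit] for unit, share in time_share.items())

# Hypothetical frame-time mix (my assumption, not measured data):
mix = {"alu": 0.35, "setup": 0.12, "z": 0.08, "other": 0.45}
print(f"estimated clock-for-clock speedup: {effective_speedup(mix):.2f}x")
# -> about 1.43x, right around IMG's ~40% figure
```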

For Series 6XT I don't expect similarly significant changes in arithmetic throughput (if any at all), but rather in triangle throughput and the z/stencil department, amongst others.

Besides that, ULP GPU IP announcements from the big players have become so dry and uninformative these days that they could just announce a couple of codenames for new core variants and call it a day. Until I see a benchmark result from a 6XT core I couldn't for the world guess what exactly has changed, and I severely doubt they haven't added any hardware.
 
The performance improvements in 6XT are primarily focused on the compute hardware.
 