PowerVR Series 6 now official

http://www.imgtec.com/News/Release/index.asp?NewsID=666

But the press release said that all Series6 GPUs will support OpenGL 4.x, which includes tessellation and other capabilities that are also part of DX11, and which otherwise exceeds a basic DX10 design. As you point out, if they're supporting DX11, they might as well go DX11.1 for those GPUs that require it. But if all Series6 GPUs already support OpenGL 4.x, is there still a huge transistor investment to go DX11?

How do you understand OpenGL3.x/4.x in that sentence you're quoting? Why mention 3.x if they'd all be 4.x compliant in the first place? A GPU that supports 4.x obviously covers 3.x too. It probably could have been worded more accurately with a "from - to" range like for D3D, but apart from that I don't read anything strange in it.
 
I read it as a list of "new" APIs that Series6 adds that typically weren't seen in Series5, with OpenGL 3.x broken out from OpenGL 4.x for completeness. I was expecting that if OpenGL 4.x weren't truly supported across all of Series6, it would not have been in the initial list and would instead have been mentioned alongside DX11.1 as an optional extended feature. It was probably a misinterpretation or a typo, since it'd make sense for the base Series6 to be a DX10/OpenGL 3.x generation design with DX11/OpenGL 4.x reserved for certain models.
 

It'll most likely be worded more clearly in their future Series6 whitepaper. For Series5 and the desktop APIs, the whitepaper states:

DirectX9 (SGX535/545) and DirectX10.1 (SGX545)
OpenGL 2.1 (SGX535/545) and 3.1 (SGX545)

Past marketing rubbish for Series5 might have also stated OGL2.x/3.x; I'm just too bored to dig it out at the moment. Could be the reason why I understood what the poster was trying to say ;)
 
If MS flagged Mali/Adreno as "TBDRs" for DX11.1, it most certainly isn't for their benefit :LOL:
Specifies whether a rendering device batches rendering commands and performs multipass rendering into tiles or bins over a render area. Certain API usage patterns that are fine for TileBasedDeferredRenderers (TBDRs) can perform worse on non-TBDRs and vice versa. Applications that are careful about rendering can be friendly to both TBDR and non-TBDR architectures. TRUE if the rendering device batches rendering commands and FALSE otherwise.
There's nothing there about depth sorting, just tiling.
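
For reference, a minimal sketch (assuming an already-created ID3D11Device) of how an application would read that hint through the 11.1 CheckFeatureSupport query; as the quote says, it only tells you the driver tiles/bins, nothing about hidden surface removal:

    #include <d3d11.h>

    // Query the D3D11.1 architecture-info hint discussed above. Note it is
    // only a tiling/binning hint; it says nothing about depth sorting.
    bool IsTileBasedDeferredRenderer(ID3D11Device* device)
    {
        D3D11_FEATURE_DATA_ARCHITECTURE_INFO info = {};
        HRESULT hr = device->CheckFeatureSupport(
            D3D11_FEATURE_ARCHITECTURE_INFO, &info, sizeof(info));
        // The query fails on pre-11.1 runtimes; treat that as "no hint".
        return SUCCEEDED(hr) && info.TileBasedDeferredRenderer != FALSE;
    }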
 
Yeah, automatic depth sorting at the API level sounds like a really bad feature. You'd usually want to do that at a much coarser level, like per-model, where you know more about the high-level organization of your scene than DirectX does.

It still pays to read a flag about it, though. There should be one for both early-Z and tiling hints.
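
To illustrate, a minimal sketch of the coarse, app-side sorting meant here (names and structure hypothetical, not from any real engine):

    #include <algorithm>
    #include <vector>

    // Hypothetical per-model draw record; viewSpaceDepth is the model's
    // camera distance, computed by the application.
    struct DrawItem { float viewSpaceDepth; /* mesh, material, ... */ };

    // Sorting opaque models front to back before submission lets early-Z
    // reject occluded pixels on an IMR; a TBDR simply doesn't care.
    void SortOpaqueFrontToBack(std::vector<DrawItem>& items)
    {
        std::sort(items.begin(), items.end(),
                  [](const DrawItem& a, const DrawItem& b) {
                      return a.viewSpaceDepth < b.viewSpaceDepth;
                  });
    }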
 
Depth sorting at the API level is bad. But I think this flag will be taken to mean that the hardware will remove overdraw when many won't, confusing the hell out of developers initially and later on establishing the meme of Adreno and Mali being TBDRs.

It's badly worded.
 
Ok, that bloke with the beard (gee, he looks familiar, now where have I seen him before? :p ) managed to convince me. The NovaThor A9600 most likely contains a single G6400.
 
Framebuffer compression of nearly 50%. That's nice, but nothing an IMR can't copy.

But then again, I guess they wouldn't talk about optimizations that an IMR couldn't copy at all.
 
The best part for me is when he says that it can go up into the teraflop range at mobile power-consumption levels.

I really want to see the PowerVR Series 6 in a Next Gen Console for some reason.

I actually had a dream about it lol
 
I want to see a monster PowerVR GPU in an Apple STB.

Any specific reason you want it in a set-top box? Apple could enter the console market in a profound way; the casual gaming library is already huge, which would remove the "hobbyist" status from the Apple TV.
 
Set-top boxes are a lost cause for Apple. Thinking in the AppleTV direction makes far more sense.
 
Yeah, between the implication that Series6 cores start in a range around 100 GFLOPS and the focus on offering a wider range of cores with scaled cluster counts (as opposed to just relying on multicore), a single G6400 seems like a good guess for the A9600.

I was a little alarmed at the possibility that one of the first Series6 implementations was potentially already absorbing the multi-core overhead by using two or more cores. While that wouldn't necessarily be suboptimal, and while the A9600 should still be considered quite high-end given its clock-speed target, it suggested the low end of Series6 might have been aiming too low for the demands of the market, as with the SGX510.

With the mistarget of MBX Pro at the high end the first time around and SGX510 at the low end the second time around, they've probably got it just right this time.
 
Given the two Rogue cores announced so far, the G6200 (2 clusters, >100 GFLOPS) and the G6400 (4 clusters, >200 GFLOPS), and the note that performance can scale into the 1 TFLOP range, I'd assume it's currently a mix of multi-cluster (up to >200 GFLOPS) and multi-core (up to 1 TFLOP).
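
As a back-of-envelope check, using only the quoted figures and assuming roughly linear scaling per cluster:

    #include <cstdio>

    int main()
    {
        // >100 GFLOPS per 2-cluster G6200 => ~50 GFLOPS per cluster.
        const double gflopsPerCluster = 100.0 / 2.0;
        const double targetGflops     = 1000.0;  // the quoted ~1 TFLOP ceiling
        // ~20 clusters, i.e. about five 4-cluster G6400-class cores,
        // which is why the top of the range has to come from multi-core.
        std::printf("clusters for 1 TFLOP: %.0f\n",
                    targetGflops / gflopsPerCluster);
        return 0;
    }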
 
Certainly, and the even higher scalability of cores this time means multiple teraflops are well within range for higher-end markets, were it ever to be adopted in such configurations.

Having the first two cores in the mobile family carry four and eight TMUs, respectively, would be a solid advancement and a reflection of how high the demands for resolution (HD/retina) have become.
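
A rough fillrate sanity check (the clock, overdraw factor, and one-bilinear-texel-per-TMU-per-clock rate are my assumptions, not announced figures):

    #include <cstdio>

    int main()
    {
        const double clockHz   = 500e6;        // assumed core clock
        const double texelRate = 4 * clockHz;  // 4 TMUs -> 2.0 Gtexels/s
        // 2048x1536 "retina" at 60 fps with ~2.5x overdraw: ~472 Mpix/s.
        const double pixelRate = 2048.0 * 1536.0 * 60.0 * 2.5;
        std::printf("headroom: %.1fx\n", texelRate / pixelRate);  // ~4.2x
        return 0;
    }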
 
Anyone would have a damn hard time naming any realistic candidates for such licenses.


More useful AF is what interests me far more.
 
The main reason I'd rather see it in a set-top box is that most people will not buy a new TV just to get an integrated iOS subsystem. It's too much money to put down for a whole television, especially when most people are perfectly happy with their existing TVs.

The only other option is an STB like the AppleTV box, but much more powerful and flexible, e.g. able to run iOS apps directly. For example, an STB with 10 times the processing power of the most powerful iPad at the time of launch.
 