NVIDIA Tegra Architecture

When companies say that their new SoC will be X times more powerful and Y times more power efficient, they never mean both at the same time. If they actually imply this, it's some marketing person screwing it up. Usually they're careful not to.

The claims also tend to be pretty vague, especially the power consumption ones, and often end up being just plain wrong. You shouldn't take them that seriously.

The NV claims I've read state that T4 will be between 3x and 4x faster than T3 in graphics applications, which sounds quite realistic given the increase in unit count.

I expect that all four of the Exynos 5 Octa's Cortex-A15 cores running at even 1.7GHz will be too much for a phone to bear. That'll probably use over 5W just for the CPUs, unless the bins have much better characteristics (I doubt the 28nm process has strikingly better power consumption vs the 32nm one). So I don't think you'll see that clock speed unless some cores are turned off, maybe all but one.
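A rough sanity check on that figure (the per-core number below is an assumption for illustration, not a published spec): if a fully loaded Cortex-A15 at 1.7GHz draws somewhere around 1.3W, then

4 cores x ~1.3 W ≈ 5.2 W

and that's before the GPU, memory interface or display draw a single milliwatt.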

Samsung's upcoming SoC is obviously OT here, but for smartphones I wouldn't expect all that much difference in GPU performance between the octacore and AP40 (the winner by a slight margin would rather be whichever of the two ends up with the higher final frequency). What'll be truly more interesting will be the power consumption characteristics, and that more on the CPU than on the GPU side of things.

In a 4+4 big.LITTLE config it'll come down to where exactly the four A7 cores come into play.
 
What we do know is that the Adreno 320, on very early drivers, is still the most powerful smartphone GPU out there (something I predicted would be the case last summer, naysayers ;) ).
So with the performance increase of the GPU core, combined with likely new drivers for that new uarch and increased bandwidth (the Snapdragon 600 will be interesting for this very reason...is the Adreno 320 also bandwidth starved? We get to find out!)
I'm guessing that at least until Apple's A7/Valleyview, Qualcomm will hold its lead on smartphone GPUs...a massive achievement if that turns out to be a year :). Similar claims can also be made about its CPUs (S4 Pro) and modems.

Wait, it's been on shelves for how many months now and you call its drivers still early? Early in my book is at worst a month or two after release. Peak performance at the moment is slightly over 3800 frames for the fastest Adreno 320.

Qualcomm's own roadmap claims 30% more performance for the Adreno330.

Samsung has released details of its Series 5 Octa, which I'm guessing you would have known on day one...they claim, again quite believably for low-end tasks, that their big.LITTLE uarch consumes up to 75% less power than, I assume, the Exynos 4 Quad...with an on-stage demo pointing this out, showing each core's usage...very impressive and much more believable than NVIDIA, seeing as how the Exynos 4 Quad turned out to be so bloody good.
Their GPU tech in the Series 5 Octa is rumoured to be an SGX 544MP3 @ 500MHz or so...very disappointing if true...I was hoping for some next-gen hardware like a Mali T604 or, even better, a T658...although that last one may be jumping the gun somewhat...still, they claim GPU power is twice that of the Exynos 4 Quad...again very believable, and lower power consumption on the GPU is also believable, as that would spread the performance across more ALUs and on a better process.

The former is impressive, and an MP3 yielding higher performance than the T604 you're wishing for is disappointing? For reference, the MP3@325MHz yields 3290 frames in GL2.5, the T604 slightly under 4200, and the T658, for which you obviously mean one of the 8-cluster configs, would be a power consumption disaster for a smartphone platform by today's standards. Would you squeeze an 8-cluster Rogue TODAY into a smartphone? I guess not.

We'll probably have a number of smartphone SoC platforms that are, give or take minor differences, in the same ballpark from Samsung, Qualcomm and NVIDIA. Oh, and there will also be the obvious outsider which typically competes more against itself with new generation SoCs. In due time we'll start the mm2 vs mm2 debates all over again to justify that the top spot in public benchmarks wasn't held for all that long after all...same old same old as in the past.
 
Look at what happened with the Adreno 2 series and the drivers...new drivers that vastly increased performance, sometimes by over 50%, were released way more than a few months down the road...I'm surprised you don't remember this topic, as we had a number of interesting discussions on it, Ailuros :)

New, better written drivers benefit at any time...especially if said GPU has just been released...and also if the previous uarch had suspected compiler issues which benefitted from improved drivers a long time down the road.

Yes, I would be disappointed with a last-gen 5 series XT...because I would be investing in a platform on an 18-24 month contract...so I would obviously want the best APIs and latest technology...
An 8-core Mali Midgard GPU would fit into smartphones quite nicely on 28nm...don't forget clocks would be adjusted, and performance per watt would be better spreading the load across more execution units and lowering the clocks...isn't that Apple's strategy for A-series SoCs and also Intel's for its integrated GPU strategy...especially with Haswell coming up?

Also, the advantage of going all-ARM is obviously that ARM has designed and optimised all its chips of that generation to work alongside each other...A15+A7+Midgard+cache coherency in a big.LITTLE setup was obviously ARM's design intention...so they should work best together, no?

Of all the next-gen SoCs coming up...now that Samsung has likely dropped Mali...I am most excited about the Snapdragon 800, with Valleyview also catching my eye.
 
Look at what happened with the Adreno 2 series and the drivers...new drivers that vastly increased performance, sometimes by over 50%, were released way more than a few months down the road...I'm surprised you don't remember this topic, as we had a number of interesting discussions on it, Ailuros :)

Still nothing, or nowhere near the real potential of the hw itself, unless shader complexity is quite high like in GL2.5.

New, better written drivers benefit at any time...especially if said GPU has just been released...and also if the previous uarch had suspected compiler issues which benefitted from improved drivers a long time down the road.
All GPU IHVs improve drivers over time, NVIDIA not excluded.

Yes, I would be disappointed with a last-gen 5 series XT...because I would be investing in a platform on an 18-24 month contract...so I would obviously want the best APIs and latest technology...
There's still an explanation missing as to what full OGL_ES3.0 compliance will be good for.
An 8-core Mali Midgard GPU would fit into smartphones quite nicely on 28nm...don't forget clocks would be adjusted, and performance per watt would be better spreading the load across more execution units and lowering the clocks...isn't that Apple's strategy for A-series SoCs and also Intel's for its integrated GPU strategy...especially with Haswell coming up?
No one said that Samsung won't use further GPU IP from Mali; it just remains obvious that an 8-cluster GPU is way too much for today's smartphones, either way you want to twist it. ARM's own GPU roadmap has them projected for 2014 and not earlier. If Samsung could have, they would have.

Yes, you can adjust frequencies theoretically, but if you go too far with that adjustment it somewhat defies the point of having that many clusters in the end. A T604 in the Nexus 10 has what kind of peak power consumption? Adjust the frequency down to peak smartphone levels and you end up with quite a bit less performance. Haswell isn't for smartphones yet, and Intel doesn't have any intention of changing its current strategy there for quite some time, afaik.
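For what it's worth, both sides of this have a basis in the same first-order math; the voltages below are illustrative assumptions, not measured figures. Dynamic power scales roughly as P ∝ N·C·V²·f for N clusters of switched capacitance C at voltage V and clock f, and V normally has to fall as f falls:

4 clusters at f, 1.1 V: P ∝ 4 · f · 1.1² ≈ 4.84·C·f
8 clusters at f/2, 0.9 V: P ∝ 8 · (f/2) · 0.9² ≈ 3.24·C·f

Same theoretical throughput, roughly a third less dynamic power, which is the wide-and-slow argument. The catch is the one above: area, cost and leakage all scale with the cluster count, and if the voltage can't drop that far the advantage shrinks fast.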

Also, the advantage of going all-ARM is obviously that ARM has designed and optimised all its chips of that generation to work alongside each other...A15+A7+Midgard+cache coherency in a big.LITTLE setup was obviously ARM's design intention...so they should work best together, no?
Obviously no. All GPU vendors adjust their IP for integration.

Of all the next-gen SoCs coming up...now that Samsung has likely dropped Mali...I am most excited about the Snapdragon 800, with Valleyview also catching my eye.
Again, Samsung hasn't dropped Mali; it's merely using each of its two GPU IP licenses wherever it makes more sense: for the time being the Exynos 5250 in tablets like the Nexus 10, and the upcoming octacore with IMG GPU IP. I'd expect Samsung to continue using further T6x0 variants for its upcoming tablets. There's no indication yet that Samsung has licensed Rogue; even if they have struck such a license and it hasn't leaked yet, it still doesn't mean that they intend to abandon ARM's GPU IP entirely.

With regard to their octacore, it sounds like the platform will deliver in a smartphone, give or take, what their 5250 delivered in a high-end tablet not too long after its release, and that's not uncommon for many out there, Apple included.

Again, Valleyview is a tablet SoC; I'm constantly losing track of Intel's codenames, but if that's their first-generation tablet with their own GenX GPU technology, colour me unimpressed, since they themselves claim a rough 3x performance increase compared to their current tablets. 3x an SGX545@533MHz sounds far from any "killer tablet" in terms of GPU performance.
 
No developer is going to target 30 million OGL 3.0 devices over 600-800 million devices with old APIs.
True, if that's the thinking...but do you not think we will get to a point like PC gaming, where you have a settings option to enable extra effects? Things like this have already been in effect for a couple of years now on smartphones...draw distance, textures, etc. have been settings in certain games such as Grand Theft Auto 3...and that's not taking into consideration the many devs baking customisations into the same game that can be utilised by having Tegra drivers, can they not?

Smartphones are like PCs of 5 years ago...was it not possible to adjust settings from DX9 to DX10 on certain games then? What about AA on supported SoCs? AF? Higher-res texture downloads? It's not hard to see a scenario in the very near future where nearly all games are designed from the ground up with Halti features baked in from the start...with options to turn features on or off on supported hardware...that is a very logical way to do it, considering games will be built with engines that utilise these features anyway...so it would be fairly easy and economical for the devs to do it.
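For what it's worth, the per-device settings idea is technically straightforward; below is a minimal sketch in C of how an engine could gate effects at runtime (the chosen extension and the three tiers are illustrative assumptions, not what any shipping game does). glGetString has to be called with a current GL ES context:

#include <string.h>
#include <GLES2/gl2.h>

/* Returns 1 if the current GL ES context exposes the named extension.
   Naive substring match; fine for a sketch. */
static int has_extension(const char *name)
{
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    return exts != NULL && strstr(exts, name) != NULL;
}

/* Pick a render path at startup; an "OpenGL ES 3." version string
   means an ES3.0 (Halti) capable driver. */
static int pick_render_path(void)
{
    const char *ver = (const char *)glGetString(GL_VERSION);
    if (ver != NULL && strstr(ver, "OpenGL ES 3.") != NULL)
        return 2;                               /* full effects   */
    if (has_extension("GL_OES_depth_texture"))
        return 1;                               /* extended ES2.0 */
    return 0;                                   /* baseline ES2.0 */
}

The same pattern covers vendor-specific paths too: checking for a vendor-prefixed extension string is exactly how "optimised for X" builds select their content.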
 
Ailuros...you're not answering my points...new drivers on the Adreno 320 will obviously make a considerable difference...more so perhaps because of Qualcomm's previous history in this regard.
The Adreno 320 is a very, very good smartphone chip...Qualcomm doesn't change clocks on their smartphone GPUs in comparison to tablets...what they claim will happen with SoC X (i.e. the S4 Pro) will be the same for either smartphone or tablet...unlike NVIDIA, who put out ridiculous performance figures which will never see the light of day in a smartphone.

Take the slide I presented earlier...the NVIDIA slide CLEARLY states Tegra 4 = 6x Tegra 3...they didn't separate smartphone from tablet...they also didn't say 3x in anything I've read or seen...and I also saw the CES unveiling...6x was definitely the point they put across.
Also, Anandtech was clearly told by NVIDIA that released hardware will best the iPad 4...again no differentiation between tablet and smartphone...do you really think that will show up in a smartphone? Do you even think that is possible, full stop, in a tablet anyway?

Qualcomm has clearly stated its Snapdragon 800 chip will be in smartphones...and said that performance will be 75% better CPU, twice the bandwidth, and 50-200% better GPU (workload dependent)...they clearly state smartphone and have a track record of accurate performance predictions...I believe them. :)
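Taking those GPU numbers at face value against the 3800 frames in GL2.5 quoted earlier for the Adreno 320 (naive arithmetic; it assumes the S4 Pro is the baseline Qualcomm measured against):

3800 x 1.5 ≈ 5700 frames at the low end of the claim
3800 x 3.0 ≈ 11400 frames at the high end

Where a real workload lands inside that 2x-wide window is exactly the "workload dependent" catch.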

It's reasonable to presume that Qualcomm's predictions are more truthful/accurate than NVIDIA's going by past history...especially so in smartphones, where NVIDIA produces a lower-powered part for smartphones...

With this in mind, along with the likely massive improvement to Adreno drivers (actually coming soon, so I read on these threads), Adreno is going to be the top smartphone GPU at least until the A7 from Apple...I think that is a fair prediction...something I said last year which also came to fruition despite you saying otherwise :).

Also, although I will be honest and say I'm not aware of the full API features Halti can produce over, say, an SGX 544, I'm willing to bet there are some that will be very useful in games...maybe OpenCL? For physics? AI? I don't know, to be honest...what I do know is you wouldn't get a millionaire buying a Ferrari over a Lamborghini if the Lambo had newer features and a more powerful engine and cost the same...never mind the fact he/she would be dawdling along to the shops and back and would never see any benefit from such features.

In short, we all want the best, latest technology in our smartphone if the price is going to stay the same...the SGX 5 series, whilst very good, is old tech now...we want Halti on every next-gen SoC...it increases the likelihood of us actually getting games with those features.

Smartphones are the Lambos and Ferraris of the handheld world...why settle for second best?

Edit: content, punctuation/grammar.
 
True, if that's the thinking...but do you not think we will get to a point like PC gaming, where you have a settings option to enable extra effects? Things like this have already been in effect for a couple of years now on smartphones...draw distance, textures, etc. have been settings in certain games such as Grand Theft Auto 3...and that's not taking into consideration the many devs baking customisations into the same game that can be utilised by having Tegra drivers, can they not?

Smartphones are like PCs of 5 years ago...was it not possible to adjust settings from DX9 to DX10 on certain games then? What about AA on supported SoCs? AF? Higher-res texture downloads? It's not hard to see a scenario in the very near future where nearly all games are designed from the ground up with Halti features baked in from the start...with options to turn features on or off on supported hardware...that is a very logical way to do it, considering games will be built with engines that utilise these features anyway...so it would be fairly easy and economical for the devs to do it.

Before you can add features to games you need the performance to use them; do you really think mobile GPUs have performance to spare when running demanding content? Further, do you really believe that mobile/handheld content has reached the point where it is being significantly held back by the features in ES2.0, in particular ES2.0 with extensions that take it very close to ES3.0?

Edit: and wasn't this thread supposed to be about Tegra? ;-)
 
True, if that's the thinking...but do you not think we will get to a point like PC gaming, where you have a settings option to enable extra effects? Things like this have already been in effect for a couple of years now on smartphones...draw distance, textures, etc. have been settings in certain games such as Grand Theft Auto 3...and that's not taking into consideration the many devs baking customisations into the same game that can be utilised by having Tegra drivers, can they not?

Smartphones are like PCs of 5 years ago...was it not possible to adjust settings from DX9 to DX10 on certain games then? What about AA on supported SoCs? AF? Higher-res texture downloads? It's not hard to see a scenario in the very near future where nearly all games are designed from the ground up with Halti features baked in from the start...with options to turn features on or off on supported hardware...that is a very logical way to do it, considering games will be built with engines that utilise these features anyway...so it would be fairly easy and economical for the devs to do it.

And do you see this happening this year? Do you see it happening even next year? By the time OGL ES 3.0 is an API that devs actively support, the Galaxy S4 will be old news.

So how exactly are new APIs going to help the T604 when a 544MP3 at 533MHz could be almost 40-50% faster in raw performance?
 
Before you can add features to games you need the performance to use them; do you really think mobile GPUs have performance to spare when running demanding content? Further, do you really believe that mobile/handheld content has reached the point where it is being significantly held back by the features in ES2.0, in particular ES2.0 with extensions that take it very close to ES3.0?

Edit: and wasn't this thread supposed to be about Tegra? ;-)

Yes John, I do feel the higher-end mobile GPUs do have performance to spare...do you think mobile games are maxing out all SoCs?
 
And do you see this happening this year? Do you see it happening even next year? By the time OGL ES 3.0 is an API that devs actively support, the Galaxy S4 will be old news.

So how exactly are new APIs going to help the T604 when a 544MP3 at 533MHz could be almost 40-50% faster in raw performance?
Is that really going to be true?...even accounting for higher clocks and the likely ultra-low thermal limits on the Nexus 10?
I think the T604 is capable of much more performance than what we have seen...off the top of my head, 32fps at GL2.5?...judging by how that is a heavily shader-bound benchmark and the T604 is unified with around 60 or so GFLOPs of compute power at decent frequencies...I feel there is more to come from that chip...besides, I would happily take a Rogue design over a T604 any day...John, is Rogue released publicly this year? :)

The Mali T658 was announced one year after the T604...so would it not be feasible to see that chip show up in devices one year later also?

Put it this way: IF the three big SoC manufacturers (Apple, Qualcomm, Samsung) released next-gen GPUs with Halti, I would expect to see those features baked in pretty soon, yeah...just like I said, with settings to turn on/off like other graphics options already on smartphones, such as high-quality textures and draw distance.

Now Tegra 4 has not got Halti...but we have at least got Adreno 320/305/330 devices along with the Nexus 10...in 6 months you would expect some serious market penetration of Halti...that's not taking into account a small outside chance of the Series 5 Octa getting it, and of course Apple.

Tegra 4, so people have told me, supports some more features over and above Tegra 3...and there will be optimised Tegra 4 games, you can be sure of that.

Qualcomm and Samsung have also demonstrated similar game optimisations with their SoCs at CES this year...smartphone games are already topping 1.5-2+ GB installs...is it really that much of a stretch, then, to see games developers trying to get a grip on this fragmentation by enabling every high-end game to have advanced graphics settings baked in? Would that not be more financially viable for studios than individually optimising different SoCs for bespoke features?

I think so, and I'll bet we will be seeing something similar to a PC hardware/software setup in the near future.
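To make that concrete (a sketch only; the tier names and values are invented for illustration, in the same C vein as the earlier snippet), the "baked-in settings" idea is little more than a preset table indexed by detected capability:

/* Hypothetical quality presets shipped once for all SoCs. */
struct quality_preset {
    int   texture_size;     /* longest texture side, texels */
    float draw_distance;    /* world units                  */
    int   msaa_samples;     /* 0 = AA off                   */
    int   es3_effects;      /* Halti-only extras on/off     */
};

static const struct quality_preset presets[] = {
    { 1024,  500.0f, 0, 0 },   /* baseline: any ES2.0 part    */
    { 2048, 1000.0f, 2, 0 },   /* mid: ES2.0 plus extensions  */
    { 4096, 2000.0f, 4, 1 },   /* high: ES3.0 capable GPUs    */
};

Indexed with something like presets[pick_render_path()] from the earlier sketch, plus a user-facing override, that's the whole "PC-style settings" mechanism.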
 
Ailuros...you're not answering my points...new drivers on the Adreno 320 will obviously make a considerable difference...more so perhaps because of Qualcomm's previous history in this regard.

It's still a fact that it's not an exclusive advantage for Qualcomm, since all GPU providers deliver performance improvements through drivers from time to time. For the record's sake, the Google Nexus 4 has been stuck at 3800+ frames for as long as I can remember.

The Adreno 320 is a very, very good smartphone chip...Qualcomm doesn't change clocks on their smartphone GPUs in comparison to tablets...what they claim will happen with SoC X (i.e. the S4 Pro) will be the same for either smartphone or tablet...unlike NVIDIA, who put out ridiculous performance figures which will never see the light of day in a smartphone.

Neither the Adreno 320 nor the 330 is likely to surpass the iPad 4's GPU performance even by a slight margin, since the distance from the 320's 3800 frames in GL2.5 to almost 5900 for the iPad 4 is not that small. NV claims that its Tegra 4 tablet SoC will be ahead of the iPad 4 both in games and benchmarks, and it's obvious that you cannot yet cram that much performance into a smartphone so easily without killing battery life in no time. NV's AP40/smartphone variants will most likely end up somewhere in the Samsung octacore, Qualcomm 330 etc. league, or, more simply, all in the >4000 frames ballpark.

I don't see anything ridiculous in that regard in NV's smartphone SoC policy; if you'd ask me, however, where NV plans to get the high-volume design wins for it, I'd be at my wits' end.

Take the slide I presented earlier...the NVIDIA slide CLEARLY states Tegra 4 = 6x Tegra 3...they didn't separate smartphone from tablet...they also didn't say 3x in anything I've read or seen...and I also saw the CES unveiling...6x was definitely the point they put across.
Also, Anandtech was clearly told by NVIDIA that released hardware will best the iPad 4...again no differentiation between tablet and smartphone...do you really think that will show up in a smartphone? Do you even think that is possible, full stop, in a tablet anyway?

Marketing is marketing no matter where you see it. It's supposed to paint the most optimistic picture, to phrase it mildly, and there are no exceptions there, irrespective of which company you choose. And please don't tell me that there isn't marketing involved when IMG's marketing throws around hundreds of GFLOPs in its Rogue marketing material. The trick in such cases is to be able to filter out the marketing hyperbole and try to estimate real-world efficiency.

Qualcomm has clearly stated its Snapdragon 800 chip will be in smartphones...and said that performance will be 75% better CPU, twice the bandwidth, and 50-200% better GPU (workload dependent)...they clearly state smartphone and have a track record of accurate performance predictions...I believe them. :)

Qualcomm's core business is mostly with smartphone SoCs; while they obviously also have tablet design wins, IMHO it's quite understandable why they concentrate mostly on the former, even more so with the huge number of smartphone design wins they land per year. Qualcomm itself claims in its own roadmap that the Adreno 330 will be clocked at 450MHz with the same TMU count as on the 320, and estimates in that very same roadmap 30% higher performance compared to the Adreno 320. Naive math: 3800 + 30% = ~4900 frames in GL2.5. An excellent ballpark for an upcoming smartphone SoC and, give or take, within AP40's reach. Now honestly, even if AP40 yields higher smartphone performance, do you really think it would be a serious threat to Qualcomm's business?

It's reasonable to presume that Qualcomm's predictions are more truthful/accurate than NVIDIA's going by past history...especially so in smartphones, where NVIDIA produces a lower-powered part for smartphones...

Those predictions, though, don't give me any significant difference from what I estimate NV's smartphone T4 will deliver.

With this in mind, along with the likely massive improvement to Adreno drivers (actually coming soon, so I read on these threads), Adreno is going to be the top smartphone GPU at least until the A7 from Apple...I think that is a fair prediction...something I said last year which also came to fruition despite you saying otherwise :).

Because I remember exactly what I said: I claimed in a similar debate that the iPhone 5 would be competitive with other solutions; it was and it is, since not all Adreno 320s yield the Nexus 4's GPU performance, nor do they share the power throttling problem that only the latter seems to have. You still haven't shown me where Adrenos had any "massive" driver improvements I seem to be missing; nor do I expect the 330 to come along with 7k frames, f.e. What I exactly expect is up there in the paragraphs above.

As for Apple's A7, I don't have a single clue yet; do you?

Also, although I will be honest and say I'm not aware of the full API features Halti can produce over, say, an SGX 544, I'm willing to bet there are some that will be very useful in games...maybe OpenCL? For physics? AI? I don't know, to be honest...what I do know is you wouldn't get a millionaire buying a Ferrari over a Lamborghini if the Lambo had newer features and a more powerful engine and cost the same...never mind the fact he/she would be dawdling along to the shops and back and would never see any benefit from such features.

The point is not the usefulness of the API features themselves but when developers are going to use them. The Adreno 320 has been available in devices for how long now, and how many mobile games out there support even a fraction of OGL_ES3.0 functionality? I haven't even seen an OGL_ES3.0 techdemo from Qualcomm yet, which doesn't mean they might not exist, but there's nothing available to the public yet.

In short, we all want the best, latest technology in our smartphone if the price is going to stay the same...the SGX 5 series, whilst very good, is old tech now...we want Halti on every next-gen SoC...it increases the likelihood of us actually getting games with those features.

SFF mobile GPUs have the "ridiculous" tendency to last several years, most of the time 4-5 years. No, it doesn't increase the chances of ES3.0 appearing sooner in games, since developers have the equally "ridiculous" tendency to concentrate mostly on the lowest common denominator, which is still ES2.0.
 
Is that really going to be true?...even accounting for higher clocks and the likely ultra-low thermal limits on the Nexus 10?
I think the T604 is capable of much more performance than what we have seen...off the top of my head, 32fps at GL2.5?...judging by how that is a heavily shader-bound benchmark and the T604 is unified with around 60 or so GFLOPs of compute power at decent frequencies...I feel there is more to come from that chip...besides, I would happily take a Rogue design over a T604 any day...John, is Rogue released publicly this year? :)

LG announced a dual-cluster Rogue (G6200?) at unknown frequencies at CES. The theoretical peak for the T604 should be 72 GFLOPs if memory serves, but that probably also counts some SFU FLOPs, which isn't uncommon in the latest marketing trends. However, SGX is mostly ALU bound in GL2.5; who guarantees that the T604 is also ALU bound in that one and that the bottleneck isn't exactly elsewhere?
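Unpacking that figure (pure arithmetic; the 533MHz clock is an assumption based on the commonly quoted Nexus 10 T604 frequency, not something stated here):

72 GFLOPs / 533 MHz ≈ 135 FLOPs per clock
135 / 4 cores ≈ 34 FLOPs per core per clock

which is hard to account for with plain vector MADs alone and is consistent with SFU/transcendental throughput being counted in, as suspected above.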

The Mali T658 was announced one year after the T604...so would it not be feasible to see that chip show up in devices one year later also?
It's no longer on ARM's GPU roadmap, nor can I find a dedicated GPU page for it anywhere on ARM's site anymore. It's most likely been replaced by the 8-cluster T678: http://www.arm.com/products/multimedia/mali-graphics-plus-gpu-compute/index.php
and will appear in devices, according to ARM itself, in 2014: http://www.arm.com/images/graphics-and-GPU-Compute-roadmap.jpg

Now Tegra 4 has not got Halti...but we have at least got Adreno 320/305/330 devices along with the Nexus 10...in 6 months you would expect some serious market penetration of Halti...that's not taking into account a small outside chance of the Series 5 Octa getting it, and of course Apple.

Tegra 4, so people have told me, supports some more features over and above Tegra 3...and there will be optimised Tegra 4 games, you can be sure of that.
According to JPR, in a spring 2012 statistic NV had a GPU market share of 3.2%, Qualcomm 33% and IMG 50%. For GPU IP singled out, IMG was at 80%. Now tell me how NV including ES3.0 in T4 would have made any significant difference, even if its market share doubles this time. It's not that NV should be excused for not including it, rather the contrary. But realistically seen, developers concentrate where the volume lies. And that 33% for Qualcomm in spring 2012 obviously counts only a tiny fraction of ES3.0 devices, if any at all.

Qualcomm and Samsung have also demonstrated similar game optimisations with their SoCs at CES this year...smartphone games are already topping 1.5-2+ GB installs...is it really that much of a stretch, then, to see games developers trying to get a grip on this fragmentation by enabling every high-end game to have advanced graphics settings baked in?
Yes. Because early API compliance mostly serves (even for the PC or any other market) to give developers tools to develop for the future. By the time the resulting applications get released, the available hw will be times more powerful and most likely ES4.0 compliant in the given case.
 
The point is not the usefulness of the API features themselves but when developers are going to use them. The Adreno 320 has been available in devices for how long now, and how many mobile games out there support even a fraction of OGL_ES3.0 functionality? I haven't even seen an OGL_ES3.0 techdemo from Qualcomm yet, which doesn't mean they might not exist, but there's nothing available to the public yet.
The delay in OGL ES 3.0 games is probably because OGL ES 3.0 drivers aren't ready, and that is probably because OGL ES 3.0 conformance tests haven't been ready until recently. You can't have OGL ES 3.0 games if we haven't fully figured out what can be considered "OGL ES 3.0". The original OGL ES 3.0 announcement in August 2012 said conformance tests would be ready in 6 months. Coincidentally, IMGTech just announced yesterday that they've made an OGL ES 3.0 conformance submission for Rogue. But yes, I wouldn't expect OGL ES 3.0 to really start to take off until 2014, and probably not be required even in high-end mobile games until 2015.

http://www.electronicsweekly.com/Ar...ts-opengl-es-3.0-conformance-with-khronos.htm

I haven't seen any OGL ES 3.0 listings on Khronos' conformant products page yet, although of interest there is an OGL ES 2.0 listing for the Mali-T624, an OCL 1.1 full profile GPU listing for the Mali-T604, and an OCL 1.1 GPU listing for the Adreno 320.

http://www.khronos.org/conformance/adopters/conformant-products

EDIT: And that IMGTech announcement also said Rogue is already shipping in an end-user product. I'm guessing they mean TVs or other electronics, since I haven't heard of any shipping smartphones or tablets with Rogue.
 
Yes, they probably mean the LG smart TV announced at CES. Besides that, is anyone aware, or better, can anyone tell when Kishonti will release GL2.7 & GL3.0? With a reasonable timeframe in between them, or at the same time?
 
Yes John, I do feel the higher-end mobile GPUs do have performance to spare...do you think mobile games are maxing out all SoCs?

Most mobile games aren't, but that's more to do with the casual nature of most mobile gaming or the different cost model employed; it has absolutely nothing to do with available support for the latest and greatest APIs.

Maybe you could explain how you think ES2.0 is limiting content?
 
french toast, with all due respect, do you even understand OpenGL ES 3's new features? Can you think of concrete ways in which current games would greatly benefit from it? Or are you merely assuming it'll bring a great advantage?

I'm not saying there isn't value in them, but you shouldn't be advocating it as aggressively as you have without a decent understanding of that value.
 
Are there any games on PC with the lowest-end GPUs that are playable with DX11-specific stuff enabled? (Tessellation etc.?)

If not, why should we expect these kinds of features from GPUs that are X times slower and driving similar resolutions?
 
We're lucky if DX10 stuff works on low-end desktop GPUs, and that's in light of DX11 GPUs having existed for around 3 years now.
 