"That's funny; why then did Vizio demo a T4 tab at CES? I'm pretty sure Asus will show one at MWC too."

I wonder how long that is going to take with only Toshiba publicly signing up?
You're right. I was in a hurry reading that; apologies for the confusion.
There's a huge difference between an MP8 Mali-450 (which I meant) and an MP8 Mali-T678 (which you meant).
It's more advanced API-wise (slightly), I'm sure it's higher performance too (slightly), and it's still on early-release drivers.
Although the iPhone 5 can be considered a real rival for the top spot, among Android devices nothing comes close.
Fair enough, but if you look at at least one of my posts, I suggested a variant like that (I didn't know the correct nomenclature).
Still, it does seem that my original doom and gloom about the 5 Octa's SGX being Series5 is likely unfounded; with the newly added extensions and more driver work, there's plenty of life in the old dog yet.
Edit: On a more general note, this was one of the first threads I put up, and I'm quite astonished by the feedback on it. Thanks, folks.
The Nexus4 and iPhone5 appeared on shelves in more or less the same timeframe, so both sides' drivers should be of roughly the same age. And since I can sense the reply already: you'd better double-check whether the iPhone5 GPU driver is unique to it or shared between a number of devices.
Call it what you want, but iPhones are one single device selling at an insane profit margin, precisely because it's as expensive as it is. The Nexus4 has sold something like 1 million devices so far, afaik; how many iPhone5s since its introduction? It might not sound relevant to the above, but:
1.) Performance per se is not always the most defining factor as long as the difference is as close as it is in this case.
2.) The Adreno320 is faster by a margin in GLB2.5; it's not in 2.1, not in effective fillrates, not in geometry throughput etc., according to the results here: http://www.glbenchmark.com/compare.jsp?D1=Apple+iPhone+5&D2=Google+Nexus+4&cols=2
3.) Are all Adreno320 smartphones performing on the same level as the Nexus4? I don't think so: http://www.glbenchmark.com/compare.jsp?D1=HTC+Butterfly&D2=Apple+iPhone+5&cols=2 How about those Adreno320 cases where the performance is even way below 3k frames?
After some point it starts getting silly debating the same things all over again: despite the fact that I am documenting my claims, you're not willing to move a single inch from your conception. Now I'll lean out of the window and claim that Apple could, if they wanted to, surpass even the Nexus4 by a healthy margin through perfectly legitimate driver optimisations. What is not guaranteed in such a case is that the iPhone5 would retain the same GPU, and by extension device, stability it has right now.
Granted, the Nexus4 sits on top of all Adreno320-powered smartphones, but it comes with thermal throttling problems (which, as I noted, I'm not willing to generalise to all S4/Adreno320 device implementations until further data is known). When a serious website like Anand mentions "freezer" in its benchmark results for the Nexus4, and has described in detail a throttling problem that affects that particular device (and probably, to some extent, the LG Optimus G), I wouldn't, in your place, get so stuck on that result; I'd rather look for the most stable device/platform behaviour with an Adreno320 GPU and then make performance comparisons.
And compared to the other Adreno 320 phones, which all score somewhere between 29-31 fps, that puts the Adreno 320 neck and neck with the SGX543MP3. The difference is that the latter is clocked much lower; we will see what happens when Samsung releases a Series5 chip that is clocked closer to the competition.
According to the leaked Qualcomm slide (http://forum.beyond3d.com/showpost.php?p=1700375&postcount=308), the Adreno 320 GPU used in the S4 Pro SoC operates at 400 MHz. The SGX543MP3 GPU used in the A6 SoC should operate at 325 MHz at least (and possibly higher). So the difference in clock speed between these two GPUs is significant, but not huge.
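As a back-of-the-envelope sanity check on those clocks, here's a rough sketch: the ~30 fps Adreno 320 figure is quoted above in this thread, while the SGX543MP3 fps value is an assumed placeholder for "neck and neck", not a measured number.

```python
# Back-of-the-envelope per-clock comparison, using the clocks quoted above.
# fps numbers: ~29-31 fps for Adreno 320 phones is quoted in this thread;
# the SGX543MP3 figure below is an ASSUMED "neck and neck" placeholder.
ADRENO_320_MHZ = 400
SGX543MP3_MHZ = 325   # "325 MHz at least", per the post above

adreno_fps = 30.0     # typical Adreno 320 GLB2.5 score per this thread
sgx_fps = 30.0        # placeholder assumption, not a measured value

clock_ratio = ADRENO_320_MHZ / SGX543MP3_MHZ
print(f"Adreno 320 clock advantage: {clock_ratio:.2f}x (~{(clock_ratio - 1) * 100:.0f}%)")

# fps per GHz: a crude proxy for per-clock efficiency. It ignores uarch
# width, bandwidth, drivers, and thermals, so treat it as illustrative only.
print(f"Adreno 320: {adreno_fps / ADRENO_320_MHZ * 1000:.1f} fps per GHz")
print(f"SGX543MP3:  {sgx_fps / SGX543MP3_MHZ * 1000:.1f} fps per GHz")
```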
Hmm, I must have missed this. I was under the assumption that it was clocked at 267 MHz, based on Anandtech's preview.
"You really think that ~30fps on GLBench2.5 isn't struggling?"

The Adreno 320 doesn't struggle with GLBenchmark, John; see above.
"Not wanting to dwell on this, but you did quote ARM marketing bullshit about GPU/CPU integration..."

I'm not buying into any ARM marketing "bullshit"...
"ALL games, or just 'generic' mobile content that is written for the lowest common (performance) denominator?"

I'm merely stating that all current games can run smoothly on Adreno 205/SGX 540-class hardware; we are well beyond that performance now...
"Yes, agreed, and the first thing you need for this, before throwing esoteric features at the problem, is performance; this is particularly true within an extremely limited power budget."

Like I said, if all games can run on smartphones with those GPUs, why couldn't there be in-game options to enable more features and IQ? Of course there can be.
On your point about SGX Series5 enabling extensions that take it near parity with Halti: you're right. I've just read an article that also states this, so perhaps Series5 is still a good option.
"Wait, are you saying that current mobile games don't scale graphics settings based on the hardware that's available? Just because they don't reveal those settings to the user doesn't mean they aren't present and being used by the developer to optimize for each device. GTA III/VC enables higher-polygon models, textures, and lighting effects on A5/A6 Apple devices compared to A4 devices. The Android version, of course, gets a fully user-configurable graphics settings menu. Infinity Blade I/II optimizes different combinations of overall display resolution, texture quality, anti-aliasing, and shaders to make the most out of each supported device. Real Racing 2, NOVA 3, Modern Combat 4, etc. similarly have differing graphical quality depending on the device."

Edit: Just to clarify, I have already outlined the scenario I think we will hit (and already have, in a lot of cases), where games are built with options to scale to any smartphone hardware of the last two years or so. That takes out the lowest-common-denominator scenario: studios can develop, PC-style, for multiple SoCs/performance levels/resolutions with minimal cost.
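To make the per-device scaling concrete, here's a minimal sketch of how a title might pick a quality preset from the detected GPU. The tier table, GPU names, and mappings are illustrative assumptions, not any shipping game's actual logic:

```python
# Hypothetical per-device quality scaling, in the spirit of GTA III/VC or
# Infinity Blade shipping presets per device. Table contents are assumptions.
QUALITY_TIERS = {
    "low":  {"resolution_scale": 0.75, "msaa": 0, "shadows": False, "post_fx": False},
    "mid":  {"resolution_scale": 1.0,  "msaa": 2, "shadows": True,  "post_fx": False},
    "high": {"resolution_scale": 1.0,  "msaa": 4, "shadows": True,  "post_fx": True},
}

# Illustrative GPU -> tier mapping, mirroring the thread's examples
# (A4-class hardware gets less than A5/A6-class hardware).
GPU_TIER = {
    "SGX535": "low",      # Apple A4 class
    "SGX543MP2": "mid",   # Apple A5 class
    "SGX543MP3": "high",  # Apple A6 class
    "Adreno 205": "low",
    "Adreno 320": "high",
}

def settings_for(gpu_name: str) -> dict:
    """Return render settings for a device, defaulting to 'low' when unknown."""
    return QUALITY_TIERS[GPU_TIER.get(gpu_name, "low")]

print(settings_for("Adreno 320"))   # -> high preset
print(settings_for("UnknownGPU"))   # -> safe low preset
```

The point is only that a lookup like this is cheap for a studio to maintain, which is the "develop like PC" scenario described above.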
OK, point taken. I still stand by my claims, but let's await Anandtech's take on the Droid DNA before jumping for the stars.
I don't want to rehash old ground either, but I still feel there are some grounds for pessimism on Adreno compilers, and on good drivers having to make up for them. Who knows whether that carried over to the new generation or not; time will tell.
http://withimagination.imgtec.com/i...taking-texture-compression-to-a-new-dimension

Besides, I don't see how clock speed is relevant here. They are two different uarchs; how can we start saying "well, if this had that clock, or that had this clock"? The phones and SoCs have already been released, so we take them at those clocks. This isn't Imagination vs Qualcomm Adreno...
The only thing you can point to after release that may affect performance, imho, is drivers... but that depends on who you talk to.
They are both neck and neck, with the iPhone 5 better in OpenGL ES 2.1 and the Adreno slightly better in 2.5.
Once both have had another round of driver updates, which seem to be incoming from both GPU manufacturers, we will likely have a better idea, although I wouldn't be surprised to see them dead level again.
I only say Adreno 320 because I feel (not everyone agrees, granted) that the slightly more advanced API of the Adreno, along with the newer uarch, means more might be extractable from it. My opinion only.
Edit: To clarify what I'm saying (obvious to everyone here): things such as power consumption, heat, and yields also play a part in GPU design and final clocks, so we can't just say one SoC isn't performing properly because it isn't clocked quite as high as a competing SoC with a completely different uarch and likely different execution units. Just saying.
"Firstly, at no point have I said there is performance to spare on current hardware; that's your claim. I have specifically stated that current-generation mobile GPUs struggle with _demanding_ content, e.g. ~30fps on GLBench2.5 IS struggling, not that GLB2.5 is particularly strenuous by modern standards."

John, with respect, we seem to be going back over the same ground. On one hand you agree there is performance to spare with last-gen hardware on current games; on the other you keep saying "we need performance before IQ/features"?
"No. We agree that titles written for the lowest common denominator could have IQ improvements if optimised for higher-performance platforms. However, those IQ improvements will still primarily be capped by performance, not features."

We both agree there is obviously performance headroom even with last-gen Adreno 220/SGX 543-class hardware on smartphones. Obviously, when talking about smartphones, we can only be referring to generic lowest-denominator-type games such as the ones described, aka GTA, MC4, etc. So, with that being said, it is not out of the stratosphere to extrapolate that up to the next generation and expect more IQ and features, is it not?
"No, I've repeatedly pointed out that old hardware does not have the scope to run _demanding_ content, as is clearly illustrated by GLBench2.5. However, yes, the disparity between low and high end is likely to grow in the near term, and of course adding extra features at the high end would only make that disparity worse."

You keep saying "we need performance first". Yeah, as stated, we already have it with old hardware; with next-generation hardware, the performance disparity with available software is going to be massive if we carry on as we are.
"Obviously."

There is clearly room for more IQ at smartphone resolutions using next-gen hardware.
"Battlefield 3 does support DX10 GPUs. I thought therefore that DirectCompute was optional, rather than falling back to Compute Shader 4.x?"

If you look at Battlefield 3, it makes very good use of tessellation, but there's only one real reason why it's a DX11-only game: the DirectCompute shaders they use for their excellent deferred rendering implementation. OpenGL 4.3 now includes compute shaders inside the API, rather than requiring the developer to use OpenCL (and all the problems that can cause), but unfortunately that's not yet the case in OpenGL ES 3.0. Personally I completely agree with the idea that graphics is the killer app for compute, but unfortunately handhelds are not ready for it, either in terms of API support or architecture (too many incompatible optimisations for different architectures, e.g. shared/local memory bank conflicts). That's a much bigger problem than ES2 vs ES3 support, in my opinion.
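For a sense of what "compute shaders inside the API" looks like on the desktop GL side, here's a minimal sketch using the third-party moderngl Python package (an assumption for illustration). It requires a desktop GPU/driver exposing OpenGL 4.3, which is exactly what OpenGL ES 3.0 handhelds lack:

```python
# Minimal GL 4.3 compute shader dispatch via the moderngl package (assumed
# installed); needs a desktop GPU/driver exposing OpenGL 4.3.
import struct
import moderngl

ctx = moderngl.create_standalone_context(require=430)

# Trivial compute kernel: double every float in a storage buffer.
cs = ctx.compute_shader("""
#version 430
layout(local_size_x = 64) in;
layout(std430, binding = 0) buffer Data { float values[]; };
void main() {
    uint i = gl_GlobalInvocationID.x;
    values[i] *= 2.0;
}
""")

data = struct.pack("64f", *range(64))
buf = ctx.buffer(data)
buf.bind_to_storage_buffer(0)

cs.run(group_x=1)                            # one 64-wide work group
print(struct.unpack("64f", buf.read())[:4])  # -> (0.0, 2.0, 4.0, 6.0)
```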
P.S.: On a separate note, please let me know when any handheld game is anywhere near as awesome as TW2. Except for some adventure game ports, I haven't seen any good story-based games, and nothing as immersive (possibly because handheld games are optimised for shorter gameplay sessions).