Tegra 3 officially announced; in tablets by August, smartphones by Christmas

The Kindle Fire essentially runs Gingerbread, which has some bizarre problems with its garbage collector kicking in at the strangest times and causing stutters. The whole 2.x line is like this: poor memory management.
 
Reviews say the Fire isn't that smooth. But if it sells well, especially after Xmas, then Amazon has no need to chase high-end SoCs, does it? They are trying to hit a $200 price target so they can stay low-end, especially if the customers who buy the Fire have low expectations and don't care.

But do we really think that this is because of the hardware? It's maddening that you can't get smooth operation out of two fck'ing 1 GHz+ cores, and that the only fix is seemingly to throw more hardware at it, when even the first iPhone was pretty good at it. ICS seems to be better: I hope that's not only because of better hardware but also because of software efficiency fixes. It will be interesting to see how it performs on older dual-core phones/tablets.

I've read from owners that the Fire got a firmware update in the last week that really improved the UI issues. I've also been hearing that with the launch over the server-side performance problems with Silk and Amazon's Instant streaming have mostly gone away. With how much can be updated in software, day one reviews mean less and less every day. It would be interesting to see some second looks done like a month after launch to see if Amazon has sorted things out. On the day it came out the Fire may have been a crappy product, but I'm not sure that's true today.
 
Is the Android GUI written in Java? I am skeptical.
From what I understand, the problem is that the Gingerbread GC tries to tidy up RAM very often, and it is also triggered by things you'd never expect (like scrolling through lists). The GC seems to semi-block the system, causing stuttering in animations. People often blame GUI acceleration when they see the stuttering, but it is actually the GC.

However, some apps seem to suffer from it less than others. Opera Mobile, for example, is very smooth. And since Amazon is using its own UI, who knows what's up. But it sounds like they have improved performance.
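The "GC triggered by scrolling" behavior usually comes down to allocation churn: an adapter that builds a fresh object for every row that scrolls into view hands the collector garbage at exactly the moment the UI must stay smooth. A minimal, hedged sketch (plain Java, not the Android SDK; `Row` is a hypothetical stand-in for a list row's backing object) contrasting that with the recycling idiom Android encourages via `convertView`:

```java
import java.util.ArrayDeque;

class ScrollGcDemo {
    static int allocations = 0; // counts Row constructions

    // Hypothetical stand-in for a list row's backing object.
    static final class Row {
        int position;
        Row() { allocations++; }
    }

    static final int VISIBLE = 10;                 // rows on screen at once
    static final ArrayDeque<Row> pool = new ArrayDeque<>();

    // Naive binding: a fresh Row per row scrolled in; every scrolled-out
    // Row becomes garbage the GC must collect mid-animation.
    static Row bindNaive(int position) {
        Row r = new Row();
        r.position = position;
        return r;
    }

    // Recycling binding (the convertView idiom): reuse scrolled-out rows.
    static Row bindRecycled(int position) {
        Row r = pool.isEmpty() ? new Row() : pool.pop();
        r.position = position;
        return r;
    }

    // Simulate scrolling `rows` rows past a VISIBLE-row window; returns
    // how many Row objects were allocated along the way.
    static int scroll(int rows, boolean recycle) {
        allocations = 0;
        pool.clear();
        Row[] visible = new Row[VISIBLE];
        for (int pos = 0; pos < rows; pos++) {
            Row out = visible[pos % VISIBLE];
            if (recycle && out != null) pool.push(out);
            visible[pos % VISIBLE] = recycle ? bindRecycled(pos) : bindNaive(pos);
        }
        return allocations;
    }

    public static void main(String[] args) {
        System.out.println("naive:    " + scroll(10_000, false) + " allocations");
        System.out.println("recycled: " + scroll(10_000, true) + " allocations");
    }
}
```

Scrolling 10,000 rows allocates 10,000 objects on the naive path but only the 10 visible ones on the recycling path; the three-orders-of-magnitude difference in garbage is what separates a smooth app from a stuttery one on Gingerbread's stop-the-world collector.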
 

That's what I am asking. I thought the core OS was written in native code.
 
Nope, the vast majority of the UI framework code is Java.
 
GC-induced stutters can't be fixed with GPU acceleration.

I am, frankly, quite surprised that GC in this day and age is choking up HW.
 
HW acceleration is orthogonal to the language the code is written in, too. It just uses OpenGL from the Java side (and the native side too if you want).
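The reason GPU acceleration doesn't help is that a GC pause stalls the thread driving the frame, so the GPU just sits idle waiting for the next command buffer. A rough desktop-JVM sketch (not Dalvik, but the mechanism is the same) of a render loop whose only variable cost is heap churn, counting frames that blow the ~16 ms budget for 60 fps:

```java
class JankDemo {
    static final long FRAME_BUDGET_NANOS = 16_666_667; // ~60 fps

    // Simulated render loop. GPU work is stubbed out as "free"; the only
    // per-frame cost is allocation, so any missed frame is alloc/GC-induced.
    // Returns the number of frames that exceeded the budget.
    static int runFrames(int frames, int garbagePerFrame) {
        int missed = 0;
        byte[][] junk = new byte[64][]; // keeps a few refs alive, rest is garbage
        for (int f = 0; f < frames; f++) {
            long start = System.nanoTime();
            for (int i = 0; i < garbagePerFrame; i++) {
                junk[i % junk.length] = new byte[16 * 1024]; // per-frame garbage
            }
            if (System.nanoTime() - start > FRAME_BUDGET_NANOS) missed++;
        }
        return missed;
    }

    public static void main(String[] args) {
        System.out.println("missed frames (no garbage):  " + runFrames(600, 0));
        System.out.println("missed frames (heavy churn): " + runFrames(600, 2_000));
    }
}
```

Exact numbers depend on the JVM and heap size, but the point stands: the churn-heavy loop drops frames even though the "rendering" costs nothing, which is why no amount of OpenGL offload can mask collector pauses.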
 
As said, the problem lies in the GC. Why on earth anyone would even consider a VM-based, garbage-collected language for the UI layer, in an environment where efficiency is priority #1, is beyond me.
 

Portability?
 
Portability between what platforms exactly? As a professional C++ developer who has worked on pretty low-level stuff on various mobile platforms, I can say that with half-decent libraries portability is a non-issue and most definitely doesn't require changing the language.
 
x86 and ARM.

[edit] I realize that first post might have come across as elitist, but it was meant as a mere guess (not an insult). I don't know if x86/ARM compatibility is the real reason, but it's what first came to mind.
 
All it takes is a simple recompile. You're going to have to make a whole lot of modifications to your programs/libs anyway to support all the different screen sizes and HW features; the recompiling is the last of your problems.
 

I believe that if you don't use the NDK, all the features of Android are exposed via background services. The API is standard, and the service takes care of the windowing/hardware features.

To the program, it's an abstract audio/network/window system. I could be wrong though.
 
Amazon has the luxury of defining its own market segment, where pure performance comparisons with other tablets are not all that important. Consumers don't care. I don't know the difference between their current SoC and the T3, but if the difference is simply large enough to feel in day-to-day usage, that should be sufficient.
They don't need to have the absolute winner of the day.

Under that reasoning they won't be needing a Tegra 3 either.
 
Nice post at that link.

The memory bandwidth consideration is definitely valid. My Nook Color on CyanogenMod 7 can be set to a 16-bit framebuffer, and this very tangibly improves UI responsiveness. While the Nook Color isn't exactly top hardware, I'm sure bandwidth is impacting other tablets too, especially at higher resolutions.
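It's easy to put rough numbers on that. A back-of-the-envelope sketch (panel size assumed to be the Nook Color's 1024x600; this counts only the final framebuffer write, ignoring layer composition and display scanout, which multiply the total):

```java
class FramebufferBandwidth {
    // Bytes/second needed just to write a full screen every frame
    // at the given color depth.
    static long writeBandwidth(int width, int height, int bytesPerPixel, int fps) {
        return (long) width * height * bytesPerPixel * fps;
    }

    public static void main(String[] args) {
        int w = 1024, h = 600, fps = 60;           // Nook Color panel, 60 Hz
        long bw16 = writeBandwidth(w, h, 2, fps);  // 16 bpp (RGB565)
        long bw32 = writeBandwidth(w, h, 4, fps);  // 32 bpp (RGBA8888)
        System.out.printf("16bpp: %.1f MB/s%n", bw16 / 1e6);
        System.out.printf("32bpp: %.1f MB/s%n", bw32 / 1e6);
    }
}
```

That's roughly 74 MB/s at 16 bpp versus 147 MB/s at 32 bpp for the final write alone; with several composited layers plus scanout reading the buffer back every refresh, the delta can become a noticeable fraction of a low-end DDR bus.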
 
Bottom line:

Crappy engineering, and/or engineering forever playing catch-up with the competition.

I guess the way forward is clear: going forward, Google will just throw hardware at the problem. The Galaxy Nexus seems close to the tipping point, and beyond that there is little advantage in investing effort in something that will resolve itself in due course.
 

But her bandwidth math doesn't work out for GPU composition when the GPU is a tiler. And that applies to PowerVR, Qualcomm, and ARM GPUs, which along with NVIDIA should account for a majority of shares. That's even granting her further claim that most 2D compositing is translucent and that bandwidth therefore isn't saved by deferred rendering.

In the end, 2D + tiling should be great for bandwidth, since there's a relatively low ratio of primitives to pixels compared with 3D, so the vertex load stays low. That leaves just one store for all the pixels in the framebuffer, plus texture loads for all the image sources; the biggest problem would be if those textures aren't compressed. Either way, 16 bpp output shouldn't give you much of a bandwidth benefit over 32 bpp.
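The "one store for all the pixels" argument can be made concrete with a crude external-memory traffic model. This is a sketch under stated assumptions, not real GPU accounting: full-screen translucent layers, no caches, no texture compression, no partial coverage; the screen size and layer count in `main` are arbitrary examples.

```java
class CompositorTraffic {
    // External-memory traffic (bytes/frame) to composite `layers` full-screen
    // translucent layers at `bpp` bytes per pixel on an immediate-mode GPU:
    // each blend reads the source layer, then reads and rewrites the
    // destination framebuffer in DRAM.
    static long immediateMode(int w, int h, int bpp, int layers) {
        long px = (long) w * h;
        long textureReads = px * bpp * layers;     // source layers
        long fbReadWrite = px * bpp * 2L * layers; // dst read + dst write per blend
        return textureReads + fbReadWrite;
    }

    // Tile-based GPU: blending happens in on-chip tile memory, so DRAM sees
    // only the source reads plus a single final store of each pixel.
    static long tileBased(int w, int h, int bpp, int layers) {
        long px = (long) w * h;
        return px * bpp * layers  // source layers
             + px * bpp;          // one store of the finished tile
    }

    public static void main(String[] args) {
        int w = 1280, h = 800, bpp = 4, layers = 4;
        System.out.println("immediate: " + immediateMode(w, h, bpp, layers) + " bytes/frame");
        System.out.println("tiler:     " + tileBased(w, h, bpp, layers) + " bytes/frame");
    }
}
```

Under this model, four translucent 32 bpp layers cost 48 bytes of DRAM traffic per pixel on an immediate-mode GPU but only 20 on a tiler, even though translucency means no overdraw is eliminated; that's the sense in which tiling still saves bandwidth where deferred hidden-surface removal can't.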
 