NVIDIA Tegra Architecture

Let's concede that that is actually the case. Even so, the OMAP 4470 was announced in Q1 2011 and scheduled for release in the first half of 2012. TI would have to both announce OMAP 5XXX and release it well before the end of 2013 to be able to compete with T4.

Depends on when the 4470 was sampled to TI's customers.

Were they actually delays by Nvidia? Depends on whether you believe Charlie or not. It's not impossible that Asus wanted a Christmas launch for their tablet. Either way, Nvidia beat all their competitors in bringing out a next-gen chip.

You don't need to search for SA/Charlie's claims to see when NV expected T30 and AP30 to ship. NV even gave a reason for the delay, and yes, things like that can happen to anyone. All I'm saying is that NV isn't immune to delays either.

Rumour is that T4 was taped out in December 2011 and sampled in the first half of this year. So I would be bold enough to claim we will see it in products at MWC (and I'm prepared to eat crow if I'm wrong :LOL:).

It would also presuppose that all factors that aren't directly controllable by NV go according to plan. Besides, when each product gets introduced to the press by any given OEM is one side of the story and actual device availability is another, and yes, that goes for all SoC manufacturers.


Oh, definitely. But I was more talking about competing with Nvidia; Apple might be first with Rogue, but they play in their own division. As for ST-E's delays, I'm not sure the blame can be put solely on them. Everyone has had issues with 28nm, and bringing out both Rogue and Cortex-A15 on 28nm as early as they planned is no small task.

Which brings us back to factors that aren't directly controllable by each SoC manufacturer itself. Rather than taking each and every one's bold marketing claims at face value, it's not that absurd to separate projected from actual device availability for everyone.

For ST-E (or ST Micro) my only other point was/is that it might be nice to make bold marketing predictions for future products, but they still shot themselves somewhat in the foot, since it doesn't look like they'll be any earlier to market with the aforementioned SoC than anyone else.
 
Since when is Glowball a game?

Glowball is both a tech demo and a game (albeit a simple game, with only two levels available for now, but that is beside the point). Obviously there are many other quality games on the platform too: http://www.youtube.com/watch?v=lBl-goBrWno&list=PLB1B7D5FDB0BD9664&index=7&feature=plpp_video

If NV were to drive developers to concentrate game resources more on CPUs than on GPUs, just like in Glowball, they'd be in quite an awkward position in the future.

Don't be silly, there is nothing about Glowball that would "drive developers to concentrate game resources more on CPUs than on GPUs". The CPU and GPU are both important in rendering/simulating all the visual effects in that game/demo. Way to turn a positive into a negative though :rolleyes:
 
In an era when a good number of SoCs in the mobile space have GPUs actually capable of accelerating some basic physics for effects like rippling flags and flowing seaweed, nVidia's push for CPU-driven effects in Glowball and in the Tegra enhancements to Shadowgun stands out more as an awkward/inefficient balance than as some competitive advantage of Tegra's CPU-focused design. The state of the APIs in the mobile space does make a case for nVidia's approach, though, for a little while longer at least.

With quite a few of the most advanced games in mobile, like the Infinity Blades, EPOCH, and Fibble, currently only on iOS, Android gaming is still a ways from the breadth and quality of its iOS counterpart. Also, games with Tegra-exclusive enhancements like Riptide, Shadowgun, and The Dark Meadow run smoother and with more definition on the latest iOS devices, so Tegra 3 is not the SoC enabling the best mobile gaming experience by any means.

ST-Ericsson's poor standing in the market and their reliance on handset partners who are having tremendous struggles of their own have forced them to restructure and push the initiative on SoCs like the A9600 back to ST Micro, so what once had the potential to stake out the high ground in the mobile market may now face even more profound obstacles to ever executing properly.
 
A "good number" of SoCs currently available for purchase by a consumer today have mobile GPUs that are capable of handling physics simulations, while at the same time having to render other effects such as dynamic lighting, caustics, fog, motion blur, etc.? I doubt that.

At the end of the day, Tegra 3 does have a clear advantage [over A5X] in some applications with its quad-core CPU. It would be insane to deny that. And if given the opportunity to work with an SoC utilizing a quad-core CPU, developers would have to be completely obtuse to avoid taking advantage of those extra CPU cores. Remember Unreal Engine 3, which was mentioned earlier in this thread? Here is what Tim Sweeney said some time ago about utilizing multi-core CPUs with UE3: "For multithreading optimizations, we're focusing on physics, animation updates, the renderer's scene traversal loop, sound updates, and content streaming. We are not attempting to multithread systems that are highly sequential and object-oriented, such as the gameplay."

In other words, the GPU will still do much of the heavy lifting to render 3D graphics in the game, but the CPU can and will be used to enhance visual/audio effects. There is nothing wrong with that approach. And it is better to let those extra CPU cores do something rather than just sit there idle doing nothing.
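Just to illustrate the kind of split Sweeney is describing, here is a minimal C++11 sketch (not actual UE3 code; the Update* functions and the fixed timestep are placeholders of my own): the parallel-friendly subsystems get farmed out to spare cores while the sequential gameplay logic stays on the main thread.

```cpp
// Minimal sketch only, not engine code: run the parallel-friendly subsystems
// named in the quote on worker threads, while the highly sequential gameplay
// logic stays on the main thread. The Update* functions and the fixed 'dt'
// are placeholders for illustration.
#include <future>
#include <iostream>

void UpdatePhysics(float dt)   { /* rigid bodies, cloth, ragdolls ...   */ }
void UpdateAnimation(float dt) { /* skeletal pose blending ...          */ }
void UpdateAudio(float dt)     { /* sound occlusion, mixing ...         */ }
void UpdateGameplay(float dt)  { /* sequential, object-oriented logic   */ }

void TickFrame(float dt)
{
    // Kick the parallel work onto the extra CPU cores.
    auto physics   = std::async(std::launch::async, UpdatePhysics, dt);
    auto animation = std::async(std::launch::async, UpdateAnimation, dt);
    auto audio     = std::async(std::launch::async, UpdateAudio, dt);

    // Gameplay remains single-threaded, as in the quote.
    UpdateGameplay(dt);

    // Synchronise before handing the frame to the renderer/GPU.
    physics.get();
    animation.get();
    audio.get();
}

int main()
{
    for (int frame = 0; frame < 3; ++frame)
        TickFrame(1.0f / 30.0f);
    std::cout << "done\n";
}
```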

With respect to TegraZone games, there is no question that Tegra 3 devices have better visual effects vs. iOS devices, and that translates to a better gaming experience. Even with these added visual effects, the gameplay is smooth (maximum raw frames per second is irrelevant with vsync limitations). The new iPad screen has richer colors and sharper text compared to most current Android tablet screens, but due to the extreme resolution, performance can slow down a bit at times even with the faster GPU hardware. Compared to the iPad 2, current Android tablet screens look as good or better with respect to richness of color and sharpness of text. Keep in mind that most Android Tegra 3 tablets available today are priced at or below the level of the iPad 2.
 
Yeah, I totally agree with the above. The Tegra optimizations have yielded noticeably better games... this is one of the reasons I might take the plunge and buy a Tegra 4 device... because there will be apps designed to take advantage of the hardware.

If only Tegra 4 would be equipped with a Mali T604... or a Rogue... that really would rock. I just hope we get a new uarch with OpenCL and Halti certification... combine that with the likely four Cortex-A15s (hopefully with their own voltage planes) and LPDDR3 and we will be seeing Xbox 360-class games on phones for a fiver!
 
Yup, NVIDIA has certainly done a very good job of porting their TWIMTBP model from the PC to the mobile space in my opinion (for better and for worse - obviously non-Tegra consumers are hurt by this, not just in a relative but also an absolute sense). The only thing I don't like about it as a gamer is that in order to achieve playable framerates on all Tegra platforms no matter the resolution, several of these games are rendering to a lower-res offscreen buffer (e.g. Riptide renders to 1024x768 IIRC) then scales it up. I know this is typical on consoles and even on the iPad 3, but as someone who really likes playing at native res, I'd personally like to be able to reduce the graphics quality slightly to play at full res instead.
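For anyone curious what that upscaling approach looks like in practice, here's a rough OpenGL ES 2.0-style sketch: render the scene into a smaller offscreen buffer (1024x768 is the figure I recalled for Riptide), then draw it back to the native-resolution framebuffer as a textured quad. DrawScene() and DrawFullscreenTexturedQuad() are hypothetical helpers, and an existing EGL/GLES2 context is assumed; this is an illustration of the technique, not any particular game's code.

```cpp
// Sketch only: assumes an existing EGL/GLES2 context; DrawScene() and
// DrawFullscreenTexturedQuad() are hypothetical helpers.
#include <GLES2/gl2.h>

void DrawScene();                                  // hypothetical
void DrawFullscreenTexturedQuad(GLuint tex);       // hypothetical

static GLuint fbo, colorTex, depthRb;

void CreateOffscreenTarget(int w, int h)           // e.g. 1024, 768
{
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenRenderbuffers(1, &depthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, w, h);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRb);
}

void RenderFrame(int lowW, int lowH, int nativeW, int nativeH)
{
    // 1. Render the scene into the smaller offscreen buffer.
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, lowW, lowH);
    DrawScene();

    // 2. Upscale: draw the result as a textured quad at native resolution.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);          // default (window) FB
    glViewport(0, 0, nativeW, nativeH);
    DrawFullscreenTexturedQuad(colorTex);
}
```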
 
Yup, NVIDIA has certainly done a very good job of porting their TWIMTBP model from the PC to the mobile space in my opinion (for better and for worse - obviously non-Tegra consumers are hurt by this, not just in a relative but also an absolute sense). The only thing I don't like about it as a gamer is that in order to achieve playable framerates on all Tegra platforms no matter the resolution, several of these games are rendering to a lower-res offscreen buffer (e.g. Riptide renders to 1024x768 IIRC) then scales it up. I know this is typical on consoles and even on the iPad 3, but as someone who really likes playing at native res, I'd personally like to be able to reduce the graphics quality slightly to play at full res instead.
Didn't Qualcomm announce a Tegra Zone competitor for their chips? Given nVidia's relative success with Tegra Zone, I'm a little concerned every SoC maker is going to start coming up with their own app store and locking up exclusive effects or whole games. I wonder whether Google should, or even can, take steps to dissuade this siloization, which I think serves less to grow Android as a whole than to grow each SoC maker's piece of Android at the expense of the others.

The case with Amazon is a bit different, because there is a pretty clear distinction between the custom Amazon Android 2.3 and Google Android proper. As well, you can expect continuity within the Amazon ecosystem, where a purchased app will run on future Amazon tablets. When smartphone makers offer their own app stores there's an expectation of continuity, where a game purchased on one Samsung phone will work on the next Samsung phone you purchase. If every SoC maker starts locking in exclusives, this adds another layer of complexity, where you can't expect apps purchased on one Samsung phone, for example, to necessarily work on your next Samsung phone. Given that smartphone makers tend to use different SoCs for the same phone in different regions, hearing about all the great games available for, say, a Galaxy S3 doesn't even mean you'll be able to get them if you pick up a Galaxy S3 in your local cell phone store. I know this open choice is an important part of the Android model, but I think it can also be intimidating and confusing for many consumers if they "upgrade" to a new and faster device and find that their purchased games are no longer available, or that they are available but actually look worse.
 
Don't be silly, there is nothing about Glowball that would "drive developers to concentrate game resources more on CPUs than on GPUs". The CPU and GPU are both important in rendering/simulating all the visual effects in that game/demo. Way to turn a positive into a negative though :rolleyes:

I'd put any IHV's in-house-developed tech demo into the same category. One of Glowball's goals is to show the strengths of a quad-A9 versus a dual-A9 CPU config; otherwise performance wouldn't drop significantly if you disable half the CPU cores, as you yourself pointed out.
 
The A5 and A5X have cycles to spare, comparatively. For the part of the physics that could be offloaded and processed more efficiently by the GPU, balancing that with help from only two A9s could result in lower overall power consumption for the workload.
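As a concrete (and deliberately trivial) example of the kind of "basic physics" effect mentioned earlier, a rippling flag can be done entirely as procedural vertex displacement in a GLES2 vertex shader, with no per-frame CPU cloth simulation or vertex re-upload. This is only a sketch of the idea; the attribute/uniform names are made up for illustration.

```cpp
// Sketch of the idea only: a procedural "rippling flag" in a GLES2 vertex
// shader, instead of simulating the cloth on the CPU and re-uploading
// vertices every frame. Names (u_time, u_mvp, a_position, ...) are
// illustrative assumptions, not from any shipping title.
static const char* kFlagVertexShader = R"(
    attribute vec3 a_position;   // flat flag mesh, x in [0,1] from the pole
    attribute vec2 a_texcoord;
    uniform   mat4 u_mvp;
    uniform   float u_time;
    varying   vec2 v_texcoord;

    void main()
    {
        vec3 p = a_position;
        // Sine-wave displacement along z, growing toward the free edge so
        // the edge attached to the pole stays put. No CPU-side physics.
        float phase = p.x * 8.0 - u_time * 4.0;
        p.z += sin(phase) * 0.08 * p.x;
        v_texcoord  = a_texcoord;
        gl_Position = u_mvp * vec4(p, 1.0);
    }
)";
```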

Shadowgun, as just one example of a multiplatform game, has received an update for the new iPad which claims "native 2048 x 1536 iPad resolution and 4x Anti Aliasing (MSAA) and yet it still retains completely smooth gameplay." I'm not sure what it's really doing (many games have actually been upscaling from a render target as mentioned), but the 2X graphics performance offered by the A5X over the A5 allows plenty of headroom for a properly polished app to render at a detail level higher than iPad 2 yet still lower than 2048x1536 while also increasing or at least maintaining frame rate.
 
I don't think this is a new uarch....

In my opinion, it is very likely that Tegra 4 will use Kepler DNA (G70-based architecture is long in the tooth, and G80, GT200, and GF100-based architectures are not an option). The performance of Robin should be quite competitive with A5X.
 
If that turns out to be true, with Halti support, then I will be a Tegra convert, as games will be shipping with advanced next-gen features turned on... the software gap between Tegra and non-Tegra games and apps has the potential to be huge.
 
The importance of the Kepler architecture to NVIDIA cannot be overstated. This is their only modern (post-G70) architecture that is suitable for use in mobile devices, IMHO. And not a moment too soon either. Just FYI, Robin is rumored to have an average power consumption no higher than Tegra 3's.
 
Due to the design of the shader core, with extremely high operating frequency (aka "hot" clock), introduced in G80 and carried on through the Fermi generation.
 
Due to the design of the shader core, with extremely high operating frequency (aka "hot" clock), introduced in G80 and carried on through the Fermi generation.

While that is true, I don't think that is as big a deal. Cores can be underclocked after all and doing so will save area. Area which you need for big caches in an IMR anyway.
 
Yes, the cores can be underclocked, but the shader cores still need to operate at a relatively high frequency. Also, the clocking logic needed for the faster shader cores is more power hungry relative to the approach used in Kepler.
 
The importance of the Kepler architecture to NVIDIA cannot be overstated. This is their only modern (post-G70) architecture that is suitable for use in mobile devices, IMHO. And not a moment too soon either. Just FYI, Robin is rumored to have an average power consumption no higher than Tegra 3's.

While I don't know what NV's future Tegra GPUs will look like, it doesn't have to be Kepler per se, and here your original comment about Kepler "DNA" sounds more reasonable at this point.

So far the ULP GeForce GPUs in Tegras aren't, in a strict sense, NV3x, NV4x, or anything else one could neatly categorize them as; they're more a mix and match of capabilities, ALU layout and other traits from several generations, adapted for a power-critical design like a small-form-factor SoC.

While that is true, I don't think that is as big a deal. Cores can be underclocked after all and doing so will save area. Area which you need for big caches in an IMR anyway.

I'm not so sure I agree. Hot-clocking ALUs might save die area when you do, for example, 64 ALUs @ 1GHz vs. 128 ALUs @ 500MHz, but that doesn't mean the former suddenly takes half the die area of the latter. Hot-clocking costs less, but it doesn't come for free either, both in die area and in R&D.
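A toy back-of-envelope to make the trade-off concrete; every number here is an illustrative assumption (the 1.3x capacitance factor, the voltages), not a measurement of any real GPU. The point is simply that the two configs match in raw throughput while the power and area pictures don't scale as neatly.

```cpp
// Toy back-of-envelope only; all figures are assumptions for illustration.
#include <cstdio>

int main()
{
    // Two hypothetical configs with equal raw throughput (ALUs * clock):
    //   A: 64 hot-clocked ALUs @ 1.0 GHz     B: 128 ALUs @ 0.5 GHz
    const double alusA = 64,  freqA = 1.0e9, voltA = 1.10; // hot clock often
    const double alusB = 128, freqB = 0.5e9, voltB = 1.00; // needs more V

    const double throughputA = alusA * freqA;  // ops/s (1 op/ALU/clock)
    const double throughputB = alusB * freqB;

    // Dynamic power scales roughly with C * V^2 * f. Assume per-ALU switched
    // capacitance is 1.3x for the hot-clocked design (deeper pipelining,
    // faster clock tree) -- again, a purely assumed factor.
    const double capA = alusA * 1.3, capB = alusB * 1.0;
    const double powerA = capA * voltA * voltA * freqA;
    const double powerB = capB * voltB * voltB * freqB;

    std::printf("throughput A/B : %.2f\n", throughputA / throughputB); // 1.00
    std::printf("dyn. power A/B : %.2f\n", powerA / powerB);           // ~1.57
    // Area: A has half the ALU count but not half the area, since the faster
    // logic and clocking overhead claw some of the saving back.
    return 0;
}
```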

IMHO whatever NV has developed for next-generation SoC GPUs might have, to some degree, its roots in past and/or current desktop designs; however, I would still suggest that these were designed specifically with the small form factor in mind and are not just a simple copy/paste job from an existing design. Of course their desktop experience so far plays a role, but only up to a point, since power consumption there is by no means as critical as it is for SoCs, and there's also a shitload of difference in terms of latencies between a small-form-factor and a high-end desktop design. Otherwise it can only be a "from the ground up" design, yet not necessarily a "tabula rasa", if that doesn't sound too confusing.

Hot clocks still sound like a weird idea for the small form factor; Vivante does it for its ALUs, yet the frequency difference is rather minuscule compared to the core frequency, nowhere near comparable to G80 and the like. Kepler or not, I doubt NV ever really intended to use hot clocks for Tegras.
 