nVidia Denver discussion

Are there any (many?) Android games etc that use this performance, or is having the fastest hardware on that platform just a theoretical exercise?
It's tough to say without proper FPS measurements in games :(
I do believe this much power is somewhat unnecessary right now (I'm not sure what the most demanding games are these days; the GTA III series and Modern Combat 5 seem to appear often in YouTube performance comparisons, for what it's worth).
I'm not sure it's doable on the Nexus 9, but on the Shield tablet I believe you can lock the frame rate to 30 fps, which lets the chip sleep instead of burning power uselessly.

The thing is, developers target a wide range of devices, and the (IMHO stupid) race to higher resolutions ensures the jump in graphics quality can never be as significant as the silicon could provide.

I haven't read the rumors about Erista, but I'm not convinced I like where Nvidia is headed. Even though the Tegra 4i failed (and Nvidia abandoned it), I believe Nvidia needs two chips to cover the market it wants to address; they should pretty much copy Apple's AX/AXX approach.
What they need most now is not a crazy increase in specs but lower thermal dissipation.
The rumors reported here about Erista have me betting that Nvidia will again go without design wins of any significant volume, all for the sake of reaching "greater power" (whether today's software and usage need such power is another matter), power that may restrict their chips to tablets and automotive. I don't know about the margins in automotive, but tablets sell for a lot less than phones on average, and in lower quantities too.
IMO Nvidia needs to reconsider its positioning: there is some room between fighting for cut-throat margins with the likes of MediaTek, Rockchip, Allwinner, etc. and pursuing their own agenda, which doesn't seem that in line with the needs of the market.

EDIT
This is a CPU thread and I got confused: I think two Denver cores (if they perform properly in the real world) are enough, and I see no reason to push the core count further for mobile devices (phones/tablets). My sincere opinion is that the third core in those new iPads is unnecessary, a selling point (and benchmark winner).
I hope Nvidia mimics Apple here, and if it increases the core count (for marketing/PR's sake) it does so conservatively: one core at a time. Denver being a new architecture, I would hope they go to all necessary lengths to increase performance (per cycle and per watt) before resorting to adding more cores.
 
Are there any (many?) Android games etc that use this performance, or is having the fastest hardware on that platform just a theoretical exercise?

It's a theoretical exercise and there's no point to selling chips like these, that are just Too Powerful. Typical Nvidia. Next thing you know they'll start funding ports of popular games to fuel demand.
 
These devices can always use more single-threaded CPU oomph.

NV sees Tegra as their performance segment tablet and portable console hardware so the 3D is justified by that. They are trying to blaze a new trail. They aren't going after budget device wins anymore.

I imagine the 3D is also useful for the automotive industry.

Next thing you know they'll start funding ports of popular games to fuel demand.
Tegra Zone essentially is their attempt to foster development of more pretty 3D games. Android TWIMTPB.
 
It's a theoretical exercise and there's no point to selling chips like these, that are just Too Powerful. Typical Nvidia. Next thing you know they'll start funding ports of popular games to fuel demand.

If these chips are designed for demanding apps/games, and most of the demanding/optimised games are on the other platform then it does seem like an exercise in futility. What's the point in having the most powerful chips if they are never used to their potential?

The thing is, developers target a wide range of devices, and the (IMHO stupid) race to higher resolutions ensures the jump in graphics quality can never be as significant as the silicon could provide.

I think Android these days is really dominated by low-end "does it all" chipsets that marry cost effectiveness with sufficient performance. nVidia can get a few tablets, but if they don't get volume they will be cut out of the market. It seems easier in silicon to move up-market than down, because upward is the natural trajectory of performance.

The rumors reported here about Erista have me betting that Nvidia will again go without design wins of any significant volume, all for the sake of reaching "greater power" (whether today's software and usage need such power is another matter), power that may restrict their chips to tablets and automotive. I don't know about the margins in automotive, but tablets sell for a lot less than phones on average, and in lower quantities too.
IMO Nvidia needs to reconsider its positioning: there is some room between fighting for cut-throat margins with the likes of MediaTek, Rockchip, Allwinner, etc. and pursuing their own agenda, which doesn't seem that in line with the needs of the market.

The main 3D application with automotive is mapping right? I guess they came out with the right chip at the right time that could last as long as a typical 5-6 year model cycle for cars. I remember when AMD dominated the laptop mobile GPU market briefly, and how quickly that all changed when nVidia stepped up their game.
 
Tegra 4i was the beginning and end of NV trying to go low. There is too much competition down there I imagine.

I don't see that the Android world has changed much in years. We still have the hot new chips in the showy releases from Samsung, Apple, NVIDIA and also some new budget stuff in Mediatek and others. It may not really be working out well but I think it's clear that NV is trying to build a high-performance gaming segment of tablets and nobody else is bothering to try to do that. Carving out their own niche.
 
Tegra 4i was the beginning and end of NV trying to go low. There is too much competition down there I imagine.
I think the issue to begin with was the timeline; it got pushed back and pushed back again.
There are next to no serious reviews of the few devices powered by that chip, but its performance definitely has nothing to do with "slow".
It was a tiny chip. I don't know what the problem was (or problems were), but it had nothing to do with aiming too low: as far as performance is concerned it wipes the floor with Snapdragon 400-class SoCs (and the matching MediaTek & co. configurations of the time).

I don't see that the Android world has changed much in years. We still have the hot new chips in the showy releases from Samsung, Apple, NVIDIA and also some new budget stuff in Mediatek and others. It may not really be working out well but I think it's clear that NV is trying to build a high-performance gaming segment of tablets and nobody else is bothering to try to do that. Carving out their own niche.
Nvidia's PR cover makes no sense: they claim to avoid segments with cut-throat margins, yet focus on tablets? It's a cover for their inability (for now) to offer a proper phone SoC.
As soon as they have a compliant solution, be assured they will change their tune; they won't pass on the higher margins offered by mid-range (and higher) phones, not to mention the higher volume.
 
They built two big chips, partly because of their Shield products, and partly to get another Nexus design win, I'm sure. The Denver version wasn't ready until now, so they also had to do the A15-based Tegra K1 in order to have something out earlier this year. Maybe a third project for a phone chip is too much for them, but I'm still not convinced they want to compete in the razor-thin-margin realm of cheap phones, nor go directly against Samsung and Qualcomm.
 
They built two big chips, partly because of their Shield products, and partly to get another Nexus design win, I'm sure. The Denver version wasn't ready until now, so they also had to do the A15-based Tegra K1 in order to have something out earlier this year. Maybe a third project for a phone chip is too much for them, but I'm still not convinced they want to compete in the razor-thin-margin realm of cheap phones, nor go directly against Samsung and Qualcomm.

I think you are right: Nvidia wanted to make sure to have a high-performance SoC out this year even if the Denver cores proved problematic.

There was no room this time for a "lesser" SoC; Nvidia pretty much could not go below one SMX or scale the GPU down further. On that specific generation of products they had to pay the "price" (power and die area); the convergence of their GPU lines is definitely worth that price.
The thing is, FinFET and new lithography are just ahead of us, and in that context I hope Nvidia sticks to its two-chip strategy, as with the Tegra 4 generation:

Phone/tablet SOC 1: 2 denver cores + 1 SMM
Tablet and above (chromebook, STB, etc.) SOC 2: 3 denver cores + 2 SMM + 128 bit bus.

Not going against Samsung or Qualcomm? That is PR; they are up against Qualcomm, Samsung and others, and there is nothing they can do to avoid competition in any market segment.
Samsung's efforts don't impress me: I don't see them ahead of Nvidia in CPU design, they are way behind in GPU design, and when it comes to software Nvidia is in a league of its own. Nvidia has quite a few things going for it; until now there has always been something preventing them from delivering a SoC that meets the requirements of high-end phones. They could be there with their next SoC.

The Tegra 4i never competed in the low-end market (mid-range CPU performance and high-end graphics performance: how is that low end?), and I don't think they should go there, but there is quite a gulf between what I'm thinking of (see above, and compare to the rumors about Erista) and anything "low end". It's more about putting the Shield tablet or the Nexus 9 "into" a phone, more of an Apple approach to things, which has nothing low end about it.

I sort of agree with you about Nvidia carving out its own niche; they indeed shouldn't refrain from winning "design wins" (such awful wording...), but I think they should push their own, more complete line of products.
 
I don't think the T4i has - or ever had in its practical lifetime - a high-end GPU.
3D performance and features (OpenGL ES 2.0) are similar to an MT6592, which is clearly a mid-range SoC.
 
Is there any particular reason NV would go the route of a "transmeta-like" CPU, other than, I dunno, buying a partially finished design and re-purposing it into their own ARM core (much like Transmeta re-purposed their chip into a x86-compatible processor, IIRC)?
 
Is there any particular reason NV would go the route of a "transmeta-like" CPU, other than, I dunno, buying a partially finished design and re-purposing it into their own ARM core (much like Transmeta re-purposed their chip into a x86-compatible processor, IIRC)?

According to some rumors, NVIDIA bought the team because they wanted an x86-compatible CPU. However, since the x86 market is now completely dominated by Intel, that's probably not a good idea anymore. Furthermore, the main point of a Transmeta-like CPU has always been energy efficiency, so it's a good fit for mobile applications. The basic idea is that OoOE is expensive energy-wise, and a Transmeta-like CPU should be able to replicate most of what an OoOE core can do with a simpler in-order core, since the analysis is done (and saved) in software.

In theory, a Transmeta-like CPU core should be able to translate anything (for example, Android Java bytecode is translated into ARM code, and with Project Denver it could be translated further into the chip's "native" code, skipping the middleman; in theory, you could do the same with JavaScript code). However, unless you have a lot of control over the OS, it's very difficult to permanently save and manage the optimized code, which makes the "slow ramp-up time" a serious problem.
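To make the ramp-up problem concrete, here's a toy sketch of how a dynamic translator caches hot code regions. All names and the threshold are hypothetical illustrations, not Denver's actual mechanism: code runs on a slow baseline path until it's seen often enough to be worth translating, and only then does the fast cached path kick in (and without OS support, that cache is lost across reboots).

```python
# Toy model of a Transmeta/Denver-style translation cache (hypothetical
# names and threshold, for illustration only). Regions execute on a slow
# baseline path until they become "hot", then a translated version is
# cached; only cached regions get the fast path.

HOT_THRESHOLD = 3  # hypothetical: translate after this many executions

class TranslationCache:
    def __init__(self):
        self.counts = {}   # region id -> times executed so far
        self.native = {}   # region id -> stand-in for translated code

    def execute(self, region):
        if region in self.native:
            # Fast path: translation already cached.
            return ("native", self.native[region])
        self.counts[region] = self.counts.get(region, 0) + 1
        if self.counts[region] >= HOT_THRESHOLD:
            # Region became hot: "translate" it and cache the result.
            self.native[region] = f"optimized({region})"
            return ("translated", self.native[region])
        # Slow path: baseline execution (e.g. the HW ARM decoder).
        return ("baseline", region)

cache = TranslationCache()
modes = [cache.execute("loop_body")[0] for _ in range(5)]
print(modes)  # early runs take the slow path, later ones hit the cache
```

The "slow ramp-up" complaint falls out directly: every cold start pays the baseline-path and translation cost again unless the cached translations can be persisted somewhere.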
 
Since the days of Cg, GLSL, HLSL, GPUs have been in a position similar to having to run bytecode.
That's all I want to say :).
 
In theory, a Transmeta-like CPU core should be able to translate anything (for example, Android Java bytecode is translated into ARM code, and with Project Denver it could be translated further into the chip's "native" code, skipping the middleman; in theory, you could do the same with JavaScript code). However, unless you have a lot of control over the OS, it's very difficult to permanently save and manage the optimized code, which makes the "slow ramp-up time" a serious problem.
Thanks for your reply! Yes, this is what I thought also, and isn't it also part of the reason Transmeta failed? The promise of power savings (and performance benefits) that just couldn't be realized?

Without adding some sort of extra software layer to Android that loads their transcoded executables instead of the standard ARM executables, how will NV make Denver a truly successful product? Having to re-transcode software repeatedly will chew through power and performance unnecessarily...
 
Transmeta didn't have a native x86 HW decoder. Denver has a HW ARM decoder. That puts a floor on the worst case performance.
If the recoding costs less than the OOE HW in terms of power, it could still be better at perf/W.
Whether or not that's the case: who knows...
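A back-of-the-envelope way to frame that perf/W question: the software recoding is a one-time cost that amortizes over repeated execution, while OoOE hardware pays its energy cost on every instruction. All the numbers below are made up for illustration; the point is just the shape of the tradeoff.

```python
# Toy amortization model (all numbers hypothetical): energy per instruction
# for an OOO core vs a design that pays a one-time software translation
# cost and then runs a cheaper in-order pipeline.

OOO_ENERGY_PER_INSN = 1.0         # hypothetical baseline unit
INORDER_ENERGY_PER_INSN = 0.6     # cheaper pipeline once code is translated
TRANSLATION_COST_PER_INSN = 50.0  # one-time software recoding cost

def translated_energy(executions):
    """Average energy per instruction after `executions` runs of the same code."""
    return (TRANSLATION_COST_PER_INSN
            + INORDER_ENERGY_PER_INSN * executions) / executions

print(translated_energy(1))     # ~50.6: far worse than OOO on code run once
print(translated_energy(1000))  # ~0.65: better than OOO's 1.0 once amortized
```

Under this framing, code that runs rarely (or whose translations can't be kept) never amortizes the recoding cost, which is exactly the "who knows" part.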
 