Nvidia Tegra

Discussion in 'Mobile Devices and SoCs' started by Frontino, Apr 15, 2008.

  1. Ailuros

    Ailuros Epsilon plus three Legend Subscriber

    Of course web browsing benefits from it, and not only in terms of performance but also in terms of perf/mW, which is far more important for a mobile device. The situation could become far more interesting, though, if vendors start to fine-tune applications not only for multiple CPU cores but also for multiple GPU cores.
     
  2. wco81

    wco81 Legend

    But will they, for general apps? Will they even optimize for games? I don't think games are saying, "requires x GPU or SOC." They might say "requires x smart phone or newer."

    If dual cores can provide 30-60% faster performance on browsing, that would certainly be something to tout. But would there be a tradeoff in battery life? Some say that if a web page loads faster because of a faster SOC, the screen backlight can be shut off sooner, resulting in better battery life in the aggregate. Or will people do more browsing because of the greater speed? It may depend on how usage patterns change in response to generally higher performance.
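
    That "race to idle" tradeoff can be put in rough numbers: a faster SoC draws more power while active but finishes sooner, so total energy per page load can still drop. A minimal sketch, with all power and time figures invented purely for illustration:

```python
# Race-to-idle sketch: energy over a fixed usage window is
#   active power * load time + idle power * remaining time.
# All numbers below are invented for illustration, not measurements.

def energy_joules(active_w, load_s, idle_w, window_s):
    """Total energy over a fixed window: load the page, then sit idle."""
    return active_w * load_s + idle_w * (window_s - load_s)

window = 10.0  # seconds the user spends on the page overall

# A slower SoC: lower active power, but busy twice as long.
slow = energy_joules(active_w=1.0, load_s=6.0, idle_w=0.1, window_s=window)
# A faster SoC: higher active power, but it reaches idle sooner.
fast = energy_joules(active_w=1.5, load_s=3.0, idle_w=0.1, window_s=window)

print(f"slow SoC: {slow:.2f} J, fast SoC: {fast:.2f} J")
# slow: 1.0*6 + 0.1*4 = 6.40 J; fast: 1.5*3 + 0.1*7 = 5.20 J
```

    Whether the faster chip wins in practice depends on how much of the platform (screen, radio) keeps drawing power regardless, which is exactly the usage-pattern question.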

    Will phones and tablets have Tegra 2 (or other SOC) branding up front? Will there be anything like "Intel Inside" on retail displays for high-end smart phones?

    I understand that Nvidia demo'd graphics and games at CES for the Tegra but it remains to be seen how big a market high-performance smart phone games will be, when smart phone users seem to be content with games like Angry Birds for 99 cents (or free for the Android version). Games which would do justice to Tegra 2 would presumably have much higher development costs and result in much higher pricing.

    The other part of high-end SOCs is video. So phones will play back 1080p and at least capture 720p. But as some expected, mobile networks may not support large data content. Verizon's first LTE products have caps which discourage downloading or uploading HD video.
     
  3. Ailuros

    Ailuros Epsilon plus three Legend Subscriber

    Feel free to surprise me with a solution that will scale performance on mobile devices as time passes. And no, raising frequency instead of going multi-core isn't the best solution: higher parallelism at lower frequencies generally wins against lower parallelism at higher frequencies. Games or other applications might not benefit immediately, but for developers to write more heavily threaded code, the hardware has to exist first. Besides, T2 is by far not the only dual-core CPU SoC that has appeared or will appear within the year. In fact I'm confident we'll see SoCs with both multiple CPU and multiple GPU cores, which strengthens the case for higher parallelism.

    I'll gladly listen to some educated documentation that proves otherwise.

    Depends on the workload and the actual frequency any dual core operates at. If the latter isn't maxed out, it will run at a frequency proportional to the workload, and it might sound funny to you, but two idiots need less effort to accomplish task X in a given time Y than a single idiot under the same conditions.

    No, we should in fact work against evolution and continue to insist on single-core CPUs, clocking them at 3GHz or higher as time goes by, because it's so very practical to carry around a huge battery backpack. Is it dark on the other side of the moon?

    Make a list of future high end smart-phones and/or tablets and let me know why the majority has opted for dual core. They obviously don't have a single reason for doing so.

    Uhmm, the majority of T2 material that has been shown doesn't hold a candle to what has been or is under development for iOS. On the other hand, you've always been someone who doubted that 3D would ever amount to anything on mobile devices. It must be quite frustrating that we haven't gone back to software rendering, hasn't it?

    Eventually issues like that will iron themselves out; Android doesn't let you download large packs (for a large game, for example) either, unless you set up a server. So what?
     
  4. wco81

    wco81 Legend

    I'm not disputing that devices will move on to multicore and programmable GPUs.

    After all, they have to tout something to get people to buy new devices.

    But PC sales no longer seem to be driven by performance. Nobody is buying the latest multicore CPUs to make their computer browsers render faster. Gamers are still building PCs with huge power supplies. Otherwise, people don't seem to shop for the fastest CPUs any longer.

    While mobile software will require more resources over time, is there some point where the demand for faster SOCs plateaus as it seems to have in the PC market?
     
  5. metafor

    metafor Regular

    Sure but there is an entirely different tangent in mobile SoC's that PC's didn't have: power. In fact, a PC processor that consumes half the power for the same performance is not really a big selling point at all.

    A mobile SoC that maintains the same level of performance for half the power is worth a really really big premium.

    After that plateau has been reached, it'll be about consolidation and integration. How much functionality can you squeeze into one chip?
     
  6. Rys

    Rys Graphics @ AMD Moderator Veteran Alpha

    I'd disagree quite strongly here; perf/watt in desktop x86 is still a really big deal.
     
  7. Blazkowicz

    Blazkowicz Legend

    I agree; this is why modern desktops are quiet or silent. The industry adapted to the Prescott Pentium 4, so better heatsinks, better airflow, thermal regulation and better PSUs became the industry standards.
    Now get a low-end OEM PC, or build the lowest end yourself: with CPU power use having been halved (unless it's a quad core at an actual 400% CPU load), you automatically get a kick-ass PC that is quiet and easy on the electricity bill.

    Halve that again and a similarly high-quality mini-ITX desktop will become the industry standard at the lowest end :)
    Low-end Sandy Bridge has quite a lot of potential there, low-end Ivy Bridge will be even better, and AMD, VIA and Nvidia will have to be competitive, I believe (or at least trail Intel decently while being cheaper).
     
  8. Simon F

    Simon F Tea maker Moderator Veteran

    Probably even more so in a server room.
     
  9. metafor

    metafor Regular

    Well, yes and no. When we're talking about a 120W chip in a desktop, perf/watt matters. When we're talking about a 90W chip, it matters somewhat. When talking about a 60W chip in a desktop, it's an afterthought.

    The same could be said about mobile SoC's but the threshold there depends entirely on how much power the other components use.

    If the CPU uses 500mW compared to the display's 2W, shaving that in half is desirable but not a game-changer.

    If the display technology then improves to 1W, then halving the CPU power would be a big selling point.

    In the end, the difference the end-user experiences is what matters. And most people care less about whether their towers suck up 30 more watts of power or not until you reach the upper limit.
     
  10. tangey

    tangey Veteran

    Towards the end of the video, the RIM guy states Q1 for the first shipment of the PlayBook:

    http://www.youtube.com/watch?v=KQWIdbvzTUA&feature=channel
     
  11. silent_guy

    silent_guy Veteran Subscriber

    You forgot the case where the SOC is running but the screen is off for these exotic use cases, such as making a phone call.
     
  12. Exophase

    Exophase Veteran

    When a call is being made the CPU should be running at far less than peak too (if at all).
     
  13. metafor

    metafor Regular

    That isn't really a situation that taxes the CPU. I can't really think of a compute-heavy use-case that doesn't have at least the display running. There are also other components that tax the battery far more than your typical mobile CPU would; the RF radio for instance.
     
  14. Ailuros

    Ailuros Epsilon plus three Legend Subscriber

    I seriously doubt that consumers buy any device purely on sterile on-paper specifications from which they can't see any benefit. Embedded/mobile devices scale constantly in display resolution as well as in multi-tasking.

    I read plenty of claims like that when multi-core CPUs appeared in the PC desktop space. Try running today's applications and OSes on a single-core CPU and be ready for quite a few surprises compared to a multi-core setup.

    Let's go back to single-core CPUs then, since multi-core doesn't make much sense, shall we? Core counts will continue to scale for CPUs, as frequency is not the best way to further increase performance as time goes by.

    I don't know which gamers you're referring to, but typically a conscientious gamer pairs a specific CPU with a specific GPU. If he's building a mainstream system: mainstream + mainstream; if it's a high-end system: high end + high end. Someone who would, for example, pair a $500 GPU with a $50 CPU is not only a rare exception but also an idiot.

    I'm afraid you want to understand neither the concept of multi-tasking nor the fact that high parallelism at low frequency beats low parallelism at high frequency in terms of power consumption. If I have two cores running a task at 600MHz and one core running it at 1.2GHz, guess which setup consumes more power.
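
    The power claim follows from the usual first-order dynamic-power relation P ≈ C·V²·f: hitting a higher frequency normally also requires a higher supply voltage, so the fast single core pays a quadratic penalty. A quick sketch, with the voltages being made-up illustrative values rather than real silicon data:

```python
# First-order dynamic CMOS power: P = C * V^2 * f.
# The capacitance and voltages below are made up for illustration only.

def dynamic_power(c_eff, volts, freq_hz):
    """Approximate dynamic power of one core."""
    return c_eff * volts ** 2 * freq_hz

C_EFF = 1e-9  # effective switched capacitance, assumed equal for all cores

# Two cores at 600MHz, each able to run at a lower voltage (say 0.9V).
dual = 2 * dynamic_power(C_EFF, 0.9, 600e6)

# One core at 1.2GHz, needing a higher voltage (say 1.2V) to reach the clock.
single = dynamic_power(C_EFF, 1.2, 1.2e9)

print(f"two cores @ 600MHz: {dual:.3f} W")   # 0.972 W
print(f"one core @ 1.2GHz:  {single:.3f} W") # 1.728 W
```

    Same total clock throughput in the ideal case, yet the single fast core burns noticeably more, and the gap widens the further up the voltage curve it has to climb.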
     
  15. Entropy

    Entropy Veteran

    I can't help myself from butting in.

    The prime reason a modern computer does better with two or more cores than one is due to OS issues rather than processing speed. And my present eight-core desktop system stalls just as irritatingly as my old single core systems did, for reasons also unrelated to number of cores. Multi-core is not a panacea, except, perhaps, to keep ASPs up for another few generations as it is the new buzzword to sell to the yokels rather than megahertz.

    While there is truth in this, there is also falseness. The overwhelming majority of problems have only fairly modest room for performance improvement from parallel execution. This is due both to fundamental algorithmic issues and to limitations in the shared resources.
    For the foreseeable future, on the desktop, unless you spend your computing time doing one of the few tasks that does benefit greatly from multiple cores and is independent of the memory channel and other shared resources, 2-4 cores will be a good number to have. I don't believe Intel/AMD will be able to convince the general public that they need more even with the tech press cheering. That won't stop them from trying to raise the core count in order to help keep ASPs up. (For general embedded use the optimum number of cores will be lower than for the desktop by some small ratio.)

    The single core, obviously. But then again, it will also always be faster, so it's not much of a comparison. In the embedded space we are talking about here, where we are fairly far down on the frequency vs. voltage curve, the real comparison would be something like "would you rather have two cores at 600MHz, or one core at 900MHz", or something fairly close, which would yield roughly the same power draw. I'd always take the 900MHz single core (particularly if the rest of the system wasn't dimensioned to really let the two cores be fully exploited in the cases where full-bore parallel execution was possible; and there are always going to be compromises in these SoCs). Even if it were 600 vs. 800 I'd still go with the single core. 600MHz dual vs. 700MHz single, I'd probably switch to the dual. Maybe.

    I started programming on parallel processors some 25 years ago due to my involvement in computational science. Getting high overall scaling just isn't possible most of the time. Some kernels sure, and for certain specific problems, but for general purpose devices the justification for going highly parallel is questionable.

    And this from a guy who thinks that the original IBM Cell processing whitepaper (before there was a Broadband Engine) was spot on.

    PS. Anyone have a link to that original article? It is lost to the search engines since "Cell" was also used as a moniker for the Broadband Engine, even on IBM's own site (220000 hits. Thanks a lot). It is interesting to note that the articles written on the processor architecture that I went through in hopes of a reference don't refer to the original Cell paper at all. It largely dealt with the long-term development of computing and the diminishing returns of pushing single-thread performance from an architectural standpoint, and outlined how programming models could evolve.
     
  16. metafor

    metafor Regular

    Multicore has less to do with performance scaling, for both the desktop and mobile markets, than with user experience. Or rather, multithreading does.

    Context switching is inefficient and expensive and if you have a GUI running alongside applications that stress computation, it's nice to have at least enough resources to run 2 threads/processes such that the GUI doesn't lock up unnecessarily.
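
    The point about keeping the GUI responsive can be sketched with plain threads: park the heavy computation on a worker so the event loop never blocks on it. A minimal sketch using only Python's standard library, with no real GUI toolkit assumed:

```python
import queue
import threading
import time

results = queue.Queue()

def compute_worker(n):
    """Heavy computation kept off the 'GUI' thread."""
    results.put(sum(i * i for i in range(n)))

worker = threading.Thread(target=compute_worker, args=(100_000,))
worker.start()

# The 'GUI' loop stays free to handle events while the worker crunches.
while worker.is_alive():
    time.sleep(0.001)  # stand-in for processing input events / redrawing

res = results.get()
print("result:", res)
```

    On a dual core the two threads genuinely run side by side; on a single core they merely interleave, which is where the context-switching cost comes in.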

    Going beyond dual-core to quad seems a bit excessive and I seriously doubt cell phones will have a use case that will demand it anytime in the future.

    What I'm surprised by is the lack of SMT in cell phone CPUs. But I see that it's on ARM's roadmap after Athena.
     
  17. wco81

    wco81 Legend

    Where do you get that I'm against multicore? I'm typing this on a laptop with a C2D and hope to upgrade to i7 eventually.

    I'm just trying to understand where multicore fits in the mobile space. I might not buy a dual-core phone this year but no doubt 2 years from now, I would.

    But it may not necessarily be because some unforeseen killer app requires parallelism so much as mobile OSes and software generally creeping up in HW requirements.

    The Core Duo came out around early 2006. Has there been a new category of software which has made a compelling case for multicore CPUs? Not really; it's that OSes and things like browsers became more demanding.

    In the last couple of days, there are rumors that PSP2 will have performance approaching that of the PS3. If that pans out, I'm sure Nvidia and others will benchmark their mobile products versus the PSP2 and eventually eclipse it.

    I'm skeptical that there is mass-market demand for console-like graphics in a portable/mobile device but we'll see.
     
  18. Ailuros

    Ailuros Epsilon plus three Legend Subscriber

    The problem being that my original post didn't concentrate ONLY on the CPU but on multi-core CPU and GPU aspects in combination. NV up to T2 obviously doesn't belong in that category for the time being, so we'll have to wait and see what T3 contains. In any case, with IMG, ARM and Vivante having multi-core GPUs on their roadmaps, it won't be much of a surprise if devices with those appear sooner than some would expect.

    I'd estimate that for something like that the PSP2 would need an SGX543 8MP. Given that each ALU is VLIW4, that's 16 SPs/core in recent GPU marketing terms, and 128 SPs/16 TMUs for the total config. Unless NV's future SoCs (T3 for example) allow GPU core scaling, the road to eclipsing that kind of performance sounds VERY long. So far NV (T1 & T2) has presented only one variant of each of their SoCs, where they can of course scale frequencies if needed for something like smart-phones. There's a difference between selling only GPU IP and building and selling an entire SoC.

    Since Renesas has already shown an SGX543 2MP, I'd expect to see devices based on something similar within the year for smart-phones and tablets. Irrespective of whether the PSP2 ends up containing a 4MP or an 8MP, the difference between either of those and a 2MP is still quite large (simply even larger in the latter case), whereby of course one shouldn't forget frequencies either.

    The PSP2, being a handheld console, belongs more to the console world than a smart-phone or tablet would. Here SONY could eventually sell hw at a loss at the beginning of its lifetime, as in the console space, with things improving over time and the games being the actual cash cow. Would a smart-phone or tablet last for over 5 years?

    Not a single doubt; however, for applications that don't rely on single-threading there is a marked difference. I have two older systems here, one running an AMD FX-53 and the other an AMD 5050e, under the same OS and with the same amount of system RAM. Even system cold-boot times show large differences.

    It was merely an example; you'd have to find a singled-out case where, hypothetically, a dual core accomplishes the same task in the same time at 600MHz as a single core does at 1.2GHz. Samsung's latest Hummingbird runs the A8 at 1.2GHz, while the two A9s on T2 run at 1GHz.

    Depends on what the target is. I'll call it a corner-case study from mid-2009: http://www.hotchips.org/archives/hc...1.23.270.Pulli-OpenCL-in-Handheld-Devices.pdf ...they mention the OMAP4430 on page 6. However, there are plenty of "buts" in that case study one should bear in mind.

    GUI is one of those keywords when it comes to modern high-end mobile devices, along with any kind of multi-tasking. Most of us have seen the BlackBerry PlayBook presentation. I'd guess that the SGX540 contained in the OMAP4430 could be clocked somewhere in the 300MHz ballpark (with the 4440 obviously having higher clocks). Would it be wiser for TI to continue scaling GPU frequencies in their next-generation OMAP5, or rather opt for an MP? Resolutions also continue to scale on mobile devices, and the majority of embedded GPUs have at most 2 TMUs. Higher resolutions call for higher fill-rates, for which the dilemma is again a single core at high frequencies or an MP at lower frequencies.

    SGX cores are already quite good at context switching; now imagine how things could look if you combine a multi-core CPU with a multi-core GPU for any wild scenario a user could come up with.

    ***edit: by the way http://www.anandtech.com/show/4103/qualcomm-demos-dualcore-snapdragon
     
  19. Lazy8s

    Lazy8s Veteran

    A mobile computer such as a smartphone accompanies a person through their daily interactions with others and the surrounding world, so the demand for it to get better at tasks like real-time facial and speech recognition, real-time language translation, image recognition, and biometrics will only continue to grow. Powerful GPGPU and CPU processing will continue to be very important.
     
  20. silent_guy

    silent_guy Veteran Subscriber

    Ok, let's go back to your initial argument: 2W display. 500mW CPU.

    Those 500mW reduce the use time of your gadget from 10 to 8 hours. Or from 8 to 6.4 hours.
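
    The arithmetic is easy to check. A minimal sketch, assuming illustrative battery capacities of 20Wh and 16Wh so that the ratios match the figures above:

```python
# Runtime = battery capacity / average power draw.
# The 2W display and 500mW CPU come from the discussion above;
# the battery capacities are assumed purely to make the ratios line up.

def runtime_hours(battery_wh, power_w):
    """Hours of use for a given battery capacity and average draw."""
    return battery_wh / power_w

DISPLAY_W = 2.0
CPU_W = 0.5

for battery_wh in (20.0, 16.0):
    base = runtime_hours(battery_wh, DISPLAY_W)
    loaded = runtime_hours(battery_wh, DISPLAY_W + CPU_W)
    print(f"{battery_wh:.0f}Wh battery: {base:.1f}h -> {loaded:.1f}h")
# 20Wh: 10.0h -> 8.0h;  16Wh: 8.0h -> 6.4h
```

    A fixed extra draw always costs the same fraction of runtime (here 20%), regardless of battery size, which is why those 500mW show up so clearly in review tables.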

    When the media reports on tablets, one of the things that are always prominently reported are the 10h on iPad vs 6h on Galaxy Tab vs the rumored (and denied) horrible use time on the Playbook.

    You don't reduce power in gigantic gobs at a time. Once the low-hanging fruit is gone, it's hard, tedious work and you have to fight for each mW. You can bet that integrators take notice when you're able to shave 50mW off a common use case.
     