Confirmed: NVIDIA entering the Intel IGP market, preparing large G8x lineup

Discussion in 'Beyond3D News' started by Arun, Nov 10, 2006.

  1. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    NVIDIA has just held its quarterly earnings conference call, discussing its financial results and answering questions from analysts. The quarter was strong overall, with record revenue, rapidly increasing market share, and slightly higher net profit than analysts expected. On a GAAP basis, however, profit came in lower than some expected, partially due to a confidential patent licensing charge of $17.5M related primarily to old products. But the real shocker is that Jen-Hsun Huang, President and CEO of NVIDIA, announced in the conference call that they're working on Intel IGP solutions, as customer demand is increasing and the merged AMD-ATI is leaving that market, creating a void to be filled. They're currently aiming at an introduction date as early as Spring 2007.

    Jen-Hsun cited high-definition video and DirectX10 as key reasons for the increased demand and necessity, but it is unclear whether he was implying that this IGP is G8x-based or not. NVIDIA is certainly boasting about its architecture's scalability and increased performance per watt, and some presentation slides are in fact hinting at CUDA, their new GPGPU framework with direct support for C-based programming, extending all the way to embedded markets. It could also be that NVIDIA is working on a G80 derivative for the GoForce product line, potentially with only some parts shared (like the shader core), or on a completely different architecture that boasts similar programmability.
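
    For readers wondering what "direct support for C-based programming" looks like in practice, here's a minimal, purely illustrative sketch of a CUDA kernel and its launch; the kernel name, sizes and launch configuration are our own placeholders, not anything NVIDIA has published:

        // Illustrative CUDA C sketch: scale an array on the GPU.
        // Hypothetical example; names and sizes are placeholders.
        #include <cuda_runtime.h>

        __global__ void scale(float *data, float factor, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
            if (i < n)
                data[i] *= factor;
        }

        int main(void)
        {
            const int n = 1 << 20;
            float *d_data;
            cudaMalloc((void **)&d_data, n * sizeof(float));   // allocate GPU memory
            cudaMemset(d_data, 0, n * sizeof(float));

            scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);  // launch with 256 threads per block
            cudaDeviceSynchronize();                           // wait for the GPU to finish

            cudaFree(d_data);
            return 0;
        }

    The point is simply that the same C-level programming model scales with the number of stream processors, which is what makes it plausible to talk about CUDA anywhere from high-end desktop parts down to IGPs or embedded derivatives.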

    No matter what, they're working on at least nine more G8x-based products, that is to say, ones with unique codenames or brand names. This is substantially more than the historical average, although if an IGP were included in there and the notebook parts had separate codenames, that'd be roughly four chips for the desktop lineup - the same number ATI's lineup currently sports. It remains to be seen, of course, whether these upcoming products will have the same ALU-TEX ratios as the GeForce 8800 GTX, for example. While the fundamental G80 architecture we've detailed in our in-depth piece on the subject (http://www.beyond3d.com/reviews/nvidia/g80-arch/) will certainly still apply, it is not out of the question that some things might be different, and we look forward to repeating part of our analysis process on these upcoming parts.
     
  2. Skrying

    Skrying S K R Y I N G
    Veteran

    Joined:
    Jul 8, 2005
    Messages:
    4,815
    Likes Received:
    61
    Interesting. With how shockingly awesome G80 is and the possibility of its brothers coming sooner than I expected, I think I'll personally be holding off on a graphics card and therefore my Xmas present to myself.
     
  3. INKster

    Veteran

    Joined:
    Apr 30, 2006
    Messages:
    2,110
    Likes Received:
    30
    Location:
    Io, lava pit number 12
    That part about extending GPGPU capabilities to the IGP market is very interesting.
    Can we expect these unused IGPs (when there's a discrete graphics card installed in the PCI-Express slot) to become low-cost physics co-processors?
     
  4. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    That's also a very real possibility down the road. There are hints of some companies considering using the IGP instead of the discrete GPU for things like Vista's desktop in order to reduce power consumption, too. So I think there are definitely some attempts to increase performance or power efficiency that way being researched right now.

    I was actually hinting that GoForces (the handheld GPU product line) are going to support CUDA and GPGPU acceleration in the future, but we'll have to see when that'll be.


    Uttar
     
  5. INKster

    Veteran

    Joined:
    Apr 30, 2006
    Messages:
    2,110
    Likes Received:
    30
    Location:
    Io, lava pit number 12
    Maybe this is just a wild guess, but anyway, who knows ?
    I'm just throwing a stick to the fire. :D


    Since the output of the Geforce 8800 is now done via an external NVIO chip, what would be the feasibility of a combined NVIO+Southbridge die on an ATX PCB, with the appropriate analog/digital ports (DVI, D-Sub, HDMI, DisplayPort, TV-out, etc) mounted directly on said motherboard, and have the graphics card ship without any video connectors at all, apart from PCI-Express bus, power and SLI/Crossfire ?

    Kind of like AMD's "Fusion" vision, but applied to a high-performance discrete GPU instead (which may or may not be used for graphics) ?

    Certainly "Fusion", along with the likely integration of memory controllers (AMD K8/K8L, or hopefully Intel's upcoming "Nehalem" microarchitecture) and PCI-Express controllers into the CPU, would allow that kind of flexibility for future Southbridges.
     
  6. Sobek

    Sobek Locally Operating
    Veteran

    Joined:
    Dec 17, 2004
    Messages:
    1,774
    Likes Received:
    18
    Location:
    QLD, Australia
    It is? :???:

    I suppose all of this was to be expected. Pretty much the first thing on a lot of people's minds since AMADiT (AMD + ATi) merged was "When are Intel and Nvidia going to shack up and do some integrated solutions?"

    I guess we know. :sly:
     
  7. INKster

    Veteran

    Joined:
    Apr 30, 2006
    Messages:
    2,110
    Likes Received:
    30
    Location:
    Io, lava pit number 12
    Do you believe Nvidia could have cooked up an IGP design for Intel (and with DX10 capabilities, no less) and have it mass-produced in 8~10 months? AMD bought ATI not that long ago.
    I think they've decided to make it simply because Intel's Core 2 Duo/Quad is perceived as the best CPU family right now (and rightfully so), and where the best CPU is, that's where the enthusiast with the big bucks is.
    No one is expecting this IGP's performance to be earth-shattering; DX10 is mostly a check-list feature, unlike DX9 (which is required by Vista's Aero Glass).

    I seem to recall a canceled Nvidia IGP for Intel some time ago, apparently because having GF61xx GPUs coupled with Pentium D/4s was just not worth it (Intel already had DX9 in the GMA950, and commanded the lion's share of the IGP market).
    This time, with the GMA X3000 being less than stellar, NV could have sensed an opportunity for good enough profit margins and gone for it.
     
    #7 INKster, Nov 10, 2006
    Last edited by a moderator: Nov 10, 2006
  8. DemoCoder

    Veteran

    Joined:
    Feb 9, 2002
    Messages:
    4,733
    Likes Received:
    81
    Location:
    California
    NVidia's big pitch at the launch was application processors, which is what PortalPlayer does. I posted a big message on this in the "Fusion" thread a while ago. In embedded markets, battery life and performance per watt are everything, and the way mobile platforms achieve extreme power reductions is custom fixed-function silicon for many application functions -- viterbi/butterfly, H.264, Java, communications, image processing, 3D, et al. You gain a lot more power efficiency by marrying a slow DSP to a bunch of function blocks than by making a fast DSP that runs those blocks in software.

    Although CUDA would seem to offer high performance for many tasks done by application blocks on mobile platforms today, I think the overhead of the architecture will prevent it from becoming anywhere near power-efficient enough. To put it in perspective, we're talking about drawing 1/10 of a mW or less when idle, and ~200 mW on median workloads. That means NVidia would have to reduce a G8x core's power consumption by a factor of 100x to compete with the likes of a TI ARM married to a TI DSP + application blocks.
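
    To make that arithmetic concrete, here's a trivial back-of-the-envelope sketch in plain C; only the ~200 mW handset budget and the ~100x factor come from the reasoning above, while the G8x figure is a rough assumption purely for illustration:

        /* Back-of-the-envelope power-budget sketch. The 20 W figure for a
           lightly loaded G8x core is an assumption for illustration only. */
        #include <stdio.h>

        int main(void)
        {
            double handset_median_mw = 200.0;    /* ~200 mW median handset workload (from above)  */
            double g8x_assumed_mw    = 20000.0;  /* assume ~20 W for a G8x core under light load  */

            printf("Required power reduction: ~%.0fx\n",
                   g8x_assumed_mw / handset_median_mw);   /* prints ~100x */
            return 0;
        }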

    If NVidia really intends to get into this market, they've got their work cut out for them. Although they could perhaps take a severely cut-down stream processor chip and turn out something that does the job of the ARM CPU, DSP, and gfx chips on a mobile phone, I think it's too ambitious. Given their acquisition of PortalPlayer, their most likely steps I think will be:

    1) near term: rebrand PortalPlayer products
    2) longer term: leave the ARM CPU to others, but extend PortalPlayer application blocks to include 3D graphics stuff, and sell an integrated DSP solution incorporating graphics, MCU, and PP application blocks. (Compete with TI, STM, Renesas, etc. DSPs)
    3) perhaps a few years later, they might succeed in taking what they produced in step 2, and extending the gfx architecture to incorporate the ability to execute traditional CPU code (perhaps even run ARM code via code-morphing)

    It will be an uphill battle. Handset manufacturers put battery life first, period. I work in the software area of the mobile market, with business units at Nokia, Ericsson, Motorola, Samsung, et al, and pretty much every software consideration revolves around "what's the impact on battery life going to be?"

    TI is making a killing in this market now: the arrival of ubiquitous camera phones, which increasingly record/play video and play mp3/AAC, has driven manufacturers to equip even their low-end phones with more processing power. And TI right now sells a combo CPU/DSP solution with 2x the performance and 1/2 the power consumption of competitors. They have 67% of the application processor market. And OMAP3 will supposedly have IMGTEC graphics. NVidia would have to deliver mobile gfx that is more power efficient than IMGTEC's, plus better perf/watt than TI's next-gen ARM and DSP. And the mobile market doesn't really care about DX10, or even DX9 for that matter. A 3D chip that can run OGL 1.0/1.1-level gfx with good power/performance would be a big deal. Remember, phones don't even come close to PSP graphics today, and the PSP's GPU isn't even DirectX8 pixel shader capable. So NVidia will have a battle trying to sell uber 3D chips to manufacturers who don't have a big demand for mega-game performance and cutting-edge API support. Most mobile phones target "casual gamers", most of the gamers who buy mobile games are women, and most of the games are of the puzzle/strategy/card type.

    Thus, any near term NVidia plan should focus on feature parity with the top players, and power efficiency, and not on world beating features and performance. I think handset manufacturers can wait another 3-4 years for that. (except maybe Sony, who will need something for the PSP2)
     
  9. elroy

    Regular

    Joined:
    Jan 29, 2003
    Messages:
    269
    Likes Received:
    1
    Interesting, thanks Uttar. From a layman's POV the G80 seems very modular, so I assume it would be easy for them to release a G80-based IGP by decreasing the number of TCPs and the amount of L2 cache on die. And one would assume that the memory bus width would be significantly decreased as well.

    I also find the idea of using an IGP for Vista and a dedicated card for other applications interesting. I wonder if it would be possible for the dedicated card to be completely switched off until it is required? It would certainly decrease power consumption.
     
  10. asicnewbie

    Newcomer

    Joined:
    Jun 29, 2002
    Messages:
    116
    Likes Received:
    3
    Wow, thanks for the insight into the mobile industry! It seems like power efficiency still is, and always will be, a primary factor in any application-processor's (APU) bid to dominate the market. It's been awhile since I looked at the handset APU market, but I noticed a feature common to all; a big chunk (>512KB) of embedded-SRAM on the die. Does that save power (versus going out to an external SDRAM?)

    Some of the APUs (like Freescale's MX31) don't have any dedicated MPEG-4 decoder hardware? Yet, the MX31's product literature quoted it as capable of VGA (640x480@30fps) MPEG-4 simple-profile, using just the ARM11-CPU (@550MHz.) Though I suppose that makes it one of the "1/2 the speed, and 1/2 the battery life" kind of competitors.

    I'm a bit confused. I thought the minimum definition of 'application processor' (APU) is an ARM-based unit. No ARM-core means it's just a coprocessor or support-chip in the system. Furthermore, don't the cellphone and (non-phone) mobile markets have different cost structures?

    In the dedicated cellphone market, modem-integration into the baseband chip is the key. And that means the chip has some analog/mixed-signal functions (ADCs and DACs, basic low-pass filtering for RF-modulation.) In Nvidia's case, the lack of an analog front-end (modem) renders the DSP function non-marketable, because competitors (Broadcom, TI, etc.) have it. So even if NVidia licensed or developed a GSM/CDMA modem DSP, it wouldn't be marketable unless they tackle the analog-front end.

    There's still room for discrete GPUs in enhanced cellphones (gaming phones, smart/PDA phones, etc.). But in that case, NVidia will be sharing a design-slot with the host application-processor (and modem) ASIC. NVidia's strength as a graphics/video processor fits well in this platform role.

    Like Transmeta? If there were something to be gained through binary emulation of the ARM11 ISA, I honestly would have expected a DSP/CPU-oriented company (like TI) to have already done it. Instead, everyone still licenses the genuine CPU IP core from ARM. Portalplayer has licensed the ARM7 cores, which is adequate (and desirable, from a power-consumption standpoint) for the cellphone baseband. But to compete against a TI OMAP2/3 or Freescale MX31, Nvidia would need at least an ARM11 or Cortex license, at RTL level, which costs well into the $10M USD range (for first use).

    Eck, I'm confusing myself again! What is an 'application' processor, and what do you mean by 'DSP'? To me, DSP means the cellphone modem: GSM, CDMA, EDGE, etc. (In other words, the baseband processor with analog-front end.) And application-processor means a general purpose SoC with audio/video/LCD I/O, as well as peripheral/storage I/O (SDIO, flash, SDRAM, USB, etc.)

    The biggest question on my mind is ... *drumroll*, did they or did they not win the contract for the video iPod? If Nvidia did, then the iPod alone is enough to justify a custom-spec'd "one-off" ASIC, just for Apple alone. NVidia could replace the two separate chips (Portalplayer and Broadcom) in the current 5G iPod, with a single ASIC Goforce+Portalplayer. Heh, the "Portalforce" or "Goplayer" :)
     
  11. Bludd

    Bludd Experiencing A Significant Gravitas Shortfall
    Veteran

    Joined:
    Oct 26, 2003
    Messages:
    3,165
    Likes Received:
    737
    Location:
    Funny, It Worked Last Time...
    Lineupneup? I think the title of the news item is slightly hurriedly typed. :)
     
  12. iwod

    Newcomer

    Joined:
    Jun 3, 2004
    Messages:
    179
    Likes Received:
    1
    Would be very, very nice with a G8x IGP. And if CUDA really took off, I am wondering about the possibility of a third party using CUDA for an H.264 decoder or VC-1 decoder. Physics calculation with the IGP and a PCIe Gfx card? SLI that works with the IGP as well? Accelerated maths simulation with Matlab?

    Personally I am waiting for the 8900 series, since normally Nvidia tidies things up with their 2nd gen: lower power consumption and faster performance, as well as a single-slot cooler.
     
  13. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    Yup, that's what I think is being researched, although I can't really confirm it either. It'd be interesting if you could do that with IGPs from other vendors too. An X3000 is probably enough for Aero Glass in the short term, so why not shut down the G80 when it's not really needed at all? I don't think it'll happen cross-vendor though, but I can hope! :)
    (EDIT: I wonder if Fusion would have an advantage there, hmm. That'd be pretty cool for AMD!)
    ---
    DemoCoder: Nice speculation there, although I think their strategy will be a bit different. From my point of view, within 12 months they'll announce a GoForce that basically has the same level of programmability as G8x, although perhaps without some features exposed, since I doubt the handheld market is going to use geometry shaders that soon, and it's not even in OpenGL ES 2.0, afaik!

    And then they'll use the GoForce 5500 for 2.5G, and that one for 3G, for the market where you've got a non-integrated applications processor. And eventually they'll integrate the applications processor, which would be mostly ARM7- or ARM9+-based, on the same chip, and keep the old one for markets that don't want that integration for misc. reasons, like already having their own SoC for some things. Eventually, instead of having to pay ARM license fees, it makes sense they'd use something the ex-Stexar engineers will have designed with the help of their other design teams.

    There's a patent on a handheld super-programmable GPU that NVIDIA filed recently. Personally, I don't like the design and I don't think they're going to go with it, but they very well might. If they don't, clearly, the G8x design is there at their disposal, and I'm sure a massively underclocked and undervolted single shader core with half the units on 65nm would work for that market. I doubt they'd reuse the other units there, though.

    A good hint at some G8x tech being scaled down to handheld eventually is that Lindholm was the lead guy behind the shader core, afaik. And he was recently named on some handheld patents as the lead contributor. So we'll see what happens there...
    ---
    For those interested, it looks like a small discussion on the subject of the 9 chips and future parts is also going on in this thread: http://www.beyond3d.com/forum/showthread.php?p=870532


    Uttar
     
    micron likes this.
  14. overclocked_enthusiasm

    Regular

    Joined:
    Apr 26, 2004
    Messages:
    424
    Likes Received:
    3
    Location:
    United States
    I wonder how bad ATI's quarter would have been if they had been in a situation to report it. NVDA had huge sequential and Y/Y growth primarily in the desktop and notebook GPU segment. Coupled with the Mercury data from Q3, it looks like NVDA put a bigtime whooping on ATI and gained enormous share. You really have to hand it to Jen on his running of NVDA. The gross margins at NVDA are now about 45% and ATI's were recently in the high 20's to around 30. NVDA has a real shot of doing $1 billion per quarter as they continue to grow.

    Sounds like when R600 comes out it will face an entire G80 line, from IGP to 8800 GTX. Much like with the 7800, ATI will have 1 or 2 R600 SKUs against a stack of 4-9 G80 SKUs from NVDA. In sum, Jen beat ATI again and is now in total control, from Intel chipsets to the high end and everything in between.
     
  15. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,120
    Likes Received:
    2,867
    Location:
    Well within 3d
    Would that mean that Aero Glass will dynamically suck away even more RAM? The frame buffer and texture data would have to be copied back and forth between the graphics board and system memory, unless there's some established mechanism for running off graphics memory (which might keep G80 at least partially on).

    Are you saying you think Nvidia will try to build a non-ARM embedded processor?
    That's an expensive proposition.
    AMD used to have a popular (and profitable from a silicon perspective) embedded architecture, but discontinued it because of the expense of maintaining the compilers and software ecosystem. Challenging ARM on its own turf would be a lot more expensive than just paying some license fees, and it would be unlikely to win, since the 3d niche won't draw a vast pool of embedded programmers.

    If the idle power of G80 is accurate, then even if it were stripped down to 1/100 of its shader units and its clock quartered, it would draw too much to match a low-power ARM design. Perhaps in some environments where power draw is less constrained, it would have a chance.
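
    As a rough sanity check on that claim, here's a crude linear-scaling sketch; the G80 idle figure and the assumption that power scales linearly with unit count and clock are purely illustrative guesses:

        /* Crude scaling sketch: assume power scales linearly with unit count
           and clock. The ~70 W G80 idle figure is an assumption, not a
           measurement, and linear scaling is optimistic. */
        #include <stdio.h>

        int main(void)
        {
            double g80_idle_w = 70.0;                                      /* assumed G80 board idle power */
            double scaled_w   = g80_idle_w * (1.0 / 100.0) * (1.0 / 4.0);  /* 1/100 of the units, 1/4 clock */
            double arm_idle_w = 0.0001;                                    /* ~0.1 mW handset idle budget (from DemoCoder's post) */

            printf("scaled-down part: ~%.0f mW, handset idle budget: ~%.1f mW\n",
                   scaled_w * 1e3, arm_idle_w * 1e3);  /* ~175 mW vs ~0.1 mW */
            return 0;
        }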

    It would have to be a redesigned pipeline and methodology for something in ARM space.

    Maybe they'd junk two of the three thread schedulers and just have one that handles all three thread types, merge that with the local controller of a single cluster, cut down to less than ten (maybe just four) single-issue pixel/address/texture units, with maybe one or no special function/interpolators on a low-power process at a few hundred megahertz tops. Then they could implement some kind of aggressive clock-gating and sleep state techniques.

    Then it would be a nice embedded streaming processor that happens to be relatively okay with graphics.
     
  16. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    I would expect everything related to non-intensive graphics operations to remain in system memory. Clearly, disabling the IGP and enabling the discrete GPU (or vice-versa) will still require some coordination between the two, so ATI couldn't do it too easily on a NVIDIA IGP, for example, but could do so on their own platforms.

    As I said, I'd assume the decision of whether to use system memory or video memory to be the same one as to whether to use the IGP or the GPU. And if you got a single app that it decides would benefit from the GPU, you're gonna have it activated and the IGP disabled, with the non-performance-critical stuff running out of system memory. I'm partially speculating on the way that'd work here though, obviously.

    Sorry, I was relatively unclear on that. I think that eventually (late 2008/2009 timeframe), they're going to be using their own design based on the ARM ISA, and not an ARM-licensed core. I am in no way implying that they'd be trying to make a processor good at both CPU-like processing and graphics. A good example of what I'm thinking of here is XScale. It's the ARM ISA, but it's not an ARM CPU, and instead they're paying a smaller license fee for being able to manufacture products with that instruction set. I could be horribly wrong here, but I think that if NVIDIA wants to take that market seriously (and they seem to, to say the least, given their huge investments in it despite the fact that it's still not a profitable business for them!) in the future, it's bound to happen eventually. We'll see! :)


    Uttar
     
  17. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,120
    Likes Received:
    2,867
    Location:
    Well within 3d
    I agree that what you're saying makes sense. I stated before the news came out about PortalPlayer that Nvidia would do well to counter AMD by getting an ARM license or buying a smaller manufacturer that already worked with one.

    It just sounded earlier like you thought that Nvidia could out-muscle ARM, which is unlikely with the billions of ARM derivatives shipped yearly.
     
  18. DemoCoder

    Veteran

    Joined:
    Feb 9, 2002
    Messages:
    4,733
    Likes Received:
    81
    Location:
    California
    Disclaimer: I'm in software design, not hardware. I believe the answer is a qualified yes, with caveats. I have heard that SRAM, when being pummeled by lots of reads/writes, is not necessarily more power efficient than DRAM (although it can be; some SRAM designs claim high speed/throughput and low power). SRAM is also huge compared to DRAM to put on-die. However, the "average case" of mobile phones is "standby", that is, the power consumption when pretty much nothing is happening, and I believe it is this case where SRAM is supposed to win. That is, maintaining the present state of the memory (not reading/writing) takes less power. In software design, we design for the standby case. For example, if one is working on an email client, one does not want to constantly be polling for mail from the mail server. Instead, one wants the phone to be sleeping and be "woken up" by a signaling method (to go poll for changes) that doesn't require much additional power. SMS is one such example. GPRS radio is not.

    Perhaps. I can only paraphrase the presentations shown at OMA, 3GPP, and wireless congress groups regarding general-purpose vs fixed-function effects on power consumption. In general, for many tasks like viterbi, symmetric cipher/public key, DCT/FFT, and video, the fixed-function blocks are much more efficient than a general-purpose CPU/DSP executing a software version of the algorithm. That's why TI is selling OMAP cores containing both a fast ARM chip and an APU. The ARM core is fast enough to handle many of the tasks offloaded to the APUs (with the exception of video, especially H.263 and H.264), but it consumes many times more power.

    By "leave it to others", I mean license someone else's ARM design. It will be hard to sell a separate standalone DSP in the future; although some people are doing it, pretty much everyone is moving towards single-chip integration. It's like selling a 3D chip where the T&L geometry processing was separate. What I mean by "leave ARM to others" is that I think NV should just license someone else's ARM design in the near term rather than trying to design their own power-efficient one. In the future, they can run ARM code on a non-ARM architecture via translation. Why? Developer adoption. There are tons of ARM ISA tools, and many things are still written in assembly language in the embedded market. Despite the portability of C, developers have been slowly gravitating to ARM.

    I think the most likely scenario is that they would acquire or license analog baseband parts, and integrate them with an all-digital DSP design (of their own IP) via chip stacking.

    In NVidia's case, the gain would be ARM ISA compatibility without ARM, not to avoid paying for a license, which is small change for NVidia, but because NVidia would want to reuse their existing architecture and not be burdened with wasting transistors on a full separate ARM core. In other words, if a G80-style architecture, with a pool of scalar stream processors and special fixed function blocks, can run both DSP-oriented code and ARM-oriented code, then the only issue is a matter of front-end ISA translation. If NVidia instead has to make a chip containing a full ARM core with an RTL license as well as their DSP/gfx/special function blocks, they may be power-disadvantaged.

    I guess to make it clear, I'm talking about a "Fusion" architecture here in the embedded space, where the "DSP" (where I use DSP to also include GPU, acceleration blocks, etc) and the "CPU" share functional blocks and pipelines. In such an architecture, I don't want an ARM RTL license; I don't even necessarily want the ARM ISA, except for the fact that ARM compatibility carries branding and development mindshare advantages.

    Well, the definitions are being smeared by tighter integration in the mobile space, and the functionality tends to move around, but if you look at TI's OMAP 5910 designs, for example, they consist of two cores: an ARM9 general-purpose core coupled to a C55x DSP core. It is, in fact, the DSP core that contains the application-specific acceleration blocks: video/image, viterbi, cryptography, etc.

    In the newest OMAP cores, like the OMAP3 designs, it appears that TI has moved the 2D/3D, image and video codec processing onto the ARM part of the core. My point is, NVidia doesn't need this "CPU vs DSP" core distinction. The DSP usually provides high-speed MAD performance plus fixed-function video/image/viterbi/crypto blocks for acceleration. Just 4 of the G80 SP units would be enough functional units to equal the ARM+DSP (not considering the fixed-function acceleration stuff). Of course, the G80 ALUs are 32-bit FP and not 16-bit integer, so they'll be much larger and consume more power. A "Fusion" architecture for the mobile space would take an architecture that is optimized for high-throughput stream processing (pixel shading, DSP code, etc) and enable the same core to run general-purpose ARM code as well.

    But, I think that's a long way off.


    I strongly suspect that the acquisition of PortalPlayer is indeed tied either to the video iPod or to the mythical iPhone. The iPod's battery requirements are not as stringent as mobile handsets', so they could get away with being not as power efficient for the time being. For an iPhone, though, they'd need to have great power efficiency.
     
  19. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,379
    You're entirely correct. Not bad for one of those SW guys. :wink:
    One thing you forgot to mention is that going off-chip sucks a lot of power by itself:

    P = f*V*V*C/2 * # pins
    • f is usually lower, but for the same bandwidth the burst lengths will be higher, so that's a wash.
    • the address pin/data pin ratio is usually higher, so overall transaction efficiency is lower (e.g. 16 bits of data for external SDRAM vs. 32 bits of data internally).
    • IO voltage rails are higher than core voltage rails. (Quadratic term!!!)
    • pad and pcb trace capacitance is more than an order of magnitude higher
    Unrelated to going off-chip:
    SDRAM often provides more storage than actually needed, but you can't go lower due to limited granularity...
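
    To get a feel for how much the quadratic I/O voltage term and the pad/trace capacitance matter, here's a quick sketch plugging rough, assumed numbers into the formula above; none of the values come from a datasheet, and it pessimistically assumes every pin toggles every cycle:

        /* Worked example of P = f * V^2 * C / 2 per pin. All values are
           rough assumptions for illustration, not datasheet numbers, and
           full toggling activity on every pin is assumed. */
        #include <stdio.h>

        static double switching_power_w(double f_hz, double v, double c_farad, int pins)
        {
            return f_hz * v * v * c_farad / 2.0 * pins;
        }

        int main(void)
        {
            /* Assumed external SDRAM bus: 100 MHz, 2.5 V I/O, ~10 pF pad + trace, 16 data pins */
            double off_chip = switching_power_w(100e6, 2.5, 10e-12, 16);

            /* Assumed on-die bus: 100 MHz, 1.2 V core, ~0.5 pF per wire, 32 data wires */
            double on_chip  = switching_power_w(100e6, 1.2, 0.5e-12, 32);

            printf("off-chip: %.1f mW, on-chip: %.1f mW, ratio: ~%.0fx\n",
                   off_chip * 1e3, on_chip * 1e3, off_chip / on_chip);   /* ~50 mW vs ~1.2 mW */
            return 0;
        }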
     