The AMD/ATI "Fusion" CPU/GPU Discussion

That's going to depend on the target for the device.

Exactly, that was my point.

Given a restrictive power envelope Fusion products should do very well. You save a whole bundle of power on I/Os.

For all-out-MHz desksides, it will be discrete parts as usual.

Fusion does offer some interesting opportunities for accelerating certain tasks, though. AMD will be in a position to provide libraries that accelerate e.g. en/decryption and other similar tasks. Since they have 100% control of the GPU part, they can ensure that there will be no compatibility issues between one GPU generation and the next.

Cheers
 
Exactly, that was my point.

Given a restrictive power envelope Fusion products should do very well. You save a whole bundle of power on I/Os.

For all-out-MHz desksides, it will be discrete parts as usual.

The target for the GPUs will be determined by AMD's x86 processor lines, so it's not too unreasonable to categorize CPUs as low, mid, or high-end.

High-end processors will not get into power-restricted environments, and since AMD likes to reuse the same core designs with modifications at every level, the low-power chips will be lower-performing.

There's little point currently to pairing a high-end GPU to a low-end processor, so whatever AMD core there is will be the determining factor.

Fusion does offer some interesting opportunities for accelerating certain tasks, though. AMD will be in a position to provide libraries that accelerate e.g. en/decryption and other similar tasks. Since they have 100% control of the GPU part, they can ensure that there will be no compatibility issues between one GPU generation and the next.

Possibly, though forcing that compatibility could also slow down ATI's development even more. GPUs aren't as rigorous about backwards compatibility as CPUs are.
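As a rough illustration of the kind of library described above (all names below are hypothetical, not a real AMD API): the application calls one stable entry point, and the library picks a backend per GPU generation, falling back to the CPU when no suitable GPU is present.

/* Hypothetical sketch (not a real AMD API) of a vendor acceleration library:
 * applications call one stable entry point, and the library picks a backend
 * per GPU generation, falling back to the CPU when no suitable GPU exists.
 * The point is the stable dispatch interface, not the cryptography. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef enum { BACKEND_CPU, BACKEND_GPU_GEN1, BACKEND_GPU_GEN2 } backend_t;

/* A real library would query the driver; here we pretend no on-die GPU
 * was found and fall back to the CPU path. */
static backend_t select_backend(void)
{
    return BACKEND_CPU;
}

/* Placeholder "cipher": a keyed XOR stream standing in for real AES. */
static void cpu_xor_stream(const uint8_t key[16], const uint8_t *in,
                           uint8_t *out, size_t len)
{
    for (size_t i = 0; i < len; i++)
        out[i] = in[i] ^ key[i % 16];
}

/* Stable public entry point: identical results on every backend, so the
 * application never cares which GPU generation is underneath. */
static void accel_encrypt(const uint8_t key[16], const uint8_t *in,
                          uint8_t *out, size_t len)
{
    switch (select_backend()) {
    case BACKEND_GPU_GEN1: /* would enqueue a shader kernel here */
    case BACKEND_GPU_GEN2: /* different GPU ISA, same observable result */
    case BACKEND_CPU:
    default:
        cpu_xor_stream(key, in, out, len);
        break;
    }
}

int main(void)
{
    const uint8_t key[16] = { 1, 2, 3, 4, 5, 6, 7, 8,
                              9, 10, 11, 12, 13, 14, 15, 16 };
    uint8_t buf[32];
    memcpy(buf, "hello fusion accel libraries....", sizeof buf);
    accel_encrypt(key, buf, buf, sizeof buf);   /* "encrypt" in place */
    accel_encrypt(key, buf, buf, sizeof buf);   /* XOR twice restores input */
    printf("%.32s\n", buf);
    return 0;
}

The point of the sketch is that the per-generation GPU code lives inside the vendor's library and driver, which is only practical when one vendor controls both the CPU and the GPU roadmap.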
 
Doesn't Intel own close to 10% of Nvidia's stock?

That "fact" first appeared in a DigiTimes article, and countless other stories parroted it. Public records do not and have never shown any Intel stake in NVIDIA. Intel would be required by law to report any ownership stake greater than 5%, and that is definitely not the case.
 
Dumb question: is AMD-ATI's Fusion series of CPU-GPUs going to be a CPU with an on-die GPU, or a true CPU-GPU hybrid, a unified processor of a new class?
 
Dumb question: is AMD-ATI's Fusion series of CPU-GPUs going to be a CPU with an on-die GPU, or a true CPU-GPU hybrid, a unified processor of a new class?

My impression is that it'll probably be closer to an on-die GPU, but there may be some design changes to take advantage of the two cores' physical proximity. An actual hybrid chip would be craptastic at being either a CPU or a GPU.
 
Just how close would a stream processor/GPU have to be to the CPU itself? Couldn't they just have a CPU in one socket and a GPU/stream processor in another, connected via HT? It might not be a great low-end setup, but for a high-end setup it would be like having a PPU/GPU stuck in a socket; the low end could just integrate the CPU and GPU. It wouldn't be much different from having a discrete card in a system to do extra processing, and in situations where stream processing would be beneficial, dedicating an entire socket to a stream processor and dropping the basic CPU would still be a huge performance boost. It wouldn't necessarily have to be attached to the CPU itself, just located on an HT link close to it.

My understanding is that each AMD CPU has its own memory controller onboard. So if a dual-socket board were used, would it be necessary for each CPU to be running an identical memory setup? Ignoring pin counts, couldn't one chip run DDR and the other DDR2, for instance? Each chip is capable of accessing the memory associated with another chip over HT. So what kind of latency penalty is incurred when fetching data from another chip?

Could Fusion be a weird take on the 4x4 setup that AMD is playing around with now? I'm sure memory management would take some serious work to decide exactly where to place data, but couldn't one chip have GDDR4 connected and the other standard DDR2? Both chips could feed off the DDR2 of chip A for standard operations and then pull from chip B when high bandwidth is required. If it all works, they'd have their high bandwidth while maintaining low latency for traditional CPU roles.

It could all be done with a single socket, but every scenario I can think of still ends up with outrageous pin counts. HT looks to top out at about 20 GB/s, so they'd likely have to speed that up a bit for this to actually work. But if they were to put a stream processor/GPU anywhere, they'd already need to increase that bandwidth significantly. Of course, having 4-5 links per chip would get it done, but that would be rather pricey, I'd imagine.
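For reference, a rough back-of-envelope on that 20 GB/s figure (my numbers, assuming a 16-bit HyperTransport 3.0 link at its top 2.6 GHz rate, double-pumped):

$$ 2.6\,\mathrm{GHz} \times 2\ (\mathrm{DDR}) \times 2\,\mathrm{bytes} = 10.4\ \mathrm{GB/s\ per\ direction} \;\approx\; 20.8\ \mathrm{GB/s\ aggregate\ per\ link.} $$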

Ok finished rambling.
 
There's likely a market for high-end CPUs with a basic GPU in areas where high CPU load is paired with little need for graphics, but in other areas the problem becomes more complex.

3dilettante, by this do you mean basically replacing the craptastic graphics chipsets usually included on workstation/server motherboards? These graphics chipsets are so low end (e.g. XGI with ~20MB of RAM) that I'm amazed the cost contributes significantly to the motherboard BOM. I'm not saying it doesn't make sense to integrate, I'm just surprised that this would make for a compelling product differentiator. Assuming that high-graphics performance is not the goal, can't Intel compete in this space simply by integrating GPUs into their north/south bridges?
 
3dilettante, by this do you mean basically replacing the craptastic graphics chipsets usually included on workstation/server motherboards? These graphics chipsets are so low end (e.g. XGI with ~20MB of RAM) that I'm amazed the cost contributes significantly to the motherboard BOM. I'm not saying it doesn't make sense to integrate, I'm just surprised that this would make for a compelling product differentiator. Assuming that high-graphics performance is not the goal, can't Intel compete in this space simply by integrating GPUs into their north/south bridges?

It would be a major selling point for motherboard manufacturers. Their margins are often so low that shaving even a few cents off any given component would look good to them.

Assuming the GPU on-chip doesn't force the need for better power regulators or stronger cooling brackets, it would make things cheaper.

This in turn would make OEMs happier, assuming AMD's strategy doesn't burn up all its capacity.

On the other hand, it wouldn't make any integrated chipset makers very happy, and Intel could counteract AMD's move. Since Intel's manufacturing volumes are so huge, it might be able to cut the price of its chipsets to match the savings.
If AMD's volumes can't rise enough to allow them to underprice Intel, then they will have trouble gaining traction.

Of course, if AMD does manage it and Intel cuts its prices too low to be profitable, it could prompt the Justice Department to look into it.

edit:
On second thought, AMD is rather insulated here since it isn't in the chipset business. Intel would be trying to match prices with all the other chipset manufacturers, something much harder to do than matching AMD alone.

The big question is whether AMD can supply enough processors to make it possible for the chipset makers to count on the volumes.
 
Intel gets more $/mm2 from their fabs than NVIDIA could ever dream of. Why would Intel do such a thing, except perhaps to fuck with AMD?

Uttar

They will do it when they start losing serious market share and design wins. If they don't, then AMD's ATI acquisition and Fusion will have done nothing more than allow AMD to potentially take market share from NVidia, VIA, etc., which ironically is cannibalizing their own ecosystem, since those companies' market share is predominantly in AMD core-logic chipsets. They'll get revenues from their own ecosystem players (while kicking their own partners out of the IGP market), but they won't really decrease Intel's share. AMD's primary business is CPUs, and unless they can get more AMD core logic into the market, they will not have any cross-sell for AMD CPUs. So AMD will want to continue to increase its market share vis-a-vis Intel, and if they continue to do this, and if the process accelerates with Fusion, Intel is going to take a hard look at its IGP.

So far AMD has been doing well at grabbing market share (prior to Core 2), and by corollary, AMD's chipset vendors have been doing well too. Especially in mobile, AMD has been increasing market share, and Fusion has the greatest potential in this market to go head to head with, and beat, Centrino platforms. If they beat Intel at perf/watt by a good margin, they have a huge chance of getting a future Apple contract as well, since Apple is fixated on perf/watt now.

However, Intel's and AMD's market share gains in some markets, like mobile, seem to have come at the expense of smaller players (Transmeta, VIA, etc.) rather than each other. Intel increased its share by 3% this quarter, and AMD increased by 1.3%. Once Intel and AMD have sucked up all of the market share from the smaller players, AMD will have no way to gain more share except by taking it from Intel.
(BTW, the declining fortunes of Transmeta and VIA also signal potential buyouts. As I have mentioned, Transmeta would be a good fit for NVidia to attack the ultra-portable/handheld market, as traditional x86 chips don't compete there, and TI's competitors are vulnerable. Transmeta chips can be made to run ARM code with Transmeta code-morphing, so there is another nice benefit to stealing existing vendor contracts.)


As for $$/mm² and fabs, I would not envision NVidia producing chips at TSMC for Intel; rather, I would imagine NVidia customizing and selling a design to Intel, which Intel's engineers, in partnership with NVidia, would then refactor for their own fabs, either for IGP or, later, on-die integration. High margins are great for ROI/EPS as long as you have market share, but if you start losing significant market share, you can't make up for the loss of earnings by increasing margins further; it won't scale. Even at 100% margin, there will come a point where your earnings trend downward.

If you look at Intel's Centrino roadmap, even Montevina is only going to have a GMA X3000 with 10 shader pipes instead of 8, clocked at 475 MHz. This will put them around X300SE performance, maybe marginally better. ATI's Fusion is likely to integrate a cut-down R600, which will probably spank the GMA X3000 bigtime. Even if it went with an integrated cut-down X1300/1600, it would still spank it.

Intel has long benefited from the fact that people who buy its IGP products basically don't care about 3D performance and for the most part run 2D desktop apps or non-performance-sensitive 3D games. However, that may change with Vista, as many of the new technologies in Vista leverage 3D, and Microsoft is pushing apps to utilize more and more 3D functionality in their UIs via XAML/Avalon. For the first time, poor 3D performance could be visible to 2D non-gaming desktop users.

At first, this probably won't be very visible, as Aero Glass isn't all that taxing on 3D performance (although heavy use of compositing and blending will strain bandwidth at high resolutions). However, as more and more apps start to integrate 3D animations into the UI a la OS X Core Animation or MS Avalon, I think it will start to become visible in store demos, as AMD Fusion-powered Vista desktops will appear more fluid and snappy. Never underestimate the effect that fluid desktop animations have on people's psyche. Whenever I show off OS X animations like fast user switching, the desktop gadget flip-and-zoom-ripple effect, or Front Row activation, I always get an initial "aww... cool".
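As a rough sketch of why compositing strains bandwidth at high resolution (my own ballpark, assuming a 1920x1200 desktop at 32-bit colour and 60 Hz, with each composited full-screen layer read, blended against the destination, and written back):

$$ 1920 \times 1200 \times 4\,\mathrm{B} \approx 9.2\,\mathrm{MB\ per\ layer}, \qquad 9.2\,\mathrm{MB} \times 3 \times 60\,\mathrm{Hz} \approx 1.7\ \mathrm{GB/s\ per\ layer.} $$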

OS X is in fact a preview of Vista's effect on 3D. OS X looks less fluid on a Mac with an Intel GMA 950 than with an ATI Mobility X1600, especially at high resolution. My friend has a MacBook with the GMA 950 hooked up to a 24" monitor, and it shows obvious hiccups compared to my wife's MacBook Pro with the X1600 on her 24".

Basically, if AMD starts to threaten Intel's integrated market share, Intel will have to go shopping. Simply pouring more manpower into the GMA architecture isn't necessarily going to fix the problem, and they will have a limited time window to get competitive. If AMD doesn't threaten Intel's integrated market share, it means Fusion is a failure, and AMD's acquisition of ATI was a questionable move in the long term, merely of interest for short-term earnings acquisition and cannibalization of their own IHV ecosystem.

I would keep an eye on the mobile market, where perf/power, size, and cost are absolutely critical. ATI has the potential here to either be a disruptive platform or to fail spectacularly. A power-efficient CPU core with adequate performance plus an on-die power-efficient GPU and video playback would be an obvious value proposition to notebook manufacturers struggling to increase battery life and performance. Intel doesn't have any solution in this area today, and their notebook IGP sucks ass. Centrino is a nice platform, but it's lacking in graphics. Next-gen notebooks will need to run Vista well and handle HD-DVD/BR playback efficiently, while at the same time satisfying people's desire for ever decreasing weight, increasing battery life, and smaller form factors.
 
Yeah, but unless they hire former ATI/NVidia engineers, it takes time to build expertise and figure out how to solve all of the problems that ATI and NVidia already solved through years of effort. Many companies have tried and failed to enter the DX8/DX9 market over the years, and they always end up way behind ATI/NV. Intel can hire all the new guys they want, but I think they're still going to need time: time that they might not necessarily have if AMD puts pressure on them in 2008.

If we're talking low end value solutions anyway, why would Intel care about performance? Anything they integrate is still likely good enough to run most professional applications, including the likes of Autocad.

The mobile market seems to be converging as well, onto Xscale and OMAP. Everyone except Intel and TI has been losing marketshare in recent years.

Didn't Intel sell off its XScale division? Anyhow, I don't think that market will really persist as is. It will fold into cell phones (which don't have nearly the performance requirements that XScale reached in PDAs), and low-powered x86 devices will take the place of current PDAs and integrate with PCs using a version of Vista specially designed for them, unlike the current Origami venture. Well, maybe not x86; .NET is supposed to distance MS from specific architectures, right? Even still, I think it would be a poor idea to try and scale ARM upwards to meet increasing performance requirements, and that some kind of mobile-desktop class processor should find its way in. Heck, maybe PowerPC will find a new market.

Sweeney talks to FiringSquad about D3D10:

http://www.firingsquad.com/hardware/...view/page8.asp

Quote:
Talk of "adding physics features to GPUs" and so on misses the larger trend, that the past 12 years of dedicated GPU hardware will end abruptly at some point, and thereafter all interesting features -- graphics, physics, sound, AI -- will be software problems exclusively.

Sweeney was preaching that 5 or 6 years ago as well, and I think he predicted that GPUs would be gone by now.
You know, one day ray tracing is supposed to provide superior performance to standard rasterization, but I don't think we're near that day; not for performance reasons anyway, maybe market reasons. Sound, I'd say, is a good thing to move onto the CPU: the benefits over what the CPU can do (especially with dual core) are extremely minimal, if they exist at all. Physics I think will split between graphics and CPU for a while and settle on one eventually; AI will stay a CPU problem. In the meantime, GPUs will become more CPU-like, and ultimately you can probably consider them CPUs, but they'll still be CPUs heavily organized for graphics and have a major advantage over x86 CPUs (unless x86 CPUs get texturing hardware and start getting hundreds of repeated functional units).

I don't know that it really would be in the cards if the CPU makers hadn't hit a clock speed wall, forcing them to go wider rather than faster... and that wasn't readily apparent (at least I don't remember it being so) at the time he first predicted it.

It's been predicted since the start of the GPU. People love to see integration.
Also, didn't he make that statement pre-T&L, or at least while only fixed function existed? GPUs have been getting more CPU-like, not the other way around, and they've been advancing in performance far faster, even if the clock speed wall hadn't been hit. But come on, the PlayStation 2 was what was big in the graphics world at the time; it's easy to see how that could have made people think CPUs could be the near future for graphics.

In a couple of years, we could be seeing CPU/GPU hybrids that give you "enough" graphics power, i.e. G80/R600 levels of power (even though those will have been superseded by then in the discrete graphics segment). For a lot of people, except high-end gamers, that's more than adequate.

How/where do you draw that line? If you're talking about games, performance requirements will always increase; it's inevitable as long as consoles drive PC gaming, and it probably would have happened even without them.
If you're talking about other markets, then just about anything there, including most workstation applications, can be handled by integrated graphics NOW, and there's really no need to invest billions in R&D on a need that's already been satisfied. Or are they spending billions to develop for the <1% of the market that does intense 3D rendering far beyond what you'll see done in AutoCAD or SolidWorks?

Nvidia could possibly use coherent hypertransport, since AMD has allowed other vendors to use it, but that's still up to AMD/ATI.
Even if Nvidia could go hypertransport for its CPU, it would be going against a very established AMD by that point.

Isn't Nvidia part of the Hypertransport consortium? I think it's already been positioned as an industry 'standard'. If they could use it, they could make drop in replacements for AMD processors potentially.

I'd say nvidia would have a better chance going with alternative markets entirely, work with Sony to make Playstation the dominant PC or something.

However, in the span in which Intel's market share went from 2% to the 35-40% it is now, Nvidia and ATI quadrupled their profits and revenue. I am sure that Intel's integrated graphics cost both of them sales, but it did not impede a very substantial level of growth.

Prior to Intel "taking over the 3D graphics market", most people had 2D graphics. Intel didn't really have integrated graphics then (that I know of), but other companies did; Intel's market share didn't come out of Nvidia and ATI, it came from a market those two companies were not part of. Integrated 3D graphics took over from integrated 2D graphics, not from discrete 3D graphics.

Of course, maybe a Flipper-like GPU design integrated in would do well. Say a dual-core processor, and instead of adding more cores, add in a very well-featured graphics core (maybe even a TBDR) and a decent amount of eDRAM, and you've got something that doesn't need to match the transistor budget of a discrete chip to perform close enough to drive discrete cards out of the market (same as the sound card market). Well, assuming that the discrete market doesn't pick up eDRAM as well, but it seems a no-brainer for the integrated chips to pick it up.
 
The target for the GPUs will be determined by AMD's x86 processor lines, so it's not too unreasonable to categorize CPUs as low, mid, or high-end.
You're right, it's not unreasonable.

But what metric do you use to categorize low/mid/high-end?

MHz?
Cores * MHz?
Performance? (SpecINT/3DMark/Sysmark/Whatever)
Performance/watt?

Since both CPUs and GPUs are power-dissipation constrained today, it *must* be the last one.

Cheers
 
Fusion will be high-end

Hey guys, I found another reason proving that Fusion is high-end.

http://www.overclock3d.net/articles...d__fusion__vision_for_the_future_-_on-die_gpu

The bottom picture says:

48 GPU pipes
× 8 FLOPS/cycle
× 3 GHz
= 1 TeraFLOP

So the on-die GPU will deliver one TFLOP of performance. But the question lies in heat dissipation, since they said 48 GPU pipes at 3 GHz! However, if it is possible, then why not.
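Checking the slide's arithmetic (assuming "pipes" really means 48 independent units each sustaining 8 FLOPS per cycle):

$$ 48 \times 8\,\tfrac{\mathrm{FLOPS}}{\mathrm{cycle}} \times 3\times10^{9}\,\tfrac{\mathrm{cycles}}{\mathrm{s}} = 1.152\times10^{12}\ \mathrm{FLOPS} \approx 1\ \mathrm{TFLOP}. $$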

However, if this doesn't work out, there is a backup plan. One of the pics states that for graphics-centric apps (one of which is gaming), the GPU will be really powerful, but will take away available die space from the CPU. This is more reasonable, since the only thing a gamer will ever do on his PC is gaming, and the GPU will be able to take up physics and AI in addition to graphics, so the CPU need not be that powerful; although a gamer does web browsing etc., those tasks don't require powerful CPU cores.
 
Isn't Nvidia part of the Hypertransport consortium? I think it's already been positioned as an industry 'standard'. If they could use it, they could make drop in replacements for AMD processors potentially.

Cache-coherent Hypertransport is not part of the open standard. Until recently, I don't think AMD let anybody use that.

With Torrenza, AMD is letting coprocessors use it, but not full CPUs.

I suppose in theory that Nvidia might get away with a CPU in a single-socket configuration, but the other concerns would persist. I don't know if Nvidia has the license for x86, or whether Intel will grant it.

Even with that, it is unlikely Nvidia can produce a chip that would be able to match AMD on performance or price/performance.

Nvidia would possibly be something like a bargain-basement Transmeta, assuming that AMD doesn't control the use of its sockets. If AMD can control that, then it's a no-go for drop-in replacement.

Gubbi said:
You're right, it's not unreasonable.

But what metric do you use to categorize low/mid/high-end?

MHz?
Cores * MHz?
Performance? (SpecINT/3DMark/Sysmark/Whatever)
Performance/watt?

Since both CPUs and GPUs are power-dissipation constrained today, it *must* be the last one.
The determining factor is what AMD (the CPU side) segments its offerings into, not what metric we think up.

An Athlon FX-60 burns way more power than a Sempron that will probably achieve 80% of the performance, but the Sempron is considered lower-end.

The high-end is basically made up of those processors that clock high and get high performance. There is a low-power segment, but it is not the same as AMD's high-end.

Techno+ said:
48 GPU pipes
× 8 FLOPS/cycle
× 3 GHz
= 1 TeraFLOP

So the on-die GPU will deliver one TFLOP of performance. But the question lies in heat dissipation, since they said 48 GPU pipes at 3 GHz! However, if it is possible, then why not.
I'd be wary of putting too much trust in marketing; it's usually ahead of itself even when it's behind everything else. It's also waaayyyy too early to be spouting off numbers when they don't even have silicon yet, silicon that would instantly catch fire if it were as fast as that.

I'd be impressed if they think they can clock a GPU that high, even more so if they think they can get something as wide as a top-end GPU to that speed. I don't think they can, but I'd be impressed by their enthusiasm.
Failing that, how would they feed 48 GPU pipes at 3 GHz on a socketed memory interface that will likely supply less bandwidth than today's mid-range GPUs, which have half the pipes or fewer at half the clock or less?
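To put rough numbers on that (2006-era figures, my own estimate): a socketed dual-channel DDR2-800 interface peaks at about

$$ 2 \times 64\,\mathrm{bit} \times 800\,\mathrm{MT/s} \,/\, 8 = 12.8\ \mathrm{GB/s}, $$

while a mid-range discrete card like the X1600 XT already has on the order of 22 GB/s of dedicated GDDR3 bandwidth for far fewer, far slower pipes.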
 
Failing that, how would they feed 48 GPU pipes at 3 GHz on a socketed memory interface that will likely supply less bandwidth than today's mid-range GPUs, which have half the pipes or fewer at half the clock or less?

Are you sure those are full pipes and not just ALUs?
Anyhow, embedded DRAM or cache might work. ATI has had some experience now with both Flipper and Xenos.
 
If we're talking low end value solutions anyway, why would Intel care about performance? Anything they integrate is still likely good enough to run most professional applications, including the likes of Autocad

Given that their low-end solutions run professional Mac applications and OS X poorly today (integrated Intel graphics on MacBooks has poor Quartz Extreme performance, and many Apple applications like Final Cut Studio essentially won't even run), I am skeptical that GMA* will be "good enough" for future Vista desktop apps.

As I explained earlier, future desktops are utilizing more core GPU power to offload tasks as well as increase UI fidelity, and Intel's integrated performance is barely adequate.

Intel will "care" about graphics performance if AMD markets Fusion right, and end users begin to "care" about graphics performance.

Anyhow, I don't think that market will really persist as is. It will fold into cell phones (which don't have nearly the performance requirements that XScale reached in PDAs), and low-powered x86 devices will take the place of current PDAs and integrate with PCs using a version of Vista specially designed for them, unlike the current Origami venture. Well, maybe not x86; .NET is supposed to distance MS from specific architectures, right? Even still, I think it would be a poor idea to try and scale ARM upwards to meet increasing performance requirements, and that some kind of mobile-desktop class processor should find its way in. Heck, maybe PowerPC will find a new market.

Cell phone performance requirements will continue to scale as cell phones add more features: video and photo processing, VoIP, H.264 conferencing, web browsing, biometrics, WiFi, gaming, music, etc. However, they will do so with specialized hardware, not by scaling to x86 performance. General-purpose x86 performance will never come close to the power efficiency of DSP-style solutions with fixed-function acceleration blocks. TI's C55x DSPs, for example, consume 0.12 mW (yes, roughly a tenth of a milliwatt) on standby, and OMAP platforms with media processors decode and encode video at power efficiency levels that NVidia/ATI/Intel/AMD can only dream of.

.NET is irrelevant, Vista is irrelevant. Most DSP work on cell phones is still done in assembly, with the exception of upper-level UI stacks, apps, etc., which are in C. Embedded Vista/.NET Compact Framework on cell phones is hopelessly ill-conceived. Microsoft's smartphone OSes to date have been power hogs with crappy performance that eat up way more RAM/ROM space. The embedded space is one area where a desktop development paradigm simply does not fit well.

As for XScale PDA performance, XScale in PDAs got its ass handed to it by TI OMAP by large margins in benchmarks, because XScale, while an ARM CPU, is way less efficient at common mobile tasks than a media platform like OMAP, which includes an ARM core, a DSP, and fixed-function acceleration.

The #1 requirement in mobile phones is power efficiency, and the best way to achieve that is to throw custom hardware at it.

The market isn't going away anytime soon. Low-powered ultra-portable PCs might become a reality on x86, like OQO-style devices, but most PDA functionality is simply going to be subsumed into mobile phones. The idea that people will separately carry around mobile phones, iPods, and PDAs is absurd. Most of the features of PDAs that revolve around PIMs and mini-apps have already been subsumed by more powerful phones, and music and mobile video playback will soon head that way too. There's just no justification for a separate PDA platform (which is not a phone).

All of the basic features people want in mobile devices (voice communications, email, text messaging/IM, photo capture and manipulation, video capture, video encode/decode/playback, music playback, gaming, PIM functions, internet browsing, etc.) will be captured by the general-purpose CPU plus acceleration blocks on a family of OMAP-style platforms, depending on which features the vendor is selling (minimalist phone vs. kitchen sink). It's the only way to achieve the power requirements. There is no room for x86 here, and there won't be a "one size fits all" CPU, as the nature of the market demands custom solutions for each product. You are simply not going to see one CPU that does everything and gets put into every phone, because the cheaper and simpler phones won't want to trade off margins, battery life, form factor, et al. for features or power they are not using.
 
Aren't you forgetting the scores of crappy Java cell-phone games? :)
And not everyone is interested in all that crap.
Dedicated MP3 players are also quite popular (iPods and Chinese USB keys), as are things like the Nintendo DS (you know).


Cell phones will always be limited by screen size, controls, interface, and battery (not only battery life; you don't want to drain it if it means you can't receive or make calls afterwards), crippled by operators (and a closed system in general), never mind the slow and expensive networks.

Though I agree the AMD thing might be interesting for embedded stuff (especially the GPU+CPU combo).
 
The #1 requirement in mobile phones is power efficiency, and the best way to achieve that is to throw custom hardware at it.

The market isn't going away anytime soon. Low-powered ultra-portable PCs might become a reality on x86, like OQO-style devices, but most PDA functionality is simply going to be subsumed into mobile phones. The idea that people will separately carry around mobile phones, iPods, and PDAs is absurd. Most of the features of PDAs that revolve around PIMs and mini-apps have already been subsumed by more powerful phones, and music and mobile video playback will soon head that way too. There's just no justification for a separate PDA platform (which is not a phone).

Sorry if I came off as implying that cell phones would follow desktop-style development; that's not what I meant. Rather, cell phones will grow their features to encompass current PDA functions (in a far more streamlined and probably less flexible manner), while the PDA market will grow into mini PCs. The PDA platform will disappear, folded into both phones and PCs. For that matter, XScale processors are overpowered for the general-purpose requirements of a phone; a phone does not need an 800 MHz processor, even an ARM, to accomplish what can be done better with DSPs. I understand that, and didn't mean to imply that cell phones will become full-blown PCs.
BTW, I could see something similar to a Cell design replacing all the hardware in a cell phone sometime in the future. Stream processors have already shown good performance per watt and are more programmable than DSPs.

Additionally, by the time Intel and AMD get around to integrating GPUs into their CPUs, I think it's feasible that the integrated graphics could be at a level sufficient for most desktop applications. The GMA950 is unacceptable now, but ATI's integrated graphics are already quite usable for most 3D applications within XP; how much beefing up would it take to make an integrated graphics chip that could handle Vista and those same applications? Of course, that ignores one of the fundamental lessons in computing, the one that would otherwise have had Amdahl's law dooming concurrent processing and Windows never needing more than 16MB of RAM: any problem can and likely will be scaled up to meet the capabilities of the hardware. The scope of the problem expands to meet the limits of the hardware, so just because an X300SE can handle AutoCAD now doesn't mean it still will when AutoCAD 2009 or whatever version comes out.
 
Aren't you forgetting the scores of crappy Java cell-phone games? :)
And not everyone is interested in all that crap.
Dedicated MP3 players are also quite popular (iPods and Chinese USB keys), as are things like the Nintendo DS (you know).

Of course not everyone is interested in kitchen-sink phones, which is why companies like TI sell a wide range of devices that let phone manufacturers make anything from "just a phone" to a Swiss army knife. But in general, someone who wants a "free" simple phone that only makes calls and is missing all the bells and whistles is also not a person who is going to carry around a bulky video iPod and a PDA.

There is no real justification for something like an iPod mini or a USB MP3 player to exist in the long term. MP3/AAC music playback is a commodity operation, and a separate device is simply not needed in the future, any more than one needs to carry a separate "phone book" device and a separate "SMS" device. And if it weren't for problems with integrating quality optics into phones, there wouldn't really be a need for separate wallet-sized digital cameras. I have an Exilim S-500 myself, which is about as small as my RAZR, but frankly, the only reason I won't use camera phones is the terrible optics (for real photography, I use a digital SLR). I carry around my Exilim all the time to take pictures of my son, but the fact is, I'm sick of it.

Likewise, I own an 80GB video iPod. But frankly, H.264 video playback is done just as well by many cell phone platforms, and the iPod screen isn't really bigger than those of many of the more advanced phones out there (Treo, PPC, UIQ and Series 90, etc.). There is no real advantage iPod and Zune devices have over embedded platforms; in fact, I would bet that H.264 playback on the newest OMAP platforms consumes much less power than on iPod-style devices.

Media playback is a commodity.

Cell phones will always be limited by screen size, controls, interface, and battery (not only battery life; you don't want to drain it if it means you can't receive or make calls afterwards), crippled by operators (and a closed system in general), never mind the slow and expensive networks.

They are no more limited by screen size, controls, interface, and battery than any other handheld device of similar proportions. I've had phones with bigger screens and more sophisticated software and interfaces than an iPod.

Crippled by operators, yes, but it won't last forever. The operators are fighting a defensive position, just like the long distance carriers, against future packet-switched wired and wireless networks. The writing's on the wall, and the carriers know it. I've spent the last 5 years working with operators in the mobile industry and they admit as much, which is why they are so keen to get into the content business to protect against the day when they become bit pipes.

At the last wireless congress, I was shown a private demo of a phone platform with pseudo-software-defined radio. It had fixed-function blocks and hardware to implement multiple configurable radios. It was not true SDR, but it did practically every 2.4 GHz protocol (Bluetooth, 802.11a/b/g, etc.), GPS, GSM/GPRS/EDGE/UMTS, and the CDMA equivalents. The fact is, device manufacturers are looking at future phone platforms as general-purpose media and communications platforms that can be programmed to do a wide variety of things depending on phone form factor, audience, power management, etc. They're putting in acceleration blocks, DSPs, and radio hardware so that the device can subsume all of the common commodity functions of a vast array of handheld consumer electronics, turning what was once a device-specific issue into nothing more than a software issue.

These companies have amazing roadmaps down to 32nm, with incredible levels of chip stacking and interconnect to pack a vast array of functionality into a really small space at unbelievable power consumption levels. You have to think of these devices not as "phones" but as mobile media and communications platforms, sort of like a sci-fi tricorder: a very efficient, I/O-oriented device (electromagnetic I/O in particular).

Some problems will be more difficult to tackle, like putting anything even remotely competitive with digicam optics into these devices, but pretty much every other device I can think of (phone, PDA, GPS navigator, music player, video player, gaming handheld) has no real barrier to commodification on these platforms. The only exception is building a device with top-end game performance (e.g. PSP), but as Nintendo demonstrated, mega first-person 3D performance for mobile gaming isn't necessarily what people need on these devices.


As for future Intel IGPs being good enough, Intel's existing roadmap out to 2008 shows that their IGP won't be much more than a DX10 chip with X300-level performance, whereas Fusion will probably be a lot better. We'll see what happens.
 
They are no more limited by screen size, controls, interface, and battery than any other handheld device of similar proportions. I've had phones with bigger screens and more sophisticated software and interfaces than an iPod.
Of course, but there is a limit on how big a cellphone can be if it's supposed to appeal to a mass audience.

Some problems will be more difficult to tackle, like putting anything even remotely competitive with digicam optics into these devices, but pretty much every other device I can think of (phone, PDA, GPS navigator, music player, video player, gaming handheld) has no real barrier to commodification on these platforms. The only exception is building a device with top-end game performance (e.g. PSP), but as Nintendo demonstrated, mega first-person 3D performance for mobile gaming isn't necessarily what people need on these devices.
I don't see much of a problem with getting top-end game performance. The real barrier for some of these applications is form factor. Some people consider the DS Lite a bit too small to be held comfortably as a gaming device, despite it being larger than some PDAs like the Dell Axim x51v.

I'd much rather have a small device that's a phone/music player/"read only" PDA and a separate, larger PDA/GPS/web browsing/video/gaming/camera device, simply because the latter requires a certain screen size and control layout to be useful, while a phone should be small enough to fit in any pocket. It would be nice to have these able to sync with each other, though. Of course, the larger device could also have phone functionality, thus being "all-in-one", but it would have less mass-market appeal.
I can imagine cases where you'd plug an external monitor into your phone, like in a car, train, or plane, but you don't always have that option. I can even imagine PCs being largely replaced by docking stations for smaller mobile devices, but that's a more distant future.
 
Of course, but there is a limit on how big a cellphone can be if it's supposed to appeal to a mass audience.

Seen this, btw? http://phx.corporate-ir.net/phoenix.zhtml?c=114723&p=irol-newsArticle&ID=925584&highlight=

The agreement focuses on development of high-volume design for manufacturing of Microvision's proprietary Integrated Photonics Module (IPM(TM)), a tiny display engine suitable for a variety of display applications, including ultra-miniature laser projectors for mobile phones.
 