Is it time for Nvidia to go to the next level?

wolf2

Newcomer
Nvidia has made its name by selling semiconductors into all things graphics. Sure, along the way they've gotten into core logic, some video and sound compression technologies, IO, and a lot of software, but these have been necessary ancillary competencies to keep the graphics machine rolling along.

I started this thread to ponder the idea that maybe it's time for Nvidia to step up a level. Will they still be considered a relatively pure graphics play in 5 years? Or, will they become known as (take your choice):

- semiconductor consumer electronics supplier (media player, UMPC, portable game, phone, console)
- semiconductor platform supplier (replacing or standing next to Intel/AMD with their own PC platform solution)
- a branded supplier making high-tech products for the consumer electronics sector

Or is there some other direction they might go?
 
All IMO and highly speculative - [so don't bother reading if you're not really interested!] - but...

It makes sense for NVIDIA to leverage their expertise into new domains, especially so with more and more companies becoming fabless (TI, anyone?) and the gap in process nodes between the different players getting smaller and smaller. Chartered is now part of one of the IBM alliances, and TSMC has a ridiculous amount of money to invest in R&D. UMC seems to be following quite well too, but we'll see how that goes in the longer term.

What this means in my mind is that NVIDIA shouldn't be afraid of entering new markets that are somehow, even just partially, affiliated with their traditional areas of expertise, or any other ones they'd develop along the way. Jen-Hsun's motto tends to be that they want to compete in markets where they can "add value". That's an excellent strategy right now, but if they kept following it for the next 5+ years, they'd eventually end up in a position with few substantial growth opportunities remaining. And then they'd stagnate with a $20-25B market cap (plus inflation) for the next 500 years or so.

Let us then predict a few things. First of all, R&D budgets for both the technology side of things (IP) and the implementation-specific side of things (chips etc.) are going to rise in all semiconductor markets, at least when aiming to deliver a highly competitive offering. This is hardly a revolutionary idea, and the consequences of it are fairly predictable. Higher levels of company integration and more synergies across a company's various product lines will be necessary in order to turn in appealing levels of net profit, rather than merely approaching "profitless prosperity".

What I'm trying to get at here is that there is no good strategic reason why NVIDIA shouldn't progressively try to become pretty much all the things you listed. The IP necessary for the various addressable markets is going to become more and more similar, until eventually, it (nearly!) fully converges. And I believe that dynamic is going to play out much sooner than some are expecting. We're going to see cutting-edge desktop GPU architectures used in handhelds Very Soon(TM) now, and that's only the beginning. It's not unthinkable for even video to have synergies between different product lines soon - in fact, AMD seems to already be going in that direction!

This is already getting a tad long and I don't want to get carried away, so I'll try to wrap up quickly from here. I believe NVIDIA's strategy will have to be to become a very diversified player leveraging the same technologies across a huge number of product lines. This also means that they will have to expand their technology portfolio, but they're rapidly getting there already. A good company that has followed such a strategy, but perhaps to less of an extreme, is Texas Instruments. Look at how many different chips they've got using their DSPs, and how many different markets their DSPs are in!

When it comes to CE, I think it should be fairly obvious where I think NVIDIA should go from here. Ideally, they should have their own DSPs (Stexar?) to go with the rest of their IP (including GPUs), in order to substantially increase their addressable market - but they could do fine even without that.

As for the PC market, NVIDIA isn't going to compete with Intel (or AMD) in their current position. Contrary to what some people might be thinking, they're not going to come out of nowhere with their own Conroe. On the other hand, they've got an interesting market dynamic working for them: the CPU is getting more and more useless every day. The question is when this will truly begin to matter: in 2008, or in 2015, or even much later?

If I had to make a wild prediction, I'd say NVIDIA's best shot today is to kill VIA and pick up the pieces. They've got a unique opportunity to do this, by competing aggressively in the lower end of the chipset market, which is currently VIA's bread and butter. My understanding is that by acquiring the entire company and making it a subsidiary, they'd be legally able to continue developing and producing x86 CPUs, including the C7 family. That's a free pass into the embedded market, and also a great way to compete in certain parts of the PC market where the CPU doesn't really matter anymore.

It's very nice to speak of performance-per-Watt, but the chip that takes the least power is the one that doesn't exist. From that same perspective, consider something like a beefed-up and dual-core derivative of the C7 on 45nm. Such functionality would take barely 15-20mm2. How likely is it that Fusion takes less power than that? Now, rather than adding the GPU to the CPU, just add that CPU to the MCP+IGP, which is already single-chip today. This strategy has other interesting dynamics that would play into it, such as the influence of GPGPU.

Overall, this obviously all makes pretty good sense to me, but that doesn't make it any less speculative. I think the CE part of my prediction is not really out-of-this-world; the PC part, however, is much more wild-eyed. I think there's an opportunity for NVIDIA to go in that direction and win big, very big. At the same time, plenty of other possibilities exist, and in all honesty, I'm just describing the kind of strategy I'd be preparing right now if I were on the management team of any of those companies - which I'm obviously not, so take all of this with a grain of salt ;) (But hey, you pretty much asked for speculation, so hopefully this is what you were looking for! All of this assumes that NVIDIA is successful in all of their intended markets, though, which is a pretty big IF, but it'll be interesting to watch anyhow.)


Uttar
 
Considering that JHH has been proclaiming that they view GPUs as the DSP of the 21st century it seems unlikely that they will build DSPs, but rather address the DSP market with GPUs. It seems pretty obvious that audio processing, for instance, will be handled by GPUs. Didn't JHH speak of SoundStorm reappearing, but not in the form we might expect?

Other DSP functions that are suitable for GPUs that immediately come to mind are imaging and visualization. NVIDIA is already making a big push in medical imaging, and there are opportunities for sonar, radar, and other sensors for military applications that currently use DSPs.
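
To make the "DSP work maps naturally onto GPUs" point a bit more concrete, here's a toy sketch of my own (not anything NVIDIA actually ships): a plain 1D FIR filter - the bread and butter of signal processing for imaging, sonar and radar - written as a data-parallel CUDA kernel, where every output sample gets its own thread.

Code:
// Toy illustration (mine): a 1D FIR filter expressed as a CUDA kernel.
// Each thread computes one output sample independently - exactly the kind of
// embarrassingly parallel workload that maps well from DSPs onto GPUs.
__global__ void fir_filter(const float *in, const float *taps,
                           float *out, int n, int num_taps)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float acc = 0.0f;
    for (int t = 0; t < num_taps; ++t) {
        int idx = i - t;
        if (idx >= 0)                     // zero-padded history at the start
            acc += taps[t] * in[idx];
    }
    out[i] = acc;
}

// Host-side launch (error checking omitted):
//   fir_filter<<<(n + 255) / 256, 256>>>(d_in, d_taps, d_out, n, num_taps);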

Robotics seems like a tailor-made application for GPUs. There is GPGPU research into neural networks and other artificial intelligence. I personally would like some smarter robots at my house.

From the beginning NVIDIA has targeted SoC designs; see NV1 and JHH's stint at LSI. They are arguably getting there in the handheld arena. To date the functionality of phones has been limited, but the iPhone looks to change all that. In my opinion, the reason is WiFi, and web browsing outside of the constraints of carrier charges. The handheld market is wide open, in that there isn't an Intel dominating it. And the opportunity is massive: 1 billion phones a year. In five years costs will be low enough that the vast majority of phones have iPhone-like functionality, only with WiMAX (hopefully). As NVIDIA integrates more functionality onto their chips they can raise prices a bit (they have been working on RF for years, for instance; will we see it commercialized?). That translates into a vast market opportunity.

Then there is the high-end imaging that the Quadro Plex targets. If that evolves into a substantial business with very large margins, then in concert with handheld growth the PC business could even stagnate and NVIDIA would still have massive growth potential. And I would say that Intel's impending entry into these high-end visual computing markets is a validation of JHH's assertion that this is a large market indeed. Otherwise, why is Intel getting into GPUs in a serious way after all these years?
 
I think they will mosey towards the CE side of things, but not entirely. My prediction is they focus on being the high-end, value-add supplier to anyone who needs reasonably good graphics and doesn't have their own captive division to do it credibly. I think Intel and AMD are going to mostly kill NVIDIA's low-end discrete and IGP lineup by 2012... but Sony, Apple, etc. CE partnerships will take up a good bit of the slack, as will maintaining a robust midrange and high-end PC presence.

If Apple can continue to grow their Mac business off their CE business, that could provide an interesting play for NV too.
 
If I had to make a wild prediction, I'd say NVIDIA's best shot today is to kill VIA and pick up the pieces. They've got a unique opportunity to do this, by competing aggressively in the lower end of the chipset market, which is currently VIA's bread and butter. My understanding is that by acquiring the entire company and making it a subsidiary, they'd be legally able to continue developing and producing x86 CPUs, including the C7 family. That's a free pass into the embedded market, and also a great way to compete in certain parts of the PC market where the CPU doesn't really matter anymore.

It's very nice to speak of performance-per-Watt, but the chip that takes the least power is the one that doesn't exist. From that same perspective, consider something like a beefed-up and dual-core derivative of the C7 on 45nm. Such functionality would take barely 15-20mm2. How likely is it that Fusion takes less power than that? Now, rather than adding the GPU to the CPU, just add that CPU to the MCP+IGP, which is already single-chip today. This strategy has other interesting dynamics that would play into it, such as the influence of GPGPU.

This would indeed make a lot of sense to me. They'd get all the x86 licenses needed, as well as loads of great IP.

I could also see them then come out with a great audio solution to compete against Creative; subsuming VIA's Envy24 IP on top of what they already have from the SoundStorm legacy would even give them a nice edge.
 
It seems pretty obvious that audio processing, for instance, will be handled by GPUs. Didn't JHH speak of SoundStorm reappearing, but not in the form we might expect?
I was a pretty big fan of SoundStorm back in the day, and I loved my nF2, so I actually looked into that a few months ago - and came to the conclusion that Jen-Hsun was referring to the audio engine of the GoForce 5500. They announced the licensing of the Tensilica DSP+Sound Engine just two months after he said that (so they were already looking into it), and it was later confirmed as the technology used in the GoForce 5500.

I believe the customized Tensilica DSPs are also helping with the video and image processing parts of the chip. It's some really great tech, btw - really the kind of thing that makes you go "damn! Why didn't I think of that?!". And it turns out that, iirc, the Tensilica founder has also been on the NVIDIA board of directors pretty much forever, so the IP choice doesn't surprise me much personally ;)
Considering that JHH has been proclaiming that they view GPUs as the DSP of the 21st century it seems unlikely that they will build DSPs, but rather address the DSP market with GPUs.
You know, what most people are not aware of is that PureVideo actually includes what most would call a DSP - and that's the part which was broken (if it even existed, of course) in the original NV40. It's all in the PureVideo tech docs; the relevant diagrams are on pages 4 and 6. From those diagrams alone, I'd tend to believe it was developed in-house and that it's a fairly basic one - but I could be wrong, of course. In either case, it makes a lot of sense to me to be developing energy-efficient and highly scalable DSPs and re-using them for GoForce, PureVideo, sound processing, wireless signal processing, etc.

That doesn't mean I disagree that NVIDIA would want to use the GPU for all the things you listed. And you're most likely right that this is what Jen-Hsun is thinking of. There are TONS of applications where DSPs are used and in which GPUs would do amazingly well, and the ones you listed are very good examples. But the way I tend to think of GPUs in that context is that they are "parallel DSPs with FP32 units". That means if you don't need FP32, they're going to be less area/power-efficient than an optimal solution. And if your workload isn't massively parallel (even more so than standard parallel DSPs, I'd tend to believe), you're also going to have some problems.

That's why I think it makes sense to also have some DSP IP at your disposal. Arguably, a "full-blown" CPU would work too; the two have been converging more and more anyway. NVIDIA has the choice of either developing their own IP or using Tensilica's until the end of days. Given that the IP business model tends to give you very little revenue per chip, and that NVIDIA would likely not want to compete in the super-low-end parts of the market (at least, not in the coming 5+ years), they could do fine with Tensilica. At the same time, they just got these great guys from Stexar, who are likely some of the brightest DSP engineers in the business - so why not give them something to do in that area? We'll see - this is much more a business decision than a technology one, anyway.

1 billion phones a year. In five years costs will be low enough that the vast majority of phones have iPhone-like functionality, only with WiMAX (hopefully).
Funny you mention that. Steve Jobs said in an interview at Macworld that he really wants to get the iPhone costs down and target a wider and wider market over time. What interested me most in his statements is that I felt he wanted to cost-reduce that actual model, rather than create lower-end models for other markets.
As NVIDIA integrates more functionality onto their chips they can raise prices a bit (they have been working on RF for years, for instance; will we see it commercialized?). That translates into a vast market opportunity.
Exactly. It's also quite interesting that you mention NV has been working on RF for years. They certainly have a few (not-so-interesting, imo) patents on the subject. What surprised me more, however, is that PortalPlayer has also been working on wireless for years, and not only WiFi/Bluetooth - they've been working on wideband too. When NVIDIA bought them, they had just taped out their "super-integrated, single-chip" offering that supposedly included Bluetooth, WiFi and wideband (GSM/EDGE?). They clearly presented that as having the analog parts on the same chip, but I'll admit I'm not too sure what kind of die size they're aiming at if they want that much analog and high-end functionality on a single chip!

Either way, if all of that wireless expertise at PortalPlayer is in-house, then the engineering dynamics of that acquisition were most likely very different from what we have been considering so far. From a political point of view, it's still very much about Apple - but from a technical point of view, it'd also be a great way to get some IP in tons of areas that Jen-Hsun would tend to consider as "mature", in order to integrate that into more future products/chips.

Then there is the high-end imaging that the Quadro Plex targets.
Indeed. NVIDIA has made some amazingly smart design choices for those markets with G8x. If they execute properly, they're poised for tremendous success there, in my opinion. Let's just say that NVIO is much more flexible in terms of configurations than G80 alone would lead us to think. I believe the chip they're waiting for to make another big push with Quadro Plex *and* GPGPU is coming in Q2/Q3 2007, and it's obviously 65nm.

And I would say that Intel's impending entry into these high-end visual computing markets is a validation of JHH's assertion of this being a large market indeed. Otherwise, why is Intel getting into GPUs in a serious way after all these years?
I could be horribly wrong, but I'd tend to believe Intel is thinking about this more from the GPGPU point of view. They know that there is a huge chunk of the server market that IS massively parallel, so once the proper synchronization primitives exist in GPUs (CUDA is a good first step!), NVIDIA and AMD will compete there with GPUs or pseudo-GPUs. They can't afford to lose that business, and at the same time, they need some extra bundling power versus AMD in the OEM space.
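
As a rough illustration of the kind of "synchronization primitive" I'm talking about - nothing fancier than the block-level barrier CUDA already exposes - here's a toy shared-memory sum reduction of my own; the kernel name and block size are just assumptions for the example.

Code:
// Toy sketch (mine): a per-block sum reduction using CUDA's block-level barrier.
// Threads stage data in shared memory and must all reach __syncthreads() before
// reading what their neighbours wrote.
__global__ void block_sum(const float *in, float *block_results, int n)
{
    __shared__ float scratch[256];        // assumes blockDim.x == 256
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    scratch[threadIdx.x] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                      // barrier: all writes visible to the block

    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (threadIdx.x < stride)
            scratch[threadIdx.x] += scratch[threadIdx.x + stride];
        __syncthreads();                  // barrier after each reduction step
    }
    if (threadIdx.x == 0)
        block_results[blockIdx.x] = scratch[0];
}

Today that barrier only works within a block; richer primitives across the whole chip are exactly the kind of thing the server/GPGPU crowd will be asking for.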

In either case, it'll be very interesting to see what NVIDIA does from here. The CE and handheld markets are definitely the "safest bets" for them, but if they play their cards flawlessly, there are some great opportunities to expand above and beyond their current position in the PC market. It remains to be seen whether they'll want to take those risks, though, given that taking on Intel and AMD head-on tends to be a pretty dangerous proposition!


Uttar
 
I could be horribly wrong, but I'd tend to believe Intel is thinking about this more from the GPGPU point of view. They know that there is a huge chunk of the server market that IS massively parallel, so once the proper synchronization primitives exist in GPUs (CUDA is a good first step!), NVIDIA and AMD will compete there with GPUs or pseudo-GPUs. They can't afford to lose that business, and at the same time, they need some extra bundling power versus AMD in the OEM space.

Servers are obviously a major opportunity for NVIDIA and GPGPU. It will be interesting to see where CUDA goes and how quickly NVIDIA can begin to address this market. Will it be with a double precision refresh to G80?
 
Nvidia has done very well as a supplier of graphics chips to the PC industry. They have surpassed every one of the 50-odd (maybe 100) graphics suppliers that existed in the early 90s when they started, until they stood shoulder to shoulder with ATI.

It is my belief that ATI knew they would not be able to compete going forward and managed to get bought for a good market cap even as their marketshare was dwindling. Of course the other argument is that ATI was doing just fine and was shrewd enough to link up with AMD because they had some pre-ordained vision. I personally subscribe to the former theory.

In any case, we are left with the current situation and Nvidia is king of the hill. It is my belief that for Nvidia to continue to thrive they need to set their sights on the next target. That is, they need to achieve parity in the next semiconductor space that is synergistic with their competencies in graphics and software.

I think Nvidia needs to be able to say in 5 years that they will stand shoulder-to-shoulder with Intel.

Can they do this? I don't know. The questions are:
- Can Nvidia beat the Intel hegemony of sales, marketing, and pricing?
- Can Nvidia beat Intel and its in-house fab advantage?
- Can Nvidia beat Intel's deep, multi-headed product development agenda?

There are other questions of course. Overall the problem is huge, but Nvidia is a crafty insightful competitor.

What if standing shoulder-to-shoulder with Intel is too big a target? What are the alternatives?

There are some. Portable consumer electronics (CE) is huge and changing quickly. There will be only a few dominant semiconductor suppliers in 10 years as the cell phone, PDA and media player shrink down to a form factor and usage model that everyone accepts. The hundreds of semiconductor developers today will dwindle, just as the graphics suppliers of 10 years ago have dwindled to today's handful.

Alternatively, maybe Nvidia doesn't slay the Intel dragon, but simply takes away part of their pie: the engineering/scientific piece, with its associated high margins. Further, what if Nvidia adopts the SGI or Sun model of branding their own systems in that space? A very big change in margin and business model from that of a semiconductor supplier, but maybe Nvidia can change the rules of the game just as they did with graphics chips (e.g. raise prices by adding and driving value).

Just some thoughts.
 
Alternatively, maybe Nvidia doesn't slay the Intel dragon, but simply takes away part of their pie: the engineering/scientific piece, with its associated high margins. Further, what if Nvidia adopts the SGI or Sun model of branding their own systems in that space? A very big change in margin and business model from that of a semiconductor supplier, but maybe Nvidia can change the rules of the game just as they did with graphics chips (e.g. raise prices by adding and driving value).
I don't even know that they have to do that anymore, considering what we're seeing with CUDA and Quadro Plex.
 
Will it be with a double precision refresh to G80?
At launch, NVIDIA said a refresh with FP64 would be ready in "about a year". We'll see if it turns out to be more or less than that soon enough, hopefully. Also, that might just be a quite interesting differentiating factor for the professional market in the future. They could just not expose FP64's full performance for the consumer market, in order to encourage people to buy the more expensive alternatives. They already did the exact same thing for wireframe rendering with Quadros in the past, after all...

It'd be interesting if they also had some other incentives for the professional GPGPU market. Obviously, only exposing CUDA to that market would be a recipe for disaster, so unless they go nuts, they won't do anything that sweeping. NVIDIA would benefit a lot from more game middleware using CUDA as an acceleration path for misc. things - they need to encourage that, not prevent it.

wolf2 said:
I think Nvidia needs to be able to say in 5 years that they will stand shoulder-to-shoulder with Intel.
Ideally, they certainly should. And if they do, it'd be in their best interests to be ready before that, given Intel's roadmap. As I said, though, I'm not convinced they can compete in a "traditional" way, by beating Intel at their own game.

NVIDIA can still add a lot of value by focusing on much more "minimalist" parts of the CPU market. What's even cooler from that point of view is that they wouldn't be alienating their own market, because they haven't entered it yet! Think of the VIA C7 I mentioned above. Then look at the applications the entry-level part of the market is running. If NVIDIA could embed a processor of similar or higher performance in a single-chip solution (including southbridge, unlike for Fusion!), they could make a very interesting entry in the market.

The logical evolution of that is to include the same cores in some of the GPUs as control processors. There are many advantages to having single-threaded cores that can run certain classes of programs on the same die as the GPU/stream processor. This could also make some possibly game-oriented workloads, such as raytracing and physics (beyond "effects"/particle physics), more efficient on GPUs in the future.

Alternatively, maybe Nvidia doesn't slay the Intel dragon, but simply takes away part of their pie: the engineering/scientific piece, with its associated high margins. Further, what if Nvidia adopts the SGI or Sun model of branding their own systems in that space?
That's a very appealing proposition, indeed. If they managed to do that successfully, they could definitely have a huge market with ludicrous profits at their disposal. Intel obviously knows about that risk, though, which is why NVIDIA should hurry up and get there before Larrabee and other related projects do. They need to get a foothold in the market before it's too late.

Just some thoughts.
Indeed, and very much appreciated too. This subforum was a tad too inactive lately for my tastes, and this seems to be doing a pretty good job of waking it up ;)


Uttar
 
The Baron said:
I don't even know that they have to do that (take part of Intel's pie) anymore, considering what we're seeing with CUDA and Quadro Plex.

Uttar said:
Ideally, they certainly should (Nvidia needs to be able to say in 5 years that they will stand shoulder-to-shoulder with Intel.). And if they do, it'd be in their best interests to be ready before that, given Intel's roadmap. As I said, though, I'm not convinced they can compete in a "traditional" way, by beating Intel at their own game.


Well, overall we're in agreement that Nvidia is headed down a path of dominating and owning the engineering/scientific space. CUDA is a leading-edge effort to address parallel computing tasks and Quadro Plex is the likely vehicle. Likewise, any territory that Nvidia carves out at Intel's expense will use the non-traditional methods characterized by CUDA or its evolution.

QUESTION: Can Nvidia use the CUDA architecture concept to implement a highly parallel x86 instruction set? We know Transmeta used an interpretive, micro-coded engine to implement one. Can a hybrid CUDA/RISC architecture be used to make a leap in x86 execution? Sort of a reverse version of an x86 CISC with SIMD instructions.

Uttar said:
If NVIDIA could embed a processor of similar or higher performance in a single-chip solution (including southbridge, unlike for Fusion!), they could make a very interesting entry in the market.

The logical evolution of that is to include the same cores in some of the GPUs as control processors. There are many advantages to having single-threaded cores that can run certain classes of programs on the same die as the GPU/stream processor. This could also make some possibly game-oriented workloads, such as raytracing and physics (beyond "effects"/particle physics), more efficient on GPUs in the future.

Well, this is also an interesting idea, but I guess my question is: how does Nvidia dominate? It seems to me a simple ARM (for example) mated with an advanced GPU is a very traditional idea. I think it is too timid a development when what is needed is an SoC, yes, but one that is a dominant solution superior in every way (fast CISC, fast GPU, parallelism, low power).
 
QUESTION: Can Nvidia use the CUDA architecture concept to implement a highly parallel x86 instruction set? We know Transmeta used an interpretive, micro-coded engine to implement one. Can a hybrid CUDA/RISC architecture be used to make a leap in x86 execution? Sort of a reverse version of an x86 CISC with SIMD instructions.
No. CUDA's paradigm (parallel operations on large multidimensional data sets) is so different from basic x86 (serial operations on four or eight bytes at a time unless you use SIMD instructions) that they would gain nothing from CUDA. Unless, of course, they go with a standard x86 CPU with dedicated on-die vector units (think Fusion) and their own instruction set to drive them (in addition to exposing them through SSE or something).

But I think AMD is already thinking that :p
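
To put the paradigm gap in code terms, here's a toy example of my own (names made up): the same trivial operation written serially, x86-style, and as a CUDA kernel. There's nothing in the CUDA version you could reuse to run arbitrary serial x86 code faster - the loop simply disappears into thousands of independent threads.

Code:
// Serial, x86-style C: one element at a time, the loop carries the control flow.
void saxpy_serial(int n, float a, const float *x, float *y)
{
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// CUDA: the loop is gone; each of thousands of threads owns exactly one element.
__global__ void saxpy_cuda(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}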
 
Well, overall we're in agreement that Nvidia is headed down a path of dominating and owning the engineering/scientific space.
I'd just like to insist on the fact that Intel will likely compete head-to-head with them in that space soon enough. If NVIDIA plays their cards right, they'll definitely dominate there - but now is not yet the time for overconfidence.
CUDA is a leading-edge effort to address parallel computing tasks and Quadro Plex is the likely vehicle.
Well, just thought I'd make this a bit more precise - I would tend to believe Quadro Plex is the brand name for their high-end imaging solutions. It is not impossible that they would be adopting the same product name for both markets, although that'd surprise me - but sure, the basic idea and casing would be the same, I'd imagine.
to implement a highly parallel x86 instruction set?
It depends a bit on what you mean by that, but...
We know Transmeta used an interpretive, micro-coded engine to implement one. Can a hybrid CUDA/RISC architecture be used to make a leap in x86 execution? Sort of a reverse version of an x86 CISC with SIMD instructions.
The fundamental problem for that with G80 is the batch size: you need to run the same instruction, with the same registers, across 16 to 32 operations. The control logic just isn't there. It doesn't matter how good your preprocessing ("code morphing") is, because it just won't work there.
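
To sketch why (toy kernel of my own, not from any NVIDIA doc): within a batch, a data-dependent branch doesn't really branch - the hardware walks the whole batch through both paths with the non-taking threads masked off - so anything as control-flow-heavy as morphed x86 code would degenerate badly.

Code:
// Toy illustration of the batch-size problem. Within one batch (16-32 threads on
// G80-class hardware), a data-dependent branch is handled by masking/serialization:
// the batch executes BOTH paths if its threads disagree.
__global__ void divergent_kernel(const int *flags, float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    if (flags[i])                     // if flags differ within a batch...
        data[i] = data[i] * 2.0f;     // ...the whole batch steps through this path,
    else
        data[i] = data[i] + 1.0f;     // ...and then through this one too.
}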

Anyway, following this, I figured I might as well Google to see if anyone has had the same idea on how to fix this problem that I have (although I obviously didn't develop it, or have a sufficiently precise idea of how to implement it efficiently). It turns out someone already had that idea back in 1998. Hmm! :) Sorry for being such a tease, still reading the paper... ;)


Uttar
 
Uttar said:
I would tend to believe Quadro Plex is the brand name for their high-end imaging solutions. It is not impossible that they would be adopting the same product name for both markets

For my clarification what are the two ("both") markets you are referring to? To my mind, engineering/scientific and visualization are one and the same, or are you thinking of something else?


Uttar said:
I'd just like to insist on the fact that Intel will likely compete head-to-head with them in that space soon enough. If NVIDIA plays their cards right, they'll definitely dominate there - but now is not yet the time for overconfidence.

Yes, I tend to agree with you. Product and marketshare do not always correlate.
 
For my clarification what are the two ("both") markets you are referring to? To my mind, engineering/scientific and visualization are one and the same, or are you thinking of something else?
Visualization versus GPGPU applications.
 
Sorry for being such a tease, still reading the paper...
"Shared Control Multiprocessors A Paradigm for Supporting Control Parallelism on SIMD-like Architectures" - Nael B. Abu-Ghazaleh

Well, it turns out that even though that paper has some very interesting ideas, and nicely summarizes a variety of related domains, I don't think the solution is really applicable to GPUs. That's because the paper proposes an apparent solution to the control logic problem for the case where your minimum batch size is already minimal, and the only limitation to achieving that is the control units' bottleneck. That's the kind of case you'd be thinking of for multi-chip, massively parallel machines, I'd assume. With GPUs, your entire architecture is optimized around your batch size (including 8-wide ALUs, less-ported register files, etc.) and the costs would be much more significant.

In the unlikely event anyone wants to waste as much time reading this as I did (and no, I didn't read ALL of it yet!), here's the link: http://citeseer.ist.psu.edu/abu-ghazaleh98shared.html - please note that the PDF doesn't seem to be complete, only the postscript file is... meh. But unless you have a fascination for the subject even though it (probably!) doesn't apply to GPUs, don't bother, hehe.

Now, in the unlikely event you'd get your batch size down (or, rather, back to G7x VS levels!), then with Code Morphing via Instruction Level Distributed Processing, you could get some interesting things going. One interesting way to implement ILDP on the GPU would be to allow a program to "spawn" another program/thread and then use synchronization primitives.

No matter how you achieve it, it's fundamentally only possible to do this kind of thing efficiently with MIMD (think G7x VS), not SIMD (same, but PS), as far as I can see. And then, what you're doing in practice is what some would describe as "anti/reverse-hyperthreading", just with an architecture that is even less suited to the task than AMD's or Intel's. I very much doubt the approach's efficiency, let alone NVIDIA's ability to manage it before AMD or Intel does! And unless I'm missing something, this looks like a rather bad design anyway, unless your ultimate goal in life is to make single-threaded code run as fast as possible, rather than getting an acceptable level of performance at higher efficiency...


Uttar
 
The Baron said:
Visualization versus GPGPU applications.

To my mind GPGPU (general-purpose-GPU ?) is a technology. Not a market. I'd surmise that a GPGPU would be used to address a market such as visualization, or am I off-base?
 
To my mind GPGPU (general-purpose-GPU ?) is a technology. Not a market. I'd surmise that a GPGPU would be used to address a market such as visualization, or am I off-base?

I'd agree with that. Have you seen AMD's stream computing launch video? Yeah, it's AMD and not NV, but it does a reasonable job of talking about what the market opportunities are for GPGPU.
 
To my mind GPGPU (general-purpose-GPU ?) is a technology. Not a market. I'd surmise that a GPGPU would be used to address a market such as visualization, or am I off-base?
No, I mean Quadro Plex (according to Uttar's guess, which I tend to agree with) will probably be the dedicated video card platform for high-end visualization--medical, oil, etc. There will probably be a different brand name for an outboard box that contains video cards specifically for GPGPU applications (with minor differences, potentially related to the number of NVIO chips per board and the like).
 