Intel to make bid for nVidia? **Reuters**

I too share the feeling that none of these mergers are, in the long run, for the benefit of the high-end GPU market.

I think many people feel this way. But what I've been pondering as of late is how Intel and AMD would justify axing high-end GPU development while still designing high-end CPUs.

I get the impression that the GPU justification is that so few people need a high-end GPU's level of performance, but the exact same argument is true for CPUs. After all, how many business and office workers would care, let alone notice, if their C2D or A64 were replaced with a C7? Not many, I would hazard, and if I'm not mistaken this is the largest market segment. How come AMD and Intel haven't created CPUs that are adequate for their users, much like how Intel treats their IGPs as adequate?
 
I think many people feel this way. But what I've been pondering as of late is how Intel and AMD would justify axing high-end GPU development while still designing high-end CPUs.

I get the impression that the GPU justification is that so few people need a high-end GPU's level of performance, but the exact same argument is true for CPUs. After all, how many business and office workers would care, let alone notice, if their C2D or A64 were replaced with a C7? Not many, I would hazard, and if I'm not mistaken this is the largest market segment. How come AMD and Intel haven't created CPUs that are adequate for their users, much like how Intel treats their IGPs as adequate?

If you go by the idea that a GPU is a programmable graphics processor, then most office workers could go without a GPU entirely without noticing.

I think replacing the CPU with nothing would be a lot more noticeable.

In other markets, there can be systems that have thousands of processors, but few if any GPUs. If there are a lot of GPUs, there are still a good number of CPUs to give them data to crunch on.

High-end GPUs need high-end processors to do their job, no matter what. High-end processors can be used without the GPUs.

If push came to shove, the GPU camp isn't going to trump the CPU group.

If they share silicon, things might get worse; we'll find out how much worse when they try to port ATI's circuit designs onto AMD's process.
It'll probably be worse than the trouble AMD had going from bulk Si to SOI.
If that's the case, a lot of effort will be used up just getting the GPUs to work, forget about the high end.
 
If you go by the idea that a GPU is a programmable graphics processor, then most office workers could go without a GPU entirely without noticing.

I think replacing the CPU with nothing would be a lot more noticeable.

If you ignore Vista for a moment, then yeah, what is required to draw your typical 2D GUI would be pretty insignificant. But at the same time, how much of a CPU would really be required to take user inputs and apply them to the word processor's/email client's/web browser's memory?

In which case neither is replaceable with nothing. But my point still stands: if this is all that's required, Intel and AMD should be building and selling far more integrated, much simpler, and much cheaper products instead of these clock speed and IPC behemoths that we have today.

Which makes me think that the only reason we have the CPUs we do today is because high-end products create interest, and interest sells. What is unfortunate is that they don't seem to have realized that the same can be made true for GPUs, and this is what ultimately has been holding back IGPs and will probably cause the death of high-end GPUs in the case that Nvidia goes out of business or gets bought.

In other markets, there can be systems that have thousands of processors, but few if any GPUs. If there are a lot of GPUs, there are still a good number of CPUs to give them data to crunch on.

That would be something interesting to look at: how many GPUs are sold to gamers versus how many CPUs are sold to the Top500 supercomputers. And then break GPUs down into single-chip and multi-chip/board configurations.

I honestly haven't the foggiest idea how that comparison would truly look, but I think it would be safe to assume that GPUs are at least as profitable as those supercomputers are.

You may notice that I'm intentionally ignoring web servers; this is because they are primarily I/O-bound and much like my hypothetical office machine listed above, except now you might want to scale to more cores, a la Niagara.

High-end GPUs need high-end processors to do their job, no matter what. High-end processors can be used without the GPUs.

Are XBox360 and PS3 examples of the future? I certainly wouldn't consider either of their CPUs particularly high end if compared to a C2D or X2 on current programs.

What's to say that a GPU won't progress to being programmable enough, and with virtualized memory, be capable of performing low-end CPU tasks? Just like high-end CPUs today are capable of doing all the rendering a GPU is capable of.

But going into the future, what type of workloads is one expecting? Will they be TLP-oriented and parallel, or do we still need more increases in clock speed and IPC? Personally I say we need both, but from the appearance of everything we will be getting processors (CPU and GPU) that fall firmly into the first type of workload, and the only reason that the CPU will swallow the GPU is because the CPU companies are bigger.
 
The cool thing about D3D10 is that it makes the GPU much less reliant upon the CPU. Trouble is, it'll be years before games get to take full advantage of that.

Meanwhile, Aero Glass will carry the flag: lots of swanky visuals with practically zero CPU load.

Jawed
 
AMD right now has three varieties from NV: an integrated GF6, which has just been upgraded to an integrated GF7, both of which have 4 pipelines, perform better than the X700, and have more features.
I was under the impression that the GF6 IGP has only two pipes, that the GF7 IGP would stay with two pipes but add the extra MADD per pipe and drop everything to 90nm, and that RS690's X700-based (read: SM2b) IGP would have four pipes (not the X700's full eight :rolleyes:). As it is, the GF7-based MCP61 appears to stay in the same class as the similarly two-pipe 6100/6150 and RS485.

As for the rumor, though I don't think it's likely (given Intel's upcoming 80-ALU CPU, NV's market position and value, egos, etc.), it's got to be worrying someone at ATI-AMD. I'm also not sure how a $10M investment in ImgTec in any way precludes a $10B "investment" in Nvidia.

Edit: NP, Razor. BTW, confirmation that RS690 will have a quad-pipe IGP.
 
:oops: Hmm, odd, I thought they had 4, sorry!

Ah, the GF 6100 IGP update had double MADD execution like the GF7's; I thought it was 2 times the pipelines.
 
If you ignore Vista for a moment, then yeah, what is required to draw your typical 2D GUI would be pretty insignificant. But at the same time, how much of a CPU would really be required to take user inputs and apply them to the word processor's/email client's/web browser's memory?

The key is that a CPU can be made to run a basic software renderer.
A GPU can't be made to take user input and drive the system.

The lower the need for a GPU, the more acceptable it is to have the CPU or some basic video hardware do some rendering.

There is no corresponding increase in the utility of a GPU if the CPU can do less and less.
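
As a crude illustration of that asymmetry (just a sketch, not anything a real product ships): a wireframe triangle can be rasterized into a plain in-memory framebuffer with nothing but integer arithmetic and then dumped as a PPM image, so any general-purpose CPU can "render" unaided. There is no equivalent little program that would let a GPU take keyboard input and drive an operating system.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define W 320
#define H 240

/* A dumb in-memory framebuffer: one 0x00RRGGBB pixel per location. */
static uint32_t fb[W * H];

static void put_pixel(int x, int y, uint32_t color)
{
    if (x >= 0 && x < W && y >= 0 && y < H)
        fb[y * W + x] = color;
}

/* Bresenham line drawing: integer adds and compares only, nothing a
 * general-purpose CPU needs graphics hardware for. */
static void draw_line(int x0, int y0, int x1, int y1, uint32_t color)
{
    int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;

    for (;;) {
        put_pixel(x0, y0, color);
        if (x0 == x1 && y0 == y1)
            break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }
        if (e2 <= dx) { err += dx; y0 += sy; }
    }
}

int main(void)
{
    /* "Render" a wireframe triangle entirely on the CPU. */
    draw_line(20, 200, 300, 200, 0xFF0000);
    draw_line(20, 200, 160, 20, 0x00FF00);
    draw_line(300, 200, 160, 20, 0x0000FF);

    /* Dump the result as a binary PPM: no GPU involved anywhere. */
    printf("P6\n%d %d\n255\n", W, H);
    for (int i = 0; i < W * H; i++) {
        putchar((fb[i] >> 16) & 0xFF);
        putchar((fb[i] >> 8) & 0xFF);
        putchar(fb[i] & 0xFF);
    }
    return 0;
}
```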

The cost of low-end CPUs is such that there is little difference in price to most buyers whether it's a Celeron or a Centaur, besides the fact that a bunch of IT departments only buy Intel due to the branding.

We buy newer cores just because Intel's not going to make Pentium Pros on its latest and greatest fabs. It's not worth maintaining that platform when the costs of the units are so low anyway.

In which case neither is replaceable with nothing.
Not exactly: the CPU can struggle along pathetically without a GPU.
The GPU cannot do the same thing.

But my point still stands: if this is all that's required, Intel and AMD should be building and selling far more integrated, much simpler, and much cheaper products instead of these clock speed and IPC behemoths that we have today.
The high end needs more performance, and the high end brings margins. It is cheaper to amortize the development costs of the high end by using it again for the mid and low end, as well as suppressing any upstarts that might try to weasel in from below.

Which makes me think that the only reason we have the CPUs we do today is because high-end products create interest, and interest sells.
We haven't run out of a need for performance growth, though Intel and AMD would slow core introductions, if they could be confident the other would do the same.
The high-cost variants exist more as marketing strategies than as income sources. It's the less-than-insane versions that sell well.

What is unfortunate is that they don't seem to have realized that the same can be made true for GPUs, and this is what ultimately has been holding back IGPs and will probably cause the death of high-end GPUs in the case that Nvidia goes out of business or gets bought.

GPUs already use high-end SKUs to generate interest and create price segregation. If Nvidia and ATI and everyone else magically disappeared, then AMD (let's say the merger didn't happen) or Intel would step in, because the primary obstacle to their entry is the great lead in expertise that Nvidia has on its own turf.

That would be something interesting to look at: how many GPUs are sold to gamers versus how many CPUs are sold to the Top500 supercomputers. And then break GPUs down into single-chip and multi-chip/board configurations.
The Top500 aren't the only consumers of large numbers of processors. The entire low to mid-end server market is a heavy user of CPUs.

I honestly haven't the foggiest idea how that comparison would truly look, but I think it would be safe to assume that GPUs are at least as profitable as those supercomputers are.
The top supercomputers often get discounted rates; direct profits aren't the only concern in that market.

You may notice that I'm intentionally ignoring web servers; this is because they are primarily I/O-bound and much like my hypothetical office machine listed above, except now you might want to scale to more cores, a la Niagara.
Not every server only has lightweight threads. Niagara's niche is a bit broader than some thought it would be, but not that broad.

Are XBox360 and PS3 examples of the future? I certainly wouldn't consider either of their CPUs particularly high end if compared to a C2D or X2 on current programs.
I don't think you can put CELL in the low-end category, and Xenon isn't entirely that bad.

What's to say that a GPU won't progress to being programmable enough, and with virtualized memory, be capable of performing low-end CPU tasks? Just like high-end CPUs today are capable of doing all the rendering a GPU is capable of.
There's no reason why not, but is it really a GPU or just a CPU that's good at graphics? If it becomes the central processing unit of the system, the GPU moniker would be even more meaningless than it is now.

But going into the future, what type of workloads is one expecting? Will they be TLP-oriented and parallel, or do we still need more increases in clock speed and IPC? Personally I say we need both, but from the appearance of everything we will be getting processors (CPU and GPU) that fall firmly into the first type of workload, and the only reason that the CPU will swallow the GPU is because the CPU companies are bigger.

Neither type is going to stagnate entirely with IPC and clock speed. There will always be a need for single-threaded performance, if only to fully utilize the parallel units in less than ideal conditions.

If CPU companies swallow up the GPU companies, it's in no small part because the market for CPUs is just bigger, and because GPUs need CPUs far more than the other way around.
 
The key is that a CPU can be made to run a basic software renderer.
A GPU can't be made to take user input and drive the system.

The lower the need for a GPU, the more acceptable it is to have the CPU or some basic video hardware do some rendering.

There is no corresponding increase in the utility of a GPU if the CPU can do less and less.

In reality you are absolutely correct.

The general point I was trying to make is that your average user uses so little processing power that the CPUs we have today make little sense. This is also the reasoning that the CPU manufacturers would likely use to axe high-end GPU development. And in both of those cases it would make very little sense to put many resources into either.

But in reality Intel and AMD are building very nice CPUs because some people need this performance and they are willing to pay for it. The same is true for GPUs, yet whoever is in charge of these types of decisions at Intel and AMD doesn't seem to realize this, and they try to keep the GPU as small and low-end as possible.

The cost of low-end CPUs is such that there is little difference in price to most buyers whether it's a Celeron or a Centaur, besides the fact that a bunch of IT departments only buy Intel due to the branding.

We buy newer cores just because Intel's not going to make Pentium Pros on its latest and greatest fabs. It's not worth maintaining that platform when the costs of the units are so low anyway.

Where I was trying to lead that thought was that Intel keeps their IGP small, since no one 'needs' that performance, to minimise cost and maximise profit. The same could be done for CPUs: they could have stayed with a Pentium Classic, kept shrinking the die, ramping the clock speed, and adding cache and features until they had an SoC. After all, no one 'needs' that performance, in the very same sense that no one 'needs' a fast IGP.


Oh, and I do realize that there is a lower bound on how little silicon one would want to use, and that going with a single high-end design can help cut costs. I just think that the way Intel has treated the GPU is kind of silly when you compare their apparent reasoning for it with the world in which most of their CPUs live.


The Top500 aren't the only consumers of large numbers of processors. The entire low to mid-end server market is a heavy user of CPUs.
....
Not every server only has lightweight threads. Niagara's niche is a bit broader than some thought it would be, but not that broad.

I'll admit I don't know a lot about web serving, but what tasks would a web server be doing that aren't very latency tolerant and don't have a lot of concurrent tasks? If you have to send data over the internet, isn't that latency going to let you mask the delay from any heavyweight threads run on a CPU like Niagara? I honestly don't know, but it seems like it should.
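
To make that intuition concrete, here's a hypothetical toy sketch (a bare thread-per-connection echo server, not how any real web server is built, and the port number is an arbitrary choice): every connection gets a thread that spends nearly all of its life blocked on the network, so the per-request compute is tiny next to the wire latency. That's exactly the shape of workload where a many-threaded, throughput-oriented core like Niagara can hide one thread's stall by running another.

```c
#include <arpa/inet.h>    /* htons */
#include <netinet/in.h>   /* struct sockaddr_in */
#include <pthread.h>
#include <stdlib.h>       /* malloc, free */
#include <sys/socket.h>
#include <sys/types.h>    /* ssize_t */
#include <unistd.h>       /* close */

/* Each client gets its own thread; the thread is blocked on the network
 * for almost all of its lifetime, leaving the core free to run the many
 * other connection threads in the meantime. */
static void *handle_client(void *arg)
{
    int fd = *(int *)arg;
    free(arg);

    char buf[1024];
    ssize_t n;

    /* recv() waits on round trips measured in milliseconds -- an eternity
     * next to the trivial amount of CPU work done per request below. */
    while ((n = recv(fd, buf, sizeof buf, 0)) > 0)
        send(fd, buf, n, 0);    /* "process" the request: just echo it */

    close(fd);
    return NULL;
}

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(8080);   /* arbitrary port for the sketch */

    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 128);

    for (;;) {
        int *fd = malloc(sizeof *fd);
        *fd = accept(srv, NULL, NULL);

        pthread_t t;
        pthread_create(&t, NULL, handle_client, fd);
        pthread_detach(&t);
    }
}
```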

I don't think you can put CELL in the low-end category, and Xenon isn't entirely that bad.

I'm not trying to say that they are necessarily low-end or bad, but they made trade-offs that hurt serial general-purpose computing very badly. That is why I tried to qualify my comment with 'current programs'. If the future goes one way, CELL will go down fondly in the history books as being revolutionary; if the future goes the other way, it will go down in the history books right next to Alpha, Itanium, and many others.

There's no reason why not, but is it really a GPU or just a CPU that's good at graphics? If it becomes the central processing unit of the system, the GPU moniker would be even more meaningless than it is now.

They will probably become one and the same, but I imagine the name will still indicate what something is going to be good at. But what would still differentiate a chip called a GPU from one called a CPU is a matter of how the internal data paths are configured, what functional units are emphasised, batch sizes, etc.

After all what's the difference between Conroe and CELL? Would you use them for the same type of tasks? Of course my example here is flawed since both are considered CPUs, but in the future who knows!
 

Now that they are in debt to the bank due to ATI, it would be more likely the other way around (NV taking over AMD). :D

I still believe IBM would be a perfect candidate.
They previously collaborated in producing several GPUs (FX 5700 Ultra, etc.).
Both have ties to AMD, and both are in the HyperTransport Consortium.
Both collaborated on the PS3.
Either one of them has an extensive Tech/IP portfolio.
Intel doesn't trust either one of them, apparently.
IBM certainly has the big bucks to do this.


TSMC would be a distant second.
They already have the fabs; why not purchase one of their best clients and get a bigger piece of the profits (and cut one of the middlemen out of the chain)?
Most likely Philips (one of TSMC's main shareholders) would be interested too, due to NV's multimedia and mobile/phone graphics chips.
 
Booooring!

Now Sony might intrigue Jen-Hsun. . .

They got paid by Sony under contract.
That doesn't mean that they don't know it would be a bad deal (and bad PR, NV is good at it :D) to allow themselves to be bought by a company on a downward spiral.

Jen-Hsun is no fool...;)
 
The worrying thing about the ATI merger is the simple fact that the drive to build integrated bread-and-butter platforms and enter the mobile space may turn the high-end GPU race for them into an "R&D" project of less concern and status within the company, with future add-in-board high-end GPUs becoming AMD's version of Itanic or Intel's 80-core monster -- nice research projects to trot out to the press at conferences, but not their main focus.

+

All I'm saying is, the future impact of AMD/ATI's merger on the GPU market is uncertain, and by no means is it a slam dunk against Nvidia. I worry about premature "maturity" of the GPU market, of this merger turning the contest into a replay of the CPU market, with real innovation giving way to a treadmill of clocks and tweaks.

+

I preferred the ole days of ATI vs Nvidia, mano a mano. I would hate to see this turn into Intel business unit vs AMD business unit. And ATI fanboys had better hope that AMD doesn't kill off NV and this turns out to be AMD vs Intel GMA*, because that will be the death knell of the high-end GPU market.

Spot on, DemoCoder!

I can understand why AMD wanted ATI, but the other way around? I doubt it with regard to the high end ATI R&D for the next generation. :cry:
 
It is highly doubtful, but the previous experience means no one really has the cred to say it's impossible. . .
Wait, no one?

Man, you were the one to doubt the power of Chinese written internet entries, not me!
 