[H]ardOCP's (Kyle Bennett) thoughts on ATI/AMD, Intel, and Nvidia.

I hate to say it, but the majority of your post kept reminding me of the classic Bill Gates line "Nobody needs more than 640 K" (I'm just paraphrasing).

It seems the software people continually use and abuse processing power, so while in 2010 we could have a 20 mm² chip that would handle today's workloads... that won't mean much in 2010 itself, with the refresh/update of Vista, new Office crap, and much more immersive productivity tools and games.

Josh, I read his reply a little differently. He's saying that there will always be an envelope to be pushed, which is why the CPU/GPU sounds like a wonderful little fantasy for high-end platforms: it doesn't seem practical. As long as there's something to accelerate, the need for more powerful computing and processors will always exist. I'm a person who fundamentally believes there is never "enough" or "too much", because the more power you have, the more flexibility your software can have.
 
We're still a long way behind holodeck-like rendering quality in a PC add-in card. ;)
Until that happens, high-end GPU vendors will surely stay in business.
 
Heh, I guess my writing isn't very clear today, I'm thinking I'm out of practice!

I agree that there will be a market for NVIDIA for a long time to come, as software people will always find ways to make their product "better" which will consume more resources. Scientists will create larger data sets to get crunched to make more accurate simulations. Graphics people will put new effects and features in their engines. One point that I don't think I pushed enough is that there is a lot of research going on in terms of visual delivery media. While the "holodeck" is the most extreme concept (and most likely the end of the line when it comes to visual simulation/representation) that is obviously a LONG ways away. And the amount of computing power needed would obviously be phenomenal.

My point with the CPU/GPU in 15 years is that it would render realistic graphics on media like today's LCDs and CRTs. This is assuming that there are no new devices out there that are more accurate, denser pixel display, 3D imaging etc. In cases like that, then obviously more power is needed, and guys like NV will continue to push it.

So yes, NVIDIA will have a struggle on its hands with AMD and Intel really getting into the fray, but I also believe that NV will continue to branch out into other areas and stay relevant as a tech company well into the future.
 
What I keep saying is that the potential threat to Nvidia is the possibility of the historical graphics IHV investment model reversing.

Historically, companies spent hundreds of millions of dollars of R&D to bring that new-architecture flagship top-end performance part to market. That high-end market, as we know, is only a few percent of the total market. They then leveraged that investment down the GPU food chain to the other ~90%+ of the discrete market. But that other 90% HAS TO BE THERE for that model to work. If it's not, and you don't have those millions of low/mid cards to spread that R&D over, then that financial model collapses on itself, even though you were never "beaten" on the performance front head-to-head by superior engineering.

If Intel is serious about making reasonably good integrated and low-end graphics now (and this is still a Big If in my mind -- not whether they're going to take a shot at it, as that is now very clear, but whether they have the will and intestinal fortitude to keep at it and keep investing for multiple product cycles), and AMD/ATI is as well, then you have the possibility of that historical investment model reversing itself -- so that the only companies who can afford to make high-end graphics are the ones leveraging their graphics core(s) and software investment UP from the integrated high-volume end rather than DOWN from the cutting-edge performance end.
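
To put rough numbers on that amortization argument (a purely illustrative sketch -- every figure below is hypothetical, not from any vendor's books):

```python
# All numbers are hypothetical, purely to illustrate the amortization argument.
rd_cost = 300e6           # R&D for one new flagship architecture, in dollars
high_end_units = 1e6      # the few-percent enthusiast slice
low_mid_units = 15e6      # the ~90%+ of the discrete market derived from it

# per-unit R&D burden when the whole discrete stack shares the bill
full_stack = rd_cost / (high_end_units + low_mid_units)

# per-unit R&D burden if integrated graphics eat the low/mid volume
high_end_only = rd_cost / high_end_units

print(f"full stack:     ${full_stack:.2f} of R&D per card")
print(f"high end alone: ${high_end_only:.2f} of R&D per card")
```

Once the low/mid volume disappears, the per-card R&D burden explodes, which is exactly the collapse described above.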
 
We're still a long way behind holodeck-like rendering quality in a PC add-in card. ;)
Until that happens, high-end GPU vendors will surely stay in business.


The 'threat' to Nvidia might be that Intel has a transistor budget that needs to be utilized. With Moore's law intact but single-thread performance gains diminishing, the question facing Intel is where to spend that budget.
 
The 'threat' to Nvidia might be that Intel has a transistor budget that needs to be utilized. With Moore's law intact but single-thread performance gains diminishing, the question facing Intel is where to spend that budget.

Don't discount the fact that profit margins are shrinking rapidly in the CPU market too.
Even Hector, I think, said that higher than 50% is no longer a viable goal -- at least not as it used to be, even if this price war with Intel slows down.
Manufacturing alliances are inevitable.

Nvidia has no such burden, yet.
 
JohnMST said:
I hate to say it, but the majority of your post kept reminding me of the classic Bill Gates line "Nobody needs more than 640 K" (I'm just paraphrasing).
I don't know the exact context of that quote, but I think one way to consider it is that if we hadn't moved beyond the PC's basic functions, such as word processing without a user-friendly interface, we might not have needed more than 640K. So that could be categorized as not properly predicting the importance of advanced GUIs and new applications.

So, what I'm saying is actually quite similar, if and only if put in that context. The *kind* of applications 90%+ of consumers out there use don't need more power than is available today. It's not even just the specific applications; it's a fundamental characteristic of those kinds of workloads. There is nothing a mainstream consumer is using today that is going to need a lot of CPU power in the future.

Also, notice that I say 'CPU power'. What I imply by that is that some workloads, such as video encoding and editing, might need more performance in the future. But those problems tend to map better to GPUs and exotic architectures (such as CELL and Larrabee) than to CPUs.
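
As a toy illustration (not how a real encoder is written, just the shape of the work), this is the kind of per-pixel, data-parallel loop that maps naturally onto throughput hardware:

```python
# Toy stand-in for a media workload: a per-pixel brightness adjustment.
# Every output pixel depends only on its own input pixel, so all of them
# could be computed at once -- which is why this style of work maps well
# to GPUs and other throughput architectures.
def adjust_frame(frame, gain):
    return [[min(255, int(p * gain)) for p in row] for row in frame]

frame = [[10, 200, 30], [40, 50, 255]]
print(adjust_frame(frame, 1.2))
```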

JoshMST said:
Considering that both ArcGIS and AutoCAD are putting out major updates every two years or so
Unless I'm missing something, those are actually massively parallel workloads, so they could eventually move to throughput-oriented processors (GPUs; CELL; Larrabee; etc.)

It's easy to find workloads that benefit from massive parallelism, it's harder to find workloads that benefit only from moderate parallelism (and need the performance), and it can be even harder to extract it. There are very notable exceptions to this rule, of course, and with a bunch of programming effort, miracles can be made.
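
A toy sketch of that contrast (illustrative only): the first loop's iterations are independent and could in principle run on thousands of threads, while the second is a nonlinear recurrence where each step needs the previous one, so extra cores buy you nothing:

```python
# The easy case: iterations are independent, so the work can be spread
# across as many threads as you have.
data = range(1_000_000)
squares = [x * x for x in data]

# The hard case: a nonlinear recurrence (logistic map). Step i needs the
# result of step i-1, so throwing more cores at it doesn't speed it up.
x = 0.5
for _ in range(1_000_000):
    x = 3.9 * x * (1.0 - x)
```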

Also it's probably worth pointing out that this is not the market I'm thinking of most; most consumers don't use that kind of application. Clearly, some things are not mappable to throughput architectures, and as such there are still some markets where more powerful traditional CPUs make sense. The economies of scale and potential profits diminish rapidly when the mainstream is no longer part of that market, however.

pelly said:
However, should something go wrong with the integrated DVD player you are now forced to be without the TV AND DVD player while it is being fixed....Should a new technology come out....that DVD player is basically useless.....
I don't think that really matters, because integration doesn't have sufficient cost savings in the mid-end and upper-end parts of the market. The intrinsic chip costs are the primary factor, and integration is just going to reduce your yields unless you can also sell parts with redundancy (in which case, it would arguably increase them!)

So, if that kind of integration only matters in the <= $400 parts of the market (that is, for the entire PC, not only the chip!), it doesn't really matter if it's fully integrated, because that's only targeting mainstream users, who won't want to switch individual parts anyway.

What I'm really predicting and arguing for is that a large part of the customers who have traditionally been buying mid-end stuff will migrate towards the low-end price points, and that new 'extremely-low-end' segments will be created at ridiculously low price points, and with stunning levels of integration.

Geo said:
What I keep saying is that the potential threat to Nvidia is the possibility of the historical graphics IHV investment model reversing.
Indeed, 100% agreed... NVIDIA is not in a position where they can afford to lose volume. On the plus side of things, you would expect them to leverage some of their desktop GPU investments for handhelds and GPGPUs, so those also are new areas where they can amortize their R&D.
 
Indeed, 100% agreed... NVIDIA is not in a position where they can afford to lose volume. On the plus side of things, you would expect them to leverage some of their desktop GPU investments for handhelds and GPGPUs, so those also are new areas where they can amortize their R&D.

Yeah, I agree with that. Be the value-add independent vendor to the Sonys and Apples of the world, as well as on the PC side, to keep the volume for your cores high across several market niches, in all of which you're the prestige/high-end choice. Arguably, RSX could be seen as the poster boy/tryout for that...
 
Geo said:
What I keep saying is that the potential threat to Nvidia is the possibility of the historical graphics IHV investment model reversing.

Then there is the major innovation that people seem to be overlooking. The only company that right now has a Fusion-like part on the market is NVIDIA with its single-chip integrated parts.

Neither the vaunted Intel nor AMD/ATI has one. Nor does VIA or SiS.

As has been pointed out by NVIDIA management, this is not easy to do. Not that others won't, and I am not an engineer, but if it were so simple you would think that there would be competing parts.

NVIDIA has been an SoC company from the beginning (NV1). Just look at JHH's bio. I highly doubt they are ignoring integration, since they are the ones already embracing it.
 
When GPU functionality is fully integrated into the x86 architecture, I would imagine the biggest impact would hit notebooks and low-cost OEM desktop machines. I can see Fusion-based products migrating to the mobile space as well.
 
When GPU functionality is fully integrated into the x86 architecture, I would imagine the biggest impact would hit notebooks and low-cost OEM desktop machines. I can see Fusion-based products migrating to the mobile space as well.

I agree. A low-cost, space-limited environment such as mobile would seem to be the target for Fusion-type parts.
 
Hmm... Disturbing that a (now former) ATI-only partner is saying this in public.

AMD's roadmaps seem to be unclear in the professional graphics market -- enough to have ELSA Japan close the door on them for good.
What do you guys think?


Edit:
IIRC, the original "ELSA" was a German company allied with Nvidia in the GPU market (they also made networking equipment, I think), but it went bankrupt and sold the brand name to another company based in Japan.
 
Oddly enough, I work at an engineering firm, and ...
Josh, I understand that very fast machines have their place: we are using them too, after all. But I just think you can't deny that for Word, Excel, PowerPoint, and a browser, a 3-4 year old computer works just fine. And that's what's different from the early nineties: back then, even for regular business applications, the SW guys had no trouble finding ways to gobble up the cycles of even the most demanding machines and add features that were actually useful. No more...
 
I've been hearing the "Intel will come for 'em all" song since 1997 or so, I think, and I'm still waiting for anything interesting to happen.

I think it will be exactly the opposite: not the CPU swallowing the GPU, but the other way around. We already see a strong movement of processing from the CPU to the GPU -- think physics or GPGPU.

I also still remember an interview with 3dfx in '97/'98 where I think Scott Sellers said something along the lines of "one day, you'll be upgrading your PC just by plugging in a new GFX card". It sounded funny back then, but now it seems more likely.

So in the end, I'm sure that high-end GFX cards are not only here to stay, but that their importance will actually increase over the next few years. I expect to see NV flying higher than ever if they keep their current leadership, and I can also see them entering the CPU market through some partnerships.

AMD's big chance IMO is to follow their Fusion stuff, but with more emphasis on the GPU side than on the CPU side.

Intel will not make a powerful gfx solution; all they will do is IGP stuff for the low end and office work, with minimal 3D capabilities. They do produce nice slides, as always, but I trust them as far as I can throw them. You can't build serious gfx expertise from scratch within 2-3 years, not even Intel.
 
Well, if you look at the slides we posted from the Intel Larrabee presentation, what they don't talk about is far more telling than what they do. There's no mention of raytracing, rasterization, or anything related to graphics specifically. There's a lot of talk about GPGPU applications and building a better CPU for those.
 
Yeah, but the thing they showed there doesn't look like it'll be capable of dethroning the GPU in those tasks.
 
Kyle said:
It is also worth noting that Intel has gobbled up many ATI, NVIDIA, and S3 employees.

While one presumes they would like to do so to at least a degree (though every account of Intel I've ever seen talks about how insular that culture is and how fiercely "NIH" -- Not Invented Here -- they typically are), do we have any actual evidence it has happened?
 
Having read the whole thing now -- and I don't want anyone to fall off their chair here -- I more or less agree with Kyle. That happens a couple of times a year, usually, so here's one for 2007. I would have expressed it with a bit more nuance in spots, but that's just down to personal differences in style.
 
I think it will be exactly the opposite: not the CPU swallowing the GPU, but the other way around.

I think that, in the end, both solutions are going to swallow each other, i.e. in the form of a hybrid 'GPPU' (General-Purpose Processing Unit) that would merge a GPU's ISA with that of an x86 CPU. That way you've got a massively multithreaded processor which doesn't sacrifice single-threaded performance. This will take more time than the integration of the FPU into x86 did, however. I am new to GPU architectures, but what is going to happen seems obvious.

GPUs swallowing CPUs is highly unlikely -- why?

You can't parallelize tasks like word processing, which everyone uses, so unless you introduce some kind of CPU ISA you won't be able to run single-threaded programs well -- and the most likely candidate is x86, since it is such an entrenched ISA. By doing this you come back to my point about the GPPU.
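
For what it's worth, the usual Amdahl's-law arithmetic behind that point (a quick sketch; the 20% parallel fraction is just an assumed figure):

```python
# Amdahl's law: if only a fraction p of the work can run in parallel,
# N processors give a speedup of 1 / ((1 - p) + p / N).
def amdahl(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# A word-processor-like workload with an assumed 20% parallel fraction
# barely benefits, no matter how wide the machine gets.
for n in (2, 8, 128):
    print(f"{n:3d} cores -> {amdahl(0.2, n):.2f}x speedup")
```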

Edit: used correct terms in some places
 
GPUs swallowing CPUs is highly unlikely -- why?

If you're conjoining a 100M-transistor CPU core and a 700M-transistor GPU core, who's the daddy?

[In the end this is all just semantics, and politics too (like when two companies "merge" rather than "takeover").]
 