[H]ardOCP's (Kyle Bennett's) thoughts on ATI/AMD, Intel, and Nvidia.

They've been pretty good at making video cards, but when that market diminishes, will they be able to adapt and survive? I'm not so sure. The other market Nvidia has is chipsets, but with AMD and Intel making their own chipsets, one really cannot be sure if Nvidia will continue to do well in that market either. I believe Nvidia will probably end up like VIA (still around, but not a predominant force).
 
I don't see any reason why the market for visual processing solutions would diminish. If anything, it will grow. We've already seen that GPUs have a lot of unharnessed power for applications even outside of gaming (finance, physics, medicine, etc.). In an ideal world, we would just have one do-it-all, jack-of-all-trades contraption, but it rarely, if ever, works like that in practice.
 
They've been pretty good at making video cards, but when that market diminishes, will they be able to adapt and survive? I'm not so sure. The other market Nvidia has is chipsets, but with AMD and Intel making their own chipsets, one really cannot be sure if Nvidia will continue to do well in that market either. I believe Nvidia will probably end up like VIA (still around, but not a predominant force).

Does VIA have a mobile phone chip family and/or portable media player chipset designs like Nvidia?
Arguably, if they succeed in those markets, the current GPU share will seem like a drop of water in the ocean of revenue sources.


The GPU market is very small in the overall chip business, and Nvidia does have fairly large reserves of money and IP that could indeed be put to good use in the mobile market. Add to that a reasonably solid brand and they could be in business. The x86 architecture is not particularly well suited for these markets, but ARM and others are (I believe PortalPlayer used it).
Sure, there's very aggressive and numerous competition out there, but the potential financial gains would be very significant even with as little as a few percentage points of market share.
 
How is it that Kyle isn't violating an NDA there? Sure, it's qualitative commentary, but quite revealing.
 
Does VIA have a mobile phone chip family and/or portable media player chipset designs like Nvidia?

But that's exactly my point: will Nvidia be reduced to a company that only produces mobile chips for phones? :???:

Arguably, if they succeed in those markets, the current GPU share will seem like a drop of water in the ocean of revenue sources.

Pretty big if (although possible). Look, I'm not saying this is the end for Nvidia, but I think we should be able to see (and acknowledge) the tough times ahead for them. If they expand into new/other markets, more power to Nvidia, but I think they will have many difficulties in two markets they are already in (GPUs and chipsets).

I think he meant "relatively diminish", 'cause if/when Intel joins the game for serious they're gonna take a big chunk out of whatever market they go into.

This is correct. I think the inevitable entrance of Intel, combined with products like Fusion, will significantly reduce Nvidia's market share in discrete graphics. I also think that now that AMD makes their own chipsets, Nvidia will lose a significant portion of market share in chipsets (I believe Nvidia's market share on Intel platforms is relatively small to begin with, right?).

Since Nvidia will lose so much market share in both markets (GPUs and chipsets), they will in turn make (far) less money (in these markets) than they used to (I'm just guessing here ;) ). They would obviously invest less money in R&D and eventually cease to be a dominant force in these markets (maybe not be totally eliminated, but end up something like VIA, as I said before).

Obviously none of this should be taken as absolute truth; these are just my insignificant predictions. Basically I was throwing in my two cents that I agree with Kyle that Nvidia will be facing rough times in the future (since everyone else seemed to jump on him). But what do I know? :p
 
Tough times ahead at nVidia? Let's talk AMD first; now there are tough times ahead.


Even if K10 / Barcelona is great, Intel will just apply pricing pressure with their 45nm process before the new architecture in 2008. How long is K10 going to be around? Considering AMD's past record, you have to think at least 4 years. The problem is that Intel is going to be doing a new process every 2 years, with a die shrink and optimisations in the intervening year. AMD just cannot keep up with that if it keeps the same chip for as long as it has in the past. As long as Intel does not screw the pooch, I cannot see AMD being able to compete on rate of advancement.

Already they are talking about reducing capital expenditure by $500m, not a good starting point. I think the New York plant will never see the light of day and they will instead concentrate on getting their two current fabs fully updated / in production. Will they have enough money to do this and pay their loans and interest, especially if Intel keeps up the price war with Penryn? Maybe they will have to sell off parts to keep themselves afloat, the non-core parts. I can see a BenQ/Siemens-type scenario again, without the huge losses that BenQ suffered with that deal, and that means perhaps the AMD graphics division being put up at knock-down prices.

Of course you don't sell a cash cow, so it will be interesting to see how AMD fares with R6xx.

As for Nvidia: if VIA can survive, then Nvidia can. If Intel completes its mission of not killing AMD but instead just severely clipping their wings, then I can see the space available for Nvidia to exist in. It certainly does not help Intel to cripple Nvidia while it is fighting such a war with AMD; I would say that Intel's hatred of AMD means it would be benevolent to Nvidia in the medium term.
 
It does seem very... variable... when Kyle decides to be "forward looking" vs "right this minute" (think his first Conroe review here). Maybe he expects an Intel solution quicker than the rest of us.
 
Still, it will be interesting to see where NV goes when obviously there is a trend towards convergence of GPUs and CPUs. Don't they figure it will be about 10 to 15 years before we get truly lifelike 3D scenes? Curious what they will do when they seemingly hit the visual performance wall.

I'm still not buying this ultimate convergence of GPUs and CPUs. How long will it take for IGPs or these CGPUs to reach the performance level of G80? Now how long is it going to take these CGPUs to reach the visual performance wall that discrete parts are expected to take 10 to 15 years to reach?

So if Nvidia and the ATI branch of AMD continue to bring in the majority of their profit from discrete boards, they have nothing to fear, since that's not the market that's getting eaten. And I have yet to see any reasonable explanation for why IGPs are going to see a large enough (or any) performance increase from moving from the chipset into the CPU that they will supplant Nvidia's and AMD's mid-range cash-cow cards, let alone the high end.
 
How is it that Kyle isn't violating an NDA there? Sure, it's qualitative commentary, but quite revealing.
That was my first thought as well, yes.
Killer-Kris said:
I'm still not buying this ultimate convergence of GPUs and CPUs.
Same here. Assuming that somehow CPUs manage to be only slightly less efficient than GPUs, that difference remains very significant, especially so if all that processing power isn't really useful for anything else. Then a CPU basically just becomes a weaker GPU. How useful!

What I like to remind anyone mentioning CPU-GPU convergence is that today, 3D Graphics still look like shit. That might be a bit blunt, but here goes: http://www.bit-tech.net/content_images/crysis_new_screenshots/crysis8_large.jpg - I look at that, and then I look out of my window (where I can see a number of trees on the street, people, lamps, cars, houses, etc.) and then I look back at that screenshot, and it just looks really, really bad. That's one of the first high-res images I found by searching for Crysis on Google Image, before anyone accuses me of looking for the worst possible one.

The trees look bad. The foliage looks flat. The clothes aren't that realistic. The aliasing is horrible. The materials look wrong. The shadows are blocky and flat since there's no good indirect illumination. The terrain texture is not realistic. And the animations probably aren't quite that cinematic in action. You could argue that the 'mainstream' will not notice improvements in visual quality as well as some of us. This is, imo, incorrect; the big difference is that most people only realize when something looks worse than what they are used to, excluding real life.

And I can easily imagine what we could do with not 10x, but 100x or 1000x more processing power. Even in Moore's Law terms, that's very, very far away. If all we wanted to do was render ONE tree incredibly well, what we could do today would be truly astonishing. Rendering a forest is quite another problem though, and it's not even a certainty that the same approach would scale if the complexity isn't linear.
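Just to put a rough number on "very, very far away", here's a back-of-the-envelope sketch of my own, assuming raw throughput doubles roughly every two years (a loose Moore's-Law-style assumption, not a prediction; the function name years_for_speedup is just illustrative):

```c
#include <math.h>
#include <stdio.h>

/* Back-of-the-envelope sketch: years needed for an Nx throughput
 * increase, assuming a doubling every 'period' years. The 2-year
 * period is an illustrative assumption, not a measured figure. */
static double years_for_speedup(double factor, double period)
{
    return log2(factor) * period;
}

int main(void)
{
    const double period = 2.0; /* assumed doubling period, in years */
    printf("100x  takes roughly %.1f years\n", years_for_speedup(100.0, period));
    printf("1000x takes roughly %.1f years\n", years_for_speedup(1000.0, period));
    return 0;
}
```

Under that assumption, 100x is on the order of 13 years out and 1000x closer to 20, which is why even "Moore's Law terms" puts it so far away.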

In the end, the CPU will imo become a commodity (at least in the client space) much, much sooner than the GPU. Whether the CPU becomes much more throughput-oriented and more similar to a GPU, or the GPU becomes more programmable and less SIMD-oriented... You know, that's not an engineering problem. It's not even an economic problem. It's more politics and market positioning than anything else.
 
As I said over at Rage3D: as long as there are quality and performance barriers to be broken, add-in boards will continue to have a place, in my opinion.
 
Heh, many good points here.

I guess my talk about convergence was that the majority of graphics parts sold in the future will probably be processors like Fusion (from both AMD and Intel), just like the majority of graphics chips being utilized right now are integrated units vs. standalone cards. I believe that both ATI/AMD and NVIDIA will continue to push the envelope in terms of the functionality of their products, mainly because they absolutely have to in order to keep a good reason to sell their primary moneymaking product. So while we are sitting at shader units doing FP32 calculations, it is not a leap to figure that in the next two years we will be seeing higher precision calculations on these chips that will not only slightly improve graphics, but make these floating point monsters even more significant in HPC and scientific/financial simulations.
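As an aside on why precision matters more for simulation than for pixels, here's a minimal, generic numerics sketch of my own (plain C, not GPU code): summing ten million small increments, the single-precision total drifts visibly while the double-precision one stays essentially exact. That kind of drift is tolerable for graphics but not for long-running financial or scientific sums, which is the pull towards FP64.

```c
#include <stdio.h>

/* Minimal illustration of FP32 vs FP64 accumulation error.
 * Summing ten million small increments: the float total drifts
 * noticeably, while the double stays close to the exact answer. */
int main(void)
{
    const int n = 10000000;
    float  sum32 = 0.0f;
    double sum64 = 0.0;

    for (int i = 0; i < n; ++i) {
        sum32 += 0.1f;
        sum64 += 0.1;
    }

    printf("exact : %.4f\n", n * 0.1);
    printf("float : %.4f\n", sum32);
    printf("double: %.4f\n", sum64);
    return 0;
}
```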

When I look back at my Voodoo 1 in 1997 and playing Mechwarrior 2, and then go play games like Oblivion, I see exactly how far we have come in those 10 years. In another 10 years we should have really, really nice stuff that will render scenes close to reality at the high end (or at the very least at the level of real-time, high-quality CGI-type playback). Five years after that, the budget/integrated stuff should show the same type of performance and quality. That is where I was figuring the 15 years.

Now, at that time, what will we actually be seeing or interacting with in terms of visual medium? Will they be super-high-definition LCDs? True 3D displays? I figure it will be something that will require a lot of pixel-pushing power. If not, then NV and ATI need to start putting some cash into visual delivery systems that will keep them relevant in an industry where pixel-pushing power is a commodity.
 
I guess my talk about convergence was that the majority of graphics parts sold in the future will probably be processors like Fusion (from both AMD and Intel), just like the majority of graphics chips being utilized right now are integrated units vs. standalone cards.
The reason why IGPs make sense is that most people don't care about games; thus, in that part of the industry, the product degenerates into a commodity. This can be mitigated by the importance of a company's brand name, however.

I don't think there is any fundamental reason why the same principle that applies to GPUs and IGPs cannot apply to CPUs. The reason IGPs make sense is that most people don't need more than that. So, what's your justification for selling anything more than a 20mm2 dual-core CPU in the 2010 timeframe, anyway?

Most consumers won't need more than that either. Most other workloads are massively parallel, and even just a minuscule integrated GPU next to it would be good enough for most consumer-oriented GPGPU workloads, such as voice recognition, I believe.

In a couple of years, it will be possible to create a single-chip architecture (excluding analogue; I wouldn't be surprised if different kinds of analogue functionalities converged into a single chip for footprint reasons though) that includes everything 90%+ of consumers will ever need, and with a die size of less than 60mm2 on a foundry process. The big question is if that's a positive or a negative. It could, in theory, expand the userbase. But it would also substantially reduce the gross profit per customer.

If your idea of convergence is that 'different architectures will merge into a SoC', then the dynamics are imo much, much, much more complicated than most people seem to think. It's more of an economic, marketing and political problem than anything else too, imo.

So while we are sitting at shader units doing FP32 calculations, it is not a leap to figure that in the next two years we will be seeing higher precision calculations on these chips that will not only slightly improve graphics, but make these floating point monsters even more significant in HPC and scientific/financial simulations.
We'll be seeing GPUs capable of 1/4th speed FP64 within the next 9 months or so. I don't think there's a good reason to go beyond that, either.

As for whether that's useful in the consumer space... probably not very much so. What NVIDIA and ATI are really selling is 'performance for a given level of image quality'. Increasing the 'sweet spot' precision without any practical reason is not a good way to improve that.

In another 10 years we should have really, really nice stuff that will render scenes close to reality at the high end (or at the very least at the level of real-time, high-quality CGI-type playback).
I'm not convinced about that. You know, this might seem ironic, but I'm going to point out that diminishing returns will prevent that from happening as quickly as you seem to be thinking.

I'm not saying diminishing returns will be such a problem that you couldn't notice the improvement between one generation of hardware and the next. It'll still be very perceptible, I'm sure. But it won't be as significant each time, simply because it (arguably) takes less effort to go from 'ugly' to 'bad' than from 'bad' to 'okay'. It's much easier to notice the difference between 500 polygons and 2000 than between 2000 and 8000. You can still easily tell the difference, but it's smaller.

It's hard to say how long until we hit diminishing returns that are so obvious nobody will care about "what's next" anymore. I would personally not be surprised at all if perceived visual quality could get *better* than real-life (photosurrealism, anyone?) and that we would still notice the difference.

After all, what looks 'good' or 'right' to the eye is extremely subjective and sometimes pseudo-random. If nature was the very definition of beauty, then why do girls bother with cosmetic surgery anyway? You may argue this is not a very good comparison, but consider it in a more abstract way (rather than "zomg boobs!!!") and I think you'll hopefully get my point... :)
 
I hate to say it, but the majority of your post kept reminding me of the classic Bill Gates line "Nobody needs more than 640 K" (I'm just paraphrasing).

It seems the software people continually use and abuse processing power, so while in 2010 we could have a 20 mm sq chip that would handle today's workloads... that doesn't mean the same thing in 2010 with the refresh/update of Vista, new office crap, and much more immersive productivity tools and games.
 
I hate to say it, but the majority of your post kept reminding me of the classic Bill Gates line "Nobody needs more than 640 K" (I'm just paraphrasing).
:?: I'm thinking he's saying just the opposite: there will be an ever-increasing need for more power in some cases, but just not for everything.

It seems the software people continually use and abuse processing power, so while in 2010 we could have a 20 mm sq chip that would handle today's workloads... that doesn't mean the same thing in 2010 with the refresh/update of Vista, new office crap, and much more immersive productivity tools and games.

I used to upgrade my computer every year or two, and the reason was compilation speed. My current one is 4 years old and I haven't felt any need at all to upgrade. For the things I do, it's perfectly adequate. I see the same thing at work: I couldn't tell you what kind of processor resides in my desktop and neither can any of my esteemed coworkers, yet you'd have a hard time finding a larger collection of geeks. ;)

These days, you improve productivity by increasing the number of pixels, not the MHz. The hard work is done by a few machines in the back room.

Edit: I don't doubt that different arrangements are used elsewhere, but ours is a very common one. And if an engineering company can survive by giving its employees low-level desktop machines, why would it be different for a bank?
 
Oddly enough, I work at an engineering firm, and with many of the new GIS programs out there, the processing and storage requirements are seemingly shooting through the roof. People are far more productive with the new dual-core machines than with the older Athlons and Athlon XPs that we were using. We are talking about certain processing and rendering times that could take up to an hour on the older machines dropping to a much more manageable 10 to 15 minutes. The more work that gets done, the more billable that worker is.

When dealing with a smaller engineering firm that requires the majority of that processing work to be done on the individual's computer, upgrading every 3 years is just a fact of life. Considering that both ArcGIS and AutoCAD put out major updates every two years or so, we are always a bit behind the curve in terms of power after multiple service packs and updates to both the OS and applications.
 
I can see this going either way. Nvidia can get squeezed out of the AMD and Intel markets, or they can continue to supply discrete GPUs for those markets.

What I am not really buying into is that

A. Intel can stomach a 12-18 month product cycle in the graphics arena, or that they are even looking to get into the high-end market.

B. GPU/CPU convergence will resemble much more than what integrated chipsets bring to the table right now, the only difference being that the integrated graphics processor is on the CPU die. I highly doubt the convergence will result in a product that can compete with a high-end GPU, especially one that is refreshed every 12-18 months. Intel's big new GPU/CPU is supposed to be an x86 processor with 16 pipelines. Right now we are lucky to see 1.0 IPC on x86 processors. I bet this thing runs like a 6800 in 2009. In other words, low, low-end junk.
For starters, what happens when your GPU/CPU can't run Crysis 3.0? Grab an entirely new chip?

C. That AMD's troubles magically end with Barcelona. The way they are sounding, I wouldn't expect to see Barcelona cores as their majority product until the end of '08.
 
A. Intel can stomach a 12-18 month product cycle in the graphics arena, or that they are even looking to get into the high-end market.

Actually the nice thing is that Intel doesn't have to play the same 12-18 month cycle that the GPU vendors currently have to. A CPU used to do graphics is compatible with any future release of Direct3D. At this point they'd be releasing for performance, not features, and with that it's just a matter of speed steps on whatever schedule fits them best.


...Intel's big new GPU/CPU is supposed to be an x86 processor with 16 pipelines. Right now we are lucky to see 1.0 IPC on x86 processors.

Though to be fair, IPC has just as much to do with the workload as it does with the architecture. Since we know the graphics workload can support a very high IPC, it then comes down to the architecture.
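To illustrate that point with a sketch of my own (the names independent_work, dependent_work and the array size N are made up for illustration): the two loops below do the same amount of arithmetic, but the first, pixel-style loop has no cross-iteration dependencies, so a wide core or compiler can overlap many operations and sustain high IPC, while the second forms a single dependency chain that caps IPC no matter how wide the machine is.

```c
#include <stdio.h>

#define N 1024

/* Graphics-style work: each output element depends only on its own
 * inputs, so the hardware (or compiler) can overlap many iterations. */
static void independent_work(const float *a, const float *b, float *out)
{
    for (int i = 0; i < N; ++i)
        out[i] = a[i] * b[i] + 1.0f;   /* no cross-iteration dependency */
}

/* Serial-style work: every step needs the previous result, so issue
 * width barely helps; IPC is limited by the dependency chain. */
static float dependent_work(const float *a)
{
    float acc = 1.0f;
    for (int i = 0; i < N; ++i)
        acc = acc * a[i] + 1.0f;       /* each step waits on the last */
    return acc;
}

int main(void)
{
    static float a[N], b[N], out[N];
    for (int i = 0; i < N; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    independent_work(a, b, out);
    printf("independent: out[0] = %f\n", out[0]);
    printf("dependent  : acc    = %f\n", dependent_work(a));
    return 0;
}
```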
 
Actually the nice thing is that Intel doesn't have to play the same 12-18 month cycle that the GPU vendors currently have to. A CPU used to do graphics is compatible with any future release of Direct3D. At this point they'd be releasing for performance, not features, and with that it's just a matter of speed steps on whatever schedule fits them best.

Right, which leads back to my comment about the combo being nothing more than moving the IGP onto the CPU. Intel can raise the clocks on the chips; however, don't expect miracles when Nvidia releases a card with more resources.

Nvidia can retain the discrete gaming and professional markets, which have higher margins anyway.
 
One thing to remember as we talk about "Fusion" and similar CPU/GPU combinations is the limitation convergence can bring. A great example of this would be a TV with an integrated DVD player. When everything is working, the combination is stellar (possibly even cheaper) and provides the same functionality and performance as two discrete CE pieces. However, should something go wrong with the integrated DVD player, you are now forced to be without the TV AND the DVD player while it is being fixed. Should a new technology come out, that DVD player is basically useless.

Take these concepts and apply them to the PC upgrade cycle and overall flexibility, and you have one of the major concerns about this type of "combined" architecture.

With the "standard" IGP platform we have right now, you have a great deal of flexibility. Say you had a $300 budget to work with. You could pick up a $200 "X-core" CPU and an IGP motherboard. With the combined CPU/GPU architectures, you'd be spending $200 on an "X-1"-core CPU (due to 1 core being used for the GPU) and $100 on a motherboard (exact prices are not important here, just the fact that the CPU has one less core).

If the user in question were to go out and buy a discrete GPU to play some killer new title, they'd be in an inferior position with the combined CPU/GPU architecture: in the most basic case, the integrated GPU would sit idle and a core would go unused. However, with the traditional IGP platform the CPU remains unchanged and can be fully utilized (all cores and all components of the CPU are used); only the GPU portion of the northbridge remains unused.

Obviously, the major push for this technology is the cost savings and efficiency the combined architecture will provide. However, we just need to remember that all those benefits do come at a price. That price will vary depending on the user and their applications, but the price is definitely there. Whether it is worth it or not will vary.
 