[H]ardOCP (Kyle Bennett) thoughts on ATI/AMD, Intel, and Nvidia.

I wonder if Kyle has considered the possibility that the reason today's GPUs consume so much power has less to do with them being piece-of-crap designs by know-nothing neophytes and more to do with all the tasks they have to perform? Those FLOPS have to come from somewhere, you know. As such, the running theme that all that is needed to bring power consumption down to nothing is some magical insight only people with CPU-building experience can possess seems rather strange.
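To put some very rough numbers on "those FLOPS have to come from somewhere", here's a back-of-the-envelope sketch (the peak-rate formula is the standard units × flops/clock × clock; the clocks and power figures are ballpark/illustrative, not measurements):

```python
# Rough perf-per-watt comparison (illustrative figures only, not measurements).

def peak_gflops(units, flops_per_clock, clock_ghz):
    """Peak single-precision throughput in GFLOPS."""
    return units * flops_per_clock * clock_ghz

# G80-class GPU: 128 scalar ALUs, MAD = 2 flops/clock, ~1.35 GHz shader clock, ~150 W board power.
gpu = peak_gflops(128, 2, 1.35)   # ~346 GFLOPS
# Core 2 Duo-class CPU: 2 cores, 8 SP flops/clock/core via SSE, ~2.66 GHz, 65 W TDP.
cpu = peak_gflops(2, 8, 2.66)     # ~43 GFLOPS

print(f"GPU: {gpu:.0f} GFLOPS peak, ~{gpu / 150:.1f} GFLOPS/W")
print(f"CPU: {cpu:.0f} GFLOPS peak, ~{cpu / 65:.2f} GFLOPS/W")
```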
 
As if CPU makers can magically make GPUs. Besides, NVIDIA is already increasingly investing in custom layout.

AMD buys ATI so R700 is going to be amazing?

If the underlying 8000-series architecture is more efficient, as does seem to be the case, one wonders how long it will take for ATI to bring out a new architecture. NVIDIA responded pretty fast to the FX series with NV40, but underlying architectures seem to be sticking around longer these days, and one can't exactly say AMD's forte is bringing products to market quickly, at least relative to NVIDIA.
 
ATI having more power-efficient designs? Has he forgotten why NVIDIA has made up so much ground in the laptop space over the past few years? And the rest of his assertions seem to assume AMD is somehow going to help them be more power efficient in the future. How, by giving them fab space? I doubt it.
 
Never mind that the P4 was a glowing abomination, or that 90nm Athlon X2s have sucked down juice pretty badly; those CPU people know what they are doing. In fact, I am sure they can put out a 400-500 GFLOPS chip drawing 15W.
 
Also, isn't CUDA in beta? What bizarre criticisms, especially since NVIDIA is clearly excited about the double-precision part coming this fall.

Inflexible management? What does that mean?
 
I wonder if Kyle has considered the possibility that the reason today's GPUs consume so much power has less to do with them being piece-of-crap designs by know-nothing neophytes and more to do with all the tasks they have to perform? Those FLOPS have to come from somewhere, you know. As such, the running theme that all that is needed to bring power consumption down to nothing is some magical insight only people with CPU-building experience can possess seems rather strange.

That's a frustration that I've started to have with a particular author from another one of those big sites. They don't seem to realize that most CPU workloads have a hard time surpassing an IPC of 1.0, which on a chip like C2D means that 75% of the ALUs are idle. Now, if they consider that the GPU's idle time could be close to the reciprocal of the CPU's (or less), it should become painfully obvious why one can be made to sip power and not the other.
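Just to spell out the arithmetic there (Core 2 is a 4-wide issue machine; the GPU figure below is purely hypothetical, for illustration):

```python
# Back-of-the-envelope utilization math behind the post above (illustrative only).

def idle_fraction(achieved_ipc, issue_width):
    """Fraction of issue slots left empty each cycle."""
    return 1.0 - achieved_ipc / issue_width

# A 4-wide core like Core 2 sustaining IPC ~1.0 leaves ~75% of its slots idle.
cpu_idle = idle_fraction(1.0, 4)
print(f"CPU idle slots at IPC 1.0 on a 4-wide core: {cpu_idle:.0%}")

# A shader workload with thousands of independent pixels/threads can keep most
# ALUs fed; assume (hypothetically) roughly the mirror image of the CPU case.
gpu_idle = 1.0 - cpu_idle
print(f"Hypothetical GPU idle fraction: {gpu_idle:.0%}")
```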
 
inefficient build qualities
Uh huh... I suppose he's a master engineer and knows all about efficiency. Gosh, what were all those ATI engineers thinking...

Pushing these power envelopes up to such great heights does not exactly bode well with the larger system builders in our market. Mass production of inefficient systems that produce more BTUs than a Home Depot $29 electric heater is not what Dell, Gateway, Alienware, and HP want.
Oh yeah... these so-called larger system builders outfit their systems with only the top-of-the-line components... How could I forget. :runaway:
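For what it's worth, the BTU comparison is easy to put numbers on, since 1 W of draw works out to about 3.412 BTU/h of heat. A quick sketch (the wattages are just illustrative):

```python
# Convert electrical power draw to heat output (1 W ≈ 3.412 BTU/h).
W_TO_BTU_PER_HOUR = 3.412

def btu_per_hour(watts):
    return watts * W_TO_BTU_PER_HOUR

# Illustrative figures: a ~150 W high-end graphics card vs. a 1500 W space heater.
print(f"GPU:    {btu_per_hour(150):.0f} BTU/h")   # ~512 BTU/h
print(f"Heater: {btu_per_hour(1500):.0f} BTU/h")  # ~5118 BTU/h
```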

I see Kyle is a Master of AutoMagics.



I do agree that things look grim with AMD and their apparently lagging release schedule of *new* designs, though. IMHO, AM2 was just pathetic, and the constant delays of the R6xx family are getting on my nerves. :(
 
Well, it is an editorial. I think quite a few of us have been guilty of producing one of those now and again. I don't know how much Kyle knows about the internal workings of these chips, or why they perform the way they do, but he certainly knows things from the end user's perspective. Just because he is taking that point of view for most of his arguments does not mean it is invalid... because this is supposed to be a customer-centric set of businesses.

While I don't agree with a lot of his contentions in the article, he certainly has a right to put forward his point of view.

Some years back I wrote an article called "Slowing Down the Process Migration" and in it I talked about what I thought some trends were going to be. One of the biggest points of the article was that these companies are going to reach a point where die shrinks become harder and harder, and to keep die sizes at more manageable levels they are going to have to get more functionality out of fewer transistors. NVIDIA's use of custom scalar units certainly underlines this point, as they have significantly increased performance without doubling that particular transistor count... and they do it by increasing the speed. While the promise of the Intrinsity technology has never appeared to go anywhere, NVIDIA taking the tough choice to do custom portions of the GPU is certainly a far-reaching decision. Within two generations' time we will end up seeing more portions of the GPU going full custom, which will make for better performance and power consumption vs. a traditional standard cell design. So instead of having 256 scalar units of standard cell, we have 128 units running at more than double the speed without much more power consumption. I certainly hope ATI/AMD can do something like this in short order...
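A minimal sketch of that trade-off, with hypothetical clock figures (the point is just that half the units at a bit more than double the speed gives the same raw throughput):

```python
# Throughput trade-off: fewer full-custom units at a higher clock vs. more
# standard-cell units at a lower clock (clock figures are hypothetical).

def gops_per_second(units, clock_ghz, ops_per_clock=1):
    return units * clock_ghz * ops_per_clock

standard_cell = gops_per_second(256, 0.65)  # 256 units at a ~650 MHz core clock
full_custom   = gops_per_second(128, 1.35)  # 128 units at a ~1.35 GHz shader clock

print(f"256 standard-cell units @ 0.65 GHz: {standard_cell:.0f} Gops/s")
print(f"128 full-custom units   @ 1.35 GHz: {full_custom:.0f} Gops/s")
```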
 
I will say this in favor of Kyle: I don't think Nvidia's days will get any easier. While I don't foresee them folding anytime soon, I do believe trouble is on the horizon for Nvidia.
 
I will say this in favor of Kyle: I don't think Nvidia's days will get any easier. While I don't foresee them folding anytime soon, I do believe trouble is on the horizon for Nvidia.

I don't think things have ever been easy for NVIDIA, rather they just made it look easy. Except for the hiccups that were the NV1 and GeForce FX, they have had a pretty amazing record. From what I understand, the climate there is one of "everybody works hard, people don't play games unless they are in Dev Rel". They have a solid roadmap, they have expanded into areas that have made them money, and they work incredibly hard. From the CEO to the lowest janitor, everybody there comes early and stays late. Just the culture they tend to foster. Oh, and they are apparently one of the best companies to work for in Silicon Valley when talking about pay, bonuses, incentives, and benefits.

So, while I agree that things won't get easier for them, I think it has constantly been a struggle for them to get to where they are. ATI was a larger, older company with more engineers, but NV was able to catch up. Even with the tremendous boost ATI had with the R300 series, NV caught up again after a year and a half.

Still, it will be interesting to see where NV goes when obviously there is a trend towards convergence of GPUs and CPUs. Don't they figure it will be about 10 to 15 years before we get truly lifelike 3D scenes? Curious what they will do when they seemingly hit the visual performance wall.
 
Still, it will be interesting to see where NV goes when obviously there is a trend towards convergence of GPUs and CPUs.

Which is why I am foreseeing problems for Nvidia.

I think that one day there will not be any more video cards, sound cards, physics cards, etc. (but the Killer NIC card will live on forever! :LOL: :rolleyes: ) because everything will eventually be rendered/processed on several CPUs set up in a parallel configuration.

I also believe this will happen sooner than most think; I have very high hopes for technologies like Fusion (obviously Fusion won't be targeted at the high end in the beginning, but I think its potential will have a profound impact on the industry). That, I think (perhaps I am wrong), is the point Kyle was trying to get across. If the future of computers is technologies like Fusion (and whatever Intel's version is called), where does that leave Nvidia? Not an easy question to answer (especially if you're Nvidia).
 
I don't see why NVIDIA would have any more problems than other companies. They may have disadvantages in some areas, but other companies will have disadvantages in other areas. Developing higher and higher performance programmable graphics solutions is extremely tricky, and the hardware design is only half the battle. I think that the unique aspects of G80 in comparison to prior NV architectures are pretty good evidence that the company is willing and able to innovate and evolve over time to produce novel graphics solutions.

It's just not wise to write off anyone at the moment. I remember that a few years ago, lots of people had written Nintendo off as it was competing against behemoths Sony and MSFT. Now look at how successful they have been simply by coming up with a very innovative and novel approach.
 
I think many people forget that nV has been the best for the longest period of time because they push the envelope with their products and sell them much better than any other graphics card company, past or present. As a company they are very good too, and that is probably being modest.
 
Yeah, I agree. Look at their track record of big successes over the past few years: GeForce 2, GeForce 3, GeForce 4 Ti, 6800, 7800, 8800 (the only exception being NV30). It's no fluke that they keep having so many very successful generations of cards; they have a Razor-sharp focus on advancing visual processing solutions (no pun intended :) ).
 