Stagnation in GPU and CPU market (Come AMD, we need you!)

I am sure that I am not the only one who looked at today's CPU and GPU launches and is somewhat perturbed by the fact that, barring an unexpected RV670 breakthrough, the 8800 GTX might remain the fastest card on the market for a year and a half. Even when R300's unchallenged superiority allowed ATI to dominate the market, they had enough decency to update the high end from time to time. (No, the 1 beelion dollar Ultra does not count.) Value is fine and all, but as a 3D enthusiast, I can't help but be disappointed by the current state of affairs.

We have a similar situation in the CPU market. Just like G92, Penryn is a solid incremental update of an excellent architecture. However, when was the last time Intel actually raised their frequencies? They are perfectly content selling the QX9650 clocked at ~3GHz for $1000+, while the chip is perfectly capable of hitting 3.6GHz and more.
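
As a rough illustration of the headroom being left on the table, here is a back-of-the-envelope sketch using the clocks quoted above (the 3.6GHz figure is the overclock claimed in this post, not an official spec):

```python
# Rough frequency-headroom estimate for the QX9650 clocks quoted above.
stock_ghz = 3.0        # QX9650 stock clock
reported_oc_ghz = 3.6  # overclock claimed above (assumption, not a spec)

headroom = (reported_oc_ghz - stock_ghz) / stock_ghz
print(f"Frequency left on the table: {headroom:.0%}")  # -> 20%
```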

I think the current state of affairs clearly demonstrates how vital a competitive AMD is to the enthusiast landscape, provided we want our CPUs and GPUs to actually get faster year to year. Hopefully RV670 and Phenom will arrive in a timely fashion and be a step in the right direction.
 
Competition is always good, but I think the reason the 8800 Ultra wasn't all that impressive is that G80 isn't really capable of going much faster. It's hot as hell and the GTX doesn't overclock very well. The HD 2900 is a bug-ridden monster chip that is even hotter....

And it is time for refresh products, not revolutions. That is how things have always worked: a new architecture arrives, and then some months later comes a refresh of that design. These chips have gotten so huge that things are taking a bit longer, though. Years ago, the IHVs said that it was not going to be possible to refresh every 6 months anymore (or whatever the number was).

I do wonder if G92 will be used for some high-end refresh. SLI-on-a-board, perhaps, and G92 may have more functional units than are enabled on the 8800 GT.

CPUs may be stagnating, but hey, I think a $266 quad core that stomps all over my maximally-overclocked Opteron 165 from a year or so ago is worthy of being called progress. Hell, I think one core of my Q6600 @ 3.2 GHz is faster than both cores of my previous Opty 165 @ 2.6 GHz in a few cases. Heh.
 
I posted this on the other thread, but I guess it is even more relevant to this discussion:

http://www.fudzilla.com/index.php?option=com_content&task=view&id=3882&Itemid=35

Phenom X4 vs. Yorkfield tested in Crysis

Expreview has managed to get their hands on an AMD Phenom X4 and an Intel Yorkfield-based QX9650 and has run the two against each other in Crysis to see which chip is the better one.

Also thrown in for comparison were an Intel Core 2 Duo E6850 and a Core 2 Extreme QX6850. As you might have already realised, the Intel chips are clocked a fair bit higher than the Phenom X4, so to make things a bit more even, the Phenom X4 was overclocked to 3GHz.

This is still not quite a fair comparison, as the Phenom X4 had its memory running at 375MHz due to limitations of AMD's integrated memory controller, which prevented it from running at 400MHz as on the Intel-based test systems. The Phenom X4 system also had a lower bus speed, due to Intel's move to a 1,333MHz bus.
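
To put that memory handicap in numbers, here is a back-of-the-envelope sketch (DDR2 transfers twice per memory clock; a dual-channel 128-bit configuration is assumed, since the article doesn't state it):

```python
# Rough dual-channel DDR2 bandwidth comparison for the memory clocks above.
BUS_BYTES = 16  # assumed dual channel: 2 x 64-bit = 128 bits = 16 bytes/transfer

for system, mem_clock_mhz in [("Phenom X4 system", 375), ("Intel systems", 400)]:
    effective_mts = mem_clock_mhz * 2  # DDR: two transfers per clock
    bandwidth_gbs = effective_mts * BUS_BYTES / 1000
    print(f"{system}: DDR2-{effective_mts}, ~{bandwidth_gbs:.1f} GB/s peak")
```

That works out to roughly a 6% peak-bandwidth deficit for the AMD system, worth keeping in mind when reading the results below.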

The Phenom X4 was tested on an RD790 board while the Intel CPUs were tested on a P35-based board. The graphics card used was an 8800 GTX, and the test results are looking quite promising for AMD, at least if the Phenom X4s retail below Intel's asking price. The only problem is that the Core 2 Duo E6850 seemed to outperform it.

The Crysis time demo was run 5 times and Expreview took the average of those runs, with the Phenom X4 scoring an average of 46.48fps while the QX9650 managed 49.95fps. The Core 2 Duo E6850 scored an average of 49.19fps while the Core 2 Extreme QX6850 managed 49.92fps.
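
To put those averages in perspective, here is a quick sketch of the relative gaps, using only the figures quoted above:

```python
# Average Crysis timedemo results quoted above (fps).
results = {
    "Phenom X4 @ 3GHz": 46.48,
    "QX9650": 49.95,
    "E6850": 49.19,
    "QX6850": 49.92,
}

baseline = results["Phenom X4 @ 3GHz"]
for chip, fps in results.items():
    delta = (fps - baseline) / baseline
    print(f"{chip}: {fps:.2f} fps ({delta:+.1%} vs the Phenom X4)")
```

So every Intel chip here sits only about 6-8% ahead of the overclocked Phenom X4; whether that counts as promising depends entirely on the price.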

You can check out the full test results and some screenshots here, in IE only, as the site doesn't work in Firefox.

http://news.expreview.com/2007-10-29/1193590532d6599.html

Are we doomed?
 
I have to disagree that GPUs are stagnating; for one of the first times in graphics history there is a truly and utterly compelling midrange part in G92. Put a beefier cooler on one and you are only a modest overclock away from enthusiast performance at an unbelievably low price.

CPUs, on the other hand, I feel are stagnating, and badly. Conroe provided the type of IPC gain I'd like to see with each new generation; Barcelona, on the other hand, seemed to focus on memory subsystem and throughput improvements. I just hope that Nehalem follows Conroe's model of IPC gains (and I don't just mean from the IMC), and I likewise hope that AMD does the same with some future core.
 
G92 is a good part, but in my limited, performance-focused world view, a card being top of the line for (potentially) 18 months = stagnation.

I will also take issue with G92 being the best midrange card evah~. It's exactly that high-end stagnation that makes it look better than normal. The 6600 GT was on par with or faster than the previous generation's high end. Radeon 9500 Pro, ditto. Of course, this time there IS no next-gen high end. If alongside the 8800 GT Nvidia had also introduced a $599 GTX replacement 50%+ faster than that relic, some of the 8800 GT's luster would be lost.
 
If I got a dollar every time I heard/read this over the last 10+ years, I'd be rich by now. Seriously.


No you wouldn't; a dollar is only worth 0.03 Eur nowadays :)

Regarding CPU performance, Intel is on the tick part of its cycle, so no massive gains in performance; this time around they have got the power consumption of their top quad down to something around a dual's, and that's not stagnation in that regard. Not as sexy, for sure. They will be able to bring the cost down as well if needed. Performance should jump again with Nehalem. I only think we will start to stagnate if AMD disappears.

For GPUs we are also in the tick part of the cycle: they are reducing power and making the chips cheaper. I do agree with the comments above that the effort required nowadays does seem to be pushing out cycle times for complete refreshes.

If you have a 7900 GT now coupled with a dual-core Athlon, then in the new year you could go to an 8800 GT / RV670 coupled with a dual-core Wolfdale or K10 and not have to pay much, have lower power consumption, and have far greater performance (forgetting other hardware costs needed, of course).

By that time, with the GPUs coming down in price and bugs being fixed, a dual-card 8800 GT or RV670 setup might be a worthwhile option too. Certainly it is a better option than the dual 8800 GTX / 2900 that we have today, I think.
 
If I got a dollar every time I heard/read this over the last 10+ years, I'd be rich by now. Seriously.

Well, if anyone wants to point out a stretch of time this century when a video card lasted for a year (possibly more) as the fastest thing on the market, I am all ears.
 
By that time, with the GPUs coming down in price and bugs being fixed, a dual-card 8800 GT or RV670 setup might be a worthwhile option too. Certainly it is a better option than the dual 8800 GTX / 2900 that we have today, I think.

Not for people who already have an 8800 GTX, which is my point. A year later, I have no viable upgrade options. I am having trouble recalling the last time that happened.
 
Geforce 256 (10/99) -> Geforce 2 GTS (5/00) = 7 months
R300 (8/02) -> R350 (3/03) = 7 months
G80 (11/06) -> ??? = 12 months and counting.
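
For anyone who wants to check the arithmetic, a quick sketch using the launch months listed above (dates as given in this thread, not independently verified):

```python
from datetime import date

# Launch months as listed above; the day is a placeholder.
gaps = [
    ("GeForce 256", date(1999, 10, 1), "GeForce 2 GTS", date(2000, 5, 1)),
    ("R300",        date(2002, 8, 1),  "R350",          date(2003, 3, 1)),
    ("G80",         date(2006, 11, 1), "???",           date(2007, 11, 1)),
]

for part, launch, successor, refresh in gaps:
    months = (refresh.year - launch.year) * 12 + (refresh.month - launch.month)
    print(f"{part} -> {successor}: {months} months")  # -> 7, 7, 12
```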
 
If you have a problem and two independent teams trying to solve it, and if a solution exists, one team will probably find it.

CPUs - Athlon and P4, for example: the former bet on IPC, the latter on clockspeeds. We can all see how that turned out. Current AMD and Intel designs don't depart so radically from each other; overall performance is limited by thermals and generally seems to be within 10% given equal clockspeeds. Dramatic improvements don't come from optimizing existing things but from introducing new instructions (SSE4, 4.1, 5, etc.). I doubt Larrabee and Fusion will change that.

GPUs - ATI and Nvidia have quite different architectures. Again, performance is limited by thermals; the 8800 GT runs at 90C under load. I bet that RV670 will perform within 10% of the 8800 GT running at the same temperature.

I would say that the pure "accelerators of software" have stagnated in the sense that, after years of refinement, the devices have run out of significant internal optimizations that could be made without creating a bottleneck somewhere else. But the progress of "finding pieces of software that can reasonably be sped up by implementing them in hardware" is far from stagnation. As the "pieces" get bigger and more complex, though, the new generations take longer to produce.

For example, if we could graph the time it would take to compress the entire Library of Congress in WinRAR on the last 15 years' CPUs, the time would decrease quite linearly. But if we graphed the time it takes to encode a movie, I expect that with current CPUs (Penryn etc.) the time would take a quite nonlinear drop.

Obligatory car example: engines have been perfected to the point where they are jointly developed by Peugeot and BMW, for example. But I wouldn't call consumer-market engine development stagnant. Slow and incremental, yes.

I agree on one thing: competition (the survival of AMD) is vital for any progress whatsoever, as the stock market punishes very hard for "wasting" billions on R&D without market share to lose...
 
Well, we could go back to the days when the i486 dominated CPU speeds from 1989 to 1993.

And it continued to be relevant well into the mid-90s. Not to mention that back in 1989, a computer containing an i486 would run you in excess of $10,000.

Regards,
SB
 
Geforce 256 (10/99) -> Geforce 2 GTS (5/00) = 7 months
R300 (8/02) -> R350 (3/03) = 7 months
G80 (11/06) -> ??? = 12 months and counting.

You forgot the Voodoos? ;) What was that, 2-3 years each?

The GF2 was no more of an update than G92 is; R350 even less so.
 
Geforce 256 (10/99) -> Geforce 2 GTS (5/00) = 7 months
R300 (8/02) -> R350 (3/03) = 7 months
G80 (11/06) -> ??? = 12 months and counting.

Were those really worthwhile upgrades? Radeon 9800 PRO vs. 9700 PRO wasn't exactly OMG WOW. GeForce 2 GTS vs. GeForce 256 DDR is the same. Those chips also had a tiny fraction of the complexity and size of what we're dealing with today, and they weren't pushing the limits of power demands and heat output. I don't think it's realistic anymore to expect massive tech updates in less than a year....

Right now, other than an SLI-on-a-board solution, I don't see how NV could ~double the speed of G80 at 65nm. It would be an absolutely huge chip.
 
I remember the GF->GF2 transition being more impressive than the 9700->9800 transition. That being said, neither was a "knock your socks off" kind of release like R300, R580, G80, and G92 have been.
 