Disappointed With Industry

2) It's not only expensive from a transistor PoV, it's expensive from a power point of view. The HD2900XT and 8800GTX cards (yes, the 8800GTX is fairly power efficient, but it's still enough to heat up a room).
You would have a point if they didn't increase performance to match the increased power consumption. Now, ATI might not have managed that, but NVIDIA did.

The problem is that you can't limit yourself in your purchases, not that their architecture uses too much power.
 
I'm sorry, but you can have decreased power consumption *and* increased performance; we'll just have to wait for the 65nm node to show it. The power consumption of the 8800GTX may be good for what it delivers, but despite its efficiency it's not something you can ignore, and even at idle it draws a lot of power and puts out a lot of heat. There are directly comparable results: the 7900GT cards were in fact my favourite hardware from a power/performance perspective. They were not only faster than my 7800GTX cards but, even at much higher clocks, dumped a whole lot less heat into my room. I do believe 8800GTX-level performance could have been met with less power consumption than a DX10 variant, even with a unified architecture. NVIDIA did manage good power and thermal design with the 8800GTX, but the amount of power it consumes is not something I can wave my stick of dismissal at.

Run 8800GTX cards in SLI and tell me that the increased power is not noticeable. I believe we'll see history repeat itself: affordable 65nm cards with similar performance to the 8800GTX in a year's time, give or take, with much lower power consumption and heat output.
 
Perhaps I am unimaginative (most likely), but how else do you see this working? Would you recommend a more evolutionary approach to the software side, so that these massive changes in both software and hardware can be minimized, thereby providing a smoother flow of products? I guess that sorta sounds nice, but then I would wonder how long it would take to make significant changes. Looking back at all the generations of cards, I see the same pace being taken. How long did it take to get good DX8 applications (I am not talking about X-Isle, 3D Mark 2001, or the handful of DX8 demos released at the time)?

I guess also, considering the length of time most titles are in development (two to three years on average), there has to be a cutoff early in a game's development where they say, "Are we ready to try this on DX10? Microsoft has already postponed Vista a couple of times now..." My guess is that the risk of shipping a product with no operating system to run one of its major rendering modes would weigh heavily on these developers.

I guess my point here is that these times of confusion and change are unavoidable in an industry that moves as fast and changes as often as this one. Transistors are cheap to these guys, and they need ways to justify packing 680+ million of them into a product. I for one am glad of the 8800s and 2900s, as they both play current games like mad, and I don't particularly care that a portion of these cards is not being utilized yet. Hell, I remember back with the 9700 Pro, when people were screaming about where the DX9 games were, I was ecstatic to be able to play Medal of Honor: Allied Assault at 1600x1200 with 4X AA enabled at pretty smooth rates.

I'm not disappointed in the industry one bit. They continually make things very interesting with some pretty progressive changes year in and year out. I look at how Morrowind looked in 2002 and see what the same developers were able to do with Oblivion in 2006. We probably wouldn't have these jumps if we didn't have aggressive folks like NV and ATI pushing the hardware, as well as MS and the DirectX guys trying to expose new and interesting functionality to create more immersive worlds.

I still think you are sexy though Pelly.
 
I do believe 8800GTX-level performance could have been met with less power consumption than a DX10 variant.
You are free to believe what you want, but the simple fact stands ... their DX10 architecture is more efficient than their DX9 architectures. The performance per Watt increased on the same process.
Run 8800GTX cards in SLI and tell me that the increased power is not noticeable.
It is YOUR choice ... honestly, you sound like a crackhead blaming his dealer for his addiction.
 
I dunno about Chris, but I can give this high-end vidcard thing up ANYTIME I LIKE. I just don't like *right now*. :yep2:
 
You are free to believe what you want, but the simple fact stands ... their DX10 architecture is more efficient than their DX9 architectures. The performance per Watt increased on the same process.

I think he meant something else: if the DX10 part were left out of G80 (yeah, I know it's nonsense, but for the sake of argument), it would be a much less power-hungry chip.
 
This whole DX10 and cards thing is starting to remind me of how consoles evolve:
1) forward-looking, fixed spec;
2) first-generation hardware: hot, with strange pieces missing (UVD in R600, scaling and hard disks in consoles, etc.);
3) die shrinks and refinements lead to refreshes that run cooler, have fewer quirks and earn the companies actual money, because content has emerged.

Of course DX is not set in stone (10.1 is coming) and the cards themselves are under active development, but honestly, the 2900 launch is a mirror of the Xbox and PS3 launches.
 
You are free to believe what you want, but the simple fact stands ... their DX10 architecture is more efficient than their DX9 architectures. The performance per Watt increased on the same process.

It is YOUR choice ... honestly, you sound like a crackhead blaming his dealer for his addiction.

I don't see any reason an efficient scalar architecture has to be DirectX 10. None whatsoever. I did not say NVIDIA's G80 isn't efficient; it is. But I believe it's theoretically possible to design a DX9 unified shader architecture on a scalar design with similar efficiency to the G80 and a better performance-per-watt ratio.

Yes, it is my choice, but I would rather the cards had lower power consumption. I do believe that DX10 came at a cost on our current nodes, and that on future nodes we will see performance per watt improve on midrange and lower high-end cards. When that happens I'll probably be the first to jump on board.

I think he meant something else: if the DX10 part were left out of G80 (yeah, I know it's nonsense, but for the sake of argument), it would be a much less power-hungry chip.

Yes, that is what I meant. It's pure hypothesis, but I believe a DX9 architecture with similar design decisions to the G80 could theoretically have achieved a better performance-per-watt ratio. That said, the G80 is what it is, and for the most part I'm happy with it. But I would like more power efficiency and less heat dumped into the room/case.
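
To be clear about the metric I keep arguing over, here is a rough sketch of how I'd compare cards on performance per watt. Every figure in it is a made-up placeholder for illustration, not a measurement of any real card:

```python
# Rough illustration of the performance-per-watt comparison being argued here.
# All figures below are hypothetical placeholders, not measured numbers.

def perf_per_watt(avg_fps: float, board_power_w: float) -> float:
    """Frames per second delivered per watt of board power."""
    return avg_fps / board_power_w

# (name, hypothetical average fps, hypothetical board power in watts)
cards = [
    ("older DX9 part (hypothetical)", 60.0, 110.0),
    ("newer DX10 part (hypothetical)", 120.0, 180.0),
]

for name, fps, watts in cards:
    print(f"{name}: {perf_per_watt(fps, watts):.2f} fps/W")

# With these made-up numbers the newer part draws more absolute power
# (180 W vs 110 W) yet still comes out ahead on fps/W (0.67 vs 0.55),
# which is exactly the distinction between "more efficient" and
# "consumes more power" that this argument keeps circling.
```

Both things can be true at once: better fps per watt, and still more heat dumped into the room.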
 
This whole DX10 and cards thing is starting to remind me of how consoles evolve:
1) forward-looking, fixed spec;
2) first-generation hardware: hot, with strange pieces missing (UVD in R600, scaling and hard disks in consoles, etc.);
3) die shrinks and refinements lead to refreshes that run cooler, have fewer quirks and earn the companies actual money, because content has emerged.

Of course DX is not set in stone (10.1 is coming) and the cards themselves are under active development, but honestly, the 2900 launch is a mirror of the Xbox and PS3 launches.

I think it's far more reasonable to call a spade a spade: the 2900 is a late, hot underperformer. This is not indicative, however, of an industry trend, nor even of ATI's future performance.
 
If you've been listening to the graphics companies' quarterly conference calls for the last three years or so, it was all "Vista is the inflection point. Vista will help us sell more discrete. Vista will... etc., etc., etc."

I think this mindset clearly had an impact on how much NV and ATI tried to stuff into their highest end parts, which resulted in die size and power consequences for this generation.

Re UVD, I haven't seen much testing yet on the matter, but so far I'm a bit puzzled by those who see it as greatly consequential that it isn't in R600. It's an acceleration piece that's particularly nice to have for entry- and mid-level CPUs, to take some of the load off. Given the presumption that R600 is going to be paired with high-end dual- and quad-core CPUs, the lack of UVD shouldn't make much difference in performance. My understanding today (and by all means somebody correct me if they understand otherwise) is that not having UVD has zero impact on the image quality of video playback.
 
If you've been listening to the graphics companies' quarterly conference calls for the last three years or so, it was all "Vista is the inflection point. Vista will help us sell more discrete. Vista will... etc., etc., etc."

No surprise, as Microsoft has always said that you will have to provide a minimum level of graphics performance to get a "Windows Vista Logo". Without the delay it would have been an even worse time for IGPs.

IIRC the new hope for the IHVs is that D3D10 support will become a requirement for the "Vista Gold Logo" real soon now.
 
Geo: why do you imply that an R600 will always end up in a PC with a high-end CPU? If it were single-slot, I'd already have it in my SFF PC featuring an Athlon X2 3500+, for example. Some people might buy it for their lower-end PCs solely because of the DX10 capability.

Obviously, in a lower/midrange PC, it might be that your HD-vids stutter like mad without it.
 
It's not single slot tho, is it? And it has steep power requirements, doesn't it? I guess I just don't know why anyone would buy a 215W double-slot high-end enthusiast GPU unless they were interested in high-end gaming, in which case not pairing it with a reasonable CPU is just asking for a lot of pain. I mean, go buy an 8600 if high-end gaming guy isn't who you are.

Tho I'd be curious to see how it did with your X2 3500 for video, as I would consider that a bit aged right now, but not of the class of, say, a single-core P4 3.2GHz or the Sempron 2800 that somebody tested it with. I did h.264 1080p quite nicely (no stuttering) with an X1800XL and an X2 4200 in early 2006. I haven't heard anyone suggest it's worse than R5xx for this stuff (but then I haven't seen it tested against R5xx either).

Edit: Tho, say, a year from now, when they start doing GT/GTO/GTO2-type knock-offs with this GPU, this might be a much more valid point to be making.
 
What I meant is, if my PC had enough room for it (or if it was a regular tower), I'd be the first to get one (or the theoretical single-slot G80 or whatever single-slot for that matter).

New toy factor, DX10 compliance, no bottlenecks on the GFX side of things, etc. are reason enough for me and, I guess, many other people as well.

(well, theoretically, since I won't go Vista and thus don't care about DX10 at this point in time)
 
New toy factor, DX10 compliance, no bottlenecks on the GFX side of things, etc. are reason enough for me and, I guess, many other people as well.
I fail to see how the X2600 is not superior in every way for that, though. It's also a new toy, it's also DX10 compliant, and with that CPU it won't be a real bottleneck. And it takes 3-4x less power, is quieter and creates less heat...
The 2900XT might be OK, but I fail to see how it is not inferior in every possible way for what you're thinking of. :)
 
I fail to see how the X2600 is not superior in every way for that, though. It's also a new toy, it's also DX10 compliant, and with that CPU it won't be a real bottleneck. And it takes 3-4x less power, is quieter and creates less heat...
The 2900XT might be OK, but I fail to see how it is not inferior in every possible way for what you're thinking of. :)

Although I definitely understand what you're saying here, Arun, I think his point is that a "flagship"-level card should have all the great features of a mid-range card, plus more. I admit I was really surprised when I saw both ATI/AMD and NVIDIA offering better HTPC/multimedia features on their midrange cards than on their top-of-the-line models... (think the new PureVideo engine and UVD)...

An analogy could be found with cars... imagine if BMW offered a new engine for the 3-series but then didn't offer it on the more expensive M3 (i.e., the 3-series would be faster than the M3). Granted, the "flagship" GPU is designed around maximum gaming performance, but for the prices these companies are getting these days for a flagship GPU, it should come with every bell and whistle known to man... lol... ;)
 
Well, Corvettes cost a lot more than Blazers, but Blazers will do a lot better in the mud. But, yeah, it is a bit counterintuitive, and part of what branding is supposed to help you manage...
 
I'm actually fairly happy with the industry at this point in time. We're seeing some very interesting rumors on upcoming CPUs, and while the Radeon 2xxx was a somewhat lackluster launch that could possibly signal a future that sees NVIDIA alone in the discrete, high-end graphics market, I'm pretty stoked about the 8800 GTS I borrowed earlier this week. Probably the best new video board I've played around with since the 9700 Pro.
 
I fail to see how the X2600 is not superior in every way for that, though. It's also a new toy, it's also DX10 compliant, and with that CPU it won't be a real bottleneck. And it takes 3-4x less power, is quieter and creates less heat...
The 2900XT might be OK, but I fail to see how it is not inferior in every possible way for what you're thinking of. :)

Is it already in stores? Have I missed something this month? ;)

Waiting to see how it performs; it's a possible candidate for sure.
 
the Radeon 2xxx was a somewhat lackluster launch that could possibly signal a future that sees NVIDIA alone in the discrete, high-end graphics market,

"Slouching towards Gomorrah"? That was always the nightmare scenario, but I'm not there yet. I think a cigar is just a cigar on this one.
 