What do you expect for R650

What do you expect for HD Radeon X2950XTX

  • Faster than G80 Ultra by about 25-35% overall

    Votes: 23 16.4%
  • Faster than G80 Ultra by about 15-20% overall

    Votes: 18 12.9%
  • Faster than G80 Ultra by about 5-10% overall

    Votes: 18 12.9%
  • About the same as G80 Ultra

    Votes: 16 11.4%
  • Slower than G80 Ultra by about 5-10% overall

    Votes: 10 7.1%
  • Slower than G80 Ultra by about 15-25% overall

    Votes: 9 6.4%
  • I cannot guess right now

    Votes: 46 32.9%

  • Total voters
    140
RV670 = 48×5D, replacing the X1950XT in the mid-range market

If R650 or R670 fights the 8800 Ultra,
and R700 goes against Nvidia's G100,

then what fights G90 (or G92)?

So cancel R650 or R670, and use R680 to fight G90 (or G92).

This means that R650 was only an R600 shrink with higher clock speeds, nothing more.

55nm in the high end in early 2008 doesn't sound any good, and R680's sampling date is missing. Why doesn't AMD give up racing to smaller processes with its high-end GPUs? The delay/"hit in time" ratio is 2:0 with each new smaller process, and these delays hurt the low-end/mainstream/performance GPUs too.

Looks like we won't see any performance GPU from ATI before 2008Q1 (or they'll come out again with cut-down high-end cards as performance parts, like the X800GTO/GTO2 and X1800GTO/GTO2 were :cry: ). This is very bad news from the user's perspective: ATI will miss Unreal 3 and Crysis without any performance GPU, and most users will upgrade their computers for those two games this year.
What happened with RV670? It was scheduled for Q3 as a 65nm part.
 
I think you should forget the R670 and R680 rumors. RV670 is one thing, but was there ever anything on R670 besides Fuad-news? As for R680, like someone already explained, there was a reason there were a couple of x20 and x80 chips on the roadmap before; there shouldn't be any now.
 
Perhaps they're cutting back on the prototyping and bring-up costs of lower-volume GPUs as part of AMD's general austerity drive?
 
Looks like we won't see any performance GPU from ATI before 2008Q1 (or they'll come out again with cut-down high-end cards as performance parts, like the X800GTO/GTO2 and X1800GTO/GTO2 were :cry: ). This is very bad news from the user's perspective: ATI will miss Unreal 3 and Crysis without any performance GPU, and most users will upgrade their computers for those two games this year.
How many will upgrade for these games in absolute numbers? Will they upgrade to an 8800GTS or higher? That is, will AMD's lack of competitiveness at the top have anything but a marginal impact?

Both the data showing market share per price interval, and the Steam data, which is the best resource available for examining the actual hardware and software used by this group of online gamers, really should be stickied.

There's too much of an ivory tower mindset in this hideout for industry insiders and technology enthusiasts.
 
It's entirely possible R600 will see some performance improvement as time goes on as well, guys.

Seeing as it's based on the same supposedly forward-looking, shader-heavy/texture-light principles as R580.

I wouldn't be surprised if as per the rumors, it's a little faster than 8800GTX in Crysis.
 
Seeing as the marketshare of graphics cards costing $250 and up is less than 3%, these are pretty much incapable of generating much profit on their own, nevermind the fact that their materials cost is higher, yields are lower, and volumes to offset development costs for the specific part is way lower.

You could argue that they act like marketing for the lower end parts. But then again, you would have one hell of a hard time actually proving that it was particularly effective.

I agree that AMDs current situation is not of their choice. It is a result of choices they made, however, and I feel they would have been one hell of a lot better off if they had focussed on maintaining good mobile marketshare for the last couple of years, and then using those chips as the basis for OEM friendly desktop parts. But that is just my personal opinion.

How certain are you of the bolded statement? The best modern approach to marketing is generating demand for what you're offering; this is the prime goal, because price is something we're willing to pay in order to satisfy a demand created from the outside (and if you want to discuss economics, I'd be quite glad to, but we'd derail the thread). John and Jane Doe aren't the B3D whiz-kids who know that the 8600 or 2600 are actually far suckier than the 8800/2900. Most "prime-time" (so to speak) reviews focus on the top end. Couple those two and what do you get? John staring at an 8800 or 2900 review and going: wee, that's great, the 8800 is da bestest card available... oooh, I'm kind of short on cash, but that 8600 should be fairly close... there's only 300 whatever points between them... I'll go get that. Not to mention that top-end parts are mostly the parts that are talked about in dev interviews, recommended (not minimal) system specs, and so on and so forth. Top-end cards get a lot of free coverage, and they're inherently sexy. If you don't have anything there, you lose many advantages IMO.

Why do people buy 1 Series BMWs? Because the 7 Series and the Ms and so on, the high-end, comparably lower-volume parts, are being pimped everywhere and have built a certain image of quality, performance, etc. Why are producers like Renault trying to enter the high-end market (the failed Vel Satis, for example)? Because the perception created by having high-end stuff flows down and encompasses the lower-end ones. Again, I underline, one must think outside the box of the whiz-kid who counts ROPs, RBEs, SPUs and so on... think Joe and Jane Doe, and think of marketing as more than advertising, because it certainly is.
 
There's too much of an ivory tower mindset in this hideout for industry insiders and technology enthusiasts.

I don't know why you need any more evidence of the value of performance leadership than the fact these companies pour millions of dollars into attaining it. If there was no value why would they do it?

It's all about establishing and promoting a brand. Nvidia and ATI are brands. G80 and R600 are brands. It's those same industry insiders and enthusiasts that are best equipped to acknowledge the quality of mainstream products in isolation. But a lot of regular folk buy brands. ATI didn't produce a 700M transistor part because they thought performance leadership was unimportant. And that is the only context in which the situation can be evaluated IMO.
 
Just some thoughts:

Say G92 has 96 shaders, 800MHz (shaders at 1600+), a 256-bit GDDR bus, and matches a G80 Ultra in single-chip performance, while it consumes as much power as, or a little more than, G71.
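For what it's worth, here's a quick back-of-envelope calculation of what those hypothetical specs would mean. The MADD rate (2 flops per ALU per clock, G80-style scalar units) and the memory data rate are my own assumptions for illustration, not confirmed numbers:

```python
# Back-of-envelope numbers for the hypothetical G92 specs above.
# Assumptions (not confirmed): G80-style scalar ALUs issuing one
# MADD (2 flops) per clock; GDDR data rate picked for illustration.
shaders = 96
shader_clock_hz = 1.6e9            # "shaders at 1600+" MHz
flops_per_alu_per_clock = 2        # one MADD = multiply + add

gflops = shaders * shader_clock_hz * flops_per_alu_per_clock / 1e9
print(f"Theoretical MADD throughput: {gflops:.0f} GFLOPS")  # -> 307 GFLOPS

bus_width_bits = 256
mem_data_rate_hz = 2.0e9           # assumed 2.0 GHz effective GDDR
bandwidth_gbs = bus_width_bits / 8 * mem_data_rate_hz / 1e9
print(f"Memory bandwidth: {bandwidth_gbs:.0f} GB/s")        # -> 64 GB/s
```

So even on a 256-bit bus, such a part would land in G80 GTS/GTX bandwidth territory only if the memory clocks are high enough.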
 
I don't know why you need any more evidence of the value of performance leadership than the fact these companies pour millions of dollars into attaining it. If there was no value why would they do it?

It's all about establishing and promoting a brand. Nvidia and ATI are brands. G80 and R600 are brands. It's those same industry insiders and enthusiasts that are best equipped to acknowledge the quality of mainstream products in isolation. But a lot of regular folk buy brands. ATI didn't produce a 700M transistor part because they thought performance leadership was unimportant. And that is the only context in which the situation can be evaluated IMO.

But the question remains: how long does it take, and how much is it really worth?
It takes years of dominance, and it's worth maybe 10%...
Is this a lot in the corporate world? Sure, but it's hardly wrist-slitting, or the demise of the competition.
With Intel entering the discrete market, and the proliferation of ultra-low-watt embedded solutions into the desktop market, I can only see a world of hurt coming nV's way.
 
I don't know why you need any more evidence of the value of performance leadership than the fact these companies pour millions of dollars into attaining it. If there was no value why would they do it?

It's all about establishing and promoting a brand. Nvidia and ATI are brands. G80 and R600 are brands. It's those same industry insiders and enthusiasts that are best equipped to acknowledge the quality of mainstream products in isolation. But a lot of regular folk buy brands. ATI didn't produce a 700M transistor part because they thought performance leadership was unimportant. And that is the only context in which the situation can be evaluated IMO.

You made the suggestion that somehow the ATI brand is going to dry up and blow away if they don't have a competitive high-end part on the market in the next six months. Intel has been competing in graphics for years without any effort at producing anything but crap, and they still hold a significant portion of market share.

Would they rather be market leaders, certainly, but crucial to their survival? Hardly.
 
But the question remains: how long does it take, and how much is it really worth?
It takes years of dominance, and it's worth maybe 10%...
Is this a lot in the corporate world? Sure, but it's hardly wrist-slitting, or the demise of the competition.
With Intel entering the discrete market, and the proliferation of ultra-low-watt embedded solutions into the desktop market, I can only see a world of hurt coming nV's way.


Don't forget that the reason nV got to where they are is because they are market leaders. Without leadership products, ATI didn't make much headway pre-9700, did they? I don't remember them having more than 30% marketshare with products that were second in performance.

Why would low-end graphics integrated into CPUs hurt nV? Come on, there is no way they can take on discrete midrange and high-end cards. If it were that easy, the IGPs we have today would have been able to achieve what discrete cards are doing.
 
Why would low-end graphics integrated into CPUs hurt nV? Come on, there is no way they can take on discrete midrange and high-end cards. If it were that easy, the IGPs we have today would have been able to achieve what discrete cards are doing.
Actually, with 20MiB+ of Z-RAM you could do very interesting things if your architecture had excellent performance per mm² in terms of raw processing power. The reason I focus on the latter is that if you somehow manage to significantly reduce your bandwidth bottleneck through one or multiple techniques (Z-RAM, eDRAM, TBDR, pixie dust, etc.:runaway:) then your GPU performance is exclusively limited by how many mm² you are willing to dedicate to it and your raw processing performance per mm² (at your target die size, thus taking scalability into consideration).

I think before a certain level of performance where even texturing bandwidth would become too significant, your limitation is really just how many mm² you want to dedicate to the processing part of the GPU, not bandwidth - so an IGP doesn't have an intrinsic disadvantage, except that it is much more cost-sensitive. There is also the question of whether it makes sense to integrate a large GPU along with a small CPU, but as long as your process is competitive in terms of perf/$ (which a CPU process might or might not be) then that is more of a political question than an engineering or even an economic one, imo.

Call me a cynic, but I'm not incredibly surprised that current IGPs are awful. It's literally just a high-end design ported to a segment where the original tradeoffs don't make any sense. So if I was NVIDIA, I would definitely be worried about Fusion and Intel's implementation of the same idea in the Nehalem timeframe... Looking at the bandwidth and die sizes in the ~$100-$125 segment, that's hardly impossible to beat. So in my mind, this is more a question of whether AMD and Intel *want* to deliver in terms of performance (->mm² and perf/mm²), and whether it even makes sense for them to dedicate the R&D budgets necessary for that.
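The bandwidth-vs-ALU argument above can be sketched as a toy model: for one frame, compare the time the memory bus needs against the time the shader ALUs need; whichever is larger is the bottleneck. All the numbers below (bytes and flops per pixel, bandwidth, FLOPS) are illustrative assumptions, not real IGP specs:

```python
# Toy model of the bandwidth-vs-ALU bottleneck argument.
# Every number here is an illustrative assumption, not a real spec.
def frame_times(pixels, bytes_per_pixel, flops_per_pixel,
                bandwidth_gbs, gflops):
    """Return (memory time, ALU time) in seconds for one frame."""
    t_mem = pixels * bytes_per_pixel / (bandwidth_gbs * 1e9)
    t_alu = pixels * flops_per_pixel / (gflops * 1e9)
    return t_mem, t_alu

# 1280x1024, ~40 bytes of memory traffic and ~200 flops of shading
# per pixel, on a part with 12.8 GB/s and 30 GFLOPS (assumed).
t_mem, t_alu = frame_times(1280 * 1024, 40, 200, 12.8, 30.0)
limit = "bandwidth" if t_mem > t_alu else "ALU"
print(f"mem {t_mem*1000:.2f} ms, alu {t_alu*1000:.2f} ms -> {limit}-limited")
```

With these made-up inputs the ALU side dominates, which is the point being argued: once something like Z-RAM/eDRAM/TBDR relieves the bandwidth side, performance scales with how many mm² of processing you dedicate, not with the bus.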
 
Is this a lot in a corporate world? Sure, but its hardly wrist slitting, or the demise of the competition.

I'm not sure where I suggested that performance leadership was required for survival. ATI was doing fine before they took a stab at the high end with R200. I am merely pointing out that the prestige and mindshare associated with high-end parts have more value than was suggested. Given that even $400 parts are a minuscule part of the market, would you say that everything is hunky-dory if ATI decides to only peddle RV630-class hardware and ignore the high end completely? Last I checked, the R600 speculation thread was a whole lot bigger than the RV630 one. Of course, now that ATI's high-end offering is sub-par, that segment is all of a sudden inconsequential :???:
 
You made the suggestion that somehow the ATI brand is going to dry up and blow away if they don't have a competitive high-end part on the market in the next six months.

No I didn't. Try reading my post again...slowly this time.

Intel has been competing in graphics for years without any effort at producing anything but crap, and they still hold a significant portion of market share. Would they rather be market leaders? Certainly. But crucial to their survival? Hardly.

What do Intel's billions of cheap, worthless integrated GPUs have to do with the discrete GPU market? Intel is a CPU and chipset company. ATI is a GPU company: they don't make discrete cards just for shits and giggles.

Again, these companies don't strive for mediocrity so it's patently obvious (to me at least) that they see more value in being a market leader than some posters here. ATI certainly aimed for leadership with R600. And I'm sure they will do so with R700 and beyond.
 
No I didn't. Try reading my post again...slowly this time.



What do Intel's billions of cheap, worthless integrated GPUs have to do with the discrete GPU market? Intel is a CPU and chipset company. ATI is a GPU company: they don't make discrete cards just for shits and giggles.

Again, these companies don't strive for mediocrity so it's patently obvious (to me at least) that they see more value in being a market leader than some posters here. ATI certainly aimed for leadership with R600. And I'm sure they will do so with R700 and beyond.



I would rather see a discrete GPU from Intel designed by team members from PowerVR. In addition to that, Intel's fixed cost of wafer production/wafer factories is higher than expected, and they need to explore new applications for that kind of ASIC, one specialized in huge dies, while increasing the utilisation of their wafer production. But how are the gross margins for products in the sub-$250~300 market? It seems to me that, compared to the 7900GT/7900GS in their initial timeframe, the margins of the entire G8X family seem a bit lower in all respects.
 
[attached image: 30kni1.jpg]

Single-GPU setup (non-CrossFire), 1144MHz GPU overclock on R600. Amazing :) :)

It will be fun to see how far they can push the GPU with the upcoming R650/R680 (or whatever they call it) at 65nm or 55nm....
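For perspective on how big that overclock is, a quick calculation, assuming the commonly cited 743MHz stock core clock for the HD 2900 XT (R600):

```python
# How far above stock is that 1144MHz result?
# Assumes the commonly cited 743 MHz stock core clock for the
# HD 2900 XT (R600).
stock_mhz = 743
oc_mhz = 1144
pct = (oc_mhz - stock_mhz) / stock_mhz * 100
print(f"{pct:.0f}% over stock")  # -> 54% over stock
```

Roughly half again over stock on 80nm, which is why a 65nm/55nm refresh with similar headroom looks interesting.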
 
I don't know why you need any more evidence of the value of performance leadership than the fact these companies pour millions of dollars into attaining it. If there was no value why would they do it?

It's all about establishing and promoting a brand. Nvidia and ATI are brands. G80 and R600 are brands. It's those same industry insiders and enthusiasts that are best equipped to acknowledge the quality of mainstream products in isolation. But a lot of regular folk buy brands. ATI didn't produce a 700M transistor part because they thought performance leadership was unimportant. And that is the only context in which the situation can be evaluated IMO.
ATI no longer exists, so the ballgame may have changed, at least a year or so from now, once designs are far enough along the pipeline for a change of direction to show.

That said, it is obvious that AMD, and previously ATI, feel compelled to compete at the high end. It is (still) the nature of technology-enthusiast reporting to focus almost exclusively on the high end, and having top performers (helped by driver optimisations) ensures that your name is glued to the top of every benchmark chart around.

But the value of this is hard to quantify. ATI was murdered in the marketplace in spite of having the 19X0XTXs, arguably the fastest parts at the time, and lost marketshare very rapidly both in mobile and stationary applications. AMD, in spite of their rebranding of Opteron->Gaming dual socket systems, have still lost a lot of marketshare to Intel. Not only are the effects of brand building difficult to assess, brand building through having a high performance offering is a special case. How useful is it really? Suffice to say, it doesn't seem as useful as actually being competitive in the market where a consumer is interested in buying, which is actually a pretty healthy sign.

And it doesn't just come down to price/performance, ATI used to compete successfully in the mobile space based not only on performance, but also on thermal engineering which simplified the design of the whole computer, enclosure and cooling, and gave the consumer quieter computers with longer battery lives. Driver quality is likewise important to consumer satisfaction, and so on.

Obviously the HD2900XT is a disaster: late, underperforming and overheating. But even if it had matched or exceeded the 8800GTX, would that have sufficed to make it a success? Would it have made sense? Would anyone, except for a handful, give a damn regardless of the benchmark charts? This is where the marketshare data vs. price level, and the Steam survey, now with over half a million replies, tell a pretty definitive story: no, it wouldn't really have mattered much. The 8800, uncontested champion in performance and features for 9 months (!), has achieved a whopping 1% penetration among online gamers. (And at this low level, industry insiders are likely to significantly affect the data, not to mention that the WoW/Sims part of the market is likely even less interested in the top-end offerings.)

The high end monster card path is, IMO, a dead end in terms of technology evolution.
Regardless of technology though, what is more important is that the market rejects them. People just don't buy, in spite of advertising and tech reporting hype. So maybe it would make sense for the manufacturing and reporting industries to start asking themselves what people actually desire and spend their efforts there. It's not likely to be GPGPU and Crossfire....

The market data really should be stickied - so much discussion here is based on ideas about the market that are just plain wrong.
 
Single-GPU setup (non-CrossFire), 1144MHz GPU overclock on R600. Amazing :) :)

A glimpse of what R600 might have been without power leakage, and with the high clocks that could then have been possible? At one point there was a rumour of R600 running at 1000 MHz as standard, but I guess the TSMC 80nm process can't handle that for a part as complex as R600.

I hope the refresh solves those issues, otherwise it will be as unattractive as R600.
 