Low yield for the 9600?

nogamer, I'd really just like to see next-gen parts, not just more speed-binned parts. The R400 and NV40 are what I'd really like to see in fall, but I doubt it's going to happen.
 
jvd, perhaps I wasn't clear enough, but I was referring to an extra performance battle until the real next-gen parts arrive.
 
Sorry for adding a g to your name. I think I understood what you meant: you want to see faster parts come out for fall. I honestly don't care, as my 9700 Pro is still performing close enough to the current high-end cards. I want the next gen from ATI and Nvidia. What I'm saying is that I don't want to pay $500 for a small performance increase and no new features.
 
Yup, these clock bumps serve a market positioning purpose vs. the corresponding nVidia offerings. Which is important.
However, if the current ATI lineup doesn't entice you into buying, somewhat increased clocks a few months down the line will hardly make you more anxious to part with your cash. To attract upgraders, they have to come out with something better.

Entropy
 
I wish ATI would bump clocks SUBSTANTIALLY, R350 didn't really do that in my opinion. 425MHz wouldn't really be worth it as a new product unless core changes gave more features and/or more speed per clock, but if they could hit close to 450 on the same ol' R350 core, that would be something worth writing home about.

Most people can't upgrade every time a new video card is released, there needs to be a significant bump in performance to stimulate people to plonk down their cash.

My worry is that the market's becoming kinda over-saturated with too-similar products. It makes choosing difficult, especially when one generation overlaps with the next. Should someone go for an NV31 Ultra or an NV36 regular, etc...?

Both ATI and Nvidia have at least two different DX9 series of chips out there, with at least two variants of each series. Now there are multiple board makers for both ATI and Nvidia chips, which means an awful lot of cards out there, while software is still as firmly entrenched in the DX7 generation as ever, over two years after the original GF3's release...

Kinda sucky situation, don't you guys think?


*G*
 
Grall said:
Most people can't upgrade every time a new video card is released, there needs to be a significant bump in performance to stimulate people to plonk down their cash.
Well, if they won't upgrade, then there's no real point in producing a significant bump forward every release, no? ;)

I agree, though, that I'd prefer to see fewer but more stratified releases. It's probably not going to happen, though, with everyone tweaking each chip so much as processes improve and parts availability changes. Maybe ATi thinks that if they're paying for a new tape-out, they may as well be able to market a new card?

This is also a disadvantage of strong competition like we're seeing between ATi and nVidia right now: constant one-upmanship. It ain't that bad, though, compared to the alternative. :)
 
Grall said:
I wish ATI would bump clocks SUBSTANTIALLY, R350 didn't really do that in my opinion. 425MHz wouldn't really be worth it as a new product unless core changes gave more features and/or more speed per clock, but if they could hit close to 450 on the same ol' R350 core, that would be something worth writing home about.

But then where do you go from there? Eking out speed bumps and process improvements with some headroom lets you get a good life out of your product and a good return on the massive investment it takes to get a chip to market nowadays.

Don't forget these products take a couple of years to design, and then if you are lucky they have six months as a top tier product, then six as second tier, and then they are your low end part at lower profit.

If R420 is twice as fast as R350, just six months down the line, with new features, that is still a massive rate of progress and well in advance of the software developers - is that still too slow for you? :oops:
 
Bouncing Zabaglione Bros said:
If R420 is twice as fast as R350....

That's one hell of a big if. And considering the 9700P was introduced at the end of last summer, even were the R420 to fulfill your hopes, that still isn't too impressive a rate of improvement. The R3X0 design will have enjoyed a long life at the top of ATI's product line with modest improvements.

And frankly, I find "twice as fast as R350" to be very optimistic, given the similarities in lithographic process, unless they really throw power budgets overboard. In certain circumstances, possibly, or even probably. But across the board? I doubt it. Unfortunately, it seems we won't know for certain for quite a long time.

Entropy
 
Entropy said:
Bouncing Zabaglione Bros said:
If R420 is twice as fast as R350....

That's one hell of a big if. And considering the 9700P was introduced at the end of last summer, even were the R420 to fulfill your hopes, that still isn't too impressive a rate of improvement. The R3X0 design will have enjoyed a long life at the top of ATI's product line with modest improvements.

And frankly, I find "twice as fast as R350" to be very optimistic, given the similarities in lithographic process, unless they really throw power budgets overboard. In certain circumstances, possibly, or even probably. But across the board? I doubt it. Unfortunately, it seems we won't know for certain for quite a long time.

Entropy


I used that as an example because it's what ATI have stated, and they've been pretty good at doing what they say recently. If you think that doubling the performance of a product in a year "isn't too impressive", I'd like you to name a few other products or industries that do that regularly, or even *aim* to do that regularly. Outside of chip manufacturing of one sort or another, I can't think of anything.

However, you have taken those few words very much out of context, and not really addressed the main points that I made. If your product doesn't have some kind of lifespan and some kind of regular improvements, it probably isn't economically viable. I can't see the market changing so that you get one revolutionary card every three years that costs $1200, and that's all you get to buy until the next three-year cycle begins.

For instance, we have the R350 six months after the R300, and it's basically an evolutionary improvement. Are you suggesting that ATI should not have launched the R300, and waited until now so that it had time to "evolve" into the R350? Obviously that would have lost ATI a lot of sales at a time when they were the only show in town, and much, much better than the competition, and thus still giving customers a great benefit.
 
Doubling texture performance would be a distinct possibility when they move to 130nm to make an 8x2 part. Whether they will increase shader performance relative to clock speed is another matter, though.
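That back-of-the-envelope can be sketched in a few lines. All the clocks and pipeline configurations below are illustrative assumptions, not actual specs:

```python
# Peak texel fillrate: pipelines * TMUs per pipe * core clock.
# Configs and clocks here are assumptions for illustration only.

def texel_fillrate(pipelines, tmus_per_pipe, clock_mhz):
    """Peak texel fillrate in megatexels/s."""
    return pipelines * tmus_per_pipe * clock_mhz

r350_8x1 = texel_fillrate(8, 1, 380)   # an 8x1 design at an assumed ~380 MHz
hypo_8x2 = texel_fillrate(8, 2, 380)   # hypothetical 8x2 at the same clock

print(r350_8x1)  # 3040 (megatexels/s)
print(hypo_8x2)  # 6080 -- texture fillrate doubles at the same clock
```

Which is the point: going 8x1 to 8x2 doubles texturing without touching the clock, but does nothing per se for shader throughput per clock.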
 
Bouncing Zabaglione Bros. said:
However, you have taken those few words very much out of context, and not really addressed the main points that I made. If your product doesn't have some kind of lifespan and some kind of regular improvements, it probably isn't economically viable. I can't see the market changing so that you get one revolutionary card every three years that costs $1200, and that's all you get to buy until the next three-year cycle begins.

For instance, we have the R350 six months after the R300, and it's basically an evolutionary improvement. Are you suggesting that ATI should not have launched the R300, and waited until now so that it had time to "evolve" into the R350? Obviously that would have lost ATI a lot of sales at a time when they were the only show in town, and much, much better than the competition, and thus still giving customers a great benefit.

Frankly, I don't understand what you are arguing.

What people have been saying in this thread is that while providing small increases in clock speed during the lifetime of a chip is nice and all, it doesn't really provide a sufficient step for an upgrade decision if you own a previous, lower-clocked variation of the same chip. I contributed that the benefit to the manufacturer was rather in the relative market positioning vs. their competition, but agreed that small clock hikes didn't provide much impetus for upgrades.

Do you agree or disagree with those sentiments?

As I said, I don't really understand what you're arguing, so forgive me if I respond in a vague manner.

We are talking about semiconductors here, and additionally about a class of ASICs that are amenable to parallel processing, thus directly benefiting from increases both in clock speed and circuit density when new lithographic processes become available. (Thus it is quite reasonable to assume that GPUs should grow faster than CPUs in processing power.) There are also natural generational steps for such devices, coinciding with the progression to more advanced lithography. It isn't as clear-cut as all that, of course, since there are quite a few differences, and evolutionary tweaking is possible using the same wavelength. But that was why I expressed a belief that it would be difficult, within the present power constraints, to provide a factor-of-two performance improvement across the board by simply moving from TSMC 0.15um to 0.13um, never mind the memory technology necessary.
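To put rough numbers on that skepticism (the clock-gain figure below is purely an assumed illustration, not anything ATI or TSMC have stated):

```python
# Rough scaling estimate for a 0.15um -> 0.13um shrink, assuming a
# parallel workload where performance ~ (number of units) * (clock).
# The 15% clock gain is an assumption for illustration, not a spec.

old_node, new_node = 0.15, 0.13

density_gain = (old_node / new_node) ** 2  # ~1.33x transistors in the same area
clock_gain = 1.15                          # assumed modest clock improvement

# If every extra transistor goes into more parallel units:
perf_gain = density_gain * clock_gain
print(round(perf_gain, 2))  # 1.53 -- well short of a factor of two
```

Even granting all of the extra density to parallel units, a straight shrink falls well short of 2x, which is why something else (die size or the power budget) would have to give.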

There has been some talk, primarily from nVidia, about lengthening product cycles, but it's a double-edged sword. The benefit is obviously that you can spread development costs over a longer time. The less obvious drawback is illustrated by the sentiments in this thread: if you don't progress as fast, then neither do people feel the need to upgrade as often, leading to reduced sales, primarily in retail. Furthermore, consumer interest in your field of products will generally wane with a lack of interesting development.

The last may be another reason for releasing these relatively pointless clock revisions - they help maintain an impression of continuous progress and give the hardware sites/rags something to write about. Keeps the pot simmering, so to speak.

I guess I exemplify the second consideration. Some time ago, I said that I would stop visiting B3D until we hit the 0.09um node, because I didn't think that ATI/nVidia would be able to come up with anything other than more-of-the-same until their density/power budgets changed substantially. And to me, not a professional in this field, doing the same things only at one step higher resolution just isn't all that interesting. The benchmark cheating changed that, though. Very interesting to see the reactions and responses.

Peace,
Entropy
 
Entropy said:
Frankly, I don't understand what you are arguing.

What people have been saying in this thread is that while providing small increases in clock speed during the lifetime of a chip is nice and all, it doesn't really provide a sufficient step for an upgrade decision if you own a previous, lower-clocked variation of the same chip. I contributed that the benefit to the manufacturer was rather in the relative market positioning vs. their competition, but agreed that small clock hikes didn't provide much impetus for upgrades.

Do you agree or disagree with those sentiments?

Yes, I agree, but then I never disputed that. I don't know why you think I did. It doesn't make sense as my reply was to this:

Grall said:
I wish ATI would bump clocks SUBSTANTIALLY, R350 didn't really do that in my opinion. 425MHz wouldn't really be worth it as a new product unless core changes gave more features and/or more speed per clock, but if they could hit close to 450 on the same ol' R350 core, that would be something worth writing home about.

My reply was to the idea that ATI should bump the clocks up substantially. I don't think they can bump to the max straight away, either for physical reasons (they cannot make the chip that fast yet) or business reasons (burning their product line up in one go before they can get a return on investment). Why turn the technology into one or two products when they can turn it into ten?

I think you just need to re-read what I wrote, and what I replied to in order to understand it properly.

Entropy said:
As I said, I don't really understand what you're arguing, so forgive me if I respond in a vague manner.

You then took my example comment about "if R420 is twice as fast as R350 [as ATI claim]", which was a few words at the beginning of a sentence, ignored its context, and ran with it off at a tangent. <shrug>
 
Why turn the technology into one or two products when they can turn it into ten?

Well, Intel just introduced to the gaping masses... (drumroll!)... a new box design for their retail CPUs. For the second time this year. So there are even more options available if need be.
Question is - why should we care?


From a technical standpoint, making a faster product isn't as easy as all that. The improvements in featureset and performance of the R3X0 and the NV3X came at a cost that hasn't been sufficiently discussed, I feel.

Power.

It was the parameter that had to give in order to push the envelope to that degree on the process technology at hand. Their predecessors fit comfortably - GPU, memory, video, et cetera - within the 25W limit of the AGP port. Both of the current high-end contenders, however, consume several times the power of the previous-generation products. But neither nVidia nor ATI can pull that trick again; drawing another factor of four more power just isn't an option. nVidia has indicated that 100W+ products may be in the pipeline, so I guess they feel compelled to push a bit further along that path, but the big step has already been taken.

If ATI uses the high-leakage process and allows their total power draw to increase another 50-100% over the R350, I wouldn't be surprised if they could achieve roughly doubled performance across the board, using GDDR3 memory and the best TSMC can do with 0.13um at the time.
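The power side of that guess follows from the standard first-order dynamic-power relation, P ~ C * V^2 * f (switched capacitance, supply voltage squared, clock frequency). A sketch, with all scaling factors assumed for illustration:

```python
# First-order CMOS dynamic power: P ~ C * V^2 * f.
# Every scaling factor below is an assumption for illustration.

def relative_power(cap_scale, volt_scale, freq_scale):
    """Power of a scaled design relative to the original (= 1.0)."""
    return cap_scale * volt_scale ** 2 * freq_scale

# Doubling the parallel units (~2x switched capacitance) and raising the
# clock 25% at an unchanged voltage:
growth = relative_power(2.0, 1.0, 1.25)
print(growth)  # 2.5 -- i.e. 2.5x the power at the same node and voltage

# A shrink helps: if the new process allowed dropping Vdd from, say,
# 1.5V to 1.2V, the V^2 term claws back a good chunk of that:
growth_with_vdrop = relative_power(2.0, 1.2 / 1.5, 1.25)
print(round(growth_with_vdrop, 2))  # 1.6
```

Even in the friendly case, doubled performance lands in the 50-100% extra power range, which is roughly the trade-off described above.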

It wouldn't make for a very desirable product in my book, but then most people here do not seem to care a lot.

What happens with the RV350 and its successors is arguably a whole lot more interesting, since I don't think ATI will want to lose transferability to mobile solutions, nor violate AGP power budgets in this segment. Sure, it's fine if the RV360 can be clocked higher with good yields. But just higher clocks is only moderately interesting. What would be really interesting is if they chose to change the mid-market rules.

Entropy
 
What I am saying is that I spent 400 bucks on my 9700 Pro in September. Why would I pay another 400-500 bucks for the 9800 Pro for a few fps more? I, and most of the sane people who have budgets, will not pay another 400 bucks for what amounts to nothing. Now, I will spend 400 for a DX10 part, or even for a PS 3.0 and VS 3.0 part, that offers close to double the speed of my current card. ATI did that for me once; hopefully they do it twice.
 
Entropy said:

That's a very good point. There's already talk in the IC world about what will happen when circuits get small enough that there are serious issues with getting electrons down the pathways. As you've already pointed out, we'll run into other power issues long before that.

However, there is a basic rule that "intelligence always trumps physics", i.e., when we run into insurmountable physical limitations, we change the goalposts to work around the problem. In the first instance, we find better ways of doing the same jobs (larger wafers, smaller lithography techniques, etc.). Even basic clever design and a good process can give us "impossible" performance from the R350 on the 0.15um process.

Then come the larger paradigm shifts, which will happen as they are needed and become cheap enough: parallel computing, cell computing, quantum computing, nanotechnology, holographic storage, protein storage, etc. These are fundamental changes in "how we do stuff".

We're still within usable power limitations for current and upcoming VPUs/GPUs, but as other technologies arrive to replace what we use now, I expect to see today's issues (the ones that caused the paradigm shifts in the first place) become non-issues.
 
Bouncing Zabaglione Bros. said:
Why turn the technolgy into one or two products when they can turn it into ten?

Intel is constantly tweaking their fab processes and doing minor relayouts of their chips. That's the advantage of owning your own fab. It's also part of the reason why clock speeds for CPUs move up in a generally smooth fashion, instead of in huge jumps every 18 months or so as the process geometries shrink. Of course another reason for the smooth clock ramping is that Intel spreads out its introductions in order to keep a steady pace of planned obsolescence in a market where the underlying product really only changes once every five years or so.

GPUs are different because substantially redesigned GPU cores are released every 18 months or so. The reason GPU cores are overhauled so much more frequently than CPU cores is that graphics is embarrassingly parallel while general-purpose computing is not. That makes designing a balanced CPU core a much more complex task and pushes design schedules out much longer. Plus, it's not as if ATI or Nvidia only spends 18 months on a new core; instead it's a roughly three-year process that is "pipelined" by having two teams working in parallel and out of sync.
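That pipelining works out to simple arithmetic: with staggered teams, the release cadence is the design time divided by the number of teams. A toy sketch, using the rough three-year figure mentioned above:

```python
# Staggered design teams: each spends design_years on a core, teams start
# design_years / num_teams apart, so that's the shipping cadence.
# The specific numbers are illustrative assumptions.

def release_schedule(design_years, num_teams, n_releases):
    """Ship dates (in years from the first team's start) for n releases."""
    cadence = design_years / num_teams
    return [round(design_years + k * cadence, 1) for k in range(n_releases)]

print(release_schedule(3.0, 2, 4))  # [3.0, 4.5, 6.0, 7.5] -- a new core
                                    # every 18 months despite 3-year designs
```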

And even 3 years is not really enough time to get everything right, which is why there is almost always room for a "refresh" part halfway through a core's lifetime--it fixes (some of) the bugs that inevitably remain in a new GPU core when it goes to production. (CPUs ship with errata and disabled features all the time, too, but nothing on the scale of what GPUs do.)
 
Intel is constantly tweaking their fab processes

So are customer fabs - they don't pick a process size and stick with it. That's why the R350 is where it is now - it's not the same 150nm process that the R300 was built on. Likewise, the reason the RV350 would stall beyond 375MHz one month and the next month scale as high as 560MHz (in some cases) was due to 130nm process improvements.
 
DaveBaumann said:
Intel is constantly tweaking their fab processes

So are customer fabs - they don't pick a process size and stick with it. That's why the R350 is where it is now - it's not the same 150nm process that the R300 was built on. Likewise, the reason the RV350 would stall beyond 375MHz one month and the next month scale as high as 560MHz (in some cases) was due to 130nm process improvements.

Right, but the difference is that Intel tweaks their process to match the characteristics of their chip, and can tweak the chip to match the changed process. I would gather that customer fabs have much less latitude to tweak their processes when customers are already shipping working ICs that may depend on the properties of the current process.

Plus Intel is much better at it than everyone else. :)
 
IMHO
If RV350's yields are so good, why:
a) are there so few RV350 offerings?
b) are they so expensive - the 9600 Pro for $200+?!
 
chavvdarrr said:
IMHO
If RV350's yields are so good, why:
a) are there so few RV350 offerings?
b) are they so expensive - the 9600 Pro for $200+?!

a) Clearing old inventory of 9500s out of the channel, or they've all gone to OEMs.
b) Increasing the profit margins for ATI at that price point.
 