R9500 Clarification

Maverick said:
Of course, maybe there will be a registry hack to turn on the deactivated pipes. If you're lucky, you could end up getting an actual 9700!
This is how rumors start, folks...
 
Althornin said:
Maverick said:
Of course, maybe there will be a registry hack to turn on the deactivated pipes. If you're lucky, you could end up getting an actual 9700!
This is how rumors start, folks...

true true, but this pipe deactivation thing really made sense to me.

I think someone on rage3d was talking about it, perhaps one of the ATI employees.

Basically, he said they were doing it to improve the number of usable parts for ATI. He said the chips with the two pipes disabled would be used for "another" card being released in the near future. He also noted the clock speeds on these chips would not be a full 325MHz, as most of them were rejected for not reaching this speed or for having defects.

This does, of course, mean the "other" chips will be a lot more flexible in overclocking. You could get a chip that made it to 310MHz (not high enough for a 9700 Pro, perhaps not even enough for a 9700 vanilla), and this chip could be clocked conservatively at perhaps 280MHz. The "other" chips could turn out to be quite like the Ti4200 in overclocking value.
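
In code form, that binning logic might look something like this; the cutoffs and bin names are my guesses for illustration, not ATI's actual test flow:

```python
# Hypothetical speed-binning rule; every cutoff below is a guess made up
# for illustration, not ATI's actual test flow.
def bin_part(max_stable_mhz, all_pipes_good):
    """Assign a tested die to a product bin."""
    if all_pipes_good and max_stable_mhz >= 325:
        return "9700 Pro"
    if all_pipes_good and max_stable_mhz >= 275:
        return "9700 vanilla"
    if max_stable_mhz >= 280:
        return "cut-down part, clocked at 280MHz with headroom to spare"
    return "reject"

# The 310MHz chip from above: if a pipe is also bad, it misses both 9700
# bins but lands comfortably in the cut-down bin.
print(bin_part(310, all_pipes_good=False))
```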
 
And if the yields improve faster than ATI expects, there may be many 9500's out there that have usable pipes that are disabled. Whether or not they could ever be re-enabled is a good question, however.

That is, since this is presumably caused by errors in the core, different pipelines will be disabled on each chip. Therefore, it's certainly not going to be as simple as "disable pipelines 1 and 2," as might happen in a driver.

Still, the disabling of the pipelines *might* be done in the BIOS, where at bootup it runs a few checks on the pipelines and disables those that don't pass, as well as any others needed to keep, say, two always disabled.
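
A minimal sketch of what that boot-time scheme could look like; the quota, the test hook, and the idea of a writable disable mask are all invented for illustration:

```python
# Sketch of the hypothetical boot-time scheme above: test each pipe,
# disable any that fail, then pad the mask so a fixed number (two here)
# is always off. The quota and register interface are invented.
NUM_PIPES = 8
QUOTA = 2  # pipes that must always end up disabled

def build_disable_mask(test_results):
    """test_results[i] is True if pipeline i passed its self-test."""
    mask = 0
    for pipe, passed in enumerate(test_results):
        if not passed:
            mask |= 1 << pipe          # disable every failing pipe
    for pipe in range(NUM_PIPES):      # pad up to the quota with good pipes
        if bin(mask).count("1") >= QUOTA:
            break
        mask |= 1 << pipe
    return mask  # the BIOS would write this to some config register

# Pipe 5 is broken: it gets disabled, plus pipe 0 to meet the quota.
print(format(build_disable_mask([True] * 5 + [False] + [True] * 2), "08b"))
```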

I believe I heard of another possibility: that the core is actually modified at TSMC to disable the damaged pipelines. This would make it impossible for any modification to re-enable them.
 
Most posts in this and other threads on the 9500 "pipeline disabling" subject have centered on half of the pipelines being left functional for the 9500, giving a 4 x 1 part.

What are the chances that only two pipelines would need to be disabled, and the 9500 could be left as a 6 x 1 part?
 
This idea of resurrecting rejects as lesser-functioning parts is (IMHO) a pipe dream.

1) We don't know what the dominant defect mode will be
2) There's no guarantee a defect in a pipe will leave the rest of the chip functional (it could be a short, for example)
3) We don't know how big a pipe is compared to the rest of the die (so it's hard to judge how many parts will statistically fail in only this mode)

In other words, I think the percentage of parts failing in exactly the right configuration to make this possible is mighty low for resurrection to be the jeu du jour.
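
To put rough numbers on that intuition, here's a toy Poisson defect model; the area share and defect rate are pure assumptions:

```python
# Toy Poisson yield model; every number below is an assumption.
# A die is "harvestable" only if it has at least one defect and every
# defect lands inside the pixel pipes, leaving the rest of the die clean.
from math import exp

pipe_fraction   = 0.3   # assumed share of die area taken by pixel pipes
defects_per_die = 0.5   # assumed average defect count per die

p_clean       = exp(-defects_per_die)                        # no defects at all
p_rest_clean  = exp(-defects_per_die * (1 - pipe_fraction))  # nothing outside pipes
p_harvestable = p_rest_clean - p_clean                       # defects only in pipes

print(f"fully good dies:  {p_clean:.1%}")        # ~60.7%
print(f"harvestable dies: {p_harvestable:.1%}")  # ~9.8%
```

Even with a generous 30% of the die given over to pipes, the harvestable pool is a sliver of the good pool, and this model doesn't account for shorts that kill the whole chip.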

If they're using this same die for double duty (i.e. 9500 & 9700), it's going to be essentially rebadging, with fully functional parts of the die turned off, rather than picking through dead parts to find ones that marginally work.

Maybe it makes sense economically to rebadge the same die (as in the 4600 vs. 4400 vs. 4200), or maybe not (GF2 vs. GF2 MX). Either way, if you're disabling a feature, and that feature is a large percentage of the die, that's money you're essentially pouring down the toilet.

In the Intel Celeron vs. PIII case, if you look at the die photos, the cache is a relatively small part of the die. I'm nearly certain that a Celeron was not a PIII on which only a portion of the cache passed muster. If you had a defect density that would regularly kill half the cache, you'd have so many defects that the die would just be dead. I suspect it was the same die, same features, but with the functionality disabled.

Of course, all in my opinion.
 
The only problem with the idea of having the same die, just with the functionality disabled, is that it is very expensive to produce chips that large for the performance/mainstream market.

As for the P3, don't forget that Intel's Celerons during the early P3 era were some of its earlier attempts at integrating a relatively large cache onto the processor die (if I remember correctly).

It may well be that they got much better yields by assuming that a certain percentage (probably not half... but closer to 20%) would fail.
 
http://www.intel.com/intel/intelis/museum/exhibit/hist_micro/hof/hof_main.htm

Look at the Celeron and PIII. (Though something is funny, because there was the Celeron, which was PII-class, and then the Celeron II, which was PIII-class.)

The pictures are identical, as far as I can tell, and the die area of the cache (the big blocks on the left) isn't so big that you'd expect a defect density high enough to disable exactly half of the cache (or even 20%), yet leave the rest of the die unscorched, to occur statistically often enough to make it viable to market a part based on rejects.

But, as I mentioned, I guess since the cache didn't dominate the area, it made sense for Intel simply to rebadge.

In the case of a pixel pipe, I have no idea how big it is, so it's tough to say whether it's expensive just to disable half of them or not.

It may be that the logic required for the pixel pipes is insignificant compared to other aspects of the chip, hence simply disabling them and taking the margin hit is more cost-effective than respinning the chip to cut out the extra area.

Or, it may be that the chip is pad limited (the I/O pads required dictate a minimum die size), so that if you cut out the extra pipes, you'd still be stuck with the same size die. (It's not transistors that cost money, it's die area.)
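
A toy pad-limit calculation makes the point; the pad count, pitch, and areas are all made-up numbers:

```python
# Toy pad-limit calculation; pad count, pitch, and areas are invented.
pads      = 700        # assumed signal + power/ground pads
pad_pitch = 0.09       # assumed mm between pad centers

min_edge = pads * pad_pitch / 4    # pads spread along four die edges
min_area = min_edge ** 2           # smallest die the pads will allow

core_full = 210                    # mm^2, assumed full core area
core_cut  = core_full * 2 / 3      # assumed core with two pipes removed

print(f"pad-limited floor: {min_area:.0f} mm^2")  # ~248 mm^2
print(f"cut-down core:     {core_cut:.0f} mm^2")  # ~140 mm^2
# The cut-down core is smaller than the pad floor, so removing the pipes
# saves transistors but not die area, and area is what costs money.
```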

Anyways, I'm sticking firmly to my view that any 9500 with half the pipes on the same die is not culled from rejects risen from the grave.
 
Off Topic

i got my pistol pointed glock, ready to lay shots non-stop until I see your monkey ass drop.
WTF is this about? You realize this website focuses on 3D, not idiocy, right?
 
RussSchultz said:
The pictures are identical, as far as I can tell, and the die area of the cache (the big blocks on the left) isn't so big that you'd expect a defect density high enough to disable exactly half of the cache (or even 20%), yet leave the rest of the die unscorched, to occur statistically often enough to make it viable to market a part based on rejects.

Again, here's what I think:

1. It's non-trivial to implement large amounts of cache on-die. The process used is slightly different, which could make cache fabbed on a process designed for core logic somewhat unreliable. (This opinion is born more out of circumstantial evidence than from working in the computer industry... i.e. I don't really know if there is any complication like this.)

2. Supposedly, what happens is that a defect in one of the cache lines (not sure how many there are... presumably a cache line is no larger than a few kilobytes) disables the entire line. So it doesn't seem inconceivable for just a few defects to make a fair portion of the cache unusable. A defect in some part of the cache doesn't disable only one byte; it could easily disable a few kilobytes.
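
A toy model of that granularity argument; the cache and block sizes are assumed:

```python
# Toy model of coarse-grained cache disabling; sizes are assumptions.
CACHE_KB = 256   # assumed total cache
BLOCK_KB = 4     # assumed smallest unit that can be disabled

defect_blocks = [3, 17, 17]    # block hit by each defect (two share one)
dead = set(defect_blocks)      # a block dies no matter how small the hit
lost_kb = len(dead) * BLOCK_KB

print(f"{len(defect_blocks)} defects kill {lost_kb}KB of {CACHE_KB}KB")
# Three single-bit defects cost 8KB here: the disable granularity, not
# the defect size, sets how much capacity is lost.
```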
 
Here's what I think: you shouldn't talk about things you don't really seem to be knowledgeable about.

While I can't give you firm numbers (the only real information I have relates to our products, which is proprietary), I assure you we have no problems achieving profitable yield, without disabling any RAM, in TSMC's standard processes, on parts that have more SRAM than the Celeron/PIII. Intel would have even fewer problems, since they control the fab directly; plus I believe they were on a smaller process (which gives better yield).

So, to reiterate my point: if you've got enough defects in your SRAM to disable half the cache, you're going to have defects all over the die. (I.e. if the defect rate is high enough that there's a 50% chance of a defect in each block of the Celeron cache, there's a pretty slim chance there wouldn't be a defect elsewhere.) A simpler way to solve the "50% of the cache is bad" problem is to go with a self-repairing RAM, such as those from Virage Logic or Artisan.

That, coupled with the statistics of it (i.e. 70-90% yield of good die leaves only 30-10% of even potentially half-good die), means you're building a lot of good die to supply the half-good ones, which is exactly the opposite of what the market wants.
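
Concretely, with made-up fractions (assuming, say, only one reject in four even fails in the right pattern):

```python
# Toy supply math; the salvage fraction is a pure assumption.
for good_yield in (0.70, 0.80, 0.90):
    rejects   = 1 - good_yield
    half_good = rejects * 0.25   # assume 1 in 4 rejects is salvageable
    ratio     = good_yield / half_good
    print(f"yield {good_yield:.0%}: {half_good:.1%} half-good, "
          f"{ratio:.0f} good dies per salvageable reject")
```

The better the yield gets, the worse the supply of cheap parts, which is exactly backwards from what the market wants.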

All this leads me to one conclusion: the parts were neutered on purpose, and not harvested from the reject bin.

Oh yeah, add the fact that I've seen this exact practice in action, and I have a little more confidence in my conclusion. ;)
 
The texture pipes are already the biggest things on a chip, and when you double their number and expand them to 128/96-bit floating point, well... they'll probably be taking up most of the die.

I also happen to know (first-hand) that another of the 3d card manufacturers uses this technique, so it's definitely not hard for me to believe that ATI are as well.
 
Maverick said:
The texture pipes are already the biggest things on a chip, and when you double their number and expand them to 128/96-bit floating point, well... they'll probably be taking up most of the die.

I also happen to know (first-hand) that another of the 3d card manufacturers uses this technique, so it's definitely not hard for me to believe that ATI are as well.

See for Yourself (as found on Anandtech):

[Image: r300die.gif (R300 die shot)]


So with half the pixel-shader pipelines, half the vertex shaders, and only a 128-bit interface, the die will be less than 2/3 the size (~75 million transistors). And now, with the overclocking success of the R9700 Pro at 400MHz, maybe we'll see the R9500 at >400MHz on a regular basis (same process, lower transistor count).
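
Rough math behind that estimate; only the 107M total is a published figure, while the per-block shares are eyeballed guesses from the die shot:

```python
# Rough transistor budget for a cut-down R300. Only the 107M total is a
# published figure; the per-block shares are eyeballed guesses.
R300_TRANSISTORS = 107e6

shares = {                      # assumed fraction of the full budget
    "pixel pipes":      0.40,
    "vertex shaders":   0.10,
    "memory interface": 0.10,
    "everything else":  0.40,
}

cut = (shares["pixel pipes"] / 2        # 8 pipes -> 4
     + shares["vertex shaders"] / 2     # 4 vertex shaders -> 2
     + shares["memory interface"] / 2)  # 256-bit -> 128-bit

remaining = R300_TRANSISTORS * (1 - cut)
print(f"~{remaining / 1e6:.0f}M transistors ({1 - cut:.0%} of the full die)")
```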

So my conclusion would be that the R9500 uses a new core instead of disabled R9700 cores.
 