AMD: R8xx Speculation

Poll: How soon will Nvidia respond with GT300 to the upcoming ATI RV870 lineup of GPUs?

  • Within 1 or 2 weeks: 1 vote (0.6%)
  • Within a month: 5 votes (3.2%)
  • Within a couple of months: 28 votes (18.1%)
  • Very late this year: 52 votes (33.5%)
  • Not until next year: 69 votes (44.5%)

Total voters: 155. Poll closed.
Perhaps you could cut them with a laser.
Way too slow and probably a whole bunch of electrical and mechanical problems due to the high temperatures. (And if you want to do it faster, you'd have to increase laser power, which would make it worse.)

Also, in the current system, you first tape the wafer to a plastic layer and control the saw just so that it cuts through the wafer, but not through the plastic, so everything stays nicely in place. Can't do that with a laser...

I'm not saying it's impossible, but remember, you want to do this to save costs, so you want everything as standard as can be.
 
Why not just kill the non-functioning cores off in the BIOS? Sure enthusiasts would figure out how to enable them, but they'd be taking their chances on the yields and I personally still think that kind of thing adds to sales. :)
 
http://www.theinquirer.net/inquirer/news/1495726/awaiting-evergreen

Three months ago, two vendors in Taiwan indicated they were planning a customised, power-monster 4890X2 design. Even two x 8-pin GPU power connectors were thought of to allow overclocking.

At the time they told me they would only build this monster if AMD dragged its heels on producing Evergreens. They reasoned that the monster would have three or four months to sell among enthusiasts and those from cold northern climes who need an extra heater in winter. But if Evergreen ran on schedule then the custom 4890X2 would never be built.

Last week they confirmed that the custom 4890X2 idea is dead.
 
Cutting the wafer with a laser (or perhaps just the selective cuts) may not be as expensive or difficult as it might seem initially. Below are some examples of wafer cutting using lasers:

http://www.coherent.com/Applications/index.cfm?fuseaction=forms.AppLevel2&AppLevel2ID=59
http://www.synova.ch/pdf/2000_Korea_Swit.pdf

The first link mentions:

"Silicon wafer cut speeds achieved using the AVIA-X q-switched DPSS laser are comparable with mechanical methods, and offer the benefits of reduced chipping and simplified handling requirements as compared to mechanical techniques."


In the second link they mention:

"The diamond saw-blade exerts extremely high forces on the
wafer during the sawing process, which make it necessary to
fix the wafer firmly in place and can lead to parts becoming
chipped at the edges and corners and cracks occurring in the
wafer, which can subsequently lead to circuit failure.
The most significant problem of abrasive sawing is the
chipping at the upper and lower edges of the cut – chipping
across the width of the line and into the die will inevitably
lead to rejection. Laser processing on the other hand, being
virtually force-free, means that chipping is not a problem."

"The running costs of the saw are very high on account of the
consumption of diamond-edged saw-blades. Furthermore, the
manufacturing process has to be stopped for a tool change, and
the actual change performed manually. Although somewhat
more expensive to purchase, the laser is characterised by
extremely low running costs. The laser flashlamps have to be
replaced after approx. 800 to 1000 hours, and the diamond
waterjet nozzles after approx. 200 hours. All in all, this
equates to considerably lower consumption costs for the laser."

As for disabling cores in the BIOS: it works, of course. However, it means packaging some lower-core chips in larger packages, which would increase the number of SKUs for the same chip, since you would still want to build chips with fewer cores directly (those are usually the higher-volume parts).

In either case, I think putting multiple dice on the same piece of silicon as a means of scaling across a range of performance GPUs with a single design might make a lot of sense.
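
For a rough sense of the numbers, here is a back-of-the-envelope sketch using a simple Poisson defect-yield model (the defect density and die area are invented, purely illustrative values, not anything AMD or TSMC has published):

```python
import math

# Invented, purely illustrative numbers: not real process data.
defect_density = 0.4   # killer defects per cm^2
die_area = 1.8         # cm^2 per single die

# Simple Poisson yield model: probability that one die has zero defects.
yield_single = math.exp(-defect_density * die_area)

# A pair left joined on the wafer needs both dies to be good.
yield_pair = yield_single ** 2

print(f"single die good:                {yield_single:.1%}")
print(f"both dies good (sell as 2x):    {yield_pair:.1%}")
print(f"one of two good (cut, sell 1x): {2 * yield_single * (1 - yield_single):.1%}")
print(f"both bad (scrap):               {(1 - yield_single) ** 2:.1%}")
```

With those made-up numbers, roughly a quarter of the pairs could ship uncut as the big part, about half would still yield one saleable 1x die after cutting, and only the remaining quarter would be scrapped outright.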
 
Hasn't laser cutting actually been used quite a lot? I think the Xx00 & 6x00 series were the last that could be softmodded, and the later ones couldn't because of laser cutting?
 
=>Kaotik: IMHO that has nothing to do with laser cutting the wafers. More like blowing fuses directly in the hardware instead of disabling quads in BIOS.
Anyway. How did Intel manage with the Smithfield core?
 
=>Kaotik: IMHO that has nothing to do with laser cutting the wafers. More like blowing fuses directly in the hardware instead of disabling quads in BIOS.
Anyway. How did Intel manage with the Smithfield core?

According to Intel's slide deck "How a CPU is Made", they still use a saw instead of a laser.
 
=>Kaotik: IMHO that has nothing to do with laser cutting the wafers. More like blowing fuses directly in the hardware instead of disabling quads in BIOS.
Anyway. How did Intel manage with the Smithfield core?

Now that I think of it, you're right. I just remember reading something about laser-cutting parts of the chip to disable them, but I guess it was either misinformation or the laser is just used to "blow the fuses".
 
Probably.
But back to Intel: how could they tell when to cut single and dual cores? And I believe they did not just randomly pick some pairs from the middle of the wafer, because they later sold Smithfields ridiculously cheap (although those were slower variants and the manufacturing tech was very mature by then, so who knows...)
 
Way too slow and probably a whole bunch of electrical and mechanical problems due to the high temperatures. (And if you want to do it faster, you'd have to increase laser power, which would make it worse.)
It's hot, but not particularly so in the remaining material ... that's the beauty of flash ablation, gas doesn't conduct heat very well. As for speeding it up, there is always parallelism.
 
But considering how "modular" today's GPUs are, would it really make any significant difference design-wise, and would it possibly hamper performance?
GPU work is already massively parallel. Any partition injects at least a minimal amount of restriction on how parallel the GPU solution can work; on the other side of the equation, physical implementations can't scale without bound, and many of the things we implement in hardware expand in complexity faster than the benefits they provide.
There's going to be a trade-off no matter what decision is made.

Wouldn't it be possible to achieve the same benefits as an MCM approach for multiple dice without the problems (the large number of off-die pin-outs, high packaging costs, etc.) by keeping all the dice on the same piece of silicon? The benefits of the MCM approach are a single die design for multiple performance markets with complete software transparency and single-chip style scalability (as opposed to Crossfire/SLI-style scalability) by treating multiple dice as if they were one larger die.
...
The same approach could be used for 4x, 6x, 8x or any other scale of multiple dice. For instance, for 4x, you would build in communications between 2x2 squares of dice and cut the wafer into 2x2 squares by default. If one die of a 2x2 group is bad, you cut the group and sell a 2x chip and a 1x chip. If two are bad, you cut and either sell a 2x chip or two 1x chips, depending on the configuration. If three are bad, you cut and sell a 1x chip. All with no more wasted silicon than if all the dies were for the lowest-performance 1x chip or if MCM packaging were used.
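
Just to make that cut decision concrete, here is a tiny sketch of the logic for one 2x2 group after wafer test (the defect map and product names are hypothetical, and a real dicing plan would also depend on which neighbours actually share the inter-die links):

```python
# good[r][c] is True if the die at row r, column c of a 2x2 group passed
# wafer test. Hypothetical illustration of the cutting scheme described above.
def plan_cut(good):
    cells = [(r, c) for r in range(2) for c in range(2) if good[r][c]]
    n = len(cells)
    if n == 4:
        return ["4x chip (leave the 2x2 group uncut)"]
    if n == 3:
        # One bad die: an adjacent good pair always remains, plus a lone good die.
        return ["2x chip", "1x chip"]
    if n == 2:
        (r0, c0), (r1, c1) = cells
        adjacent = abs(r0 - r1) + abs(c0 - c1) == 1
        # Adjacent good dies can stay joined; diagonally placed ones cannot.
        return ["2x chip"] if adjacent else ["1x chip", "1x chip"]
    if n == 1:
        return ["1x chip"]
    return []  # all four bad: scrap

print(plan_cut([[True, True], [True, False]]))   # ['2x chip', '1x chip']
print(plan_cut([[True, False], [False, True]]))  # ['1x chip', '1x chip']
```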

There is a reticle limit that imposes a ceiling on the dimensions a single exposed die can have.
However, there are large chips that exceed the size a single exposure can cover. I believe large CMOS sensors are an example, where many sensor dies are "stitched" together at a higher level of interconnect.

This has been talked about before.

http://forum.beyond3d.com/showthread.php?t=45110


I've been thinking about the same kind of thing, e.g. fabbing as pairs, then cutting into either singles or pairs. Has this ever been done before? Decades ago there was the concept of wafer-scale integration, but that doesn't seem to have gone anywhere.

http://en.wikipedia.org/wiki/Wafer-scale_integration

Wouldn't such irregular cutting be quite expensive? How much testing can be done before cutting?

Also, binning pairs of chips could be quite a problem - it could lower the ceiling on achievable clockspeeds substantially. Though you could argue this is ideal meat for a refresh pie, as the process matures and you simply bin for higher dual-die clocks in 6 months' time.
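
To put an invented number on this: if a lone die reaches the top speed bin with probability p, a joined pair is limited by its slower die, so only roughly p² of pairs would make that bin. A trivial sketch (the 30% figure is made up):

```python
# Invented figure purely for illustration: not a real bin split.
p_top_bin_single = 0.30   # chance a lone die clocks into the top bin

# A joined pair runs at the clock of its slower die, so it only makes the
# top bin if both dies do.
p_top_bin_pair = p_top_bin_single ** 2

print(f"single dies in top bin: {p_top_bin_single:.0%}")   # 30%
print(f"joined pairs in top bin: {p_top_bin_pair:.0%}")    # 9%
```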

Jawed

Intel's Smithfield was a dual-core solution where Intel basically neglected to saw apart two dies.
They were wired together at the package level.
There may have been some incremental improvement in some of the electrical behavior over just having two sockets, but this was mostly a quick and dirty way to beat AMD's "native" dual core approach by a pretty good margin.
(edit: for clarity, beat to market)

For a volume product at volume prices, irregular cutting and re-sawing dies when one core tests badly didn't happen.
Pairs of cores were cut out ahead of knowing which ones would work out. It would have been cheaper to throw bad ones out than to waste time and money on trying to pick good pairs out.

Some testing is done before the die is put onto a package substrate, which might find some early faults. Even then it would be too late to cut the chip in half, as it's an awfully fiddly job to take a tiny die back to be sawed apart.
Things need to be tested again after the chip is on a package, and by then it's soldered and affixed to something that cannot be cut, so it's definitely too late.

I suppose it could be possible to turn one core off and sell the double-dice as a single-core. I don't know if Intel did this or just tossed them.


Yields should have been already good for single cores, so it would be hoped the dual-core solution would take an acceptable yield hit for the price premium.

Intra-wafer variation usually wasn't too bad for neighboring cores, though power is pretty much doubled.
 
Cutting the wafer with a laser (or perhaps just the selective cuts) may not be as expensive or difficult as it might seem initially. Below are some examples of wafer cutting using lasers:
Interesting, I didn't know that. Looks like I need to update my conventional wisdom.
 
Btw, did anyone read Theo's response to the plagiarism part?

Theo said:
RE: Plagiarism by: Theo Valich on 7/28/2009
When it comes to a responding to false accusations, I decided not to go down to someone's level.

As a professional journalist and member of International Federation of Journalists [IFJ], there are certain standards that you don't go below, especially not getting involved in public bashing and bickering.

I have never ever, disclosed a source of a story to anyone, and my former employers know that. That policy stands for every employee of BSN*, and if people disclose sources, that will be handled internally. There were cases in this industry where revealing sources was "awarded" with those people receiving their pink slips [and I don't mean a car] because "bloggers" blogged who the source was.

BSN* was the first site on the whole Internet to publicly reveal the codename of ATI's DirectX 11 architecture [Evergreen], and we were the first to reveal that ATI R800 family e.g. Evergreen is based upon R700 architecture and that the completely new ATI DX11 architecture is coming in 2010, with the upcoming transition of some parts to GlobalFoundries.

We are going to stay here for a long time, and there is no wonder that people will openly dislike us, those people being journalists, execs etc.

Ultimately, it all boils down to choice. BSN* does not charge for content and we are not running ads to show bias to anyone [hence, we're running Google Ads for the time being - we might change this in future, but only if it will not affect our objectivity]. Thus, it is the right of every reader to come and read our content and come up for conclusions themselves.

And now excuse me, we have more content to publish, from the well respected colleagues. After all, this site is not a one man band [like some people were spreading during Computex], there is more than 20 people in the company.

Ed.
 
That paragraph has the wrong parts highlighted:

BSN* was the first site on the whole Internet to publicly reveal the codename of ATI's DirectX 11 architecture [Evergreen], and we were the first to reveal that ATI R800 family e.g. Evergreen is based upon R700 architecture and that the completely new ATI DX11 architecture is coming in 2010, with the upcoming transition of some parts to GlobalFoundries.

Juvenile pissing contest.... :rolleyes:
 