So, do we know anything about RV670 yet?

Why unlikely when it works well for Xenos and eDRAM?

If you expand on this design, the natural progression is a 512-bit ringbus, buffered by an eDRAM "L2" cache, with two main GPU processing cores behind it. The cores would be interconnected with an HT3/CSI link.

In realistic terms, this would be akin to taking the A64 and making it dual-core.


Now, the only real question is whether they glue everything together (CSI links), or consolidate several separately designed parts into a single die (HT3).


The answer to that is quite obvious to me... I can hardly understand any speculation on this part. The only real speculation is whether they can pull it off. ;)


Fusion may be the leprous messenger, though... I cannot understand the delay for mainstream Fusion.

Or can I?

:LOL:



:runaway:
 
Why unlikely when it works well for Xenos and eDRAM?
Apples and oranges, I think. Neither of those two dice was designed to be used without the other. A package with two RV670 dice cannot be used as a "midrange" part, whereas a single-die package can be used alone or in a pair.
 
:oops:

Maybe you missed the part of the R600 interview where it was mentioned that this gen was designed to be fully modular.


:yes:

I'm not just making stuff up... a lot of information points in this direction.

Never mind that AMD has already stated R700 would be multi-chip (please notice no mention of dual-chip... I heard only "multi" from people I tend to listen to).


Factor in the many midrange dual-GPU cards that never made it to the North American market, and we have a standing history of BOTH ATI and AMD developing multi-processing tech, albeit from different directions.


Also add in that silicon, at this point in the foundry game, really needs significant "doping" to be anything worthwhile and to prevent leakage... it really seems foolhardy to increase the size of a die. Silicon dies need to be smaller to deal with leakage properly, but the market needs more functionality.

There's only one intelligent step to take... and that's to turn the ATI designs into something a lot more like the K8 generation.

Oh, wait...that's Fusion, isn't it?;-)
 
Well, I would probably agree that the Gemini cards that have been floating around were dipping a toe in the water, even if "unofficial" in the sense that they weren't ATI reference designs.

But large amounts of eDRAM? I don't see it. Richard Huddy is on record that baking eDRAM into PC designs is expensive and thus doesn't scale well down the price range.
 
True, true, I don't think we'll see a very large eDRAM die either, but that was just the context in which my logic makes sense.


Now, I'd prefer to see something to the tune of 150-200k transistors of 4xAA hardware added within the cache (à la Xenos's eDRAM); however, unless they go the NVIO route and make the "add-on" completely separate, I don't think my ideal transistor budget is going to fly.

However, if we shrink that 10MB down to 4MB... or replace it with 32MB of GDDR4 @ 3200MHz...

It really does not make sense to me not to take the GPU add-in card to almost-complete-processing-system status. There are but a few elements that currently differ between a VGA card and a CPU... the actual work done is different, sure, but things aren't that far apart...

I dunno. I guess I've only ever seen the CPU as a data shuffler, not a processing unit, and that's something a CPU does very well. A GPU doesn't, though... not to the same degree... and we really need more data flow both to the GPU and between them.

I also really want to see a lot of under-utilized die space that, when inactive, partially acts as a heatsink for the functioning parts of the die. Given advanced power planes...


Anyway, too far off topic. Back on topic, though: it only makes sense to me that RV670 would be the perfect GPU to start dealing with the issues that are bound to arise from bringing Fusion mainstream. You can't just dump something like that onto the market and expect everyone to pick it up... you need to provide developers with a working platform before they can provide you with a product that runs. Given current roadmaps and the state of the industry, RV670 is not just a mid-range GPU...

It may be that red-headed stepchild with a mutation so strong it infects all the others. :oops:
 
Also add in that silicon, at this point in the foundry game, really needs significant "doping" to be anything worthwhile and to prevent leakage... it really seems foolhardy to increase the size of a die. Silicon dies need to be smaller to deal with leakage properly, but the market needs more functionality.
Die size isn't the determining factor for leakage.
Three chips with the same total number of transistors as one big chip will leak just as much, all else being equal.

In fact, their power consumption will likely be worse, as they have to drive signals through the interconnect between them, which is significantly more power-intensive than on-chip transmission.
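To put rough numbers on that (the energy-per-bit figures below are assumptions for illustration, not measured data), moving a bit over pads and an external link typically costs an order of magnitude more than moving it across the die:

```python
# Hypothetical energy-per-bit costs, for illustration only; real values
# vary widely with process, link design, and trace length.
ON_CHIP_PJ_PER_BIT = 0.5    # signal routed within a single die
OFF_CHIP_PJ_PER_BIT = 10.0  # signal driven over pads to a neighboring die

def link_power_watts(bandwidth_gb_per_s, pj_per_bit):
    """Power spent purely on moving data at the given bandwidth (GB/s)."""
    bits_per_second = bandwidth_gb_per_s * 8e9
    return bits_per_second * pj_per_bit * 1e-12

# A 64 GB/s link between the two halves of a split GPU:
print(link_power_watts(64, ON_CHIP_PJ_PER_BIT))   # ~0.26 W if it stays on-die
print(link_power_watts(64, OFF_CHIP_PJ_PER_BIT))  # ~5.1 W once it crosses dice
```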

The distributed nature of the silicon also makes latency handling more difficult, and heatsink design gets more complicated.

If power consumption were the reason for multi-chip systems, the move to multicore would never have happened.
 
It IS a consideration if active parts leak over into other active parts, and separating these parts prevents this behavior, as well as allowing for higher frequency scaling. You are far too used to considering a GPU as a whole, rather than as its separate processing units. In my mind, although these units are connected, they are not the same, nor will they behave the same under load... thermal characteristics are not uniform over the entire die, unfortunately. You end up with hotspots in highly active areas, especially when the element next to them is sitting idle...

Like leakage from the cache overheats the ROPs, so half must be disabled...

:LOL:
 
It IS a consideration if active parts leak over into other active parts, and separating these parts prevents this behavior, as well as allowing for higher frequency scaling.
Leakage isn't a contagious disease.
When a transistor leaks, transistors in a neighboring ALU don't leak in sympathy.
If they leak, they were leaky in the first place.

What can happen is that leaky silicon is hotter, and leakage has a thermal component, so higher temps cause overall leakage to increase.
Separating the silicon makes cooling it more difficult/expensive, which still increases temperature.
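A toy model of that feedback (every constant here is invented, purely to show the shape of the effect): leakage rises with temperature, total power rises with leakage, and the loop settles at a temperature that a worse cooler pushes visibly higher.

```python
import math

def settle(p_dynamic, i_leak_ref, r_theta, t_amb=45.0, vdd=1.2, k=0.03):
    """Iterate the temperature/leakage feedback loop to steady state.
    r_theta is the cooler's thermal resistance in deg C per watt."""
    t = t_amb
    for _ in range(200):
        i_leak = i_leak_ref * math.exp(k * (t - t_amb))  # leakage rises with temp
        p_total = p_dynamic + vdd * i_leak               # dynamic + leakage power
        t = t_amb + r_theta * p_total                    # cooler sets temperature
    return round(t, 1), round(p_total, 1)

print(settle(60.0, 10.0, 0.25))  # decent cooler: settles near 66 C / 82 W
print(settle(60.0, 10.0, 0.35))  # worse cooler: near 77 C / 91 W
```

Same silicon, same workload; the only change is the cooler, and the extra ~9 W is pure leakage.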

To top it off, any high-speed connection between the chips will burn extra power, perhaps negating or worsening the power situation.

You are far too used to considering a GPU as a whole, rather than as its separate processing units. In my mind, although these units are connected, they are not the same, nor will they behave the same under load... thermal characteristics are not uniform over the entire die, unfortunately.
And?
The overall thermal behavior is more complex than that. Separating units doesn't make them magically immune to leakage.
The less effective cooling and extra IO will make it worse.

You end up with hotspots in highly active areas, especially when the element next to them is sitting idle...
No, an idle area next to an active one tends to cool it.
An active area next to an active area is a hotspot.
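A one-dimensional toy (all coefficients invented, purely to show the behavior): let heat diffuse along a strip of die where only the left half is dissipating.

```python
import numpy as np

# A strip of die: left half active and dissipating, right half idle.
T = np.full(10, 45.0)                        # start at ambient, deg C
heat_in = np.zeros(10)
heat_in[:5] = 0.02                           # power into the active cells per step
for _ in range(5000):
    lap = np.roll(T, 1) + np.roll(T, -1) - 2 * T    # lateral conduction
    lap[0] = T[1] - T[0]                            # insulated die edges
    lap[-1] = T[-2] - T[-1]
    T += 0.1 * lap + heat_in - 0.002 * (T - 45.0)   # conduct, heat, sink to cooler
print(T.round(1))
# The active half peaks lower than it would in isolation, because the idle
# half soaks up and spreads its heat; the idle half sits a few degrees warm.
```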

Like leakage from the cache overheats the ROPs, so half must be disabled...

:LOL:
That's simply not done.
 
Heat is a major factor, IMHO, so while you bring up valid points, aside from cost they don't mitigate the problem at hand.

And yes, it was heat spread from other parts that I was referring to... hence my mention of highly active parts hurting others... the only factor is heat from the leakage. I thereby fail to understand your points in this regard.


Cooling... well, so what? More expensive? So? I can think of many places where the excess cost could be recouped. Overpaid execs would be the first to come to mind...

Never mind the price/performance considerations, as this is not a monopoly market, and margins tend to be a bit higher for board partners siding with the other camp... again, matching prices a bit more closely can reap huge benefits overall.


Sadly, I don't see the business sense needed to pull this off from AMD, so, in the end, you may be right.

But it is more than definitely possible... official spokespeople have already confirmed this is the direction things are going overall. :yes: The only question is how they tie it all together for this chip. But you can bet the bank that there WILL be a dual-RV670 board, consumer-space or not, if only for them to work out scheduling and bus configs.
 
Heat is a major factor, IMHO, so while you bring up valid points, aside from cost they don't mitigate the problem at hand.

And yes, it was heat spread from other parts that I was referring to... hence my mention of highly active parts hurting others... the only factor is heat from the leakage. I thereby fail to understand your points in this regard.
Because the chips will be stuck under the same cooling assembly. As such, that cooler will be no more efficient than a cooler on a larger chip, or you have a bunch of smaller coolers that are less capable than the large one.

Temps are determined by the overall heat output into the cooler, so they remain unchanged, and so the leakage remains at best unchanged.

You have, however, thrown in a bunch of I/Os that connect all these formerly on-die units, and that equals more power drawn.

Everything winds up hotter.

Cooling... well, so what? More expensive? So? I can think of many places where the excess cost could be recouped. Overpaid execs would be the first to come to mind...
Or just the bottom line of the currently unprofitable company.

Never mind the price/performance considerations, as this is not a monopoly market, and margins tend to be a bit higher for board partners siding with the other camp... again, matching prices a bit more closely can reap huge benefits overall.

Those are the primary reasons for going multi-chip.
They are going multi-chip because the die sizes needed for a given level of performance are impractical.
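The arithmetic behind that, using a first-order Poisson yield model with a defect density picked purely for illustration:

```python
import math

def die_yield(area_mm2, defects_per_mm2=0.005):
    # First-order Poisson yield model: Y = exp(-D * A)
    return math.exp(-defects_per_mm2 * area_mm2)

print(f"one 400 mm^2 die: {die_yield(400):.1%} good")   # ~13.5%
print(f"one 200 mm^2 die: {die_yield(200):.1%} good")   # ~36.8%
# A defect kills a whole 400 mm^2 die, but only one half of the split
# design -- so the fraction of sellable silicon per wafer nearly triples.
```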

Leakage is not a reason to go multichip. You just get multiple leaky chips.

If multichip saved on power, those POWER 5 MCMs wouldn't be burning almost a kilowatt.
 
Because the chips will be stuck under the same cooling assembly. As such, that cooler will be no more efficient than a cooler on a larger chip, or you have a bunch of smaller coolers that are less capable than the large one.

I dunno about you, but all I see out there are some highly inefficient coolers placed on stock cards, and if this weren't so, there wouldn't be the aftermarket cooling market that there is. :yes: I don't understand your line of thought here... at all. I've got a few ideas bouncing around in my head, and if the paid engineers can't come up with better solutions, then there need to be some better engineers. :yes: This is a business we are talking about, one run by execs who claim to be "innovative" and "customer-centric".

Leakage is not a reason to go multichip. You just get multiple leaky chips.

I never said leakage was the main reason. In fact, you then go on to quote me and say, "this is the real reason". :rolleyes: :LOL: Welcome to the party; drinks are at the back. ;)
If multichip saved on power, those POWER 5 MCMs wouldn't be burning almost a kilowatt.

LoL. Maybe if process quality at specific nodes were much better, and doping weren't needed, then the excessive voltages you see used now, which truly create this leakage, could be lowered enough to pull power requirements in line with what was expected a few years ago, which has now been all but forgotten. Problems with being fabless, I suppose. Maybe access to fabs can pull some wonders out of the ATI arm of AMD.

Sometimes the benefits of competition pay off, but like everything else in life, all things have consequences.

And here we sit, wondering who's gonna be faster.;)
 
I dunno about you, but all I see out there are some highly inefficient coolers placed on stock cards, and if this weren't so, there wouldn't be the aftermarket cooling market that there is. :yes: I don't understand your line of thought here... at all. I've got a few ideas bouncing around in my head, and if the paid engineers can't come up with better solutions, then there need to be some better engineers. :yes: This is a business we are talking about, one run by execs who claim to be "innovative" and "customer-centric".
You claim that going multichip will prevent leakage problems.
I've stated that no, it would not. The transistors would still leak, and they would still have to pass heat through a common cooling assembly.

The solution for leakage is not to use more efficient coolers.
The chips are still leaky, regardless of the cooler they're housed under.

If going multichip helped with leakage, they could get away with less cooling.

I never said leakage was the main reason. In fact, you then go on to quote me and say, "this is the real reason". :rolleyes: :LOL: Welcome to the party; drinks are at the back. ;)

Also add in that silicon, at this point in the foundry game, really needs significant "doping" to be anything worthwhile and to prevent leakage... it really seems foolhardy to increase the size of a die. Silicon dies need to be smaller to deal with leakage properly, but the market needs more functionality.

That's good because it's not a reason at all.

LoL. Maybe if process quality at specific nodes were much better, and doping weren't needed, then the excessive voltages you see used now, which truly create this leakage, could be lowered enough to pull power requirements in line with what was expected a few years ago, which has now been all but forgotten. Problems with being fabless, I suppose. Maybe access to fabs can pull some wonders out of the ATI arm of AMD.
If it weren't for doping, photolithography and semiconductors would be pointless.

Nothing about multichip solutions affects voltages needed to reach the same clock speeds as a single chip.
Multiple chips still use doped silicon.

Leakage problems are a result of the shrinking geometries of transistors. Unless you also plan to push process nodes back about four years, leakage remains a problem.
 
You claim that going multichip will prevent leakage problems.
I've stated that no, it would not. The transistors would still leak, and they would still have to pass heat through a common cooling assembly.

Nothing common is "prescribed" for cooling separate elements on the same PCB. If you cannot get this through your head, then... well, there's no point in even discussing this with you. For example, it would be more than simple enough to put two separate coolers within a single shroud, with that shroud separating the cooling elements. Heatpipes ensure this is possible.

The solution for leakage is not to use more efficient coolers.
The chips are still leaky, regardless of the cooler they're housed under.

Yes, but it's far harder to cool a larger die than it is a smaller die.

If it weren't for doping, photolithography and semiconductors would be pointless.

Um, hold on a minute here... do you know what doping is? Not that I question your knowledge, but we could quite possibly be talking about different things... and by your responses, we are.

Silicon is a semi-conductor. This means that under certain conditions it's a conductor, and under others it's an insulator. Leakage is a result of this, as silicon has a specific frequency of "juice" it can handle while still remaining an insulator.

Doping changes this specific characteristic.

Voltage, at a given frequency, will "leak" when it comes close to pushing the semi-conducting properties of silicon (getting close to making it a conductor). Change the frequency of the current, and it leaks less.

Silicon can also handle less current the smaller the process is. Of course, doping can negate this, but this can also affect how the crystal lattice is formed, and can, in the worst case, cause more leakage than there was before...but it can handle the current.

Changing voltage can change current, and thereby negate leakage.

Regardless, I never said any one of these factors is the only reason for going multi-chip; I merely mention them all. If you have better info from specific sources, please introduce it and share with everyone else here, as you're getting close to flamebaiting, reiterating points you have already made without any new info. I'd rather have a conversation, not get trolled, thanks, and we've already heard your side of the story. :???:
 
Nothing common is "prescribed" for cooling separate elements on the same PCB. If you cannot get this through your head, then... well, there's no point in even discussing this with you. For example, it would be more than simple enough to put two separate coolers within a single shroud, with that shroud separating the cooling elements. Heatpipes ensure this is possible.
Within the confines of a PCB and board spec, the effectiveness of the separate coolers individually is not as great as that of a single large cooler.
The physical space available for the material that makes up the cooler or set of coolers is the same.

If the chips are part of a multi-chip module, then the close proximity of the chips is going to force a single shared cooler or a more complex assembly of coolers.

I will point to the POWER big-iron chips, which have some very heavy cooling involved for their MCMs. The dice are too close together to cool separately.

It would be likely that a multi-chip GPU would try to keep them on the same package to minimize routing, PCB, and cooling issues.

I will point out that some video cards have raised board-crowding concerns even as single-chip products.


At the end of all this, those chips will still be leaky if they are already leaky.

Yes, but it's far harder to cool a larger die than it is a smaller die.
There is a balance that must be struck, as there have been occasional cases where the direct opposite is true.
The top-end 90nm G5 chips IBM made for Apple were water-cooled due to concerns about thermal density.

http://www.anandtech.com/mac/showdoc.aspx?i=2326&p=2

Small active dies present less surface area for contact with the heatsink base, which can lead to greater inefficiency due to the less-than-perfect conduction of the cooler's materials and the thermal compound.
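Back-of-envelope, with assumed round numbers:

```python
# Round numbers, assumed for illustration: the same power budget pushed
# through half the die area doubles the heat flux at the heatsink base.
power_w = 100.0
for area_mm2 in (300.0, 150.0):
    print(f"{area_mm2:.0f} mm^2 -> {power_w / area_mm2:.2f} W/mm^2")
# 300 mm^2 -> 0.33 W/mm^2
# 150 mm^2 -> 0.67 W/mm^2
```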

Um, hold on a minute here... do you know what doping is? Not that I question your knowledge, but we could quite possibly be talking about different things... and by your responses, we are.

http://en.wikipedia.org/wiki/Semiconductors#Doping

If that is not the doping you are talking about, I am curious which definition you are using.

Silicon is a semi-conductor. This means that under certain conditions it's a conductor, and under others it's an insulator. Leakage is a result of this, as silicon has a specific frequency of "juice" it can handle while still remaining an insulator.

Leakage occurs when there is unwanted current flow due to a material's less than infinite resistance.
Transistor leakage is the dominant form of leakage we're talking about.

This is a property of all materials, because nothing has infinite resistance.

Semiconductors have it worse than insulators because they must offer the option of being conductive.
Conductors would leak if we ever tasked them with being insulators.

Voltage, at a given frequency, will "leak" when it comes close to pushing the semi-conducting properties of silicon (getting close to making it a conductor). Change the frequency of the current, and it leaks less.

Leakage occurs whenever there is a voltage differential.
Only something of infinite resistance can be free of leakage.
Nothing has infinite resistance, and transistors are increasingly less able to resist current flow as they shrink.

I'm not sure what you mean by changing the frequency of the current.

That sounds like you are discussing dynamic power and subthreshold leakage, but this would not impact static leakage, where there is no voltage swing.

Changing voltage can change current, and thereby negate leakage.
Most of the time reducing voltage reduces leakage.
The problem is that a transistor's resistance is controlled by voltage differential. Lower voltages lead to lower differentials, so leakage can actually go up if the high and low voltages are too close together.
This is one of the barriers to further voltage reduction in future chips.

At the other end of the spectrum, higher voltages invite higher voltage differentials, which runs counter to the decreasing resistance of shrinking transistors.
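The first-order textbook relations show both ends of that squeeze (the constants below are placeholders, not process data): switching power scales with the square of the supply voltage, while off-state leakage grows exponentially as the threshold drops to preserve the differential.

```python
import math

def dynamic_power(c_farads, vdd, freq_hz, activity=0.5):
    # Switching power: P = a * C * Vdd^2 * f
    return activity * c_farads * vdd**2 * freq_hz

def off_state_leakage(i0, vth, n=1.5, v_t=0.026):
    # Subthreshold current with the gate off: I ~ I0 * exp(-Vth / (n * kT/q))
    return i0 * math.exp(-vth / (n * v_t))

# Dropping Vdd from 1.2 V to 1.0 V cuts switching power by ~31%:
print(dynamic_power(1e-9, 1.0, 700e6) / dynamic_power(1e-9, 1.2, 700e6))  # ~0.69
# ...but if Vth has to drop 100 mV with it to keep transistor speed,
# off-state leakage rises roughly thirteen-fold:
print(off_state_leakage(1e-6, 0.35) / off_state_leakage(1e-6, 0.45))      # ~13
```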
 
If that is not the doping you are talking about, I am curious which definition you are using.

Using wiki terms, doping is used to change the band structure of the device. Those "bands" are frequency bands, FYI. Voltage differential, and its set frequency, are physical properties of the semi-conducting "bands" of crystals. All leakage is due either to this or to an improperly aligned lattice.

All you've done, yet again, is say the same thing I just said, but using different words, so I don't know what to tell ya, other than that you are just emphasizing a different part of the same problem.

All these things are what is currently holding the industry back. IMHO doping is a significant factor in yields, and more due to fabs and their engineers being too accustomed to doped materials, and a lack of refinement in doping levels.

In other words, I see what I call an improper approach. "OK, we did this last time, and it worked well, so let's do that again" is not always the right answer.


I'm still awaiting a 4GHz CPU in the consumer space. Many years now, and nothing. Why?


Anyway, because of the current problems and the huge obstacles faced in either direction, AMD is faced with a decision. One they've already made, BTW.

I don't see much point to speculation on this matter. RV670 may not be a consumer-space multi-chip part, but it WILL end up on a "Gemini" board. Big deal.

Looking forward, R700, as stated by AMD, will be multi-chip. I am unsure of what this means, so if you got an answer out of them, please let us know... no point in arguing over things past.
 
Using wiki terms, doping is used to change the band structure of the device.
Yes, dopants are used to change the energy required to liberate charge carriers in the material.
If you use CMOS, which most mass-produced chips use, you will have NMOS and PMOS transistors.

http://en.wikipedia.org/wiki/MOSFET

Every transistor in a CMOS device, and by extension every GPU, is based on doped silicon.

Those "bands" are frequency bands, FYI. Voltage differential, and it's set frequency, are physical properties of the semi-conducting "bands" of crystals. All leakage is either due to this, or an improperly aligned lattice.
Those bands are measures of the amount of energy needed to reach certain states.
There is a link between a quantum of energy and frequency, but not in any terms useful for describing the behavior of a GPU transistor. There is no radiation involved, and transistors function in steady states with no voltage change from which to derive a frequency.

Using the term "frequency" in the context of transistors in a device leads to confusion between frequency as clock speed and the frequency of a quantum carrying the necessary amount of energy.
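For a sense of scale (standard constants, and silicon's roughly 1.12 eV band gap): the frequency equivalent of the band-gap energy sits around 270 THz, five orders of magnitude away from any GPU clock.

```python
h = 6.626e-34      # Planck constant, J*s
eV = 1.602e-19     # joules per electron-volt
E_gap = 1.12 * eV  # silicon band gap, ~1.12 eV at room temperature

f_gap = E_gap / h
print(f"{f_gap:.2e} Hz")  # ~2.71e14 Hz, i.e. about 271 THz
# A 700 MHz GPU clock is 7.0e8 Hz -- these "frequencies" have nothing
# in common beyond the name.
```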

Transistor leakage is due to band gaps in the same way that car crashes are due to car engines.
Without the source of leakage you want to get rid of, the thing doesn't work.

All these things are what is currently holding the industry back. IMHO doping is a significant factor in yields, and more due to fabs and their engineers being too accustomed to doped materials, and a lack of refinement in doping levels.
Then they should just stop using transistors.

I'm still awaiting a 4GHz CPU in the consumer space. Many years now, and nothing. Why?
Blame physics, corporate policy, economic factors, and the limits of consumer cooling technology.

Anyway, because of the current problems and the huge obstacles faced in either direction, AMD is faced with a decision. One they've already made, BTW.
Just so you know, those multi-chip GPUs they make will be using doped silicon.

Multichip changes little with regards to leakage and nothing with regards to transistor manufacturing.
 
Transistor leakage is due to band gaps in the same way that car crashes are due to car engines.
Without the source of leakage you want to get rid of, the thing doesn't work.


LoL. Thank you so very much for illustrating my point. This complacency leads to idleness and a catastrophic failure of innovation. I hope you're not one of those engineers!

You've just plainly stated, "don't use it, and it won't leak". :rolleyes: You've also said I'm not using the right term because it might confuse people. lol.


Um, yeah, this is confusing. Not stuff taught in grade school, for sure.

You seem pretty defensive over this. I don't get it. Have a great day.
 