How big are GPUs gonna get?

I remember a time when 200mm^2 was "huge." Now we see a 480mm^2 chip. Just how big are they going to get? The largest chip ever made is the Montecito, which is 596mm^2. Will we see a GPU bigger than that?
 
I doubt it.

65nm parts will be here soon, and assuming Nvidia does something like 192 scalar ALUs and a 512-bit bus for G90, it'll probably still be smaller than G80. After that, Nvidia will probably go multi-core rather than produce larger dies; rumors of them making that switch have been circulating for some time.

Who knows what ATi will do in the immediate future, but one can't help but assume R6xx parts will be smaller and use less power than R600, even if they're close to a straight shrink with the clocks run up, like an 80nm G8x probably will be. That could give them an opportunity to make boards using two future R6xx cores, a la a GX2, without needing a power plant to run them. It also falls in line with ATi pushing CrossFire as scalable by factors of two up to some unknown number. Perhaps we'll see 4x CrossFire R680s (two per board).

After this gen we know ATi is going multi-core. I can't help but wonder if one R700 core will be something along the lines of half the spec of R600 (obviously with some differences) on 45nm, with 1, 2, or 4 cores on board depending on the market segment. It would make sense on every scale to produce one GPU...and we already know CrossFire is both infinitely scalable by factors of two, and that two-core CrossFire can be done on one board without needing a CrossFire motherboard (unlike Nvidia). Why not four cores per board? The size of such a core would certainly seem to make it feasible, and again we know ATi has said they will transition to 45nm in 2008... right along the time frame when we should expect R700.

That's where I see the future heading. Perhaps more surface area through more dies, but each core will be smaller.

Beyond the R7xx/G90 generation it's unknown. My personal belief is that we will see ray-tracing take off within the next five or so years. When that becomes feasible through bajillion-core general-purpose CPUs (which we know are coming), or even dedicated gfx cores on a CPU as AMD has hinted, gfx may indeed become fused into the CPU, making stand-alone cards obsolete... and then you won't have to worry about big GPU dies at all!
 
A big chunk of Montecito's size is cache. In fact, I wouldn't be surprised at all if there's more logic in G80 than there is in Montecito.
 
Largest chip ever made? Definitely not. I know of chips that are a _lot_ bigger than that.

At one point the reticle limit was 25x25mm, so 625mm2, but that may have changed. Do you have more info (which chip, what size)? Just being curious... My guess would be big-iron router chips?
 
At one point the reticle limit was 25x25mm, so 625mm2, but that may have changed. Do you have more info (which chip, what size)? Just being curious... My guess would be big-iron router chips?

According to the link above, Montecito is 27.72 mm x 21.5 mm (~596 mm2). So even that one is larger than 25 mm in one dimension.

One kind of chip that can be quite big is image sensors. It's not the usual logic-and-memory chip, but it is a silicon chip with lots of CMOS transistors on it. Some high-end DSLR cameras have a full-frame CMOS sensor, meaning the active image-sensing area alone is 36 mm x 24 mm (864 mm2), and then you have to add control and bonding area around that.
They won't compete in the transistor-count category, though.

But I can't say anything about the really big chips. :p
 
All you people are concentrating only on width and length, tsk tsk. Just wait for those stacked 3D chips to start rolling in :p
 
According to this page (look for "step and repeat"), it is practically possible to make "ordinary" chips up to roughly 25x35 millimeters. Beyond that, a fair bit of research has been done on "wafer-scale integration", theoretically allowing a wafer-sized chip (30cm diameter => ~70000 mm2), but except for very limited applications (for the most part very large CCD arrays, it appears) the idea has been largely unsuccessful.
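
For reference, here's the arithmetic behind that ~70000 mm2 figure, sketched out in a few lines of Python (assuming a full 300mm wafer with no edge exclusion, which is optimistic):

Code:
import math

# Standard 300mm (30cm) wafer
wafer_radius_mm = 300 / 2

# Area of the full wafer disc: pi * r^2
wafer_area_mm2 = math.pi * wafer_radius_mm ** 2
print(f"wafer area: {wafer_area_mm2:.0f} mm^2")           # ~70686 mm^2

# Compare with the biggest "ordinary" (reticle-limited) die
reticle_die_mm2 = 25 * 35                                 # 875 mm^2
print(f"ratio: {wafer_area_mm2 / reticle_die_mm2:.0f}x")  # ~81x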
 
How big are GPUs gonna get?

Well.....remember this thing?

[image: the black monolith from 2001: A Space Odyssey]

....yeah...well....that was a GPU from the future.
 
According to this page (look for "step and repeat"), it is practically possible to make "ordinary" chips up to roughly 25x35 millimeters. Beyond that, a fair bit of research has been done on "wafer-scale integration", theoretically allowing a wafer-sized chip (30cm diameter => ~70000 mm2), but except for very limited applications (for the most part very large CCD arrays, it appears) the idea has been largely unsuccessful.


Hmm..

25mm X 35mm = 875 mm2

G80 is 484 mm2 / 681 M Transistors

Number of transistors at a 22nm process = (875/484) x (90/22)^2 x 681M = 20.6 billion transistors (30.2x G80's count)

According to this, that's the maximum number of transistors you can get for a GPU at 22nm, assuming chips remain 2D until then. When will we have 22nm high-end GPUs, 2013-2014 timeframe maybe?
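
Sanity-checking that estimate with a quick sketch of the same scaling math (die-area ratio times the square of the linear shrink, same assumptions as above):

Code:
# Back-of-the-envelope transistor scaling, reproducing the numbers above
max_die_mm2 = 25 * 35          # 875 mm^2 reticle-limited die
g80_die_mm2 = 484              # G80 die size
g80_transistors = 681e6        # G80 transistor count
old_node_nm, new_node_nm = 90, 22

area_scale = max_die_mm2 / g80_die_mm2             # ~1.8x more area
density_scale = (old_node_nm / new_node_nm) ** 2   # ~16.7x denser

transistors = g80_transistors * area_scale * density_scale
print(f"{transistors / 1e9:.1f}B transistors")     # ~20.6B
print(f"{transistors / g80_transistors:.1f}x G80") # ~30.2x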
 
I doubt it.
After this gen we know ATi is going multi-core. I can't help but wonder if one R700 core will be something along the lines of half the spec of R600 (obviously with some differences) on 45nm, with 1, 2, or 4 cores on board depending on the market segment.

You've got your terms backwards: core = logical thing doing one job;
chip = physical piece of silicon.

The rumour is that ATI is going (back) to graphics cores consisting of multiple chips, i.e. chipsets (like Voodoo1 and Voodoo2).

And those chips will probably:

a) Not be similar chips. Load balancing and memory sharing between chips is a problem if the chips are doing symmetric work (the reason CrossFire and SLI usually give much less than a 100% improvement, and on some games don't help at all).

Actually, Xenos is already a chipset consisting of two chips doing different things, one for shaders and another for the ROPs and eDRAM.

b) Maybe come in one multi-chip package. ATI is already doing this on both Xenos (the two chips of the chipset) and Mobility Radeons (GPU and memory chips).


It would make sense on every scale to produce one GPU...
and we already know CrossFire is both infinitely scalable by factors of two

"inifinitely scalable" ? yes, you can add many chips. But your performance improvement is far from scaling linearily with number of chips.

, and that two-core CrossFire can be done on one board without needing a CrossFire motherboard (unlike Nvidia).

??? (then what is the GF7950GX2?)

But anyway, these inabilities to work on competitors' dual-x16 PCIe boards are just marketing / product-segmentation stupidity. They want to sell both the motherboard and the graphics cards, so they market them together. It has nothing to do with what they can technically do with multi-chip solutions today, and even less with what they can do in the future.


Why not four cores per board? The size of such a core would certainly seem to make it feasible, and again we know ATi has said they will transition to 45nm in 2008... right along the time frame when we should expect R700.

That's where I see the future heading. Perhaps more surface area through more dies, but each core will be smaller.

Beyond the R7xx/G90 generation it's unknown. My personal belief is that we will see ray-tracing take off within the next five or so years. When that becomes feasible through bajillion-core general-purpose CPUs (which we know are coming), or even dedicated gfx cores on a CPU as AMD has hinted, gfx may indeed become fused into the CPU, making stand-alone cards obsolete... and then you won't have to worry about big GPU dies at all!

ROP units need lots of bandwidth to framebuffer memory (so for high performance, the memory needs to be directly connected to the chip containing the ROPs).
And to avoid load-balancing problems, there should be only one framebuffer.

This practically means that for a high-performance GPU, you want only one chip (with a wide memory bus) containing the ROP units.

Texture memory might also be a problem if you have many TMU-containing chips (SMP or NUMA; and if NUMA, do you duplicate the data in all memories, or access it slowly from the other memories?).
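
To put a rough number on that bandwidth appetite, a back-of-the-envelope sketch; every figure here is an illustrative assumption, not any real chip's spec:

Code:
# Rough framebuffer bandwidth eaten by the ROPs alone.
# All numbers are illustrative assumptions, not real specs.
rops = 16
clock_hz = 500e6          # 500 MHz core clock
bytes_per_pixel = 12      # 4B color read + 4B color write + 4B Z

pixels_per_sec = rops * clock_hz                  # 8 Gpix/s peak
bandwidth = pixels_per_sec * bytes_per_pixel
print(f"{bandwidth / 1e9:.0f} GB/s")              # ~96 GB/s

Which is exactly why you want the framebuffer memory hanging directly off the chip that contains the ROPs.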
 
1. Apologies for the incorrect verbiage. Chips it is. Although in my guesstimasynopsis (no, that's not a word) each chip would work as a core, a la a single-chip card being an 'x3300', a dual-chip card an 'x3600', and a quad-chip card perhaps an 'x3800', so to speak.

2. I was thinking the GX2 required an SLI motherboard like similar earlier parts... I forgot they fixed that, which even further proves multi-chip without SLI/CrossFire is going mainstream. And yes, yes... I know it's all bullshit, but Nvidia DOES disable peer-to-peer writes (or something to that effect... whatever the main component of CrossFire/SLI is) through their drivers when they don't see an SLI chipset or an Nvidia motherboard (apparently the GX2 excepted). It wasn't so much an SLI/CrossFire point I was trying to make as that you used to need an SLI board to run a dual-chip graphics card from Nvidia, right up until the GX2. Same for ATi, although it was of course possible to run SLI on an ATi board through hacked drivers. That's not the case with the dual x1950s being announced now, nor the GX2s. I'm just saying they are creating the market with these products: single slot, multi-chip, no premium-motherboard requirement (although those boards seem to be going mainstream themselves with the 570/650 and such). It very well could be the future. This is not to say both companies won't go the same route as the GX2, presenting it as a single card/core even when it's clearly not, so that two or more such cards can be connected together for multi-card, multi-chip-per-card setups (quad-chip and beyond) on their specific platforms.

3. Yes, I know multi-chip isn't 100% efficient performance-wise... that's not the point. The point is that it's being done both on single cards and across multiple cards as we speak. What if one chip was approximately the power of Xenos (DX10-style) consolidated, and they were daisy-chained? One chip, perhaps 8 ROPs, connecting to 4 RAM chips through a 128-bit bus (or multiple 32/64-bit buses, as R600 rumors would have us believe). It's not the most efficient approach performance-wise, for the reasons you mention (it can only be optimized about as far as CrossFire/SLI can), but it is cost-effective while solving the size/heat problem for the high end.

You are very right that they may go the G80/Xenos route instead and go multi-die by separating the ROPs or other parts into individual chips; that's indeed another possibility, although clearly not as easily scalable. The main point is that we will see multi-chip cards in the future, rather than single, large, concentrated cores, and that's how the problem will be solved. On that I believe we agree. :)
 
I ponder how cost-effective traditional AFR and SFR are, given the extra memory costs. Ideally, you'd want a mildly big bus between the two chips, and a smart system that tries to get rid of duplicate copies of textures that aren't being used in the current frame, or haven't been used in a while. In addition, you'd also want to figure out a smart way to minimize framebuffer duplication. Hmm.
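
To put some toy numbers on that duplication cost (all sizes are made-up examples, not measurements):

Code:
# Naive AFR: every chip holds a full copy of everything.
textures_mb, framebuffer_mb, n_chips = 300, 50, 2

naive = n_chips * (textures_mb + framebuffer_mb)   # 700 MB
# Smarter scheme: one shared texture copy, per-chip framebuffers
shared = textures_mb + n_chips * framebuffer_mb    # 400 MB
print(f"naive AFR: {naive} MB vs shared textures: {shared} MB")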


Uttar
 
ITRS has predicted 800mm2 ASICs will be feasible by 2008. Dunno if we'll see that in discrete GPUs tho!
 
I remember a time when 200mm^2 was "huge." Now we see a 480mm^2 chip. Just how big are they going to get? The largest chip ever made is the Montecito, which is 596mm^2. Will we see a GPU bigger than that?

Montecito can't be the biggest CMOS chip ever made.
Canon's CMOS sensors for full-frame DSLRs are bigger (36 x 24 mm).

But their manufacturing requires expensive multi-pass lithography, which makes them very expensive (Canon says a 22x15mm chip is about 10 times cheaper to manufacture than a full-frame one).

And there are even bigger CCD sensors for medium-format digital cameras. Those aren't CMOS-based, but they are still silicon chips.
 
OK, I was wrong to say Montecito is the biggest chip ever. However, is it still the biggest ASIC?
 
What process is used on those sensors, though?

Only "90nm is common, 65nm is coming" is said but it's said about lithography in general. But propably means canon is using 90nm on most of it's sensors.

"largest stepper/scanner is 26 x 33 millimeters" is said on canon's pdf, so that is the current limit of biggest chip than can be done without mfg costs skyrocketting. (it will still be quite expensive though).

But maybe someone will make bigger steppers some day..
 