How big are GPUs gonna get?

I wonder what they'll do about noise. Photosensors at that size are bound to be vulnerable to even small perturbations.
 
AFAIK digital camera CCD pixel sizes are in the (several to tens of) micron regime. Do the maths: they're at ~10-20 megapixels on a 36x24mm CCD.

Anyway, the CCDs I've been dealing with have a 13um pixel size; those are 16Mpix CCDs (granted, they're not CMOS, AFAIK). But pixel size for imaging sensors is determined by other factors, I believe (e.g. well depth), rather than the usual stuff that matters for logic, so they're maybe not directly comparable.
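
For what it's worth, a quick sanity check on that arithmetic: the 36x24mm dimensions come from above, while the pitch values and the ~53x53mm die are my own illustrative numbers.

```python
# Quick sanity check: pixel count vs. sensor size and pixel pitch.
def megapixels(width_mm, height_mm, pitch_um):
    """Pixel count (in MPix) of a sensor with the given dimensions."""
    return (width_mm * 1000 / pitch_um) * (height_mm * 1000 / pitch_um) / 1e6

# Full-frame 36x24mm sensor at a few representative pitches:
for pitch in (5.0, 8.0, 13.0):
    print(f"{pitch:4.1f} um -> {megapixels(36, 24, pitch):5.1f} MPix")
# 5.0 um -> 34.6 MPix, 8.0 um -> 13.5 MPix, 13.0 um -> 5.1 MPix

# So a 16 MPix CCD with 13um pixels must be a much bigger die than
# full-frame, e.g. a ~4096x4096 sensor around 53x53mm (illustrative):
print(f"{megapixels(53.25, 53.25, 13.0):.1f} MPix")  # ~16.8
```

So the ~10-20 megapixel full-frame parts imply pixels in the 6-9um range, and a 13um-pitch 16Mpix CCD is a considerably larger die than 36x24mm.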
 
Hmm..

25mm x 35mm = 875 mm²

G80: 484 mm² / 681M transistors (90nm process)

Number of transistors at 22nm = (875/484) x (90/22)² x 681M ≈ 20.6 billion (30.2x the G80's transistor count)

According to this, that's the maximum number of transistors you can get in a GPU at a 22nm process, assuming chips remain 2D until then. When will we have 22nm high-end GPUs? Maybe the 2013-2014 timeframe?

In reality it'll be quite a bit less than that calculation, because actual scaling per process generation is worse than 60%; CPUs like the Pentium M come in closer to 70%.
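
For illustration, a minimal sketch of that back-of-the-envelope maths in Python, with an assumed per-generation area factor to model the non-ideal scaling (the 0.65 value and the four-generation node count are my own assumptions):

```python
# Back-of-the-envelope transistor budget, using the numbers above.
G80_TRANSISTORS = 681e6    # G80: 681M transistors
G80_AREA_MM2    = 484.0    # at 90nm
TARGET_AREA_MM2 = 25 * 35  # hypothetical 875 mm^2 die
area_ratio = TARGET_AREA_MM2 / G80_AREA_MM2

# Ideal scaling: density grows with the square of the linear shrink.
ideal = G80_TRANSISTORS * area_ratio * (90 / 22) ** 2
print(f"ideal:   {ideal/1e9:.1f}B ({ideal/G80_TRANSISTORS:.1f}x G80)")

# Non-ideal scaling: assume each full node (90->65->45->32->22, i.e.
# 4 generations) only shrinks area to `f` of the previous node rather
# than the ideal 0.5. f = 0.65 is my own guess between the 60% and
# 70% figures quoted above.
f, generations = 0.65, 4
derated = G80_TRANSISTORS * area_ratio * (1 / f) ** generations
print(f"derated: {derated/1e9:.1f}B")  # roughly a third of the ideal
```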

I don't think it's really fair to compare the 596mm² of a complex CPU like Montecito to a CCD.

Is it also fair to say that big die size in a CPU is moot because it's mostly cache while GPUs are mostly logic? From what I've heard and what it looks like, GPUs also have a fair amount of cache-like structures, or much simpler logic circuits.

I also doubt Nvidia (or ATI) would want to make anything bigger than the 8800 GTX, for their own sakes. How do they justify the costs associated with a ~480mm² GPU for the consumer segment anyway?
 
I ponder how cost-effective traditional AFR and SFR are, given the extra memory costs. Ideally, you'd want a reasonably wide bus between the two chips, and a smart system that tries to get rid of duplicated textures that aren't being used in the current frame, or haven't been used in a short while. In addition, you'd also want to figure out a smart way to minimize framebuffer duplication. Hmm.
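
To make the idea concrete, a toy sketch of what such per-frame usage tracking might look like; the class, the window size, and the eviction policy are all made up for illustration, not any real driver's scheme:

```python
# Toy per-frame texture usage tracking for a dual-chip setup.
class TextureResidency:
    def __init__(self):
        self.last_used = {}  # texture id -> last frame it was referenced

    def touch(self, tex_id, frame):
        self.last_used[tex_id] = frame

    def stale(self, frame, window=8):
        """Textures untouched for `window` frames: candidates for
        dropping this chip's duplicate copy."""
        return [t for t, f in self.last_used.items() if frame - f > window]

# One tracker per chip; a texture stale on one chip could be evicted
# there and re-fetched over the inter-chip bus if needed again.
chip0, chip1 = TextureResidency(), TextureResidency()
chip0.touch(42, frame=100)   # chip 0 used texture 42 this frame
chip1.touch(42, frame=90)    # chip 1 hasn't touched it in 10 frames
print(chip1.stale(frame=100))  # -> [42]
```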


Uttar

True, that's what I wonder as well: at what point is using AFR/SFR through multi-chip more feasible cost/performance-wise than creating one large chip? Obviously Lucid (through Intel) has found a way to do it, with a 'traffic chip' alongside up to four cores per card and a driver probably similar to what's used for SLI/CrossFire. I wonder if ATI/Nvidia will take a similar approach, or whether they can do the traffic calculations solely through on-die electronics, traces on the PCB, and a driver. I keep thinking the 'on-die CrossFire' and the ability to both send and receive on the newer ATI cards (rather than one-way on Nvidia cards) must have something to do with load-balancing efficiency, now and in the future. I could very well be wrong, though.
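
Just to illustrate the load-balancing angle, here's a toy feedback loop for an SFR-style screen split; the gain, the clamps, and the frame timings are invented for the example, and real drivers would use far more involved heuristics:

```python
# Toy feedback loop for SFR load balancing: nudge the horizontal
# screen split toward the faster GPU each frame.
def rebalance(split, t_top_ms, t_bottom_ms, gain=0.05):
    """`split` is the fraction of the screen rendered by GPU 0 (top).
    If GPU 0 took longer than GPU 1, hand it less of the screen."""
    imbalance = (t_top_ms - t_bottom_ms) / max(t_top_ms, t_bottom_ms)
    return min(0.9, max(0.1, split - gain * imbalance))

split = 0.5
for t0, t1 in [(18.0, 12.0), (16.5, 13.2), (15.4, 14.3)]:  # made-up timings
    split = rebalance(split, t0, t1)
    print(f"split -> {split:.3f}")  # converges as frame times even out
```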
 