What sort of speeds are you expecting out of next-gen GPUs?

GwymWeepa

Well?

What made me start thinking about this question was the speed of Cell. It's an enormously complex chip (200+ million transistors) running at 4 GHz... yet not a single PC GPU has touched 1 GHz. Can there really be that big a gulf between CPU and GPU speeds? Or will Sony work its magic with NVIDIA on the GPU and get it running above 1 GHz?
 
GwymWeepa said:
cthellis42 said:
I'm hoping next generation will leave "speeding locomotive" far behind.

I want maglev speeds, lol. But seriously, would 1 GHz be too much to ask?

I wouldn't think so. It seems that to this day ATI and NVIDIA have taken a different route in the PC market: adding units, then getting the chip to run as fast as they can given the architectures they produced...

I also think it comes down to the way a CPU works compared to a GPU, but I might be wrong.

We'll see. I don't see much of a problem in getting a GPU to run at 1 GHz, if proper cooling is used.
 
Well, I don't see why they cannot break the pipelines into even smaller pieces. But as I don't know what they are breaking, I really don't know.
 
L_i_n_k said:
Well, I don't see why they cannot break the pipelines into even smaller pieces. But as I don't know what they are breaking, I really don't know.

Because you get rapidly diminishing returns when you increase the number of pipeline stages.

Between each stage you need registers. For a given silicon process these registers require that the results of the calculations preceding them be stable for a certain time period before the clock transition. Let's say that is R nanoseconds, which for the sake of argument we'll assume to be 2 ns. If the clock rate is, say, 100 MHz, that means you have 10 ns to do everything.

2 ns of that are needed for the registers, which leaves 8 ns for actual calculations.

If you target a higher clock speed, e.g. 200 MHz, you then have 5 ns for everything. Take out the register time and you are left with 3 ns for actual calculations.

That means that you need to have ~8/3 times as many pipeline stages and/or use faster algorithms (=> many more transistors).

If you keep on going you'll end up with an enormous chip that is 99.9% full of registers. :p
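To put that arithmetic in code, here is a minimal sketch using the same assumed 2 ns register overhead and 8 ns of logic from the example above (illustrative figures only, not real process numbers):

```python
# Diminishing returns from deeper pipelining: every stage pays a fixed
# register/settle overhead out of its cycle time, so less of each cycle
# is left for actual logic as the clock goes up.
REGISTER_OVERHEAD_NS = 2.0   # assumed settle time per stage (from the post)
TOTAL_LOGIC_NS = 8.0         # logic work per stage in the 100 MHz design

for clock_mhz in (100, 200, 300, 400, 500):
    cycle_ns = 1000.0 / clock_mhz                # cycle time in ns
    useful_ns = cycle_ns - REGISTER_OVERHEAD_NS  # time left for real work
    if useful_ns <= 0:
        print(f"{clock_mhz} MHz: the register overhead eats the whole cycle")
        continue
    depth_factor = TOTAL_LOGIC_NS / useful_ns    # pipeline depth vs. 100 MHz
    print(f"{clock_mhz} MHz: {useful_ns:.2f} ns of useful time per stage, "
          f"~{depth_factor:.1f}x the pipeline stages of the 100 MHz design")
```

At 200 MHz this reproduces the ~8/3 figure above, and by 500 MHz the register overhead alone consumes the entire cycle.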
 
ERP already indicated why GPUs aren't as fast: you get more power through parallelization than by upping clock speed. I also think there are some architectural differences that play into the clock speed gulf (pipeline length, etc.).

So really, we should be asking how parallel next gen GPUs will be. I suspect 16-24 processors will be tops for the first wave. PC GPUs will surpass that within a year, I bet.
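As a toy illustration of why width wins over raw clock for pixel throughput (the two configurations below are made up for comparison, not actual specs):

```python
# Peak pixel fill rate is roughly pipelines * clock; a wide, slower chip
# can out-run a narrow, faster one. Both configurations are hypothetical.
designs = {
    "wide and slow  (16 pipes @ 500 MHz)": (16, 500e6),
    "narrow and fast (4 pipes @ 1.5 GHz)": (4, 1.5e9),
}
for name, (pipes, clock_hz) in designs.items():
    gpixels_per_sec = pipes * clock_hz / 1e9
    print(f"{name}: {gpixels_per_sec:.1f} Gpixels/s peak")
```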
 
Thanks for the explanation, short and simple, the way I like it.
By the way, I would certainly want to see the GDC presentations geared toward multicore game engines. Cell in itself sounds like a promising start for this parallel era, assuming it has enough well-thought-out flexibility to make this multithreaded programming model happen. I think IBM, Sony, STI, NVIDIA, and Toshiba knew what they were doing; even if it seems scary, the performance can be tapped. The rest is up to the software guys. At least they will have a challenge never seen before.
 
Inane_Dork said:
ERP already indicated why GPUs aren't as fast: you get more power through parallelization than by upping clock speed. I also think there are some architectural differences that play into the clock speed gulf (pipeline length, etc.).

So really, we should be asking how parallel next gen GPUs will be. I suspect 16-24 processors will be tops for the first wave. PC GPUs will surpass that within a year, I bet.

There is even a chance that PC GPUs might surpass it before these consoles launch. Well, the top-end GPUs anyway.
 
Also, it seems that up until not too long ago, PC GPUs were manufactured by people who are just not on the same level as Intel (to give one example). But I could be wrong.
Maybe IBM can do a better job, although it doesn't show with the NV40 (IIRC they're manufacturing it, right?).
We'll see. In the end, GPUs have other priorities, even though a good clock speed would help speed things up a little bit.
 
ATI/NV will use Tensilica tech, and ATI will use Fast 14 too. The second is supposed to give 4X faster clocks; about the first we don't know, so I am expecting at least 2 GHz for the ATI GPU (on their site they already show results as 10X faster).
 
pc999 said:
ATI/NV will use Tensilica tech, and ATI will use Fast 14 too. The second is supposed to give 4X faster clocks; about the first we don't know, so I am expecting 2 GHz for the ATI GPU.
In a console? How much wattage is that going to burn, and what cooling will be needed?

I"m expecting mabye a 700mhz 16x1 or 32x1 at the most for the xenon gpu and about the same from the nvidia gpu
 
One of the things Fast 14 claims is that because of their multiphase overlapped clocking scheme, latches and registers are not required at cycle boundaries. Can someone explain to me how this is possible (their whitepaper is not particularly clear on this, and I don't know a whole lot about the physics involved)?

Cheers,
Serge
 
Ludicrous Speed!


Maybe something on the order of 600 MHz, give or take a couple hundred MHz.
 
I think power consumption and heat are the limiting factors. 6800 Ultra cards already require two power connectors and a HUGE heatsink+fan cooling solution. And that's only at 400 MHz! I doubt they'll be able to get their NV50 chips anywhere near 1 GHz.
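For a rough sense of why clocks hit that wall, here is a first-order CMOS dynamic-power sketch (P roughly proportional to C*V^2*f); the baseline wattage and voltage bumps below are assumptions for illustration, not measured NV40/NV50 figures:

```python
# First-order dynamic power for CMOS scales as C * V^2 * f, so pushing the
# clock (and the voltage needed to reach it) blows up power quickly.
# The 75 W baseline and voltage factors below are illustrative assumptions.
BASE_CLOCK_MHZ = 400
BASE_POWER_W = 75.0

def scaled_power(clock_mhz, voltage_factor=1.0):
    """Dynamic power relative to the 400 MHz baseline, capacitance fixed."""
    return BASE_POWER_W * (voltage_factor ** 2) * (clock_mhz / BASE_CLOCK_MHZ)

for clock, vf in ((400, 1.00), (700, 1.05), (1000, 1.15)):
    print(f"{clock} MHz at {vf:.2f}x voltage: ~{scaled_power(clock, vf):.0f} W")
```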
 
BOOMEXPLODE said:
I think power consumption and heat are the limiting factors. 6800 Ultra cards already require two power connectors and a HUGE heatsink+fan cooling solution. And that's only at 400 MHz! I doubt they'll be able to get their NV50 chips anywhere near 1 GHz.

Well, ATI is at 520 MHz with single-slot cooling and only one molex power connector.

I think that with the drop they can hit 700 MHz-ish.
 