Why doesn't anyone use an FPGA?

Basically, everything but the kitchen sink was implemented in FPGA.
I imagine your 'everything' wasn't all that much, in the big scheme of things. :) Even the now fairly modest and unassuming GPU in the PS3 is what, 550-something million transistors? There's not one single FPGA on earth that can model something like that... unless it's one manufactured by space aliens, perhaps. :)

Much less the beastly ASICs used in the Radeon 5870/GeForce 480 series of boards...
 
I imagine your 'everything' wasn't all that much, in the big scheme of things. :) Even the now fairly modest and unassuming GPU in the PS3 is what, 550-something million transistors? There's not one single FPGA on earth that can model something like that... unless it's one manufactured by space aliens, perhaps. :)

Much less the beastly ASICs used in the Radeon 5870/GeForce 480 series of boards...

Atmel's mAgic performs 1 GFLOPS at 100 MHz.

http://www.design-reuse.com/news/57...-domain-dsp-soft-core-1-0-gflops-100-mhz.html

Stratix IV (they are at V now) is over 2.5 billion transistors.


Stratix III can do 45 GFLOPS (at 400 MHz), and that's an almost six-year-old chip design.

www.alteraforum.com/forum/showthread.php?t=3868
www.altera.com/literature/wp/wp-01028.pdf
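For a sense of scale (my own back-of-the-envelope, using only the figures quoted above): sustained throughput is just parallel floating-point operations per cycle times clock rate, so 45 GFLOPS / 400 MHz ≈ 112 FP operations completing every cycle on the Stratix III, versus 1 GFLOPS / 100 MHz = 10 per cycle for the mAgic core. The FPGA gets its throughput from sheer parallelism, not clock speed.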
 
What's your point, Flux? CPUs and GPUs are more than just FLOPS. Even Microsoft's scaler chip in the Xbox 360 is an ASIC, because at high volumes that's the cheaper way to go.
 
What's the point of full-scale FPGA simulation? Don't they have equivalence provers for timing-accurate C models nowadays?
 
What's the point of full-scale FPGA simulation? Don't they have equivalence provers for timing-accurate C models nowadays?
I'm not sure exactly what you are asking here, but FPGA simulation is sometimes used because it's an order of magnitude (or two) faster than running the simulation on a CPU.

If it's a "small" design, then it's also useful to be able to show it to potential customers without having to spend a fortune on a test chip.
 
I imagine your 'everything' wasn't all that much, in the big scheme of things. :) Even the now fairly modest and unassuming GPU in the PS3 is what, 550-something million transistors? There's not one single FPGA on earth that can model something like that... unless it's one manufactured by space aliens, perhaps. :)

Much less the beastly ASICs used in the Radeon 5870/GeForce 480 series of boards...

Of course, it wasn't a CPU/GPU. Duh...

All I am saying is that if it had been implemented in ASIC, it would have taken much less area and would have clocked much higher. And no, it wasn't something trivial, unless you want to compare it with chips that have ~10 competitors in the whole world in terms of transistor count.
 
I'm not sure exactly what you are asking here, but FPGA simulation is sometimes used because it's an order of magnitude (or two) faster than running the simulation on a CPU.
Well, for instance, NVIDIA AFAIK works downwards from C models all the way to timing-accurate models. If you can prove equivalence between C and RTL for a block, simulating the entire chip at the RTL level becomes an epic waste of time.

The FPGA is faster at simulating gate-level descriptions, but the C description running on a CPU is almost certainly faster still.
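To make the "prove equivalence for a block" idea concrete, here is a minimal, hypothetical sketch in C (a toy, not NVIDIA's or anyone's actual flow): a behavioural reference model of a 4-bit multiplier next to a structural shift-and-add model, compared exhaustively. Real flows use formal equivalence checkers rather than brute force, but the payoff is the same: once a block checks out, you run the fast C model and skip re-simulating its RTL.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Behavioural ("C model") view: 4-bit x 4-bit -> 8-bit multiply. */
static uint8_t mul4_ref(uint8_t a, uint8_t b)
{
    return (uint8_t)(a * b);                 /* 15 * 15 = 225 fits in 8 bits */
}

/* Structural ("RTL-ish") view: shift-and-add, one partial product per bit of b. */
static uint8_t mul4_shift_add(uint8_t a, uint8_t b)
{
    uint8_t acc = 0;
    for (int i = 0; i < 4; i++)
        if ((b >> i) & 1)
            acc = (uint8_t)(acc + (a << i)); /* add the shifted partial product */
    return acc;
}

int main(void)
{
    /* Only 256 input pairs, so exhaustive checking stands in for a formal proof. */
    for (uint8_t a = 0; a < 16; a++)
        for (uint8_t b = 0; b < 16; b++)
            assert(mul4_ref(a, b) == mul4_shift_add(a, b));
    puts("4-bit multiplier: behavioural and structural models agree on all inputs");
    return 0;
}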
 
Well, for instance, NVIDIA AFAIK works downwards from C models all the way to timing-accurate models. If you can prove equivalence between C and RTL for a block, simulating the entire chip at the RTL level becomes an epic waste of time.
Doesn't that at least assume that
  • the C (presumably SystemC?) is correct, and
  • you can actually do such a proof?
That "if" seems like a pretty big one to me. For things like, say, a floating-point multiplier I can see formal verification being usable, but for a completely arbitrary, full-scale system, can you always do it? <shrug>
The FPGA is faster at simulating gate-level descriptions, but the C description running on a CPU is almost certainly faster still.
Not always. We have "systems" written in C that still take several hours to run on the CPU but take only tens of seconds on an FPGA.
 
Doesn't that at least assume that
  • the C (presumably SystemC?) is correct, and
  • you can actually do such a proof?
That "if" seems like a pretty big one to me. For things like, say, a floating-point multiplier I can see formal verification being usable, but for a completely arbitrary, full-scale system, can you always do it? <shrug>

Not always. We have "systems" written in C that still take several hours to run on the CPU but take only tens of seconds on an FPGA.

Wouldn't a (strictly?) functional language - say Haskell - be a better fit for such a purpose?

After all, circuit blocks are

a. pure functions of their inputs, and
b. composable.

Laziness could be an issue, but still, on balance, I see functional languages being way better than C.
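Whatever language you pick, the composability point is easy to see for combinational logic. Here is a hypothetical sketch (in plain C, since that's the modelling language the thread keeps coming back to): a half adder composed into a full adder composed into a 4-bit ripple-carry adder, each stage a pure function of its inputs.

#include <stdint.h>
#include <stdio.h>

/* Each combinational block is a pure function of its inputs; all outputs are
 * returned in a small struct, so blocks compose like ordinary functions.      */
typedef struct { uint8_t sum, carry; } bit_pair;

static bit_pair half_adder(uint8_t a, uint8_t b)
{
    return (bit_pair){ .sum = a ^ b, .carry = a & b };
}

/* Full adder = two half adders plus an OR gate. */
static bit_pair full_adder(uint8_t a, uint8_t b, uint8_t cin)
{
    bit_pair h1 = half_adder(a, b);
    bit_pair h2 = half_adder(h1.sum, cin);
    return (bit_pair){ .sum = h2.sum, .carry = h1.carry | h2.carry };
}

/* 4-bit ripple-carry adder = a chain of full adders; carry out lands in bit 4. */
static uint8_t ripple_add4(uint8_t a, uint8_t b)
{
    uint8_t sum = 0, carry = 0;
    for (int i = 0; i < 4; i++) {
        bit_pair fa = full_adder((a >> i) & 1, (b >> i) & 1, carry);
        sum  |= (uint8_t)(fa.sum << i);
        carry = fa.carry;
    }
    return (uint8_t)(sum | (carry << 4));
}

int main(void)
{
    printf("9 + 7 = %u\n", (unsigned)ripple_add4(9, 7));   /* prints 16 */
    return 0;
}

Sequential (clocked) logic needs state as well, which is the objection raised below.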
 
Btw, something I've pondered pretty much since I first heard of these chips more than ten years ago: do FPGAs ever wear out, either through use or through reprogramming, or are they essentially as reliable as any traditional combination of silicon transistors?
 
Wouldn't a (strictly?) functional language - say Haskell - be a better fit for such a purpose?
The hardware blocks have mutable state, which they carry across clocks ... you don't want a language which makes expressing that kind of thing unnatural.

Occam would be a better fit than Haskell; hell, JavaScript would be a better fit than Haskell.
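To illustrate that point: in a plain C model, a clocked block usually becomes a step function over an explicit register struct, called once per clock edge. A minimal, hypothetical sketch (a saturating accumulator, not anything from the thread):

#include <stdint.h>
#include <stdio.h>

/* The block's registers: state that survives from one clock edge to the next. */
typedef struct { uint16_t acc; } accum_state;

/* One clock edge: compute next state and this cycle's output from the current
 * state and input. The caller threads the state through explicitly.            */
static accum_state accum_step(accum_state s, uint8_t in, uint16_t *out)
{
    uint16_t next = (uint16_t)(s.acc + in);
    if (next > 1000)
        next = 1000;                        /* saturate instead of wrapping */
    *out = next;
    return (accum_state){ .acc = next };
}

int main(void)
{
    accum_state s = { .acc = 0 };           /* reset state */
    const uint8_t samples[] = { 10, 250, 250, 250, 250, 250 };

    for (unsigned i = 0; i < sizeof samples / sizeof samples[0]; i++) {
        uint16_t out;
        s = accum_step(s, samples[i], &out);
        printf("cycle %u: out = %u\n", i, (unsigned)out);
    }
    return 0;
}

Functional languages can express the same thing by threading the state value through explicitly (or via a state monad); whether that feels natural is exactly what is being argued here.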
 

It would be stupid to put an FPGA in a console to help with ray tracing!

The reason that people have used FPGAs for ray-tracing prototypes is that it would cost a hell of a lot of money to have a chip custom-fabbed!

FPGAs are slower, bigger, and cost a lot more than a normal chip, and I am getting sick of people wanting them in a console!
 
It would be stupid to put an FPGA in a console to help with ray tracing!

The reason that people have used FPGAs for ray-tracing prototypes is that it would cost a hell of a lot of money to have a chip custom-fabbed!

FPGAs are slower, bigger, and cost a lot more than a normal chip, and I am getting sick of people wanting them in a console!

You do realize that's Sony's CTO talking about having programmable logic & DSPs in the console to help out with the processing, right?
 
You do realize that's Sony's CTO talking about having programmable logic & DSPs in the console to help out with the processing, right?

A bad idea is a bad idea!

FPGAs are pointless in a console!
They are not some special chip to help with ray tracing; they are mostly a cheap way to prototype things. If they wanted ray-tracing hardware, they would be morons not to make a custom high-speed chip.
 
A bad idea is a bad idea!

FPGAs are pointless in a console!
They are not some special chip to help with ray tracing; they are mostly a cheap way to prototype things. If they wanted ray-tracing hardware, they would be morons not to make a custom high-speed chip.

Maybe they want to use it for more than just graphical tasks, or they want it to be programmable so it can take on new tasks as needed in the future.

& I think the Chief Technology Officer of Sony would have a better idea of what makes sense inside of a console than you, unless you are the CTO of a big company yourself. Are you?
 
Maybe they want to use it for more than just graphical tasks, or they want it to be programmable so it can take on new tasks as needed in the future.

FPGAs make about as much sense as spending the majority of your silicon resources on a barely programmable collection of crippled SIMD pipelines and then proclaiming it will be the future...

& I think the Chief Technology Officer of Sony would have a better idea of what makes sense inside of a console than you, unless you are the CTO of a big company yourself. Are you?

First of all, based on past performance, we can say that this isn't true.

Second, the advantages of FPGAs aren't in being reprogrammable within a box. It is fair to say that basically all the FPGAs sold are only used for a fixed problem. AKA, company A buys them to do X, company B buys them to do Y, company C buys them to do Z. No one is buying them to dynamically switch between X, Y & Z. The thing about FPGAs is that it is a lot cheaper to buy one and program it to do your low-volume workload than it is to fab a chip, hence their use in the industry. As something that you reprogram from application to application, they are basically stillborn.

In something like a console, it makes much more sense to use application-specific programmable logic (GPUs, physics, ray tracing, etc.), custom logic, or general-purpose logic (AKA CPUs) than it does to use FPGAs, since you are already doing a custom design from the start.

Also, the skill sets needed to design for an FPGA and the skill sets needed to write a program are significantly different. General and even graphics programming is a small, small subset of the skills required to program FPGAs. At least EE had some similarities to normal programming; FPGAs have none.
 
FPGAs make about as much sense as spending the majority of your silicon resources on a barely programmable collection of crippled SIMD pipelines and then proclaiming it will be the future...



First of all, based on past performance, we can say that this isn't true.

Second, the advantages of FPGAs aren't in being reprogrammable within a box. It is fair to say that basically all the FPGAs sold are only used for a fixed problem. AKA, company A buys them to do X, company B buys them to do Y, company C buys them to do Z. No one is buying them to dynamically switch between X, Y & Z. The thing about FPGAs is that it is a lot cheaper to buy one and program it to do your low-volume workload than it is to fab a chip, hence their use in the industry. As something that you reprogram from application to application, they are basically stillborn.

In something like a console, it makes much more sense to use application-specific programmable logic (GPUs, physics, ray tracing, etc.), custom logic, or general-purpose logic (AKA CPUs) than it does to use FPGAs, since you are already doing a custom design from the start.

Also, the skill sets needed to design for an FPGA and the skill sets needed to write a program are significantly different. General and even graphics programming is a small, small subset of the skills required to program FPGAs. At least EE had some similarities to normal programming; FPGAs have none.

What past performance? If you're talking sales, they have sold over 300 million, & if you're talking about the performance of the consoles, the PS1, 2 & 3 all put out pretty nice games for their time. So I'm not sure what you even meant by that.

& as for the FPGA, maybe he was talking about another form of programmable logic, as you said (GPUs, physics, ray tracing, etc.).

The point was that he was talking about using DSPs & programmable logic to help out with the processing, & I was asking whether they could be using it for some type of ray tracing.
 
What past performance? If you're talking sales, they have sold over 300 million, & if you're talking about the performance of the consoles, the PS1, 2 & 3 all put out pretty nice games for their time. So I'm not sure what you even meant by that.

The EE, for one. It was widely regarded as a hard-to-program dead end before it was released, and it lived up to that expectation perfectly.

& as for the FPGA, maybe he was talking about another form of programmable logic, as you said (GPUs, physics, ray tracing, etc.).

FPGA != GPU, physics hardware or ray tracing hardware.

The point was that he was talking about using DSPs & programmable logic to help out with the processing, & I was asking whether they could be using it for some type of ray tracing.

There is NOTHING a DSP can do that a CPU cannot. DSPs are basically only used for extremely low-cost solutions. At this point, they are pretty much dead men walking.
 
DSPs still do wave manipulation way more accurately (when trying to replicate an analog system) than CPUs (FMA has actually helped CPUs a lot here), but I can still get closer sounds out of my DSPs (Line 6 POD and Axe-FX).

Most high-end networking gear is all FPGAs. It used to be custom; now all the heavy lifting on the high-end platforms is FPGA-based.

There are still high-performance markets where FPGAs are what the designers are choosing.
 