Why doesn't anyone use an FPGA?

Why doesn't anyone use an FPGA/ASIC in video game consoles?
 
Yeah. What is an FPGA/ASIC? They're two different things. And use it for what? Regardless of the reason for this question, the answer is likely to be cost and performance.
 
Answer 1: Of course they do, e.g. here's an FPGA-based console: http://nothings.org/projects/

Answer 2: Of course they do, this is the secret SEGA plan to return to hardware with a bang!

Answer 3: nah, forget about answer 3, it's too boring.
 
Actually, I remember that ATI/Nvidia used to use FPGAs for validation and testing before they made the actual chips.
 
The Amiga CD32 had an FPGA. It didn't get anywhere. If this tech was worth using, someone would be using it for sure. Hence your question answered - they don't provide anything worth having for the cost.
 
TBH I don't know for sure. When the CD32 was announced, magazines described it as featuring an FPGA which could be used for planar-to-chunky conversion or repurposed for other tasks. They could have reported it wrong, I could be remembering it wrong, but it's described here near the bottom as a 'gate array,' though not an FPGA.

Chunky-To-Planar - The new magic hardware

The new Amiga CD32 contains a rather special new piece of hardware called a chunky-to-planar gate array. If you program games you'll instantly go 'wow!', but if you don't here's how it works, and why it's so good:
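
For anyone who hasn't met the terms: 'chunky' stores one pixel per byte (its colour index), while the Amiga's planar format spreads each pixel's bits across separate bitplanes, with bit b of every pixel going into bitplane b. Here's a rough software sketch of the conversion in C, with a made-up buffer layout, just to illustrate the idea (it is not the actual CD32 gate array interface):

Code:
#include <stdint.h>
#include <stddef.h>

/* Chunky-to-planar, done the slow way on the CPU (illustrative sketch).
 * chunky:      one byte per pixel (colour index)
 * planes:      'depth' consecutive bitplanes, 'plane_bytes' bytes each
 * Bitplane b receives bit b of every pixel; the leftmost pixel maps to
 * the most significant bit of each plane byte. */
void chunky_to_planar(const uint8_t *chunky, size_t num_pixels,
                      uint8_t *planes, size_t plane_bytes, int depth)
{
    for (size_t i = 0; i < num_pixels; i++) {
        uint8_t pixel = chunky[i];
        size_t  byte  = i / 8;                  /* byte offset within each plane */
        uint8_t bit   = (uint8_t)(7 - (i % 8)); /* leftmost pixel = MSB          */

        for (int b = 0; b < depth; b++) {
            if (pixel & (1u << b))
                planes[b * plane_bytes + byte] |= (uint8_t)(1u << bit);
            else
                planes[b * plane_bytes + byte] &= (uint8_t)~(1u << bit);
        }
    }
}

Doing that inner bit-shuffle for every pixel of every frame is exactly the kind of fixed, repetitive work a dedicated gate array can do far faster than the CPU.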
 
FPGAs are economical for low-volume devices only.

Why, when they only cost a few dollars/cents in bulk anyway? FPGAs and ASICs are far more powerful than conventional CPUs at doing a limited range of tasks. A Stratix III at 400MHz is 70-80 GFLOPS of double-precision floating-point performance, and that's on chips from 2005. The "RPU" is powered by a Xilinx Virtex-II 6000 that runs at 66MHz but can draw 187 million raytraces at 15fps at a resolution of 640x480.

Just wondering why none of the main console makers ever use an FPGA to accelerate a limited range of tasks, like say drawing polygons or AND/IFs for AI.
 
A few dollars * 50 million consoles = hundreds of millions of dollars.
 
I haven't priced FPGAs in about 8 years, but at a previous job we used one (a Xilinx Virtex-II) for a video board. ASICs at the time were easily running double the clock rate for far less money. Since the market price for our board was at least a few thousand dollars and the volume was low, an expensive FPGA made sense.

For the volumes the Xbox and PlayStation sell at, ASICs make more sense and will cost less in the long run. If performance isn't an issue, developing on an FPGA sure beats an ASIC though. What's that, you found a bug in my design? No problem, let me reflash that for you. ;)
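
(Back-of-the-envelope with made-up numbers: say the ASIC costs $2M in NRE and masks plus $5 per chip, while an equivalent FPGA is $40 per chip with no NRE. The break-even volume is 2,000,000 / (40 - 5) ≈ 57,000 units. Below that the FPGA is cheaper overall; at the tens of millions of units a console sells, the ASIC wins by a huge margin.)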
 
Not to mention that the same design implemented in an ASIC is typically 10x smaller and clocks about 2x higher.
 
Actually, I remember that ATI/Nvidia used to use FPGAs for validation and testing before they made the actual chips.
That's common enough. It's a lot faster than using software emulation!
Not to mention that the same design implemented in an ASIC is typically 10x smaller and clocks about 2x higher.
I would think, but I'm no expert, that you would usually be able to clock your design a lot faster, perhaps >10x, in an ASIC.
 
I am not an expert either. But the slowest FPGA system I have worked with was 25MHz. So a 10x speed-up seems questionable, especially with fully automated P&R and related stuff.
 
How many FPGAs did you need to emulate the ASIC? I'd imagine that if you needed dozens of them to handle the design, then routing on and off multiple FPGAs is going to limit your clock speed <shrug>. Again, I haven't had to work with FPGA systems for many years so take my comments with the usual helping of salt.
 
In addition, if you start having to split your design across multiple FPGAs, you can start to run into issues with how you partition the design and how you do the interfaces between the blocks on the different FPGAs.

CC
 
How many FPGAs did you need to emulate the ASIC? I'd imagine that if you needed dozens of them to handle the design, then routing on and off multiple FPGAs is going to limit your clock speed <shrug>. Again, I haven't had to work with FPGA systems for many years so take my comments with the usual helping of salt.

Oh. I wasn't speaking of emulating an ASIC design with a bunch of FPGAs. I was speaking of one FPGA handling lots of stuff going on. Basically, everything but the kitchen sink was implemented in the FPGA.

But the points you made are very valid.
 