Bolt's Zeus GPU

If the software support is good, it seems like it would be a great value for professional 3D visualization and simulation. Gaming is probably not the market they are targeting.
 
This is a start-up out of California, and it seems they want to use RISC-V cores instead of proprietary hardware.

The article from Tom's Hardware kinda gave me BitBoys vibes with the grandiose claims. What do y'all think?

Startup claims its Zeus GPU is 10X faster than Nvidia's RTX 5090: Bolt's first GPU coming in 2026
I don't know what to think. Last year they introduced their Thunder GPU and it's nowhere to be seen...
Now they're pushing their claims even higher with Zeus, but it won't ship for another two years...
And all the Zeus numbers look good today, but the real competition will be Rubin Ultra with its 576GB 12S HBM memory system + optical interconnect. Good luck.

So it looks like another startup whose only goal is to get acquired and make a fortune for its founders. I may be very wrong, of course.
 
Given the worrying tactics and misleading advertising AMD and Nvidia are engaging in with their prices, another competitor would be nice to have. Drivers are what can make a huge difference, not just raw performance.
 
After a quick read, it sounds to me like a large collection of small CPUs, akin to Larrabee and Cell. Is that the case?
Yes, exactly. It's an HPC GPU that operates mainly in FP64. It doesn't have any fixed-function units or any texturing support.

There is one major catch: Zeus can only beat the RTX 5090 in path tracing and FP64 compute workloads because it does not support traditional rendering techniques. This means it has little to no chance of becoming one of the best graphics cards.

They achieved the 10x speedup by connecting four chiplet modules together (one chiplet is claimed to be 2.5x faster than the 5090), by using copious amounts of on-chip FP64 hardware (absent from consumer GPUs), and by using copious amounts of VRAM. They also most likely used scenes that don't fit in the 5090's VRAM, since the 5090 is shown delivering 2x the path tracing performance of the 4090, which isn't possible unless the 4090 is VRAM-limited.
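
For what it's worth, the claimed figure is simple multiplication. Here's a back-of-the-envelope check in Python (my own sketch, taking the quoted 2.5x-per-chiplet number at face value and assuming perfect linear scaling across chiplets, which multi-chip systems rarely achieve):

chiplets = 4                # the four-chiplet Zeus configuration described above
per_chiplet_speedup = 2.5   # claimed advantage of one chiplet over an RTX 5090

# Assumes path tracing scales linearly across chiplets with no interconnect
# or memory-capacity effects, which is the most charitable reading of the slides.
total_speedup = chiplets * per_chiplet_speedup
print(f"Claimed aggregate speedup vs RTX 5090: {total_speedup:.0f}x")  # 10x

And the moment a scene spills past the 5090's 32GB of VRAM, that multiplier stops meaning much, which is exactly the caveat above.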

They also avoided comparing against Quadro GPUs with 48GB of VRAM, or any data center GPU: H100/H200/B200/MI300 ... etc.
 
The biggest issue is that their performance lead only shows up at FP64, a dying/niche format that is being replaced by lower-precision AI algorithms even in traditional HPC apps.
In the case of path tracing it's even worse, as current pipelines are all FP32, including industry-standard cinema-level rendering. What's the point of using a much slower and more resource-hungry FP64? Their slides don't provide an answer...
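
To put numbers on that precision gap, here's a quick numpy sketch (my own illustration, not from Bolt's material): FP32 carries roughly 7 significant decimal digits and FP64 roughly 16, and production path tracers already get by on the former.

import numpy as np

# Machine epsilon: the smallest relative step each format can resolve.
print(np.finfo(np.float32).eps)   # ~1.19e-07 -> about 7 decimal digits
print(np.finfo(np.float64).eps)   # ~2.22e-16 -> about 16 decimal digits

# A tiny increment that FP32 silently drops but FP64 keeps:
print(np.float32(1.0) + np.float32(1e-8) == np.float32(1.0))  # True  (contribution lost)
print(np.float64(1.0) + np.float64(1e-8) == np.float64(1.0))  # False (contribution kept)

Renderers work around cases like this while staying in FP32 (careful epsilon handling, sensible scene scaling) rather than paying for FP64 everywhere.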
 