This photonic computer is 10X faster than NVIDIA GPUs using 90% less energy

Don't blame me for the title...

https://lightmatter.co

You'd think Intel/AMD/Nvidia would be happy to invest in or try to buy these guys on the sheer possibility that it works anywhere near as well as represented. I had no idea photonic computers were close enough that "shipping by the end of the year" might be a thing. I mean, screw TSMC: the switching speeds and power efficiency you can get out of photonics are way beyond anything they're even dreaming of having on their roadmap.

Assuming of course it's anything like the claims being made.
 
I'm super hyped. Quantum computing always seems out of reach and exotic, but this sounds practical. Awesome - good luck with that! :D

Edit: I see another dark age of 'brute force beating clever control flow' rising ;)
 
Worth noting this is analogue computing.

Their performance and power efficiency claims are all over the place, but it appears to offer at least 10x higher performance per watt than the best conventional hardware.

https://www.wired.com/story/chip-ai-works-using-light-not-electrons/

"The company says its chip runs 1.5 to 10 times faster than a top-of-the-line Nvidia A100 AI chip, depending on the task. Running a natural language model called BERT, for example, Lightmatter says Envise is five times faster than the Nvidia chip; it also consumes one-sixth of the power."

On their own site they claim ">5x faster than NVidia A100 on BERT Base in the same power footprint".
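
Just to put those claims on one scale, here's a quick back-of-envelope in Python combining the two Wired figures (5x the throughput at one-sixth the power). These are Lightmatter's claimed numbers, not independent measurements, so treat the result as the upper bound of what they're promising:

```python
# Implied performance-per-watt advantage from the quoted BERT claim:
# ~5x the throughput of an A100 at ~1/6 of the power (both are claims).
speedup = 5.0        # claimed throughput ratio, Envise vs. A100 on BERT
power_ratio = 1 / 6  # claimed Envise power as a fraction of A100 power

perf_per_watt_gain = speedup / power_ratio
print(f"Implied perf/W advantage: ~{perf_per_watt_gain:.0f}x")  # ~30x
```

That lines up with the "at least 10x perf/W" reading above, with plenty of headroom for the claims being optimistic.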

There's about another 50x boost available from using multiple colours of light concurrently. It seems to me that's the end of this particular road though.

I suppose the relatively low power density means that 3D stacking would be very productive. But I don't understand the power consumption implications of using coloured light - how much extra power consumption would there be with 50 colours?
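
To make the question concrete, here's a toy wavelength-division-multiplexing model in Python. All the wattages are made-up placeholders (I have no idea what their actual laser/modulator budget looks like); the point is just that throughput scales with the number of colours while much of the power stays shared, so perf/W could keep improving until the per-colour cost dominates:

```python
# Toy WDM scaling model. The wattages below are invented placeholders,
# purely to illustrate how perf/W might change as colours are added.
def wdm_model(n_colours, base_throughput=1.0,
              shared_power_w=15.0,      # assumed: control logic, memory, DAC/ADC
              per_colour_power_w=0.5):  # assumed: extra laser + modulators per colour
    throughput = base_throughput * n_colours  # one parallel stream per colour
    power = shared_power_w + per_colour_power_w * n_colours
    return throughput, throughput / power

for n in (1, 8, 50):
    tput, eff = wdm_model(n)
    print(f"{n:3d} colours: {tput:5.1f}x throughput, {eff:.3f} throughput/W")
```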

I suppose it'll be a few years before they make something suitable for training.
 
Worth noting this is analogue computing.
A math coprocessor, needing traditional transistors for branches and control flow.
The first example that comes to mind would be path finding. It cannot run greedy Dijkstra or A*, but it can diffuse energy out from the goal and then find the shortest path from any point by gradient descent (rough sketch below). And such brute-force optimization can likely beat the work-efficient greedy algorithms in many cases.
Surely not worthless. Spectral rendering for free? Better physics simulations. Maybe even proper audio synthesis becomes possible.
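
Here's the rough sketch referenced above, with plain NumPy standing in for the analogue part. The grid size, obstacle wall and goal position are all made up for illustration; the point is that the heavy lifting is a branch-free relaxation loop, with only a trivial greedy walk left for conventional control flow:

```python
import numpy as np

# "Diffuse from the goal, then follow the gradient" path finding sketch.
H, W = 20, 20
free = np.ones((H, W), dtype=bool)
free[0, :] = free[-1, :] = free[:, 0] = free[:, -1] = False  # walled border
free[4:16, 10] = False                                       # an obstacle wall
goal = (18, 18)

# Relax a potential field outward from the goal (repeated 4-neighbour averaging).
phi = np.zeros((H, W))
phi[goal] = 1.0
for _ in range(2000):
    phi = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                  np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
    phi[~free] = 0.0   # obstacles and the border absorb
    phi[goal] = 1.0    # the goal is a fixed source

def walk_to_goal(start):
    """Greedily step to the highest-potential neighbour until the goal is reached."""
    pos, path = start, [start]
    for _ in range(H * W):
        if pos == goal:
            break
        y, x = pos
        neighbours = [(y + dy, x + dx) for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if free[y + dy, x + dx]]
        pos = max(neighbours, key=lambda p: phi[p])
        path.append(pos)
    return path

print(len(walk_to_goal((1, 1))) - 1, "steps from (1, 1) to the goal")
```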
 
There's about another 50x boost available from using multiple colours of light concurrently. It seems to me that's the end of this particular road though.

I suppose the relatively low power density means that 3D stacking would be very productive. But I don't understand the power consumption implications of using coloured light - how much extra power consumption would there be with 50 colours?

It sounds like their hardware is capable of processing multiple streams of different wavelengths. I think the different "light" mentioned in the video is mostly in reference to the electromagnetic radiation spectrum ...
 
A Xilinx Alveo FPGA, said Peng, was able to achieve a 100-times increase in the amount of throughput in terms of words recognized correctly per second in the Google Speech test, relative to an Nvidia V100 GPU. The data had been disclosed by Numenta back in November.
The ImageNet test on ResNet-50 had not previously been disclosed by either party. In that instance, the throughput in images per second recognized was accelerated by three times on the Xilinx part, called Versal, versus an Nvidia T4 part.
https://www.zdnet.com/article/xilin...c-speed-up-of-neural-nets-versus-nvidia-gpus/

Numenta's claims were also big, though its founder is a known and legit industry veteran. And Xilinx is no startup.

https://www.eenewseurope.com/news/neureality-starts-showing-prototype-inference-platform

NR1-P is NeuReality's first implementation of the company's AI-centric architecture, with other implementations to follow. The company is now focused on developing the NR1 system chip that it claims will bring a 15x improvement in performance per dollar compared to GPU-based inference engines.
Another claim from NeuReality
 
Surely not worthless.
Oh, I'm definitely not knocking analogue computing. Our brains are literally a wonderful mixture of analogue and digital, synchronous and asynchronous :)

It sounds like their hardware is capable of processing multiple streams of different wavelengths. I think the different "light" mentioned in the video is mostly in reference to the electromagnetic radiation spectrum ...
Absolutely. Different colours being processed simultaneously :)

https://www.zdnet.com/article/xilin...c-speed-up-of-neural-nets-versus-nvidia-gpus/

Numenta's claims were also big, though its founder is a known and legit industry veteran. And Xilinx is no startup.

https://www.eenewseurope.com/news/neureality-starts-showing-prototype-inference-platform

Another claim from NeuReality
I think there's mileage in FPGAs, yes. But I doubt it's in the "matrix multiplication", since dedicated silicon will always win there.

Sparsity is a win whether on a GPU or an FPGA.
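
To put a crude number on that: a multiply-accumulate count for one large layer under made-up weight and activation sparsity levels, in the spirit of Numenta's sparse networks. How much of the theoretical reduction you actually bank depends on how well the hardware skips the zeros, which is exactly where FPGAs and custom silicon differ from GPUs:

```python
# Crude MAC count for one 4096x4096 layer, dense vs. sparse.
# The sparsity levels are assumed placeholders, not Numenta's real settings.
n_in, n_out = 4096, 4096
weight_sparsity = 0.05      # assumed fraction of non-zero weights
activation_sparsity = 0.10  # assumed fraction of non-zero input activations

dense_macs = n_in * n_out
# A multiply is only needed where both the weight and its input are non-zero
# (expected count, assuming the non-zeros are spread independently).
sparse_macs = dense_macs * weight_sparsity * activation_sparsity

print(f"dense : {dense_macs:,} MACs")
print(f"sparse: {int(sparse_macs):,} MACs (~{dense_macs / sparse_macs:.0f}x fewer)")
```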

I'm a huge fan of Numenta and it gratifies me to see that more and more groups are doing serious work along the same lines as Numenta (often independently of Numenta). Numenta is learning from these groups too, it's certainly not a one-way street. Jeff's focus has always been the neuroscience. It's my belief that the broader AI/ML community repeatedly chases down dead ends because it ignores neuroscience.

I'm no expert on the support for sparsity and "diagonal" matrices in NVidia's architecture: how mature it is and how much benefit researchers actually get from it. So I don't know what proportion of the comparisons Numenta makes are strictly valid.

Numenta is starting to set benchmark performance levels on cutting-edge problems.


Dendritic computation in brains is real and the AI/ML people are utterly ignorant of this.

I think place cells and grid cells are the missing major piece for making progress and lie at the heart of Jeff's current research. These types of cells are how animals understand the 3D world, and how animals build virtual models of more complex worlds, such as social relationships.
 
Nice to see this finally happen. I recall in my university days, in my 4th year, my professor held a patent on 'photonic memory', or at least one of the patents. We were supposed to help research this domain. Friggin' impossible. Nice to see a product nearing completion, likely before quantum computers.

Bypassing branch statements and the like, effectively removing a big part of memory management and just focusing on processing, is a critical factor for this chip.
 
I'm always suspicious when a startup claims xxx performance improvement for its future product against the previous generation from an established player. The latest example is Graphcore, the big loser of the last MLPerf:
https://www.servethehome.com/graphcore-celebrates-a-stunning-loss-at-mlperf-training-v1-0/

I have no idea about the companies mentioned in this topic, but it won't be so easy for the new players. Nvidia is not standing still. Even within the same generation, ML performance improves a lot through software optimization. For example, the A100 is roughly 2 times faster than last year, thanks to new libraries.

On the other hand, ML is so new that huge gains can come from innovative algorithms. So everything is possible, and that's the beauty of this field!
 