This photonic computer is 10X faster than NVIDIA GPUs using 90% less energy

Discussion in 'Architecture and Products' started by Davros, Jul 5, 2021.

  1. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    17,468
    Likes Received:
    4,871
  2. Frenetic Pony

    Regular Newcomer

    Joined:
    Nov 12, 2011
    Messages:
    798
    Likes Received:
    463
    You'd think Intel/AMD/Nvidia would be happy to invest in or try to buy these guys just on the off chance it works anywhere near as well as represented. I had no idea photonic computers were so close that "shipping by the end of the year" might be a thing. I mean, screw TSMC: the switching speeds and power efficiency you can get out of photonics are way beyond anything they're even dreaming of having on their roadmap.

    Assuming of course it's anything like the claims being made.
     
  3. JoeJ

    Veteran Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    1,519
    Likes Received:
    1,763
    I'm super hyped. Quantum computing always seems out of reach and exotic, but this all sounds practical. Awesome - good luck with that! :D

    Edit: I see another dark age of 'brute force beating clever control flow' rising ;)
     
    #3 JoeJ, Jul 5, 2021
    Last edited: Jul 5, 2021
  4. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,538
    Likes Received:
    1,910
    Location:
    London
    Worth noting this is analogue computing.

    Their performance and power efficiency claims are all over the place, but it appears to be at least 10x higher performance per watt than the best conventional hardware.

    https://www.wired.com/story/chip-ai-works-using-light-not-electrons/

    "The company says its chip runs 1.5 to 10 times faster than a top-of-the-line Nvidia A100 AI chip, depending on the task. Running a natural language model called BERT, for example, Lightmatter says Envise is five times faster than the Nvidia chip; it also consumes one-sixth of the power."

    On their own site they claim ">5x faster than NVidia A100 on BERT Base in the same power footprint".
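
    Quick sanity check on how those headline BERT numbers combine, taking the quoted figures at face value: 5x the throughput at one-sixth the power works out to roughly 5 × 6 = 30x performance per watt, so the "at least 10x" figure above is, if anything, conservative for that particular workload.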

    There's about another 50x boost available from using multiple colours of light concurrently. It seems to me that's the end of this particular road though.

    I suppose the relatively low power density means that 3D stacking would be very productive. But I don't understand the power consumption implications of using coloured light - how much extra power consumption would there be with 50 colours?
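
    For what it's worth, a crude back-of-envelope on the colour question (purely a sketch in Python, nothing here is a Lightmatter number): assume throughput scales linearly with the number of wavelength channels, the photonic mesh draws a fixed base power, and each extra colour only adds its own laser/modulator overhead.

    ```python
    # Back-of-envelope for wavelength multiplexing. Assumptions: throughput
    # scales linearly with the number of colours, the mesh draws a fixed base
    # power, and each colour adds a fixed laser/modulator overhead.
    # Every number below is a placeholder, not a vendor figure.

    def wdm_estimate(base_tops, base_power_w, channels, per_channel_w):
        tops = base_tops * channels
        power = base_power_w + per_channel_w * channels
        return tops, tops / power

    for ch in (1, 10, 50):
        tops, tops_per_w = wdm_estimate(base_tops=100, base_power_w=50,
                                        channels=ch, per_channel_w=2)
        print(f"{ch:>2} colours: {tops:6.0f} TOPS, {tops_per_w:5.1f} TOPS/W")
    ```

    Under those assumptions the perf/W keeps climbing with channel count because the fixed mesh power gets amortised, which is presumably the attraction; the real per-channel overhead is exactly the unknown being asked about here.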

    I suppose it'll be a few years before they make something suitable for training.
     
    Lightman and Frenetic Pony like this.
  5. JoeJ

    Veteran Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    1,519
    Likes Received:
    1,763
    A math coprocessor, needing traditional transistors for branches and control flow.
    The first example that comes to mind is path finding. It can't run greedy Dijkstra or A*, but it can diffuse energy out from the goal and then find the shortest path from any point by gradient descent. And in many cases such a brute-force optimization problem can likely be solved faster this way than with the work-efficient greedy algorithms.
    Surely not worthless. Spectral rendering for free? Better physics simulations. Maybe even proper audio synthesis becomes possible.
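
    A minimal sketch of that idea in plain NumPy (not Lightmatter's API; the grid, wall and iteration count are all made up): repeatedly average a "heat" field with the goal clamped high and obstacles clamped to zero, which is the dense, branch-free work an analogue matrix engine could conceivably accelerate, then greedily walk uphill on the result from any start cell.

    ```python
    import numpy as np

    # Toy grid world: 0 = free cell, 1 = obstacle. Layout is made up.
    grid = np.zeros((16, 16))
    grid[4:12, 8] = 1                    # a wall...
    grid[4, 8] = 0                       # ...with a gap
    goal = (14, 14)

    # "Diffuse energy from the goal": repeated neighbour averaging with the
    # goal clamped to 1 and obstacles clamped to 0. Dense and branch-free.
    field = np.zeros_like(grid)
    for _ in range(500):
        p = np.pad(field, 1)
        field = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:] + p[1:-1, :-2])
        field[grid == 1] = 0.0
        field[goal] = 1.0

    # "Gradient descent" on the field: from any start cell, step to the
    # neighbour with the highest value until the goal is reached.
    def walk(start, field, max_steps=200):
        path, pos = [start], start
        for _ in range(max_steps):
            if pos == goal:
                break
            y, x = pos
            nbrs = [(y + dy, x + dx)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= y + dy < field.shape[0] and 0 <= x + dx < field.shape[1]]
            pos = max(nbrs, key=lambda q: field[q])
            path.append(pos)
        return path

    print(walk((0, 0), field))
    ```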
     
    BRiT likes this.
  6. Lurkmass

    Regular Newcomer

    Joined:
    Mar 3, 2020
    Messages:
    484
    Likes Received:
    580
    It sounds like their hardware is capable of processing multiple streams at different wavelengths. I think the different "light" mentioned in the video mostly refers to different parts of the electromagnetic spectrum ...
     
  7. https://www.zdnet.com/article/xilin...c-speed-up-of-neural-nets-versus-nvidia-gpus/

    Claims were also big from Numenta, though its founder is a known and legit industry veteran. And Xilinx is no startup.

    https://www.eenewseurope.com/news/neureality-starts-showing-prototype-inference-platform

    Another claim from NeuReality
     
    BRiT likes this.
  8. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,538
    Likes Received:
    1,910
    Location:
    London
    Oh I'm definitely not knocking analogue computing. Our brains are literally a wonderful mixture of analogue and digital, synchronous and asynchronous :)

    Absolutely. Different colours being processed simultaneously :)

    I think there's mileage in FPGAs, yes. But I doubt it's in the "matrix multiplication", since dedicated silicon will always win there.

    Sparsity is a win whether it's a GPU or an FPGA.
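
    To put a rough number on that, a toy illustration (nothing vendor-specific; the matrix size and sparsity level are arbitrary): pruning weights to zero cuts the multiply-accumulate count in direct proportion, whatever executes the matrix; the hard part is hardware that actually skips the zeros rather than multiplying by them.

    ```python
    import numpy as np

    # Toy illustration: ~90% of the weights pruned to zero means ~10% of the
    # multiply-accumulates, if the hardware can skip zeros. Numbers arbitrary.
    rng = np.random.default_rng(0)
    w = rng.standard_normal((1024, 1024))
    w[rng.random(w.shape) < 0.9] = 0.0   # prune roughly 90% of the weights

    dense_macs = w.size
    sparse_macs = np.count_nonzero(w)
    print(f"dense MACs:  {dense_macs}")
    print(f"sparse MACs: {sparse_macs} ({sparse_macs / dense_macs:.1%} of dense)")
    ```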

    I'm a huge fan of Numenta and it gratifies me to see that more and more groups are doing serious work along the same lines as Numenta (often independently of Numenta). Numenta is learning from these groups too, it's certainly not a one-way street. Jeff's focus has always been the neuroscience. It's my belief that the broader AI/ML community repeatedly chases down dead ends because it ignores neuroscience.

    I'm no expert on the support for sparsity and "diagonal" matrices in NVidia's architecture: how capable it is and how much benefit researchers are actually obtaining from it. So I don't know what proportion of the comparisons Numenta makes are strictly valid.

    Numenta is starting to set benchmark performance levels on cutting-edge problems:



    Dendritic computation in brains is real and the AI/ML people are utterly ignorant of this.

    I think place cells and grid cells are the missing major piece for making progress and lie at the heart of Jeff's current research. These types of cells are how animals understand the 3D world, and how animals build virtual models of more complex worlds, such as social relationships.
     
    Deleted member 90741 and BRiT like this.
  9. Rootax

    Veteran Newcomer

    Joined:
    Jan 2, 2006
    Messages:
    2,304
    Likes Received:
    1,743
    Location:
    France
    But can it run (software/CPU only) Crysis?
     
  10. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    14,155
    Likes Received:
    17,583
    Location:
    The North
    Nice to see this finally happen. I recall in my university days, in my 4th year, my professor held the patent to 'photonic memory', or at least one of the patents. We were supposed to help research this domain. Friggen impossible. Nice to see a product nearing completion, likely before quantum computers.

    Bypassing branch statements etc., effectively removing a big part of memory management and just focusing on processing, is a critical factor for this chip.
     
    Man from Atlantis likes this.
  11. xpea

    Regular Newcomer

    Joined:
    Jun 4, 2013
    Messages:
    496
    Likes Received:
    607
    I'm always suspicious when a startup claims an xxx performance improvement for their future product against the previous gen of an established player. The latest example is Graphcore, the big loser of the last MLPerf:
    https://www.servethehome.com/graphcore-celebrates-a-stunning-loss-at-mlperf-training-v1-0/

    I have no idea about the companies mentioned in this topic, but it won't be so easy for the new players. Nvidia is not standing still. Even within the same generation, ML performance improves a lot through software optimization. For example, the A100 is roughly 2 times faster than it was last year, thanks to the new libraries.

    On the other hand, ML is so new that huge gains can be made with innovative algorithms. So everything is possible, and that's the beauty of this field!
     
    Jensen Krage and iroboto like this.