News

Computing at the speed of light with optics

Published on December 16, 2025
Category: Nanoscale Imaging and Metrology

What if computer chips could run AI on light instead of electricity? This would have tremendous advantages. First of all, the computation would be more efficient, since guiding light requires less energy than pushing electrons through wires. Secondly, the computation would be faster, since it would operate at the speed of light. Researchers from Centrum Wiskunde & Informatica (CWI), ARCNL and Photosynthetic embarked on a three-month pilot project to explore the challenges and possibilities.

Optical vs digital computing

Nowadays computers can be taught to recognize images of, for example, cats or dogs, or to pinpoint a tumor on a CT scan. This is done by training a convolutional neural network (CNN): a complex algorithm with many layers of filters that is “trained” on huge amounts of data. When it sees an image, it passes the image through these filters. Each successive layer recognizes more complex features (lines, edges, ears, whiskers, etc.) until the network can finally say, “That’s a cat.” The way it processes a digital image and produces the output is shown below. Such digital neural networks have proven to be very powerful and versatile, but they consume a lot of energy and are limited by the processing speed of the computer on which they run.

Schematic depiction of a convolutional neural network classifying an image of a cat.
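For readers who like to see code, a minimal sketch of such a network in PyTorch is shown below. The layer sizes and the two-class “cat vs. dog” output are purely illustrative; this is not the model used in the pilot project.

import torch
import torch.nn as nn

# Minimal illustrative CNN for a two-class image problem (cat vs. dog).
# The sizes are arbitrary; deeper real-world networks follow the same pattern.
class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # first filters: lines and edges
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1),  # deeper filters: ears, whiskers, ...
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 7 * 7, 2)       # final decision: cat or dog

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyCNN()
image = torch.rand(1, 1, 28, 28)   # a toy 28x28 grayscale image
scores = model(image)              # index 0 = cat, index 1 = dog; the higher score wins
print(scores.argmax(dim=1))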

Enter optical neural networks (ONNs). Instead of simulating a neural network in software, an ONN is built directly from optical components such as lenses, mirrors, and phase modulators. The shapes and positions of these optical elements are the network, and the computation happens as light flows through the setup. The concept is not new, but it has recently gained renewed attention. In principle, computations performed in this manner require less energy and are much faster – working at the speed of light rather than at the clock frequency of a digital computer.

Schematic depiction of an optical neural network classifying an image of a cat.

The idea is that the optical elements can be designed so that the network recognizes the image it sees. Suppose we shine a picture of a cat or a dog through an optical neural network. To an ONN, a picture is a grid of pixel intensities, a unique wavefront full of tiny variations (brightness, spatial structures, frequencies and edges). Because of how the optical elements are designed, the light ends up focusing on a particular spot. For example, if the image is a cat, the light focuses on a dot on the left; if it’s a dog, the light focuses on a dot on the right.
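One way to picture this mathematically – ignoring phase and treating the optics as a simple real-valued linear map – is that the fixed optical elements multiply the pixel values by one big matrix, and the class is read off from whichever output spot receives the most light. A toy sketch with random placeholder numbers:

import numpy as np

# Toy picture of the idea: a fixed linear transform maps the incoming wavefront
# (flattened pixel intensities) onto two output spots; the brightest spot is
# the answer. The matrix here is random, standing in for a carefully designed
# set of optical elements.
rng = np.random.default_rng(0)
pixels = rng.random(28 * 28)             # flattened input image
transform = rng.random((2, 28 * 28))     # placeholder for the designed optics

spot_intensity = transform @ pixels      # light arriving at the "cat" and "dog" spots
print("cat" if spot_intensity[0] > spot_intensity[1] else "dog")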

A challenge for physicists and mathematicians

The challenge in designing ONNs is that they face very different design restrictions from their digital counterparts. Unlike digital neural networks, optical networks can’t stack many layers of processing steps. “An optical ‘layer’ is a physical object,” explains ARCNL group leader Lyuba Amitonova, “and as light behaves mostly linearly, stacking more optical layers without strong nonlinear elements between them does not add more intelligence.” And because an ONN designed on a computer must eventually be built in the real world, even tiny imperfections can keep it from reaching its theoretical performance.
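The point about linearity can be illustrated in a few lines of NumPy: two stacked linear “layers” with nothing nonlinear in between collapse into a single linear layer, so the stack can compute nothing that one layer cannot.

import numpy as np

# Composing two linear layers is itself one linear layer:
# W2 @ (W1 @ x) == (W2 @ W1) @ x for every input x.
rng = np.random.default_rng(1)
x = rng.random(100)
W1 = rng.random((50, 100))
W2 = rng.random((10, 50))

stacked = W2 @ (W1 @ x)
collapsed = (W2 @ W1) @ x
print(np.allclose(stacked, collapsed))   # True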

An appealing alternative is to train the ONN physically, using real light rather than simulations. But this is difficult. Optical systems cannot easily implement the ‘backpropagation algorithm’, the method digital networks use to learn from their mistakes and a cornerstone of machine learning. And adding switching optical elements to create step-by-step processing introduces even more loss and complexity. Because of this, the answer had to be found in much simpler neural networks, with only a single layer.

A pilot project

This brings us to the joint project of CWI, ARCNL, and Photosynthetic. The pilot project brought together the mathematical and computational expertise of Tristan van Leeuwen and Vladyslav Andriiashen at CWI; the optical physics expertise of Lyuba Amitonova and Jakub Kraciuk at ARCNL; and the microfabrication capabilities of Photosynthetic’s Alexander Kostenko.

Part of the team posing in front of the optical setup. From left to right: Jakub Kraciuk, Lyuba Amitonova, Tristan van Leeuwen, Vladyslav Andriiashen.

As CWI postdoctoral researcher Vladyslav Andriiashen said, pointing at the screen: “Sometimes all you need are 4 dots.” The outcome of this three-month rollercoaster project was a physical proof-of-principle ONN. The team first trained a one-layer CNN digitally in a simulation of the optical setup, then built its optical counterpart using a spatial light modulator (SLM). After careful tuning, the setup was able to classify digits. The 4 dots on the screen showed that the optical “lens” could reliably distinguish the number 3 from other numbers: dots appeared at the top and bottom if the input was a ‘3’, and at the left and right if it was another number.
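As a rough illustration of the digital half of such a workflow – not the team’s actual simulation of the optical setup – one can train a single linear layer to separate the digit 3 from all other digits, for example on scikit-learn’s small digits dataset, and then treat its weights as the design target for the optics:

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Train a single linear layer to answer one question: "is this digit a 3?"
digits = load_digits()
X, y = digits.data, (digits.target == 3).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

# clf.coef_ holds the single layer of weights that would then be translated
# into a pattern on the spatial light modulator.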

Much more valuable than the 4 dots, however, was the glimpse the project gave of a future in which the boundaries between hardware and computation, and between physics and mathematics, are blurred… like 4 dots that paint one coherent picture. Imagine what large-scale ONNs could one day do: data centers running AI faster and on a fraction of the electricity, smart devices performing tasks without power-hungry chips, and autonomous vehicles and drones making instant decisions using light-speed computation. To realize this vision, though, more research is needed. And it will require a team of experts from all four fields: mathematics, physics, computer science, and manufacturing.