UNIVERSITY PARK, Pa. — A key problem facing artificial intelligence (AI) development is the vast amount of energy the technology requires, with some experts projecting that AI data centers will account for over 13% of global electricity usage by 2028. According to Xingjie Ni, associate professor of electrical engineering at the Penn State School of Electrical Engineering and Computer Science, the key to addressing this roadblock could lie in computers powered by light instead of circuitry.
Ni and his team recently developed a prototype device that can accelerate and dramatically reduce the energy cost of AI computation, which they detailed in a paper published today (Feb. 11) in Science Advances. Their system routes light through an “infinity mirror”-like loop of tiny optical elements, encoding data directly into the beams of light and capturing the resulting light patterns with a microscopic camera. AI models powered by this light-processing unit run faster and require far less energy than conventional electronic computing systems to complete tasks and perform calculations.
In the following Q&A, Ni discussed optical computing, how this new approach is more efficient than previous optical systems and the impacts this research could have on the future of AI and computing technology.
Q: What is optical computing? How is it different from traditional computing technology?
Ni: Traditional computers encode data into binary 1s and 0s and perform operations with electronic circuits, a very flexible and reliable approach, but one that consumes significant energy and generates a lot of heat. Optical computing is a way to process information using light instead of electricity — rather than relying on billions of electronic transistors to do calculations step by step, these systems feed light through carefully designed optical components like lenses or mirrors, encoding the calculation and its answer directly into the resulting patterns of light.
Optical computing offers key advantages for certain math-heavy tasks because photons, the fundamental particles of light, don’t interact with each other under normal conditions. This means many light signals can pass through the same system simultaneously, allowing optical computers to process large data sets incredibly quickly. These optical transformations happen at the speed of light, leading to very low latency, and they can be highly energy efficient because much of the computation can be performed with minimally powered or even passive optical components.
Q: How has optical computing been used in AI previously? How does your approach improve its implementation?
Ni: Since light can process many signals at once and travel extremely fast, these systems can, in principle, execute tasks like pattern recognition at high speed using little energy. This is why optical computing has been explored as an AI accelerator that performs the “heavy math” at the core of many AI models. In most prior demonstrations, however, light handles only the linear, or straightforward, part of computation, where doubling the input doubles the output, and multiple inputs combine predictably.
The decision-making that makes AI powerful is nonlinear in nature, meaning the output isn’t proportional to the input: a small change in the input can produce a much larger response. This behavior, which drives the highly complex functions AI models can execute, has previously been achieved electronically or by using specialized optical materials and high input power. However, that means these actions require extra conversions between optical and electronic signals — resulting in slower, more complex, power-hungry hardware.
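The linear/nonlinear distinction above can be made concrete with a toy numerical sketch (this is an illustration of the general concept, not the authors' system): a linear map preserves scaling and addition, while a nonlinear activation like the ReLU used in neural networks does not.

```python
import numpy as np

# A fixed linear map, standing in for a lens system or mixing matrix.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 3))
x = np.array([1.0, -2.0, 0.5])
y = np.array([-1.0, 3.0, 2.0])

# Linear: doubling the input doubles the output, and inputs combine predictably.
assert np.allclose(W @ (2 * x), 2 * (W @ x))
assert np.allclose(W @ (x + y), (W @ x) + (W @ y))

# Nonlinear: the ReLU breaks both properties.
relu = lambda v: np.maximum(v, 0.0)
# relu(-1 + 2) = 1, but relu(-1) + relu(2) = 0 + 2 = 2 — inputs no longer add.
assert relu(-1.0 + 2.0) == 1.0
assert relu(-1.0) + relu(2.0) == 2.0
```

It is exactly this second, non-additive kind of behavior that prior optical systems had to offload to electronics or to exotic high-power materials.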
Our approach targets this bottleneck directly. Instead of relying on high optical power and special materials to create the needed nonlinear behavior, we use a compact multi-pass optical loop, like an “infinity mirror,” in which the light pattern effectively “builds up” a nonlinear relationship between the input data and the output over repeated passes between the mirrors. The core of our system is built from widely available components — like what’s used in everyday LCD displays and LED lights — rather than exotic materials or high-power lasers. By arranging these familiar elements in a multi-pass loop, we can produce the nonlinear behavior AI needs, while remaining incredibly compact and efficient.
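The multi-pass idea can be sketched numerically (a minimal toy model, assuming each round trip re-imprints the input data onto the light field and then applies a fixed linear mixing step; the matrix `M` and the field model are illustrative stand-ins, not details from the paper): a single pass is linear in the input, but stacking passes makes the output grow as a power of the input, which is a nonlinear response.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 4
# Fixed "optical" mixing step, standing in for the loop's lenses and mirrors.
M = rng.standard_normal((n, n))

def multi_pass(x, passes):
    """Re-imprint the input data x onto the field on every trip around the loop."""
    field = np.ones(n)               # initial light field entering the loop
    for _ in range(passes):
        field = M @ (x * field)      # one pass: data modulation, then linear mixing
    return field

x = rng.standard_normal(n)
# One pass is linear: doubling the input doubles the output.
assert np.allclose(multi_pass(2 * x, 1), 2 * multi_pass(x, 1))
# Three passes respond cubically: doubling the input scales the output by 2**3 = 8.
assert np.allclose(multi_pass(2 * x, 3), 8 * multi_pass(x, 3))
```

Because each pass is itself a cheap linear operation, the nonlinearity comes "for free" from repetition rather than from high optical power or special materials — which is the efficiency argument made above.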