Light-Speed Chip Boosts 6G Wireless with AI at the Edge

Researchers at MIT have developed a novel photonic processor capable of classifying wireless signals in nanoseconds, potentially streamlining operations for future 6G networks and edge devices. The optical chip performs deep-learning computations at the speed of light, running roughly 100 times faster than current digital alternatives while maintaining over 95% accuracy. Its compact size, low energy consumption, and scalability make it a promising platform for real-time data analysis, with applications extending beyond signal processing to areas such as autonomous vehicles and medical devices. The research, published in *Science Advances*, details a new architecture, the multiplicative analog frequency transform optical neural network (MAFT-ONN), designed specifically for efficient wireless signal processing.

The researchers are now pursuing multiplexing schemes to increase the MAFT-ONN's computational capacity, enabling more computations per pass and further scaling of the device. They also aim to extend the architecture to more complex deep-learning workloads, including transformer models and large language models (LLMs).

The MAFT-ONN processes wireless signals and extracts relevant information, such as the signal's modulation, allowing edge devices to automatically infer the signal type and decode the data it carries. The architecture achieves 85 percent accuracy in a single measurement, rapidly converging to over 99 percent with multiple measurements, while keeping latency exceptionally low: the entire process completes in approximately 120 nanoseconds. That speed is a significant advantage over state-of-the-art digital radio-frequency devices, which typically require microseconds for similar machine-learning inference.
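The article does not say how the multiple measurements are combined, but the reported jump from 85 percent single-shot accuracy to over 99 percent is consistent with simply repeating the inference and taking a majority vote. A minimal sketch, assuming independent measurements and an odd number of votes (both assumptions ours, not the paper's):

```python
from math import comb

def majority_vote_accuracy(p: float, n: int) -> float:
    """Probability that a majority of n independent measurements,
    each individually correct with probability p, votes correctly.
    Assumes an odd n so no ties occur."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Per-measurement accuracy of 85%, as reported for a single shot:
for n in (1, 3, 5, 9):
    print(n, round(majority_vote_accuracy(0.85, n), 4))
```

Under these assumptions, accuracy climbs past 99 percent within about nine repeated measurements; at 120 nanoseconds per inference, that still leaves the total well under the microseconds digital devices need for one.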

A key innovation enabling scalability is the encoding of all signal data and machine-learning operations in the frequency domain, before digitization. Combined with an optical neural network designed to perform both linear and nonlinear operations in-line, this approach yields substantial efficiency gains. Whereas other methods require a dedicated device for each computational unit (neuron), the MAFT-ONN needs only one device per layer, allowing 10,000 neurons to be integrated onto a single chip and the necessary multiplications to be performed in a single operation.
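A toy numerical sketch of the frequency-domain idea: a whole layer of neuron values can ride on a single waveform as the amplitudes of distinct tones, so one physical signal path carries many neurons at once. The tone frequencies and values here are illustrative choices of ours, not parameters from the paper:

```python
import numpy as np

fs = 10_000                  # sample rate in Hz, illustrative
t = np.arange(0, 1.0, 1 / fs)

# Four "neuron" activations encoded as amplitudes of four tones
# on one waveform -- a single signal path carries the whole layer.
activations = np.array([0.2, 0.5, 0.9, 0.4])
tone_freqs = np.array([100, 200, 300, 400])   # Hz, illustrative

waveform = sum(a * np.cos(2 * np.pi * f * t)
               for a, f in zip(activations, tone_freqs))

# Reading the spectrum recovers every activation at once.
spectrum = np.abs(np.fft.rfft(waveform)) * 2 / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
recovered = np.array([spectrum[np.argmin(np.abs(freqs - f))]
                      for f in tone_freqs])
print(np.round(recovered, 3))
```

The point of the sketch is only that frequency multiplexing packs a vector into one analog channel; the optical hardware in the paper operates on such signals directly, without the digitization step shown here.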

This efficiency is further boosted by photoelectric multiplication, a technique that dramatically improves performance and allows optical neural networks to scale without substantial overhead. The researchers emphasize that the speed advantage (nanoseconds or even picoseconds for optics, versus microseconds for digital devices) is maintained even when accuracy is improved through extended measurement times, because the MAFT-ONN's inference is fast enough to accommodate the trade-off.
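The physical intuition behind photoelectric multiplication is that a photodetector responds to optical intensity, the square of the incident field, so superimposing two signals on one detector produces their product as a mixing term. A toy numpy sketch of that square-law effect, with frequencies chosen by us for illustration:

```python
import numpy as np

fs = 10_000
t = np.arange(0, 1.0, 1 / fs)
f1, f2 = 700.0, 500.0        # illustrative "signal" and "weight" tones

field = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
current = field ** 2          # square-law detection: intensity, not field

# Squaring the sum creates the cross term 2*cos(f1 t)*cos(f2 t),
# which lands at f1 - f2 and f1 + f2: multiplication done by physics.
spectrum = np.abs(np.fft.rfft(current)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
for f in (f1 - f2, f1 + f2):                  # 200 Hz and 1200 Hz
    peak = spectrum[np.argmin(np.abs(freqs - f))]
    print(f"{f:.0f} Hz: {peak:.2f}")
```

In the digital simulation the product terms show up as clear spectral peaks at the sum and difference frequencies; in the optical device this mixing happens at detection speed, which is where the nanosecond-scale advantage comes from.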

Funding and Collaborators

This research benefited from financial support from multiple sources, including the U.S. Army Research Laboratory and the U.S. Air Force, demonstrating a commitment to advancing technologies with potential national security applications. MIT Lincoln Laboratory and Nippon Telegraph and Telephone also contributed funding, highlighting collaborative efforts between academic institutions and industry partners. Further support was provided by the National Science Foundation, underscoring the fundamental scientific merit of the project and its alignment with broader research priorities.

The project team comprised researchers from diverse affiliations. Ronald Davis III, the lead author, completed his PhD at MIT in 2024. Zaijun Chen, formerly a postdoctoral researcher at MIT, now holds a position as an assistant professor at the University of Southern California, signifying the dissemination of expertise beyond the originating institution. Ryan Hamerly contributed as a visiting scientist at MIT’s Research Laboratory of Electronics (RLE) and concurrently as a senior scientist at NTT Research, illustrating a strong industry-academia link.

Dirk Englund, a professor in the Department of Electrical Engineering and Computer Science at MIT, served as a principal investigator on the project. His affiliation with MIT Lincoln Laboratory further emphasizes the interdisciplinary nature of the research. The collaborative effort between these researchers, spanning multiple institutions and disciplines, was crucial to the project’s success and the development of the multiplicative analog frequency transform optical neural network (MAFT-ONN).


Quantum News

As the Official Quantum Dog (or hound), my role is to dig out the latest nuggets of quantum goodness. There is so much happening right now in technology, whether AI or the march of robots, but quantum occupies a special space. Quite literally a special space: a Hilbert space, in fact, haha! Here I try to provide some of the news that might be considered breaking in the quantum computing space.

Latest Posts by Quantum News:

Toyota & ORCA Achieve 80% Compute Time Reduction Using Quantum Reservoir Computing

January 14, 2026
GlobalFoundries Acquires Synopsys’ Processor IP to Accelerate Physical AI

January 14, 2026
Fujitsu & Toyota Systems Accelerate Automotive Design 20x with Quantum-Inspired AI

January 14, 2026