Light-Speed AI: New Nanophotonic Chip Boosts Efficiency & Accuracy

by Anika Shah - Technology

University of Sydney Researchers Develop Ultra-Compact AI Chip Using Light

Researchers at the University of Sydney have created a nanophotonic AI chip that utilizes light instead of electricity to perform neural network calculations, potentially revolutionizing energy efficiency in artificial intelligence computing. The prototype, built at the Sydney Nano Hub, could significantly reduce the energy footprint of future computing systems as global demand for AI continues to grow.

How the Chip Works: Photonics vs. Traditional Electronics

Traditional computer chips rely on electricity, moving electrons through wires. This process inevitably generates heat, requiring energy-intensive cooling systems. The nanophotonic chip prototype bypasses this limitation by using light – photons – to process information. Light travels through materials with minimal resistance, drastically reducing heat generation.

As light passes through nanoscale structures within the chip, the interaction of the light with those structures itself performs the calculations; no electronic switching is involved. The nanostructures are arranged as a neural network, mimicking the human brain's pattern recognition and classification abilities. The chip operates on the picosecond timescale, or trillionths of a second, leveraging the speed of light for computation.
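Conceptually, light propagating through a fixed linear nanostructure is equivalent to multiplying the input light field by a complex-valued matrix, with photodetectors reading out intensities at the output. A minimal numerical sketch of that picture follows; the sizes, values, and random transmission matrix are illustrative only, not taken from the actual chip.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy picture: a fixed nanostructure acts as one linear optical layer.
# Propagation through it multiplies the input light field (complex
# amplitudes) by a complex transmission matrix; photodetectors then
# measure intensity |E|^2. All sizes and values are illustrative.
n_in, n_out = 16, 4

# Input light field encoding a 16-pixel image as complex amplitudes.
field_in = rng.standard_normal(n_in) + 1j * rng.standard_normal(n_in)

# The nanostructure's scattering, modelled as one complex matrix.
T = rng.standard_normal((n_out, n_in)) + 1j * rng.standard_normal((n_out, n_in))

field_out = T @ field_in             # propagation through the structure
intensity = np.abs(field_out) ** 2   # what the detectors actually see

predicted_class = int(np.argmax(intensity))  # brightest detector wins
print(predicted_class)
```

The "computation" here costs nothing beyond letting the light pass through, which is why heat generation is so low compared with clocked electronic logic.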

High Accuracy in Biomedical Image Classification

To validate the technology, the researchers trained the nanophotonic chip to classify over 10,000 biomedical images, including MRI scans of the breast, chest, and abdomen. The chip achieved a classification accuracy of approximately 90–99% in both simulations and experiments. (Source: University of Sydney)
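For reference, classification accuracy here means the fraction of images labelled correctly. A minimal sketch in Python, using invented labels and predictions rather than the study's data:

```python
import numpy as np

# Accuracy = fraction of images the classifier labels correctly.
# These labels and predictions are made up for illustration; they
# are not the study's data.
labels = np.array([0, 1, 2, 1, 0, 2, 1, 0, 2, 1])
preds  = np.array([0, 1, 2, 1, 0, 2, 1, 0, 2, 0])  # one mistake

accuracy = np.mean(preds == labels)
print(f"{accuracy:.0%}")  # → 90%
```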

Inverse-Designed Photonic Neural Networks

The chip’s architecture utilizes inverse-designed photonic neural networks (PNNs). This approach involves reconstructing spatial fields through optical coherence using a wave-based inverse-design method and three-dimensional finite-difference time-domain simulations. Each subwavelength voxel functions as a trainable degree of freedom, resulting in a computational density of approximately 400 million parameters per mm². (Source: Nature Communications)
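To give a feel for what "each voxel is a trainable degree of freedom" means, here is a deliberately simplified sketch: voxel transmissions are collected into a matrix and fitted by gradient descent so that each input pattern routes light to its own detector. This stands in for the wave-based inverse design; it is not an FDTD simulation, and none of the values come from the paper.

```python
import numpy as np

# Cartoon of inverse design: collect "voxel" transmissions into a
# matrix W and fit them by gradient descent so that pattern A routes
# light to detector 0 and pattern B to detector 1. A linear model,
# not an electromagnetic simulation; all values are illustrative.
rng = np.random.default_rng(1)

A = rng.random(8)                      # two illustrative input patterns
B = rng.random(8)
W = rng.standard_normal((2, 8)) * 0.1  # trainable "voxel" transmissions

def forward(x):
    return W @ x                       # simple linear scattering model

target_A = np.array([1.0, 0.0])        # one-hot detector targets
target_B = np.array([0.0, 1.0])

for _ in range(500):
    res_a = forward(A) - target_A      # residual for each pattern
    res_b = forward(B) - target_B
    grad = 2 * (np.outer(res_a, A) + np.outer(res_b, B))
    W -= 0.05 * grad                   # gradient-descent update

print(np.argmax(forward(A)), np.argmax(forward(B)))  # → 0 1
```

In the real device the optimization shapes physical nanostructures via electromagnetic simulation, but the logic is the same: treat every element of the structure as a parameter and train it toward the desired light routing.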

The researchers demonstrated two inverse-designed PNN accelerators, achieving on-chip MNIST and MedNIST classification accuracies of 89% and 90% respectively, within footprints of 20 × 20 µm² and 30 × 20 µm². (Source: Nature Communications)
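A quick sanity check on these numbers: at roughly 400 million parameters per mm², the two footprints correspond to on the order of 160,000 and 240,000 trainable parameters each. The conversion below is my own arithmetic from the figures quoted above, not a number stated in the paper.

```python
# Back-of-envelope check of the reported computational density.
# The density (~400 million parameters per mm^2) and the footprints
# come from the article; the conversion is the only thing computed.
DENSITY_PER_MM2 = 400e6      # trainable parameters per mm^2
UM2_PER_MM2 = 1_000_000      # 1 mm^2 = 10^6 um^2

def params_in_footprint(width_um, height_um):
    """Parameter budget implied by a footprint given in micrometres."""
    return DENSITY_PER_MM2 * (width_um * height_um) / UM2_PER_MM2

print(params_in_footprint(20, 20))  # MNIST accelerator   → 160000.0
print(params_in_footprint(30, 20))  # MedNIST accelerator → 240000.0
```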

The Future of Energy-Efficient AI

Professor Xiaoke Yi, from the School of Electrical and Computer Engineering and director of the Photonics Research Group, emphasized the importance of this research in addressing the growing energy demands of AI. “Artificial intelligence is increasingly constrained by energy consumption. This research performs neural computation using light, enabling faster, more energy-efficient and ultra-compact AI accelerators,” said Professor Yi. (Source: University of Sydney)

The team is now focused on scaling the technology towards larger photonic neural networks, paving the way for sustainable AI infrastructure that can meet future computing demands without substantial increases in power consumption. (Source: The Quantum Insider)

The nanostructure on the chip is tens of micrometres in size, comparable to the width of a human hair. (Source: Interesting Engineering)
