Avicena’s live demonstration at ECOC 2025 marks a turning point in the race for low-power, high-efficiency optical interconnects. By showcasing a fully operational microLED-based optical link with a record-low transmitter energy of 200 femtojoules per bit at a raw bit error rate below 10⁻¹², with no forward error correction, the company has made a strong case that its LightBundle™ platform could redefine how future AI and HPC systems are architected.

Unlike traditional laser-based optical links, which require threshold currents and intricate compensation schemes, Avicena’s use of microLEDs allows reliable high-speed operation at extremely low drive currents: in this case just 0.25 mA at 4 Gb/s. Combined with a high-sensitivity receiver derived from a high-volume camera sensor process, this architecture pushes the boundaries of both energy efficiency and scalability.

What makes this demonstration particularly important is LightBundle’s direct parallel transmission model. Instead of forcing low-speed on-chip signals to be serialized into ultra-fast optical channels, the chiplet platform transmits raw parallel data directly through massive arrays of microLEDs. This simplification reduces complexity, lowers latency, and makes scaling far more practical. In co-packaged optics (CPO), on-board optics (OBO), pluggable modules, or wide memory interconnects, this flexibility gives system designers a universal building block.

The implications stretch far beyond GPU-to-GPU connectivity. HBM memory, with its hunger for wide parallel buses and minimal latency, is one of the most promising frontiers for optical interconnects. Avicena’s LightBundle is uniquely suited to bridging the gap between the relentless growth in AI workloads and the bottlenecks of conventional electrical and optical links. Bardia Pezeshki, CTO, underscored that the breakthrough lies in combining a minor modification of an existing high-volume process with the natural strengths of microLEDs.
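As a quick sanity check on the quoted figures, the arithmetic below works out what they imply; the assumption that the numbers apply per optical lane is mine, not stated in the announcement:

```python
# Back-of-envelope check of the quoted figures: 200 fJ/bit transmitter
# energy, 4 Gb/s, raw BER below 1e-12. The per-lane framing is an
# assumption for illustration, not a claim from Avicena.

energy_per_bit_j = 200e-15   # 200 fJ/bit (quoted)
bit_rate_bps = 4e9           # 4 Gb/s (quoted)

# Average transmitter power implied by the energy-per-bit figure.
tx_power_w = energy_per_bit_j * bit_rate_bps
print(f"Implied TX power: {tx_power_w * 1e3:.2f} mW")   # 0.80 mW

# At a raw BER of 1e-12, the expected raw error rate at this bit rate,
# which illustrates why FEC can plausibly be omitted.
ber = 1e-12
errors_per_second = bit_rate_bps * ber
seconds_per_error = 1 / errors_per_second
print(f"Roughly one raw bit error every {seconds_per_error:.0f} s")  # 250 s
```

A sub-milliwatt transmitter and an error every few minutes, before any coding, is the combination that makes the “no FEC needed” claim notable: it removes the latency and power that error-correction blocks would otherwise add.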
Marco Chisari, CEO, went further, pointing to the opportunity of breaking memory bandwidth barriers and opening the door to entirely new system architectures. These remarks underline a clear strategy: Avicena wants to move optical interconnects beyond niche deployments and make them a cornerstone of AI and HPC infrastructure. By lowering power consumption while enabling massive scaling across racks and thousands of GPUs, their platform offers hyperscalers a chance to push performance without hitting the limits of power and heat.

This demonstration is more than a technical feat: it is a signal to the AI data center ecosystem that a post-laser interconnect era is coming. If microLED-based LightBundle arrays can be deployed at scale, the entire balance of power in next-generation AI systems could shift, with memory and GPU clusters no longer bound by legacy electrical limits. For an industry that increasingly measures progress by joules per bit, Avicena’s 200 fJ/bit demonstration will resonate loudly with system architects and data center operators seeking both performance and sustainability.