Lumai unveils optical computing server for real-time large language model inference
GlobeNewswire · Apr 28
The Lumai Iris Nova server, available now for evaluation, runs billion-parameter LLMs in real time using a hybrid optical-digital architecture developed in Oxford, UK.
The system delivers up to 90% lower energy consumption and greater efficiency than traditional GPU-based servers, addressing the power and scalability challenges data centers face as AI inference workloads surge.
Lumai’s technology, supported by ARIA and recognized with multiple innovation awards, marks a shift beyond silicon-based computing, enabling sustainable AI scaling for hyperscalers, enterprises, and research institutions.
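Lumai has not published architectural details, but optical AI accelerators generally split work the way the article's "hybrid optical-digital" phrasing suggests: matrix-vector products run in the analog optical domain, while nonlinearities and control logic stay digital. The toy sketch below illustrates that split under illustrative assumptions (the 8-bit quantization and Gaussian readout noise are generic models of analog precision limits, not Lumai's design):

```python
import numpy as np

def optical_matvec(W, x, bits=8, rng=None):
    """Toy model of an analog optical matrix-vector product.

    Weights and activations are quantized to an assumed DAC/ADC
    resolution, and a small Gaussian readout noise is added to
    mimic the limited precision of an analog optical core.
    """
    rng = rng or np.random.default_rng(0)
    scale = 2 ** (bits - 1) - 1
    # Quantize operands to the assumed optical-core resolution.
    Wq = np.round(W / np.abs(W).max() * scale) / scale * np.abs(W).max()
    xq = np.round(x / np.abs(x).max() * scale) / scale * np.abs(x).max()
    y = Wq @ xq
    # Analog readout noise (illustrative magnitude only).
    return y + rng.normal(0.0, 1e-3 * np.abs(y).max(), y.shape)

def hybrid_layer(W, x):
    """Hybrid step: 'optical' matmul followed by a digital ReLU."""
    return np.maximum(optical_matvec(W, x), 0.0)

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 8))
x = rng.standard_normal(8)
print(hybrid_layer(W, x))
```

In such designs, the efficiency gain comes from performing the multiply-accumulate operations (the bulk of LLM inference) with light rather than transistor switching; the digital side handles the steps that analog optics cannot, such as activation functions and weight loading.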