September 8, 2025
We couldn’t pass up the chance to share a truly spontaneous conversation with you.
While our CPO, Ogi Brkic, was attending this year's ISC conference in Hamburg, Cornelis Networks CEO Lisa Spelman pulled him aside for an impromptu interview. She wanted to hear our perspective on the growing convergence of HPC and AI workloads and, of course, to discuss how our next-generation interconnects are set to shape the future of computing.
They explored a key challenge facing today’s infrastructure leaders: How do we unlock the full potential of high-performance compute systems when the real bottleneck is no longer processing power, but data movement?
In this interview, they discussed shifting performance limits, the rise of inference, and how Black Semiconductor is advancing photonics using graphene to fundamentally rethink how data moves across compute architectures.
It used to be all about how much compute we could pack in, comparing teraflops and petaflops. But now compute is waiting. It is sitting idle because it is starved for data. The real bottleneck today is data movement. That is why optimized networking, like the kind you are building at Cornelis, is so important. It is not just about adding power, it is about unlocking it.
We are building integrated photonic devices using graphene, the strongest material on Earth. What makes our approach unique is that we are embedding these devices directly into standard CMOS processes. That allows us to bring photonics closer to compute, which accelerates data movement and reduces energy use. Our technology is designed to support the next generation of compute infrastructure and help companies like Cornelis scale faster and smarter.
In the near term, we are focused on proving that our technology is reliable and scalable. But our long-term vision is to build an optical system on a glass panel. We want to integrate switches, processors, and memory onto a one-by-one-meter glass panel. Imagine collapsing an entire server rack into a single optical surface. That is the kind of innovation we believe is needed to overcome the physical limitations of today's compute infrastructure.
AI and HPC are no longer separate. In fact, AI was born in HPC. Training is a high-performance computing task, and inference is quickly catching up in terms of complexity and scale. To support this, we need flexible, workload-aware networking. Features like congestion control and out-of-order packet handling are essential. The infrastructure must adapt to the workload, not the other way around.
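To make the out-of-order packet handling mentioned above concrete, here is a minimal, purely illustrative sketch (not Cornelis code) of a receive-side reorder buffer: packets may arrive in any order, but payloads are delivered to the application strictly in sequence. The class name, sequence-number scheme, and duplicate-drop policy are all assumptions for the example.

```python
class ReorderBuffer:
    """Illustrative receive-side reorder buffer for out-of-order packets."""

    def __init__(self):
        self.expected = 0   # next sequence number to deliver in order
        self.pending = {}   # buffered out-of-order packets, keyed by seq

    def receive(self, seq, payload):
        """Accept one packet; return payloads that are now deliverable in order."""
        if seq < self.expected or seq in self.pending:
            return []       # duplicate or already-delivered packet: drop it
        self.pending[seq] = payload
        delivered = []
        # Drain the buffer as long as the next expected packet is present.
        while self.expected in self.pending:
            delivered.append(self.pending.pop(self.expected))
            self.expected += 1
        return delivered


buf = ReorderBuffer()
print(buf.receive(1, "b"))  # [] -- packet 1 arrived early, buffered
print(buf.receive(0, "a"))  # ['a', 'b'] -- gap filled, both delivered in order
```

The point of such a buffer is that the network can forward packets over multiple paths without stalling: ordering is restored at the edge, so links stay busy instead of waiting for in-order delivery.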
Inference is how we consume AI. That is where the real economic value is delivered. Inference workloads are growing fast, connecting more compute devices and demanding lower latency. Optimizing for this means building for speed, efficiency, and scale, not just raw compute.
Ogi Brkic is the Chief Product Officer at Black Semiconductor, a company pioneering graphene-based photonic interconnects to enable the next generation of compute and connectivity.
Lisa Spelman is the CEO of Cornelis Networks, a leader in high-performance networking solutions for compute-intensive systems.
Want to stay updated on our progress? Follow us on LinkedIn to hear more about our technology, milestones, and partnerships.