
Before It Had a Name: Corespan Built Scale Across Early
Bill Koss - CEO and President of Corespan Systems
In August 2025, Jensen Huang took the stage at Hot Chips and introduced the world to "scale across": NVIDIA's vision for connecting distributed data centers into unified, giga-scale AI super-factories using Spectrum-XGS Ethernet. The term instantly entered the lexicon. Ciena, DriveNets, and others quickly followed with their own scale-across narratives. The industry collectively nodded as though a new frontier had been discovered.
In Nashua, New Hampshire, the team at Corespan Systems had been building exactly this architecture since 2021. No buzzword. No branding campaign. Just a photonic core, PCIe over optics, and a software-defined interconnect designed from day one to let compute resources span racks, rows, rooms, and buildings as a single, composable system. Corespan was building scale across before the term existed.
The Problem Nobody Was Solving
To understand why Corespan’s approach was ahead of its time, it helps to recall what AI infrastructure looked like in the early days of the company. In 2021, scaling followed a binary paradigm: scale up by adding resources to a single system, or scale out by distributing workloads across many machines over Ethernet or InfiniBand. Both models had fundamental constraints.
Scale-up hit physical limits. There are only so many GPUs you can fit in one node, and the PCIe bus inside a server has finite bandwidth and lane counts. Scale-out solved that by distributing workloads, but at a cost: every hop through a leaf-spine hierarchy added latency, power draw, and contention. The tiered electrical fabric of top-of-rack, spine, and core switches became both a performance bottleneck and a major line item on the data center P&L.
Few were asking a different question: could the low-latency, lossless behavior of intra-machine connectivity be extended across the data center without a conventional packet-switched network in between? That third pillar, connecting resources across physical boundaries while preserving direct-attach performance, did not yet have a name. But it had an architect.
A Photonic Core from the Beginning
Corespan’s first two years were devoted to a deceptively hard engineering goal: porting the PCIe protocol onto an FPGA to prototype an interface card capable of carrying PCIe traffic over single‑mode fiber through a photonic cross‑connect. The result, the iFIC 1000, debuted alongside a third‑party optical switch in a SuperMicro rack at Supercomputing 2022 in Dallas.
This was neither Ethernet nor InfiniBand. It was native PCIe — the bus of every GPU, FPGA, and NVMe device transported optically, switched photonically, and orchestrated by software. The packet‑switched middle layer was gone. No leaf‑spine, no protocol conversion, no CPU‑bound network stack. Just circuit‑switched optical paths between endpoints, reconfigurable in real time to match workload demands.
By late 2022, Corespan was demonstrating what it called software‑defined infrastructure enabled by PCIe/CXL over photonics: rack‑scale, elastic, disaggregated resource pooling. GPUs across racks, rows, or rooms could be dynamically allocated via software across an optical fabric that used a fraction of the power of traditional packet switching.
Software-Defined Interconnect: The Missing Layer
What set Corespan apart was not just optical switching but the software intelligence above it. Optical switches are inherently stateless; without orchestration, they are little more than automated patch panels.
Corespan built the missing brain. The Corespan Composer software stack provides a stateful control layer at the network edge. It knows every port and endpoint in the fabric, understands workload traffic matrices, and dynamically composes topologies tuned to each AI training job. As workloads shift, the network reconfigures in real time.
This is software‑defined interconnect in the purest sense. The physical layer is reconfigurable photonics; the logical layer is an orchestration engine that composes and decomposes compute clusters on demand. GPUs, CPUs, memory, and storage are no longer bound to their original chassis. They become pooled, distributable resources, allocated wherever workloads need them.
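To make those two layers concrete, here is a minimal Python sketch of the concept, not of Corespan's actual Composer API, which this article does not detail. Every name in it (OpticalCrossConnect, Composer, compose, decompose) is an illustrative assumption: a stateless cross-connect modeled as a programmable port map, and a stateful control layer that builds and tears down direct light paths per job.

```python
from dataclasses import dataclass, field

@dataclass
class OpticalCrossConnect:
    """Stateless photonic switch: nothing but a reconfigurable port map."""
    links: dict[str, str] = field(default_factory=dict)

    def connect(self, a: str, b: str) -> None:
        # A circuit-switched light path between two ports: no packet hops,
        # no protocol conversion in the middle.
        if a in self.links or b in self.links:
            raise ValueError("port already in use")
        self.links[a] = b
        self.links[b] = a

    def disconnect(self, a: str) -> None:
        b = self.links.pop(a, None)
        if b is not None:
            self.links.pop(b, None)

@dataclass
class Composer:
    """Stateful edge control: tracks endpoints, composes clusters per job."""
    fabric: OpticalCrossConnect
    free_host_ports: list[str]
    free_gpu_ports: list[str]
    jobs: dict[str, list[tuple[str, str]]] = field(default_factory=dict)

    def compose(self, job: str, n_gpus: int) -> list[tuple[str, str]]:
        """Allocate n GPUs to a job by setting up direct optical paths."""
        if n_gpus > min(len(self.free_host_ports), len(self.free_gpu_ports)):
            raise RuntimeError("not enough free endpoints")
        paths = []
        for _ in range(n_gpus):
            h, g = self.free_host_ports.pop(), self.free_gpu_ports.pop()
            self.fabric.connect(h, g)
            paths.append((h, g))
        self.jobs[job] = paths
        return paths

    def decompose(self, job: str) -> None:
        """Tear down a job's light paths; endpoints return to the pool."""
        for h, g in self.jobs.pop(job, []):
            self.fabric.disconnect(h)
            self.free_host_ports.append(h)
            self.free_gpu_ports.append(g)

# Usage: compose three GPUs onto a host, then release them.
xc = OpticalCrossConnect()
composer = Composer(xc, [f"host.p{i}" for i in range(4)],
                    [f"gpu.p{i}" for i in range(8)])
print(composer.compose("train-job", 3))  # three direct host<->GPU paths
composer.decompose("train-job")
```

The point of the sketch is the division of labor: the optical layer holds no state worth speaking of, while all intelligence about jobs, endpoints, and topology lives in software at the edge.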
Co-Packaged Optics: Commercializing the Future
Corespan's architecture required bandwidth density beyond the reach of pluggable optics. The answer was co-packaged optics (CPO), which integrates optical engines directly onto the fabric interface card of the host system.
The iFIC 2500, launched in early 2025, is widely recognized as one of the first commercial CPO deployments. Each card couples the analog PCIe domain to the digital optical fabric, using four 800G optical engines to deliver 3.2 Tbps of bandwidth per card via 32 lanes of 100G optics. Host systems can scale to 32 GPUs per node, setting a new benchmark. With CPO, the iFIC 2500 achieves superior performance-per-watt because integrated optical engines draw far less power than pluggable modules.
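As a quick sanity check, the figures quoted above are internally consistent; the snippet below uses only the numbers stated in this section, no additional specifications.

```python
# Illustrative arithmetic using only the figures quoted above.
engines_per_card = 4       # 800G optical engines
gbps_per_engine = 800
lanes_per_card = 32        # lanes of 100G optics
gbps_per_lane = 100

per_card_tbps = engines_per_card * gbps_per_engine / 1000
# Two views of the same fabric: 4 x 800G == 32 x 100G == 3.2 Tbps.
assert per_card_tbps == lanes_per_card * gbps_per_lane / 1000
print(f"{per_card_tbps} Tbps per card")  # 3.2 Tbps per card
```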
Crucially, Corespan did not wait for CPO to mature; it built around it from the start, developing firmware, thermal design, and orchestration software years before industry adoption caught up.
Scale Across Before the Name
When NVIDIA coined scale across at Hot Chips 2025, it referred to extending lossless, high‑speed connectivity beyond a single data center so distributed sites could function as one AI training facility. Spectrum‑XGS Ethernet tackles this at the network layer through congestion control and telemetry over long distances.
Corespan solved the same problem from the opposite direction. Instead of adapting a packet network to handle distance, it built on photonics, which are inherently distance-agnostic. Single-mode fiber does not care whether the path is two meters or 150 meters. By running PCIe over that fiber through a reconfigurable optical core, Corespan created an architecture that spans racks, buildings, and campuses without the latency and overhead of electrical switches. That is scale across, just without the label.
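A rough latency estimate shows why those distances are a rounding error. Light in single-mode fiber propagates at roughly two-thirds the vacuum speed of light, about 5 ns per meter; that rule of thumb is standard physics, not a Corespan specification.

```python
# One-way propagation delay in single-mode fiber, ~5 ns per meter
# (light travels at roughly 2e8 m/s in glass).
NS_PER_METER = 5.0

for meters in (2, 150):
    print(f"{meters:>4} m -> {meters * NS_PER_METER:,.0f} ns one-way")
# 2 m -> 10 ns; 150 m -> 750 ns: well under a microsecond, and small
# next to the accumulated delay of multi-hop electrical switching.
```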
The Systems Platform for Composable AI
Today, Corespan positions itself not just as a hardware company but as the systems platform for composable AI infrastructure. It integrates GPUs, xPUs, FPGAs, NVMe, servers, and optical switches under a single software‑defined control plane. The platform supports PCIe and RDMA over photonics today, with a roadmap to fully disaggregated CPU and memory pools.
The name says it all: “Core” for the engines of computation, “Span” for the fabric that unites them.
The broader industry now agrees that scale up, scale out, and scale across form the three pillars of AI infrastructure. Corespan had been building all three under a unified photonic architecture and software control plane long before the third pillar was named. In the race to define the interconnect of the AI era, the company that started with a photonic core and PCIe over optics may have been running the longest.