Corespan 2500 Series Datasheet

Corespan 2500 Series Overview

Disaggregate and dynamically compose GPU, storage, and accelerator resources with photonic connectivity and real-time orchestration. The 2500 Series delivers high-density PCIe Gen5 pooling, low-latency paths, and elastic allocation for AI/ML, HPC, private cloud, and GPU-as-a-Service environments.

Key Features

High-Density, PCIe Gen5 Resource Platform

• Up to 12 PCIe Gen5 x16 device slots per PRU chassis
• Support for heterogeneous devices (GPUs, FPGAs, NICs, storage)
• Four host FIC slots for flexible fabric bandwidth

Photonic Fabric Integration

• Ultra-low latency and high bandwidth across pooled resources
• Direct device-to-device PCIe paths (GPU-GPU, GPU-storage)
• Optical connectivity reduces hops and CPU bottlenecks

Dynamic Composition & Orchestration

• Real-time attach/detach of PCIe devices (see the sketch below)
• Policy-driven provisioning for multi-tenant and service offerings
• Centralized pool visibility and health monitoring
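
To make the attach/detach workflow concrete, the sketch below composes a free pooled GPU onto a host through a hypothetical REST orchestration endpoint. The base URL, paths, payload fields, and token handling are illustrative assumptions, not a documented 2500 Series API; treat it as the shape of the workflow rather than the product's actual interface.

```python
# Hypothetical composition workflow against a fabric orchestrator.
# Endpoint paths, payload fields, and auth are illustrative assumptions,
# not a documented Corespan 2500 Series API.
import os
import requests

ORCHESTRATOR = os.environ.get("ORCH_URL", "https://orchestrator.example.net")
HEADERS = {"Authorization": f"Bearer {os.environ.get('ORCH_TOKEN', '')}"}


def list_free_gpus(pool_id: str) -> list:
    """Return unallocated GPU devices in a resource pool (hypothetical call)."""
    r = requests.get(
        f"{ORCHESTRATOR}/v1/pools/{pool_id}/devices",
        params={"type": "gpu", "state": "free"},
        headers=HEADERS,
        timeout=10,
    )
    r.raise_for_status()
    return r.json()["devices"]


def attach_gpu(pool_id: str, device_id: str, host_id: str) -> dict:
    """Attach one pooled GPU to a host and return the composition record."""
    r = requests.post(
        f"{ORCHESTRATOR}/v1/pools/{pool_id}/attach",
        json={"device_id": device_id, "host_id": host_id},
        headers=HEADERS,
        timeout=30,
    )
    r.raise_for_status()
    return r.json()


def detach_gpu(pool_id: str, composition_id: str) -> None:
    """Release the device back to the pool when the workload finishes."""
    r = requests.delete(
        f"{ORCHESTRATOR}/v1/pools/{pool_id}/attach/{composition_id}",
        headers=HEADERS,
        timeout=30,
    )
    r.raise_for_status()


if __name__ == "__main__":
    free = list_free_gpus("pool-a")
    if free:
        print("Composed:", attach_gpu("pool-a", free[0]["id"], "host-17"))
```

In practice a policy engine would sit in front of calls like these, deciding which tenant may claim which devices; the policy-driven provisioning bullet above presumably refers to that layer.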

Multi-Vendor Support

• Works with leading GPU and accelerator brands
• Composable infrastructure without vendor lock-in

Use Cases

GPU Pooling for AI Training

Shared GPU capacity across servers.

Burstable Inference Clusters

Scale GPUs only when needed.

HPC Cluster Flexibility

Reconfigure nodes on demand.

Multi-Gen Accelerator Support

Use any GPU, any generation.

Technical Specifications

PCIe Support: PCIe Gen5 x16 device slots (12 per PRU)
Host Interconnect: FIC 2500 (4 slots per PRU)
Device Support: GPUs, FPGAs, SmartNICs, NVMe, accelerators
Hot Swap: Real-time attach/detach (host-side sketch below)
Compatibility: Multi-vendor PCIe devices
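
On a Linux host, a device attached over the fabric is expected to surface like a standard PCIe hotplug event, so generic sysfs tooling is enough to confirm it arrived. The sketch below is a minimal host-side check under that assumption; it uses only the stock /sys/bus/pci interface (root is needed for the rescan write) and is not a Corespan-specific utility.

```python
# Minimal host-side check after a fabric attach/detach event.
# Assumes the composed device shows up as an ordinary PCIe hotplug device;
# this is plain Linux sysfs usage, not a Corespan-specific tool.
from pathlib import Path

PCI_BUS = Path("/sys/bus/pci")
NVIDIA_VENDOR_ID = "0x10de"  # example vendor ID; swap in the device you expect


def rescan_pci_bus() -> None:
    """Ask the kernel to re-enumerate the PCIe bus (requires root)."""
    (PCI_BUS / "rescan").write_text("1")


def devices_for_vendor(vendor_id: str) -> list:
    """Return PCI addresses (e.g. 0000:41:00.0) whose vendor file matches."""
    found = []
    for dev in sorted((PCI_BUS / "devices").iterdir()):
        vendor_file = dev / "vendor"
        if vendor_file.is_file() and vendor_file.read_text().strip() == vendor_id:
            found.append(dev.name)
    return found


if __name__ == "__main__":
    rescan_pci_bus()
    for addr in devices_for_vendor(NVIDIA_VENDOR_ID):
        print("Device present at", addr)
```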

Benefits

The 2500 Series delivers a photonic-native, composable PCIe Gen5 infrastructure that fundamentally improves how data center resources are deployed and utilized. By disaggregating GPUs, storage, and accelerators from fixed servers and pooling them over low-latency photonic connectivity, the platform eliminates stranded capacity, reduces power and cooling overhead, and extends hardware lifecycles through modular upgrades. Direct device-to-device communication bypasses traditional server bottlenecks, enabling higher throughput and predictable performance for AI, HPC, and cloud workloads while supporting seamless scaling without disruptive forklift upgrades.

User Benefits

For operators and end users, the 2500 Series translates into faster deployment, greater flexibility, and lower total cost of ownership. Infrastructure teams can provision and reassign GPU and accelerator resources in real time, simplify maintenance through non-disruptive hot-swap capabilities, and confidently support multi-tenant environments with secure, isolated resource paths. End customers gain access to right-sized, high-performance resources on demand—accelerating AI and HPC workflows, improving service reliability, and enabling new consumption models such as GPU-as-a-Service without overprovisioning or long lead times.

Corespan 2500 Series Products

Pictured: Corespan 2500 Series PRU 2500 and FIC 2500.

For More Details

Access more documentation by downloading the Corespan 2500 Series datasheet.

Need more help? Get in touch with our sales or support team.