Corespan Blog

Explore insights, innovations, and perspectives from the Corespan team.

Disaggregated NVMe Scratch Pad: Breaking the GPU Memory Barrier

Corespan’s disaggregated NVMe scratch pad creates a shared, high-performance storage tier that extends GPU memory, enabling scalable AI workloads with better utilization and predictable performance.

Disaggregated GPU Memory Pools

Beyond eight GPUs, network hops stall synchronization and strand VRAM. Corespan disaggregates GPU memory into a single photonic PCIe pool, so hosts draw the capacity they need on demand: higher utilization and lower cost at scale.

AI’s Second Wave: From Training Hype to Inference Reality

AI’s second wave shifts from model training to inference efficiency—optimizing cost-per-token and energy use. Dynamic, composable GPU fabrics unlock stranded capacity and maximize utilization.

Dynamic AI Infrastructure for Energy

AI in the energy sector often fails in the field due to legacy infrastructure, siloed data, and I/O limits. Corespan pools GPUs and NVMe into a composable PCIe fabric for low-latency, reliable AI.

Drut Becomes Corespan Systems

Drut Technologies is becoming Corespan Systems—a name that reflects our focus on intelligent compute cores and the high-bandwidth spans that connect them into unified, high-performance systems.

Redefining ROI in AI Hardware: Not Every GPU Has to Be Expensive

AI ROI isn’t about buying the most expensive GPUs. It comes from utilization, right-sizing, and dynamic resource allocation that matches real workload demands.

Beyond Static Servers: Dynamic Infrastructure for AI Efficiency

Static servers can’t keep up with AI. Dynamic, composable infrastructure improves utilization, reduces cost, and adapts in real time to changing workloads.

Composable Infrastructure for AI & Modern Data Centers