Overview

Compute Express Link (CXL) is a high-speed, low-latency interconnect standard built on the PCIe 5.0 physical layer, designed to enhance communication between CPUs and accelerators, memory devices, and I/O devices.

The CXL 2.0 IP Core enables efficient cache coherency and memory sharing, offering a unified interface for heterogeneous computing systems. It supports three sub-protocols (CXL.io, CXL.cache, and CXL.mem) that together provide coherent and non-coherent access, allowing devices to share memory resources seamlessly and efficiently.
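The three sub-protocols divide the work: CXL.io carries PCIe-style configuration and non-coherent I/O, CXL.cache lets a device coherently cache host memory, and CXL.mem gives the host load/store access to device-attached memory. As a purely illustrative sketch (the access-type names below are hypothetical labels, not CXL terminology), the split might be modeled like this:

```python
from enum import Enum

class CxlProtocol(Enum):
    """The three CXL sub-protocols multiplexed over one link."""
    IO = "CXL.io"        # PCIe 5.0-based: discovery, config, non-coherent DMA
    CACHE = "CXL.cache"  # device coherently caches host memory
    MEM = "CXL.mem"      # host load/store access to device-attached memory

def protocol_for(access: str) -> CxlProtocol:
    """Illustrative routing of a (hypothetical) access type to a sub-protocol."""
    routes = {
        "config_read": CxlProtocol.IO,
        "dma_write": CxlProtocol.IO,
        "device_cache_fill": CxlProtocol.CACHE,
        "host_load_device_mem": CxlProtocol.MEM,
    }
    return routes[access]
```

In a real link all three flow over the same PCIe 5.0 PHY; the split above only reflects which protocol class services which kind of transaction.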

Key Features

  • Compliant with CXL 2.0 Specification: Fully aligned with industry standards for seamless interoperability.
  • Supports All Three Protocol Types: Includes CXL.io (PCIe 5.0), CXL.cache, and CXL.mem for flexible, high-speed interconnect.
  • Switching Architecture Support: Enables multi-host, multi-device communication through CXL switches.
  • Memory Pooling & Sharing: Supports disaggregated memory access across multiple hosts.
  • Persistent Memory with RAS: Integrates persistent memory along with advanced reliability features.
  • Data Integrity & Error Handling: Features ECC, parity checks, and link integrity verification.
  • AXI-Based Interfaces: Simplifies integration with standard AXI user interfaces.
  • SR-IOV Support: Allows resource sharing across virtual machines.
  • Scalability for Hyperscale Data Centers: Easily supports complex topologies, including multiple hosts and memory devices, using a switch-based fabric architecture.
  • Future-Proof Architecture: Prepared for the composable and disaggregated data centers of tomorrow.
  • Low Latency and High Bandwidth: Designed to meet stringent performance demands of AI, HPC, and cloud workloads.
  • Interoperability: Seamless operation with PCIe 5.0 and backward compatibility with CXL 1.1 devices.
  • Flexible Integration: Delivered with configurable options and interfaces to speed up SoC integration.
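The data-integrity features listed above (ECC, parity) rest on standard error-correcting codes. As a hedged illustration of the underlying idea only, not the core's actual implementation, which this brief does not detail, a Hamming(7,4) code can correct any single-bit error in a 4-bit value:

```python
def hamming74_encode(nibble: int) -> int:
    """Encode 4 data bits into a 7-bit Hamming codeword (single-error-correcting)."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]          # covers codeword positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]          # covers codeword positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]          # covers codeword positions 4,5,6,7
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]  # positions 1..7
    return sum(b << i for i, b in enumerate(bits))

def hamming74_correct(code: int) -> int:
    """Correct up to one flipped bit and return the 4 recovered data bits."""
    bits = [(code >> i) & 1 for i in range(7)]
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)  # syndrome = 1-based error position
    if syndrome:
        bits[syndrome - 1] ^= 1            # flip the erroneous bit back
    return bits[2] | (bits[4] << 1) | (bits[5] << 2) | (bits[6] << 3)
```

Production CXL links use wider SECDED-style codes over flits plus CRC-based link integrity checks, but the correct-one-bit principle shown here is the same.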

Applications

  • Cloud Infrastructure and Datacenter SoCs: Enables flexible resource pooling and dynamic scaling for modern data centers.
  • AI/ML Accelerators and GPUs: Supports high-speed, coherent memory access for training and inference workloads.
  • High-Performance Computing (HPC) Platforms: Provides low-latency interconnects for parallel processing and simulation tasks.
  • Smart NICs and Memory Expanders: Facilitates efficient I/O virtualization and memory sharing across devices.
  • Composable Infrastructure: Powers modular system designs with disaggregated compute and memory resources.

Deliverables

  • FPGA Platforms: Intel® Stratix® 10, Agilex™, Xilinx® UltraScale+™, Versal®.
  • ASIC Implementations: Available as synthesizable RTL for a wide range of process nodes.
  • Protocol Validation: Verified with industry-leading verification environments and emulators.