A new Open Compute Project specification is designed to make accelerators and I/O devices more scalable, efficient and cost-effective in modern virtualized data centers.
Today, Intel contributed the Scalable I/O Virtualization (SIOV) specification to the Open Compute Project (OCP) with Microsoft, giving device and platform manufacturers access to an industry-standard specification for hyperscale virtualization of PCI Express and Compute Express Link devices in cloud servers. When adopted, SIOV architecture will enable data center operators to deliver more cost-effective access to high-performance accelerators and other key I/O devices for their customers, as well as relieve I/O device manufacturers of the cost and programming burdens imposed by previous standards.
The new SIOV specification is a modernized hardware and software architecture that enables efficient, mass-scale virtualization of I/O devices, and overcomes the scaling limitations of prior I/O virtualization technologies. Under the terms of the OCP contribution, any company can adopt SIOV technology and incorporate it into their products under an open, zero-cost license.
In cloud environments, I/O devices including network adapters, GPUs and storage controllers are shared among many virtualized workloads requiring their services. Hardware-assisted I/O virtualization technologies enable efficient routing of I/O traffic from the workloads through the virtualization software stack to the devices, keeping overhead low and performance close to “bare-metal” speeds.
I/O Virtualization Needs to Evolve from Enterprise Scale to Hyperscale
The first I/O virtualization specification, Single-Root I/O Virtualization (SR-IOV), was released more than a decade ago and conceived for the virtualized environments of that era, generally fewer than 20 virtualized workloads per server. SR-IOV loaded much of the virtualization and management logic onto the PCIe devices, which increased device complexity and reduced the I/O management flexibility of the virtualization stack. In the ensuing years, CPU core counts grew, virtualization stacks matured, and container and microservices technology exponentially increased workload density. As we transition from “enterprise scale” to “hyperscale” virtualization, it’s clear that I/O virtualization must evolve, as well.
SIOV is hardware-assisted I/O virtualization designed for the hyperscale era, with the potential to support thousands of virtualized workloads per server. SIOV moves the non-performance-critical virtualization and management logic off the PCIe device and into the virtualization stack. It also uses a new scalable identifier on the device, called the PCIe Process Address Space ID (PASID), to address the workloads’ memory. Virtualized I/O devices become much more configurable and scalable while delivering near-native performance to each VM, container or microservice they simultaneously support.
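To make the identifier idea concrete, here is a minimal, hypothetical Python sketch of PASID-tagged work submission. The `make_descriptor` helper and its field layout are illustrative assumptions, not part of the SIOV specification; the point it models is that each request carries a 20-bit PASID, so one shared device interface can serve many workloads, with the IOMMU resolving each workload’s own address space per transaction instead of dedicating a whole SR-IOV virtual function to each.

```python
# Hypothetical, highly simplified model of SIOV-style work submission.
# Real descriptor layouts are device-specific; the PASID field carries
# the key idea of the scalable identifier described above.

PASID_BITS = 20                       # PASID is a 20-bit ID in PCIe
PASID_MAX = (1 << PASID_BITS) - 1     # over one million address spaces

def make_descriptor(pasid: int, addr: int, length: int) -> dict:
    """Build a work descriptor tagged with the workload's PASID.

    'addr' is an address in the workload's own I/O virtual address
    space; the IOMMU uses the PASID to select the right translation.
    """
    if not 0 <= pasid <= PASID_MAX:
        raise ValueError("PASID out of 20-bit range")
    return {"pasid": pasid, "addr": addr, "len": length}

# One shared submission queue can carry work from many workloads,
# each distinguished only by its PASID tag:
queue = [make_descriptor(pasid=i, addr=0x1000, length=4096)
         for i in range(3)]
```

Contrast this with SR-IOV, where each workload would instead need its own virtual function on the device, bounding scalability by the device’s VF count rather than by the much larger PASID space.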
These improvements can reduce device cost, provide device access to large numbers of VMs and containers more efficiently, and give the virtualization stack more flexibility for provisioning and composability. SIOV gives strained data centers an efficient path to deliver high-performance I/O and acceleration for advanced AI, networking, analytics and other demanding virtual workloads shaping our digital world.
Standards and Open Ecosystems Fuel Growth, Innovation
As Intel CEO Pat Gelsinger recently wrote, open ecosystems built upon industry standards accelerate industries and give customers more choices. In this spirit, Intel and Microsoft developed, validated and donated the SIOV specification to the Open Compute Project, where we expect it will spark innovation in CPUs, I/O devices and cloud architectures that improve service performance and scale economics for everyone. We look forward to the OCP community’s adoption and continuous improvement.
“Microsoft has long collaborated with silicon partners on standards as system architecture and ecosystems evolve. The Scalable I/O Virtualization specification represents the latest of our hardware open standards contributions together with Intel, such as PCI Express, Compute Express Link and UEFI,” said Zaid Kahn, GM for Cloud and AI Advanced Architectures at Microsoft. “Through this collaboration with Intel and OCP, we hope to promote wide adoption of SIOV among silicon vendors, device vendors, and IP providers, and we welcome the opportunity to collaborate more broadly across the ecosystem to evolve this standard as cloud infrastructure requirements grow and change.”
SIOV technology is supported in the upcoming Intel® Xeon® Scalable processor, code-named Sapphire Rapids, as well as Intel® Ethernet 800-series network controllers and future PCIe and Compute Express Link (CXL) devices and accelerators. Linux kernel upstreaming is underway with anticipated integration later in 2022. Key players in the device, CPU and virtualization ecosystem have been briefed and are excited to integrate SIOV in their roadmaps.
With SIOV, the cloud, network and data center industries have a unified launch pad for hyperscale-era virtualization.