- Jul 2020
docs.openstack.org
In the case of PCI passthrough, the full physical device is assigned to only one guest and cannot be shared.
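A quick way to see what could be passed through on a given host is to inspect the IOMMU grouping, since a device (and everything else in its group) can only be assigned whole to a single guest. A minimal sketch, assuming a Linux host with the IOMMU enabled; the sysfs paths are standard, but the iommu_group links only appear when the IOMMU is actually active:

```python
#!/usr/bin/env python3
"""List PCI devices and their IOMMU groups on a Linux host.

A device is passed through whole, and everything sharing its IOMMU
group must go to the same guest, so this grouping is a useful sanity
check before configuring passthrough.
"""
from pathlib import Path


def iommu_groups():
    groups = {}
    for dev in Path("/sys/bus/pci/devices").iterdir():
        link = dev / "iommu_group"
        if not link.exists():
            continue  # IOMMU disabled or device not managed by it
        group = link.resolve().name  # symlink target is /sys/kernel/iommu_groups/<N>
        groups.setdefault(group, []).append(dev.name)
    return groups


if __name__ == "__main__":
    for group, devices in sorted(iommu_groups().items(), key=lambda kv: int(kv[0])):
        print(f"IOMMU group {group}: {', '.join(sorted(devices))}")
```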
-
- Jul 2017
www.nextplatform.com
evolution from PCI 1.0 through PCI-Express 5.0
While the evolution of PCIe speed is definitely of interest, especially as it keeps pace with network speeds, the total number of PCIe lanes is also a significant barrier to I/O for many systems, especially in HPDA (high-performance data analytics).
We can effectively double network throughput by dropping in another x16 NIC, but that stops being an option if there are not enough slots or, perhaps more importantly, if the available PCIe lanes are oversubscribed. This becomes even more of an issue, as the author points out, with the advent of NVMe.
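To make the oversubscription point concrete, a back-of-the-envelope lane budget; the device mix is illustrative, and the 40- and 128-lane totals are the per-socket Xeon and Naples figures mentioned in these comments:

```python
# Back-of-the-envelope PCIe lane budget for a single socket.
# The device list is illustrative; 40 lanes is the per-socket Xeon
# figure discussed below, 128 is AMD Naples.
devices = {
    "GPU (x16)": 16,
    "100G NIC #1 (x16)": 16,
    "100G NIC #2 (x16)": 16,   # the "drop in another x16 NIC" case
    "NVMe drives (4 x x4)": 16,
}

demand = sum(devices.values())
for lanes in (40, 128):
    status = "fits" if demand <= lanes else f"oversubscribed by {demand - lanes} lanes"
    print(f"{demand} lanes requested vs {lanes} available: {status}")
```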
Intel has a vested interest in keeping the number of PCIe lanes at 40 on Xeon and holding back implementation of PCIe 4.0. They provide proprietary high-speed I/O for their Xeon Phi coprocessor and Optane memory products, which keeps GPUs, FPGAs, and competing NV memory products from competing on equal footing.
AMD is somewhat breaking the stalemate with Zen Naples, which offers 128 PCIe 3.0 lanes. We will have to see whether OEMs build systems that expose all of that I/O.
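For rough context on how generation and lane count trade off, approximate usable per-lane throughput (one direction, after 8b/10b or 128b/130b line encoding); the 40- and 128-lane totals again reflect the Xeon and Naples figures above:

```python
# Approximate usable bandwidth per PCIe lane, per direction, in GB/s,
# after line encoding (8b/10b for Gen1/2, 128b/130b from Gen3 onward).
PER_LANE_GBPS = {
    "PCIe 1.0": 0.25,
    "PCIe 2.0": 0.5,
    "PCIe 3.0": 0.985,
    "PCIe 4.0": 1.969,
    "PCIe 5.0": 3.938,
}

for gen, per_lane in PER_LANE_GBPS.items():
    # Compare an x16 slot, a 40-lane Xeon-style budget, and Naples' 128 lanes.
    print(f"{gen}: x16 slot ~ {16 * per_lane:5.1f} GB/s, "
          f"40 lanes ~ {40 * per_lane:6.1f} GB/s, "
          f"128 lanes ~ {128 * per_lane:6.1f} GB/s")
```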
-