- Feb 2018
-
www.thegreywaterguide.com (Texas)
-
Effective January 6, 2005
-
- Jul 2017
-
www.nextplatform.com
-
evolution from PCI 1.0 through PCI-Express 5.0
While the evolution of PCIe speed is definitely of interest, especially as it keeps pace with network speeds, the total number of PCIe lanes is also a significant barrier to I/O for many systems, especially in HPDA (high-performance data analytics).
We can effectively double network throughput by dropping in another x16 NIC. This becomes less feasible if there are not enough slots (or, perhaps more importantly, if the available PCIe lanes are oversubscribed). This becomes even more of an issue, as the author points out, with the advent of NVMe.
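Rough back-of-the-envelope sketch of the lane math (Python). The per-lane figures are approximate effective rates after 8b/10b or 128b/130b encoding overhead, and the 40-lane budget and device mix are illustrative assumptions, not numbers from the article.

    # Approximate effective throughput per PCIe lane, in GB/s,
    # after 8b/10b (gen 1/2) or 128b/130b (gen 3+) encoding overhead.
    PCIE_GBPS_PER_LANE = {
        "1.0": 0.25,
        "2.0": 0.50,
        "3.0": 0.985,
        "4.0": 1.969,
        "5.0": 3.938,
    }

    def slot_throughput(gen: str, lanes: int) -> float:
        """Approximate usable bandwidth of a slot, in GB/s."""
        return PCIE_GBPS_PER_LANE[gen] * lanes

    # Illustrative device mix against a 40-lane CPU (hypothetical mix, not from the article).
    devices = [
        ("100GbE NIC #1", 16),
        ("100GbE NIC #2", 16),
        ("NVMe SSD #1", 4),
        ("NVMe SSD #2", 4),
        ("NVMe SSD #3", 4),
    ]
    available_lanes = 40
    used_lanes = sum(lanes for _, lanes in devices)

    x16 = slot_throughput("3.0", 16)
    print(f"PCIe 3.0 x16 slot: ~{x16:.1f} GB/s (~{x16 * 8:.0f} Gb/s, enough for one 100 Gb/s NIC)")
    print(f"Lanes requested: {used_lanes} of {available_lanes} -> "
          f"{'oversubscribed' if used_lanes > available_lanes else 'fits'}")

Two x16 NICs plus a few x4 NVMe drives already ask for more lanes than a 40-lane part can supply, which is the oversubscription point above.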
Intel has a vested interest in keeping Xeon at 40 PCIe lanes and in holding back implementation of PCIe 4.0: it provides proprietary high-speed I/O to its Xeon Phi coprocessor and Optane memory products, which keeps GPUs, FPGAs, and competing NV memory products from competing on equal footing.
AMD is somewhat breaking the stalemate with the Zen-based Naples platform, which offers 128 PCIe 3.0 lanes. We will have to see whether OEMs build systems that expose all of that I/O.
-
It is a bit like playing Whack-A-Mole. But the vendors that comprise the PCI-SIG organization that creates and commercializes the PCI-Express bus are not just sitting there are Moore’s Law advances compute, storage, and system interconnect networking at an aggressive pace.
This sentence doesn't quite parse: "sitting there are Moore's Law advances" should presumably read "sitting there as Moore's Law advances".
-