PT KONTAK PERKASA FUTURES - That’s a significant rate of improvement compared with the relatively pokey speeds we’ve lived with for the past sixteen years. When I got started in computing, the now-venerable ISA bus offered 8.33MB/s of bandwidth. By the time I’d hit college, PCI and its dedicated graphics cousin, AGP, were the standards of choice, but PCI’s bandwidth constraints quickly became a serious bottleneck once 3D video cards came on the scene. PCI-Express, which debuted in 2003, initially offered tremendous bandwidth improvements, but for many years it wasn’t clear what could practically saturate the interface. Single graphics cards have never pushed PCIe particularly hard, and while dual-GPU configurations showed superior scaling in matched x8/x8 slots, that had more to do with the chipset latency incurred by hanging the second card of a lopsided x16/x4 configuration off the southbridge.
Two things have happened to change the market since then. First, solid-state drives have moved to PCI Express as a storage interface, which means improvements in storage performance are now chained, at least in part, to improvements in the underlying standard. Nobody who has chafed under the restrictions of a PCIe 3.0 x4 configuration will have reason to chafe much longer. If PCIe 6.0 hits shelves in 2022, an x1 connection will offer the same bandwidth as a top-end x8 PCI-Express SSD does today, and the handful of drives that even offer that kind of interface today are enterprise products.
Ramping PCIe 6.0 all the way to 256GB/s of bidirectional bandwidth would put an x16 connection on par with the memory bandwidth of a lower-midrange GPU today. It’s significantly more bandwidth than we’d expect even DDR5 to provide in a dual-channel configuration in 2022. I wouldn’t overstress the comparison, since latency is going to be vastly different between the two solutions, but this is part of why companies like Intel are pushing the idea of non-volatile storage taking over some of the roles that have principally been held by DRAM.
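The arithmetic behind these comparisons is easy to sketch. The figures below are approximate per-lane rates taken from the published PCIe specs; the PCIe 6.0 entry assumes the full 64GT/s target with negligible FLIT/FEC overhead, which is a simplification:

```python
# Approximate per-lane bandwidth for each PCIe generation.
# Each entry is (transfer rate in GT/s, encoding efficiency):
# 8b/10b for gens 1-2, 128b/130b for gens 3-5; the 6.0 entry
# assumes PAM4 signaling with negligible FLIT/FEC overhead.
GENERATIONS = {
    "1.0": (2.5, 8 / 10),
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),
    "4.0": (16.0, 128 / 130),
    "5.0": (32.0, 128 / 130),
    "6.0": (64.0, 1.0),
}

def bandwidth_gbps(gen: str, lanes: int) -> float:
    """Unidirectional bandwidth in GB/s for a generation and lane count."""
    rate, efficiency = GENERATIONS[gen]
    return rate * efficiency / 8 * lanes  # divide by 8 bits per byte

# A PCIe 6.0 x1 link roughly matches a PCIe 3.0 x8 link,
# the interface used by today's fastest enterprise SSDs:
print(round(bandwidth_gbps("6.0", 1), 2))   # ~8 GB/s
print(round(bandwidth_gbps("3.0", 8), 2))   # ~7.88 GB/s

# An x16 PCIe 6.0 slot: 128 GB/s each way, 256 GB/s bidirectional.
print(bandwidth_gbps("6.0", 16) * 2)
```

The same table shows where the “8x in four years” framing comes from: each generation doubles the per-lane rate, so three back-to-back generations compound to 8x.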
While latency-sensitive workloads will always respond well to high-speed caches and fast interfaces, there’s a proven window of opportunity for high-bandwidth, high-latency products as well; GPUs are often cited as an example of hardware that isn’t particularly latency-sensitive. PCI-Express 6.0 is also likely to be useful in self-driving systems, the industrial IoT, and any system that fuses input from multiple sensors and peripherals into a cohesive whole. Increasing per-pin bandwidth gives a manufacturer the flexibility to do the same work with less die area or wiring dedicated to the bus itself.
If the PCI-SIG pulls this off, it’ll deliver an 8x effective bandwidth increase in just four years. Considering it’s taken 15 years to deliver that kind of gain to date, the leap would be considerable. We wouldn’t expect consumer GPUs to be among the top beneficiaries, but storage arrays and non-volatile memory could be poised for major gains, along with AI accelerators and FPGAs in bandwidth-limited deployments. If PCIe 6.0 is actually ready by 2022, we may see fewer companies adopt 5.0 or 4.0, or, hell, they may just drop one new product a year to pick up the easy increase in performance. Either way, if the PCI-SIG hits this delivery cadence, storage controllers are going to be the limiting factor in storage performance once again.
Source: extremetech.com