In this week’s episode, Martin and Chris discuss Computational Storage with Tong Zhang, Chief Scientist and co-founder at ScaleFlux. Computational Storage devices add value to traditional NAND by offloading data processing directly onto the storage device. ScaleFlux offers two families of CSDs, including the CSD 2000 series, which implements inline data compression to improve endurance and logical device capacity.
In this conversation, Tong covers the benefits of using CSDs as well as some of the challenges of implementation. It’s likely we will see CSDs being used for AI/Analytics pre-processing, especially in the public cloud.
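To make the capacity and endurance benefit concrete, here is a minimal back-of-the-envelope sketch (not a ScaleFlux tool; the function names and the 4 TB drive size are illustrative assumptions) showing how transparent inline compression stretches logical capacity and shrinks the volume of data actually programmed to NAND, using the 2:1 to 5:1 ratio range discussed in the episode.

```python
# Illustrative arithmetic only -- not ScaleFlux-specific behavior.

def effective_capacity_tb(physical_tb: float, compression_ratio: float) -> float:
    """Logical capacity exposed if data compresses at the given ratio."""
    return physical_tb * compression_ratio

def nand_writes_tb(host_writes_tb: float, compression_ratio: float) -> float:
    """NAND program volume for a given host write volume (ignoring other
    write-amplification effects), which is what drives endurance."""
    return host_writes_tb / compression_ratio

for ratio in (2.0, 5.0):  # the 2:1 to 5:1 range mentioned in the episode
    print(f"{ratio:.0f}:1 -> a 4 TB drive could expose "
          f"{effective_capacity_tb(4, ratio):.0f} TB logical; "
          f"100 TB of host writes costs "
          f"{nand_writes_tb(100, ratio):.0f} TB of NAND writes")
```

Real drives report achievable ratios per workload, but the direction is the same: the better the data compresses, the more logical capacity can be exposed and the fewer NAND program/erase cycles are consumed.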
Elapsed Time: 00:43:39
- 00:00:00 – Intros
- 00:04:00 – What is the ScaleFlux view of Computational Storage?
- 00:06:45 – What will the drivers of Computational Storage be?
- 00:08:40 – Compression can increase endurance and capacity
- 00:11:00 – The CSD2000 does inline compression/decompression in FPGA
- 00:13:05 – Why aren’t all vendors doing inline compression?
- 00:14:45 – Databases make a good use case for CSD
- 00:17:10 – Compression can be 2:1 or as high as 5:1, depending on data
- 00:20:15 – SSDs do get hot!
- 00:21:30 – FPGAs will be replaced by ASICs in future products
- 00:23:00 – AI/Analytics is a big target for Computational Storage
- 00:25:00 – Advanced functionality may require APIs
- 00:27:00 – NVMe will be used as a protocol to push code to CS drives
- 00:30:20 – Compute and storage are going to have to work closer together
- 00:35:00 – Computational Storage adoption will be evolutionary
- 00:38:00 – How will RAID/erasure coding be affected with CS?
- 00:40:20 – ScaleFlux will support PCIe-4/5 and possibly PLC NAND
- 00:43:00 – Wrap Up
Related Podcasts & Blogs
- #190 – NVIDIA BlueField SmartNICs and DPUs
- #180 – SmartNICs – Pliops Storage Processor
- #177 – SmartNICs and Project Monterey
- #96 – Discussing SmartNICs and Storage with Rob Davis from Mellanox
Copyright (c) 2016-2022 Unpacked Network. No reproduction or re-use without permission. Podcast episode #8vwo.