Data storage and data processing have always been separate functions, but what if they could be unified to achieve much better performance? That’s the promise of computational storage.
Although media have changed and capacity has grown, the core functionality of data storage has remained unchanged for decades. The magnetic disk drives and tapes of the 1980s often fulfil the same function as flash storage today, and in more or less the same architecture. However, computational storage is set to redefine how we approach storage and data processing. So, what are its potential benefits and challenges?
Data processing typically involves moving data in small batches across the input/output (I/O) bridge from storage device to processor, before results are written back to storage. However, the I/O channel is typically slower than the transfer speeds achievable closer to the storage. This limits the rate at which data can be handled and creates bottlenecks.
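The effect of that slower I/O channel can be seen with a back-of-envelope calculation. The figures below are illustrative assumptions, not benchmarks of any real device: they simply show how scan time scales with the throughput of whichever link the data must cross.

```python
# Back-of-envelope model of the storage-to-CPU bottleneck.
# All throughput figures are illustrative assumptions.

def transfer_time_s(data_gb: float, link_gb_per_s: float) -> float:
    """Time to move a dataset across a link at a given throughput."""
    return data_gb / link_gb_per_s

DATA_GB = 100.0        # dataset to be scanned
INTERNAL_GB_S = 8.0    # assumed throughput close to the flash itself
IO_LINK_GB_S = 2.0     # assumed host I/O channel throughput

t_host = transfer_time_s(DATA_GB, IO_LINK_GB_S)    # move everything to the CPU
t_local = transfer_time_s(DATA_GB, INTERNAL_GB_S)  # scan it next to the flash

print(f"Host-side scan: {t_host:.1f}s, in-storage scan: {t_local:.1f}s")
```

With these assumed numbers, scanning next to the flash is four times faster purely because the data never crosses the slower link.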
This lag in data processing can inhibit real-time operations. By the time data has been processed, the crucial moment may have passed. When this occurs in time-sensitive environments, money can be lost and operational difficulties may arise.
Computational storage incorporates processing capability into the storage system. It’s akin to a mini server built directly into a hard drive. This means data no longer needs to move to processors dedicated to compute, because processing power has been attached to the storage system. It’s only been since the advent of solid-state drives (SSDs) that this has been possible.
This allows data to be processed far more quickly than before. Since data is processed in situ, computational storage is ideal for workloads that handle massive amounts of data. The reduced load on central processors enables systems to operate more efficiently and cuts energy consumption for processors and their associated cooling systems.
An additional potential benefit of computational storage is the reduction of network traffic, because some data can be processed on the storage device itself rather than being sent to the processor.
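A mock sketch makes the traffic reduction concrete. The `ComputationalDrive` class and its `filter()` method below are invented purely for illustration; real devices expose vendor-specific or SNIA-defined interfaces instead. The point is the difference in how many records cross the I/O channel.

```python
# Hypothetical illustration of in-storage filtering.
# ComputationalDrive is an invented mock, not a real device API.

class ComputationalDrive:
    """Mock drive that can run a simple filter next to its own data."""

    def __init__(self, records):
        self.records = records

    def read_all(self):
        # Conventional path: every record crosses the I/O channel.
        return list(self.records)

    def filter(self, predicate):
        # Computational path: only matching records leave the device.
        return [r for r in self.records if predicate(r)]

drive = ComputationalDrive(range(1_000_000))

# Host-side filtering moves a million records; device-side filtering
# moves only the thousand that match.
host_side = [r for r in drive.read_all() if r % 1000 == 0]
device_side = drive.filter(lambda r: r % 1000 == 0)

assert host_side == device_side
```

Both paths produce identical results; what changes is where the data-reduction step runs, and therefore how much traffic the host network and I/O channel carry.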
Currently, there are two different types of computational storage: fixed computational storage services, where the device provides pre-defined functions such as compression or encryption, and programmable computational storage services, where the device can run programs defined by the host.
Despite the potential benefits of computational storage, there remain significant challenges to be addressed. The underlying technology of computational storage is in its infancy, and it’s currently only available from a limited number of manufacturers.
While SSDs are interchangeable and can easily interface with each other, this interoperability is lost with computational storage. Each supplier’s approach to computational storage is sufficiently different that interchangeability is not yet possible.
As recently as late 2022, the Storage Networking Industry Association (SNIA) released new hardware and software architectural standards, as well as a preliminary standard for the application programming interface needed to access computational storage devices.
The potential security risks of computational storage are also not yet fully understood. The implications of potential threats and the security requirements of having a processor built into the storage device are yet to be fully considered.
To take full advantage of computational storage, existing applications and services may need to be re-factored to integrate with the new systems. This could make it difficult to use computational storage with existing applications, and development teams will need a thorough understanding of potential pitfalls. Modifying code always carries the risk of unintended consequences that some organisations may be unwilling to chance.
The question remains whether applications will be adapted to use computational storage in the future. If they are, then computational storage could reduce processing time. However, whether applications will be written to take advantage of on-board processing is dependent on how widespread computational storage becomes.
Computational storage will not be a universal performance cure-all: a single computational device will only offer performance gains in specific areas and could be costly. However, the onboard processors of computational storage devices make them ideal for specific data-intensive processing tasks, such as real-time data analysis, machine learning and video compression for content distribution networks.
Given the limited processing power available to computational storage, tasks that are compute-intensive, such as modelling complex simulations, remain best performed by a dedicated processor.
Some of the companies that have already adopted computational storage include Tesla, Google, Facebook and Yahoo Mail. Some large-scale datacentres that offer processing resources, such as AWS and Alibaba, have also adopted computational storage.
Computational storage devices are already commercially available, and any system equipped with this new technology will potentially be less CPU-intensive than conventional architectures.
With data-intensive tasks performed by the computational storage processor, the system processor can focus on compute-intensive segments of the workload. Managing a system in such a way increases overall performance, and allows tasks to be conducted faster and more efficiently. This would also enable time-critical workloads to become more viable.
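This division of labour can be sketched as a two-stage pipeline. The function names and threshold below are illustrative assumptions: a data-intensive pre-filter stands in for work done on the drive, leaving only the heavier per-item analysis for the host CPU.

```python
# Sketch of splitting a workload between storage-side and host-side
# processing. All names and figures are illustrative, not a real API.

def on_drive_prefilter(readings, threshold):
    """Data-intensive stage: scan everything, keep only anomalies."""
    return [r for r in readings if r > threshold]

def on_host_analyse(anomalies):
    """Compute-intensive stage: heavier per-item work on the remainder."""
    return {a: a ** 0.5 for a in anomalies}

readings = [0.1, 0.2, 9.5, 0.3, 12.0, 0.2]  # mostly normal sensor data
anomalies = on_drive_prefilter(readings, threshold=5.0)
result = on_host_analyse(anomalies)

print(f"{len(readings)} readings scanned, {len(anomalies)} sent to host")
```

The host only ever touches the small filtered subset, which is the pattern that makes time-critical workloads more viable under this architecture.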