Excelero unveils NVMesh on Azure for IO-intensive workloads like AI/ML/DL, HPC and analytics


Excelero has added public cloud storage support to its flagship NVMesh elastic NVMe software-defined storage solution.

Available first for the Microsoft Azure platform, with other major public clouds to follow, NVMesh addresses the gaps that leave thousands of organizations facing major performance challenges when they attempt to move demanding IO-intensive workloads to public clouds at a reasonable cost.

By leveraging Excelero’s field-proven scalable, elastic, low-latency software-defined storage on standard cloud compute elements, beta deployments have shown that NVMesh on Azure delivers up to 25x more IOPS and up to 10x more bandwidth to a single compute element, while cutting latency by 80%, all from a fully protected storage layer.

By running storage on standard instances with cost-effective NVMe drives, enterprises can get the most value out of their data while leveraging their existing cloud pricing and discounts.

For converged environments, where applications run on the same virtual machines that host the storage, total cost of ownership (TCO) improves further, since the storage is embedded in the compute at almost no additional cost.

“Many of our customers require low-latency and high-throughput storage for their IO-intensive workloads,” said Aman Verma, product manager, HPC at Microsoft Azure.

“Excelero’s NVMesh on Azure’s InfiniBand-enabled H- and N-series virtual machines provides an exciting new scalable, protected storage option for several high growth segments of the market, including HPC and AI workloads.”

With Excelero NVMesh, data scientists achieve efficient and cost-effective model training through high bandwidth, ultra-low latency, and rates of millions of file accesses per second.

Database and analytics workloads and high-performance computation can run on CPUs and GPUs without stalling for I/O, and at a reasonable cost. The same methods can be employed with the same software stack deployed on-premises and on public clouds.

With data protection becoming essential for IO-intensive applications, Excelero NVMesh on Azure protects data by mirroring it across local NVMe drives.

The solution also allows data to be spread across availability zones for an additional level of protection. Self-healing and advance-warning functionality help ensure data longevity. Because data is stored on nodes within the enterprise’s own account, data compliance and security concerns are minimized.

In container-native settings, Excelero’s Kubernetes CSI driver and Red Hat OpenShift integration provide another simple means of rolling out NVMesh on Azure, enabling hybrid cloud deployments, for instance for burst-oriented workloads.
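To make that container-native path concrete, below is a minimal sketch of what consuming a CSI driver like this typically looks like in Kubernetes: a StorageClass naming the driver, plus a volume claim that applications bind to. The provisioner string and the vpg parameter are illustrative assumptions, not values confirmed by Excelero’s documentation.

    # Hypothetical StorageClass for an NVMesh CSI driver.
    # The provisioner name and parameters are assumptions for illustration.
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: nvmesh-mirrored
    provisioner: nvmesh-csi.excelero.com   # assumed driver name
    parameters:
      vpg: DEFAULT_MIRRORED_VPG            # assumed: mirrored volume provisioning group
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: training-data
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: nvmesh-mirrored
      resources:
        requests:
          storage: 100Gi

Once manifests along these lines are applied with kubectl, pods can request NVMesh-backed volumes through the claim like any other Kubernetes storage.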

“Gaps in public cloud storage capabilities prevent many demanding applications from running on public clouds, regardless of the obvious cost and scalability advantages, and force enterprises to endure a latency penalty should they go there,” said Eric Burgener, research vice president in the Infrastructure Systems, Platforms and Technologies Group at IDC.

“These gaps also are preventing public cloud providers from fueling their own growth. Solutions that remove these barriers are emerging, and they are an exciting development to watch.”

“Too many of our customers are struggling with IO-intensive workloads that they would prefer to move to the public cloud, yet public cloud providers are hard-pressed to deliver the cost-performance their customers need with these storage workloads,” said Yaniv Romem, CEO of Excelero.

“Excelero’s new NVMesh on Azure bridges the gap between what the market offers and what enterprises require, helping them avoid costly overprovisioning of storage so they can embrace hybrid- and multi-cloud strategies while assuring performance, agility and cost control. Look for continued innovation from us in this space across the coming months.”

Excelero NVMesh on Azure is now publicly available.


Source: Help Net Security (https://ift.tt/3vyqNH3)
