
Integrations with Kubernetes separate compute and storage


MapR Technologies has extended its MapR Data Platform with new Kubernetes-focused compute capabilities.

Deep integrations with Kubernetes core components for primary workloads on Spark and Drill make it easier to manage highly elastic workloads, while also enabling just-in-time deployments and the ability to scale compute and storage separately.

Organisations restructuring their applications or building next-generation real-time data lakes will benefit from these new capabilities in a Kubernetes model (with Spark and Drill) by easily leveraging the elasticity and agility of such clusters.
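Scaling compute independently of storage in Kubernetes generally comes down to adjusting the replica count on a compute-only workload while leaving storage resources untouched. A minimal sketch of that idea, using plain-dict manifests; the Deployment name, namespace, and image below are illustrative placeholders, not MapR's actual resources:

```python
# Sketch: scaling a compute-only Deployment independently of storage.
# All names and the image reference are illustrative, not MapR's.

def make_compute_deployment(replicas: int) -> dict:
    """Build a minimal Kubernetes Deployment manifest for Spark executors."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": "spark-executor", "namespace": "analytics"},
        "spec": {
            "replicas": replicas,  # the only knob touched when scaling compute
            "selector": {"matchLabels": {"app": "spark-executor"}},
            "template": {
                "metadata": {"labels": {"app": "spark-executor"}},
                "spec": {
                    "containers": [{
                        "name": "executor",
                        "image": "example.org/spark:latest",  # placeholder image
                    }]
                },
            },
        },
    }

def scale(manifest: dict, replicas: int) -> dict:
    """Return a copy with a new replica count; storage objects
    (PVs/PVCs) are never part of this manifest, so they are unaffected."""
    return {**manifest, "spec": {**manifest["spec"], "replicas": replicas}}

deployment = make_compute_deployment(replicas=3)
burst = scale(deployment, replicas=10)  # elastic scale-out, storage untouched
```

Because the storage layer lives behind separate PersistentVolume objects, a scale-out like this never touches where the data sits.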

Image: Supported deployment models

“Having run a recent survey on organisations’ use of containers to support AI and analytics initiatives, it is clear that a majority of them are exploring the use of containers and Kubernetes in production,” said Mike Leone, senior analyst, ESG.

“We are also seeing that compute needs are growing rapidly, due to the unpredictability of compute-centric applications and workloads.

“MapR is solving this need to independently scale compute, while also tightly integrating with Kubernetes in anticipation of organisations’ rapid container adoption.”

In early 2019, MapR enabled persistent storage for compute running in Kubernetes-managed containers through a CSI-compliant volume driver plugin.
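In Kubernetes, a CSI volume driver is consumed through an ordinary StorageClass and PersistentVolumeClaim rather than called directly. A minimal sketch of that pattern follows; the provisioner string, class name, and sizes are placeholders, not MapR's actual CSI driver identifiers:

```python
# Sketch: requesting a CSI-provisioned volume in Kubernetes.
# The provisioner string and storage-class name are placeholders,
# not the identifiers of MapR's actual CSI driver.

storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "example-csi"},
    "provisioner": "csi.example.org",  # placeholder CSI driver name
}

# A pod's persistent storage request; the claim references the class,
# and the CSI driver behind it provisions the volume on demand.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "spark-data", "namespace": "analytics"},
    "spec": {
        "storageClassName": "example-csi",
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
    },
}
```

The claim is the only storage-facing object a compute container needs, which is what keeps the compute manifests independent of where data is stored.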

With this announcement, MapR further expands its portfolio of features and allows the deployment of Spark and Drill as compute containers orchestrated by Kubernetes.

This deployment model allows end users, including data engineers, to run compute workloads in a Kubernetes cluster that is independent of where the data is stored or managed.

“MapR is paving the way for enterprise organisations to easily do two key things: Start separating compute and storage, and quickly embrace Kubernetes when running analytical AI/ML apps,” said Suresh Ollala, SVP engineering, MapR.

“Deep integration with Kubernetes core components, like operators and namespaces, allows us to define multiple tenants with resource isolation and limits, all running on the same MapR platform.

“This is a significant enabler for not only applications that need the flexibility and elasticity but also for apps that need to move back and forth from the cloud.”
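The tenant isolation described in the quote maps onto standard Kubernetes primitives: a Namespace per tenant plus a ResourceQuota capping that tenant's compute. A minimal sketch, with tenant names and limits chosen purely for illustration:

```python
# Sketch: per-tenant isolation via Namespace + ResourceQuota, the
# standard Kubernetes mechanism the quote alludes to. Tenant names
# and resource ceilings are illustrative.

def tenant_manifests(tenant: str, cpu_limit: str, mem_limit: str) -> list:
    """Return a Namespace and a ResourceQuota scoping that tenant's compute."""
    namespace = {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {"name": tenant},
    }
    quota = {
        "apiVersion": "v1",
        "kind": "ResourceQuota",
        "metadata": {"name": f"{tenant}-quota", "namespace": tenant},
        "spec": {"hard": {"limits.cpu": cpu_limit, "limits.memory": mem_limit}},
    }
    return [namespace, quota]

# Two tenants with different resource ceilings on the same cluster.
manifests = (
    tenant_manifests("team-a", "16", "64Gi")
    + tenant_manifests("team-b", "8", "32Gi")
)
```

Each tenant's workloads are confined to its namespace, and the quota enforces the resource limits, so tenants can share one platform without contending for each other's compute.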


