NetApp has developed a new backup service called NDAS (NetApp Data Availability Services). NDAS is based in the public cloud and provides the ability to run analytics against secondary data in AWS S3, without having to re-hydrate through a backup platform. Chris met with Charlotte Brooks (Technical Marketing Engineer) and Joel Kaufman (Director of Technical Marketing) to discuss how NDAS works and what customers are doing with their cloud-based data. NDAS provides some interesting features that make the product slightly different from existing backup software. The solution runs in AWS using the customer’s account. This …
#111 – The Cohesity Marketplace with Rawlinson Rivera
This week Chris is in Silicon Valley and catches up with Rawlinson Rivera, Field CTO at Cohesity. The company recently released a new feature called Marketplace that enables customers to run data-focused applications directly on the Cohesity platform. The idea of running applications on data protection hardware has some benefits and potential disadvantages. Naturally, the focus is to provide a single source of truth for secondary data, reducing the risk of many teams and departments storing their own copies of data. But is DataPlatform capable of delivering the performance requirements of AI and ML? Rawlinson …
#110 – Storage Vendor Consolidations & Acquisitions
We’ve started to see the consolidation of storage vendors as some startups and long-term players in the market get acquired. Is the reason for this buying spree one of positive growth, or a defensive position to maintain survival? Chris and Martin discuss the issues and the vendors doing the buying. Who’s been buying? Violin Systems acquired part of X-IO (specifically the ISE products) as that company changed focus to its edge device (Axellio). DDN acquired Tintri and Nexenta. StorCentric, founded from Drobo and Nexsan, has acquired Retrospect and Vexata. Are we seeing a move to …
#109 – An Overview of ObjectEngine with Brian Schwarz
In this episode, Chris talks to Brian Schwarz, VP of Product Management for FlashBlade and ObjectEngine at Pure Storage. ObjectEngine is a scale-out de-duplication engine that efficiently writes data to either FlashBlade or public cloud object stores. The solution developed from the acquisition of StorReduce in 2018. ObjectEngine was conceived when Pure Storage observed customers using FlashBlade for backup data. The FlashBlade platform was originally developed for high-performance file-based applications like analytics, so de-duplication was deliberately left out of the initial design. Combining ObjectEngine with FlashBlade enables space-saving ratios of around 8:1 or greater. You …
#108 – Druva Cloud-Native Data Protection with Curtis Preston (Sponsored)
In this week’s episode, Chris talks to W. Curtis Preston. Curtis is a long-time and well-known industry expert in the backup area and now Chief Technologist at Druva. Data protection in a multi-cloud world introduces new challenges compared to traditional on-premises backup. As a result, Druva has developed a cloud-native platform that protects on-premises, cloud, endpoint and SaaS applications. What does cloud-native actually mean? Chris and Curtis discuss the benefits of using native AWS public cloud services like S3, DynamoDB, RDS and EC2 instances. Compared to on-premises backup, where hardware is procured to meet high …
#107 – Should IBM Quit the Storage Hardware Business?
IDC recently released their latest quarterly storage sales figures. The data shows, yet again, that IBM sales continue to decline. In this week’s podcast, Chris and Martin discuss the state of IBM’s storage business. Is it time for IBM to quit? IBM has an embarrassment of riches in storage software and hardware (or a nice portfolio as Martin puts it). Many of these solutions have evolved from other technology, like SVC and XIV. With the acquisition of Red Hat, IBM customers will have even more storage choice. Does this mean more flexibility or confusion? Re-using …
#106 – Introduction to VAST Data (Part II) with Howard Marks (Sponsored)
In this second episode on VAST Data, Chris and Martin continue the discussion with Howard Marks. You can find the previous episode at #105 – Introduction to VAST Data (Part I). This time, the conversation continues where the discussion left off, with Howard finishing the explanation of wide striping. To explain exactly how data is accessed on the platform, Howard introduces the concept of v-trees. These are like b-trees but flatter and wider. The v-tree is used to hold both metadata and data. One interesting aspect of the discussion is in understanding exactly how Optane …
#105 – Introduction to VAST Data (Part I) with Howard Marks (Sponsored)
This week, Chris and Martin talk to Howard Marks, Chief Storyteller at VAST Data. You may know Howard as an independent analyst and author for a range of online publications. Howard recently joined VAST to help explain and promote understanding of their data platform architecture. The VAST Data platform uses three main technologies that have only recently emerged onto the market. QLC NAND flash provides long-term, cheap and fast permanent storage. 3D-XPoint (branded as Intel Optane) is used to store metadata and new data before it is committed to flash. NVMe over Fabrics provides the …
#104 – Creating a Data Management Strategy with Paul Stringfellow
This week, Chris talks to Paul Stringfellow, Technical Director at Gardner Systems, about the process of creating a data management strategy. As businesses adopt additional, sometimes disparate services from SaaS and IaaS vendors, they increasingly see their data dispersed across multiple platforms. With such a valuable asset at their fingertips, how do businesses go about building a strategy for storing, managing and securing their information? The conversation starts with a look at three layers – physical infrastructure, smart storage management and data management. Storage platforms are largely feature-complete these days, so features …
#103 – Data Management and DataOps with Hitachi Vantara (Sponsored)
This week Chris speaks to Jonathan Martin (CMO) and John Magee (VP, Portfolio Marketing) at Hitachi Vantara. This episode was recorded live onsite at the new Hitachi Vantara offices in Santa Clara. As data becomes ever more valuable to organisations, the process of building data pipelines will continue to be a time- and resource-intensive task. In order to keep up with demand, Hitachi Vantara believes that businesses will need to implement automated processes that deal with the analytics pipeline – a concept called DataOps. DataOps is a methodology rather than any specific product. It …