This week, Chris catches up with Clay Ryder from the DCS (Data Centre Systems) group at Western Digital. The discussion focuses on the challenges for today’s CIO, with an emphasis on storage and data. The division of responsibility between CTO and CIO isn’t always obvious. As we learn in this discussion, there’s a lot of overlap and a lot of interaction between the two. CIOs are focused on data and, more specifically, information, whereas the CTO has to deliver the right platform to meet application needs. The conversation covers challenges around technical debt, determining what …
#111 – The Cohesity Marketplace with Rawlinson Rivera
This week Chris is in Silicon Valley and catches up with Rawlinson Rivera, Field CTO at Cohesity. The company recently released a new feature called Marketplace that enables customers to run data-focused applications directly on the Cohesity platform. The idea of running applications on data protection hardware has some benefits and potential disadvantages. Naturally, the focus is to provide a single point of truth for secondary data, reducing the risk of many teams and departments storing their own copies of data. But is DataPlatform capable of meeting the performance requirements of AI and ML? Rawlinson …
#108 – Druva Cloud-Native Data Protection with Curtis Preston (Sponsored)
In this week’s episode, Chris talks to W. Curtis Preston. Curtis is a long-time and well-known industry expert in the backup area and now Chief Technologist at Druva. Data protection in a multi-cloud world introduces new challenges compared to traditional on-premises backup. As a result, Druva has developed a cloud-native platform that protects on-premises, cloud, endpoint and SaaS applications. What does cloud-native actually mean? Chris and Curtis discuss the benefits of using native AWS public cloud services like S3, DynamoDB, RDS and EC2 instances. Compared to on-premises backup, where hardware is procured to meet high …
#104 – Creating a Data Management Strategy with Paul Stringfellow
This week, Chris talks to Paul Stringfellow, Technical Director at Gardner Systems, about the process of creating a data management strategy. As businesses adopt additional, sometimes disparate, services from SaaS and IaaS vendors, they are increasingly seeing their data dispersed across multiple platforms. With such a valuable asset at their fingertips, how do businesses go about building a strategy for storing, managing and securing their information? The conversation starts with a look at three layers – physical infrastructure, smart storage management and data management. Storage platforms are pretty much fully functional these days, so features …
#103 – Data Management and DataOps with Hitachi Vantara (Sponsored)
This week Chris speaks to Jonathan Martin (CMO) and John Magee (VP, Portfolio Marketing) at Hitachi Vantara. This episode was recorded live onsite at the new Hitachi Vantara offices in Santa Clara. As data becomes ever more valuable to organisations, the process of building data pipelines will continue to be a time- and resource-intensive task. To keep up with demand, Hitachi Vantara believes that businesses will need to implement automated processes that deal with the analytics pipeline – a concept called DataOps. DataOps is a methodology rather than any specific product. It …
#101 – Datrium Automatrix with Brian Biles and Tim Page (Sponsored)
In this podcast episode, Chris talks to Brian Biles (Chief Product Officer and co-founder) and Tim Page (CEO) from Datrium about the announcement of Automatrix. The Datrium Automatrix platform implements five important components needed to deliver a consistent approach to application mobility. These are primary storage, backup, disaster recovery, encryption and data mobility. Automatrix brings together existing products that include DVX and Cloud DVX with the general availability of ControlShift (previously Project CloudShift). ControlShift provides full automation of the disaster recovery failover and failback process, currently between on-premises DVX instances and by the end of …
#100 – Optimising Unstructured Data with Krishna Subramanian
This week, Martin and Chris talk to Krishna Subramanian, President and COO at Komprise. We mentioned the Komprise technology back on episode #75 (It’s ILM All Over Again) as part of a discussion on managing the movement of files to and from archive storage. Krishna joins this discussion to fill in some background on the challenges of managing unstructured data, including how to implement solutions with as little lock-in as possible. Why do we need to manage data in the first place? With around 75% of all data not accessed in over 12 months, …
#93 – Myspace Loses 12 Years of Music
This week, Chris and Martin discuss the issues at Myspace, which recently disclosed that 12 years’ worth of user content had been lost during a (failed) server migration. The once-mighty Myspace was the largest social networking site from 2005 to 2009 (according to Wikipedia) and had estimated revenues of $109 million in 2011. So, how could a company with such a large valuation and solid revenue manage to lose data so easily? In 2005, News Corporation purchased Myspace for $580 million, later selling the company in 2011 for a rumoured $35 million. Would there have …
#88 – Nigel Tozer returns to talk about Ransomware
This week Chris and Martin talk to Nigel Tozer, Solutions Marketing Director for EMEA at Commvault. Nigel was a guest about 12 months ago, when he talked about GDPR. This time the discussion is about ransomware and what businesses can do about it. The challenges of protecting data from theft or extortion are greater than ever. So, can we identify a common attack model? Are specific operating systems more vulnerable? Most importantly, how do you develop a plan that protects your data and systems? The process is more than just patching primary …
#83 – Introduction to NetApp MAX Data
On this week’s podcast, recorded live at NetApp Insight 2018, Chris talks to Greg Knieriemen and Rob McDonald about the introduction of Memory Accelerated Data, commonly called MAX Data. The MAX Data solution is a software product that implements a local file system on a server using local persistent memory such as Intel Optane. Of course, local storage is what DAS (Direct Attached Storage) used to offer 20 years ago, but simple DAS is definitely not all that MAX Data provides. Protection against loss within a server is achieved with a feature called MAX Recovery that synchronously replicates …