In this podcast episode, Chris talks to Brian Biles (Chief Product Officer and co-founder) and Tim Page (CEO) from Datrium about the announcement of Automatrix. The Datrium Automatrix platform implements five important components needed to deliver a consistent approach to application mobility. These are primary storage, backup, disaster recovery, encryption and data mobility.
Automatrix brings together existing products, including DVX and Cloud DVX, with the general availability of ControlShift (previously Project CloudShift). ControlShift provides full automation of the disaster recovery failover and failback process, currently between on-premises DVX instances and, by the end of 2019, to and from VMware on AWS.
In this discussion, Brian explains the importance of combining the five components into a holistic solution. Without a consistent data plane across clouds, application mobility becomes complex and expensive. Automatrix provides the data plane, onto which virtualisation can be layered, either in the form of VMware or through containers. Today, container support means Docker, but this will be extended to Kubernetes in the future.
Find out more about Datrium and Automatrix at https://www.datrium.com.
Elapsed Time: 00:35:50
- 00:00:00 – Intros
- 00:01:00 – Background on Datrium – DVX
- 00:04:10 – What is Cloud DVX?
- 00:05:10 – What is ControlShift?
- 00:06:55 – How much of a problem was automated DR for customers?
- 00:08:30 – DR is complex, with runbooks and compliance requirements
- 00:11:30 – How do you know where you backed up an application 6 months ago?
- 00:13:30 – Reducing DR costs represents a big saving for enterprises
- 00:15:30 – Surely putting primary and secondary data on the same platform is a problem?
- 00:18:00 – DR & backup/restore are pretty similar – differing only in scale
- 00:21:50 – What are the new announcements from Datrium?
- 00:23:00 – Five components: primary storage, backup, encryption, mobility, workflow orchestration
- 00:25:55 – Insurance shouldn’t be more expensive than buying the product itself
- 00:26:50 – ControlShift is now GA for on-prem to on-prem workloads
- 00:27:40 – ControlShift offers much more than DR – optimisation, consolidation
- 00:31:00 – Virtualisation layers are standardising – VMware & Kubernetes
- 00:31:54 – What’s the future? VMware on AWS support, primary storage on AWS, Azure, Kubernetes
Transcript
Chris Evans: Hi. This is Chris Evans, recording another Storage Unpacked Podcast. Today, I’m in the offices of Datrium with Tim and Brian. Hi, guys.
Brian Biles: Hey, Chris.
Tim Page: Hey, Chris.
Chris Evans: How are you doing?
Tim Page: Great, thanks. Thanks for coming in.
Chris Evans: Brian, Tim, who would like to start and introduce themselves? Then we’ll get straight into the product discussion.
Tim Page: My name’s Tim Page, the CEO of Datrium. I’ve been on board for about eight months now. We’ve gotten a lot of good things done.
Chris Evans: Great.
Brian Biles: I’m Brian Biles here. I’m the Chief Product Officer here and a founder. Great to meet you.
Chris Evans: Okay. Right, so we’re here because you have a new release of product out this week. In fact, I think based on the time we’re recording, it was actually yesterday.
Brian Biles: It’s going to be actually shipping this morning.
Chris Evans: It’s going to be shipping this morning.
Brian Biles: Yes.
Chris Evans: Even more fortunate that we’re here at just the right time today. Why don’t we start with a bit of background, and then we can lead that into the discussion about the product itself.
Chris Evans: Brian, perhaps you can start and just give people the background as to what the company has developed and what products you’ve released so far, and then we can show how that evolves into what the announcement has been this week.
Brian Biles: Sure. Datrium, if you’ve been following us, is a company that’s been doing a combination of things in the converged infrastructure space, with a hybrid Cloud focus over the last year and a half. I was a founder of a company called Data Domain, in the backup space, which was acquired by EMC a while ago. To create Datrium, our CTO team met a couple of guys who were in the top 12 of the early VMware hypervisor development team. Our vision at the time was to do something like the Cloud storage we were seeing in the market. At Amazon, there was an EC2 side that did performance, and there was an S3 side that did data protection and capacity. We felt that was a great architecture for a future system.
Brian Biles: Our first product, the DVX, was built like that. There’s a performance layer that runs on hosts and there’s a capacity scale-out layer that’s like an object store, and together it provides this combination of super high performance based on local flash in every host with a lot of isolation and scale-out capacity. That model is especially defined for workload-centric apps in the Cloud style. For VMs or containers, at that granularity you can snapshot and replicate policies based on workloads. That part of the system got developed about two years ago.
Brian Biles: About a year and a half ago, we did our first port of the capacity layer to Amazon, and we’ve been using that as a backup vault. Two years ago, when we did the snapshotting, replication, blanket encryption and so on, we also added policy-based mechanisms for building catalogs and backup information. All of this became a sort of hybrid Cloud backup-as-a-service approach that starts with incredible performance on-prem, but includes a Cloud vault.
Brian Biles: Now, we’re extending that further with Cloud DR, plus a bunch of announcements for what’s coming soon to complete our port to public Clouds and then extend it from Amazon to other public Clouds.
Chris Evans: Okay. Let’s just pause for a second there and start from the very lowest level. One thing people would compare your technology to is HCI, as an example, because there isn’t a SAN-type technology sitting at the bottom of this. This is a distributed storage layer across a number of compute nodes, but with centralized storage.
Brian Biles: Yeah, that’s right.
Chris Evans: It’s a slightly different architecture, say, to SAN and to HCI.
Brian Biles: Right.
Chris Evans: It’s, obviously, done for very good reasons in the sense that you put faster storage into the hosts, for acceleration of the IO for the host, but you use a shared storage component in order to give you that resiliency.
Brian Biles: Yes. That’s where the HA is in the capacity layer.
Chris Evans: That’s all software though.
Brian Biles: That’s all-
Chris Evans: It’s not dependent on the hardware, but initially, you delivered it as a hardware solution.
Brian Biles: Yeah, we prepackage hardware for the HA layer, because HA has to be taken very seriously. Our software is portable: on hosts, it runs on any leading host vendor, and it has far fewer constraints on configuration than HCI does. You don’t need a certain number of drives. It’s much more flexible, so it can run on blade servers, for example. It just needs some flash for speed, and that’s app dependent. Everything else is stateless, so if all the hosts go down, you don’t care as far as the data’s concerned.
Chris Evans: Okay. I interrupted you, I apologize. DVX and then Cloud DVX, which was the ability to back data up into the public Cloud.
Brian Biles: Yeah. Cloud DVX is that capacity layer ported to run on Amazon over S3, so S3 provides the backing store. All of the replication efficiency we have, with dedup over the wire, blanket encryption and dedup everywhere, allows you to replicate from multiple sites centrally, dedup into one pool, and that provides a backup vault you can then restore from, for individual guest files or for VMs, back on-prem.
Chris Evans: Okay. When that was done, that restore capability was to where at that point?
Brian Biles: To the on-prem converged infrastructure.
Chris Evans: Right, so that was into the Cloud as a backup target, and then back out again.
Brian Biles: That’s right.
Chris Evans: Right, okay. I think that’s all in place. That all was …
Brian Biles: Yeah. That’s for a year and a half.
Chris Evans: That’s been for a year and a half. Okay. Then CloudShift.
Brian Biles: Then CloudShift. We have a bunch of things coming out in this announcement. The first was actually just about Cloud DVX. Cloud DVX moved from 30 terabytes of usable capacity for dedup-compressed snapshots of VMs to more than a petabyte per customer, almost a 40x expansion. That’s massive. Part of the reason we’re doing it is that we have much bigger deployments going on; as Tim can tell you, we’re selling much more into larger enterprise deployments than we used to.
Chris Evans: Right.
Brian Biles: Part of it is because it’s going to be the foundation for a lot of work we do in the future.
Chris Evans: Okay.
Brian Biles: ControlShift is the first example of that. It’s kind of a capstone to our overall approach. It’s about workload automation or orchestration for migrating data from one place to another at a workload granularity, and then restarting it in the right order with the right IP mappings, and so on. You can compare it to products that do DR orchestration. There are some in the market that are pretty well known. SRM, for example, from VMware. The difference is really two major things, technically, and then it has a bunch of benefits that follow.
Brian Biles: The first is it’s SaaS-based. One of the failure modes of many kinds of DR orchestration is that it has to correlate the revision of the software with the revisions of all the things it’s touching to be able to restart workloads. It has to touch storage, it has to touch … Some of these run on a Windows system. You have to get the Windows version right and the software right.
Chris Evans: Absolutely, yeah.
Brian Biles: In the end, it’s running in a data center, so if the network for the data center goes down, it can’t send a signal to the vendor saying, “There’s a problem.” If you run it as SaaS, all those problems go away. We maintain it and it runs externally, so you can monitor from the outside.
Chris Evans: As you talked to customers before this release came out, how much of a pain point was the need to be able to automate the whole DR solution? Because it’s a complicated task. It’s not an easy task at all to achieve.
Tim Page: Yeah, interesting. If you look at how Brian and the team started the company, it was based on the premise of automating really hard things. We’ve talked about other instances of how you actually look at data mobility, so data can take advantage of different services being offered inside private and public Clouds. That was the original thesis.
Tim Page: The fifth leg of what we were putting in the platform was DR, which was probably the biggest need we were hearing from our enterprise customer base. Because they’ll tell you, “It’s hard. Even when it works, it’s hard. We have compliance checks now that are hard and costly, let alone having to duplicate infrastructure and replace it.”
Tim Page: It feels like the enterprise has moved to not being afraid to do DR in the Cloud. It’s moved on from “I want to keep it in my data center.” That’s part of why we moved to the petabyte scale now. We can probably go 4 or 5x that; we just haven’t tested it, but as customers want to go, we’ll test it and go. DR is a big pivot point for us, for sure.
Chris Evans: You made an interesting point there, Brian, about the DR process. I look at DR and think: for your environment, if you couldn’t just give me a book and say, “Here are the instructions, off you go,” then I’m afraid your DR isn’t good enough. I would look at it from that point of view, because you can’t rely on people’s site knowledge to get you through the DR process. You need structure around it that allows you to automate it, or do other things to make sure it actually completes successfully.
Brian Biles: Yeah. Historically this was called runbooks. These were long documents that said, “For this particular component of the data center, consider the following, do the following steps to revive it somewhere else.” The more components there are, the harder that is. Just as an example, in the current generation of products before ControlShift, there are typically primary storage that has to be touched to revive things. There are networks that have to be mapped. There are VMs that have to be re-registered with a vCenter or some equivalent on some other site. There are backup images.
Brian Biles: An interesting stat: between 20 and 25% of the data center failures that require restart these days come from cybercrime. In a cybercrime incident, the breach typically isn’t noticed until about six months in. If you’re going to restore things, you have to restore things that are old. That means you have to have a tie to backup, and most of the systems in this space don’t. We had to. So as bad as the current generation of products is at orchestrating a lot of moving parts, doing a lot of file copies across different systems and so on, they typically don’t include backup. Or it’s a backup-only thing that doesn’t include primary, so you have a very long RPO.
Brian Biles: The moving parts problem is an issue. The efficiency of getting data back and forth between sites is an issue. Security is an issue. Just multiple parts.
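The runbook idea Brian describes above lends itself to a quick illustration: runbook automation turns that long document into ordered, checkable steps, each gated on the ones before it. This is a hypothetical sketch of the general pattern; the step names and checks are invented for the example and are not Datrium APIs.

```python
# Hypothetical runbook: ordered recovery steps, each a function that
# reads/updates shared context and reports success. Invented for illustration.

def map_network(ctx):
    ctx["network_mapped"] = True
    return True

def register_vms(ctx):
    # VMs can only be re-registered once the failover network is mapped
    if not ctx.get("network_mapped"):
        return False
    ctx["vms_registered"] = True
    return True

def attach_storage(ctx):
    ctx["storage_attached"] = True
    return True

RUNBOOK = [
    ("map failover network", map_network),
    ("re-register VMs with vCenter", register_vms),
    ("attach restored storage", attach_storage),
]

def execute(runbook):
    """Run steps in order, stopping at the first failure."""
    ctx, log = {}, []
    for name, step in runbook:
        ok = step(ctx)
        log.append((name, ok))
        if not ok:
            break
    return ctx, log

ctx, log = execute(RUNBOOK)
print(all(ok for _, ok in log))  # → True: every step succeeded, in order
```

The value of encoding it this way, rather than as a document, is that the same sequence is repeatable and testable, which is exactly the gap Brian describes in manual runbooks.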
Chris Evans: Let me tell you an interesting story. I did a lot of DR, sadly. I spent a lot of time sitting in the data center waiting for those things to happen. Probably one of the craziest ones we ever had: we failed over the entire data center to another location, all automated, all sorts of things working. Then we found out that we only had one WAN server, and the WAN server was in the source data center we’d just lost, so the entire DR test failed. In order to-
Brian Biles: Not atypical.
Chris Evans: Yeah, absolutely. I guess, that’s why you test. Somebody had to go in and pull that physical box out, run it down the road, plug it back in, so we could just continue the test and then failback, but it was still a fail. Yeah, there’s so many moving parts in a DR test. It is an incredibly complex process.
Brian Biles: We’ve talked with a lot of customers about this, as you might guess. They all have to tell their auditors that they’re ready to go. When you actually press them on, “How long did the last test take before you were complete?” there can be some horror stories. Mostly, it’s never worked the first time.
Brian Biles: There’s a huge win in starting with virtualization, because it’s a convenient point to encapsulate workloads and establish methodologies, and that makes things a lot easier, but coordinating all the data systems around it remains a problem. Our approach of converging primary and backup, with dedup everywhere and backup efficiency everywhere for data mobility, gave us a huge head start.
Chris Evans: You mentioned one thing in there, and then we can talk a bit more in a second about what you’ve actually announced. You mentioned one thing I thought was really interesting: the idea of ransomware taking applications down, but not needing a restore until, say, six months later.
Chris Evans: One of the use cases I’ve seen that’s been a big issue is if you want to become more mobile with your applications, and so you move something onto the public Cloud, how do you know where you’re going to restore that from six months later, if the backup might have been done somewhere completely different? Now you’ve got to have some sort of tracing mechanism that can go back and say, “Well, when we ran that six months ago, we think we backed it up in this location …”
Brian Biles: That’s right.
Chris Evans: “… Therefore, the backup’s probably on that system there. Oh, did we decommission that? Did we move that? Is that on the same network to get the restore in place?” Now you-
Brian Biles: It’s a giant mess.
Chris Evans: It becomes a huge mess, so you do have some sort of integration between the primary and the secondary in order to make sure you don’t hit, for example, that sort of scenario.
Brian Biles: That’s right. As well as a good catalog, so you can actually find things, and on and on. Because we had done the basics right for backup (we had a catalog system, and both on-prem and multi-prem sites replicating efficiently in the Cloud version), most of the starting points were in place for us to attack the problem. We built a SaaS framework for an application to run in the Cloud to guide this automation. ControlShift is the outcome.
Brian Biles: We did it with a strong eye to being able to restart the workloads in the VMware Cloud. Our first release is going to be for prem-to-prem. The SaaS offering will run in the Cloud, and it’ll guide restart of workloads from one DVX site to another. Later in the year, we’ll be able to restart in the VMware Cloud from the Cloud DVX images. Same automation system, same runbook automation approach, but at that point it’s enormously efficient, because, for example, to do a DR test you can use on-demand instances in the Cloud, so you don’t have to own that infrastructure all year. That’s incredibly compelling, economically.
Chris Evans: Yeah. Tim, in terms of CIO conversations, when you go to them and say, “What’s your most expensive thing? What’s the biggest pain point?” The cost of building out an entire replica for a DR environment must come pretty high up.
Tim Page: It’s really high up. CIOs will use the term “Cloud first.” What they mean by that is, “I want to move my data to where I can get the best service for it,” including backup and DR, two completely separate services. Just the cost of that infrastructure, or multiple infrastructures talking to each other, is pretty expensive. We allow them to port not just the software but the cost of it, on-prem or in the Cloud, during the course of their subscription. That’s a big deal.
Brian Biles: We’ve done tests, just spreadsheet analysis for customers, of how much they save by integrating backup with primary in our conventional offering. That can be a 2x saving. If you add Data Domain and a bunch of backup software, and the people to run it, and so on, it really gets out of control. We did a survey as part of this announcement of 540 enterprise IT folks, about what they think of IT transformation.
Brian Biles: A lot of them, 85%, would love to be investing more in IT transformation, and over five years they will be. The things that get in the way are, first of all, budget and, second, management of legacy infrastructure and all the people needed to do that. It’s just hard to get out of that. I think they all understand convergence is the path out; it’s the way to simplify.
Brian Biles: Our approach of integrating primary and backup is novel. They just haven’t seen that before. We have a rich catalog system, policies and so on, deep in the product. While we’re running the world’s highest-performance primary storage (you can see it on the IOmark website), we’re also a full, complete, organic backup system, so that saves about 2x if you go down that path. To then enable DR to the Cloud with on-demand instances can be astonishingly different. We’ve seen 5 to 10x price reductions.
Chris Evans: Isn’t there, then, a little bit of reticence about hosting primary and secondary data together? The old-school learning is that you keep your primary data and your secondary data separate. The rules people talk about say we should physically separate them, so that if you lose one or the other, you haven’t taken out both.
Brian Biles: Yeah. It’s an interesting question. I’ve been in the industry a long time, I think longer than you, so I’ve heard this. I was surprised by the outcome of our poll. First of all, things have changed: online backup to a Data Domain, then what the hyperconverged guys have done. Partial solutions like this are getting people to rethink it.
Brian Biles: They really do need to figure out how to get to IT transformation. In the poll, we asked, “Would you prefer to have backup and primary converged, if you didn’t have to pay for a separate backup system?” The votes were 10 to 1 “yes,” which I think is a signal of where we are.
Chris Evans: You’re saying 90% said they would, and 10% said they wouldn’t.
Brian Biles: It was …
Chris Evans: Or even higher than that.
Brian Biles: It was 70% said they would, 7% said they wouldn’t, and 20% were neutral.
Chris Evans: Didn’t know either way.
Brian Biles: “Just tell me what you’re talking about.”
Chris Evans: Okay, yeah. Well, that … Yeah, but that’s a huge number and that, to me, seems a surprise.
Brian Biles: 70% was astonishing.
Chris Evans: Yeah.
Brian Biles: I think it’s just where the world is. They can’t afford to invest in these optimized approaches anymore, and the technology’s gotten really good. You can isolate behavior to such an extent that it’s safe.
Chris Evans: Yeah, okay.
Brian Biles: We still recommend replicating, and if you want a really, truly separate file system, replicate to Cloud where it’s storing in S3.
Chris Evans: Yeah. Then that’s a fair point, so maybe there’s a division there that talks about separate and the concept of backup and restore from VC, DR as in continuing your business, and having replicas for other reasons, I guess.
Brian Biles: Right.
Chris Evans: You got to make sure you’re using the right terminology in that respect.
Brian Biles: I feel certain that we’re selling into an interested market. Now it’s just a question of execution, and I think we’re doing the right things. DR has this nice property if you align with the idea that you need to recover both short-RPO windows for the most recent data and older data from backups.
Brian Biles: The policies for establishing data capture (at what frequency, where you want to replicate it, how long you want to retain it) can largely be the same; it’s just a difference in restart. In backup, you’re restarting a single VM, maybe. In DR, you’re restarting like a thousand of them.
Chris Evans: In an environment. Yeah, a whole- [crosstalk 00:18:09]
Brian Biles: In a particular sequence, and so that part’s pretty different, but the storage layer and the policy layer can be largely overlapping.
Chris Evans: Okay.
Tim Page: Chris, a data point on this release: in 60 days, because we track them, we’ve done over 100 demos.
Chris Evans: Right.
Tim Page: Just the fact that-
Chris Evans: Who had to do that?
Tim Page: We have three guys that were doing it.
Chris Evans: Wow. Must have been very busy.
Tim Page: Well, it actually made us realize the market is willing, so now we’re hiring full-time just to do it. A lot of these are non-customers. We pre-sold to a number of our customer base for validation, then we went out to the non-customer base, and we’ve gotten no negative reaction. I think it’s a known issue. They have to see it to believe that it feels like the same management and operational stack, whether it’s on or off-prem. When they see it, they believe it.
Brian Biles: Yeah, and the numbers are sort of staggering. Not too long ago, I was in an account in Florida that had a unified storage system with VMware, and they’d paid a lot of money to do a DR orchestration rollout with a different vendor. They spent half a million dollars and got it all started. They hit the button, it failed over, and everybody cheered. Then they hit the button to fail back, and it didn’t work. They had to run out of their DR site for about six months, and lawsuits followed.
Chris Evans: That’s a classic error we used to have, as we said before we started; this is mainframe days, where a lot of the mainframe failover was to a managed service provider. Most of the time, when you failed over, it was great, because the storage environment would allow you to fail over, but it wasn’t built to be incremental coming back. When you did turn it around, you’d effectively have to replicate the whole thing back, so you were in days, weeks, months of replication to get back to where you started. In that time, you’re not protected.
Brian Biles: Right.
Chris Evans: Going DR became a bit of a scary thing to do.
Brian Biles: It remains a scary …
Chris Evans: Absolutely. Changing the paradigm and making sure that it isn’t scary has to have benefits, however we operate. [crosstalk 00:20:07]
Brian Biles: Right, and let me make two points about that. The first is about replicating back. We have a huge natural advantage in our architecture around WAN optimization and storage, because it’s deduped everywhere. We have a content-addressed file system, so when we replicate, we send fingerprints of what’s changed. If the destination already has a fingerprint, it has the data already with great certainty, so we don’t have to send the data.
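The fingerprint-first replication Brian describes here is classic content-addressed dedup over the wire: hash each changed block, and only ship blocks whose hashes the destination doesn’t already hold. A simplified sketch of the general technique (not Datrium’s actual wire protocol):

```python
import hashlib

def fingerprint(block: bytes) -> str:
    # Content address: the hash of the block's bytes identifies the block
    return hashlib.sha256(block).hexdigest()

def replicate(changed_blocks, destination_store):
    """Send only blocks whose fingerprints the destination lacks."""
    sent = 0
    for block in changed_blocks:
        fp = fingerprint(block)
        if fp not in destination_store:   # already there? dedup it away
            destination_store[fp] = block
            sent += 1
    return sent

dest = {}
first = replicate([b"alpha", b"beta"], dest)    # cold destination: both sent
second = replicate([b"alpha", b"gamma"], dest)  # "alpha" deduped, "gamma" sent
print(first, second)  # → 2 1
```

Failback works the same way in reverse, which is why only the modified blocks (and hence minimal egress) cross the WAN.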
Brian Biles: As you modify things in a failover site, we then have to replicate those changes back, and it’s very efficient. From the Cloud to on-prem, that would normally be known as egress costs, so we make that super-
Chris Evans: Significant impact if you don’t do it properly.
Brian Biles: We make that very low, and it goes much faster. The second thing is maybe the more powerful point: you just have a lot of things to double-check, and this is where a converged approach is super important. We have basically the same stack in the Cloud, on-prem and in multiple-prem cases.
Brian Biles: If you’re doing failover and failback, there are a hundred little things to check. You need to make sure the VMs are registered with vCenter. You need to make sure the storage is accessible, the network is up, the backup images are there, all this stuff, based on policy. Because we have a single stack, which we’re calling Automatrix, a sort of data plane across Clouds, we can run sensors to know that everything’s in place every half an hour.
Brian Biles: Think of that as a recovery compliance objective. No one uses that term, because six months is such a bad answer; you don’t know when you’re compliant. We can know every half an hour, which is a breakthrough. No one’s ever seen that before.
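The “recovery compliance objective” idea above, continuously running sensors so DR readiness is known every half hour rather than once per audit, could be sketched roughly like this. The individual checks are invented for illustration and are not Datrium product features.

```python
# Hypothetical DR-readiness sensors; each returns True when its check passes.
SENSORS = {
    "vms registered with vCenter": lambda state: state.get("vcenter_ok", False),
    "replica storage reachable":   lambda state: state.get("storage_ok", False),
    "backup images present":       lambda state: state.get("backups_ok", False),
}

def compliance_report(state):
    """Evaluate every sensor; compliant only if all of them pass."""
    results = {name: check(state) for name, check in SENSORS.items()}
    return results, all(results.values())

# Would run on a schedule, e.g. every 30 minutes
results, compliant = compliance_report(
    {"vcenter_ok": True, "storage_ok": True, "backups_ok": False}
)
print(compliant)  # → False: one missing backup image breaks readiness
```

The point of the aggregate is that a single failed sensor flips the whole site to non-compliant, so problems surface within one polling interval instead of at the next DR test.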
Chris Evans: Actually, that’s a fact: most people are not compliant most of the time.
Brian Biles: No, that’s right. They’re compliant during the audit [crosstalk 00:21:53] and that’s about it.
Chris Evans: Let’s talk about the new announcements you made this week. Why don’t you take us through the detail, Brian, and explain exactly what you’ve announced and what you’ve released.
Brian Biles: Sure. We had three announcements; the heart of what I’ll be talking about next is just one of them. The first was about our vision: how will you use these products over time? The second was about a survey of 540 enterprise IT folks on IT transformation, what the challenges are and what they’d like to see. The third was our product introductions, and that had two major parts.
Brian Biles: The first is that we’ve got a platform called Automatrix, which includes the DVX system we’ve had since the beginning and Cloud DVX, which we’ve had for a year and a half; that’s the basis of our parallel Cloud infrastructure. We’ve added ControlShift, which is shipping today: a workflow automation system for DR and mobility.
Chris Evans: Okay, so is that now what you’re calling Automatrix?
Brian Biles: Yes.
Chris Evans: Is that the brand name for all of these components together?
Brian Biles: For all of these together, picturing a multi-Cloud deployment, where there’s a common data plane, including primary, backup, mobility, encryption and DR, across-
Chris Evans: You mentioned five things there, so-
Brian Biles: That’s right.
Chris Evans: Let’s do those again, a bit slower.
Brian Biles: High-speed primary storage.
Chris Evans: Primary, yup.
Brian Biles: Backup, with backup efficiency everywhere. Blanket encryption: in-flight, at rest, in use. Mobility with WAN optimization. And DR orchestration, or mobility orchestration: workflow automation.
Chris Evans: Right. In terms of encryption, let’s pick that one as we go through, because I think that one might stand out, and people might think, “Why is encryption important?” I mean, they know why it’s important [crosstalk 00:23:33] in terms of it being your primary data, but why would it be important to incorporate it into your platform?
Brian Biles: In a data center deployment, there are three places where you want encryption, because together they represent everywhere. You want it in use, on the host: if there’s any local capacity you’re using, you want it encrypted there. If there’s data in flight, you want it encrypted in flight. And there’s encryption at rest, so no one can pull a drive and find the data.
Brian Biles: Historically, there’s a fight between those things. If you encrypt in use, then you might get encryption in flight, but if you’re using, for example, an all-flash array, the economics of that kind of purchase require it to get data in the clear, so that it can do data reduction, dedup and compression.
Chris Evans: Absolutely.
Brian Biles: If it’s encrypted, all the patterns are messed up, so it can’t do it.
Chris Evans: Yeah. That’s a big problem for some of the vendors, without going into detail: where the encryption happens at both ends, the data they get is effectively random to them, and it becomes very difficult to achieve any actual savings.
Brian Biles: Right. In our approach, in our stack, we do the encryption at the ingestion point, at the host. We do fingerprinting and compression, and then encryption, right there in the same pipeline, so it’s encrypted in use, in local flash, on the network, and at rest in a separate data pool.
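The pipeline ordering Brian describes (fingerprint and compress while the data is still in the clear, then encrypt) is what preserves dedup and compression: ciphertext has no patterns for either to exploit. A toy sketch of that ordering follows; the XOR “cipher” is only a dependency-free stand-in for a real algorithm such as AES, and none of this is Datrium’s actual code.

```python
import hashlib
import itertools
import zlib

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher: XOR against a repeated key. Illustration only;
    # a real pipeline would use an authenticated cipher like AES-GCM.
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

def ingest(block: bytes, key: bytes):
    """Fingerprint and compress in the clear, then encrypt. In this order,
    dedup and compression still work, yet stored bytes are never plaintext."""
    fp = hashlib.sha256(block).hexdigest()   # dedup fingerprint on plaintext
    compressed = zlib.compress(block)        # compression sees real patterns
    encrypted = xor_encrypt(compressed, key)
    return fp, encrypted

key = b"example-key"
fp, stored = ingest(b"hello " * 100, key)

# XOR is symmetric, so applying it again recovers the compressed data
restored = zlib.decompress(xor_encrypt(stored, key))
print(restored == b"hello " * 100)  # → True
```

Reversing the order (encrypt first) would make `zlib.compress` nearly useless and give every copy of the same block a different fingerprint, which is exactly the all-flash-array conflict described above.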
Chris Evans: That basically makes sure that you can, A, encrypt, but B, do it properly, so you’re not losing out on the optimization features of the platform itself.
Brian Biles: Correct, so we can maintain always-on dedup, compression, erasure coding, all this stuff, while having blanket encryption.
Chris Evans: Okay.
Brian Biles: You jumped into the middle of the five.
Chris Evans: I did, yeah.
Brian Biles: With primary storage …
Chris Evans: I jumped into that one because primary plus secondary seems like it might make quite a lot of sense, but then when you talk encryption, you think, “Oh, hold on. Why is that going to be relevant to this discussion?”
Brian Biles: Yeah.
Chris Evans: Clearly, there’s a very good reason for it. That’s what I was just trying to make sure we didn’t lose.
Brian Biles: Thanks, yeah. To even imagine primary and backup convergence, with VM-centric policies and so on, you have to maintain the speed people expect from flash primary storage. We’re faster than any flash array because we do it on hosts, with local latency, and it scales very well.
Brian Biles: Backup is a combination of VM-centric accountability and policies, but also all of these data reduction features. That’s a lesson we learned in the Data Domain days: backup is insurance, so you don’t want to pay more for it than you do for your primary storage. Doing backup without all these features, especially on flash, is kind of a nonstarter.
Chris Evans: Yeah. That’s a good way of looking at it. Insurance shouldn’t be more expensive than buying the product again.
Brian Biles: Right. Mobility came naturally for us when we did replication, because everything was content-based and fingerprinted, so we can do these WAN optimization tricks naturally, and we have encryption anyway.
Chris Evans: We mentioned earlier the reason why that was useful. The whole idea of being able to have the data that’s six months behind still there, so that when you do do that restore, you’re not having to go looking for where that backup might be.
Brian Biles: That’s right.
Chris Evans: The ability to be mobile with both primary and secondary becomes a key consideration.
Brian Biles: That’s right. Then DR orchestration is just sort of an additional layer of policy and mapping and scripting, and so on, that can leverage all of the features that we’d already talked about. It can leverage the policy setting, the VM-centric access, the granular snapshots, encryption, all that stuff.
Chris Evans: ControlShift is key.
Brian Biles: Yeah.
Chris Evans: Which means that will now do that automation piece for me?
Brian Biles: Yes.
Chris Evans: If I’m going from primary-to-primary … Sorry. From on-prem to on-prem, should we say?
Brian Biles: Yes. That’s shipping today, and on-prem to Amazon will be shipping at the end of the year with links to the VMware Cloud, so without conversion of the VM type or how vCenter works or whatever, we can do a failover to an on-demand data center.
Chris Evans: Okay, so that could be public Cloud, we’re talking about Amazon?
Brian Biles: Yeah, on Amazon. Yeah.
Chris Evans: We’re literally saying, taking my primary data center, failing it over to the public Cloud, fully automated, fully scripted and with the ability, every 15 minutes, to guarantee that those compliance checks have been done to make sure it’s all in place.
Brian Biles: 30 minutes, and yes.
Chris Evans: 30 minutes. Okay.
Brian Biles: It has these enormous economic benefits of storing super-efficient data in S3 while you’re waiting. Then, when you want to test DR, it uses on-demand compute. It’s just way, way less expensive than having a peer data center.
Chris Evans: Yeah. Let’s talk about purchasing models and the cost of this, because the whole DR cost is ridiculous when you think about it, having two data centers. It strikes me that there are a lot of other things you could be doing here. For example, I might want to reorganize my data center. My data center could be in a mess, or I might want to consolidate it. I might want to do any of those scenarios where I effectively want to move a workload from x to y, semi-permanently or temporarily, in order to do something and then move it back.
Chris Evans: It seems to me that you become a perfect solution for that. Also, at a cost that really is only about the cost of using it while you need it there.
Tim Page: That’s true, and that’s a big, compelling reason why people are talking to us today. It’s not just cost, it’s on-demand DR, to the extent that you only pay for what you have to spin up, when you have to spin it up. Our pricing model is efficient too: wherever you buy us in a subscription model, you can move that, dollar-for-dollar, to take advantage of the other services.
Tim Page: For example, I was with a CIO this past week, seven data centers, wants to be out of all of them in five years. He looked at us and said, “Oh, I can buy your software to run on, to port to.” We’ve commoditized the storage part of that to nothing; he looked at it as throwaway cost. Then he can port those licenses, during that term, into the Cloud.
Tim Page: It’s the portability of our pricing model, but even within that model, the services we offer are efficient and mobile, by encrypting and deduping, and only instantiating resources when needed.
Chris Evans: How are you pricing it? Is that like on a VM basis? Is it on a site basis, or is it all of those?
Brian Biles: It’s all subscription, and depending on the product, there are different models. On the capacity side, on Cloud DVX or the capacity part of the on-prem DVX, it’s dollars per terabyte per year. That’s a consistent price across those models. There’s a per-host fee for performance on the on-prem DVX, and that’s just a simple per-host fee, so we don’t care whether there’s a lot more CPU or SSD or whatever. It’s all flat. As we do primary storage on Amazon, it’ll have a similar kind of model.
Chris Evans: I guess that means you can build big VMs, small VMs.
Brian Biles: Yeah, whatever you want.
Chris Evans: It’s whatever your choice.
Brian Biles: Whatever you want. We support up to 32 terabytes of local, raw flash per host, so with dedup and compression, that’s just a crazy, big local workload. ControlShift is a slightly different model in that market. Because you’re moving VMs around, the easiest thing to guide to is just the number of VMs, so we sell that on a number of VMs over a number of years.
Chris Evans: Right, okay. Brian, just going back to the five, which I kept interrupting you on. We didn’t really finish all of those, did we? We said primary, secondary, encryption.
Brian Biles: Yeah, just to recap, they’re primary, backup, encryption, mobility and DR. We believe these are the five essential data services that you need in every Cloud. If you don’t have a common way to access them and work with them, then you’ll end up kind of landlocked in a particular environment, with particular vendors doing each one. As silos, it’s very hard to move to the Cloud, where there’s another five, and very hard to move then to a secondary Cloud, where there’s yet another five.
Brian Biles: You’ll see a variety of vendors end up talking about, or acquiring, various parts of these five, because we’re all sort of in a race to the data plane: have a data plane that works on-prem the same way it works in the Cloud, the same way it works in another Cloud.
Brian Biles: Virtualization vendors are actually making progress in this area. Kubernetes is in multiple Clouds and operates roughly the same way. VMware is on-prem, obviously, and in Amazon, and they just announced they’re going to be on Azure. If the virtualization is in common, what’s left is the five parts of the data plane, and how to get those to work in a standard way.
Chris Evans: That makes me think then, and I don’t know if this is a future discussion or something we’re able to talk about, that if you’re looking to do VMware on AWS by the end of the year, then there’s no reason why in the future it couldn’t be VMware on Azure, or whatever platform there is to run on. I guess that’s the sort of promise of what your solution is going towards.
Brian Biles: Yeah. As part of this announcement, we announced a bunch of roadmap items. One is that we’ll be working with VMware Cloud on Amazon. We’ll also be offering primary storage on Amazon, sort of completing our stack there. VMware announced two weeks ago that they’re moving to Azure, which is great, because we were also announcing that we’re going to Azure with the whole stack. The five disciplines go wherever we go, at that point. As other Clouds become possible, we’ll go to other Clouds.
Brian Biles: We’re also going to be supporting Kubernetes, because it’s also a standard that’s emerging, but VMware is way more dominant and a much more interesting starting point.
Chris Evans: Yeah. I think the container side of things is a promise, perhaps, for the future. I don’t know how many people, Tim, are saying that containers are something they want to start adopting, but I’d imagine we’re seeing a push for that from the developers’ side, probably, more.
Tim Page: It feels like it’s becoming bigger, faster than we thought at the enterprise level. One example that ought to be mentioned is that we closed a big service provider in December, after a four-month POC with our stuff. They replaced some big primary vendors, smaller backup vendors and a hyper-converged vendor with our stack, with our one code base.
Tim Page: A big piece of why they decided that is how far along we are in porting to Kubernetes. With big DevOps shops like SPs and some of the big enterprises, we’re seeing it’s a big attention grabber. We believe it’s going to be VMware and Kubernetes, so that’s where we’re at.
Chris Evans: Yeah. At the end of the day, ultimately, you’re a file system at the bottom end there, and that’s-
Brian Biles: That’s certainly the starting point.
Chris Evans: That’s an ideal scenario for when you’re looking at the way you put data in containers.
Brian Biles: That’s right. We’ve supported Docker for more than a year. With that approach, we offer an NFS mount, which looks like a virtual drive, a persistent volume, to Docker. Our own developers use that for build and test environments. By being able to snapshot a container’s persistent volume, then clone it and restart it on another host, they can move much faster without the normal copying of new builds that you get in those environments. Our own cycle time improved by about a factor of two using that approach.
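For context, presenting an NFS export to Docker as a persistent volume looks roughly like the standard Docker volume configuration below. This is a generic sketch of the mechanism Brian describes, not Datrium's product integration; the server address, export path and image name are hypothetical:

```shell
# Create a named Docker volume backed by an NFS export
# (10.0.0.5 and /exports/build-data are placeholder values).
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=10.0.0.5,rw \
  --opt device=:/exports/build-data \
  build-data

# Mount it into a container; any container on any host pointing at the
# same export sees the same data, which is what makes snapshot-and-clone
# workflows on the underlying filesystem useful for build/test.
docker run --rm -v build-data:/data alpine ls /data
```

The snapshot and clone themselves happen below Docker, in the storage layer, so a cloned volume can simply be surfaced as a new NFS export and mounted the same way on another host.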
Chris Evans: Just by using it internally.
Brian Biles: Yeah.
Chris Evans: Amazing. Okay, so that’s going to be interesting to see the Kubernetes support. That’s just a roadmap item at the moment?
Brian Biles: Kubernetes is. Docker has been shipping for a while.
Chris Evans: Right, okay. Brilliant.
Tim Page: It’s in process.
Chris Evans: Okay, well, I think the announcements are really interesting. It sets a stronger vision for me about where you’re going to go in terms of mobility and the whole package together, which I think is really fascinating. I’ve been really pleased to learn about this release and understand exactly what the direction is going to be.
Chris Evans: If other people want to go back and understand this a bit more, because we’ve talked about a lot of technical issues here in the last sort of 40 minutes, where can people go online to find out more information?
Brian Biles: Well, obviously, our website. We’re doing a fair amount of interviewing, so hopefully you’ll see some in the pundit world as well, in the press world.
Chris Evans: What about people who maybe want to try this stuff out? Are you working on any sort of technology to allow them to see this, and see how failover works, or even try it out? [crosstalk 00:35:04]
Tim Page: Yeah. There’s a couple of ways. Go to our website. You can sign up for demos, which we do every day. There’s also a lab sandbox we’ve created for people to actually try it in.
Chris Evans: I think the first thing is to see the video, see how it works, and then if you’re really interested, you could even try it out.
Tim Page: It is.
Chris Evans: Great. Okay, well, Brian, Tim, thanks for your time, and look forward to catching up soon.
Tim Page: Chris, thank you.
Brian Biles: Thanks, Chris.
Brian Biles is Chief Product Officer and co-founder of Datrium. In 2001, Brian was a founder of Data Domain, a pioneer in building large-scale data deduplication systems. He served as VP of Product Management and Business Development until the company’s acquisition by EMC in 2009, where he became VP of Product Management, Backup Recovery Systems. Previously he held leadership positions at Sun Microsystems, VA Linux and Data General. He has a BS in Computer and Information Sciences from the University of California, Santa Cruz. In 1998, Brian earned a “special thanks to” credit in the movie The Big Lebowski.
Tim Page is the CEO of Datrium. He was previously one of the founding members of VCE, helping grow that venture to $3 billion in revenue in 7 years.
Copyright (c) 2016-2019 Storage Unpacked. No reproduction or re-use without permission. Podcast Episode E8D6