#151 – Introduction to StorageOS V2.0 (Sponsored)

Chris Evans | Cloud, Containers, Software-Defined Storage, Sponsored

In this episode, Martin and Chris are joined in conversation by Alex Chircop, CEO at StorageOS. The company has announced StorageOS V2.0, a significant evolution in their storage platform, built for containers using containers.

As this episode explains, the version 2.0 release of StorageOS enhances scalability and resiliency, with a strong focus on features needed for enterprise adoption. Each volume presented to a container now has a “mini-brain” to implement much more distributed application awareness.

Availability is increased through the use of Delta Sync, a new technology to ensure data volumes are recovered to a consistent state within as short a time as possible. Security has also been a big focus, with data encrypted at rest and in transit. APIs all now use authentication and there’s an internal certificate authority to mitigate the complexity of certificate management.

StorageOS is delivered through a freemium model and is free for the first 500GB. Platform documentation can be found here – https://docs.storageos.com/docs/

More information on the v2.0 announcement is here in this press release and blog post. Alex references the self-evaluation guide, which can be found here along with details to register for the 500GB free access.

Alex references the CNCF storage landscape white paper developed by the CNCF Storage SIG. A copy can be found here.

Elapsed Time: 00:36:50

Timeline

  • 00:00:00 – Intros
  • 00:01:00 – KubeCon postponed to August
  • 00:02:22 – What is StorageOS?
  • 00:05:35 – StorageOS builds in data resiliency
  • 00:08:25 – Container storage requires more application awareness
  • 00:10:15 – StorageOS scores workloads to manage physical data placement
  • 00:11:30 – There’s no dependency on a specific container platform
  • 00:13:50 – Container storage is changing the assumptions about storage requirements
  • 00:15:10 – What is new in StorageOS version 2?
  • 00:17:15 – The adoption of containers is adapting the development of StorageOS
  • 00:20:00 – Consistent deterministic performance is essential, even with component fluctuations 
  • 00:22:10 – Version 2 introduces more flexible federated capabilities
  • 00:23:00 – StorageOS now includes much greater embedded encryption & security controls
  • 00:25:30 – How do customers upgrade from StorageOS v1 to v2?
  • 00:28:00 – What experiences and feedback is there from customers?
  • 00:30:10 – Storage is becoming part of the application workflow process
  • 00:34:45 – Call to action – where can users find more?

Transcript


Chris Evans:
Hi, this is Chris Evans with another Storage Unpacked Podcast. I’m here with Martin again. How are you doing, Martin?

Martin G:
Yeah, not too bad. How are you?

Chris Evans:
Pretty good. All settled in for working from home, I think now, you know?

Martin G:
Yeah, like the rest of the globe, yeah.

Chris Evans:
Yes, like the rest of the globe. Exactly, absolutely. I think we’re all getting very much used to it. So this week we have a guest. Very pleased to say that we have a returning guest. We’ve got Alex Chircop from StorageOS. Hi Alex, how are you doing?

Alex Chircop:
I’m good, thank you. And hi, Chris and Martin. Yes, we are all working from home as well at StorageOS, as is, I imagine, the vast majority of the world today.

Chris Evans:
And I would imagine the vast majority of people who are listening to us as well.

Alex Chircop:
Yes, indeed.

Chris Evans:
So we were going to talk at [KubeCon 00:01:00], which was due to be held in fact this week in Amsterdam. Funnily enough, I would have been on a flight today, as I’m sure you would have been if not yesterday or the day before, but we haven’t been able to do that. So we’re doing this remotely rather than face to face and we’re going to take some time to talk about your storage platform and version two which is coming out.

Alex Chircop:
Yes, that’s right. It’s a bit sad that KubeCon had to be postponed; it’s now potentially postponed until August, or it might become a virtual event. We were looking forward to KubeCon because it’s a great opportunity to talk to so many people in the community, and I sort of have two hats, because I’m the CEO of StorageOS, but I am also the co-chair of the CNCF Storage SIG. So these sorts of events are great opportunities to work with the different teams from lots of different vendors and independent developers.

Chris Evans:
Yep. Let’s make the best of it and let’s get into the technology. Oh, by the way, in case anybody hasn’t made the connection, you have been on the podcast before. It was actually just over two years ago, when we were talking about Spectre and Meltdown, the processor bugs. So we’ve had a discussion before, but we’re not going to go down into that today. We’re going to talk about your storage platform and exactly what it is and what it does for people. So could you start by just giving us a bit of a background as to what StorageOS is, what the platform does and what people would use it for?

Alex Chircop:
Sure. So StorageOS is a software-defined, cloud-native storage platform. The idea behind this is that it gives end users the ability to have enterprise-class storage functionality deployed in a platform-agnostic way. Our product is actually deployed as a container, which means it’s infrastructure-agnostic, and it provides availability and performance and security to run stateful workloads at scale. The reason why this is important is because no application is effectively stateless, despite the mantra about containers being stateless. What we find is that end users go through a journey where they convert some of their applications to containers, because this gives them portability, removes the dependencies and means that the app is now decoupled from physical server requirements. Then they look at products like Kubernetes to automate and manage the environments and provide things like scaling and self-healing.

Alex Chircop:
Then the obvious question is, how do you deal with those stateful applications? So in dynamically formed clusters, where nodes come and go and nodes are getting upgraded and nodes scale on demand, how do you deal with the storage requirements? This is where StorageOS comes in. So, we’re deployed as a container, we virtualize storage within the cluster, and we provide dynamic provisioning to those stateful applications, which could be anything from databases, to message queues, to streaming systems like Kafka. Even more recently, things like VMs with projects like [inaudible 00:04:10].

Chris Evans:
Okay, that’s brilliant. So that gives us a bit of an introduction. Now I’m really interested in this idea of building storage, which we typically see as persistent, out of containers, which we typically think of as not being persistent. But there’s no reason to say you can’t do that, because clearly there is persistence built in there somewhere; you still have to sit that environment on some sort of physical storage at the bottom of it somewhere.

Alex Chircop:
That’s right. So in a container, the data is typically ephemeral. What we do with StorageOS is we virtualize the storage that’s available on each of the nodes within a cluster. So potentially that could be physical disk, in the case of bare-metal boxes, or virtual disk in the case of VMs, or even cloud disk, if you’re running this in the cloud. We create this pool of storage that spans the entire cluster, and from there you can dynamically provision StorageOS volumes, and those StorageOS volumes are now portable and linked to a container. So those volumes get imported into what’s called the namespace of the container. This means that those applications, say a MySQL database perhaps, or something like Kafka for example, can now access that storage, and if the container moves between nodes, it continues to access the same storage as it moves around the nodes with it.

Chris Evans:
Okay. Obviously you’re adding things like data protection, resilience and other things in there, to make sure that data is secure if things like nodes go down, containers go down and so on because one of the things with any container environment is we assume that containers can go and come back again.

Alex Chircop:
So that’s right, indeed. One of the key things is that a lot of the patterns that we see in typical enterprise applications have a strong dependency on availability at the storage layer. So we do things like replicate the data across multiple nodes to make sure that your data is safe, even if a node is lost or a disk is lost. We do things like encryption, so that data is encrypted both in transit and while it’s at rest on disk, and we implement a number of data services so that these services are available across the cluster, transparent to the application, because exactly as you said, nodes do come and go, but also containers can be stopped and moved to other nodes by the orchestrator.

Chris Evans:
Okay. Martin, from your perspective, how do you view it? This is quite a difference, isn’t it, from what people have been used to in terms of traditional storage?

Martin G:
We’ve had this interesting clash of cultures between traditional enterprise and containers. When containers first came along, they were always thought of as ephemeral, and everybody in the enterprise turned around and said, “Actually, most of our data isn’t ephemeral. We need to make sure it’s kept and protected. We need consistency.” So there has always been a bit of debate about transactional consistency and integrity. I’m seeing things like StorageOS becoming very important because they fix some of these problems to enable enterprises and larger users to use containers.

Chris Evans:
Yeah, exactly. Now, I just want to dig into a couple of bits of the technical side here, Alex. Not too deep, but just to try and help people understand the idea of storage built from a set of containers being mapped to the application containers themselves.

Chris Evans:
I assume obviously you can take advantage of, for example, mapping the container that’s accessing the storage against the container that’s actually running on the node itself to a certain degree, almost like I guess, what you might call an affinity with that application container. That must be one aspect to this. The other aspect I was just going to say was that one of the things we see with physical storage is that as you try and balance this idea of containers moving around between nodes and physical storage, you’ve got to somehow map a physical array to all the different nodes that could possibly access a container, so you end up with a much more complex model. So I’d imagine you’re offering both simplification, but you can actually tie that performance to a local container as part of that too.

Alex Chircop:
So that’s worth discussing. There are two things there; one is the concept of dynamic provisioning. In much the same way that a developer can say to Kubernetes, “Hey, this is my application. It needs to run these containers and it needs this amount of RAM and this amount of CPU and perhaps this connectivity on the network,” Kubernetes plays this awesome game of Tetris with all of your apps and manages to deploy them using the best configuration of the underlying infrastructure.

Alex Chircop:
So in much the same way, we simplify this at the storage layer too, because in exactly the same way that you want to be able to specify something as simple as CPU and memory, you also want to specify, for example, that your database or your application needs this amount of storage. Let’s say it needs a hundred GB of storage, and it needs to be protected and it needs to be encrypted, et cetera. What we do behind the scenes is intercept those requests, typically using CSI, the Container Storage Interface, which is a standardized API between Kubernetes and storage providers, and we dynamically provision those volumes and connect them up to the containers. Now, when we talk about data placement, again, we see an evolution as customers deploy these sorts of technologies. In the simplest form, we have the concept of hyperconverged, where every node is both running applications and also providing and consuming storage out of the storage pool.
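
The provisioning request Alex describes can be sketched concretely. In Kubernetes, a developer declares a PersistentVolumeClaim and the CSI driver provisions the volume behind the scenes; the manifest below is built as a plain Python dict, and the storage class name “fast-replicated” is an illustrative assumption, not StorageOS’s actual API.

```python
import json

# A hedged sketch of dynamic provisioning: the developer declares what
# the volume should look like, and the CSI driver does the rest.
# "fast-replicated" is an illustrative storage class name.
def make_pvc(name: str, size_gb: int, storage_class: str) -> dict:
    """Build a PersistentVolumeClaim manifest as a plain dict."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": storage_class,
            "resources": {"requests": {"storage": f"{size_gb}Gi"}},
        },
    }

# A hundred GB for a MySQL database, as in Alex's example.
pvc = make_pvc("mysql-data", 100, "fast-replicated")
print(json.dumps(pvc, indent=2))
```

Properties like replication or encryption would normally ride along as storage class parameters rather than in the claim itself.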

Alex Chircop:
Then due to commercial, technical, or operational reasons, a lot of users also end up with some sort of hybrid model, where some nodes are hyperconverged, some nodes are compute only and some nodes are storage only. But then what we also see is that they quickly move on to more advanced workloads. There are two examples here. One is the concept of data locality. So we’ve developed a system that integrates with the Kubernetes orchestrator and effectively helps score workloads such that a workload can be placed where its data physically is within a Kubernetes cluster. This gives huge benefits, the obvious reason being that you’re not going out over the network to access data. For the most part, you’re accessing data locally, and that gives dramatic improvements in latency, which in simple terms means you do more transactions per second and your application runs faster. But also, we’re seeing more complex workloads, things like Elasticsearch, Kafka and Cassandra for example. Apart from locality requirements, they also require affinity or anti-affinity on where the volumes are placed.
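
The locality scoring Alex mentions can be sketched as a toy scheduler scorer: nodes that already hold the volume’s data score higher, so the workload tends to land next to its data. The weights and node names are illustrative assumptions, not StorageOS’s actual algorithm.

```python
# Hedged sketch of locality-aware placement: prefer nodes where the
# volume's data already lives, so reads stay off the network.
def score_node(node: str, volume_locations: set,
               local_weight: int = 100, remote_weight: int = 10) -> int:
    """Higher score = better placement for the consuming workload."""
    return local_weight if node in volume_locations else remote_weight

def best_node(candidates: list, volume_locations: set) -> str:
    """Pick the highest-scoring candidate node."""
    return max(candidates, key=lambda n: score_node(n, volume_locations))

# The volume has replicas on node-a and node-c, so the scheduler
# prefers those over node-b.
replicas = {"node-a", "node-c"}
print(best_node(["node-a", "node-b", "node-c"], replicas))
```

Ties resolve to the first candidate here; a real scorer would add secondary criteria such as free capacity or current load.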

Alex Chircop:
So for example, if you have a three node Elasticsearch cluster, you want to make sure that the data doesn’t all land on the same node, that it’s actually spread out across the Elasticsearch cluster instances. Similarly, if you’re deploying for high availability, you may want to use anti-affinity placements to make sure that data is placed across data center racks, or perhaps across availability zones in the cloud.
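
The anti-affinity placement Alex describes can be sketched as a round-robin spread across failure domains, so no two replicas share a rack until every rack already holds one. The topology and data structures are illustrative assumptions, not StorageOS’s placement engine.

```python
from collections import Counter

# Hedged sketch of anti-affinity: spread replica volumes across
# failure domains (racks or availability zones).
def place_replicas(num_replicas: int, nodes_by_domain: dict) -> list:
    """Return one node per replica, least-loaded domain first."""
    placements = []
    domains = list(nodes_by_domain)
    used = Counter()                       # replicas placed per domain
    for _ in range(num_replicas):
        # pick the domain with the fewest replicas, then its next free node
        domain = min(domains, key=lambda d: used[d])
        placements.append(nodes_by_domain[domain][used[domain]])
        used[domain] += 1
    return placements

# Illustrative topology: three racks, uneven node counts.
topology = {"rack-1": ["n1", "n2"], "rack-2": ["n3", "n4"], "rack-3": ["n5"]}
print(place_replicas(3, topology))
```

With three replicas and three racks, each replica lands in a different rack, which is exactly the Elasticsearch scenario above.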

Chris Evans:
Okay. Now you mentioned Kubernetes a lot, and I remember when your platform first came out, you were very much deployed on Docker, and I would suspect that’s possibly because Kubernetes didn’t have quite the groundswell it does have today, but I’m assuming you’re not entirely dependent on … Sorry, on Kubernetes, that you could run on any container platform? It just so happens that Kubernetes is the current one of choice.

Alex Chircop:
Well, indeed. I think what we’ve seen over the last couple of years is that Kubernetes has certainly won the mind share and the market share in the orchestrator market. The Docker EE product is indeed based on Kubernetes, the Mesosphere product has moved to Kubernetes, the Pivotal products have moved to Kubernetes, and we see all the market leaders like Red Hat OpenShift based on Kubernetes, as well as the Rancher platforms, for example, based on Kubernetes. Of course, we mustn’t forget the big announcement by VMware, providing their Kubernetes engines, and of course all the cloud providers provide managed Kubernetes services. So it’s hard to avoid Kubernetes if you want to do cloud native; it’s probably the single focal point that’s driving the cloud native movement forward.

Chris Evans:
Yeah. So Martin, from your perspective how would you look at it and think of this in terms of the ability for the application to tell the storage what it wants? It seems like this is a bit more advanced than we’ve ever been able to do with, say shared storage.

Martin G:
It is. I do have some questions about whether application developers and application teams often know what their storage wants or needs. So it’s how that gets defined, and they still need to talk to people who understand a bit more about storage, because you find that application developers often have one of two ways to think about this. They either want as much storage for as little as possible, or they want as fast as possible storage. So I think a lot of people see these as, “Oh, infrastructure people get out of the way, you’re not needed any more.” But I still think there’s quite a nuanced discussion to be had with application teams and service teams about how we deploy this and how you use your storage going forward.

Alex Chircop:
So I’d like to … just chiming in on that, that’s a really good point. I mean, we’re seeing now more than ever before, developers and DevOps teams getting involved in some of the storage discussions that typically they wouldn’t be involved in because infrastructure teams would have a set number of standards that they would support and the developers would be expected to use those standards. So that’s certainly something that’s changing. In fact, one of the things that we’ve worked on at CNCF as part of the storage SIG, is we’ve created this storage landscape white paper that explains different attributes that you might want to consider when you’re thinking about storage requirements.

Alex Chircop:
So, performance is obviously one of them and availability, but also scale and latency and durability and how the storage is instantiated and all of those sorts of things and how the storage is managed and how the storage integrates with the rest of your environments. So, these are all important points in the decision process and storage is not easy. It really is still one of those strong computer science problems, especially distributed storage. So it’s hard to expect everybody to understand all of this, so great expertise is still needed.

Chris Evans:
Yeah, I agree. I think it’s always going to be complicated, but things are changing significantly. On that note, why don’t we get on and talk about what you’re doing within version two of the platform and let’s try and understand what some of the new features are. I’m guessing that some of the features we’re likely to talk about are going to come from some of your understanding of how customers are looking to use the storage? So where should we start in terms of what version two actually has within it?

Alex Chircop:
Version two is a significant change to StorageOS. Just over a year ago now, we started looking at what the trends in the marketplace were looking like, and we saw that there were a few things happening. One is obviously that Kubernetes became the de facto standard, but we also saw environments getting more complex and more hybridized, so people weren’t deploying entirely in the cloud or entirely on-prem, and they were using different platforms. Also, they weren’t moving entire applications wholesale. Very often, parts of an application lived on existing infrastructure and new parts were being put in cloud native environments. Then another change we were seeing was that in the Kubernetes space, there was this expectation that customers would deploy really large clusters and have a number of applications or projects share those clusters.

Alex Chircop:
Certainly that is the case in some instances, but what we’ve seen as a general trend is that more and more customers are deploying a larger number of smaller clusters. What this means is that it changes the dynamics of what the storage system needs to look like, because the clusters are getting more complex and need to scale, but also the clusters need to talk to each other and the clusters need to hybridize. So with that in mind, we looked at this and said, “Okay, how are we going to get the reliability and the functionality that customers are expecting in these extremely dynamic environments?” Where there can be glitches, where you have, say, a network glitch or a server glitch that takes a node offline for a short period of time, or perhaps a long period of time.

Chris Evans:
Just to interrupt there, Alex, and I guess you probably … when you were designing your solution in the first place were assuming that we would see many, many nodes, many, many separate instances of the operating system, or the StorageOS operating system running, so your resiliency and other things would be improved by that scale-out nature of solutions. But it sounds to me like you’re saying that that’s not what you’re seeing people do?

Alex Chircop:
Yes and no. We’re seeing scale increase, but we’re also seeing the scale of change increase, right?

Chris Evans:
Right.

Alex Chircop:
So scale improves redundancy as long as there isn’t correlation between the things that can fail. So having the scale of many nodes is great, but if they’re all connected via a similar switch, or something like that, then you still end up with the issues where you can have a correlated change that creates a large blast radius to a failure scenario. So when we looked at this, we said, “Okay, how do we solve this problem?” Because with a distributed control plane and the distributed data plane, the complexity is large and these are serious computer science problems that are hard to address. So one of the things we came out with was this concept of a mini brain.

Alex Chircop:
Effectively each StorageOS volume has its own smart mini brain that allows placements but also recovery decisions to happen independently of other nodes in a cluster. What this means in real life is that it dramatically improves the reliability of the environment because now even if nodes get partitioned, we can still make sensible decisions without some sort of monolithic control plane having to be able to talk to all the nodes within a cluster, but it also has some other functionality. So by implementing these mini brains, we get the concept of distributed redundancy and better availability during those complex failures but it also means that we get better bandwidth and better scale because we’re not going through single choke points to make routine decisions, or data convergence decisions. Then just one step on top of that, we’ve added the concept of what we are calling delta sync, which is the ability for volumes to rapidly converge.

Alex Chircop:
So when nodes fail and come back, or when new replicas need to be provisioned, we can dramatically improve the speed at which those reconvergences happen. This is important because we want to maintain deterministic performance. So in all of these cases, we need to remember our traditional use cases are things like databases or message queues, which mean that you need low latency, but you need deterministic low latency. As these changes happen within a cluster, you need to be able to maintain the same sort of speed. So for example, we can often reconverge volumes when a node blips, or a network has an issue, within milliseconds as opposed to seconds or minutes. That means that applications continue to see consistent deterministic performance.
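
The delta-sync idea can be sketched with a dirty-region map: while a replica is offline, the primary records which regions changed, and on rejoin only those regions are copied rather than the whole volume. Region granularity and the data structures here are illustrative assumptions, not StorageOS internals.

```python
# Hedged sketch of delta sync: track changed regions while a replica
# is away, then resync only those regions on rejoin.
class DeltaSyncVolume:
    def __init__(self, num_regions: int):
        self.primary = [b""] * num_regions
        self.replica = [b""] * num_regions
        self.dirty = set()          # regions changed while replica offline
        self.replica_online = True

    def write(self, region: int, data: bytes) -> None:
        self.primary[region] = data
        if self.replica_online:
            self.replica[region] = data   # synchronous replication
        else:
            self.dirty.add(region)        # remember for later resync

    def resync(self) -> int:
        """Replica rejoins: copy only dirty regions, return how many."""
        for region in self.dirty:
            self.replica[region] = self.primary[region]
        copied = len(self.dirty)
        self.dirty.clear()
        self.replica_online = True
        return copied

vol = DeltaSyncVolume(num_regions=1024)
vol.write(5, b"a")
vol.replica_online = False          # simulate a node blip
vol.write(5, b"b")
vol.write(9, b"c")
print(vol.resync())                 # only 2 of 1024 regions copied
```

Copying two regions instead of 1024 is why reconvergence can finish in milliseconds rather than minutes.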

Chris Evans:
Okay. I mean, I think that’s really interesting. I just want to get Martin’s opinion on that, because one of the things, I guess for anybody who’s been in this industry for any length of time as you have been, Alex, is that understanding of how the containerized environment is changing and how storage is going to have to change to match it. Now, Martin, I was just thinking that whole idea of moving to smaller clusters, and many of them, is a scenario that developers probably want, because it means they all get their own cluster, but it directly has an impact on the way that you deploy the storage.

Martin G:
It directly has an impact on how you deploy clusters, or deploy your infrastructure full stop, not just for storage. It has impacts on how you share data between different applications. Obviously a smaller cluster means you potentially have smaller numbers of disks, so a disk failure itself can be much more impactful, but it also allows you to constrain some things. So if you have a smaller cluster, and you have a cluster which is aimed very much at a development group, it means that we can help minimize the impact of noisy neighbors, for instance. As Alex said, it allows you to maintain a much more deterministic performance environment for people; it means that one rogue application can’t actually impact everybody else.

Chris Evans:
That’s the first thing that Alex mentioned. Alex, continue with what you were saying in terms of features, because I think you’ve got a comment on what Martin just said, haven’t you?

Alex Chircop:
That’s right. So the other concept that all of these little mini brains give us is the ability to be much more flexible when we’re talking about, for example, federated access between clusters. So V2 is going to enable us to provide a very rich roadmap for the way data is shared across clusters and across different platforms and environments, and this gives us a really strong capability for dealing with these environments as they mature.

Chris Evans:
That’s the first thing then, this whole idea of more resiliency, more matching to the workloads and so on. What else can we expect in version two?

Alex Chircop:
The other thing that we have implemented is strong security by default. So again, what we’ve seen is people are deploying Kubernetes, and Kubernetes does have a learning curve. It’s easy to get things wrong, and therefore we wanted to make sure that when deploying StorageOS, everything is secure by default. So we now automatically provide built-in certificate authorities that certify and authorize access across all the API endpoints as well as all the data path elements within StorageOS, and we’ve implemented security in depth, even around things like the CSI access points. So effectively now, Kubernetes is using authenticated and certified connectivity into StorageOS for all the functionality. On top of that, we’ve implemented encryption by default for all the data that’s in transit, so we don’t use unencrypted connections anywhere within the product. Again, some of these were things you could do with the V1 product, but maybe you had to do them by hand; we’re taking away all of that complexity and making it secure by default now.

Chris Evans:
Okay. I mean, I’d say a lot of what you’ve said so far in terms of, especially around the security side, is typically what you would expect if you were going into an Enterprise environment. It sounds like there’s a lot more focus on delivering features that the Enterprise are going to be coming knocking on your door for anyway. So the more you can build those in from day one and make them literally part of the product that you don’t even have to think about, I guess it makes your solution more and more Enterprise attractive?

Alex Chircop:
That’s right. So, StorageOS has quite a large community adoption. We’ve had over 3,000 clusters installed since we went GA, just over a year and a bit ago. The customers adopting Kubernetes are, as you would expect, typically in the verticals that are further along in product maturity cycles. So we’re seeing strong adoption in financial services, service providers and life sciences, for example. And yes, of course in all of those environments, things like reliability and security are incredibly important tick boxes that are both valued and required by the internal organizations before they adopt any new software.

Chris Evans:
Martin, it makes common sense, doesn’t it? I mean, that’s one of the things with our background, and well, Alex’s as well, because we’ve known Alex for a long time, but from our background in the Enterprise, these are the sorts of things that are expected to be table stakes in traditional products, so anything that’s coming along from a software perspective is going to have to deliver to the same requirements.

Martin G:
Yeah. So you’ve got the basic security requirements, and now encryption of data at rest and data in flight’s becoming table stakes. This wasn’t really the case three or four years ago in most environments; it’s only recently that encryption’s become absolutely key for your data, along with manageability at scale. I do have a question for Alex. So this is your version two, so how do you go from version one to version two? There’s an interesting comment which I’ve seen floating around about Kubernetes in general, which compares it to OpenStack. Lots of people actually had a pretty good OpenStack environment, but then it came to upgrading it; that’s why OpenStack is now dying, because it’s a nightmare to upgrade. So Kubernetes itself could actually have some of the same issues, but what about your product going from V1 to V2? Is it an in-place upgrade? Is it non-disruptive? All things which Enterprise customers expect to see.

Alex Chircop:
So that’s a really good point, and certainly what we’re seeing is, as Kubernetes matures, the upgrade processes are becoming easier and automated. So I’ll just speak a little bit about Kubernetes and a little bit about StorageOS. In the Kubernetes world, we’re seeing both managed service providers and the distributions moving to what they’re calling immutable operating systems. We’re seeing this across a number of the different service providers now. What that means is that, effectively, when you have a managed Kubernetes cluster, you upgrade by throwing away a node and adding a new node with the new version to the cluster. This, while not quite a non-disruptive process, is an in-place process, where applications and data get drained from nodes and nodes get replaced on a rolling basis. When it comes to StorageOS, we’ve invested quite a lot of work into this concept called an operator, which is a Kubernetes construct that effectively automates the work that a traditional [inaudible 00:27:27], I guess, would be responsible for.

Alex Chircop:
That includes things like for example, removing nodes, adding nodes to the cluster, scaling the cluster, but also the upgrade process. So similar to that, with StorageOS, there is an upgrade process. Upgrading from version one to version two involves a couple of steps but they’re looking to be automated through an in-place upgrade too.
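
The operator-driven upgrade Alex describes can be sketched as a rolling node replacement: drain a node, replace it with one running the new version, repeat. The node model and function here are illustrative assumptions, standing in for real orchestrator calls, not the actual StorageOS operator.

```python
# Hedged sketch of an operator-style rolling upgrade: each out-of-date
# node is drained and replaced by a fresh node on the new version,
# one at a time, so the cluster stays available throughout.
def rolling_upgrade(nodes: list, new_version: str) -> list:
    upgraded = []
    for node in nodes:
        if node["version"] == new_version:
            upgraded.append(node)       # already current, leave it alone
            continue
        # 1. cordon & drain: workloads and data replicas move elsewhere
        # 2. replace: immutable-OS style, a fresh node joins the cluster
        upgraded.append({"name": node["name"], "version": new_version})
    return upgraded

cluster = [{"name": "n1", "version": "1.9"}, {"name": "n2", "version": "2.0"}]
result = rolling_upgrade(cluster, "2.0")
print([n["version"] for n in result])
```

A real operator would wait for data to reconverge on each replacement node before moving to the next, which is where the delta-sync behavior matters.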

Chris Evans:
So that all sounds great, but I’m really interested to understand what exactly you’re finding from the customers that you have and what their experiences are, because you said earlier that there were 3,000 clusters out there. With that many environments out there, you must be getting some good information back. What are you seeing customers do and what successes are they having with the platform?

Alex Chircop:
So we’re seeing quite a wide variety of adoption, as you can imagine. Everything from small, developer-focused instances, to midsize projects, to larger environments. Some of the more interesting environments and customers we’re working with include, for example, a financial services trading platform that is being rolled out to a number of different banking systems right now, where StorageOS is providing both availability and trade processing capabilities for these really large financial services organizations. We’re also seeing adoption in life science environments, where there is a strong mix between analytics and the data services that are used by researchers. We’re also seeing some really innovative, more complex workloads. I’ll give you two examples. One is a case where users have a very sophisticated CI/CD environment, which involves fairly complex workflows and lots of regular updates, where this particular customer actually creates and deletes something like 3,000 volumes a day as part of their workflow.

Alex Chircop:
We’re also seeing more complex environments with, for example, another trading firm that’s using StorageOS to create multiple copies of data that are used for different analytics and different batch requirements at different points in the day. So it’s everything from the mundane and more traditional, but we’re also seeing the types of workloads that can only happen in these kinds of cloud-native environments. For example, the vast majority of traditional [inaudible 00:30:00] would seriously struggle to create and delete thousands of volumes on a daily basis.

Chris Evans:
I find that interesting and Martin, you’ve probably got a better angle on this than I have because you actually have people working for you and I don’t. But I find it interesting that Alex is almost pointing out that the sort of use cases that you’re seeing, which are the interesting ones, are the ones where storage is becoming part of the actual workflow process, rather than just being a place to keep your data. I think that that is a really interesting use of the technology.

Martin G:
Certainly in our world, storage is part of the workflow process. It’s absolutely key to how we move content around and what we do with content as we work on it. It’s not just somewhere we store data. We do have very large data stores, but this concept of workflow storage has become absolutely key to the way we deal with content in our business.

Alex Chircop:
Yeah, very true.

Chris Evans:
Do you think that’s the majority of what you’re seeing out there, Alex? That we’re seeing an evolution in the way people are developing apps, into applications and solutions that really have that workload requirement, and therefore traditional storage simply won’t be able to match those requirements?

Alex Chircop:
I think that’s right. I think for some of these use cases, traditional storage just can’t keep up with those rates of change, for example. But it’s also about the type of changes that Kubernetes, and orchestrators in general, bring to an environment. The whole concept of having composable infrastructure, or a declarative way of describing what your infrastructure needs to look like, really does change how people consume infrastructure. One of the things we’ve been considering is: what’s driving the adoption of all of these things? I think we have this concept that cloud came along and started providing the ability to have infrastructure on demand, but actually, in our thinking, cloud isn’t a place. What it’s done is create a set of behavioral models that people want to adopt.

Alex Chircop:
So end users want a switch-on/switch-off consumption model, they want self-service, they want automation on deployment, and they want automation around operational processes too. So we’re seeing every layer of the stack, whether it’s compute, or storage, or networking, operate to provide these dynamic services, because end users want to be able to say, “This is my application, this is what it needs,” and they just want the thing to work. They don’t want to have to go through and configure everything by hand. So as we move to those sorts of environments, we’re going to see a change in focus: rather than focusing on the complexities of how you configure your infrastructure, application developers can now focus on, well, their actual application.
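[Ed: In Kubernetes terms, the “this is my application, this is what it needs” model Alex describes is a developer declaring a storage requirement and letting dynamic provisioning do the rest. A minimal sketch, assuming a StorageClass named "storageos" has been set up on the cluster:]

```yaml
# The developer declares only what the application needs; the platform
# provisions a matching volume dynamically. "storageos" is an assumed
# StorageClass name, shown here for illustration.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  storageClassName: storageos
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

Nothing in this claim says where or how the storage is implemented; that separation is what makes the consumption model self-service.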

Chris Evans:
Yeah, I think you’re right. I think we saw first VMware and then the cloud allow us to just say, “Give me a VM,” which effectively was, “Give me an environment to write my application into.” Even more so when we saw that move with containers. The fact that you can just say, “Give me an asset to do some work with,” means storage shouldn’t be treated any differently. It’s that workflow, driven much more quickly by containers, that’s forcing the change. I think that’s a really key message for people to think about going forward.

Martin G:
I think we’re recording this at a very interesting time. Obviously we’re in the middle of the coronavirus crisis, and it would be really nice for a lot of the larger end users to be able to redefine their infrastructure and re-declare it without a massive amount of extra work. You could almost have a pandemic-mode declaration, which might change your application mix and which applications spin up, but most of us don’t have that at the moment. So we’re trying to do things by hand, and if we had a much more mature orchestration environment ready to go, there’s a whole load of stuff that could have been done a lot more quickly. I think the current crisis is actually going to drive some of these changes and reinforce what people are trying to do.

Chris Evans:
Yeah, agreed. I don’t think we should be looking at things and thinking about the adversity side of it. We should be thinking of the positive side: inevitably this is an evolution, but it just might be an evolution that’s been pushed a bit faster because of where we stand.

Martin G:
Well, today actually, I saw an interesting tweet from somebody who was suggesting a new survey for companies: what’s driven your digital transformation, your CEO, your CIO, or COVID-19?

Chris Evans:
Oh yeah, yeah, yeah, yeah, absolutely. Okay, so Alex, if people want to go and find out about the technology, try it out, understand what platforms it’s supported on, and generally get to that next stage of understanding how to use it, mess about with it, and then even consider putting it into production, how would they go about doing that?

Alex Chircop:
So that’s really easy. The first port of call is the website at storageos.com. There you can find all the documentation as well as the software. The software is available for free on Docker Hub and other registries, and it is just a container. One of the nice things about StorageOS is we don’t have any particular dependencies, so anywhere you can install a container, you can install StorageOS. We currently support all of the major distributions and the major managed service providers, including things like OpenShift and Rancher, as well as EKS, AKS, and Google, et cetera. The easiest way to try the software is to follow the self-evaluation guides. The software is available on a freemium model, which means that effectively your first 500 GB are free, and the rest of the software is available on a subscription basis.

Chris Evans:
So obviously, that now means version two is available, so people can go and actually download and try out version two of the software?

Alex Chircop:
That’s right. Version two is available and all the documentation for version two is available on docs.storageos.com.

Chris Evans:
Fantastic. Well, Alex, thanks for joining us. That’s been really interesting; there are some very thought-provoking pieces there around how we’re going to adopt technology like this going forward, as part of the way we develop applications, and I appreciate you coming on. Thanks very much for your time, and we look forward to catching up with you soon.

Alex Chircop:
Thank you, Chris and thank you, Martin. This has been a great conversation.


Copyright (c) 2016-2020 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #23EK.