People rush to the cloud, and it’s understandable. But what do you really want from the cloud? And can you reap those benefits yourself, on the premises?
To answer that, let’s take a look at the benefits of the public cloud. Many people think of the cloud as renting, rather than buying, hardware. The public cloud provides faster access to machines, with no waiting while hardware is ordered. You also don’t have to make a long-term commitment to particular types of machines, as you do when purchasing them. But you don’t just rent hardware with the public cloud – you also rent IT resources from the cloud provider.
This idea of renting both hardware and human resources brings us back to the original question: what do you want to get out of public cloud services?
Much of what organizations are looking for is flexibility. You can quickly start from scratch with a wide variety of applications and projects. This cloud-native agility is particularly attractive for innovative projects that rely on heavy computing, such as the training phases of AI and machine learning models. The ability to scale quickly and temporarily is also attractive for seasonal traffic spikes, such as those retail companies experience.
What you get with the cloud sounds great, but what do you have to give (or give up) to get it?
Costs and compensation
There are three main areas where you give up something to get the flexibility that the public cloud offers. An obvious one is the expense of renting from a public provider; as your needs grow, this cost can become substantial, and you have little control over it.
But a second trade-off that people often overlook is the issue of public multi-tenancy. With the public cloud, you don’t control who is on the other side of the wall, as depicted in this figure.
This lack of control over who shares resources, and how they are shared, is not primarily a security issue, although that is something to consider. It is also a question of who has priority. The public cloud makes business sense in part because shared resources let the provider optimize resource usage and costs. But resources can be heavily oversubscribed, and since you’re not necessarily first in line for them, this can make a big difference when you have heavy workloads. Not only do you give up control over costs and over who you share resources with, but you can also give up control over the level of performance you get when you need it.
Perhaps the biggest trade-off is the third: location. With no machines on the premises, moving data and applications becomes a challenge. Many people who use public cloud services now realize that cloud migration is harder than it sounds. Moving everything at once to the cloud is not really feasible. You may want to move only certain applications to the cloud, but you most likely have many interlocking applications, so it is difficult to move just a few.
What if you could get many of the benefits of the public cloud with fewer trade-offs and lower costs? You can, by bringing the cloud to you.
The secret ingredient for the cloud: consider the private cloud
The private cloud provides much of the flexibility and convenience of the public cloud, but allows you to maintain control over costs, security, and how workloads are allocated. To consider what it would take to build and maintain a private cloud, first recognize a key reason why public clouds work as a business model: delegation.
In this context, what does delegation mean to you, and why is it important? Part of the genius of the public cloud is that public cloud providers rely heavily on an operator-tenant-user style of resource allocation and management (as compared to a more traditional IT/user style). In the operator-tenant approach, IT responsibility is divided between the provider (operator) and the customer’s IT and administrative teams (tenant). This delegation model frees the tenant and user from an intensive, highly skilled IT effort (that’s part of what you get from the cloud provider), leaving only basic, customized administration to the customer; hence the benefits of convenience and flexibility.
This delegation of responsibilities also makes sense for the cloud provider. How can they afford to handle the biggest share of the IT load? The answer is simple: the tasks delegated to them are common across tenants and users, and that often means these tasks can be automated. That is most of what makes it possible for cloud providers to handle IT for such a large number of tenants. Delegation in this context is radically different from simply outsourcing IT. With outsourcing, the goals are not aligned, and the invoice, usually based on hours worked, reflects this. With the cloud, the customer pays for results instead of effort.
If you could adopt this optimized model of operations for your own systems, you could bring many benefits of the cloud on premises, under your own control. Here’s how.
What does it take to build a private cloud?
For the private cloud to work, you need an efficient system at scale that truly supports multi-tenancy for diverse workloads without imposing a heavy burden on IT. Delegating in a controlled way makes possible a DevOps model, in which users serve themselves (the tenant/user roles in the public cloud model) while IT teams (the operators) handle much more standardized tasks.
Automation of logistics tasks (data movement, data location, workload balancing, data replication, control over access to data and usage limits, and self-healing systems) provides the efficiency that makes all of this doable. This efficiency really pays off in convenience and agility, as well as value for money.
You don’t have to build all this automation yourself. An underlying software infrastructure for data storage and management that is designed to handle many data logistics tasks automatically can reduce IT effort and provide many self-service options for users. Internal users pay for resource usage in terms of results, not IT effort.
Another key aspect of a cloud-native system is the ability to leverage application containerization, both for agility and to run different workloads in separately defined environments on the same shared data infrastructure. The combination of a container orchestration framework like Kubernetes plus a data infrastructure that can hold containerized application data is essential to building a private cloud.
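To make this concrete, here is a minimal sketch of how a containerized application might request persistent storage from a shared data layer on Kubernetes. The storage class name "datafabric" and the helper function are hypothetical illustrations, not actual HPE Ezmeral identifiers; the manifest structure itself follows the standard Kubernetes PersistentVolumeClaim API.

```python
# Sketch: building a Kubernetes PersistentVolumeClaim manifest that a
# containerized application could use to request storage from a shared
# data infrastructure. "datafabric" is a placeholder storage class name.

def make_pvc(name: str, size_gi: int, storage_class: str = "datafabric") -> dict:
    """Build a PersistentVolumeClaim manifest as a plain dict."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            # ReadWriteMany lets multiple pods share the same volume,
            # matching the shared-data-infrastructure model described above.
            "accessModes": ["ReadWriteMany"],
            "storageClassName": storage_class,
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }

pvc = make_pvc("analytics-data", 50)
print(pvc["spec"]["resources"]["requests"]["storage"])  # 50Gi
```

In practice, a manifest like this would be serialized to YAML and applied to the cluster; the orchestrator then binds the claim to a volume backed by the shared data layer, so the application code never needs to know where the data physically lives.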
Technologies that support the private cloud
To address these requirements for building on-premises cloud systems at scale, HPE provides hardware-independent, software-defined solutions. HPE Ezmeral Data Fabric is a highly scalable data infrastructure designed to handle data logistics automatically at the platform level.
With built-in self-healing capabilities, the data fabric provides reliability at scale. The data fabric also offers an efficient management mechanism in the form of data fabric volumes, which provide controlled delegation of tasks between users and system administrators. The data fabric’s open data access through multiple APIs makes it ideal for supporting multi-tenancy of diverse workloads at scale, and it serves as a data persistence layer for containerized applications orchestrated by Kubernetes. To make it easy to manage multiple Kubernetes clusters on a large system, HPE provides the HPE Ezmeral Container Platform, with the data fabric as a built-in data persistence layer.
About Ellen Friedman
Ellen Friedman is an HPE Principal Technologist focused on machine learning and large-scale data analytics. Ellen worked at MapR Technologies for seven years prior to her current role at HPE, where she engaged in the open source projects Apache Drill and Apache Mahout. She is the co-author of several books published by O’Reilly Media, including AI & Analytics in Production, Machine Learning Logistics, and the Practical Machine Learning series.
Copyright © 2021 IDG Communications, Inc.