Containers, the core technology of serverless computing

“Serverless computing is a cloud computing execution model in which the cloud provider dynamically allocates and manages resources, and pricing is based on the resources the application actually consumes rather than on capacity purchased in advance.”

Despite the name, serverless computing still runs on physical servers. It is a type of cloud service in which all server management tasks, such as capacity scaling and where the servers actually run, are handled entirely by the provider, so the developer or operator only needs to pay attention to the application's functionality.


Containers, the core technology of serverless computing

In serverless computing, every element except the application is provided by the cloud provider as a service, which is why the model is sometimes referred to as 'Function-as-a-Service' (FaaS). The provider operates the physical infrastructure, virtual machines, containers, and the integrated management layer, and users manage only their applications.
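
To make the Function-as-a-Service idea concrete, here is a minimal, provider-agnostic sketch of a function handler in Python. The handler(event, context) signature and the event fields are assumptions for illustration, not the interface of any particular platform; the point is that the platform invokes the function on demand and everything beneath it, including the container it runs in, is managed by the provider.

```python
# Minimal, provider-agnostic FaaS handler sketch.
# The handler(event, context) signature and the "name" field in the event
# are hypothetical; real platforms define their own event/context shapes.

def handler(event, context):
    """Invoked by the platform on demand; servers and containers are managed for us."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}


if __name__ == "__main__":
    # Local smoke test with a fake event and no context object.
    print(handler({"name": "serverless"}, None))
```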

Among these, the container is a virtualization technology far lighter than a virtual machine: it packages and runs the application together with everything it needs to execute (libraries, configuration files, and so on). Because the libraries that matter for compatibility travel with the application, you can move it to a different OS version or server, or to an entirely different computing environment, without changing the code.


Because a container's footprint is smaller than a virtual machine's, developers can create and deploy containers faster, and restarts are quick. For this reason, container instances can simply be shut down when not in use to optimize server utilization, and serverless computing is generally built on this container technology.
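
As a small illustration of how quickly container instances can be brought up and torn down, the sketch below uses the Docker SDK for Python (the docker package) to start a container on demand and remove it once it is no longer needed. The image name is only a placeholder for "some packaged application".

```python
import docker

# Connect to the local Docker daemon (requires Docker to be running).
client = docker.from_env()

# Starting a container takes seconds because the image already bundles
# the application and the libraries it needs.
container = client.containers.run("nginx:alpine", detach=True)
print("started:", container.short_id)

# When the workload is idle, the instance can simply be stopped and
# removed so it no longer consumes server resources.
container.stop()
container.remove()
print("stopped and removed")
```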

However, in a container environment where services (and their containers) are created and destroyed in seconds, anywhere from a few dozen to tens of thousands of volumes may need to be created in real time. Because these volume creation and deletion operations happen dynamically, the traditional approach of statically allocating volumes in advance no longer fits; a storage environment that can provision volumes dynamically is required.
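
In Kubernetes, dynamic provisioning is usually expressed by creating a PersistentVolumeClaim that references a StorageClass, letting the storage plug-in carve out the volume on demand. The sketch below uses the official Kubernetes Python client; the claim name, namespace, size, and the "fast-block" StorageClass name are placeholders.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (inside a cluster you would
# use config.load_incluster_config() instead).
config.load_kube_config()
core_v1 = client.CoreV1Api()

# A PVC that references a StorageClass: the matching provisioner creates the
# backing volume on demand instead of picking from a pre-allocated static pool.
pvc_manifest = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "demo-claim"},          # placeholder name
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "fast-block",        # placeholder StorageClass
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

core_v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc_manifest)
print("PVC created; the storage plug-in will provision the volume dynamically")
```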

Another issue: persistent volumes


Until now, containers have mostly run web servers and other stateless workloads that hold only temporary data, so there was no need to keep existing data when a container was deleted. Recently, however, container use has expanded to stateful workloads such as database servers, and these need persistent volume support: the latest state and database data must be preserved even when the container is powered off, so the service can resume when it is restarted. Initially, persistent volumes were stored as files on a shared network file system, but that approach is not suitable for database servers that require high-performance I/O. Instead, high-performance external storage volumes (LDEVs) are allocated dynamically over FC/iSCSI.
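
For stateful workloads such as databases, Kubernetes StatefulSets pair each replica with its own persistent volume through volumeClaimTemplates, so the data survives when the pod is stopped and is reattached when it restarts. The sketch below, again using the Kubernetes Python client, is illustrative only: the image, the demo-only password, and the "fast-block" StorageClass are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()
apps_v1 = client.AppsV1Api()

# Each replica gets its own PersistentVolumeClaim from the template below,
# so the database files outlive the pod and reattach on restart.
statefulset = {
    "apiVersion": "apps/v1",
    "kind": "StatefulSet",
    "metadata": {"name": "demo-db"},
    "spec": {
        "serviceName": "demo-db",
        "replicas": 1,
        "selector": {"matchLabels": {"app": "demo-db"}},
        "template": {
            "metadata": {"labels": {"app": "demo-db"}},
            "spec": {
                "containers": [{
                    "name": "postgres",
                    "image": "postgres:15",  # placeholder image
                    "env": [{"name": "POSTGRES_PASSWORD", "value": "example"}],  # demo only
                    "volumeMounts": [{"name": "data", "mountPath": "/var/lib/postgresql/data"}],
                }],
            },
        },
        "volumeClaimTemplates": [{
            "metadata": {"name": "data"},
            "spec": {
                "accessModes": ["ReadWriteOnce"],
                "storageClassName": "fast-block",   # placeholder StorageClass
                "resources": {"requests": {"storage": "20Gi"}},
            },
        }],
    },
}

apps_v1.create_namespaced_stateful_set(namespace="default", body=statefulset)
print("StatefulSet created; its volume persists across container restarts")
```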

HSPC: supporting an optimal container environment

Hitachi Vantara's Hitachi Storage Plug-in for Containers (HSPC) supports dynamic creation of persistent volumes by integrating with container orchestration tools such as Kubernetes and Docker Swarm. Once HSPC is installed and a simple configuration is completed, storage volumes can be provisioned and managed automatically and dynamically from the orchestration tool.
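
On Kubernetes, the link between claims and a storage plug-in such as HSPC is typically expressed as a StorageClass whose provisioner field names the plug-in's driver. The sketch below registers such a class with the Kubernetes Python client; the provisioner string and parameters shown are hypothetical placeholders, not HSPC's actual driver name or settings, which come from the Hitachi documentation.

```python
from kubernetes import client, config

config.load_kube_config()
storage_v1 = client.StorageV1Api()

# A StorageClass ties PersistentVolumeClaims to a specific storage plug-in.
# The provisioner string and parameters below are placeholders; the real
# values for HSPC come from the vendor documentation.
storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "vsp-block"},          # placeholder name
    "provisioner": "example.csi.vendor.com",    # hypothetical driver name
    "parameters": {"poolID": "1"},              # hypothetical parameter
    "reclaimPolicy": "Delete",
    "allowVolumeExpansion": True,
}

storage_v1.create_storage_class(body=storage_class)
print("StorageClass created; claims that reference it will be provisioned dynamically")
```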

With HSPC and the next-generation hybrid cloud storage VSP G-series and all-flash cloud storage VSP F-series launched in the first half of 2018, anywhere from a minimum of 16,000 to a maximum of 64,000 volumes can be created dynamically for containers, guaranteeing the container scalability on which serverless computing is built.

In addition, because proven advanced storage management functions such as performance monitoring and remote failure handling can be used as they are, a 100% data availability service can still be provided when the infrastructure environment is converted to containers, which is the only such offering in the industry. This allows customers to minimize risk when transitioning to a cloud environment and to focus on building their target cloud/serverless computing environment.