
Cloud technologies have been on the rise for more than a decade. Since the introduction of today's most popular cloud technologies, Docker (2013) and Kubernetes (2014), migration to the cloud has become easier and uncertainties related to cloud technologies have been reduced significantly. Today, more and more of the world's computing is done in the cloud. This chapter covers the basics of cloud computing and elaborates on how microservice architecture can help in harnessing its benefits. At the end of the chapter, the building blocks of modern cloud applications – Docker containers – are introduced.

2.1 Cloud Computing

Cloud computing is a resource sharing model that is used to give access to computing resources, such as storage and servers. The resources can be accessed easily and on demand via a network, and they are allocated automatically with minimal interaction with the service provider. Cloud computing services are provided using three different service models: Infrastructure as a Service (IaaS), in which physical resources such as computing hardware and storage are provided as a service; Platform as a Service (PaaS), which provides both the physical infrastructure and the operating systems for developing and deploying applications; and Software as a Service (SaaS), which provides applications as services via a thin client interface or programming interface [2].

One of the key characteristics of cloud computing is rapid elasticity, meaning that the provisioned resources can be scaled up and down quickly to match the demand [2]. This reduces the need to over-allocate resources for varying loads and enables cloud providers to encourage a more constant load by pricing resources based on available supply.

Cloud services can be deployed within an organisation (private cloud) or using an outside service provider (public cloud) [2], such as Google, Amazon, or Microsoft. Whether using private or public cloud – or a combination of them (hybrid cloud) [2] – efficient allocation of resources is a major concern. Meeting the demand exactly, and thus using cloud services efficiently, sets requirements for the applications to be developed. Most importantly, the horizontal scaling of the application should happen easily and automatically. This requirement can be fulfilled with microservice architecture, which is discussed next.

2.2 Microservices

Microservices are small services that are used as building blocks when creating more complex systems. While in traditional architectures systems consist of parts with different roles, microservice architecture uses more independent and loosely coupled microservices to achieve modularity. The benefits of microservices stem mainly from their independence, which makes them easy to develop, deploy and scale [3].

Systems that use microservices are often more robust than traditional systems. As the components of the system are independent services, failure in one component does not render the other components unusable. Furthermore, when one instance of a service fails, other instances can be created to take over the responsibilities of the failed service. The independence of the services also makes them inherently scalable. Service instances can be added and removed according to need. While applications using traditional architectures can be scaled as well, they usually must be scaled as wholes. When a system is built from microservices, parts of the system can be scaled according to the demand for that specific service [3]. This can make resource use more efficient and reduce both bottlenecks and the over-allocation of resources.

To harness the benefits of both robustness and scalability, the states of the services must be managed. Applications that have some sort of memory of previous events that they use to fulfil their tasks are considered stateful. The information in memory defines the application’s state. Ideally, microservices are stateless [4]. If a service is stateless, it does not matter which instance of it handles which task. This makes scaling and taking tasks over from failed instances easy. Many services still require some state information, which makes scaling and recovering them problematic. As will be seen later, this is the case for the current implementation of the target system of this thesis.
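The practical difference between stateful and stateless services can be illustrated with a small sketch. The example below is hypothetical and not from the target system of this thesis: it contrasts a service that keeps a counter in instance memory with one that keeps the counter in a shared external store (here a plain dictionary standing in for, say, a database or cache).

```python
# Hypothetical sketch of stateful vs. stateless request handling.

class StatefulCounter:
    """Keeps its state in instance memory, so each instance diverges."""

    def __init__(self):
        self.count = 0  # state lives inside this particular instance

    def handle_request(self):
        self.count += 1
        return self.count


class StatelessCounter:
    """Externalises state: any instance can handle any request."""

    def __init__(self, store):
        self.store = store  # shared external store, not instance memory

    def handle_request(self, key="count"):
        self.store[key] = self.store.get(key, 0) + 1
        return self.store[key]


if __name__ == "__main__":
    # Two stateful instances drift apart: only one of them saw the request.
    a, b = StatefulCounter(), StatefulCounter()
    a.handle_request()
    print(a.count, b.count)  # the instances now disagree about the state

    # Two stateless instances stay interchangeable via the shared store.
    shared = {}
    c, d = StatelessCounter(shared), StatelessCounter(shared)
    c.handle_request()
    d.handle_request()
    print(shared["count"])
```

Because the stateless instances are interchangeable, any of them can be removed, added, or replaced after a failure without losing the count, which is exactly what makes scaling and recovery easy.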

Microservices should be run in isolation to prevent other microservices from affecting their performance. Isolated environments can be achieved through virtualisation, but the overhead of traditional virtualisation often becomes too high when small microservices are used. Containers offer a solution to this, providing isolated execution environments with less overhead and better cost efficiency [3]. The most widely used container technology is Docker [1], which is discussed next.

2.3 Docker Containers

Docker is an open container platform released in 2013 and developed by Docker Inc. Docker allows packaging applications with their dependencies and running them in loosely isolated environments called containers [5]. Containers are similar to virtual machines (VMs) but strip away more of the functionality not required by the application. Containers achieve this by sharing host operating system services instead of relying on their own guest operating systems like VMs do. This makes containers more lightweight and faster to start than VMs. The difference between VMs and containers is illustrated in Figure 1.

Figure 1. Virtual machines (on the left) have their own operating systems, while Docker containers share host operating system services via Docker Engine. Adapted from [6].

Docker is used for building, running, and shipping containerised applications. These operations are powered by Docker Engine, which resides on top of the host operating system and operates using its resources. The Engine has a client-server architecture that consists of a server, a command line client, and the API between them. The server includes a daemon process called dockerd, which is responsible for creating and managing Docker objects, such as containers. The daemon is controlled by using the command line interface (CLI) client [7].

Docker packages applications and their dependencies into images. A Docker image contains all the information needed to run a Docker container. Images can be created by building on top of other images or by starting from scratch. The instructions to build an image are defined in a Dockerfile. Docker images can be shared with other users via Docker Hub or private registries. Containers are short-lived instances of Docker images.
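As a sketch of what such build instructions look like, the hypothetical Dockerfile below packages a small Python application on top of an existing base image; the file names and the base image tag are illustrative assumptions, not part of the target system.

```dockerfile
# Hypothetical Dockerfile: build on top of an existing base image.
FROM python:3.12-slim

# Copy the application into the image and install its dependencies.
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# Command executed when a container is started from this image.
CMD ["python", "app.py"]
```

Running `docker build -t myapp .` in the directory containing this file builds the image, and `docker run myapp` starts a container from it.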

Docker uses Linux namespaces to isolate containers from each other and the host. Containers are managed by using either the Docker CLI or the Docker API. Containers can be connected to networks, and storage can be attached to them with volumes [5]. Volumes are Docker objects used to persist the data that containers generate and use [8].

Docker supports defining and running multi-container applications through a separate tool called Docker Compose. Instead of running services through the Docker CLI, Docker Compose allows creating a YAML file that defines a set of services and running them with a single command. Volumes and networks for the application can also be defined in the Compose file. Docker Compose enables deploying a microservice application on a single Docker host. It is often used in development environments and continuous integration workflows, but it works in production environments as well [9].
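A minimal, hypothetical Compose file might look as follows; the service names, image tag, and volume name are illustrative assumptions, chosen only to show how services, a named volume, and their wiring are declared in one file.

```yaml
# Hypothetical Compose file for a two-service application.
services:
  web:
    build: .              # build the web image from the local Dockerfile
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:16
    volumes:
      - db-data:/var/lib/postgresql/data   # named volume persists the data

volumes:
  db-data:
```

With this file in place, `docker compose up` builds the images as needed and starts both services, their network, and the volume with a single command.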