
Automatic Testing Approaches For Serverless Applications In AWS

Faculty of Information Technology and Communication Sciences
M.Sc. Thesis
October 2021


Eetu Rinta-Jaskari: Automatic Testing Approaches For Serverless Applications In AWS

M.Sc. Thesis

Tampere University

Master’s Degree Programme in Software Development
October 2021

With the increasing popularity of cloud services and cloud-native applications, serverless functions (FaaS, or Function-as-a-Service) are an emerging pattern for cloud application development. While the topic is under active research, little is known about how practitioners can apply existing automatic testing practices to serverless functions. Current research notes that the tight coupling with cloud services, the lack of tools, and reduced monitoring and observability capabilities introduce challenges to testing and troubleshooting serverless functions. This thesis aims to identify and document practical automatic testing approaches for serverless functions built for the Amazon Web Services cloud platform.

The thesis discusses the fundamental concepts of serverless functions and software testing, the current state of the technology, and collects possible testing approaches by conducting a literature review of guidebooks. The testing approaches are applied in practice to a full-stack application, where the back end application comprises a serverless function and a cloud database service. Unit, integration, and system testing approaches are examined and applied to the application. The study identifies three distinct integration testing approaches: local, hybrid, and cloud integration testing. Each of the approaches varies based on the level of mocking used and the utilization of real-world cloud services. The study showcases how to test serverless functions using current tools, including an Infrastructure-as-Code framework, and discusses the advantages and disadvantages involved with each testing approach.

Keywords: software testing, serverless function, Amazon Web Services, Lambda, Function-as-a-Service, Infrastructure-as-Code.

The originality of this thesis has been checked using the Turnitin Originality Check service.


1 Introduction
2 Serverless Functions & Cloud
2.1 Cloud Service Layers
2.2 Advantages, Challenges, and Disadvantages
2.2.1 Cost
2.2.2 Application State & Cold Starts
2.2.3 Vendor Lock-In & Technology Limitations
2.2.4 Event-Based Programming Model
2.2.5 Monitoring & Observability
2.3 Tooling for Serverless Functions
3 Testing
3.1 Testing Levels
3.1.1 Unit Testing
3.1.2 Integration Testing
3.1.3 System Testing
3.1.4 Acceptance testing
3.2 Background on Testing Serverless Functions
3.3 Testing Approaches in Guidebooks
3.3.1 Unit Testing
3.3.2 Integration Testing
3.3.3 System Testing
3.3.4 Summary of Approaches
4 The Full-Stack Application
4.1 Serverless Framework & AWS Services
4.2 Application Description & Architecture
4.3 Infrastructure Management
4.3.1 The Pulumi Application
5 Applying The Approaches
5.1 Testing Tools
5.1.1 General Purpose Testing Framework
5.1.2 End-to-End Testing Framework
5.2 Unit Tests
5.2.1 Alternative Tools & Methods
5.3 Integration Tests
5.3.1 Local Integration Tests
5.3.4 Alternative Tools & Methods
5.4 System Tests
5.4.1 Alternative Tools & Methods
5.5 Summary of Tests
6 Discussion
6.1 Testing Approaches
6.1.1 Complexity of Testing
6.1.2 What Is Missing
6.2 Tools
6.2.1 Licenses
6.3 Threats to Validity
7 Conclusion
References
APPENDIX A. Back End Test Coverages


1 Introduction

Serverless functions are a recent form of software that lives in the cloud. They are, first and foremost, cloud-native applications. The technology promotes building small functions, leading applications towards a microservice structure. The idea of serverless functions has been around for a while but did not become popular until 2014 with the introduction of Lambdas to the AWS (Amazon Web Services) cloud platform [32, 72, 17]. In the context of cloud services, serverless functions refer to the FaaS (Function-as-a-Service) category of services provided by cloud providers such as Google Cloud Platform (Cloud Functions), Amazon Web Services (AWS Lambda), and Microsoft’s Azure (Azure Functions). The term serverless is also used to refer to BaaS (Backend-as-a-Service), cloud provider managed back end services, such as databases, that can be used in combination with FaaS functions [17, 34].

In the FaaS model, the cloud provider controls the application runtime infrastructure and provides the application with the computational resources it requires. Software practitioners are in charge of defining and configuring the cloud infrastructure on a more general level and implementing the serverless functions [32, 35, 17]. Serverless services generally follow a pay-as-you-go subscription model, where the computational resources used by the application are billed based on their usage [32, 44]. Serverless functions can scale to zero, meaning the application instances are running only when necessary. Serverless functions are invoked within the cloud platform through some trigger, such as an incoming HTTP request [32, 44]. Besides cost, the benefit of serverless applications is that they scale horizontally automatically and seemingly infinitely while cutting back on maintenance and management needs [44, 38].

The serverless application model is actively researched, and current publications are aware of challenges with the technology, including the challenges with testing serverless applications. Serverless functions are tightly coupled to the cloud provider’s runtime environment and services, introducing challenges to software testing, as the environment is difficult to reproduce elsewhere [38]. Concrete, practical approaches for the automatic software testing of serverless applications have not yet been well documented in scientific literature. Therefore, this thesis aims to discover practical approaches for testing and explore how to apply them in practice. The research questions in this thesis are:

RQ1. What are the current challenges for testing serverless functions found in scientific literature?


RQ2. What are the different automatic testing approaches for FaaS applications?

RQ3. How can the approaches be applied in practice?

First, scientific literature is reviewed to understand the background on serverless functions’ current state and challenges. Then, a literature review of various guidebooks aimed at serverless application development is conducted. Once the testing approaches have been collected, the thesis will explore how to apply the approaches to a cloud-native serverless application and what kind of tools can be used in the process. The approaches are then analyzed to find what advantages and disadvantages each approach has. Alternative methods and tools for applying the approaches are also discussed.

The thesis comprises seven chapters. Chapter 2 explores the concept of the serverless application model in general and the scientific literature on the subject. Chapter 3 discusses the background of testing in general and of serverless applications, the methodology for the guidebook literature review, and the discovered testing approaches. The collected testing approaches are then applied to a serverless application that is first introduced in Chapter 4. Then, in Chapter 5, the approaches and their application process are shown and discussed. The thesis results are discussed in Chapter 6, and lastly, the thesis is concluded in Chapter 7.


2 Serverless Functions & Cloud

Serverless functions, called Lambdas in AWS, are a modern way of developing cloud-native software solutions that take advantage of cloud platforms’ scalability and dynamic nature. In the serverless function application model, the management of the application instances, usually residing within containers, is relinquished to the cloud platform [17, 35]. In this sense, the term serverless is slightly misleading, as the applications still do run on a server, but that server is not a concern for the operation of the application [17]. The concept of serverless applications encourages a microservice architecture for applications where the application is divided into smaller, more optimized functions. Serverless functions are intended to be small programs with a single purpose [44, 38].

AWS imposes certain limitations on the serverless functions on the platform. Currently, Lambda execution times are restricted to 15 minutes [6]. Additionally, the Lambda application zip image size is limited to 250 MB, or 10 GB when using container images [6]. Leitner et al. [38] note that the restricted run time length and application binary sizes are considerable limitations and challenges for serverless applications. However, it is worth noting that their study was conducted in 2019, whereas AWS Lambda introduced the possibility of 10 GB container images in 2020, overcoming the particular limitation to a degree [38, 51]. Therefore, serverless functions can be built using more traditional architectural models as long as the application stays within the set bounds [17, 25]. The serverless application model is not perfect and still maturing, and as with any technology, it comes with trade-offs and limitations discussed further in the following sections.

2.1 Cloud Service Layers

To better understand Serverless and FaaS, it is essential to know how cloud services are categorized. The key difference between cloud services and regular server rental services is that instead of offering dedicated hardware, the resources provided by the cloud service are in most cases virtualized, bringing along the benefit of elasticity [34]. Elasticity means that the system can automatically scale based on the resources required by the system [34]. There are different levels of abstraction and services available for application hosting in cloud platforms, which are generally divided into IaaS, PaaS, and FaaS [34].

IaaS (Infrastructure-as-a-Service) falls under the lowest level of virtualization offered in cloud services, the Virtualization model depicted in Figure 2.1. IaaS refers to automatically scalable VMs (Virtual Machines) managed by the cloud platform. [34]

The next layer depicted in Figure 2.1 is PaaS (Platform-as-a-Service), which utilizes containerization services such as Kubernetes and Docker Swarm to automatically manage and scale application containers based on their configuration. Containers are generally more lightweight than regular VMs because they run on top of the host machine's kernel instead of a full-blown guest OS (Operating System). The lightweight nature of containers allows faster scalability and more efficient utilization of hardware resources. [34]

FaaS, or serverless functions, is built on top of the PaaS technology. The difference is that with FaaS, the cloud platform takes control of managing the containers automatically. Compared to PaaS, FaaS allows so-called scaling-to-zero, meaning that when there is no load on the application, the FaaS system shuts down the containers. In contrast, with PaaS, containers are generally always on. [34]

Figure 2.1 Cloud architecture layers. Based on [34].

2.2 Advantages, Challenges, and Disadvantages

This section explores the different challenges, advantages, and disadvantages serverless applications face in general. The most mentioned topics in the scientific literature were cost, application state, cold starts, vendor lock-in, technology limitations, the event-based programming model, monitoring, and observability.

2.2.1 Cost

Serverless functions can save resources within a software project. With the cloud platform automatically managing the physical infrastructure, elasticity, and scaling, there is no need for the operational team to spend time on these tasks. Building a FaaS application can also significantly cut down on the running operational costs of an application, depending on the nature of the application [38]. The costs of running a serverless application are determined by the pay-as-you-go payment schemes of the cloud providers. However, due to this type of payment model, developers also need to focus on optimizing the serverless functions to minimize running costs [17, 38]. The obvious beneficiaries of this payment model are applications that do not need to be processing something around the clock [38].

The computational resources metered in AWS are memory and CPU usage [17].

Generally, serverless functions are very affordable to operate. According to the example on the AWS Lambda web page [7], invoking an AWS Lambda application three million times in a month for a second at a time, totalling approximately 833 hours or 34.7 days of processing time, would cost around $18.74. Baldini et al. [17] note that for cost optimization, the FaaS model is best suited for applications that mostly require short-lasting and highly compute-intensive workloads. Additionally, I/O-heavy applications are cheaper to operate within VMs (IaaS) or by using containerization (PaaS) technologies instead of FaaS [17].
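The quoted figure can be reproduced with simple arithmetic. The sketch below is a rough reconstruction, assuming the pricing example's 512 MB memory allocation, on-demand rates of about $0.20 per million requests and $0.0000166667 per GB-second, and the monthly free tier of one million requests and 400,000 GB-seconds; the exact rates vary by region and over time.

    // Rough reconstruction of the AWS Lambda pricing example (assumed rates, not authoritative).
    const invocations = 3_000_000;         // invocations per month
    const durationSeconds = 1;             // seconds per invocation
    const memoryGb = 0.5;                  // assumed 512 MB memory allocation

    const pricePerGbSecond = 0.0000166667; // assumed on-demand compute rate
    const pricePerMillionRequests = 0.2;   // assumed on-demand request rate
    const freeTierGbSeconds = 400_000;     // monthly free tier, compute
    const freeTierRequests = 1_000_000;    // monthly free tier, requests

    const gbSeconds = invocations * durationSeconds * memoryGb; // 1,500,000 GB-seconds
    const computeCost = Math.max(gbSeconds - freeTierGbSeconds, 0) * pricePerGbSecond;
    const requestCost = (Math.max(invocations - freeTierRequests, 0) / 1_000_000) * pricePerMillionRequests;

    console.log((computeCost + requestCost).toFixed(2)); // ~18.73, matching the quoted ~$18.74 up to rounding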

Due to their scalable nature, serverless functions also mitigate the risks of Distributed Denial-of-Service (DDoS) attacks by absorbing large computational loads as increased service costs instead of the reduced availability faced by more traditional application operation models. This aspect of serverless functions is beneficial for software companies that want to trade the downtime of their services for increased costs. However, it is vital to note that the serverless functions themselves are not the only sources of costs when building applications in the cloud. Leitner et al. [38] found that while their interviewees considered FaaS cheap, many grey literature sources claim otherwise. AWS offers an extensive amount of computational processing for free, but costs start to add up once the limits of the free tier of services are exceeded. FaaS applications require other services from the cloud provider to function. For example, an API Gateway is necessary to trigger the serverless functions from incoming HTTP requests. These additional services have separate costs and billing schemes that incur on top of the FaaS function’s invocation costs. [38]

2.2.2 Application State & Cold Starts

Serverless functions are designed to be stateless in AWS. The application’s internal state persists only for a single invocation, and new instances are created with no knowledge of previous or parallel invocations of the function [38, 17]. Some application developers have resorted to certain practices to overcome the stateless nature, such as storing application state in external storage, e.g. a database [38]. This practice can negatively affect a Lambda’s performance and increase costs and coupling to other cloud services.

Lambda invocations are not guaranteed to use pre-existing instances of the application, as instances are created on demand based on load [38]. If an invocation requires starting up an entirely new function instance, it leads to a cold start. Cold starts are an aspect of serverless functions that needs to be taken into account. They slow down the serverless function invocation for the time it takes to load a new application container and its dependencies [38, 17]. Some cloud platforms and technologies optimize cold starts better than others or provide ways of mitigating the issue through methods such as ”warming”, where the function is invoked periodically to keep an instance of the application running at all times [17]. However, this is an anti-pattern in the sense that the FaaS runtime loses the ability to scale the function to zero, increasing its operational costs. Nowadays, AWS offers this as an additional paid feature called Provisioned Concurrency, which ensures a specified number of Lambda instances are kept warm at specified times for sub-hundred-millisecond response times [6]. Therefore, serverless functions are not best suited for use cases requiring them to act within milliseconds of being triggered, unless paying extra to keep the functions provisioned [38].

2.2.3 Vendor Lock-In & Technology Limitations

When choosing the cloud provider for a serverless application, it is necessary to know what kind of technologies will be used. The supported programming languages and technology stacks vary between cloud providers, and each cloud provider has various services available that require the use of specific technologies [17]. The cloud providers often want to enforce their platform-specific programming SDKs (Software Development Kits) when interfacing with other cloud services, which escalates vendor lock-in [38, 17]. Since serverless applications are often tightly coupled to other services within the cloud platform, such as file storage, databases, and messaging services, among others, they become vendor-locked to some degree with the current technologies and tools. The tight coupling also means that higher levels of software testing become more complicated as the serverless functions become dependent on cloud services.

Being locked into a specific vendor can be a significant challenge and a risk that organizations need to consider [38, 17]. One way of counteracting this issue is by abstracting vendor-specific parts of the application, which some frameworks, like the Serverless Framework, aim for [35, 38, 59]. However, these frameworks come with their own limitations in regard to supported technologies and programming languages. Additionally, looking at the sample applications and documentation provided by frameworks like the Serverless Framework [59], it can be seen that they are still unable to be fully provider agnostic and require vendor-specific SDKs to interface with the cloud services. Vendor lock-in caused by the SDKs can be mitigated to some degree by isolating vendor-specific code into a separately interfaced service layer, as can be seen in the structure of the full-stack application in Chapter 4. This way, the entire application would not require as much refactoring when migrating from one vendor to another, since only the vendor-specific layers of the application need to be re-implemented.
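As a rough illustration of such a service layer (a sketch with hypothetical names, not code from the thesis application), only a single module imports the AWS SDK, and the rest of the application depends on a vendor-neutral interface:

    // Hypothetical service-layer abstraction: only this module imports the vendor SDK.
    import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
    import { DynamoDBDocumentClient, GetCommand, PutCommand } from "@aws-sdk/lib-dynamodb";

    export interface UserRecord { email: string; passwordHash: string; }

    // Vendor-neutral interface the rest of the application depends on.
    export interface UserStore {
      get(email: string): Promise<UserRecord | undefined>;
      put(user: UserRecord): Promise<void>;
    }

    // AWS-specific implementation; an Azure or GCP variant would implement the same interface.
    export class DynamoDbUserStore implements UserStore {
      private readonly doc = DynamoDBDocumentClient.from(new DynamoDBClient({}));
      constructor(private readonly tableName: string) {}

      async get(email: string): Promise<UserRecord | undefined> {
        const result = await this.doc.send(new GetCommand({ TableName: this.tableName, Key: { email } }));
        return result.Item as UserRecord | undefined;
      }

      async put(user: UserRecord): Promise<void> {
        await this.doc.send(new PutCommand({ TableName: this.tableName, Item: user }));
      }
    }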

2.2.4 Event-Based Programming Model

As discussed earlier, serverless applications comprise one or more serverless functions designed to have a single purpose. This idea is carried on to the source code level and is similar to more traditional programming. Each serverless function has one main programmatic function called the handler, which is called upon when the serverless function is invoked. Serverless functions are triggered by events from services they are attached to, such as scheduled events, file uploads to file storage, or incoming HTTP requests through an API Gateway. Each invocation of the serverless function has access to the context of the event, which in the case of an HTTP request would include the request data and parameters. [44, 17, 38]
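A minimal illustration of this handler model, assuming a Node.js Lambda triggered through an API Gateway (a generic sketch, not the thesis application's handler):

    import type { APIGatewayProxyEvent, APIGatewayProxyResult, Context } from "aws-lambda";

    // The handler is the single entry point the platform calls for every invocation.
    export const handler = async (
      event: APIGatewayProxyEvent,   // trigger-specific context, here the HTTP request
      _context: Context              // runtime metadata (request id, remaining time, ...)
    ): Promise<APIGatewayProxyResult> => {
      const name = event.queryStringParameters?.name ?? "world";
      return {
        statusCode: 200,
        body: JSON.stringify({ message: `Hello, ${name}!` }),
      };
    };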

Leitner et al. [38] found that FaaS is commonly used for developing HTTP APIs that incorporate other cloud services, such as databases, because of this event-driven approach. They also found that many practitioners have difficulty understanding the event-driven model behind FaaS because it differs from more traditional programming models in many ways. The difficulties are partly caused by how recent the technology is, as practitioners have not yet had time to adapt to it.

2.2.5 Monitoring & Observability

Most cloud providers, including AWS, do not allow users to access or view the application instances in any way and only provide application logs for the invocations [17, 41]. This limitation means that serverless functions suffer from reduced observability into their execution and inner state, which makes diagnosing and troubleshooting bugs difficult [17, 41]. To add to the complexity of monitoring serverless functions, the number of application instances (containers) running at any given time is unknown. This scalable nature of serverless functions increases the number of log trails produced [17].

The studies by Leitner et al. [38] and Baldini et al. [17] point out that the lack of comprehensive tooling regarding monitoring and observability is one of the significant challenges that serverless applications currently face. One solution cloud platforms and third parties have come up with is to trace the steps and phases of the cloud application execution and provide end-to-end traces of the execution flow [17, 44, 8]. However, troubleshooting by following the traces and logs is still primarily a manual, complex, and labour-intensive process [41]. Therefore, implementing comprehensive automatic test suites becomes even more important, so that bugs are caught as early as possible before they become difficult to troubleshoot.

Additionally, automatic testing opens the door to regression testing that might alleviate the necessity of manual troubleshooting later.

2.3 Tooling for Serverless Functions

Most major cloud providers offer platform-specific SDKs (Software Development Kits) for developing and deploying serverless applications. AWS’ catalogue includes the AWS CLI (Command Line Interface) and SAM (Serverless Application Model) toolkit, IDE (Integrated Development Environment) plugins, and SDK libraries for various programming languages [15].

There are also many frameworks available, varying from open-source projects to commercial solutions, intended to simplify the development of serverless applications. This list includes the previously mentioned Serverless Framework [59], Webiny [71], AWS Chalice [3], Kubeless [36], and Fn [24], among many others. While some of these frameworks, like the Serverless Framework, aim at providing cloud provider agnosticism through libraries and APIs that support different cloud providers, there are other approaches as well. The approach taken by the Kubeless and Fn frameworks is to move the FaaS layer’s runtime management to the framework itself, which runs on the PaaS containerization layer through services like Kubernetes and Kafka [36, 24]. Additionally, this approach enables hosting FaaS applications outside of cloud platforms. According to the study by Leitner et al. [38], the Serverless Framework is the most common choice for developers. It is also one of the few that provide and support CI/CD (Continuous Integration and Continuous Delivery) services, though currently the only cloud provider supported is AWS [59].

Cloud infrastructures are often managed using template engines like AWS’ CloudFormation, or other IaC (Infrastructure-as-Code) frameworks [30]. Some development frameworks, like Webiny [71] and AWS Chalice [3], are built to support IaC frameworks such as Pulumi and Terraform out-of-the-box. IaC frameworks, discussed more in Chapter 4, are used to build, manage, and destroy the cloud service infrastructure programmatically and automatically.


3 Testing

Software testing methods can be divided into multiple categories based on different factors. Tests can be executed either manually or automatically, they can be functional or non-functional, and they can be divided into levels based on the test scope [52]. Additionally, tests can utilize either white-box or black-box techniques, though their applicability varies based on the testing level [69]. Testing is an integral part of the Quality Assurance (QA) process of software development projects. The goal of testing is to catch bugs, and it is a continuous recurring process throughout the lifecycle of a software application [52]. Automation is an important aspect of testing, as testing everything manually would quickly become unnecessarily labour-intensive during a software project [52].

3.1 Testing Levels

The four general levels of automatic application testing are unit, integration, system, and acceptance testing, based on the V-model of software testing (Figure 3.1) [52]. The following sections describe these levels in more detail.

3.1.1 Unit Testing

Unit testing is the lowest level of testing with the smallest scope. Unit tests are small tests where each test verifies the functionality of a single piece of the application (a unit), like a single function. Unit tests isolate the unit from any relevant dependencies by using mocks [64]. Mocks are simulated objects that are intended to mimic the functionality of a piece of software [63]. What is mocked is up to the practitioner. However, the most commonly mocked dependencies are external dependencies, such as cloud services, and domain objects (other units and components of the application) [63]. Unit testing falls under white-box testing, meaning that knowledge of the application code is required for implementing the tests [69].

Unit tests are oriented towards exploring the different possible executions for a unit, including valid and invalid function calls and ensuring the unit is designed to handle these different scenarios. Unit tests are at the bottom of the V-model and form a basis for higher levels of testing [64]. Unit tests are generally the fastest to run, as they test smaller pieces of software and are more isolated than higher-level tests [64].


Figure 3.1 Software Testing V-Model. Based on [52].

3.1.2 Integration Testing

Integration tests are on the next level of testing above unit tests, and they are intended to test larger pieces of the application [52]. External integrations may or may not be mocked depending on the need and whether the integration is the focus of the test [63]. Integration tests can be either white-box or black-box tests. Black-box tests do not require any knowledge of the application code to be implemented [69]. For example, an integration test for an HTTP API application could call an endpoint with some inputs and verify that the returned response matches the expected output values.
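Such a black-box test could look roughly like the sketch below, which uses Jest (the framework adopted in Chapter 5), Node's built-in fetch, and a hypothetical endpoint path and base URL supplied through an environment variable:

    // Hypothetical black-box integration test against an HTTP endpoint.
    const baseUrl = process.env.API_BASE_URL ?? "http://localhost:3000";

    describe("POST /api/users/login", () => {
      it("rejects an unknown user", async () => {
        const response = await fetch(`${baseUrl}/api/users/login`, {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ email: "nobody@example.com", password: "wrong" }),
        });

        expect(response.ok).toBe(false);       // no token is issued for an unknown user
        const body = await response.json();
        expect(body).not.toHaveProperty("token");
      });
    });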

There are two major branches for approaching integration testing: big-bang and incremental testing. The big-bang approach relies on all functionality being implemented before testing, and it tests all units simultaneously. The incremental testing approaches, the top-down and bottom-up approaches, do not require all modules (or units) to be implemented for the tests, and missing functionalities are mocked. In the bottom-up approach, the smallest units are tested first, and their integration is tested incrementally upwards level by level, while the missing higher-level implementations are mocked through driver implementations. In the top-down approach, the tests move from the highest-level modules towards the lower-level modules while mocking all missing lower-level unit implementations. The added benefit of incremental testing is that tests can be implemented while development is still ongoing. [58]

3.1.3 System Testing

System-level tests aim to test the entire system as a whole, which might consist of multiple applications [52]. One method of system testing is end-to-end (E2E) testing [64]. For example, modern full-stack applications consist of separate front end and back end applications. When testing this kind of system end-to-end, tests are usually designed using a black-box technique where the front end application is tested by navigating the UI programmatically and observing the effects of the actions (cause-effect graphing [69]). By clicking buttons, filling forms, and so on, the front end application is made to call the back end application, executing code on both ends and testing the entire system [64]. Non-functional testing methods, like load and performance testing, also fall under the umbrella of system testing [52].

3.1.4 Acceptance testing

Acceptance testing is the highest level of testing and often includes the end-users and other stakeholders of the application. Acceptance testing covers many different testing types, like operational, contract, regulatory, and user acceptance testing. For example, alpha and beta test releases are a form of user acceptance testing. [52]

Acceptance testing does not fall under the topic of this thesis and will therefore not be looked at in more detail.

3.2 Background on Testing Serverless Functions

Serverless functions are triggered through well-defined interfaces, meaning that implementing unit testing is generally a straightforward process [44, 38]. The most considerable obstacles emerge in the higher levels of testing, more specifically integration and system testing [44]. Reproducing the cloud environment in a local environment is currently challenging, and matching it completely is impossible [38, 41]. This obstacle means that the only way to achieve realistic testing results is by running higher-level tests within the cloud environment, which has the disadvantage of possibly increasing operational costs [38]. However, Ivanov and Smolander [30] note in their case study that the costs incurred by cloud-executed tests were quite low, and most of the costs came from other cloud services they chose, related to monitoring.


Because unit tests are very isolated, they are generally run in a local environment [64]. Therefore, testing in the cloud needs to be mainly considered for integration and system tests. Mocking is a possibility for integration tests but not for system testing. The general benefit of testing locally with mocks is that tests generally run faster when slower external dependencies and services, like databases, are mocked [63]. Mocking also has some benefits with regard to cost, as the local tests do not take up cloud resources. The study by Lenarduzzi and Panichella [39] mentions that locally testing serverless functions requires more extensive mocking in general. Even though there are many mocking libraries available for various cloud providers and programming languages, mocking can still be a labour-intensive process. Additionally, mocking leads to isolation, and when used within integration tests it means that the tests are not validating whether the interactions with the external dependencies and services are functioning as intended [39, 63]. On the other hand, while more difficult, testing with actual services means the integration tests can reproduce the application’s behaviour in production use to a varying degree [39, 63]. Additionally, not relying on mocks reduces the workload related to their development and management.

When it comes to system testing, running tests in the cloud environment is currently the only realistic option available, as the entire system needs to be present in a production-like environment for testing. The approach favoured for system testing by the interviewees in Lenarduzzi and Panichella’s paper [39] is to implement end-to-end tests. In large-scale systems this can be a complex task, as they can consist of large numbers of serverless functions interacting with each other along with various other cloud services, such as event buses, databases, and messaging services [39]. The tests need to be able to trace and observe the asynchronous interactions between different systems throughout the execution of the test, which means that tests increase rapidly in complexity the more components the system consists of [39]. Additionally, monitoring and observing the interactions between services and functions is complex when multiple participants may be broadcasting them simultaneously [39].

Leitner et al. [38] discovered through their survey that the most common approach among developers for implementing integration and system tests for serverless functions is to carry out the testing in a separate cloud environment expressly set up for that purpose. The second most common approach was to run integration tests in a local environment with mocks. Lastly, some developers tested applications in the production cloud environment or utilized acceptance testing methods like canary releases and A/B testing. The study did not mention the hybrid integration testing approach discovered during the guidebook literature review that is discussed next.


3.3 Testing Approaches in Guidebooks

To look further into the testing approaches discussed in the scientific literature, a literature review was conducted of guidebooks oriented towards teaching how to build serverless applications. In total, 22 guidebooks were reviewed and evaluated, as seen in Table 3.2. The books were found through the Tampere University Library’s Andor search engine and Google search. The search queries used were:

1. ”serverless” AND ”testing”

2. ”aws lambda” AND ”testing”

3. serverless testing book

4. (”serverless” OR ”aws lambda”) AND ”integration test”

The testing terminology used across the books varies quite a bit, and the lines between testing levels can be blurry at times. This ambiguity created the need to define criteria for differentiating between testing levels and approaches for the book evaluation. For example, both integration and end-to-end system tests test the integration of a serverless function in a system to a varying degree, but the difference lies in where the focus of the test is and what is being observed and monitored. Some books put tests that exercise the entire serverless function under unit testing, which other authors have categorized under integration testing. Both terms can be considered valid depending on the size and integrations of the function in question. A serverless function could in its entirety be a single small unit, the handler function, which produces a dilemma for evaluating the book categories. In the case of such a single-function serverless function, the guidebooks were categorized under unit testing if the approach supports white-box testing techniques and mocking, and otherwise under integration testing. Additionally, testing approaches that indirectly invoke the function through an API Gateway, or some other trigger, were categorized as integration tests unless the test scope was wide enough to be considered a system test (e.g. an end-to-end test).

Some books used other slightly ambiguous terminology for testing, like ”functional testing” or ”component testing”, which, in most cases, were interpreted to refer to integration testing. Criteria statements were defined for distinguishing the testing levels, as seen in Table 3.1. The statements are based on the scientific literature discussed earlier and the views seen in the guidebooks.

The books collectively provided multiple different approaches for each level of testing. A small set of the books focused on some platform other than AWS or discussed approaches for other platforms while including AWS approaches.


Table 3.1 Testing levels in the context of serverless applications.

Unit testing: Tests focus on testing modules of the serverless function directly while mocking necessary dependencies to isolate the unit in question. The testing approach must support mocking and white-box testing techniques.

Integration testing: Tests invoke the serverless functions directly using an SDK library, CLI tools, or by calling the handler function in code with appropriate parameters. The focus is on observing the invoked function as a whole. Dependencies may or may not be mocked, and integration to services can be simulated. Tests can be run completely locally or in the cloud against real-world services. The testing approach is not necessarily required to support mocking or white-box techniques.

System testing: Testing approaches are evaluated based on different approaches, including E2E, load, performance, and availability testing. In general, E2E tests test the whole system and the serverless function by triggering an attached event in the cloud, like calling an API Gateway endpoint from a front end application or inserting a file into an S3 bucket. The scope of a system test can cover multiple serverless functions and various cloud resources. System tests generally utilize black-box testing techniques.

The approaches that are not applicable to AWS were left out of the collected approaches. The following sections summarize the various approaches by their respective testing levels.


Table 3.2 Table of books by testing categories and general approaches

Book title | Cloud platform | Unit | Integration | System
Building Serverless Microservices in Python [25] | AWS | L | H | LT
Serverless computing in Azure with .NET [57] | AZ | L | L, C* | LT
Learning Serverless [32] | - | L** | C** | E2E**
Azure Serverless Computing Cookbook [65] | AZ | L | L*, C* | -
Mastering AWS Lambda [70] | AWS | L | C | LT
Programming AWS Lambda [20] | AWS | L | L | E2E
AWS Lambda Quick Start Guide [33] | AWS | - | H*, C* | -
Learn AWS Serverless Computing [47] | AWS | L | C | -
Hands-On Serverless Computing [21] | AWS, AZ, GCP | - | H*, C* | -
AWS Lambda in Action [50] | AWS | - | C* | -
Hands-On Serverless Applications with Go [37] | AWS | L | L, H | LT
Hands-On Serverless Applications with Kotlin [67] | AWS | - | C* | -
Building Serverless Applications with Python [56] | AWS | - | C* | -
Building Serverless Python Web Services with Zappa [18] | AWS | - | C* | -
Building Serverless Web Applications [75] | AWS | L | - | -
Serverless Programming Cookbook [31] | AWS, AZ, GCP | - | C* | -
Serverless Design Patterns and Best Practices [74] | AWS | - | H | -
Serverless Architectures with AWS [27] | AWS | - | C* | -
Serverless Applications with Node.js [60] | AWS | L | H | E2E**
JavaScript Cloud Native Development Cookbook [26] | AWS | L | L | E2E
Integrating Serverless Architecture Using Azure Functions, Cosmos DB, and SignalR Service [68] | AZ | L | L*, C* | E2E*

Legend: AWS = Amazon Web Services, AZ = Azure, GCP = Google Cloud Platform; L = Local, H = Hybrid, C = Cloud, LT = Load testing, E2E = End-to-end.
* = Only manual testing, ** = On a theoretical level only.


3.3.1 Unit Testing

Unit testing was featured in a total of 13 books. The approaches in most books were similar to each other and are essentially regular application unit testing practices. Tests are implemented with general-purpose testing libraries using white-box techniques and mock out any necessary integrations to other units and external services. Tests either define the mocks themselves or utilize existing mocking solutions available through various package management systems, like NPM for Node.js.
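A representative sketch of this style of unit test, using Jest and mocking out a hypothetical data-access module (the module and function names are illustrative, not taken from the guidebooks):

    // users.test.ts - hypothetical unit test isolating validateLogin from its data source.
    import { validateLogin } from "../src/users";

    // Mock out the module that talks to DynamoDB so the unit runs in isolation.
    jest.mock("../src/dynamo", () => ({
      getUserByEmail: jest.fn().mockResolvedValue({
        email: "test@example.com",
        passwordHash: "$2a$10$hashedpassword",
      }),
    }));

    describe("validateLogin", () => {
      it("rejects an empty email", async () => {
        await expect(validateLogin("", "secret")).rejects.toThrow();
      });
    });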

3.3.2 Integration Testing

In total, 20 books were categorized under integration testing. In general, the integration testing approaches can be divided into three distinct categories:

1. Local integration tests isolated from external services through mocks

2. Hybrid integration tests with real or simulated cloud resources while the tests are run in the local machine

3. Cloud integration testing, where the serverless function is deployed in the cloud and run against real-world services

Some books, like the one by Zambrano [74], rely on running local versions of services, like traditional SQL databases, for testing. It is also possible to simulate some cloud services locally. For example, AWS provides a local executable for DynamoDB [11]. There are also third-party solutions, like LocalStack, which can simulate a more extensive palette of AWS services locally using Docker containers [40]. However, multiple books [32, 20, 47, 60, 75] advocate testing against real-world cloud services, as this gives the most reliable results regarding how the integrations function in a real execution scenario. There is always a level of uncertainty regarding how well a local simulation is able to mirror its cloud counterpart [38]. In that regard, cloud integration testing would be the best option for validating integrations to cloud services. However, testing deployed functions in the cloud leads in most cases to black-box testing, as there is no access to the application runtime, making it challenging to observe the code execution or to inject mocks. There can be scenarios where some integration, like a payment processor, has to be mocked, as described in the book by Simovic [60]. In such a scenario, hybrid testing is likely the better choice.
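In practice, pointing the application at such a local simulation usually amounts to overriding the SDK client's endpoint. A minimal sketch (AWS SDK v3 for JavaScript; the ports are the commonly documented defaults of DynamoDB Local and LocalStack and may differ per setup):

    import { DynamoDBClient } from "@aws-sdk/client-dynamodb";

    // Hybrid/local integration setup: real code paths, simulated service endpoint.
    const endpoint = process.env.DYNAMODB_ENDPOINT ?? "http://localhost:8000"; // DynamoDB Local default
    // const endpoint = "http://localhost:4566";                               // LocalStack edge port

    export const dynamoClient = new DynamoDBClient({
      region: "us-east-1",
      endpoint,
      credentials: { accessKeyId: "local", secretAccessKey: "local" }, // dummy credentials for the emulator
    });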

Hybrid testing also comes with challenges. One of these is that the cloud services, including the serverless functions, can be placed into a private subnet that is not accessible from the public internet [13]. In most situations like this, it would be necessary to have a separate machine (e.g. an on-premise or virtual cloud machine) with access to that subnet to run the tests. One approach is to place the CI/CD pipeline worker that executes the tests into a virtual machine residing within the private subnet, as explored in an online blog by Ibragimov [29].

Most of the manual testing techniques mentioned in the guidebooks fall under integration testing. The techniques mostly revolve around invoking the serverless functions either directly or indirectly during development and debugging. In general, serverless functions can be directly invoked through multiple tools, including the AWS Console website and AWS CLI. Indirect invocations are done by whatever triggers the Lambda may have, like an API Gateway, in which case invocations can be done via command-line tools like curl or REST clients like Postman. A third manual technique mentioned is to simulate the Lambda on the local machine using AWS SAM or Serverless Framework command-line tools.

3.3.3 System Testing

The system testing approaches discussed in the 9 books were focused on load testing and end-to-end testing.

While end-to-end testing was discussed in multiple books, not many of them had practical examples of tests. The likely cause for this is that most of the example programs were very simple and did not have separate user interfaces or front end applications. As discussed previously, the idea in end-to-end testing is to invoke the application by simulating a real-world execution scenario. There are many ways of end-to-end testing a cloud system, as there is a plethora of use cases and different triggers available for serverless functions. For example, if an application is triggered by an upload to an S3 bucket, the obvious test case would be to do just that and then monitor the invoked Lambda's execution. However, this kind of approach can be challenging to implement and error-prone, and the tests generally take longer to run, as they might have to rely on timers [39].
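A rough sketch of such a test helper, uploading a file and then polling for the Lambda's expected side effect instead of waiting on a fixed timer (AWS SDK v3; the bucket and table names are hypothetical):

    import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
    import { DynamoDBClient, GetItemCommand } from "@aws-sdk/client-dynamodb";

    const s3 = new S3Client({});
    const dynamo = new DynamoDBClient({});

    // Trigger the Lambda by uploading a file, then poll for its expected side effect.
    export async function uploadAndAwaitResult(key: string, timeoutMs = 30_000): Promise<boolean> {
      await s3.send(new PutObjectCommand({ Bucket: "e2e-test-bucket", Key: key, Body: "test payload" }));

      const deadline = Date.now() + timeoutMs;
      while (Date.now() < deadline) {
        const item = await dynamo.send(
          new GetItemCommand({ TableName: "processed-files", Key: { fileKey: { S: key } } })
        );
        if (item.Item) return true;                                   // the Lambda has processed the upload
        await new Promise((resolve) => setTimeout(resolve, 2_000));   // wait before polling again
      }
      return false; // timed out; the test should fail
    }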

The load testing techniques in the guidebooks are unanimous: use a load testing tool like Locust or AWS’s Lambda test harness to invoke the Lambdas multiple times in parallel and in sequence programmatically, and then collect metrics on the invocations. Load tests can be used to measure the cold start of Lambdas and different kinds of throttling done by services like DynamoDB. Additionally, load testing could be used to get statistical measurements of code efficiency. However, Simovic [60] notes that the necessity of load testing is questionable, as the performance of the services is generally well documented and easily observable.


3.3.4 Summary of Approaches

In summary, unit, integration, and system-level testing approaches were discovered in the guidebooks. As seen in Table 3.3, there are three approaches to integration testing: local, hybrid, and cloud integration testing. Each integration testing approach comes with certain trade-offs, as discussed earlier.

Table 3.3 Summary of collected testing approaches.

Unit testing: Testing the functionality of the lowest-level parts of an application, e.g. a function. Tests generally rely on mocking for isolation from domain objects and external dependencies, such as cloud services.

Local integration testing: Testing the combined functionality of the units. Tests rely on mocking to remove the dependency on cloud services. Supports coverage statistics.

Hybrid integration testing: Testing the combined functionality of the units. The part under test is connected to either real-world cloud services or simulated services. Support for mocking and coverage statistics is possible.

Cloud integration testing: Testing the combined functionality of the Lambda function deployed in a cloud environment. Does not support mocking or coverage statistics, as there is no access to the application runtime.

System E2E testing: Testing the entire system end-to-end by simulating realistic use cases, or user stories, in a real-world cloud infrastructure.

System load testing: Testing the system under load by simulation, e.g. calling the application many times in parallel, through load testing tools that collect statistics. The necessity of load testing is questioned in the guidebook by Simovic [60], as serverless functions are automatically scalable.


4 The Full-Stack Application

In this chapter, the full-stack application used to apply the approaches is discussed. The full-stack application put under test was found in the Serverless Framework [59] sample application listings. The application code is available through its public GitHub repository, and it is published under an open-source license. The original purpose of the application is to showcase how full-stack web applications can be developed using the Serverless Framework. The application was chosen based on the following suitability criteria:

1. The application has enough complexity to write meaningful tests

2. The application is a cloud-native program with integrations to other AWS services

3. The application gives a good representation of real-world REST API applications, also in the sense that it is built with the Serverless Framework

4. The application has sufficient documentation available through the Git repository along with the Serverless Framework’s online resources for test development

A fork of the application’s GitHub repository was created for this thesis, available at https://github.com/clowee/fullstack-app.

4.1 Serverless Framework & AWS Services

The previously discussed Serverless Framework is a framework aimed at simplifying the development of serverless applications. It consists of a CLI tool and a variety of online resources and libraries built around it. At the core of the framework is the serverless.yml YAML configuration file that defines the structure of the application, including the AWS cloud infrastructure, using its proprietary YAML structure along with AWS’ CloudFormation YAML template formats. Additionally, the file configures the environment for the CLI tool, including any plugins required for the operation of the CLI tool. The Serverless Framework also hosts a wide range of packages for development, such as tools for emulating cloud services locally, additional build tools, infrastructure management, and programming libraries. [59]

AWS Lambda [6] is the service within AWS that is used to host serverless functions, Lambdas. AWS Lambda defines the runtime environment for the Lambda function and enables the functions to be integrated with other AWS services. Lambdas are invoked via triggers attached to other AWS services, like an API Gateway in this case. AWS API Gateway [2] is a service that listens for requests over its network through the HTTP and WebSocket protocols and converts them to formats that AWS services are able to read. When it comes to database services, AWS has multiple different solutions available, including the DynamoDB service used by the full-stack application. AWS DynamoDB is a high-performance object-based NoSQL database service, where data is stored as items under DynamoDB’s tables.

AWS’ CloudWatch [4] service is used by default to collect the logs of each Lambda invocation.

AWS S3 [9] is a file hosting service, where files are stored under buckets. S3 buckets restrict access to the files on a file-by-file basis and provide a version history for files. S3 is used to host the front end application code and serves as the origin for AWS CloudFront. AWS CloudFront [10] is a CDN (Content Delivery Network) service, which adds a cache layer on top of the S3 bucket and provides other additional performance features.

The AWS IAM [5] (Identity and Access Management) service controls access to and between AWS services. Access can be defined using a combination of IAM’s user accounts, user groups, roles, and policies. IAM also allows configuring access based on networks and supports VPCs (Virtual Private Clouds) for most of its services, allowing the use of private cloud subnets that restrict connections to and within the network.

4.2 Application Description & Architecture

The full-stack application is a simple login and registration sample application with no other functionalities or purpose. It was written entirely in JavaScript in the original Git repository, but the back end application was converted to TypeScript to ease test development. TypeScript [42] is a strongly typed programming language built on top of JavaScript. It can be compiled to regular JavaScript while bringing strong typing into the language, helping to reduce bugs and issues resulting from programming errors [42]. The back end serverless function is built using the Serverless Framework [59] and the Express [45] framework, which is aimed at building traditional REST APIs. As Express applications do not work in AWS Lambda as-is, the Express application is wrapped with the Serverless Framework’s serverless-http package, which acts as an intermediary, converting Lambda invocations into Express requests. The application uses AWS’ official SDK programming toolkit to interface with AWS DynamoDB, which is used to store the user account information. The application uses a separate Express middleware library (Passport.js) to handle authorization when the user logs in. Authorizations are managed using JSON Web Tokens (JWTs) [16].
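The wrapping itself is a thin layer; a minimal sketch of the pattern (not the application's exact code, and the route is illustrative) looks roughly like this:

    import express from "express";
    import serverless from "serverless-http";

    const app = express();
    app.use(express.json());

    // Regular Express routing; serverless-http translates API Gateway events into these requests.
    app.post("/api/users/login", (req, res) => {
      res.json({ message: "login handler goes here" });
    });

    // The exported handler is what AWS Lambda invokes.
    export const handler = serverless(app);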

As seen in Figure 4.1, the application consists of the front end application hosted in an Amazon S3 bucket, with CloudFront in charge of distributing the files to the end-user client. The back end application is hosted in AWS Lambda, which is interfaced through an AWS API Gateway integration that is publicly open to connections. The application is not placed into a private cloud subnet. Therefore, access to services is restricted only using AWS IAM (Identity and Access Management) roles, policies, and AWS Lambda permissions.

Figure 4.1 The full-stack application architecture.

The front end application is a SPA (Single Page Application) created with React.

It comprises three main views, as seen in Figure 4.2: the front page, the login and registration page, and the main view for a logged-in user. Once the user submits the login or registration form, a corresponding Lambda endpoint is requested through the API Gateway. During the process, the Lambda interfaces with DynamoDB to query the user’s information and, in the case of registration, to save the new user. If the login or registration is successful, the Lambda function responds to the request with a signed JSON Web Token. The token is then used to authorize access to other endpoints in subsequent API requests that require the user to be logged in. In this case, the only endpoint requiring authorization is the user information endpoint. The tokens are signed using the HS256 algorithm, meaning only the back end application can verify the token signature, as only it has access to the secret key the tokens are signed with. When the user logs out, the front end application simply deletes the authorization token from its memory, making it impossible to authorize future requests before the user logs in again.
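The token handling follows the usual pattern of the jsonwebtoken library; the sketch below is illustrative (the payload shape and environment variable name are assumptions, not the application's exact code):

    import jwt from "jsonwebtoken";

    const secret = process.env.JWT_SECRET ?? "dev-only-secret"; // shared secret, known only to the back end

    // Issued after a successful login or registration.
    export function issueToken(email: string): string {
      return jwt.sign({ email }, secret, { algorithm: "HS256", expiresIn: "1h" });
    }

    // Used by the protected user-information endpoint to authorize requests.
    export function verifyToken(token: string): { email: string } {
      return jwt.verify(token, secret) as { email: string };
    }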

The back end application is missing some features, and some improvements should be made if it were a real-world API. The first is that secrets, such as the secret key used to sign JWTs, would probably be best stored in AWS’ Secrets Manager or a similar service instead of the Lambda environment variables. The second is error handling; currently, the back end application does not differentiate between different error types and sources and only uses JavaScript’s top-level Error type. Additionally, the application uses HTTP status codes inconsistently in its responses. For example, if login is attempted with an invalid email address, the API responds with an Internal Server Error, even though a Bad Request response would have been a more appropriate error code. One minor improvement to the users model module was implemented for testing purposes: the formatUserEntity function, which comprises logic that was previously duplicated in other functions, was added.

4.3 Infrastructure Management

The original Git fork of the application used Components provided by the Serverless Framework to deploy the infrastructure to AWS. However, while the Components aim to simplify infrastructure management, they also take control away and reduce customizability. Therefore, the application was refactored to the ”traditional” Serverless Framework model, which generates AWS CloudFormation templates for deployments. This way, it was easier to study the cloud architecture created via the Serverless Framework, which was later utilized when migrating the infrastructure management to an Infrastructure-as-Code (IaC) program using the Pulumi framework.

4.3.1 The Pulumi Application

Pulumi is an IaC framework for programmatic cloud infrastructure management.

The benefit of programmatic infrastructure management is that it offers more flexibility through programming language features, like loops, classes, and functions. Pulumi supports a range of different programming languages, meaning that it is possible to write the IaC program using the same TypeScript programming language as the full-stack application itself is written in. When using a strongly typed programming language, such as TypeScript, Pulumi helps reduce the number of mistakes made compared to the YAML infrastructure configuration files that the Serverless Framework and AWS CloudFormation use. Pulumi also provides an automation API that can be used directly in test suites to deploy the infrastructure automatically before running any tests and to destroy it automatically after the tests are done. [55]

Figure 4.2 Front end application views: (a) front page, (b) login & registration view, (c) logged-in view.

The benefit of using IaC in combination with testing is that when the cloud infrastructure is deployed from scratch for each test run, it is ensured that the cloud infrastructure and the state of the services are as intended for every test run, making the tests run more consistently. Additionally, it is possible to divide the infrastructure application into as many pieces as necessary and only deploy parts of the application. The smaller parts can be utilized for deploying only certain parts of the infrastructure necessary for each test suite, decreasing the overall run time of the test suite in question.
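With Jest, for example, this can be wired into suite-level setup and teardown hooks through Pulumi's Automation API, roughly as in the sketch below (the stack name, working directory, and output name are illustrative assumptions):

    import { LocalWorkspace, Stack } from "@pulumi/pulumi/automation";

    let stack: Stack;

    // Deploy the test infrastructure once before the suite runs.
    beforeAll(async () => {
      stack = await LocalWorkspace.createOrSelectStack({
        stackName: "integration-tests",
        workDir: "./infrastructure",      // directory containing the Pulumi program
      });
      const result = await stack.up();    // provision (or update) the cloud resources
      process.env.API_BASE_URL = result.outputs.apiUrl?.value; // expose stack outputs to the tests
    }, 600_000);

    // Tear the infrastructure down after the suite, so nothing is left running.
    afterAll(async () => {
      await stack.destroy();
    }, 600_000);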


Alternatives to Pulumi

For infrastructure management, there are multiple choices. In the IaC territory, AWS does support other technologies besides Pulumi. One such technology is Terraform [28], which is based on its proprietary HCL (HashiCorp Configuration Language) alongside JSON for templates. Other choices include Chef [54], based on the Ruby programming language. Additionally, AWS provides its own previously mentioned CloudFormation IaC service, which uses JSON and YAML templates to deploy and manage cloud infrastructure [14]. AWS also has the previously mentioned SAM toolkit for developing Lambdas, which works as an extension of CloudFormation when deploying through it [15].


5 Applying The Approaches

This chapter describes the application of the automatic testing approaches discussed in Chapter 3 to the Serverless Framework’s full-stack sample application, introduced in Chapter 4. The application does not have any pre-existing tests in its original Git fork. Therefore, there were no existing dependencies regarding testing frameworks and other tooling choices.

The application tests can be found under the directory api/tests within the GitHub repository of this thesis. Additional configuration for Cypress, including the Pulumi deployment and tear-down processes, can be found under the api/cypress directory. The README.md file in the root of the repository contains instructions for executing the applications and their tests. The general scope of the tests is the back end serverless function, with end-to-end tests extending to the entire full-stack application. The Pulumi application and the front end application were not tested separately.

The unit test suites reached a total coverage of 100% in all areas for the testable units. The integration test cases reached a statement coverage of 93.33%, a branch coverage of 74.07%, a function coverage of 100%, and a line coverage of 100%. The combined coverage is 100% in all areas; refer to Appendix A for complete coverage tables.

5.1 Testing Tools

This section describes the tools used for testing, as well as alternatives available for each tool.

5.1.1 General Purpose Testing Framework

The unit and integration tests for the full-stack application were implemented using the Jest testing framework. Jest was developed by Facebook, and it was chosen for the unit and integration tests based on its simplicity, flexibility, previous experience, and support for a wide range of frameworks and technologies, including TypeScript. It is built for the Node.js JavaScript runtime and is a general-purpose testing framework. Jest comes packaged with the most common functionalities needed in application testing: setup and tear-down processes, mocking, assertions, snapshots, coverage statistics, parallel testing, and support for asynchronous testing. [23]
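As a brief illustration of the packaged functionality, the sketch below combines a Jest setup hook, a mock function, and assertions in one test file; the greet function is a made-up example rather than part of the sample application.

```typescript
// A trivial unit under test, defined inline for the sake of the example.
const greet = (name: string, log: (message: string) => void): string => {
  const greeting = `Hello, ${name}!`;
  log(greeting);
  return greeting;
};

describe("greet", () => {
  let log: jest.Mock;

  // Jest's setup hook runs before each test in this block.
  beforeEach(() => {
    log = jest.fn();
  });

  it("returns the greeting and logs it", () => {
    expect(greet("Ada", log)).toBe("Hello, Ada!");
    expect(log).toHaveBeenCalledWith("Hello, Ada!");
  });
});
```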


Alternative Frameworks

There are multiple alternative general-purpose testing frameworks for JavaScript (and TypeScript).

One alternative is Mocha [66], developed by the OpenJS Foundation, which has less functionality packaged out of the box than Jest. However, the syntax and functionality offered by its API closely resemble Jest’s. For example, the setup and tear-down hook functionality is the same, with minor naming differences; the beforeAll hook used for test preparation in Jest is equivalent to the before hook in Mocha, as illustrated in the sketch below. Mocha is often paired with additional libraries, such as Chai [19] for assertions and Sinon.JS [61] for mocking, as it does not offer that functionality by itself.
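A minimal sketch of the same kind of test written with Mocha, Chai, and Sinon.JS follows (assuming a TypeScript setup with esModuleInterop enabled); the greet function is again a made-up example used only for illustration.

```typescript
import { expect } from "chai";
import sinon from "sinon";

// The same trivial unit under test as in the Jest example above.
const greet = (name: string, log: (message: string) => void): string => {
  const greeting = `Hello, ${name}!`;
  log(greeting);
  return greeting;
};

describe("greet", () => {
  // Mocha's before hook corresponds to Jest's beforeAll.
  before(() => {
    // One-time suite setup would go here.
  });

  it("returns the greeting and logs it", () => {
    const log = sinon.fake();
    expect(greet("Ada", log)).to.equal("Hello, Ada!");
    expect(log.calledWith("Hello, Ada!")).to.equal(true);
  });
});
```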

Jasmine [49] is another alternative testing framework that offers a set of functionalities and an API similar to Jest and Mocha, with built-in support for assertions and mocking.

5.1.2 End-to-End Testing Framework

The Cypress framework was chosen for E2E testing due to its ease of use, range of features, and previous experience. Cypress is more independent than its competitors that rely on Selenium, allowing it to provide a simpler testing API for writing tests. The test preparation process in Cypress differs slightly from Jest, as any setup and tear-down tasks that cannot be run in the test browser need to be executed separately by implementing a Cypress plugin. The Cypress plugins additionally allow executing predefined back end tasks from the test suite itself. As seen in Figure 5.1, Cypress also allows running the tests interactively, where it opens up a browser window allowing the developer to view the test execution in real time. The interactive mode also supports viewing a specific point in time during the test execution, including the state of the user interface. Cypress supports Chrome, Firefox, Edge, Electron, and Brave web browsers for test automation. [22]
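The sketch below, shown as two short files in one listing, outlines what a Cypress plugin task and a simple E2E test could look like for this kind of application; the seedDatabase task, selectors, and URL are illustrative assumptions rather than the actual test code of this thesis.

```typescript
// cypress/plugins/index.ts – registers a back end task that runs outside the browser.
const pluginConfig: Cypress.PluginConfig = (on) => {
  on("task", {
    // A hypothetical task that could, for example, reset the DynamoDB test data.
    seedDatabase() {
      // ...seed or reset test data here...
      return null; // Cypress tasks must return a value or null
    },
  });
};
export default pluginConfig;

// cypress/integration/login.spec.ts – the browser test itself.
describe("Login flow", () => {
  beforeEach(() => {
    cy.task("seedDatabase"); // runs the plugin task on the Node.js side
    cy.visit("http://localhost:3000");
  });

  it("logs the user in with valid credentials", () => {
    cy.get("input[name=email]").type("user@example.com");
    cy.get("input[name=password]").type("correct-horse-battery-staple");
    cy.contains("button", "Sign In").click();
    cy.contains("user@example.com").should("be.visible");
  });
});
```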

Alternative Frameworks

The Selenium framework is one of the most popular choices for E2E testing with web browsers. The Selenium WebDriver automates browser execution natively in a somewhat similar manner to the Cypress framework. The key difference is that Cypress runs within the browser, whereas Selenium is a separate standalone application. Because of this, Cypress lacks some features that Selenium offers, such as remote browser control, multiple browser tabs, and multiple parallel browsers, and it is limited to one web URL per test suite. [62, 22]


Figure 5.1 Cypress in interactive mode while running tests on Chrome.

Besides the default Selenium WebDriver implementations, there are multiple E2E frameworks and libraries that are built on top of the Selenium core or that support it through the W3C WebDriver standard [73] derived from Selenium. For example, Nightwatch.js and WebdriverIO both support Selenium along with other drivers built using the aforementioned W3C standard [48, 46].

5.2 Unit Tests

As expected, based on the guidebook literature review in Chapter 3, the unit tests written for the back end API are not affected by the application being a serverless application. For the most part, the application implementation was already divided into small testable units, removing the need to refactor the application code in any significant way. As the back end application is a simple API consisting of three endpoints, achieving a high level of test coverage was not difficult. The unit test cases, seen in Table 5.1, were devised by analyzing the pre-existing behaviour of the application code.

The path testing white-box technique was utilized in the unit tests. The goal of the path testing technique is to comprehensively test all possible code execution paths [69]. Each unit was tested in isolation from other units by mocking their implementations. The mocks were created through Jest’s spy functionality, which spies on function calls to a specified module and injects mocks that replace the original function implementations. The integrations with AWS DynamoDB were mocked using the mocking library aws-sdk-mock that is available through NPM. The library provides the ability to inject mocks into the AWS SDK library similarly to how Jest does, preventing any real interactions with DynamoDB from taking place. The main app.ts file of the API application itself is not unit tested, as it mainly maps the controller functions to Express HTTP endpoints, which are tested through integration testing. The Express middleware configuration used to authorize incoming HTTP requests to the API was also left to be tested in the integration tests.

An example of how the tests and mocking libraries work together can be seen in Program 5.1. Before each test, the necessary DynamoDB calls are mocked, and the mock is cleared after each test. Jest is told to spy on units from other modules using its spyOn function, and the resulting spies are then used to inject mock implementations. After the register function has been executed, the expect assertions are used to ensure that the function worked and called the mocks as expected. As can be seen from Program 5.1, setting up the tests and mocks generally takes more lines of code than the actual test itself.
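Since Program 5.1 is only referenced here, the sketch below illustrates the described pattern in a simplified form: aws-sdk-mock replaces the DynamoDB DocumentClient calls and jest.spyOn injects a mock for another module. The module paths, function signatures, and DynamoDB operations are assumptions made for illustration and do not reproduce the thesis listing.

```typescript
import * as AWS from "aws-sdk";
import * as AWSMock from "aws-sdk-mock";
// Hypothetical module paths standing in for the application's own modules.
import * as utils from "../src/utils";
import { register } from "../src/users/model";

describe("user model register", () => {
  beforeEach(() => {
    AWSMock.setSDKInstance(AWS);
    // Pretend the user does not exist yet and that writes succeed.
    AWSMock.mock("DynamoDB.DocumentClient", "get", (params: any, callback: Function) => {
      callback(null, {});
    });
    AWSMock.mock("DynamoDB.DocumentClient", "put", (params: any, callback: Function) => {
      callback(null, {});
    });
    // Replace another module's implementation through Jest's spy functionality
    // (hashPassword is assumed to be asynchronous here).
    jest.spyOn(utils, "hashPassword").mockResolvedValue("hashed-password");
  });

  afterEach(() => {
    AWSMock.restore("DynamoDB.DocumentClient");
    jest.restoreAllMocks();
  });

  it("registers a valid user account (test case U4.1)", async () => {
    const user = await register({ email: "user@example.com", password: "secret" });
    expect(user).toBeDefined();
    expect(utils.hashPassword).toHaveBeenCalledWith("secret");
  });
});
```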


Table 5.1 The test cases for unit tests.

Unit name                        | Id    | Test description
Registration controller          | U1.1  | Responds with status 200 OK and signed JWT on successful registration.
                                 | U1.2  | Responds with status 400 Bad Request when user is already registered.
                                 | U1.3  | Responds with status 500 Internal Server Error when Lambda environment is invalid or DynamoDB query failed.
Login controller                 | U2.1  | Responds with status 200 OK and signed JWT on successful login.
                                 | U2.2  | Responds with 404 Not Found if user is not registered.
                                 | U2.3  | Responds with 401 Unauthorized when passwords do not match.
                                 | U2.4  | Responds with 500 Internal Server Error when Lambda environment is invalid or DynamoDB query failed.
User controller                  | U3.1  | Responds with status 200 OK and the user information of the authenticated user.
User model register              | U4.1  | Registers a valid user account.
                                 | U4.2  | Throws an error if Lambda environment is lacking.
                                 | U4.3  | Throws an error if email or password is missing or invalid.
                                 | U4.4  | Throws an error if user is already registered.
User model getByEmail            | U5.1  | Returns a user entity if found.
                                 | U5.2  | Returns null if the user was not found.
                                 | U5.3  | Throws an error if Lambda environment is lacking.
                                 | U5.4  | Throws an error if email is missing or invalid.
User model getById               | U6.1  | Returns a user entity if found.
                                 | U6.2  | Returns null if user was not found.
                                 | U6.3  | Throws an error if Lambda environment is lacking.
                                 | U6.4  | Throws an error if id is empty or missing.
User model convertToPublicFormat | U7.1  | Converts user to a publicly displayable format.
                                 | U7.2  | Is able to convert any object (with the same key-values).
User model formatUserEntity      | U8.1  | Parses user entity and adds id and email values to the object.
Utilities validateEmailAddress   | U9.1  | Returns true for valid email addresses.
                                 | U9.2  | Returns false for invalid email addresses.
Utilities hashPassword           | U10.1 | Hashes a password correctly.
Utilities comparePassword        | U11.1 | Returns true when passwords match.
                                 | U11.2 | Returns false when passwords do not match.
