
Benefits of IPv6 in Cloud Computing

Vlad Gâdescu

Benefits of IPv6 in Cloud Computing

Master of Science Thesis

Subject and examiners approved by the Faculty of Computing and Electrical Engineering Council on 11 January 2012
Examiners: Prof. Jarmo Harju, MSc. Aleksi Suhonen

ABSTRACT

TAMPERE UNIVERSITY OF TECHNOLOGY
Master's degree programme in Information Technology
Gâdescu, Vlad: Benefits of IPv6 in Cloud Computing
Master's thesis, 54 pages
June 2012
Major subject: Communication Networks and Protocols
Examiners: Prof. Jarmo Harju and MSc. Aleksi Suhonen
Keywords: cloud computing, IPv6

Efficiency is one of the main focuses of the world today. As the entire world relies on computers and networks, their efficiency is of utmost importance. Energy, processing power, storage and data access must all be used and offered in the most profitable and economical way possible. The majority of companies and businesses cannot afford the implementation and deployment of huge data centres for their specific requirements. Thus, from the need for efficiency in both business and IT environments, the idea of cloud computing emerged: online infrastructures in which clients buy or rent processing power and storage according to their needs. In recent years cloud computing has become the new evolutionary trend of the internet. Due to its material and monetary merits, it has gained increasing popularity. IPv6 was developed as a response to the limitations of IPv4; it brings new features and advantages which fit the cloud computing paradigm and provide the means to develop new techniques from which modern data centres can profit. The development and implementation of IPv6 have been underway for some years and, consequently, its improvements over IPv4 are well outlined and acknowledged. The thesis is based on research into the advantages of IPv6 over IPv4 and how they can improve the operation and efficiency of cloud computing. The first part presents background information, giving a brief view of what cloud computing and virtualization are, as well as a few problems that customers might find in the cloud computing idea.

As the advantages of IPv6 over IPv4 are widely known, but their benefits for data centres are rarely exposed, the second part of the thesis focuses on how a cloud computing environment can benefit from IPv6 by bringing IPv6 and cloud computing together. As a result, the thesis tries to give both customers and network administrators a clearer picture of why IPv6 and cloud computing should be used together.

PREFACE

The thesis was written as a response to the ever-growing idea of cloud computing and the low deployment of IPv6. It is based on personal research done both at the university and in my spare time, and it depends heavily on papers published by the IEEE and on RFCs. During the writing process I concluded that the background information should be presented more thoroughly in order to better support the benefits IPv6 brings to the cloud concept. Moreover, the benefits are presented in such a way that the thesis might be used as a source to motivate a faster deployment of IPv6 in cloud computing data centres. I would like to thank Prof. Jarmo Harju and MSc. Aleksi Suhonen for their guidance in the writing process and with the formalities of the university thesis process, and my friend who helped me with proofreading.

Tampere, 29 May 2012

Vlad Gâdescu (vlad@gadescu.com)
Str. Carpenului, Nr. 3
500256 Brașov
Romania

TABLE OF CONTENTS

ABSTRACT
PREFACE
TABLE OF CONTENTS
LIST OF FIGURES
LIST OF ABBREVIATIONS
1 Introduction
2 IPv4 and IPv6 differences
3 Virtualization and IPv6
  3.1 VPNs
    3.1.1 Traditional versus virtualized ISPs
  3.2 Virtualization software - hypervisors and virtual networking
  3.3 Server farms and potential problems
    3.3.1 Traffic overhead and load balancing
    3.3.2 Localization and migration
4 Cloud computing
  4.1 Software infrastructure versus hardware infrastructure
  4.2 Implementations of cloud computing
  4.3 Insecurities and problems in cloud computing
  4.4 Future of cloud computing
5 IPv6 benefits in cloud computing
  5.1 Security benefits
    5.1.1 NAT avoidance
    5.1.2 IPsec
  5.2 Network management benefits
    5.2.1 IPv6 addressing and interface identifiers
    5.2.2 Stateless approach
    5.2.3 Address validation
  5.3 QoS benefits
  5.4 Performance benefits
    5.4.1 Load balancing
    5.4.2 Broadcast efficiency
  5.5 Mobility benefits
6 Conclusions
7 Bibliography

LIST OF FIGURES

Figure 1. Differences between IPv6 and IPv4
Figure 2. IPv4 header [3]
Figure 3. IPv6 header [4]
Figure 4. Fragmentation header [4]
Figure 5. Non-virtualization vs. virtualization
Figure 6. Type 1 and Type 2 hypervisors
Figure 7. Hardware infrastructure and software infrastructure
Figure 8. Cloud computing deployment models
Figure 9. Basic NAT
Figure 10. AH and ESP header insertion
Figure 11. Authentication Header [3]
Figure 12. IPv4 vs. IPv6 AH authentication
Figure 13. IPv6 addressing [41]
Figure 14. Migration and IP based authentication
Figure 15. Multiple addresses per interface and randomly generated addresses
Figure 16. Anycast
Figure 17. Structure of a data centre and exponential number of machines
Figure 18. Mobile IPv4
Figure 19. MIPv6 and VM migration

LIST OF ABBREVIATIONS

AH - Authentication Header
ARP - Address Resolution Protocol
AS - Autonomous System
CDN - Content Distribution Network
CN - Correspondent Node
DNS - Domain Name System
DSCP - Differentiated Services Code Point
ESP - Encapsulating Security Payload
FA - Foreign Agent
HA - Home Agent
IaaS - Infrastructure as a Service
ICMPv6 - Internet Control Message Protocol Version 6
IKE - Internet Key Exchange
IPv4 - Internet Protocol Version 4
IPv6 - Internet Protocol Version 6
ISP - Internet Service Provider
MAC - Media Access Control
MIPv4 - Mobile IPv4
MLD - Multicast Listener Discovery
MN - Mobile Node
MTU - Maximum Transmission Unit
NAT - Network Address Translation
ND - Neighbor Discovery Protocol
NIC - Network Interface Card
OS - Operating System
PaaS - Platform as a Service
QoS - Quality of Service
RFC - Request For Comments
SA - Security Association
SaaS - Software as a Service
SLA - Service Level Agreement
TCO - Total Cost of Ownership
TCP - Transmission Control Protocol
UDP - User Datagram Protocol
uRPF - Unicast Reverse Path Forwarding
VLAN - Virtual Local Area Network
VM - Virtual Machine
VPN - Virtual Private Network
WAN - Wide Area Network

1 Introduction

The internet has evolved over the past two decades at a fast pace, beyond anyone's expectations. It has offered new solutions for businesses and for personal development. It has evolved from a text-only environment to an interactive one with all sorts of media. At the beginning of this decade it made a major shift to Web 2.0 and, again, everything changed. Every now and then a new development perspective arises that changes the status quo of the virtual world and how we perceive it. Nowadays it is cloud computing. Cloud computing is a technology that changes the way businesses have to think about using IT resources and the internet. It makes use of virtualization to provide new kinds of services, from software to hardware. Commercial organizations can now make use of powerful IT infrastructures at lower cost. Processing power can be accessed on demand and within a budget. These are great advantages, not only from the cost point of view, but also because they open new possibilities of development for companies that could not afford large IT infrastructures. Cloud computing is still in its infancy; it is an emerging technology. It provides great benefits, but it is open to further improvement. New protocols and ideas can be coupled with virtualization to provide greater value or to solve existing problems. One protocol that can do this is IPv6. But the IT world is quite reluctant to adopt new technologies which, at first glance, do not provide any palpable benefits or advantages. The same can be said about businesses and business managers, who see no reason to invest money in something that is not broken. As a consequence, this leads to the inflexibility of the internet.
The ossification of the internet (its inflexibility and reluctance toward new technologies) [1] and the wide spread of IPv4 in networks around the globe have made companies, big or small, unenthusiastic about implementing the new IPv6. Proof of this situation is the low deployment of IPv6 on the internet, specifically in virtual infrastructures, which makes this an issue for companies and customers alike. Security, privacy, reliability, fast resource provisioning, mobility and other problems now focused on virtual environments can, to some degree, be improved or solved by taking the next step: the implementation and development of infrastructures based on IPv6. Development has always been forced by key elements that came at the right moment. The internet spread all over the globe and pushed IPv4 beyond anyone's expectations; as stated by Peter Loshin, this was the killer application for the older protocols [2]. Nowadays, cloud computing is the new killer application for IPv6. It can be argued that the new internet will evolve around cloud computing and virtualization. IPv6 will have to be part of this progress, but for that to happen, its proper advantages have to be pointed out. A clear examination is needed of why the two technologies will help each other grow.

Businesses as well as regular customers could do with a clear view of why cloud computing makes perfect sense with IPv6. The benefits of IPv6 in the cloud computing environment have to be properly outlined and explained. This thesis shows the benefits of, and the need for, implementing IPv6 so that services and virtual environments may develop further. The new protocol fits the needs of the new internet much better, which means that companies as well as customers need to know the advantages they will gain by switching to it. The thesis is structured as follows: Chapter 2 makes a brief comparison between IPv6 and IPv4 and outlines the main differences of the next generation IP; Chapter 3 offers basic background and concepts of virtualization; Chapter 4 presents the cloud computing idea along with information on how it is deployed in the network, with a clear distinction between online hardware and software; Chapter 5 shows the benefits of IPv6 in the context of cloud computing and how they affect cloud computing data centres; finally, Chapter 6 presents the concluding remarks.

2 IPv4 and IPv6 differences

It is widely known that TCP/IP is the backbone protocol stack with which the internet grew from anonymity and low coverage to worldwide spread and availability. TCP/IP went through a period of modification and testing until it was adopted by the major players in networking at the time: universities and the military. However, it was designed in a period in which computer networks were in their infancy. They were seen as the pinnacle of the computer world, something revolutionary, but nonetheless still not widely adopted. So the TCP/IP stack started to be used in these small, rather primitive networks. No one thought that, at some point, computers all over the world would be interconnected. Consequently, due to the rapid and unforeseen development of networks, IPv4 is at a point at which not only have its limits been reached, but there are serious drawbacks that greatly limit current internet services. IPv4 was developed around the idea of interconnecting dedicated networks, for example different universities or research centres, government facilities and so on. At that time the number of available addresses (2^32) was seen as more than enough for all existing networks. Security was not a concern; routing tables and router performance with respect to IP header processing were not taken into consideration. However, all of these are now of utmost interest and concern. During the 1990s, effort was put into creating a new protocol that would address all the new problems that were not foreseen when the old IPv4 was created. Thus, IPv6 came to fruition. One would think that by now IPv6 would be implemented worldwide. Yet even though the internet has passed through different concepts in resource management, from horizontal and vertical scalability to cloud computing, IPv6 is still not widely and fully deployed.
Figure 1 presents some of the differences between IPv4 and IPv6:

- IPv6 has an address length of 128 bits, meaning that the pool of addresses will be large enough to serve all present and future hosts on the internet. Moreover, addresses are differentiated into blocks that are meant for specific functions; they are valid only in specific parts of the network, identified by their "scope": link-local, unique local and global. Besides the usual unicast and multicast addresses, IPv6 includes a new anycast address format.
- IPsec support is mandatory in the new protocol, an aspect that can solve many of the security issues that arise with careless users.
- QoS can now be served directly through a field in the IPv6 header, which opens new possibilities in how to manage communications and application traffic.
- Fragmentation does not happen along the route with IPv6; this is beneficial because it decreases the work a router has to do when packet fragmentation is needed.

- The checksum is not included in the header; as with fragmentation, this relieves some of the burden on the router, since the checksum no longer needs to be recalculated every time a field like the hop limit changes. This function can instead be taken over by upper layer protocols or ICMPv6.
- In IPv6, the Neighbor Discovery protocol replaces broadcasts and the ARP protocol, reducing network floods through more efficient LAN communication.
- With IPv6, any host on the network can autoconfigure itself. Manual configuration, or even DHCP servers, are no longer needed unless a certain situation requires them.

Figure 1. Differences between IPv6 and IPv4:

- Addresses: 128 bits in length in IPv6; 32 bits in length in IPv4.
- IPsec: support is mandatory in IPv6; optional in IPv4.
- QoS: handled in IPv6 through the flow label field in the header; no QoS identifier in the IPv4 header.
- Fragmentation: in IPv6 only the sending node fragments packets; in IPv4 both routers and hosts can fragment.
- Checksum: none in the IPv6 header; present in the IPv4 header.
- IP-to-MAC resolution: done through multicast Neighbor Solicitation in IPv6; through ARP broadcast in IPv4.
- Broadcast: replaced in IPv6 by the link-local scope all-nodes multicast address; IPv4 uses broadcast addresses to send traffic to all nodes on a subnet.
- Configuration: automatic in IPv6, no DHCP required; manual or DHCP configuration in IPv4.
- Minimum supported packet size: 1280 bytes in IPv6 (no en-route fragmentation); 576 bytes in IPv4 (may be fragmented).

The differences between the two versions of IP are each meant to improve the overall performance of the protocol and to increase the security, mobility and flexibility of IP itself. However, as with IPv4, the internet has evolved in a direction that could not have been predicted; nowadays, IPv6 has to encompass the needs of the new internet paradigm. The idea of cloud computing is quite recent, so these improvements were not meant explicitly for it.
Therefore, the benefits and additions of IPv6 have to be put in context. A first step, however, is to present the most obvious differences between the protocols.
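The address-space and scope differences listed above can be checked directly with Python's standard ipaddress module. The following sketch is an illustration added for this text, not part of the thesis; the example addresses are arbitrary:

```python
import ipaddress

# Address-space sizes: 2**32 addresses in IPv4 versus 2**128 in IPv6
v4 = ipaddress.ip_network("0.0.0.0/0").num_addresses
v6 = ipaddress.ip_network("::/0").num_addresses
print(v4)            # 4294967296
print(v6 == 2**128)  # True

# Scoped IPv6 blocks mentioned above: link-local (fe80::/10) and unique local (fc00::/7)
print(ipaddress.ip_address("fe80::1").is_link_local)  # True
print(ipaddress.ip_address("fd00::1").is_private)     # True
```

The module classifies any address into the scoped blocks discussed in the text, which is convenient when reasoning about which addresses are reachable beyond a single link.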

Figure 2 and Figure 3 depict the headers of the two protocols. As can be clearly seen, the IPv6 header is simpler, with fewer fields; its fixed part carries fewer pieces of information than an IPv4 header, even though it is twice as long (40 bytes versus 20 bytes). Some of the fields are common to the two headers, though their names differ. The version field indicates the IP version of the header: 4 in the case of IPv4 and 6 in the case of IPv6.

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|Version|  IHL  |Type of Service|          Total Length         |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|         Identification        |Flags|      Fragment Offset    |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|  Time to Live |    Protocol   |         Header Checksum       |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                         Source Address                        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                      Destination Address                      |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                    Options                    |    Padding    |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Figure 2. IPv4 header [3]

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|Version| Traffic Class |           Flow Label                  |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|         Payload Length        |  Next Header  |   Hop Limit   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
+                                                               +
|                                                               |
+                         Source Address                        +
|                                                               |
+                                                               +
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
+                                                               +
|                                                               |
+                      Destination Address                      +
|                                                               |
+                                                               +
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Figure 3. IPv6 header [4]

Both the type of service field in IPv4 and the traffic class field in IPv6 have the same function: to differentiate service classes in QoS techniques. It has to be stated that both of these fields have been modified from their original purpose and are now used for the DSCP in DiffServ QoS [5], as will be presented later.
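To make the layout of Figure 3 concrete, the first 32-bit word of an IPv6 header (version, traffic class, flow label) can be assembled and decoded in a few lines. This sketch is an illustration added for this text; the traffic class and flow label values are arbitrary examples:

```python
def first_word(version=6, traffic_class=0, flow_label=0):
    """Pack the first 32-bit word of the IPv6 header (Figure 3):
    4-bit version, 8-bit traffic class, 20-bit flow label."""
    return (version << 28) | (traffic_class << 20) | flow_label

# Traffic class 0xB8 carries DSCP 46 (Expedited Forwarding) in its upper 6 bits
w = first_word(traffic_class=0xB8, flow_label=0x12345)
print(w >> 28)                  # 6   (version)
print((w >> 20) & 0xFF)         # 184 (traffic class)
print(((w >> 20) & 0xFF) >> 2)  # 46  (the DSCP portion used by DiffServ)
print(hex(w & 0xFFFFF))         # 0x12345 (flow label)
```

The shift amounts follow directly from the field widths in the figure, which is why a fixed-layout header is cheap for routers to process.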

The total length and payload length fields define the packet length. In IPv4 this field can indicate a packet of a maximum length of 65,535 bytes. However, IPv6 is meant to carry heavier loads of traffic, much more than was originally envisioned for version 4. As a result, if the payload length field has a value of 0, the packet is considered a jumbogram and can carry much more data than the MTU would normally allow; the maximum length can reach 4,294,967,295 bytes. The time to live and hop limit fields limit the propagation of an IP packet; the "time" is represented by how many routers the packet passes through, each router decreasing the value by one. The daisy chain concept is a new addition to the next generation IP. It gives the protocol greater flexibility through the use of different headers that each perform only a specific task. These headers are added and removed based on the needs of the data transmission. The next header field in IPv6 points to the existence, if that is the case, of another header behind the one being processed. Accordingly, a chain of sequential headers can be created, each one having a clear function and a simple structure. In IPv4 this was possible through the use of the options field, but that approach makes the protocol much more inflexible and harder to process. Headers in IPv4 have variable length, while the IPv6 header is fixed; hence the IHL field defines the total length of a version 4 header, a field that is not needed in version 6. The flags and fragment offset fields are used when packets are fragmented at the source or along the data path. In IPv6 these were eliminated, because fragmentation at routers is forbidden in version 6. However, the source of the transmission can still fragment the packet.
In this case a new header, which takes over the fragmenting and reassembly responsibilities, is added after the fixed IPv6 header, as depicted in Figure 4.

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|  Next Header  |   Reserved    |      Fragment Offset    |Res|M|
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                         Identification                        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Figure 4. Fragmentation header [4]
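As a rough illustration of the fixed header and the daisy-chained extension headers, the layouts of Figures 3 and 4 can be packed and walked with Python's struct module. This sketch is not from the thesis; all addresses and field values are arbitrary examples (the documentation prefix 2001:db8::/32 is used for the addresses):

```python
import struct

FRAGMENT = 44  # protocol number of the Fragment extension header
UDP = 17       # protocol number of UDP

src = bytes.fromhex("20010db8000000000000000000000001")  # 2001:db8::1
dst = bytes.fromhex("20010db8000000000000000000000002")  # 2001:db8::2

# Fixed 40-byte IPv6 header: version=6, payload length=8 (just the fragment
# header), next header=44 (Fragment), hop limit=64, then the two addresses.
fixed = struct.pack("!IHBB16s16s", 6 << 28, 8, FRAGMENT, 64, src, dst)

# 8-byte Fragment header: inner next header=UDP, reserved, offset 0 with
# the M (more fragments) bit set, and an arbitrary identification value.
frag = struct.pack("!BBHI", UDP, 0, (0 << 3) | 1, 0xABCD)

packet = fixed + frag

# Walking the chain: read the version nibble, then follow next-header values.
print(packet[0] >> 4)  # 6  (version)
print(packet[6])       # 44 (a Fragment header follows the fixed header)
print(packet[40])      # 17 (the fragment carries UDP)
print(len(packet))     # 48 bytes in total
```

Each extension header states what follows it, so a receiver processes the chain sequentially, exactly as the next header field discussion above describes.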

3 Virtualization and IPv6

Efficiency is always one of the main focuses in the computer world. Resources, storage and networks must meet high criteria of efficiency while keeping a low degree of coupling between these entities. Virtualization is a concept that made all of this possible and provided specialists with the means of achieving high yields from high resource use. Productivity, resource management and cost effectiveness can thus reach levels that are very tempting not only to IT specialists but also to customers and business owners. Consequently, the virtualization paradigm has started changing the existing internet and has created new paths for development. In addition, virtualization can be seen as a disruptive technology that will drive away the inflexibility of today's internet. It has to be pointed out that virtualization and IPv6 can both profit from each other. Through a paradigm shift in the internet, IPv6 now has the opportunity to be deployed faster, but this is not enough. The benefits IPv6 brings to virtualization have to be pointed out and explained clearly, because IPv6 brings more subtle advantages that are not clearly defined and differentiated from already existing protocols. Virtualization can be applied to a broad range of concepts, from overlay networks to software, in which servers and OSs are created and run in virtual machines. IPv6 can bring improvements in performance, ease of use and troubleshooting. The processing burden on networking equipment is decreased through the simple representation of IPv6 headers and structures. The new functionalities of ICMPv6 provide more efficient ways for hosts to communicate with each other and can prove to be a solution to the L2 broadcast and scaling problem presented in [6, 7].

Last but not least, the mandatory support of IPsec in IPv6 implementations makes all communications more secure through proper authentication and encryption, a requirement that should be mandatory in virtual environments. Based on the ideas presented in [8], it can be argued that the role of traditional ISPs will change in the future. Nowadays ISPs offer not only infrastructure access but services too. These two roles may at some point be divided, subsequently creating separate entities. That means the same separation has to be made when talking about protocols. IPv6 will have a greater impact on the underlying physical infrastructure of the virtual environment, but it will provide some advantages to the software component as well, for example better VM migration. Businesses and potential customers of cloud services will have to make the same separation based on what they need from cloud services: renting a whole cloud hardware infrastructure or only software services.

3.1 VPNs

Virtualization as a concept has been around for many years [9], but not until recently has it gripped the whole internet through its use in different services. It can be said that this idea is still in its infancy and still has to prove itself and expose all the pros and cons attached to it. Fortunately, VPNs have been around longer, proving their usefulness and making it possible to extrapolate their benefits to the virtualization concept. VPNs can be considered a type of virtualization, offering a method of creating an overlay, secured network over the public internet. They offer a little history lesson and a good example of how virtualization can benefit from the implementation of new protocols. Nowadays VPNs are starting to take advantage of IPv6, thanks to the benefits and improvements it brings. It can be assumed that cloud computing will follow the path of VPN technology and profit from the enhancements IPv6 offers. IPsec is used in VPNs, but with IPv4, IPsec is harmed by NAT. That makes for an obvious and direct benefit of switching to IPv6: Authentication Headers can now be used, because NAT is obsolete in IPv6 networks and environments. Furthermore, [10] clearly states some of the advantages of implementing VPNs over IPv6. This outlines a first step towards the goal of this thesis, showing that the new protocol has a beneficial impact on virtualization. Cloud computing and VPNs are two concepts that are tightly connected. Virtual networks provide secure means to connect to remote data centres, as well as an efficient and robust way for roaming clients to access their services and data. Therefore, it can be concluded that cloud services benefit from more capable IPv6 based VPNs.

3.1.1 Traditional versus virtualized ISPs

As virtualization increases its presence on the internet, new role shifts will take place in the traditional ISP. As stated before, a proper delimitation will emerge between hardware infrastructure and software infrastructure providers. Infrastructure providers will manage hardware equipment and create the underlying physical networks, taking the role of traditional ISPs. Service providers will offer services, from virtual networks to different software services, creating a new entity: the virtual ISP. IPv6 transition and implementation can play a catalyst role in the development of both of these new providers. At present, however, the low deployment of IPv6 shows reluctance toward implementing new protocols and supports the idea of internet ossification [1], something that is not adequate in a fast growing online environment. The ubiquity of virtualization, and the fact that it can greatly benefit from the adoption of the next generation IP, will force the internet to relinquish its old habits. Consequently, there seems to be a strong correlation between the future of the internet, virtualization and IPv6. As seen in [11], virtualization offers great test beds for the development of technologies and offers new ways to use existing infrastructures. It can greatly expand the uses and possibilities of the internet.

(16) The players that will have a place in the development and the future of the internet will have to understand that inflexibility is not something that is wanted in the great structures that are today’s internet. But, nonetheless, they will have to properly understand the advantages that new technologies will have to offer. As a result, tradition and virtualized ISPs will have to know the benefits that IPv6 will bring to their service, whether it is network wise or software wise.. 3.2 Virtualization software - hypervisors and virtual networking When talking about cloud computing and all that is “virtual”, everything reduces to two essentials: virtualization software and hypervisors. This is the corner stone of all that the Internet is becoming. Without advances in this area, the idea of computing in the cloud would be unrealizable. Virtualization software gives the chance of multiple OSs to run on the same machine, it gives the opportunity to clearly separate resources independently of the underlying hardware, decrease cost, improve management and most importantly, to increase the efficiency of resource use. Hypervisors and virtual network adapters are two key software components that made virtualization possible.. Figure 5. Non virtualization vs. virtualization The hypervisor is software that shares the physical server resources between several virtual machines Figure 5. It separates the guest OS from the underlying host, or hardware. It controls the CPU, memory, I/O operations, in such a way that the VM instances “think” they can access all the actual resources of the server. Moreover, all guest OSs work without knowing about each other’s existence on the same hardware. Hypervisors are classified into two categories, as seen in Figure 6. Type one, or bare metal, implies that the hypervisor is installed and operates directly between the hardware and the VMs. The type 2 hypervisor is set up in an already existing installed operating system, the host OS. 
Hypervisors and their advantages to computing would be of no use if the VM could not connect to the outside world, to the network.

Regardless of the type, hypervisors have to use virtual network interfaces in order to create a connection between the guest OS and the real NIC.

Figure 6. Type 1 and Type 2 hypervisors

Network adapters are present in the virtualized space. They make communication between VMs, and between VMs and the outside world, possible. But contrary to the traditional network card, they are not represented by any kind of hardware. They are implemented and work only at an abstract, software level, meaning that their performance and ability to manage network traffic is paramount to the optimum functionality of the VM. Nevertheless, despite its software representation, the virtual adapter is seen by the guest operating system as a proper physical one. Furthermore, the network, host and protocols do not make any difference between a NIC and its virtual counterpart. That pushes the importance of the virtual card in the virtualized network even further. Proper studies and tests have to be undertaken before deployment of data centres can occur. Functionalities differ from one virtual network device to another, as does performance in similar conditions. [12] Both hypervisors and virtual network adapters lead eventually to the idea of virtual networks and virtual networking. This allows the VMs and their host operating system to communicate with each other as if they were using an actual, physical network. When deploying a data centre that will eventually support a cloud computing service, the benefits of certain virtual cards, as well as the pros and cons of the virtualization mode, have to be assessed. Virtualization mode refers to how the guest systems, the host OS and the outside network will interact with each other. These modes commonly include bridge mode,

NAT mode and host-only networking. In bridge mode the guest OS connects to the physical LAN as if it were an actual machine; thus a transparent and independent mode of accessing the outside network is possible. NAT mode offers the same functionality as an ordinary NAT device: the guest OSs and the host all share the same IP and MAC address. Host-only networking restricts communication to the main machine and its hosted VMs. In this mode there is no interaction with the physical interfaces; rather, a loopback interface is created that facilitates the network traffic between the actual machine and its virtual instances. TUN/TAP is an open source virtual adapter that is used, in some virtualization software, to implement scenarios such as the ones described before. It is composed of two modules: TUN, which operates at layer 3 of the OSI model, dealing with IP packets and routing, and TAP, which simulates an Ethernet adapter that handles incoming and outgoing frames. It is used mostly in Linux-based virtualization software, from tunnelling protocols (OpenVPN, Hamachi) to virtual machine networking (KVM, VirtualBox). Other common virtualized devices are: AMD PCNet PCI II (Am79C970A), AMD PCNet FAST III, Intel PRO/1000 MT Desktop (82540EM), Intel PRO/1000 T Server (82543GC) and Intel PRO/1000 MT Server (82545EM). [13, 14] As stated in [13, p. 88], virtualization software has some limits that have to be taken into account with respect to jumbo frames. This supports the idea that a proper assessment of the network driver, the hypervisor and the network virtualization mode has to be carefully made and correlated with the business plan so that optimum performance is achieved. IPv6 and its functionalities have to be properly evaluated based on the driver chosen or shipped by default with the hypervisor. Only at this layer is IPv6 able to effectively demonstrate its new features and advantages to the networking environment.
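To make the TUN/TAP mechanism more concrete, the following minimal sketch (Python, Linux-only) packs the request structure that virtualization software hands to the kernel when it creates such a virtual adapter. The flag constants are the stable values from the Linux TUN/TAP ABI; the `ifreq` helper name and the interface name "tap0" are illustrative, not part of any particular hypervisor's code.

```python
import struct

# Flag values from the Linux TUN/TAP interface (<linux/if_tun.h>)
IFF_TUN   = 0x0001  # layer-3 device: raw IP packets, no Ethernet header
IFF_TAP   = 0x0002  # layer-2 device: full Ethernet frames
IFF_NO_PI = 0x1000  # omit the 4-byte packet-information prefix

def ifreq(name: str, flags: int) -> bytes:
    """Pack the portion of struct ifreq read by the TUNSETIFF ioctl:
    a 16-byte, NUL-padded interface name followed by a 16-bit flags field."""
    if len(name) >= 16:
        raise ValueError("interface name too long")
    return struct.pack("16sH", name.encode(), flags)

# Actually attaching the device needs root on a Linux host, roughly:
#   fd = open("/dev/net/tun", "rb+", buffering=0)
#   fcntl.ioctl(fd, 0x400454CA, ifreq("tap0", IFF_TAP | IFF_NO_PI))  # TUNSETIFF
```

The TUN variant would use `IFF_TUN` instead, handing the hypervisor IP packets rather than Ethernet frames, which is why TAP devices are the ones used for bridged VM networking.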
3.3 Server farms and potential problems

Server farms, or data centres, represent the base of the new, centralized internet. They are characterized by large, efficiently cooled rooms in which numerous servers run simultaneously to provide different services, from pure processor power to database storage. Communication between these machines is done through high-speed networking technologies and equipment. This creates potential problems that can cause data centres to suffer performance issues, from high latency to network traffic bottlenecks and data corruption, making proper network planning mandatory so that these problems are avoided to a high extent. Different impediments in the data centres must be resolved so that efficiency is increased and the TCO stays within the planned limits. This is even more important for data centres that support cloud computing services or provide virtual infrastructures. VMs can reach large numbers, from hundreds to thousands and maybe even more; hence there is an increased burden on the network through which all these virtual machines communicate. Virtual server farms share the same potential problems as the traditional ones, where every single machine was represented by one and only one operating system. But

because virtualization brings high efficiency and consequently low idle times for servers, traffic in data centres tends to be more intense and denser. This correlates with an increased burden on the links that connect the physical servers to the routers and the outside networks. The ratio of VMs per server, as stated in [15, 16], is quite high, reaching 12:1 or 15:1, and potentially this ratio could increase even more in the near future. [17] Both application and network problems can compromise data centres. Even though these problems have a multitude of causes, IPv6 can bring some advantages that are worth taking into consideration. Server farms can have I/O bottlenecks, especially with storage and databases. The same can happen with traffic flow in the case of huge virtualized data centres. This creates a commonly occurring problem: congestion. The new control implementation of IPv6, ICMPv6, can better cope with huge traffic flows. It also provides new methods for management, troubleshooting and mobility, functions that can greatly improve the quality and reliability of any data centre and, as a result, of any cloud computing platform. As stated before, virtualization can increase the traffic in a data centre considerably, mainly because each VM is seen as an independent machine with its own IP and MAC address. That leads to an overwhelming number of broadcasts used by ARP functionalities. [18, 19] This problem can easily be solved with IPv6, through neighbour discovery and anycast addressing. In the following subchapters some of the problems commonly seen in server farms are presented and preliminary solutions are exposed.

3.3.1 Traffic overhead and load balancing

When talking about networking and data centres, traffic has a high impact on the overall performance of the services offered. TCP/IP packets are a sensitive issue that has long been treated with utmost respect and consideration.
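The ARP-broadcast burden mentioned above disappears in IPv6 because neighbour discovery is multicast-scoped: a solicitation is addressed only to the small group of hosts sharing the target's solicited-node multicast address (RFC 4291), not to every machine on the link. A minimal sketch of that address derivation (Python, standard library only):

```python
import ipaddress

def solicited_node(addr: str) -> ipaddress.IPv6Address:
    """Map a unicast IPv6 address to its solicited-node multicast
    group (RFC 4291): the fixed prefix ff02::1:ff00:0/104 combined
    with the low-order 24 bits of the unicast address."""
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return ipaddress.IPv6Address(base | low24)

# A neighbour solicitation for 2001:db8::1:800:200e:8c6c is sent to
# ff02::1:ff0e:8c6c only, so unrelated VMs on the link never see it.
```

In a virtualized server farm with thousands of VMs per link, replacing link-wide ARP broadcasts with these narrowly scoped multicasts is one of the concrete ways IPv6 reduces background traffic.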
Inexact use and deployment of the stack can result in a lot of data corruption, unnecessary data retransmission, traffic bottlenecks and high latency. This has led to many studies that monitored the different behaviours of the IPv4 and IPv6 protocols in different network environments. From the application point of view, both Ethernet and IP headers are seen as overhead. Unfortunately, the new IP protocol creates a bit of a problem in this respect. The IPv4 header is at least 20 bytes long, while the IPv6 header is twice as large, reaching 40 bytes. When IPsec is used with IPv6, the overhead of the protocols increases even more. This poses a problem, especially because the majority of packets in the internet are small-sized packets, in IPv4 [20] and in IPv6 as well [21]. That means that the overhead can compose quite a lot of the actual traffic, making data transmission less efficient. Furthermore, as presented in a small-scale test [22], IPv6 shows an increase in overhead compared to IPv4. This means that, potentially, the new protocol can create further problems in the data centres and consequently for the cloud computing idea. The flexibility of IPv6 is one of its strongest points. It can be customized according to the customer services that it has to serve and based on the traffic that it

handles. For example, the problems addressed earlier can be solved through customization of the header. In LAN communications IPsec can be opted out, therefore reducing the overhead and extra information. Depending on the level of customization that the provider wants to apply, a solution like the one presented in [21] can easily avoid some of the problems. Furthermore, header compression protocols [23] can be used with IPv6. This might solve the problem of the overhead while at the same time making use of the benefits and advantages that the next-generation IP provides. Load balancing is a method that offers current data centres the possibility to scale their computing power and distribute traffic and processing load across a multitude of servers. Furthermore, it offers a method to distribute the traffic load across different network segments. One of the load balancing methods uses a network balancer; an anycast addressing scheme can provide some help as well. Load balancers can benefit from the new functionality of neighbour discovery, improving the flexibility and scaling methods for the servers behind the load balancers. New servers added to the infrastructure can be easily detected. Additionally, as presented in [24], the direct routing load balancing method involves a problem regarding the handling of ARP requests. This issue may find its solution in full deployment of IPv6. Anycast addressing will reduce the need for broadcast floods and can actually improve some of the load balancing methods.

3.3.2 Localization and migration

Politics and government are always a possible problem in regard to any new initiatives and ideas that are foreign to them. Cloud computing is still in its infancy and has not yet provided enough evidence of its benefits to the business world. That leads to government scepticism concerning data security, potential loss of businesses, and the list can go on.
This means that every country has its own rules regarding potential cloud services, especially the ones that provide payment or financial services. Furthermore, big cloud computing businesses usually have several data centres scattered around the globe, for better coverage and service. As a consequence, in some cases VMs have to be moved from one physical server to another, or even between two different data centres. In addition, VM migration can increase the flexibility of the computing processing power in grid computing by shifting VMs to where they are needed. This allows dynamically moving virtual machines between data centres or physical servers to perform specific tasks, improve performance and balance resource load. Stated in [25] and [26] are possible design objectives for achieving flexibility in data centres, with their associated design requirements. Both of them support the idea that the migration of VMs across different physical servers has to be transparent to the user, that all data connections have to be maintained throughout the migration process, that it has to be done as fast as possible and that the destination VM has to be 100% identical to the source one. In [26] and [27] mobile IPv6 is proposed as a solution for better transfer of virtual machines while keeping the migration requirements.

Furthermore, in [28] the migration of the virtual machines is done along with the persistent file that is used by the VM, usually a large file that is stored on the local servers. In two of the above examples, IPv6 benefits are already exploited by implementing mobile IPv6 for data transfer between data centres. It can be argued that the use of mobile IPv6 can be improved even further by implementing IPv6 QoS techniques in the home agent. This will provide better ways to deal with large amounts of VM migration and management. In [28], the migration of the persistent file may be improved by the use of jumbograms, which are a feature of the new IP protocol. All in all, virtual machine migration is one of the main problems present in the virtual data centres around the globe. Live migration is a great solution for providing the most flexibility in data centres, but it also provides the means to offer roaming customers the best service by moving the VMs physically as close to them as possible. Consequently, through the next-generation IP, new advantages can be added and utilized.
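The jumbogram feature mentioned above is defined in RFC 2675: a Hop-by-Hop extension header carries a 32-bit payload length, lifting the 65,535-byte ceiling of the base header's 16-bit Payload Length field. A minimal sketch of packing that option (Python; a sketch of the on-wire format only, not a full packet builder):

```python
import struct

NEXT_HEADER_TCP = 6
OPT_JUMBO = 0xC2  # Jumbo Payload option type (RFC 2675)

def hop_by_hop_jumbo(payload_len: int, next_header: int = NEXT_HEADER_TCP) -> bytes:
    """Build the 8-byte Hop-by-Hop extension header carrying a Jumbo
    Payload option: next header, header extension length (0 = 8 bytes),
    option type 0xC2, option data length 4, and the 32-bit payload length.
    The base header's Payload Length field is then set to zero."""
    if not 65535 < payload_len < 2 ** 32:
        raise ValueError("jumbogram payloads must exceed 65535 bytes")
    return struct.pack("!BBBBI", next_header, 0, OPT_JUMBO, 4, payload_len)
```

For a migration that ships a multi-gigabyte persistent file, fewer, much larger packets mean fewer per-packet processing steps on the supporting links, which is the improvement the text suggests (assuming, of course, that the underlying network supports such frame sizes end to end).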

4 Cloud computing

Cloud computing is used more and more often in the online environment, and it can at times be a confusing term. Technically, it involves any kind of resources, software or hardware, that are created and sold as services by third parties. Roughly, that means outsourcing IT infrastructure to specialized companies that will offer processing power or software, based on customer demand and budget. The term "cloud" comes from the fact that in different diagrams the internet was always depicted as a cloud through which all the smaller components, such as nodes, hosts and smaller networks, would communicate with each other. That implies that cloud computing always involves the use and the need of an online component. Subsequently, resources are always accessed remotely through the use of the internet. Many people confuse the idea of virtualization with cloud computing. It has to be clearly stated that virtualization is not cloud computing. Instead, virtualization is just a technical method to create an abstract entity from a physical one. As stated before in the thesis, virtualization created the idea of the virtual machine and the virtual network. Accordingly, it can be said that virtualization is just the means through which cloud computing is implemented and offered as a service. As stated before, cloud computing is all about services provided remotely and independently of the customers' IT infrastructure. The "computing" part refers to the services that are sold over the "cloud". Hardware infrastructure can be sold to customers, who in turn can customize it as they want, without being worried about managing and troubleshooting the data centre itself. Software is also an important part of the cloud computing idea.
As will be presented, when talking about software in the cloud there are two different approaches that imply different levels of interaction with the underlying infrastructure: one with the possibility of creating your own applications, and the other granting access to predefined ones. The possibilities that cloud computing brings to the business communities, from customizable infrastructures to customizable cloud-based software, are hard to ignore. This means that the "false starts" [8] that in the last years were a barrier and a mirage in front of different services like multicast, security and differentiated services are now becoming an incentive in cloud computing. What is even better is that cloud computing relies on these services to grow. The popularity of cloud computing is growing very fast. More and more companies choose to outsource their IT needs to different companies around the world. The consequence is that, in the near future, cloud computing can grow beyond its capabilities and collapse under its own success. IPv6 deployment has been slow up until now and maybe it will be in the future too, but "The introduction of IPv6 is envisaged as a solution, not only because of its huge address space, but also because it provides a platform for innovation in IP-based services and applications." [34]. Thus, the

innovation that IPv6 can bring to cloud services can truly launch the implementation of IPv6 at an increased pace.

4.1 Software infrastructure versus hardware infrastructure

In cloud computing, infrastructure can be divided into two parts: hardware and software. The reason behind this separation is that both can be sold as services or products. Cloud computing is all about offering some kind of outsourced infrastructure to companies that seek to be more efficient with their IT budgets or those that cannot afford their own. Furthermore, based on the level of customization of these infrastructures, three building blocks can be defined for any cloud computing service: IaaS (Infrastructure as a Service), PaaS (Platform as a Service) and SaaS (Software as a Service).

Figure 7. Hardware infrastructure and software infrastructure

Infrastructure can be defined as the underlying structure that offers and allows upper-layer services to perform their tasks. It allows the interaction between different entities using the same "language" or architectures. Figure 7 depicts the two infrastructures that make up the cloud computing concept. Online hardware is a collection of online physical or virtual resources that are accessed through a remote connection. They are represented by fully deployed networks. As will be presented in the next chapters, some implementations of cloud computing, like private clouds, allow the use of resources through actual hardware rental. Physical servers, virtual servers or entire fully operational networks are rented by third

parties to the companies that need them; they are data centres, outside the client's organization, that are maintained by a specialized firm. When talking about online hardware resources, one must take into account all the components and the SLA that the provider will render. The servers, architectures and protocols used all contribute to a stable, scalable and efficient service. It needs to be mentioned that the benefits of IPv6 are mainly seen at this lower level: the underlying hardware and network that comprise the online hardware. Moreover, the online hardware encompasses the virtualization process that takes place in these environments. Virtual machine managers, hard disks, network adapters and other virtualized hardware are very important in the whole picture of cloud computing. Virtualized hardware is very important in the overall performance of the services, and new technologies can improve their operation. Properly outlining the advantages will not only lead to improving the hardware layer; the online software layer will benefit from the new protocols and hardware development as well. IaaS, or Infrastructure as a Service, is the concept that turns the above-mentioned hardware resources, either physical or virtual, into a product that can be marketed. Through IaaS, cloud computing vendors can sell processing power, either offered as simple servers or as virtual resources (virtual machines, online storage, etc.), to different customers. In turn the customer has access to his own hardware infrastructure and has the option to modify and use it as he sees fit. Online software is the cornerstone of cloud computing. It is the layer that provides the most services and functions and behaves as normal, locally installed software. To further expand, all the software programs that can be accessed remotely through a web browser or any other kind of remote connection technology, and behave as any other piece of local software, can be considered online software.
But we have to take a step deeper into the concept and split online software into two branches. Providers can offer predefined software, for example Google Docs, Zoho and many more. Here the customer can only use, and customize to some extent, the services that are already available. This approach to cloud computing is characterized by SaaS, or Software as a Service. On the other hand, services such as Google Apps give the customer more freedom to create their own applications based on the existing tools offered by the provider and the specifications of the underlying software infrastructure. The customer has access to online database storage and usage, website hosting, mobile software support and many more elements that they can use in their custom software. In other words, the customer has access to a platform on which they can build software according to some specifications. PaaS, or Platform as a Service, is the concept that turns custom online software into a product. In the online environment all the elements interact with each other through, simply put, networks and dedicated online connections. It can thus be deduced that the performance of the networks that support the online computing service is of critical importance. We have to always be aware of the fact that all data exchange is done, if not all of the time, through remote connections that are influenced by the networking protocols. Specific online software needs to have the best performance when accessing remote databases or any other kind of data. Hardware has to be able to process very

efficiently the different protocols used in the communication between different physical machines and virtual machines. The benefits of newer protocols cannot be ignored, and this is the case of IPv6, given its potential to improve the overall performance of virtual networks and the services that they provide. Even though online software is the most important component in cloud computing and online hardware is the transparent one, often unseen by the user, the execution of the software depends on the performance of the underlying hardware infrastructure, which in turn depends heavily on the machines and the protocols they use. It has to be stated now that all the work will revolve around the hardware part and the protocol of the underlying network that serves the software component.

4.2 Implementations of Cloud computing

Cloud computing offers online infrastructure that customers can adapt and use as they see fit. But to further understand the impact of a new protocol on these infrastructures, we have to differentiate and detail the models in which cloud computing can be implemented. The deployment model of an online computing service can be divided into four categories: private cloud computing, hybrid cloud computing, cloud hosting and, the most commonly used, public cloud computing.

Figure 8. Cloud Computing deployment models

The private cloud, or internal cloud, as depicted in Figure 8, is one of the simplest models. It implies the use of all available infrastructure by only one customer, who can choose to host the data centre internally, in its own organization, or have it managed by a third-party company. In the latter case, the customer will rent the cloud infrastructure from an IaaS vendor. Consequently, new developments in the

computing and networking technologies can be implemented more easily than in the other models; the customer is able to enforce and adapt the cloud infrastructure, from the actual physical servers and data connections to the protocols, software and security, as they consider suitable. One can argue that the private model lacks all the benefits that cloud computing brings to the IT world: on-demand computing power, lower cost of ownership and flexibility. In some cases this may be true, but one has to be aware that this type provides the best security, and thus it can be a first, timid step of a company towards cloud computing and its full benefits. In addition, private data centres offer big corporations the proper means to support the cost of such deployments, to implement their internal security policies and to benefit from full security standards. The hybrid cloud, as the term implies, is a service that merges two models into one: private and public service. This model offers a choice for the customers that want to reduce their IT service cost by outsourcing a part of their IT infrastructure. This model can encompass all the building blocks of cloud computing: IaaS, PaaS and SaaS. For example, the user can choose to rent private hardware infrastructure for certain purposes, use a cloud platform to create custom software and deploy it on the infrastructure of a PaaS vendor, and use SaaS for email or document editing. Cloud hosting is external to the customer's company and provides the most flexible and budget-friendly model. It is characterized by the possibility of renting virtual machines on a need basis. To properly understand the concept, Amazon AWS is such a service, in which a potential user can rent different VMs to perform any kind of job. Depending on the service, the VMs are rented based on hourly usage, VM performance, traffic or other options. In addition to virtual machines, online storage can be bought and used.
After the customer has finished using the rented resources, these are freed and made available for other purposes. Cloud hosting is another approach that IaaS can take. However, the entire physical infrastructure on which the cloud hosting service relies is outside of the customer's reach. The public cloud is a service that is generally available and requires the least knowledge about IT. This model is the most popular one amongst home users, because it provides access to basic software and services, for example Google Docs, Google Calendar, Gmail, etc. Furthermore, public cloud services like Google Apps or Zoho Creator offer tools to create user-specific applications and deploy them on the cloud. However, the user does not have the possibility to access and configure the hardware or software infrastructure in any way. As a result, public clouds can encompass the PaaS and SaaS building blocks and provide the service for free or, over a certain user or traffic quota, for a fee.

4.3 Insecurities and problems in cloud computing

In the context of new ideas and concepts arising, people tend to be reluctant to accept them. Scepticism and insecurity take hold of their whole rational thinking and adventurous driving force. When these two emotions intersect the business environment, where risk, profits and even social status come into question, the issue at

hand further amplifies, arriving at a point at which new ideas are not only put aside but rejected as a whole and not even taken into consideration. The cloud computing business has seen this happen over and over again. Many companies are still reluctant when considering moving their IT infrastructure, if not all of it, then at least a part, into the cloud. Moreover, some of them are not even aware of the concept or of the fact that they are using some sort of cloud services, as stated by the president of Trend Micro, Dave Asprey, in a survey about insecurities of cloud computing: "On top of that, some respondents didn't even know they were using the cloud, much less securing it." However, we have to be unbiased and acknowledge the fact that some of the problems put forward by several companies have a real basis. Businesses and their success are based on the confidentiality and security of their data. Cloud computing, by definition, means outsourcing your data infrastructure to a third-party company. Hence, all your data security is in the hands of a company outside your company's policies, a company that may not implement and deploy the best methods to protect your data integrity and confidentiality. In a survey made in 2011 by Trend Micro [29], 43% of the surveyed companies had at some point security issues with their cloud computing providers. Furthermore, the article points out another big concern: while security is still a problem, another arises in the form of performance and availability. The percentages presented, 50% of companies concerned about security and 48% about performance and availability, create a grim picture for the future of cloud computing. "Data in the cloud is vulnerable without encryption." [29] In addition, companies encrypt the data they store in the cloud and tend to choose services that offer encryption in their offers.
So, it can be determined that security concerns in the cloud are strongly related to the need for powerful encryption of the stored data and reliable encryption key management. One of the problems that arises when talking about encryption keys is their safe exchange, due to the fact that this is usually done over an unsafe environment, like the internet. IPv6 provides a safer networking environment for encryption key exchange, through its authentication and encryption functionalities, defined in IPsec along with the framework protocol defined by ISAKMP. The article [30] points out that security problems can often be attributed to unaware or unprepared customers, as well as to the provider itself not deploying enough security methods. Amazon's cloud computing business plan offers means for customers to create personalized VMs and make them available for other users. This creates the possibility of many security breaches and data theft, due to inefficient and incomplete removal of sensitive information before making the VM image widely available for other customers. One problem presented is the exploitation of SSH keys. [30, p. 396] One solution to this problem is to restrict the IP addresses that can access certain VMs. With IPv4, which is deployed in the Amazon infrastructure, it is almost impossible to achieve such a solution. But with IPv6 this issue can be solved. Furthermore, the authentication, which is fully functional with IPv6, can offer protection against unauthorized access to deployed VMs.
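A minimal sketch of such an address-based restriction is shown below (Python, standard library only). The prefixes, the `may_access` helper and the documentation-range addresses are illustrative assumptions, not part of any provider's actual API; the point is that globally unique IPv6 addresses make per-client filtering meaningful in a way that NATed IPv4 addresses do not.

```python
import ipaddress

# Hypothetical allow-list: only the customer's own IPv6 prefixes may
# reach the management interface of a deployed VM.
ALLOWED_PREFIXES = [
    ipaddress.ip_network("2001:db8:aa00::/40"),   # head-office prefix
    ipaddress.ip_network("2001:db8:bb:12::/64"),  # admin VLAN
]

def may_access(client_addr: str) -> bool:
    """Return True if the client address falls inside any allowed prefix.
    With globally routable IPv6 addresses every client is individually
    identifiable; behind IPv4 NAT, a whole site collapses to one public
    address, making this kind of filtering far coarser."""
    addr = ipaddress.ip_address(client_addr)
    return any(addr in net for net in ALLOWED_PREFIXES)
```

For example, `may_access("2001:db8:aa42::1")` would be allowed (it falls in the /40), while an address outside both prefixes would be rejected before any SSH key is ever tested.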

Presented above is one of the benefits that the deployment of IPv6 can bring, concerning security, to many cloud computing services around the internet. Performance concerns also have a solution in the next-generation IP. NAT avoidance and the flexibility of the IP headers can improve the latency and overhead of the network, as will be presented later in the paper. IPsec, tunnelling, data integrity and security will all benefit from full deployment of IPv6. Insecurities about migrating to cloud infrastructures and services will diminish once cloud vendors fully implement and make IPv6 available, and understand that IPv6 will make possible solutions that are not possible with IPv4. Furthermore, as customers and businesses realise that new technologies offer better tradeoffs than older ones, as with IPv4 versus IPv6, the grim sheet that covers cloud computing insecurities will not be as concerning as before.

4.4 Future of cloud computing

Digital data is nowadays ubiquitous. The quantity that is processed and managed every day grows exponentially, above values that can turn out to be unmanageable by small or medium companies. All fields of work require more and more data manipulation and processing. That means that soon businesses will not be able to afford to store and manipulate the data that they need, unless it is done in an efficient and cost-effective manner. Thus, cloud computing can be seen as a pertinent solution, one that sooner or later will be adopted by all the players in the information environment and beyond. Cloud adoption at a general scale is imminent; its services will be used by more and more entities. In a survey made by KPMG International in 2010 [31], it was shown that companies' level of interest in incorporating cloud services into their business plans is increasing. Moreover, the survey presented bank and government institutions as being reluctant to the idea.
However, in 2011 the US government issued a Cloud First policy [32] as a way to decrease the cost of IT infrastructure and at the same time increase efficiency and ease of implementing new IT structures. As a consequence, a programme has been developed to accelerate the US government's adoption of cloud technologies. [33] All of this comes to support the idea of sudden change and adoption by formerly reluctant entities regarding outsourcing IT and using cloud computing services instead. Author Christopher Barnatt defines, in one of his books [34], four categories of companies based on their adoption of cloud computing: pioneers, early adopters, late adopters and laggards. The peak of companies switching to cloud services will, according to the author, take place between 2010 and 2020, these being the early and late adopters. Correlated with the information presented above, with governments trying to reduce cost and create more efficient IT infrastructures, we can assume that online infrastructure, hardware and software, will see a massive boom in customers. The growth of interest in online services will put great pressure on security, performance, availability and data networks. This means that the possible upgrading of cloud computing, regarding any of the fields presented before, has to be not only taken

(29) into consideration, but actually thoroughly examined for the possibility of improvements. Cloud services will experience more pressure coming from their customer, and it can no longer afford to postpone the adoption of new technologies, IPv6, being one of these. The future relies on cloud service vendors offering the best products with the introduction of the most efficient and reliable advanced technologies and on the customers who, nowadays, seem to be more aware of the better tradeoffs cloud can bring to their businesses. The benefits that IPv6 can bring to cloud computing can also help companies plan and develop cloud computing policies for their businesses. Therefore, IPv6 can not only help develop the technical performance, but also improve the view on cloud computing and reduce the uncertainty about security, privacy and performance.. 22.

5 IPv6 benefits in cloud computing

5.1 Security benefits

Insecurity is one of the critical issues that make potential customers reluctant to use cloud computing. The risk that storing sensitive data in the cloud might create losses for a business, due to a low level of security, should compel cloud computing providers to deploy all the necessary methods to provide high security. Some of the techniques created to prolong the life of IPv4 can prove to be, in some situations, a barrier to properly secured data transmission. IPv6 has the potential to create a safer environment in which data can be exchanged easily with no downside for security. The mechanism developed to slow down IPv4 address exhaustion, network address translation (NAT), can be considered one of the main obstacles that inhibit proper security in the cloud and prevent the deployment of additional protection measures. IPsec, the protocol suite that secures IP communication through data authentication and encryption, is the main victim of NAT. As will be presented, avoiding NAT opens new possibilities when deploying IPsec; the cloud computing customer will have better security options at their disposal.

5.1.1 NAT avoidance

IPv6 came into existence with the idea of bringing new and advanced features that can solve some of the problems that IPv4 is facing. The benefits that IPv6 brings to cloud computing start with the most basic and obvious change in the protocol: the huge pool of 2¹²⁸ addresses. The sheer number of available addresses not only solves the critical problem of running out of addresses, but also brings advantages in other aspects of networking, for example NAT avoidance.

Figure 9. Basic NAT

Figure 9 depicts the basics of a NAT router, or any kind of NAT device, deployed in a network.
In this example the computer situated in the internal network of a company wants to communicate with a computer or server situated somewhere on the internet. As can be seen, the internal computer is assigned a private IPv4 address. To communicate, the internal computer sends its packets with the source address 192.168.32.2 and the destination address of the foreign computer, 198.51.100.2. Because the source address is not routable on the internet, the NAT device translates it to a public one, in this case 198.51.100.1, and sends the packet further along the way. The packet's source address is now that of the NAT interface facing the internet, while the destination address is kept. The server responds to the request at the address 198.51.100.1, not knowing that the solicitor is not actually represented by that address. When the reply reaches the NAT device, the reverse process takes place: the device translates the public address back into the private address of the original solicitor, 192.168.32.2, and forwards the packet to the internal network computer.

The importance of properly understanding the basic functionality of NAT resides in its effects on the security aspects of IPsec. Through address translation, parts of the original packets have to be changed, which in turn makes the use of some features of IPsec impossible. Authentication and encryption are the two building blocks behind IP security. Authentication provides data integrity and validation of the source, while encryption provides confidentiality and ensures no data manipulation. This is done through the AH and ESP headers that are added to the original packet. In addition, IPsec has two modes, tunnel mode (host-to-gateway communication) and transport mode (host-to-host communication), which imply two different modes of header insertion, as presented in Figure 10.

Figure 10. AH and ESP header insertion
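The address translation walked through above can be illustrated with a minimal Python sketch. This is only a conceptual model using the addresses from the example; a real NAT device also rewrites transport-layer ports and recomputes checksums.

```python
# Translation table kept by the hypothetical NAT device:
# private (internal) address -> public (routable) address.
nat_table = {"192.168.32.2": "198.51.100.1"}
reverse_table = {v: k for k, v in nat_table.items()}

def outbound(packet):
    """Rewrite the private source address to the NAT's public address."""
    packet = dict(packet)
    packet["src"] = nat_table[packet["src"]]
    return packet

def inbound(packet):
    """Rewrite the public destination address back to the private host."""
    packet = dict(packet)
    packet["dst"] = reverse_table[packet["dst"]]
    return packet

# The internal host sends a request; the NAT rewrites its source address.
request = {"src": "192.168.32.2", "dst": "198.51.100.2"}
sent = outbound(request)            # src is now 198.51.100.1

# The server replies to the public address; the NAT maps it back.
reply = {"src": sent["dst"], "dst": sent["src"]}
delivered = inbound(reply)          # dst is now 192.168.32.2
```

The sketch makes the key point visible: the packet that travels the internet no longer carries the original source address, which is precisely what breaks AH authentication, as discussed below.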

The authentication process implies creating a unique identification number, commonly a hash, based on certain non-mutable parameters, one of which is the source address in the IP header preceding the authentication header. The hash is added to the AH header; the receiver of the packet applies the same algorithm to the same parameters and compares the resulting hash with the one received. If they do not match, the packet is discarded. Going back to the principles behind how NAT works, we can clearly see that if the sent packet's IP header is modified in any way along the route, the authentication will fail at the receiver. In consequence, when the authentication mechanism provided by the AH header is used along a route that at some point contains a network translation mechanism, the communication will fail. So, through NAT avoidance, full IPsec functionality can be achieved.

The AH and ESP headers offer almost the same functionality with respect to authentication; hence, it could be said that not much is lost when the AH header cannot be used in cloud computing communication. However, this is true only up to a point, because ESP provides authentication for only a part of the original packet. The authentication covers only the tunnelled packet and does not take into consideration the headers that are outside the tunnel, as can be seen in Figure 10. On the other hand, AH extends its protection to the IP header and all extension headers (even the hop-by-hop ones) that precede it, regardless of the transport type applied.

In Figure 11 the authentication header and its fields are presented. The Integrity Check Value (ICV) is an integral multiple of 32 bits and carries the authentication data for the packet to which it is attached. The value is computed from the non-mutable headers: header fields that are known not to change along the route to the destination.
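The integrity check described above can be sketched in a few lines of Python. HMAC-SHA256 stands in here for whatever integrity algorithm the security association would actually negotiate, the key is hypothetical, and the "non-mutable fields" are reduced to the two IP addresses plus the payload; the point is only to show why a NAT rewrite breaks verification.

```python
import hashlib
import hmac

# Hypothetical shared key from the IPsec security association.
KEY = b"sa-shared-secret"

def icv(src_ip, dst_ip, payload):
    """Compute an Integrity Check Value over the non-mutable fields."""
    message = src_ip.encode() + dst_ip.encode() + payload
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(src_ip, dst_ip, payload, received_icv):
    """Receiver-side check: recompute the ICV and compare."""
    return hmac.compare_digest(icv(src_ip, dst_ip, payload), received_icv)

# The sender computes the ICV over the original source address.
tag = icv("192.168.32.2", "198.51.100.2", b"application data")

# Without NAT the receiver's check succeeds.
ok_without_nat = verify("192.168.32.2", "198.51.100.2", b"application data", tag)

# After NAT rewrites the source address, the check fails and the
# receiver discards the packet.
ok_after_nat = verify("198.51.100.1", "198.51.100.2", b"application data", tag)
```

In this sketch `ok_without_nat` is true and `ok_after_nat` is false, which is exactly the AH failure mode caused by address translation.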
These values are fed into a hash function to create a unique authentication number. Based on it, the destination can verify the integrity of the packet and that it was not modified along the way. As stated before, if a NAT device exists along the route, the values in the non-mutable fields, especially the source IP address, will change and thus the integrity check will fail. In this case no real end-to-end IPsec connection, and consequently no complete security, can be achieved.

 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Next Header   |  Payload Len  |           RESERVED            |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|               Security Parameters Index (SPI)                 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                    Sequence Number Field                      |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
+               Integrity Check Value (variable)                |
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Figure 11. Authentication Header [3]

NAT is still widely employed due to the ubiquitous nature of IPv4. Moreover, the exhaustion of addresses imposes the use of network translators in cloud computing environments. For example, Amazon EC2 does not provide unique, routable IP addresses to the VM instances in the cloud, but rather uses a NAT mechanism to map routable IP addresses to customer accounts or virtual private clouds [35]. One of the benefits of cloud computing is the possibility to access data from everywhere, regardless of whether that is from inside the corporate network or from an outside host. As a result, not being able to have a proper end-to-end connection with the cloud services or virtual servers rented in the cloud has consequences for the security, confidentiality and integrity of the data stored in the cloud and for the functionality of IPsec. Security and data integrity are provided only along part of the path. Amazon, for example, ensures security through VPN gateways. All security is applied between the customer gateway and Amazon's gateways, or between customer hosts and Amazon's gateway, so no end-to-end authentication takes place. The traffic between the cloud service provider's gateways and the final virtual machine, host, etc. is left unprotected from the point of view of authenticating the original source. Furthermore, even in tunnel mode, the information outside the tunnel is not authenticated. As stated in RFC 4306 [36, p. 6], end-to-end security is not fully applicable to IPv4 networks.

IKE is an important part of IPsec and therefore of IPv6. It is a protocol that helps two peers negotiate certain security features, like cryptographic algorithms and private keys for ESP and/or AH security associations. In some cases, when NAT modifies parts of the TCP/UDP header, such as the source and destination ports, IKE functionality is impaired by the network translator. IKE itself communicates over UDP port 500. ESP, however, encrypts the upper-layer headers of the IP packet, as can be seen in Figure 10, including the TCP/UDP header.
Thus, NAT cannot "see" the source and destination ports needed in the case of NAT with port translation, resulting in improper manipulation and forwarding of the incoming packets. A solution to this problem is offered in [37], through a UDP encapsulation of the already tunnelled packet. However, this can be treated as an example of a workaround for a problem that IPv6 avoids altogether. Furthermore, the NAT problem and the solution offered by the UDP encapsulation of IPsec-tunnelled packets increase the complexity of a system which is already highly complex: the virtual environment, the data centres and the networking sub-layer on which they all rely for proper functionality.

NAT avoidance, at first glance, does not seem to provide a substantial benefit. However, as the next chapter will show, by eliminating NAT from data centre topologies, IPsec will bring new security possibilities. Furthermore, cloud computing providers have to be as transparent as possible towards the client. As a principle, it would be ideal that no manipulation of client traffic should take place. Nowadays, however, this happens due to a technological limitation, the very problem that forced the creation of NAT. This calls for a fast transition to IPv6, to take advantage of its benefits. Until IPv6 is adopted on a large scale, customers and providers will have to use transition techniques, which in this case involve the use of NAT to some extent. Conversely, as will be seen in a later chapter, in some cases NAT will still have to be used to provide certain services to data centres. NAT avoidance is most beneficial when talking about IaaS infrastructures, and less so for PaaS or SaaS.
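The UDP encapsulation workaround mentioned above can be sketched as follows. This is a deliberately simplified Python illustration: it wraps an opaque ESP packet in a plain 8-byte UDP header so that a port-translating NAT has something to rewrite. The port number 4500 is the one conventionally used for NAT traversal, and details such as the non-ESP marker and checksum handling are omitted.

```python
import struct

# UDP port conventionally used for NAT traversal of IPsec traffic.
NAT_T_PORT = 4500

def udp_encapsulate(esp_packet, src_port=NAT_T_PORT, dst_port=NAT_T_PORT):
    """Prefix an opaque ESP packet with a minimal UDP header.

    UDP header layout: source port, destination port, length, checksum
    (checksum left as zero in this sketch).
    """
    length = 8 + len(esp_packet)
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)
    return header + esp_packet

def udp_decapsulate(datagram):
    """Strip the 8-byte UDP header to recover the ESP packet."""
    return datagram[8:]

# Stand-in for an encrypted ESP payload whose inner ports NAT cannot see.
esp = bytes(16)
wrapped = udp_encapsulate(esp)
recovered = udp_decapsulate(wrapped)
```

Because the outer UDP header is unencrypted, the translator can rewrite its ports like any other UDP flow, while the ESP payload travels through untouched, which is exactly the complexity that native IPv6 end-to-end addressing would make unnecessary.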
