
4.5. Comparison with competing mechanisms

In this section, CVIHIS is compared with two other congestion control mechanisms. The backward-loading mode is compared with LEDBAT, which is designed for low-priority applications and has been used by several background bulk-transfer applications, for example by Apple for software updates and by BitTorrent. The real-time mode of CVIHIS is compared with Google Congestion Control for Real-Time Communication on the World Wide Web (GCC). GCC aims to be TCP-friendly and has already been implemented in the Google Chrome and Firefox browsers. LEDBAT and GCC were described in Chapter 3.

4.5.1. Backward-loading mode versus LEDBAT

The LEDBAT comparison was made using the NS-2 simulator. The NS-2 source code for the LEDBAT implementation is provided on the LEDBAT software page. Figures 4.33-4.36 present the results of two simulation cases where the back-off behavior of LEDBAT was tested based on the network structure of Figure 4.5. In these two cases, the capacity of the bottleneck link was 600 kbps and the one-way propagation delay was 90 milliseconds. The queue size of the bottleneck link was 30 packets.

In the first simulation case, the target queuing delay for the LEDBAT connection was 60 milliseconds. Figure 4.33 presents the size of the LEDBAT congestion window for this case. Figure 4.34 presents the queue size of the bottleneck node. In the second simulation case, the target queuing delay for the LEDBAT connection was 55 milliseconds. Figure 4.35 presents the size of the LEDBAT congestion window for this case while Figure 4.36 presents the queue size of the bottleneck node. In both cases, the LEDBAT connection was started at the beginning of the simulation. We waited until the sending rate of LEDBAT settled to the capacity of the bottleneck link. After this, the TCP Reno connection was started. The TCP connection was active between 50 and 150 seconds.

Figure 4.33 LEDBAT congestion window size when the target queuing delay was 60 ms

Figure 4.34 Queue behavior when the target queuing delay was 60 ms

Figure 4.35 LEDBAT congestion window size when the target queuing delay was 55 ms

Figure 4.36 Queue behavior when the target queuing delay was 55 ms

These two simulation cases were designed so that the results in Figures 4.33-4.36 could be compared with the figures describing CVIHIS-BLM behavior presented in sections 4.2.2 and 4.2.3. Both mechanisms are able to stabilize their sending rate to the capacity of the bottleneck link. CVIHIS-BLM can do this without significant oscillation in every case, whereas, as Figure 4.35 shows, the sending rate of LEDBAT oscillated slightly in some cases. The reason for this oscillation is that the LEDBAT target delay is a single strict value rather than a range. In addition, the LEDBAT sending rate is controlled in a window-based manner, which is sensitive to the phase effect. In contrast, the CVIHIS-BLM target delay is a range and the CVIHIS sending rate is controlled in a rate-based manner. Because the CVIHIS-BLM target delay is a range, the queue level varies more with CVIHIS-BLM.
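The role of the single strict target value in this oscillation can be seen from LEDBAT's per-ACK controller as specified in RFC 6817. The sketch below is a simplified model of that update rule, not the NS-2 implementation used in the simulations; the constants are illustrative.

```python
# Sketch of LEDBAT's per-ACK congestion window update (RFC 6817).
# The window is steered toward the single point queuing_delay == TARGET,
# so the controller keeps crossing back and forth over that point.
TARGET = 0.060   # target queuing delay in seconds (60 ms, as in Figure 4.33)
GAIN = 1.0       # limits growth to at most one MSS per RTT
MSS = 1500       # segment size in bytes

def ledbat_cwnd_update(cwnd, queuing_delay, bytes_newly_acked):
    """Return the new congestion window (in bytes) after one ACK."""
    # off_target is positive below TARGET (grow) and negative above it (shrink)
    off_target = (TARGET - queuing_delay) / TARGET
    cwnd += GAIN * off_target * bytes_newly_acked * MSS / cwnd
    return max(cwnd, MSS)  # never shrink below one segment
```

A measured queuing delay below 60 ms grows the window and one above 60 ms shrinks it, with no dead band in between; a range-based target such as the one used by CVIHIS-BLM leaves the rate untouched while the delay stays inside the range.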

As the figures show, LEDBAT can adapt its sending rate more quickly than CVIHIS-BLM. This behavior has both advantages and disadvantages. If free capacity becomes available at the bottleneck router, LEDBAT can quickly take it into use. On the other hand, if this free capacity is only temporary, the sending rate must be reduced radically somewhat later. This kind of situation can be seen in Figures 4.33 and 4.35 at around 70 seconds, where free capacity becomes temporarily available because the TCP connection performs its slow start after a packet drop. Figure 4.8 shows that CVIHIS-BLM reacts more moderately to a TCP slow start.

Generally speaking, CVIHIS-BLM is somewhat more flexible because it can acquire the values of its targetDelay range without manual configuration and can thus automatically take the properties of the connection path into consideration. With LEDBAT, the target delay is configured manually. On the whole, it is difficult to say which mechanism is better because LEDBAT and CVIHIS-BLM work in slightly different ways. However, we can draw the conclusion that CVIHIS-BLM is competitive with LEDBAT.

4.5.2. Real-time mode versus GCC

The GCC comparison was based on the work of Carlucci et al. (2016), who experimentally evaluated GCC in a controlled testbed. In this testbed, two hosts were connected directly with a network cable, and the network between them was emulated in software: the NetEm Linux module, together with a traffic shaper, set the delays of the connection paths and the available bandwidths of the bottlenecks. This test setup does not depart significantly from our real-network test setup, and they also used the same network capacities as this thesis, with the bottleneck capacity varying between 500 and 4000 kbps. They sent video traffic between the hosts and used the testbed to evaluate to what extent GCC flows were able to track the available bandwidth, minimize queuing delays, and fairly share the bottleneck with other GCC or TCP flows.

They found that GCC flows were able to track the available bandwidth of an otherwise empty network so that GCC used slightly over 80 percent of the bandwidth. If this result is compared with Figure 4.10, it can be seen that CVIHIS-RTM can use almost 100 percent of the bandwidth. On the other hand, GCC has a fairly good ability to minimize queuing delays. CVIHIS-RTM can minimize queuing delays if congestion notifications are sent while queuing delays are still at a moderate level. This can be done by sending explicit congestion notifications well before the queue actually overflows, and using ECN also minimizes the number of dropped packets. Because the sending rates of both mechanisms fluctuate smoothly, they are also slow to take free capacity into use. The sending rate of GCC varies slightly more than that of CVIHIS-RTM.

Carlucci et al. (2016) found that when three GCC video flows shared the bottleneck, the bandwidth was shared quite fairly: the measured Jain’s fairness index was 0.93. If these results are compared with the results in sections 4.2.5 and 4.3.3.3, it can be seen that CVIHIS-RTM is fairer towards its own flows. In all the cases described in these sections, Jain’s fairness indices are over 0.99 when considering the phases where the sending rates have stabilized. They also found that three GCC flows together were able to use about 80 percent of the available bandwidth, whereas CVIHIS-RTM can use almost 100 percent of the bandwidth in a similar case.
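For reference, Jain’s fairness index used above is computed as (Σx)² / (n · Σx²), giving 1.0 for a perfectly equal share and 1/n in the worst case. A minimal sketch, with illustrative rate values:

```python
def jains_fairness_index(rates):
    """Jain's fairness index: 1.0 for equal shares, 1/n at worst."""
    n = len(rates)
    return sum(rates) ** 2 / (n * sum(r * r for r in rates))

# Three flows sharing a 600 kbps bottleneck (illustrative numbers):
jains_fairness_index([200, 200, 200])  # equal shares -> 1.0
jains_fairness_index([320, 160, 120])  # unequal shares -> below 1.0
```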

The TCP friendliness of GCC was tested against 99 TCP connections, and the results showed that it was at an acceptable level. When the TCP-friendliness results of Carlucci et al. (2016) are considered, it is worth remembering that they are affected by the TCP version used: they used TCP CUBIC congestion control, since it is the default version of the Linux kernel. A direct TCP-friendliness comparison with CVIHIS-RTM cannot be made because the GCC test results were quite limited; in particular, the results with respect to different RTT times are limited. However, the experimental evaluation presented in this thesis is broader than the one presented in Carlucci et al. (2016).

One disadvantage of GCC could be that it increases its sending rate in a multiplicative manner instead of using additive steps. Both the first version of CVIHIS (Vihervaara and Loula 2014) and the work of Chiu and Jain (1989) showed that increasing sending rates multiplicatively can pose challenges for TCP friendliness and for fairness between a mechanism’s own flows. Overall, the two mechanisms are quite similar, but CVIHIS-RTM is able to use bandwidth more efficiently than GCC, whereas GCC is able to control network delays better than CVIHIS-RTM. We can conclude this subsection by stating that CVIHIS-RTM is at the least competitive with GCC and in some cases outperforms it.
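The fairness effect of multiplicative versus additive increase noted above can be illustrated with a toy two-flow model in the spirit of Chiu and Jain (1989). This is not a model of GCC or CVIHIS themselves; the capacity, step sizes, and starting rates are illustrative.

```python
# Toy two-flow model after Chiu and Jain (1989): additive increase drives the
# flows toward an equal share, while multiplicative increase preserves whatever
# rate ratio the flows start with. All constants are illustrative.
CAPACITY = 600.0  # kbps

def step(x1, x2, additive):
    """Advance both flow rates by one control step."""
    if x1 + x2 > CAPACITY:          # congestion: both back off multiplicatively
        return x1 / 2, x2 / 2
    if additive:                    # additive increase: same absolute step each
        return x1 + 10, x2 + 10
    return x1 * 1.1, x2 * 1.1       # multiplicative increase: same relative step

def ratio_after(steps, additive, x1=50.0, x2=400.0):
    """Rate ratio x1/x2 after the given number of control steps."""
    for _ in range(steps):
        x1, x2 = step(x1, x2, additive)
    return x1 / x2

# With additive increase the rate ratio approaches 1 (a fair share); with
# multiplicative increase it stays at the initial 50/400 = 0.125 forever.
```

With additive increase, the absolute gap between the flows is unchanged while the rates grow and is halved at every congestion event, so the flows converge to a fair share; with multiplicative increase, both rates are always scaled by the same factor, so the initial imbalance persists.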