
Network Setup and Implementation Details

The network setup emulates an NB-IoT-type link as detailed in Table 4. Downstream, the link has a data rate of 30 kbps and a one-way delay of 400 milliseconds.

Upstream, the link has a data rate of 60 kbps and a one-way delay of 200 milliseconds. The maximum transmission unit (MTU) of the link is 296 bytes. To emulate a variable delay for the rest of the path between the last-hop router and the fixed host, a randomly varying delay of 10-20 milliseconds is used.

Retransmissions and other congestion control-related events are triggered according to the congestion control mechanism employed in each test case. For CoAP over UDP, both the default CoAP congestion control and the more advanced CoCoA congestion control are used. For CoAP over TCP, both TCP New Reno and TCP BBR are used.

                        Downlink   Uplink
NB-IoT data rate        30 kbps    60 kbps
One-way delay           400 ms     200 ms
Bottleneck buffer size  2500 B, 14,100 B, 28,200 B, or 1,410,000 B
MTU                     296 B

Table 4: Network parameters of the emulated NB-IoT link and the bottleneck buffer sizes used in the experiments.

In the experiments, four different sizes of buffers are used for the bottleneck router.

The smallest buffer is only 2500 bytes, which is approximately the bandwidth-delay product (BDP) of the link. In contrast, the largest buffer size is 1,410,000 bytes, which can easily fit all of the payload. This buffer size is also referred to as the infinite buffer. The middle-sized buffers are 14,100 bytes and 28,200 bytes. The three largest buffers cause bufferbloat.
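As a quick back-of-the-envelope check (our own arithmetic, assuming the BDP is taken over the downlink rate and the 600-millisecond base round-trip time of Table 4):

    BDP ≈ (30 000 bit/s ÷ 8 bit/byte) × (0.4 s + 0.2 s) = 3750 B/s × 0.6 s = 2250 B,

which is indeed roughly the size of the smallest, 2500-byte buffer.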

The likelihood of bit-errors in the link is also varied. In the base case, the network is entirely free of errors, and all packet loss is due to congestion. To study how the congestion controls differ in their ability to recover from packet loss, three different error profiles are used.

These three profiles are low, medium, and high, as detailed in Table 5. In the low error profile, the packet error rate is a constant 2%. In the other two profiles the error rate varies, averaging 10% and 18% for the medium and the high profile, respectively. The errors are introduced using a Markov model that alternates in suitably short intervals between two states: an error-burst state and a low-error state. Notably, in this test setup it is possible for multiple retransmissions of the same packet to be lost, making recovery particularly challenging.

Profile  Packet error rate
Low      constant 2%
Medium   10% on average, alternating between 0% and 50%
High     18% on average, alternating between 2% and 80%

Table 5: Packet error profiles and their states.
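To make the two-state model concrete, below is a minimal sketch, in the same C99 the test programs use, of a Gilbert-Elliott-style loss generator for the medium profile. The per-packet transition probabilities are our own illustrative choices; the text only states that the states alternate in suitably short intervals, so the values are picked to keep the chain in the error-burst state about 20% of the time, which yields the 10% average of Table 5.

    #include <stdio.h>
    #include <stdlib.h>

    /*
     * Illustrative two-state Markov loss generator for the medium error
     * profile of Table 5: a low-error state with 0% loss and an error-burst
     * state with 50% loss. The transition probabilities are hypothetical;
     * they give a stationary burst-state probability of
     * 0.02 / (0.02 + 0.08) = 0.2, hence an average loss of 0.2 * 50% = 10%.
     */

    enum state { LOW_ERROR, ERROR_BURST };

    static const double p_enter_burst = 0.02;         /* LOW_ERROR  -> ERROR_BURST */
    static const double p_leave_burst = 0.08;         /* ERROR_BURST -> LOW_ERROR  */
    static const double loss_prob[]   = { 0.0, 0.5 }; /* loss rate in each state   */

    static double uniform01(void) { return (double)rand() / RAND_MAX; }

    int main(void) {
        enum state s = LOW_ERROR;
        long sent = 1000000, lost = 0;

        for (long i = 0; i < sent; i++) {
            /* Possibly switch state before "transmitting" this packet. */
            if (s == LOW_ERROR && uniform01() < p_enter_burst)
                s = ERROR_BURST;
            else if (s == ERROR_BURST && uniform01() < p_leave_burst)
                s = LOW_ERROR;

            if (uniform01() < loss_prob[s])
                lost++;
        }
        printf("average loss rate: %.1f%%\n", 100.0 * lost / sent);
        return 0;
    }

The high profile follows the same structure with per-state loss rates of 2% and 80%; reaching the 18% average then requires a burst-state share of roughly 20% as well (0.02 + 0.78p = 0.18 gives p ≈ 0.205).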

Network emulation details

Figure 9 i) shows the test environment, which consists of four physical Linux hosts connected by high-speed physical links. The client software emulating the IoT devices is deployed on host 1, while the fixed server software is deployed on host 4. Hosts 2 and 3 are used to emulate the network using two instances of the netem network emulator. The first instance emulates the upstream and the second instance the downstream. In this way, a message passing through the emulated network always passes through two emulator instances in total. The first emulator emulates the bit rate of the bottleneck link and the buffer of the bottleneck router. The second emulator emulates the propagation delay and any packet loss occurring in the wireless link. The second emulator has a very large buffer to ensure no packets are dropped due to congestion. This setup ensures that the router buffer size is correctly emulated and that the capacity of the link is consumed as it should be, even when a packet is dropped due to an emulated wireless error.

Figure 9 ii) explains the role of each host on the path of a packet that travels from the fixed host to an IoT device.
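As a conceptual illustration of these two emulator stages for the downlink direction, the following sketch (our own simplification, using a fluid approximation of a drop-tail buffer, not the actual netem configuration) feeds MTU-sized packets into the bottleneck faster than the 30 kbps link can drain them:

    #include <stdio.h>

    #define LINK_RATE_BPS  30000.0  /* downlink bit rate, Table 4   */
    #define BUFFER_LIMIT_B 2500.0   /* smallest bottleneck buffer   */
    #define PROP_DELAY_S   0.4      /* downlink one-way delay       */
    #define MTU_B          296.0

    int main(void) {
        const double rate_Bps = LINK_RATE_BPS / 8.0;  /* bytes per second        */
        double link_free_at = 0.0;                    /* when the link goes idle */

        /* Ten MTU-sized packets arriving 10 ms apart. */
        for (int i = 0; i < 10; i++) {
            double arrival = i * 0.010;

            /* Stage 1: drop-tail router buffer (fluid approximation of backlog). */
            double backlog = (link_free_at > arrival)
                                 ? (link_free_at - arrival) * rate_Bps
                                 : 0.0;
            if (backlog + MTU_B > BUFFER_LIMIT_B) {
                printf("packet %d dropped, buffer holds %.0f B\n", i, backlog);
                continue;
            }

            /* Stage 1 (cont.): serialisation at the bottleneck bit rate. */
            double start  = (arrival > link_free_at) ? arrival : link_free_at;
            double finish = start + MTU_B / rate_Bps;
            link_free_at  = finish;

            /* Stage 2: propagation delay (plus random loss in the lossy profiles). */
            printf("packet %d delivered at %.3f s\n", i, finish + PROP_DELAY_S);
        }
        return 0;
    }

With the smallest, 2500-byte buffer the backlog grows by roughly 260 bytes per packet, so the tenth packet in this example is tail-dropped; the second stage then only adds the 400-millisecond propagation delay and, in the lossy profiles, the random errors.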

The server and client programs are implemented in C99 using the libcoap CoAP library for C [libcoap]. The libcoap library was extended to implement support for CoAP over TCP.

Figure 9: The test setup. i) The role of each real host. ii) The role of all the hosts in the emulation of a downlink connection.
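For orientation, here is a heavily simplified sketch of what a libcoap client looks like. It is written against the current upstream libcoap 4.3 API as we understand it (the header path, coap_new_client_session(), and the COAP_PROTO_UDP/COAP_PROTO_TCP constants are assumptions about that version), not against the extended library used in the experiments; the address 192.0.2.1 and the resource path "test" are placeholders.

    #include <arpa/inet.h>
    #include <coap3/coap.h>   /* assumes libcoap 4.3; older releases use <coap2/coap.h> */

    int main(void) {
        coap_startup();

        coap_context_t *ctx = coap_new_context(NULL);

        /* Placeholder server address. */
        coap_address_t dst;
        coap_address_init(&dst);
        dst.addr.sin.sin_family = AF_INET;
        dst.addr.sin.sin_port   = htons(5683);
        dst.size                = sizeof(dst.addr.sin);
        inet_pton(AF_INET, "192.0.2.1", &dst.addr.sin.sin_addr);

        /* Transport selection: COAP_PROTO_UDP for CoAP over UDP, COAP_PROTO_TCP
         * for CoAP over TCP (provided by recent upstream libcoap; in the
         * experiments TCP support was added by extending the library). */
        coap_session_t *session =
            coap_new_client_session(ctx, NULL, &dst, COAP_PROTO_UDP);

        /* Build and send a confirmable GET for a placeholder resource. */
        coap_pdu_t *pdu = coap_pdu_init(COAP_MESSAGE_CON, COAP_REQUEST_CODE_GET,
                                        coap_new_message_id(session),
                                        coap_session_max_pdu_size(session));
        coap_add_option(pdu, COAP_OPTION_URI_PATH, 4, (const uint8_t *)"test");
        coap_send(session, pdu);

        /* Let libcoap handle retransmissions and the response for a while. */
        coap_io_process(ctx, 5000);

        coap_session_release(session);
        coap_free_context(ctx);
        coap_cleanup();
        return 0;
    }

Switching between CoAP over UDP and CoAP over TCP then amounts to changing the protocol argument when the session is created.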

Implementation details

In the experiments, Default CoAP is implemented as per RFC 7252 [SHB14], CoCoA as per its draft specification [BBGD18], and CoAP over TCP as per its draft specification [BLT+17]. The Linux TCP implementation is altered in the following way: Cubic [RXH+18, HRX08] congestion control, Selective Acknowledgements [MMFR96], and Forward RTO-Recovery [SKYH09] are disabled in order to use TCP New Reno congestion control [HFGN12]. Further, Control Block Interdependence [Tou97] is disabled and the TCP Timestamp [BBJS14] option is not used. This configuration makes the Linux kernel TCP implementation more akin to the standardised TCP and more suitable for constrained devices. Tail Loss Probe [DCCM13], RACK [CCD18], and TCP Fast Open [CCRJ14] are disabled as well. The Initial Window [AFP02] value in the experiments is set to four segments. Finally, the Linux TCP implementation is configured to use an initial RTO of two seconds and to send delayed acknowledgements with the timer set to a constant 200 milliseconds.
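The text does not spell out how these changes are applied; as a hedged illustration, most of the listed features map, to the best of our knowledge, onto Linux sysctl knobs under /proc/sys/net/ipv4/ and could be toggled as in the sketch below. The mapping is our interpretation: the fixed two-second initial RTO and the constant 200-millisecond delayed-ACK timer are compile-time kernel constants rather than sysctls, and the initial window of four segments is normally set per route (for example with ip route ... initcwnd 4).

    #include <stdio.h>

    /* Write a single value to a sysctl file under /proc/sys (requires root). */
    static void write_sysctl(const char *path, const char *value) {
        FILE *f = fopen(path, "w");
        if (!f) { perror(path); return; }
        fputs(value, f);
        fclose(f);
    }

    int main(void) {
        /* Fall back from Cubic to (New) Reno congestion control. */
        write_sysctl("/proc/sys/net/ipv4/tcp_congestion_control", "reno");
        /* Disable Selective Acknowledgements and Forward RTO-Recovery. */
        write_sysctl("/proc/sys/net/ipv4/tcp_sack", "0");
        write_sysctl("/proc/sys/net/ipv4/tcp_frto", "0");
        /* Do not use the TCP Timestamp option. */
        write_sysctl("/proc/sys/net/ipv4/tcp_timestamps", "0");
        /* Disable TCP Fast Open, RACK, and Tail Loss Probe. */
        write_sysctl("/proc/sys/net/ipv4/tcp_fastopen", "0");
        write_sysctl("/proc/sys/net/ipv4/tcp_recovery", "0");
        write_sysctl("/proc/sys/net/ipv4/tcp_early_retrans", "0");
        /* Do not carry cached metrics between connections
         * (Control Block Interdependence). */
        write_sysctl("/proc/sys/net/ipv4/tcp_no_metrics_save", "1");
        return 0;
    }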

Some changes are introduced to the CoAP congestion control, too. MAX_RETRANSMIT is set to 20, and EXCHANGE_LIFETIME and MAX_TRANSMIT_WAIT are adjusted accordingly, following the CoAP specification [SHB14]. Similarly, the SYN and SYN/ACK retry limits in the Linux TCP implementation are increased to 40 and 41, respectively, to avoid terminating a connection attempt prematurely when the network is highly congested.

The default upper bound for the retransmission timeout (RTO) in the Linux TCP implementation is 120 seconds and is left as-is. CoCoA [BBGD18] truncates the binary exponential backoff at 32 seconds. For Default CoAP, a 60-second maximum is used, as the specification defines no maximum value and very long retransmission timeouts are undesirable.
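The following sketch shows what these caps mean for the retransmission schedule of a single confirmable message. It assumes the midpoint (2.5 seconds) of Default CoAP's randomised 2-3 second initial timeout and plain binary exponential backoff; CoCoA is not modelled here, since its RTO is derived from measured RTTs and only its 32-second truncation is comparable.

    #include <stdio.h>

    /*
     * Sketch of the binary exponential backoff for one Default CoAP
     * confirmable message under the settings described above: the RTO
     * doubles on every retransmission, is capped at 60 seconds, and up to
     * MAX_RETRANSMIT = 20 retransmissions are allowed. The 2.5-second
     * starting value is illustrative; real implementations randomise it.
     */

    #define MAX_RETRANSMIT  20
    #define RTO_CAP_SECONDS 60.0

    int main(void) {
        double rto = 2.5;     /* illustrative initial RTO in seconds          */
        double elapsed = 0.0; /* total time waited before each retransmission */

        for (int attempt = 1; attempt <= MAX_RETRANSMIT; attempt++) {
            elapsed += rto;
            printf("retransmission %2d after %6.1f s (RTO %5.1f s)\n",
                   attempt, elapsed, rto);

            rto *= 2.0;
            if (rto > RTO_CAP_SECONDS)
                rto = RTO_CAP_SECONDS;  /* 60 s cap used for Default CoAP here */
        }
        return 0;
    }

Without such a cap, the RTO preceding the twentieth retransmission would have grown to about 2.5 × 2^19 seconds (over two weeks), which is why a ceiling is needed once MAX_RETRANSMIT is raised to 20.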