A fair dynamic content store-based congestion control strategy for named data networking

In this paper, congestion control for named data networking (NDN) is studied. A novel dynamic content store-based congestion control strategy is proposed, exploiting the in-network caching characteristic of NDN. A queuing network model is constructed to judge whether congestion occurs. If the network shows a tendency toward congestion, or congestion has already occurred, the buffer of the output queue is dynamically expanded by borrowing from the NDN content store (CS), and the forwarding rates of data packets and the corresponding interest packets are reduced so as to prevent or alleviate network congestion. To ensure fairness, the CS capacity to be borrowed by the data output queue of each port is calculated from the data output queue length and its weight. Simulation results based on ndnSIM show that the proposed scheme improves bottleneck link utilization while maintaining a low packet loss rate and a short average flow completion time.


Introduction
With the development of network interfaces such as Ethernet and Wi-Fi, the architecture of the current host-centric internet protocol (IP) network faces problems such as IP address exhaustion, insufficient scalability, and poor security and mobility. To solve these problems caused by the IP address, named data networking (NDN) has been proposed, adopting a content-centric paradigm in which data is uniquely identified by a name for addressing and caching (Xia & Xu, 2013). The NDN architecture offers a number of attractive advantages, such as reduced network load, low dissemination latency and energy efficiency. Meanwhile, this architecture is prone to congestion due to the large amount of redundant information caused by the multi-path forwarding strategy and the strict one-to-one relationship between interest packets and data packets (Lei & Yuan, 2014; T. Li & Wang, 2019; Zou, 2019). Therefore, how to solve network congestion has become one of the key problems in the NDN architecture (Jiang & Luo, 2018; Shen et al., 2019; Yao et al., 2017).
With the expansion of network scale, various contradictions and problems of the address-centric IP network have become prominent. To solve these problems, the communication mode of the information-centric network (ICN) has evolved from host-to-host to host-to-network, and the forwarding mechanism has shifted from traditional store-and-forward to cache-and-forward, enabling the efficient transmission of massive amounts of information. Among ICN proposals, NDN is the most representative architecture. It focuses on the content name, which can effectively solve a series of problems caused by the IP address. In recent years, the problem of NDN congestion control has received significant attention. According to which role is responsible for reacting to network congestion, NDN congestion control schemes can be broadly classified into two categories: implicit congestion control and explicit congestion control (C. C. Li et al., 2017). In implicit congestion control schemes, the requester judges the occurrence of network congestion through the retransmission timeout (RTO) mechanism, and network congestion is controlled or alleviated by reducing the sending rate of interest packets. In explicit congestion control schemes, congestion is detected by an intermediate node and the congestion information is explicitly fed back to the receiver, which adjusts the sending rate of interest packets to control the return rate of data packets. Implicit congestion control is suitable for end-to-end transmission protocols. However, the NDN architecture advocates a pull-based paradigm in which in-network caching and multi-path forwarding are pervasive and the data source cannot be determined, making timeout estimation very difficult; hence, implicit congestion control is difficult to implement in NDN.
In explicit congestion control schemes, the congestion state is detected by an intermediate router node and fed back to the downstream node or receiver to adjust the forwarding rate of interest packets (data packets). This kind of congestion control scheme is suitable for multi-path forwarding strategies and has attracted considerable attention from scholars. A congestion control algorithm has been proposed that routes content requests to potential content sources. Considering the impact of link delay and interruption, a minimum-delay congestion control algorithm combined with reinforcement learning has been given to realize intelligent forwarding (Y. D. Wang et al., 2020). In W. J. Wang and Luo (2018), the weight of a data stream is dynamically adjusted according to its forwarding rate; a token bucket algorithm is adopted to slow down the forwarding rate of data streams exceeding the fair rate, and the congestion information is fed back to the downstream node through an explicit feedback mechanism. Based on the in-network caching characteristic of NDN, a store-based congestion control algorithm is designed in Xia and Xu (2013) to handle burst flows by considering the interaction between router buffer size and the congestion control mechanism. As discussed above, research on NDN congestion control mostly focuses on solving or alleviating network congestion by reducing the forwarding rate of interest packets, and the utilization of in-network caching in NDN is rarely considered. On the other hand, even when caching at intermediate nodes is considered, the fairness of cache resource allocation has not been adequately analysed in designing NDN congestion control schemes.
Therefore, given the existing interest control protocol (ICP) and the in-network caching characteristic of NDN, the problem of designing a fair dynamic content store-based congestion control scheme for NDN has not been adequately addressed and remains an interesting and challenging research topic. This situation motivates the present study of a control strategy combining ICP with a dynamic content store (ICP+DCs).
In this paper, we are concerned with the fair dynamic content store-based congestion control problem for NDN. Assuming that the router configures the same buffer size for each port, the NDN in-network cache is allocated fairly to the ports according to the data stream queue lengths and their corresponding weights. If the queue control threshold falls below a given constant, in-network cache is borrowed by the port and the forwarding rates of data packets and the corresponding interest packets are reduced to preprocess congestion. When in-network cache is in use by the port, the queue control threshold is recalculated; if the updated threshold falls below a given value, the forwarding rates of the interest packets corresponding to all data streams in the port are reduced. The main contributions of this paper can be highlighted as follows: (1) the NDN in-network cache and its dynamic allocation are used to smooth burst flows and alleviate network congestion; (2) network congestion can be judged in advance so that preprocessing measures can be taken; (3) a fair dynamic content store-based congestion control algorithm for NDN is designed.
This paper is organized as follows. In Section 2, the fair dynamic content store-based congestion control algorithm is introduced. In Section 3, the performance of the algorithm is analysed according to the simulation results. Finally, conclusions are drawn and possible future research directions are pointed out in Section 4.

Motivation
According to the in-network caching characteristic of NDN, borrowing the router cache to expand the port buffer may yield a more effective and practicable congestion control algorithm. On the other hand, the stateful forwarding plane of NDN routers makes per-interest forwarding state available at each hop, which provides native support for flow-based control. Note that the term 'flow' denotes a stream of interest packets or data packets. From the discussion above, it is clear that content store-based, flow-aware control mechanisms are natural candidates for congestion control in the NDN architecture when seeking better data delivery performance.
If the network shows a tendency toward congestion, or congestion has already occurred, the buffer of the data output queue is dynamically expanded by borrowing from the NDN content store (CS), so as to smooth burst flows and prevent network congestion. To ensure fairness, the CS capacity to be borrowed is calculated from the data output queue lengths and their weights.

Dynamic interactive cache congestion control model
By adding a data table (DT) module and a content store-based rate adjusting (CRA) module to the original NDN routing model, the dynamic interactive cache congestion control algorithm, executed in the NDN router, is proposed. The DT is a first-in-first-out (FIFO) structure. If a data packet cannot be forwarded immediately, its control information is recorded in a DT entry; when the data packet is forwarded, the corresponding DT entry is deleted. The dynamic interactive cache congestion control model is shown in Figure 1.
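As an illustration, the DT bookkeeping described above can be sketched as a FIFO queue; the class and field names below are hypothetical, not taken from the paper:

```python
from collections import deque

class DataTable:
    """FIFO data table (DT): stores control information for data packets
    that cannot be forwarded immediately. Entry fields are illustrative."""
    def __init__(self):
        self._entries = deque()

    def record(self, name, out_face):
        # Data packet blocked: append its control info as a new DT entry.
        self._entries.append((name, out_face))

    def forward_next(self):
        # Data packet forwarded: delete and return the oldest entry (FIFO).
        return self._entries.popleft() if self._entries else None

dt = DataTable()
dt.record("/video/seg1", 2)
dt.record("/video/seg2", 3)
print(dt.forward_next())  # ('/video/seg1', 2): the oldest entry leaves first
```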

The judging network congestion model
Assume that each port of the router is configured with the same buffer B. Let T_i(t) denote the queue control threshold of port i. When the NDN cache is not borrowed by port i, it is given by

T_i(t) = α(B − Q_i(t)),  (1)

where α is a positive control parameter, Q_i(t) = Σ_{j=1}^{N} l_{ij}(t) is the output data queue length of port i, l_{ij}(t) is the output queue length of data stream j in port i, and N is the number of data streams in the router.
Let B_s denote the total buffer size borrowed from the NDN CS by all ports of the router. When the NDN cache is borrowed by port i, T_i(t) is replaced by

T_i(t) = α(B + B_s^i(t) − Q_i(t)),  (2)

where B_s^i(t) is the buffer size borrowed from the NDN CS by port i and satisfies B_s = Σ_{i=1}^{M} B_s^i(t); its calculation is given later. M is the number of ports in the router.
When port i does not borrow buffer from the NDN CS, the value of T_i(t) is obtained from (1). In this case, if T_i(t) is less than or equal to a given constant β_B, the network exhibits a congestion trend. To prevent congestion in advance, port i then borrows buffer from the NDN CS; at the same time, for every output queue in port i whose length exceeds the fair length, the forwarding rates of data packets and the corresponding interest packets are reduced. When port i has borrowed NDN cache, the value of T_i(t) is obtained from (2). In this case, if T_i(t) is less than or equal to a given constant β_c, the network is congested, and the forwarding rates of the interest packets corresponding to all data streams in port i are reduced to alleviate the congestion. If the value of T_i(t) is greater than β_B + αB_s^i(t), the network congestion has been alleviated, and port i no longer borrows NDN cache.
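The three-way decision above can be sketched as follows; the parameter values (α, B, β_B, β_c) are illustrative choices rather than the paper's tuned settings, and the borrowed term is simply zero when no CS is in use:

```python
# Sketch of the congestion-judging rule; ALPHA, B, BETA_B and BETA_C are
# illustrative values, not the paper's parameters.
ALPHA = 0.5    # control parameter alpha
B = 100        # per-port buffer size (packets)
BETA_B = 10.0  # congestion-trend threshold beta_B
BETA_C = 2.0   # congestion threshold beta_c

def threshold(queue_len, borrowed):
    # T_i(t) = alpha * (B + B_s^i(t) - Q_i(t)); borrowed = 0 without CS.
    return ALPHA * (B + borrowed - queue_len)

def judge(queue_len, borrowed):
    """Classify the state of port i from its control threshold."""
    t = threshold(queue_len, borrowed)
    if borrowed == 0:
        return "congestion trend" if t <= BETA_B else "normal"
    if t <= BETA_C:
        return "congested"
    if t > BETA_B + ALPHA * borrowed:
        return "congestion relieved: return CS"
    return "still borrowing"

print(judge(85, 0))    # queue nearly full, no CS borrowed -> congestion trend
print(judge(118, 20))  # CS borrowed, queue keeps growing -> congested
print(judge(50, 20))   # queue drained -> congestion relieved: return CS
```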

Determine the weight of data stream
Let ω_j(t) denote the weight of data stream j, given by

ω_j(t) = p(l_j(t)) s(θ_j),  (3)

where p(·) and s(·) are monotonically decreasing nonnegative functions, l_j(t) is the output queue length of data stream j, and θ_j is the priority value of data stream j. For convenience of calculation, let s(θ_j) = 1 and p(l_j(t)) = 1/l_j(t) for l_j(t) > 0, with ω_j(t) = 0 when l_j(t) = 0. The formula indicates that if the output queue length of data stream j is zero, its weight is zero and data stream j will not be forwarded. If the output queue length of data stream j is greater than zero, its weight decreases as the queue length increases. When all data streams have the same priority, an increase in the sending rate of a data stream lengthens its output queue, which reduces its weight and thus its output rate. This prevents greedy users from seizing network resources and smooths burst flows.
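A minimal sketch of this weight rule, with p(l) = 1/l, s(θ) = 1 and the zero-queue case handled explicitly:

```python
def weight(queue_len, priority_factor=1.0):
    """omega_j(t) = p(l_j(t)) * s(theta_j) with p(l) = 1/l and s = 1.
    An empty queue gets weight 0, so the stream is not forwarded."""
    if queue_len == 0:
        return 0.0
    return (1.0 / queue_len) * priority_factor

print(weight(0))   # 0.0  -> stream not forwarded
print(weight(4))   # 0.25
print(weight(10))  # 0.1  -> longer (greedier) queue, smaller weight
```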

Calculate the fair buffer allocated by NDN to the port
Let B_s^i(t) denote the buffer that the NDN CS allocates to port i at time t. It is allocated in proportion to the weighted output queue lengths of the data streams in port i, where l_j(t) = Σ_i l_{ij}(t) is the total output queue length of data stream j over all ports. The buffer size borrowed from the NDN CS by port i is therefore dynamic, depending on the output queue lengths of the data streams and their corresponding weights. It should be pointed out that, to embody fairness, the weight is computed from the output queue lengths of all data streams in the router.
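Since the exact allocation formula is not reproduced here, the following is only a plausible proportional-sharing sketch consistent with the description: each port receives a share of B_s proportional to its weighted queue length Σ_j ω_j(t) l_{ij}(t). Both the function name and the proportional rule itself are assumptions, not the paper's exact formula.

```python
def allocate_cs(total_cs, port_queues):
    """Hypothetical proportional split of the borrowed CS budget B_s.

    port_queues[i][j] is l_ij(t); stream weights use omega_j = 1/l_j(t)
    with l_j(t) summed over all ports (weight 0 if the stream is idle)."""
    n = len(port_queues[0])
    l = [sum(port[j] for port in port_queues) for j in range(n)]   # l_j(t)
    w = [0.0 if lj == 0 else 1.0 / lj for lj in l]                 # omega_j
    share = [sum(w[j] * port[j] for j in range(n)) for port in port_queues]
    total = sum(share)
    return [total_cs * s / total if total else 0.0 for s in share]

# Port 0 holds most of the backlog of both streams, so it borrows more CS.
print(allocate_cs(30, [[9, 3], [1, 1]]))  # [24.75, 5.25]
```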

Fair output queue length
Under the assumption that each port of the router is configured with the same buffer capacity B, and neglecting data stream priorities, the fair output queue length of port i is given by q_i(t) = B / Σ_{j=1}^{N} f(j), where f(j) = 1 if the output queue length of data stream j in port i is greater than zero, and f(j) = 0 otherwise.
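A direct sketch of this rule (the fallback for a port with no active stream is an assumption, since that case is not specified):

```python
def fair_queue_length(buffer_b, stream_lengths):
    """q_i(t) = B / sum_j f(j): the per-port buffer split evenly among the
    data streams with a nonzero output queue (priorities neglected)."""
    active = sum(1 for l in stream_lengths if l > 0)
    # With no active stream the formula is undefined; returning the full
    # buffer is an illustrative fallback, not specified in the paper.
    return buffer_b / active if active else float(buffer_b)

print(fair_queue_length(100, [12, 0, 7, 30]))  # 3 active streams -> 33.33...
```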

Algorithm implementation
The pseudocode of the fair dynamic content store-based congestion control algorithm (Algorithm 1) proceeds as follows: on receiving a data packet, the router looks up the PIT; if there are matching entries, the CRA module is invoked. The detailed execution process of CRA, which is the key part in resolving congestion, is given in Algorithm 1. Using the algorithm, when a congestion trend is detected, the sending rates of data packets and the corresponding interest packets are reduced to prevent network congestion; in the case of actual congestion, the sending rates of the interest packets corresponding to all data streams are reduced to alleviate it. The effectiveness of the proposed algorithm is demonstrated in the following.
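A hedged end-to-end sketch of the CRA decision flow, combining the threshold test, the fair queue length and the rate actions described above; all class and parameter names and values are illustrative, and the rate reductions are represented only as action labels:

```python
ALPHA, B, BETA_B, BETA_C = 0.5, 100, 10.0, 2.0  # illustrative parameters

class Port:
    def __init__(self, streams):
        self.streams = dict(streams)  # stream name -> output queue length l_ij
        self.borrowed = 0             # B_s^i(t), CS currently borrowed

    def threshold(self):
        # T_i(t) = alpha * (B + B_s^i(t) - Q_i(t))
        return ALPHA * (B + self.borrowed - sum(self.streams.values()))

def cra(port, cs_grant):
    """Return the control actions CRA would take for one port.

    cs_grant is the CS the router would lend this port (B_s^i(t))."""
    t = port.threshold()
    actions = []
    if port.borrowed == 0:
        if t <= BETA_B:                          # congestion trend detected
            port.borrowed = cs_grant             # expand buffer from the CS
            active = sum(1 for l in port.streams.values() if l > 0)
            fair = B / active if active else B   # fair output queue length
            for name, l in port.streams.items():
                if l > fair:                     # only over-fair streams slow down
                    actions.append(("slow data+interest", name))
    elif t <= BETA_C:                            # congestion confirmed
        actions = [("slow interest", name) for name in port.streams]
    elif t > BETA_B + ALPHA * port.borrowed:     # congestion relieved
        port.borrowed = 0
        actions.append(("return CS", None))
    return actions

p = Port({"f1": 60, "f2": 25})     # Q_i = 85 -> T_i = 7.5 <= beta_B
print(cra(p, cs_grant=20))         # only f1 exceeds the fair length of 50
```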

Simulation results and analysis
In this section, we study the performance of the proposed algorithm using the ndnSIM simulator (Alexander et al., 2012). To this end, we carried out extensive simulations and comparisons with the methods of ICP and ICP+DCs under a variety of network scenarios. In what follows, we present a selected set of simulation results to illustrate the benefits of our scheme.
For the topology used in the evaluations, we consider the single bottleneck link topology shown in Figure 2, which is typical for congestion control analysis. There are two servers and five consumers. The consumers send 3000 interest packets per second to the servers. The link propagation delay is 10 ms, and the bottleneck bandwidth ranges from 10 Mb/s to 60 Mb/s. The experimental parameters are shown in Table 1.
In the experiment, ICP, a classical interest request rate control algorithm in NDN, is adopted as the baseline; ICP+DCs, built on the ICP algorithm, is adopted in the case of burst flows. As the bottleneck link bandwidth increases from 10 to 60 Mb/s, the simulation results for bottleneck link utilization and packet loss rate are shown in Figures 3 and 4.
The comparison of bottleneck link utilization is shown in Figure 3. The bottleneck link utilization of ICP decreases from 86% to 78% as the bottleneck bandwidth increases. The reason is that ICP uses a congestion window to forward interest packets; as a result, the queue length grows with the number of interest packets, reducing bottleneck link utilization. With ICP+DCs, the bottleneck link utilization increases gradually and approaches 94% as the bottleneck bandwidth widens. The reason is that ICP+DCs dynamically adjusts the cache allocation according to the output queue lengths of all data streams in the router and allocates fair bandwidth to each data stream, making fuller use of available resources. This relaxes the data streams and alleviates congestion, so the bottleneck link bandwidth is better utilized. Compared with ICP, ICP+DCs adds the functions of predicting congestion and fairly enabling the NDN cache, which lets it better adapt to changes in bottleneck link bandwidth and improve bandwidth utilization.
As can be seen from Figure 4, ICP+DCs has a lower packet loss rate than ICP, and the rate decreases rapidly as the bottleneck link bandwidth increases. The reason is that, when a congestion trend appears, ICP+DCs enables the fair NDN cache and reduces the forwarding rates of data streams and the corresponding interest streams for output queues exceeding the fair length, thereby reducing data packet loss and alleviating congestion. ICP judges network congestion by data packet transmission timeouts, which delays the handling of congestion and results in a high packet loss rate.
The trends of the average flow completion time of ICP and ICP+DCs are shown in Figure 5. As the number of flows increases, the average flow completion time of both algorithms grows, but that of ICP+DCs remains shorter. When the number of flows reaches 50, the average flow completion time increases significantly because the network becomes congested at that point. The slope of the ICP+DCs curve gradually flattens after 75 flows, which indicates that ICP+DCs adapts better to network changes.

Conclusion and prospects
In this paper, we have investigated the congestion control problem for NDN. Exploiting the in-network caching characteristic of NDN, the buffer of the data output queue is dynamically expanded by borrowing from the NDN CS, and the buffer size to be borrowed by each port is calculated on the basis of the data output queue length and its weight. Combined with the existing congestion control algorithm ICP, a novel fair dynamic content store-based congestion control algorithm is designed. Simulations validate the effectiveness and efficiency of the proposed scheme, and the results demonstrate benefits in network performance such as link utilization, packet loss rate and average flow completion time.
In the proposed algorithm, quantitatively evaluating fairness and optimizing the control thresholds are not considered; these will be a direction for future research. In addition, more influencing factors, such as the popularity of network resources and the number of resource visits, could be considered for detecting network congestion, which may be another future research direction.

Disclosure statement
No potential conflict of interest was reported by the author(s).