Abstract
Modern information technology services largely depend on cloud infrastructures, which are built on top of Datacenter Networks (DCNs) constructed with high-speed links, fast switching gear, and redundancy to offer flexibility and resiliency. In this environment, network traffic comprises long-lived (elephant) and short-lived (mice) flows with partition/aggregate traffic patterns. Although SDN-based approaches can efficiently allocate networking resources for such flows, the overhead of network reconfiguration can be significant. Given the limited capacity of the Ternary Content-Addressable Memory (TCAM) deployed in an OpenFlow-enabled switch, it is crucial to determine which forwarding rules should remain in the flow table and which should be processed by the SDN controller when a table miss occurs on the switch. The goal is to retain the flow entries that minimize the long-term control-plane overhead between the controller and the switches. To achieve this goal, we propose a machine learning technique that utilizes two variants of Reinforcement Learning (RL): a traditional RL-based algorithm and a deep-reinforcement-learning-based one. Emulation results using the RL algorithm show around a 60% reduction in long-term control-plane overhead and around a 14% improvement in the table-hit ratio compared to the Multiple Bloom Filters (MBF) method, given a fixed flow-table size of 4KB.
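The core idea of RL-based flow-entry management can be sketched as a cache-eviction agent: on each table miss with a full table, the agent picks a victim entry, observes a reward tied to subsequent misses (controller involvement), and updates its value estimates. The sketch below is a minimal tabular Q-learning illustration under assumed state, action, and reward definitions (bucketed minimum hit count as state, candidate victim as action, a fixed miss penalty as reward); it is not the paper's exact formulation.

```python
import random
from collections import defaultdict


class QLearningEvictor:
    """Tabular Q-learning agent that decides, on a table miss with a full
    flow table, which entry to evict. State/reward definitions here are
    illustrative assumptions, not the paper's exact design."""

    def __init__(self, table_size, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.table_size = table_size
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(float)   # Q[(state, action)]
        self.table = {}               # flow_id -> hit count

    def _state(self):
        # Coarse state: bucketed hit count of the least-used entry.
        return min(min(self.table.values(), default=0), 10)

    def on_packet(self, flow_id):
        """Return True on a table hit, False on a miss (controller involved)."""
        if flow_id in self.table:
            self.table[flow_id] += 1
            return True
        # Table miss: install the rule, evicting one entry if the table is full.
        if len(self.table) >= self.table_size:
            s = self._state()
            acts = list(self.table.keys())
            if random.random() < self.epsilon:
                victim = random.choice(acts)          # explore
            else:
                victim = max(acts, key=lambda a: self.q[(s, a)])  # exploit
            del self.table[victim]
            # Penalize the miss; bootstrap from the best next-state value.
            s2 = self._state()
            best_next = max((self.q[(s2, a)] for a in self.table), default=0.0)
            self.q[(s, victim)] += self.alpha * (
                -1.0 + self.gamma * best_next - self.q[(s, victim)])
        self.table[flow_id] = 1
        return False
```

In practice the agent would sit beside the controller's flow-mod logic, with the table-hit ratio and PacketIn rate supplying the reward signal instead of the fixed penalty used above.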
Index Terms
- SDN Flow Entry Management Using Reinforcement Learning