ScaleDRL: A Scalable Deep Reinforcement Learning Approach for Traffic Engineering in SDN with Pinning Control

Penghao Sun, Zehua Guo, Julong Lan, Junfei Li, Yuxiang Hu, Thar Baker

Research output: Contribution to journal › Article › peer-review

Abstract

As modern communication networks become more complex and dynamic, designing a good Traffic Engineering (TE) policy becomes difficult due to the complexity of solving the optimal traffic scheduling problem. Traditional methods usually build a fixed model of the network traffic and solve an objective function to obtain a TE policy, which cannot guarantee solution efficiency.

The emerging Deep Reinforcement Learning (DRL) together with Software-Defined Networking (SDN) technologies gives us a chance to design a model-free TE scheme through Machine Learning (ML). However, existing DRL-based TE solutions all face a scalability problem: they cannot be applied to large networks. In this paper, we propose to combine control theory and DRL technology to achieve an efficient network control scheme for TE. The proposed scheme, ScaleDRL, borrows the idea of pinning control to select a subset of links in the network, named critical links. Based on the traffic distribution information collected by the SDN controller, we use a DRL algorithm to dynamically adjust a set of link weights for the critical links. Through a weighted shortest path algorithm, the forwarding paths of the network flows can then be dynamically adjusted via these link weights. Packet-level simulation shows that ScaleDRL reduces the average end-to-end transmission delay by up to 39% compared to the state-of-the-art DRL-based TE scheme across different network topologies.
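The routing mechanism described above — a learned agent raising or lowering the weights of a few critical links, with forwarding paths then recomputed by a weighted shortest path algorithm — can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the topology, weight values, and the "critical link" choice are hypothetical, and the DRL agent is stood in for by a hand-picked weight update.

```python
import heapq

def shortest_path(adj, src, dst):
    # Plain Dijkstra over a weighted adjacency dict {node: {neighbor: weight}}.
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    seen = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy 4-node topology; all link weights start at 1.
adj = {
    "A": {"B": 1.0, "C": 1.0},
    "B": {"A": 1.0, "D": 1.0},
    "C": {"A": 1.0, "D": 1.0},
    "D": {"B": 1.0, "C": 1.0},
}

print(shortest_path(adj, "A", "D"))  # ['A', 'B', 'D']

# Suppose the agent treats A-B as a critical link and raises its weight
# (a hypothetical action value) to steer traffic away from that link;
# the recomputed shortest path shifts without touching any other weight.
adj["A"]["B"] = adj["B"]["A"] = 5.0
print(shortest_path(adj, "A", "D"))  # ['A', 'C', 'D']
```

The point of restricting the agent's action space to the critical links, as in pinning control, is visible even in this sketch: one weight update suffices to reroute the flow, so the action dimension stays small as the topology grows.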
Original language: English
Article number: 107891
Journal: Computer Networks
Volume: 190
Publication status: Published - 25 Feb 2021
