Multi-Agent Task Offloading Approach Based on Software-Defined Networking in Vehicular Fog Networks
Subject Areas: New technologies in distributed systems and algorithmic computing
Authors: Kobra Behravan 1, Hosseini Seno 2, Nazbanou Farzaneh Bahalgardi 3, Mohsen Jahanshahi 4
1 - Department of Computer Engineering, CT.C., Islamic Azad University, Tehran, Iran
2 - Department of Computer Engineering, Ferdowsi University of Mashhad, Mashhad, Iran
3 - Department of Computer Engineering, Imam Reza International University, Mashhad, Iran
4 - Department of Computer Engineering, CT.C., Islamic Azad University, Tehran, Iran
Keywords: Vehicular Fog Computing (VFC), Task Offloading, Software-Defined Networking (SDN), Federated Reinforcement Learning (FRL)
Abstract:
Vehicular fog computing (VFC) has been recognized as an effective architecture to address the rising demands of smart vehicles. Fog servers deployed on moving or parked vehicles provide spatio-temporally heterogeneous computational resources capable of accomplishing computation-intensive and deadline-critical tasks that exceed the capacity embedded in the vehicles themselves. In a multi-tier VFC architecture, vehicles can share idle, low-cost resources to increase the number of accepted tasks. The main issue, however, is choosing the optimal destination fog server for executing hard-deadline tasks in each time slot. Therefore, this work proposes a federated multi-agent deep Q-learning task offloading approach that provides collaborative learning and fast convergence. This approach improves the data privacy of agents and reduces the average response time, energy consumption, and processing cost in vehicular networks. A software-defined networking (SDN)-based architecture provides flexibility, scalability, programmability, and global network knowledge. Through SDN control-plane configuration, the network can not only adapt to dynamic network changes but also respond to emergency situations. Integrating SDN technology with the federated reinforcement learning (FRL) method therefore increases programmability, centralized network management, and dynamic configuration. The results show that the proposed method reduces the average response time, average energy consumption, and average economic cost of performing tasks in the network, and increases the successful completion of high-priority tasks in vehicular fog networks.
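To make the abstract's mechanism concrete, the following is a minimal sketch of federated multi-agent deep Q-learning for offloading decisions, not the paper's implementation: it assumes an illustrative 6-dimensional state (e.g., task size, deadline, local load, candidate fog-server loads), four offloading actions, a small Q-network, and FedAvg-style weight averaging hosted at the SDN controller. The class and function names (QNetwork, VehicleAgent, federated_average) and all hyperparameters are hypothetical.

```python
# Minimal sketch (assumptions noted above) of federated multi-agent deep
# Q-learning for vehicular task offloading: each vehicle agent trains a local
# DQN on its own experience, and an SDN-controller-side aggregator averages
# the weights FedAvg-style so raw experience never leaves the vehicle.
import copy
import random
import torch
import torch.nn as nn
import torch.optim as optim

STATE_DIM = 6     # assumed: task size, deadline, local load, candidate fog-server loads
NUM_ACTIONS = 4   # assumed: execute locally or offload to one of three fog servers

class QNetwork(nn.Module):
    """Small fully connected Q-network mapping a state to per-action values."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, NUM_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

class VehicleAgent:
    """One vehicle: local replay buffer, local DQN, epsilon-greedy offloading choice."""
    def __init__(self, lr=1e-3, gamma=0.95, epsilon=0.1):
        self.q = QNetwork()
        self.opt = optim.Adam(self.q.parameters(), lr=lr)
        self.gamma, self.epsilon = gamma, epsilon
        self.buffer = []  # (state, action, reward, next_state) transitions

    def act(self, state):
        if random.random() < self.epsilon:
            return random.randrange(NUM_ACTIONS)
        with torch.no_grad():
            return int(self.q(state).argmax())

    def local_update(self, batch_size=32):
        """One gradient step of standard DQN on locally collected transitions."""
        if len(self.buffer) < batch_size:
            return
        batch = random.sample(self.buffer, batch_size)
        s = torch.stack([b[0] for b in batch])
        a = torch.tensor([b[1] for b in batch])
        r = torch.tensor([b[2] for b in batch], dtype=torch.float32)
        s2 = torch.stack([b[3] for b in batch])
        q_sa = self.q(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + self.gamma * self.q(s2).max(1).values
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

def federated_average(agents):
    """FedAvg-style aggregation run at the SDN controller: only model weights
    are exchanged, so each vehicle's raw task data stays local."""
    global_state = copy.deepcopy(agents[0].q.state_dict())
    for key in global_state:
        global_state[key] = torch.stack(
            [agent.q.state_dict()[key].float() for agent in agents]
        ).mean(dim=0)
    for agent in agents:
        agent.q.load_state_dict(global_state)

if __name__ == "__main__":
    agents = [VehicleAgent() for _ in range(3)]
    # Fill buffers with random transitions just to make the sketch runnable.
    for agent in agents:
        for _ in range(64):
            s, s2 = torch.randn(STATE_DIM), torch.randn(STATE_DIM)
            agent.buffer.append((s, agent.act(s), random.random(), s2))
        agent.local_update()
    federated_average(agents)  # one federated aggregation round
```

Because only Q-network weights leave the vehicles during aggregation, raw task and mobility data remain local, which is the privacy benefit the abstract attributes to the federated approach; the SDN controller is a natural host for the aggregation step since it already maintains a global view of the network.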
