AI-Augmented Resource Management in Edge, Fog, and Cloud Computing Systems

Rahul Sankrityayan

Abstract

Artificial Intelligence (AI) is redefining resource management across edge, fog, and cloud computing systems by enabling dynamic, predictive, and autonomous decision-making. This paper explores emerging AI-augmented strategies designed to optimize latency, energy consumption, workload distribution, and quality of service (QoS). Traditional heuristic-based algorithms, while foundational, often fall short in heterogeneous, dynamic environments characterized by variable loads and tight latency constraints. AI models—ranging from Support Vector Machines (SVMs) and reinforcement learning (RL) to clustering and regression techniques—have demonstrated superior adaptability through workload prediction, anomaly detection, and optimized resource provisioning. For instance, dynamic resource allocation using ML approaches such as k-means clustering for anomaly detection and RL for task placement has shown promising results in fog/edge contexts. Frameworks such as ENORM have tackled auto-scaling of edge resources, reducing latency by 20–80% and network traffic by up to 95%. Survey studies underscore the evolution of fog resource management solutions and AI-driven enhancements across computing layers. While AI methods enable real-time scaling, predictive scheduling, and QoS-aware load balancing, challenges persist—most notably scalability, interpretability, and computational overhead. This paper synthesizes state-of-the-art approaches, outlines architectural workflows, evaluates benefits and constraints, discusses implementation results, and proposes future directions to harness AI for next-generation, energy-efficient, and low-latency distributed resource management.
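To make the clustering-based anomaly detection mentioned above concrete, the following is a minimal, self-contained sketch: a tiny 1-D k-means over synthetic CPU-utilization samples from an edge node, flagging samples that sit far from every cluster centroid. All names, sample values, and the distance threshold are illustrative assumptions, not taken from the paper or any cited framework.

```python
def kmeans_1d(data, k=2, iters=50):
    """Tiny 1-D k-means; initial centroids are spread over the data range."""
    lo, hi = min(data), max(data)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        # Assign each sample to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for x in data:
            nearest = min(range(k), key=lambda j: abs(x - centroids[j]))
            clusters[nearest].append(x)
        # Move each centroid to the mean of its cluster (keep it if empty).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

def anomalies(data, centroids, threshold=10.0):
    """Flag samples farther than `threshold` from every centroid."""
    return [x for x in data
            if min(abs(x - c) for c in centroids) > threshold]

# Synthetic CPU-utilization (%) samples from one edge node: two normal
# operating modes (~20% idle, ~60% busy), plus a spike and a dip.
samples = [18, 22, 19, 61, 58, 63, 21, 59, 97, 20, 62, 3]
cents = kmeans_1d(samples, k=2)
print(sorted(anomalies(samples, cents)))  # → [3, 97]
```

In a real fog/edge deployment the same idea would run on multi-dimensional metrics (CPU, memory, network) with a library implementation such as scikit-learn's `KMeans`, and the flagged anomalies would feed the resource-provisioning loop.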

Article Details

Section

Articles

How to Cite

AI-Augmented Resource Management in Edge, Fog, and Cloud Computing Systems. (2022). International Journal of Research Publications in Engineering, Technology and Management (IJRPETM), 5(6), 7719-7722. https://doi.org/10.15662/IJRPETM.2022.0506002

References

1. Hong, C.-H., & Varghese, B. (2018). Resource Management in Fog/Edge Computing: A Survey. arXiv.

2. Toczé, K., & Nadjm-Tehrani, S. (2018). A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing. arXiv.

3. Kimovski, D., Ijaz, H., Surabh, N., & Prodan, R. (2018). An Adaptive Nature-inspired Fog Architecture (SmartFog). arXiv.

4. Wang, N., Varghese, B., Matthaiou, M., & Nikolopoulos, D. S. (2017). ENORM: A Framework For Edge NOde Resource Management. arXiv.

5. Federated Learning research (2017–2018): Early developments focused on communication-efficient, privacy-preserving distributed learning strategies. Wikipedia summary.

6. OpenFog Consortium (2017). Reference architecture and standardization efforts. Wikipedia summary.

7. Elasticity in Computing: Foundational cloud computing concept on autonomic resource adaptation. Wikipedia summary.

8. Agarwal, Yadav & Yadav (2016); Xu et al. (2018); Pawar & Wagh (2012); Zahid et al. (2018): Various dynamic and nature-inspired load balancing and resource allocation algorithms in fog computing. PMC summary.

9. Fog Robotics (2017–2018): Architectural synergy of fog computing with robotics for low-latency, distributed processing. Wikipedia summary.