Deep Q-Learning-Based Adaptive API Integration Framework for Dynamic Workflow Reconfiguration in Microservices Architectures

Authors

  • Raja M Hussain, Solutions Engineer

Keywords

Deep Q-Learning, microservices, dynamic workflows, API integration, reinforcement learning, adaptive systems, service orchestration

Abstract

Microservices architectures, while offering scalability and flexibility, face challenges in dynamically reconfiguring workflows in response to fluctuating runtime conditions such as service latency, API failures, and workload surges. Traditional rule-based and static integration mechanisms are ill-suited for real-time adaptive decision-making. This study introduces a novel Deep Q-Learning-based adaptive API integration framework designed to autonomously reconfigure workflows in microservices systems. By leveraging reinforcement learning, specifically deep Q-networks (DQNs), the system learns optimal integration strategies under varying performance metrics and context shifts. The framework is validated in a simulated e-commerce environment where dynamic service selection and workflow reconfiguration are critical. Experimental results demonstrate that the approach significantly improves response time, fault tolerance, and system throughput compared to static and heuristic-based strategies.
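To make the core idea concrete, the sketch below shows Q-learning-based service selection in a toy simulated environment. It is not the paper's implementation: it substitutes a tabular Q-table for the deep Q-network, and the service names, latency/failure figures, and load model are illustrative assumptions. The agent learns, per load level, which candidate endpoint to route to when the reward penalizes both latency and failed calls.

```python
import random

# Hypothetical candidate endpoints with illustrative latency (seconds) and
# base failure rates -- these values are assumptions, not from the paper.
SERVICES = ["inventory-v1", "inventory-v2", "inventory-cache"]
LATENCY = {"inventory-v1": 0.30, "inventory-v2": 0.12, "inventory-cache": 0.05}
FAIL_RATE = {"inventory-v1": 0.02, "inventory-v2": 0.10, "inventory-cache": 0.25}

def step(load, action):
    """Simulate one API call; return (reward, next_load).

    Reward penalizes latency mildly and failures heavily; both degrade
    as the load level (0=low, 1=medium, 2=high) rises.
    """
    svc = SERVICES[action]
    failed = random.random() < FAIL_RATE[svc] * (1 + load)
    latency = LATENCY[svc] * (1 + load)
    reward = -5.0 if failed else 1.0 - latency
    next_load = random.choice([0, 1, 2])  # load drifts between levels
    return reward, next_load

def train(episodes=3000, alpha=0.1, gamma=0.9, eps=0.1):
    # Tabular stand-in for the DQN: q[load_level][service_index].
    q = [[0.0] * len(SERVICES) for _ in range(3)]
    load = 0
    for _ in range(episodes):
        # Epsilon-greedy action selection over candidate services.
        if random.random() < eps:
            action = random.randrange(len(SERVICES))
        else:
            action = max(range(len(SERVICES)), key=lambda a: q[load][a])
        reward, next_load = step(load, action)
        # Standard Q-learning temporal-difference update.
        q[load][action] += alpha * (
            reward + gamma * max(q[next_load]) - q[load][action]
        )
        load = next_load
    return q

if __name__ == "__main__":
    random.seed(0)
    q = train()
    for load in range(3):
        best = SERVICES[max(range(len(SERVICES)), key=lambda a: q[load][a])]
        print(f"load level {load}: route to {best}")
```

In the full framework described by the abstract, the Q-table would be replaced by a neural network so the state can include continuous signals (observed latency, error rates, queue depth) rather than a handful of discrete load levels.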



Published

2024-08-03

How to Cite

Raja M Hussain. (2024). Deep Q-Learning-Based Adaptive API Integration Framework for Dynamic Workflow Reconfiguration in Microservices Architectures. INTERNATIONAL JOURNAL OF ENGINEERING AND TECHNOLOGY RESEARCH & DEVELOPMENT, 5(2), 18-24. https://ijetrd.com/index.php/ijetrd/article/view/IJETRD_05_02_004