MOBILE ROBOT CONTROL USING DEEP REINFORCEMENT LEARNING ALGORITHM

About this article

Received: 08/06/25                Revised: 26/11/25                Published: 26/11/25

Authors

Pham Thi Thu Ha, University of Economics - Technology for Industries

Abstract


This study addresses the control of mobile robots, which are used in many fields such as industry, medicine, transportation, and civil applications. Mobile robots require intelligent control for autonomous navigation in both flat environments and uncertain, nonlinear environments, achieved here by applying deep reinforcement learning algorithms. The research method is programming with the Robot Operating System (ROS), implementing autonomous intelligent navigation while the robot localizes itself in flat and uncertain-nonlinear environments. On that basis, the study applies simultaneous localization and mapping (SLAM). Simulation results obtained with ROS tools in the Gazebo environment confirm that the control algorithm continuously updates the map, the operating environment, and the robot's position. Trajectories are computed around all obstacles so that the robot navigates autonomously and avoids obstacles safely throughout its journey. This study contributes to improving the automation efficiency and applicability of mobile robots in complex operating environments.
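The article itself contains no code. As a purely illustrative sketch of the reinforcement-learning navigation idea the abstract describes, the snippet below trains a tabular Q-learning agent to reach a goal on a small grid while avoiding obstacles. This is not the authors' method: the paper uses deep reinforcement learning with ROS and Gazebo, whereas here a lookup table stands in for the neural network, and the grid size, obstacle positions, and reward values are all assumptions chosen for the example.

```python
import random

SIZE = 5
OBSTACLES = {(1, 1), (2, 3), (3, 1)}          # assumed obstacle cells
GOAL = (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Apply an action; stepping off-grid or into an obstacle stays put."""
    nxt = (state[0] + action[0], state[1] + action[1])
    if not (0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE) or nxt in OBSTACLES:
        nxt = state                            # blocked: remain in place
    reward = 10.0 if nxt == GOAL else -1.0     # step cost encourages short paths
    return nxt, reward, nxt == GOAL

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Epsilon-greedy tabular Q-learning from a fixed start cell."""
    rng = random.Random(seed)
    Q = {(r, c): [0.0] * 4 for r in range(SIZE) for c in range(SIZE)}
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(100):                   # cap episode length
            if rng.random() < eps:
                a = rng.randrange(4)
            else:
                a = max(range(4), key=Q[s].__getitem__)
            s2, r, done = step(s, ACTIONS[a])
            # Standard Q-learning temporal-difference update.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
            if done:
                break
    return Q

def greedy_path(Q, start=(0, 0), limit=30):
    """Roll out the learned greedy policy and return the visited cells."""
    path, s = [start], start
    for _ in range(limit):
        a = max(range(4), key=Q[s].__getitem__)
        s, _, done = step(s, ACTIONS[a])
        path.append(s)
        if done:
            break
    return path
```

Because blocked moves leave the robot in place and each step costs reward, the learned greedy policy routes around the obstacle cells toward the goal; in the paper's setting the same update rule would be applied to a network fed with sensor observations inside Gazebo rather than to a table of grid cells.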

Keywords


Mobile robot; ROS; SLAM; Gazebo; Artificial intelligence

References


[1] P. D. Nguyen, Advanced Control Theory, Science and Engineering Publishing House, (in Vietnamese), Hanoi, Vietnam, 2018.

[2] C. Q. Hoang, V. H. Dao, V. A. Nguyen, and C. B. Le, Electric Drive Systems in Robots, People's Army Publishing House, (in Vietnamese), Hanoi, Vietnam, 2020.

[3] T. T. N. Vu, X. L. Ong, and H. N. Tran, Reinforcement Learning in Automatic Control with MATLAB Simulink, Hanoi Polytechnic Publishing House, (in Vietnamese), Hanoi, Vietnam, 2020.

[4] S. H. Le, D. C. Le, and H. V. Nguyen, Industrial Robots Syllabus, Ho Chi Minh City National University Publishing House, (in Vietnamese), Ho Chi Minh City, Vietnam, 2017.

[5] L. Joseph and J. Cacace, Mastering ROS for Robotics Programming, vol. 2: Design, build, and simulate complex robots using the Robot Operating System, Packt Publishing Ltd., UK, 2018.

[6] F. Guo, H. Yang, and X. Wu, “Model-based deep learning for low-cost IMU dead reckoning of wheeled mobile robot,” IEEE Trans. Ind. Electron., no. 1, pp. 7531–7541, 2023, doi: 10.1109/TIE.2023.3301531.

[7] Y. Li, “Deep reinforcement learning,” in Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 2018, pp. 15-20.

[8] S. P. Thale, “ROS based SLAM implementation for Autonomous navigation using Turtlebot,” ITM Web of Conferences, vol. 32, no. 5, 2020, Art. no. 01011, doi: 10.1051/itmconf/20203201011.

[9] H. X. Dong, C. Y. Weng, C. Q. Guo, and H. Y. Yu, “Real-time avoidance strategy of dynamic obstacles via half model-free detection and tracking with 2D Lidar for mobile robots,” IEEE/ASME Trans. Mechatronics, vol. 26, no. 4, pp. 2215-2225, Aug. 2021.

[10] R. K. E. A. Megalingam, “ROS based autonomous indoor navigation simulation using SLAM algorithm,” Int. J. Pure Appl., vol. 7, pp. 199-205, 2018.

[11] D. Kozlov, “Comparison of Reinforcement Learning Algorithms for Motion Control of an Autonomous Robot in Gazebo Simulator,” in Proc. Int. Conf. Information Technology and Nanotechnology (ITNT), 2021, pp. 1-5, doi: 10.1109/ITNT52450.2021.9649145.

[12] H. T. Tran and T. T. H. Pham, “Controlling mobile robot in flat environment taking into account nonlinear factors applying artificial intelligence,” Bulletin of Electrical Engineering and Informatics, vol. 13, no. 5, pp. 3737-3745, October 2024, doi: 10.11591/eei.v13i5.7818.

[13] H. Lee, J. Kim, and J. Lee, “Resource Allocation in Wireless Networks with Deep Reinforcement Learning: A Circumstance-Independent Approach,” IEEE Syst. J., vol. 14, pp. 2589–2592, 2020.

[14] K. M. Othman and A. B. Rad, “A Doorway Detection and Direction (3Ds) System for Social Robots via a Monocular Camera,” Sensors, vol. 20, pp. 2477-2489, 2020.

[15] K. Nolan, “Optitrack,” 2025. [Online]. Available: https://www.optitrack.com/. [Accessed June 15, 2025].




DOI: https://doi.org/10.34238/tnu-jst.13008

TNU Journal of Science and Technology
Rooms 408, 409 - Administration Building - Thai Nguyen University
Tan Thinh Ward - Thai Nguyen City
Phone: (+84) 208 3840 288 - E-mail: jst@tnu.edu.vn
Based on Open Journal Systems
©2018 All Rights Reserved