MOBILE ROBOT CONTROL USING DEEP REINFORCEMENT LEARNING ALGORITHM
About this article
Received: 08/06/25                Revised: 26/11/25                Published: 26/11/25

References
[1] P. D. Nguyen, Advanced Control Theory, Science and Engineering Publishing House, (in Vietnamese), Hanoi, Vietnam, 2018.
[2] C. Q. Hoang, V. H. Dao, V. A. Nguyen, and C. B. Le, Electric Drive Systems in Robots, People's Army Publishing House, (in Vietnamese), Hanoi, Vietnam, 2020.
[3] T. T. N. Vu, X. L. Ong, and H. N. Tran, Reinforcement Learning in Automatic Control with MATLAB Simulink, Hanoi Polytechnic Publishing House, (in Vietnamese), Hanoi, Vietnam, 2020.
[4] S. H. Le, D. C. Le, and H. V. Nguyen, Industrial Robots Syllabus, Ho Chi Minh City National University Publishing House, (in Vietnamese), Ho Chi Minh City, Vietnam, 2017.
[5] L. Joseph and J. Cacace, Mastering ROS for Robotics Programming, 2nd ed.: Design, Build, and Simulate Complex Robots Using the Robot Operating System, Packt Publishing Ltd., UK, 2018.
[6] F. Guo, H. Yang, and X. Wu, “Model-based deep learning for low-cost IMU dead reckoning of wheeled mobile robot,” IEEE Trans. Ind. Electron., no. 1, pp. 7531–7541, 2023, doi: 10.1109/TIE.2023.3301531.
[7] Y. Li, “Deep reinforcement learning,” in Proc. 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 2018, pp. 15–20.
[8] S. P. Thale, “ROS based SLAM implementation for Autonomous navigation using Turtlebot,” ITM Web of Conferences, vol. 32, no. 5, 2020, Art. no. 01011, doi: 10.1051/itmconf/20203201011.
[9] H. X. Dong, C. Y. Weng, C. Q. Guo, and H. Y. Yu, “Real-time avoidance strategy of dynamic obstacles via half model-free detection and tracking with 2D Lidar for mobile robots,” IEEE/ASME Trans. Mechatronics, vol. 26, no. 4, pp. 2215–2225, Aug. 2021.
[10] R. K. E. A. Megalingam, “ROS based autonomous indoor navigation simulation using SLAM algorithm,” Int. J. Pure Appl., vol. 7, pp. 199–205, 2018.
[11] D. Kozlov, “Comparison of Reinforcement Learning Algorithms for Motion Control of an Autonomous Robot in Gazebo Simulator,” in Proc. International Conference on Information Technology and Nanotechnology (ITNT), 2021, pp. 1–5, doi: 10.1109/ITNT52450.2021.9649145.
[12] H. T. Tran and T. T. H. Pham, “Controlling mobile robot in flat environment taking into account nonlinear factors applying artificial intelligence,” Bulletin of Electrical Engineering and Informatics, vol. 13, no. 5, pp. 3737–3745, Oct. 2024, doi: 10.11591/eei.v13i5.7818.
[13] H. Lee, J. Kim, and J. Lee, “Resource Allocation in Wireless Networks with Deep Reinforcement Learning: A Circumstance-Independent Approach,” IEEE Syst. J., vol. 14, pp. 2589–2592, 2020.
[14] K. M. Othman and A. B. Rad, “A Doorway Detection and Direction (3Ds) System for Social Robots via a Monocular Camera,” Sensors, vol. 20, pp. 2477–2489, 2020.
[15] K. Nolan, “Optitrack,” 2025. [Online]. Available: https://www.optitrack.com/. [Accessed June 15, 2025].
DOI: https://doi.org/10.34238/tnu-jst.13008