Mobile Robot Map-Free Navigation


Figure 1. Methodology for Training and Deployment

The DRL model was trained in simulation on a constrained racetrack at the vehicle's physical limits and transferred zero-shot to the real world, generalizing out-of-distribution to exploration in unstructured terrain, navigation to objects under varied lighting conditions, and dynamic obstacle avoidance.

Video 1. Visual Simultaneous Localization and Mapping (VSLAM)


Autonomous Mobile Robots (AMRs) of all forms, including wheeled vehicles, quadrupeds, and humanoids, have the potential to be valuable tools for a variety of tasks in applications spanning agriculture, manufacturing, disaster response, Search and Rescue (SAR), military operations, and extraterrestrial planetary exploration. Their greatest utility lies in operating in new and dynamically changing environments that lack prior maps and are GPS-denied, such as caves and lava tubes on Mars, unfamiliar buildings, contested military regions, or areas affected by natural disasters such as fires or earthquakes. State-Of-The-Art (SOTA) map-free navigation methods routinely rely on modular pipelines that incorporate Simultaneous Localization and Mapping (SLAM) to both estimate the robot's state and construct a map of the environment for collision-free trajectory planning and control. The construction and maintenance of an explicit, human-interpretable map introduces a point of failure that renders these systems fragile: linear and angular velocities beyond a low threshold induce mapping errors due to sensor blur and subsequent feature-matching inaccuracies, causing the navigation stack to collapse and forcing the robot to operate at inefficiently low speeds to maintain localization.
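To illustrate the map-free alternative described above, the sketch below shows an end-to-end policy that maps a raw LiDAR scan and goal coordinates directly to velocity commands, with no map or SLAM in the loop. This is a minimal illustration, not the authors' trained model: the class name, network dimensions, actuator limits, and random weights are all assumptions; in practice the weights would be learned with a DRL algorithm in simulation before zero-shot transfer to hardware.

```python
import numpy as np

class MapFreePolicy:
    """Illustrative feed-forward policy: LiDAR scan + goal -> (v, w).

    No map is built or maintained; the observation is consumed directly.
    Weights are random placeholders standing in for DRL-trained parameters.
    """

    def __init__(self, n_beams=64, hidden=128, v_max=2.0, w_max=1.5, seed=0):
        rng = np.random.default_rng(seed)
        obs_dim = n_beams + 2                  # scan + [goal distance, goal bearing]
        self.w1 = rng.normal(0.0, 0.1, (obs_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, 2))
        self.b2 = np.zeros(2)
        self.v_max, self.w_max = v_max, w_max  # assumed actuator limits

    def act(self, scan, goal_dist, goal_bearing):
        obs = np.concatenate([scan, [goal_dist, goal_bearing]])
        h = np.tanh(obs @ self.w1 + self.b1)   # hidden layer
        a = np.tanh(h @ self.w2 + self.b2)     # squash actions to [-1, 1]
        v = 0.5 * (a[0] + 1.0) * self.v_max    # forward velocity in [0, v_max]
        w = a[1] * self.w_max                  # yaw rate in [-w_max, w_max]
        return v, w

policy = MapFreePolicy()
scan = np.full(64, 5.0)                        # synthetic scan: 5 m on all beams
v, w = policy.act(scan, goal_dist=3.0, goal_bearing=0.2)
print(v, w)
```

Because the policy never localizes against a map, there is no mapping pipeline to corrupt at high linear or angular velocities, which is the failure mode of the SLAM-based stacks discussed above.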

References

[1] S. Sivashangaran and A. Eskandarian, “Deep Reinforcement Learning for Autonomous Ground Vehicle Exploration Without A-Priori Maps,” Advances in Artificial Intelligence and Machine Learning, vol. 3, no. 2, pp. 1198-1219, Jun. 2023.

[2] S. Sivashangaran, A. Khairnar and A. Eskandarian, “Exploration Without Maps via Zero-Shot Out-of-Distribution Deep Reinforcement Learning,” arXiv preprint arXiv:2402.05066, Feb. 2024.

[3] S. Sivashangaran, A. Khairnar and A. Eskandarian, “AutoVRL: A High Fidelity Autonomous Ground Vehicle Simulator for Sim-to-Real Deep Reinforcement Learning,” IFAC-PapersOnLine, vol. 56, no. 3, pp. 475-480, Dec. 2023.

[4] S. Sivashangaran and A. Eskandarian, “XTENTH-CAR: A Proportionally Scaled Experimental Vehicle Platform for Connected Autonomy and All-Terrain Research,” Proceedings of the ASME 2023 International Mechanical Engineering Congress and Exposition, Volume 6: Dynamics, Vibration, and Control, New Orleans, LA, USA, Oct. 29-Nov. 2, 2023. V006T07A068. American Society of Mechanical Engineers.
