ETH Zurich deploys RIDGEBACK to advance autonomous navigation through reinforcement learning
2020-12-14



How can robots continue to drive innovation? One answer is to build robots and platforms that adapt intelligently to the data they collect, the environments they are deployed in, and the tasks they carry out. At each step of the learning process, the robot should be able to evolve its own capabilities and algorithms autonomously, potentially improving faster and further than a human engineer could by hand.

 

Adapting quickly

A team in the Autonomous Systems Lab at ETH Zurich, the Swiss public research university (and Einstein's alma mater), is dedicated to creating robots and intelligent systems that operate autonomously in complex and diverse environments. Working with a Ridgeback equipped with a Franka Emika Panda arm, the team is studying a range of scenarios to develop best practices for how robots "learn". Their focus is on the mechatronic design and control of systems that adapt automatically to different situations and cope with uncertain, dynamic everyday environments. They are most interested in developing robot concepts through practical tests, whether on land, in the air, or in the water. To add intelligence to autonomous navigation, they are developing novel methods and tools for perception, abstraction, mapping, and path planning.


The research team is led by a group of experienced researchers, engineers, and professors. The purpose of their project is to explore the possibilities and limitations of using reinforcement learning to train a neural network in simulation, have that network control all degrees of freedom of a mobile manipulator (i.e. the motion of the manipulator's joints and of the base platform), and then deploy it on a real robot. To this end, they started with a robot that can only move in a plane and fed the data from two 2D LiDAR scans directly to the network, so that the control agent can perceive its environment.
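To make the setup concrete, here is a minimal sketch of what such a whole-body policy could look like in PyTorch. The layer sizes, observation layout (two stacked 2D LiDAR scans, joint positions, and a goal point), and action layout are illustrative assumptions, not the team's actual architecture.

```python
import torch
import torch.nn as nn

class WholeBodyPolicy(nn.Module):
    """Illustrative policy: maps LiDAR scans plus robot state to base and arm velocity commands.

    All dimensions are assumptions for this sketch, not the architecture used by the ETH team.
    """
    def __init__(self, n_lidar=2 * 360, n_joints=7):
        super().__init__()
        # Compress the two 2D LiDAR scans into a compact feature vector.
        self.lidar_encoder = nn.Sequential(
            nn.Linear(n_lidar, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
        )
        # Combine LiDAR features with proprioception (joint positions) and the goal point.
        self.head = nn.Sequential(
            nn.Linear(64 + n_joints + 3, 128), nn.ReLU(),
            nn.Linear(128, 3 + n_joints),  # base (vx, vy, wz) + one velocity per arm joint
            nn.Tanh(),                     # commands normalized to [-1, 1]
        )

    def forward(self, lidar, joint_pos, goal):
        z = self.lidar_encoder(lidar)
        return self.head(torch.cat([z, joint_pos, goal], dim=-1))
```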



Simplifying the simulation process

However, their pursuit of whole-body trajectory planning and control is not without challenges. The problem itself is complicated: sampling-based methods, for example, can return multiple collision-free joint configurations, and they either suffer from the curse of dimensionality or require knowledge of the entire environment at planning time. Another approach is model predictive control, which solves an optimal control problem in a receding-horizon fashion.


At present, the core shortcoming of these approaches is that they are either limited in obstacle avoidance, get stuck in local minima (due to their limited lookahead), or rely only on a kinematic model of the robot. The new method designed by the team lays the foundation for a controller that overcomes these problems while keeping computational requirements very low, so it can run in real time on low-end hardware.
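As a rough illustration of the receding-horizon idea mentioned above, the sketch below runs a naive random-shooting step: it samples action sequences, rolls them out through an assumed dynamics model, and applies only the first action of the best sequence before re-planning. The dynamics and cost functions are hypothetical placeholders, not the team's formulation.

```python
import numpy as np

def mpc_step(state, dynamics, cost, horizon=10, n_samples=256, action_dim=3):
    """Pick the first action of the lowest-cost sampled action sequence (receding horizon).

    `dynamics(state, action)` and `cost(state, action)` are assumed, user-supplied models.
    """
    best_cost, best_action = np.inf, np.zeros(action_dim)
    for _ in range(n_samples):
        seq = np.random.uniform(-1.0, 1.0, size=(horizon, action_dim))
        s, total = state.copy(), 0.0
        for a in seq:
            total += cost(s, a)
            s = dynamics(s, a)
        if total < best_cost:
            best_cost, best_action = total, seq[0]
    return best_action  # apply only the first action, then re-plan at the next time step
```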





To realize their project, the Autonomous Systems Lab used both our Ridgeback mobile base platform and the URDF file we provide for it. Because Ridgeback is an off-the-shelf, ROS-compatible product, it simplifies simulation-based verification and enables rapid deployment.
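As a minimal example of how such a URDF can be used for simulation-based verification, the snippet below loads a Ridgeback description into PyBullet. The file name and path are assumptions; in practice the URDF shipped with (or generated from the xacro in) the ridgeback_description ROS package would be used.

```python
import pybullet as p
import pybullet_data

# Minimal example: load a Ridgeback URDF into a PyBullet simulation.
p.connect(p.DIRECT)                                   # headless; use p.GUI to visualize
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)
p.loadURDF("plane.urdf")                              # ground plane from pybullet_data
ridgeback = p.loadURDF("ridgeback.urdf",              # assumed path to the exported URDF
                       basePosition=[0, 0, 0.1])
print("number of joints:", p.getNumJoints(ridgeback))
```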


Another important Ridgeback feature the team relied on is the platform's omnidirectional drive, which lets it move in any direction at any instant. With it, they were able to find a set of hyperparameters that led to convergence during training and produced an agent that could be deployed on Ridgeback. The team then continued training while slowly reducing the maximum velocity in the y direction to zero, so that the agent could also be deployed on a differential-drive platform; Ridgeback was then used to emulate such a platform by disabling motion in the y direction. If you are interested in this kind of configuration, you can follow our tutorial here.
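A minimal sketch of this idea, assuming a base velocity command of the form [vx, vy, wz]: clamp the lateral velocity with a limit that is slowly annealed to zero over training. The schedule and limit values are illustrative only.

```python
import numpy as np

def clamp_base_velocity(cmd, vy_max):
    """Clamp the lateral (y) velocity of an omnidirectional base command [vx, vy, wz].

    Shrinking vy_max toward zero over training lets an agent trained on the omnidirectional
    Ridgeback transfer to a differential-drive platform. Illustrative sketch only.
    """
    vx, vy, wz = cmd
    return np.array([vx, np.clip(vy, -vy_max, vy_max), wz])

# Assumed training-side curriculum: anneal the y-velocity limit from 0.5 m/s to 0 m/s.
TOTAL_STEPS = 100_000  # placeholder value
for step in range(TOTAL_STEPS):
    vy_max = max(0.0, 0.5 * (1.0 - step / TOTAL_STEPS))
    # action = policy(observation); base command = clamp_base_velocity(action[:3], vy_max)
```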


Some of the tasks RoyalPanda (the combined Ridgeback and Panda system) performs include driving to a given setpoint, and actively holding its position there, with an accuracy of about 4 cm. It also performs grasping exercises, where the goal is to detect when the setpoint has been reached, close the gripper, and set a new setpoint. They also tested the grasping process with a static base, this time using a reinforcement learning approach.
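A minimal sketch of that setpoint logic, assuming the reported ~4 cm tolerance; the helper functions named in the comments are hypothetical.

```python
import numpy as np

SETPOINT_TOLERANCE = 0.04  # ~4 cm positioning accuracy reported in the article

def setpoint_reached(ee_position, setpoint, tol=SETPOINT_TOLERANCE):
    """Return True once the end effector is within `tol` meters of the commanded setpoint."""
    return np.linalg.norm(np.asarray(ee_position) - np.asarray(setpoint)) < tol

# Assumed task loop (get_ee_position, close_gripper, next_setpoint are hypothetical helpers):
# while not setpoint_reached(get_ee_position(), setpoint):
#     pass  # the learned controller keeps tracking the setpoint
# close_gripper()
# setpoint = next_setpoint()
```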



Quick and efficient testing with Ridgeback

The research team knew that if they wanted to actively test real-world applications, they would have to go beyond the simulation environment. Ridgeback provided them with a physical test platform and a non-differential (omnidirectional) drive solution, and its out-of-the-box ROS compatibility was particularly beneficial here.


Julien Kindle, one of the main researchers on the project, considers Ridgeback essential to completing the work: "The Clearpath Ridgeback platform was very convenient from the beginning (i.e. the creation of a simulation) to the end (the deployment on the actual system), because it has ready-made ROS compatibility. This way, we could focus on our work without having to develop our own ROS drivers and models." The team also benefited from the platform's quick start-up and operating speed, the available URDF files describing the robot and its kinematic and dynamic properties, and the smooth driving Ridgeback provides.


Ridgeback therefore supported a three-part technical approach to verifying the team's research:

1. Deploying their neural network on Ridgeback.

2. Modelling the robot in a PyBullet simulation using the URDF file we provide.

3. Verifying the agent in Gazebo through Ridgeback's ROS-Gazebo simulation package.

RRTConnect is used for trajectory tracking in MoveIt; a minimal sketch of such a setup follows below.
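For reference, here is a minimal sketch of selecting RRTConnect in MoveIt through the Python moveit_commander interface. The planning-group and planner-id names are common defaults for the Panda and may differ from the team's actual configuration.

```python
import sys
import rospy
import moveit_commander

# Minimal sketch: plan a motion for the Panda arm with the OMPL RRTConnect planner.
moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("rrtconnect_demo", anonymous=True)

group = moveit_commander.MoveGroupCommander("panda_arm")      # assumed planning group name
group.set_planner_id("RRTConnectkConfigDefault")              # select RRTConnect in OMPL
group.set_planning_time(5.0)

group.set_named_target("ready")   # a named pose assumed to be defined in the SRDF
group.go(wait=True)               # plan and execute
group.stop()
```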


Assembling the project

However, Ridgeback is only the base platform of this ambitious project. The team fitted the robot with a Hokuyo LiDAR and mounted the Franka Emika Panda arm on top. They chose Franka Emika because the company provides a stable ROS driver package for its robot, which made it very easy to combine with Ridgeback. A visual-inertial sensor is used for localization (to compute the setpoint for the end effector). Finally, they connected a remote safety stop to the safety-stop circuits of both Ridgeback and the Panda to protect the robot and the people around it.


Although Clearpath originally configured the team's Ridgeback to carry a dual-arm upper-torso manipulator, the ETH Zurich team subsequently adapted it further for the Panda. First, they designed their own mounting platform, added it to the Ridgeback base, and attached the arm to it. They also placed the arm's power supply on the base (the large black box) and connected it directly to the optional 24V-to-230V inverter inside Ridgeback. From a software point of view, combining the Ridgeback and Panda URDF files (and their ROS drivers) was straightforward. The only tricky part was that, when implementing the code (you can find their RL training here), they had to manually add inertial terms and Gazebo elements to the URDF (taken from this repository).
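As an illustration of the kind of elements that have to be added by hand, the fragment below shows an inertial block and a Gazebo extension attached to a hypothetical mounting link; the link name and all values are placeholders, not taken from the team's URDF.

```xml
<!-- Illustrative URDF fragment: inertial and Gazebo elements of the sort that must be
     added manually when combining the Ridgeback and Panda descriptions. Values are placeholders. -->
<link name="panda_mount">
  <inertial>
    <origin xyz="0 0 0.05" rpy="0 0 0"/>
    <mass value="2.0"/>
    <inertia ixx="0.01" ixy="0" ixz="0" iyy="0.01" iyz="0" izz="0.01"/>
  </inertial>
</link>

<gazebo reference="panda_mount">
  <material>Gazebo/DarkGrey</material>
  <selfCollide>false</selfCollide>
</gazebo>
```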


Their research was a great success! The team was able to train reinforcement learning agents, in the form of neural networks, in simulation and then deploy them on the real robot in a variety of corridor environments. By using automatic domain randomization, they could slowly increase the complexity of the simulation, improving the speed and robustness of convergence and helping the agent generalize to the real world. A paper based on their findings has been submitted to IROS and RA-L, and they hope to see the work published soon; in the meantime, the paper is also available online here.
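A minimal sketch of the automatic domain randomization idea: each randomized simulation parameter keeps a range that is widened whenever the agent performs well at the current boundary. The thresholds, step sizes, and the corridor-width example are assumptions for illustration only.

```python
import numpy as np

class AutoDomainRandomization:
    """Minimal sketch of automatic domain randomization (ADR) for one simulation parameter."""

    def __init__(self, low, high, step=0.05, success_threshold=0.8):
        self.low, self.high = low, high
        self.step, self.success_threshold = step, success_threshold

    def sample(self):
        """Draw a parameter value from the current randomization range."""
        return np.random.uniform(self.low, self.high)

    def update(self, success_rate_at_boundary):
        """Widen the range only when the agent copes with the hardest current settings."""
        if success_rate_at_boundary >= self.success_threshold:
            self.low -= self.step
            self.high += self.step

# Example: randomize corridor width (meters); the range grows as training succeeds.
corridor_width = AutoDomainRandomization(low=1.4, high=1.6)
```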


They have used a similar combination, named RoyalYumi, in other projects: an adapted setup pairing Ridgeback with an ABB YuMi. One example involves fetch-and-carry applications in unstructured indoor environments. You can read the paper here.


The research team includes Julien Kindle, Dr. Fadri Furrer, Dr. Tonci Novkovic, Dr. Jen Jen Chung, Professor Roland Siegwart, and Dr. Juan Nieto.


To learn more about our omnidirectional indoor mobile platform Ridgeback, please visit our website.


http://www.clearpathcn.com/

