Xuanwu 10 mobile grasping robot helps solve challenging tasks
2020-11-23

When discussing the future of robotics and artificial intelligence, the conversation usually revolves around anxiety about being replaced: will robots eventually make many manual tasks and responsibilities redundant? Others, however, are more optimistic about how robots will be integrated into work, and believe the potential for collaboration between robots and humans is enormous.

1.jpg

Xuanwu 10, a UR collaborative robotic arm integrated on Ridgeback

Lend a helping hand

The Space and Terrestrial Autonomous Robotic Systems (STARS) Laboratory at the University of Toronto Institute for Aerospace Studies is one such team dedicated to bringing robots out of the laboratory and into the real world to help people in their daily lives. Mobile manipulators can take on tasks that are often too dangerous, repetitive, tedious, or even impossible for humans to complete. In an ambitious project that combines our Ridgeback platform with a collaborative robotic arm, the STARS Lab team is exploring a range of mobile manipulation applications in human environments.


Like many academic teams, the STARS Laboratory is made up of faculty and students. It is led by Professor Jonathan Kelly, with the assistance of several students including Trevor Ablett, Abhinav Grover, Oliver Limoyo, Filip Marić, and Philippe Nadeau. The team is working together to develop state-of-the-art machine learning techniques that extend the capabilities of traditional robotic systems (static manipulators) so that robots can complete challenging mobile manipulation tasks. To this end, they are studying various combinations of model-free, model-based, imitation, and reinforcement learning.


However, the biggest challenge facing this ambitious approach is that humans possess extraordinary intelligence, insight, and dexterity. Tasks that seem basic to humans remain difficult even for the most advanced robots; in fact, researchers still do not fully understand how humans manage to perform such a wide range of tasks so effectively. The STARS Lab team is therefore actively exploring how machines might one day become as efficient and versatile as humans.



Breaking down the Xuanwu system

In their project, the omnidirectional Ridgeback serves as the base for a human-safe collaborative robot arm (a UR10), a dexterous gripper (a Robotiq 3-Finger Gripper), and a force-torque sensor (a Robotiq FT sensor), together with Clearpath's pre-installed ROS software. With this ready-to-use, out-of-the-box system, the team was able to focus on the programming side of the problem without having to develop a robust hardware system from scratch. Without the Ridgeback, they would have had to buy and integrate each robot component separately, or design their own platform from scratch from bare electronic components; in their research, they found that neither option was feasible or cost-effective.
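As a rough illustration of how the pre-installed ROS software exposes these components, the short Python sketch below reads the force-torque sensor and publishes velocity commands to the base. The topic names and the simple stopping rule are illustrative assumptions rather than the STARS Lab configuration.

import rospy
from geometry_msgs.msg import Twist, WrenchStamped

def wrench_callback(msg):
    # Hypothetical rule: if the gripper feels a large contact force, stop the base.
    cmd = Twist()
    if abs(msg.wrench.force.z) < 20.0:
        cmd.linear.x = 0.1  # otherwise creep forward at 0.1 m/s
    cmd_pub.publish(cmd)

rospy.init_node("mobile_manipulation_demo")
cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)  # base velocity command topic
rospy.Subscriber("/robotiq_ft_wrench", WrenchStamped, wrench_callback)  # FT sensor topic (assumed name)
rospy.spin()

Because the base, arm, gripper, and sensors all appear as ROS topics out of the box, experiments like this can be written and swapped out without touching the hardware integration.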


Another major concern for the STARS Lab team was wasted time. Since their main focus is programming, they did not want to risk running into problems during initial design, engineering, and construction. In other words, they needed something reliable. As Jonathan Kelly, head of the STARS Laboratory, put it: "Clearpath has previously provided many integrated systems to other laboratories at the University of Toronto, and these machines have been used successfully in a variety of research projects." Thanks to the Ridgeback's robust build quality, minimal hardware maintenance, and Clearpath's extensive technical support, the team was able to focus on its own research.


2.png

Demonstrating the mobile manipulator at a summer camp

Competing with humans

Let us delve into some specifics of their system tests. The STARS team uses a model-free reinforcement learning strategy that, based on easily obtained sensor data (camera RGB and depth images, end-effector position, and gripper state), generates end-effector velocity commands, positions, and force-torque values on the real Ridgeback hardware. This allows the team to see how their trained algorithms behave on real hardware, not just in simulation. In another experiment, they developed a forward prediction model that can predict future images (in terms of appearance) from a data set of {image, action, next image} tuples.
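As a loose sketch of the forward prediction idea (an assumption-laden illustration, not the lab's implementation), the PyTorch snippet below trains a small convolutional network to map a current image and an action to a predicted next image; the architecture, 64x64 image size, and six-dimensional action are placeholder choices.

import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    def __init__(self, action_dim=6):
        super().__init__()
        # Encode the current RGB image, inject the action, decode the predicted next image.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.action_fc = nn.Linear(action_dim, 64)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, image, action):
        feat = self.encoder(image)                      # B x 64 x H/4 x W/4
        act = self.action_fc(action)[:, :, None, None]  # broadcast action over space
        return self.decoder(feat + act)

model = ForwardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step on a dummy batch of (image, action, next_image) tuples.
image, action, next_image = torch.rand(8, 3, 64, 64), torch.rand(8, 6), torch.rand(8, 3, 64, 64)
optimizer.zero_grad()
loss = loss_fn(model(image, action), next_image)
loss.backward()
optimizer.step()

In practice the dummy tensors would be replaced by logged {image, action, next image} tuples collected on the robot.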


To carry out most of these tests, STARS Lab focuses on sensors that require no additional setup and allow the robot to interact with objects in a more human-like way. As they discovered, visual sensing alone is not sufficient for many tasks (for example, insertions that require tight tolerances). This is why, in their end-to-end learning approach, they rely on the arm's joint encoders, cameras mounted on the sensor mast, and the force-torque sensor (with the gripper as its payload) interacting with any task-relevant objects.
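For intuition, here is a deliberately simple sketch (not the lab's code) of how those three sensing streams could be concatenated into a single observation vector for an end-to-end policy; the feature sizes are assumed.

import numpy as np

def build_observation(joint_angles, wrench, image_features):
    """Concatenate arm joint encoder readings, the force-torque wrench,
    and a visual feature vector into one flat observation."""
    return np.concatenate([joint_angles, wrench, image_features])

obs = build_observation(
    joint_angles=np.zeros(6),      # the UR10 arm has six joints
    wrench=np.zeros(6),            # force (3) + torque (3) from the FT sensor
    image_features=np.zeros(128),  # hypothetical size of an image embedding
)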


2.jpg

Xuanwu 10 unboxing at the STARS Lab

Through their research, the team successfully established an imitation learning framework (that is, one that actively works to replicate a process from collected data) in which humans teleoperate the robot using off-the-shelf VR controllers. Because the concept has been validated on their Ridgeback project, they believe the approach can be extended to many different mobile manipulation settings and scenarios. This opens up a variety of possibilities for applications and methods that challenge current mobile manipulators and robot teleoperation, and has led them to begin building core industry partnerships to capitalize on their findings. They also recently published results at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), showing that their model-based prediction model can be used in conjunction with Kalman filters to improve performance on noisy visual data. The team is always looking to refine its theory and develop new and exciting ways to push robotics beyond its current limits.
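As a generic illustration of how a forward model and a Kalman filter can be combined to smooth noisy visual measurements (the published method is not reproduced here), the sketch below runs one predict/update cycle; the dynamics, measurement model, and noise covariances are placeholder assumptions.

import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle.
    x: state estimate, P: covariance, z: noisy visual measurement,
    F: state transition (the forward model, linearised), H: measurement model,
    Q/R: process and measurement noise covariances."""
    # Predict with the forward model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the visual measurement.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Example: track a 2D end-effector position with placeholder identity dynamics.
x, P = np.zeros(2), np.eye(2)
F, H = np.eye(2), np.eye(2)
Q, R = 0.01 * np.eye(2), 0.1 * np.eye(2)
x, P = kalman_step(x, P, z=np.array([0.12, -0.03]), F=F, H=H, Q=Q, R=R)

The filter's predict step plays the role of the forward model, while the noisy camera-based estimate enters only through the update step, so outlier-prone visual data is weighted against the model's prediction.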


To learn more about the ongoing work of the STARS Laboratory, please visit its website (https://starslab.ca/).


 


To learn more about our Ridgeback platform and how it can improve your next project, visit the Clearpath website:

http://www.clearpathcn.com

