Industrial Robots: From Perception to Motion
Industrial robotics requires knowledge in many engineering domains, including mechanical design, perception, decision making, control design, and embedded systems. Learn about the key components and steps to complete a pick-and-place robot workflow.
Pick-and-place robots follow a basic initialization step for perceiving the environment, followed by two main steps: identifying parts and executing the pick-and-place workflow. Compared to traditional industrial robot tasks, where all objects in the workspace area are defined beforehand, the scan-and-build environment process using computer vision is important for picking high-mix parts and for flexible operations. Computer Vision Toolbox™ and Deep Learning Toolbox™ are used for object detection. The next task is to instruct the robot to move from an initial configuration to a desired joint configuration while avoiding collisions with obstacles in the scene. The motion planning consists of path planning and trajectory generation, providing smooth joint trajectories and solving the practical industrial robotics task. In this video, you’ll see the bidirectional rapidly exploring random tree (RRT) algorithm introduced for an industrial robot manipulator application. The path provided by the path planner constitutes the input to trajectory generation. The trajectory generator then produces a time-based control sequence for how to follow the path given constraints over time. You’ll also learn how Stateflow® can be used to schedule the high-level tasks and step from task to task in the pick-and-place workflow.
For more information on the tools used in this video, refer to the following resources:
- Find out more about programming robots in MATLAB and Simulink
- Watch the Designing Robot Manipulator Algorithms
- Watch the Trajectory Planning for Robot Manipulators
- Try out the Pick-and-Place Workflow using Point-Cloud Processing and RRT Path Planning
- Try Robotics System Toolbox
In this video, I'm going to discuss designing industrial robot applications, specifically pick-and-place application development with MATLAB. During the course of this video, I'm going to cover the following topics.
First, I'm going to start by talking about industrial robots and their applications. I will then discuss the key components and steps to complete a pick-and-place robot workflow, such as scanning and building the environment, detecting and classifying the part, path planning, and trajectory generation. I will conclude by pointing out additional available resources.
Here we are seeing a simulation on the left and the corresponding video on the right of a robot arm performing pick-and-place. Pick-and-place automation can speed up the process of picking up parts or items and placing them in another location for last-mile industrial applications. Typically, these applications use an advanced perception system and an autonomous robot arm to identify, grasp, and move objects from one place to another.
Pick-and-place robots in general include elements of perception, planning, and control. The robot manipulator can autonomously detect a specific object based on camera input and plan a path to pick up the object.
As you can imagine, this involves several different technologies, such as robotics, optimization, computer vision, machine learning, control logic, and so on. To begin designing the robot application, we can start with a system design that represents the components we need and the interactions between them.
For this case, we would need an object detector and classifier from the perception module, a motion planner and state controller, and a simulator or the robot hardware to prototype, iterate on, and test each component's performance.
I have further broken down the whole pick-and-place workflow in this slide. This flowchart explains in detail how the robot arm manipulator interacts with objects. It consists of a basic initialization step for perceiving the environment, followed by two main steps: identifying the part and executing the pick-and-place task.
The first key step is to perceive the environment. By leveraging 3D scanning and point-cloud processing from computer vision, autonomous industrial robots are able to respond to dynamic environments, such as a change in part location or dynamic obstacles. Compared to the traditional pick-and-place task, where everything is known beforehand, this step is very important for picking high-mix parts as well as for flexible automation.
This approach overcomes limitations in traditional industrial robot applications. Therefore, before starting the pick-and-place task, the robot moves through a predefined workspace area to scan the scene and capture a set of point clouds of the environment using an onboard sensor.
Once the robot has scanned the workspace area, the captured point clouds are further processed and encoded as collision meshes so that the robot can easily identify the obstacles and parts during path planning. The process from point cloud to collision mesh is shown here. Several point-cloud processing tools from Computer Vision Toolbox in MATLAB are used, such as point-cloud transformation, point-cloud merging, and segmenting the point cloud into clusters. Finally, you can create collision mesh objects from the resulting point-cloud segments, and the path planner will treat these meshes as obstacles to avoid.
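The video uses Computer Vision Toolbox functions for this segmentation step. As a language-agnostic illustration of the underlying idea, here is a minimal Python sketch of distance-based Euclidean clustering, where each resulting cluster would become one candidate collision mesh (the threshold value and point data are made up for the example):

```python
import math

def euclidean_clusters(points, min_distance=0.05):
    """Group 3-D points into clusters: two points belong to the same
    cluster if they are within min_distance of each other, directly or
    through a chain of neighbors. This mirrors the idea behind
    distance-based point-cloud segmentation."""
    n = len(points)
    visited = [False] * n
    clusters = []
    for seed in range(n):
        if visited[seed]:
            continue
        # Flood-fill all points reachable within min_distance.
        stack, cluster = [seed], []
        visited[seed] = True
        while stack:
            i = stack.pop()
            cluster.append(points[i])
            for j in range(n):
                if not visited[j] and math.dist(points[i], points[j]) <= min_distance:
                    visited[j] = True
                    stack.append(j)
        clusters.append(cluster)
    return clusters

# Two well-separated groups of points on a table surface.
pts = [(0.0, 0.0, 0.0), (0.02, 0.0, 0.0), (0.0, 0.03, 0.0),
       (1.0, 1.0, 0.0), (1.01, 1.0, 0.0)]
print(len(euclidean_clusters(pts)))  # 2 clusters -> 2 candidate obstacles/parts
```

A production pipeline would use a spatial index instead of the quadratic neighbor search, but the clustering criterion is the same.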
We now need to detect and classify the objects so the robot knows which object to pick up. We can create a set of image data for training. Here, we move the robot around and capture image streams from many different camera perspectives. Then we can use deep learning to classify and detect the objects in the images.
The images can be labeled using the Image Labeler app from Computer Vision Toolbox, so that you can create a training dataset for the detection model. The detect function from Computer Vision Toolbox gives you the image positions of the bounding boxes of the detected objects along with their classes.
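Downstream code typically consumes that detector output by selecting a detection and converting its bounding box to a target point. Here is a minimal Python sketch; the `(label, score, (x, y, w, h))` tuple format is a hypothetical stand-in, since each detector API defines its own output structure:

```python
def best_detection(detections):
    """Pick the highest-confidence detection and return its class and
    bounding-box center in pixel coordinates. Each detection is a
    (label, score, (x, y, w, h)) tuple -- a hypothetical format used
    only for this illustration."""
    label, score, (x, y, w, h) = max(detections, key=lambda d: d[1])
    center = (x + w / 2.0, y + h / 2.0)
    return label, score, center

dets = [("can", 0.72, (40, 60, 20, 30)),
        ("bottle", 0.91, (120, 50, 24, 48))]
print(best_detection(dets))  # ('bottle', 0.91, (132.0, 74.0))
```

The pixel center would then be mapped into robot coordinates using the camera's calibration before being handed to the planner.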
In this recycling robot example, within the Gazebo simulator, we have two classes of objects -- bottle and can -- placed in different locations on the table. We use the simulated camera feed from the robot and apply a pretrained deep learning model to detect the recyclable parts. The model takes a camera frame as input and outputs the location of each object and the type of recycling it requires.
The robot now knows the obstacles to avoid, which object to pick up, and where to place it. The next task is to instruct the robot to move from an initial configuration to a desired joint configuration while avoiding collisions, in order to pick up the object and place it.
In order to move the robot arm to reach the object, we need a motion planner. Let's dive into path planning and trajectory generation, which provide the smooth joint trajectories that solve the practical robotics task. A robot picking and placing objects in an environment may require a path planning algorithm to find a collision-free path from one configuration to another. A path is a purely geometric description of motion, taking into account obstacle avoidance and negotiating a complex scene globally.
It takes the initial configuration, the final configuration, and the environment as inputs. What makes the problem interesting are the constraints that need to be satisfied. Examples of constraints are robot joint limits and obstacles. The path planner finds a collision-free path from the starting configuration to the goal configuration. Depending on the characteristics of the application, the environment, and the robot you use, you can start with either optimization-based or sampling-based path planners.
Here, I would like to introduce the bidirectional rapidly exploring random tree (RRT) algorithm for robot manipulators. A bidirectional RRT planner creates two trees rooted at the specified start and goal configurations. You will need to specify some properties: first, the maximum connection distance between planned configurations, and optionally, a connect heuristic to potentially increase speed. To extend each tree from the start and goal configurations, the planner generates a random configuration and, if it is valid -- meaning no collision with the environment -- takes a step from the nearest node based on the max connection distance property.
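The video demonstrates this with the MATLAB manipulator planner; to make the extend-and-connect loop concrete, here is a heavily simplified Python sketch in a 2-D configuration space with one circular obstacle. The step size, obstacle, and iteration budget are invented for the example, and edge collisions are only checked at endpoints, not along the segment:

```python
import math, random

random.seed(0)

OBSTACLE = ((0.5, 0.5), 0.2)   # circular obstacle: center, radius
STEP = 0.1                     # analogue of the max connection distance

def valid(p):
    """A configuration is valid if it lies in the unit square and
    outside the obstacle (endpoint check only, for brevity)."""
    (cx, cy), r = OBSTACLE
    return 0 <= p[0] <= 1 and 0 <= p[1] <= 1 and math.dist(p, (cx, cy)) > r

def steer(a, b, step=STEP):
    """Move from a toward b by at most `step`."""
    d = math.dist(a, b)
    if d <= step:
        return b
    t = step / d
    return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))

def bidirectional_rrt(start, goal, iters=2000):
    # Each tree maps node -> parent, so a path could be traced back.
    trees = [{start: None}, {goal: None}]
    for _ in range(iters):
        rand = (random.random(), random.random())
        for k, tree in enumerate(trees):
            near = min(tree, key=lambda n: math.dist(n, rand))
            new = steer(near, rand)
            if valid(new):
                tree[new] = near
                # After extending, attempt to connect the other tree.
                other = trees[1 - k]
                near2 = min(other, key=lambda n: math.dist(n, new))
                if math.dist(near2, new) <= STEP:
                    return True   # trees connected: a path exists
    return False

print(bidirectional_rrt((0.05, 0.05), (0.95, 0.95)))
```

A real manipulator planner does the same thing in joint space with full edge collision checking against the collision meshes built earlier.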
After the extension, the planner attempts to connect the two trees. Invalid configurations, or connections that collide with the environment, are not added to the tree. When the EnableConnectHeuristic property is true, the planner ignores the limit set by the max connection distance property and connects the two trees directly whenever two nodes can see each other collision-free. When the environment is less crowded, the connect heuristic is useful for shorter planning times.
When we set the EnableConnectHeuristic property to false, we limit the extension distance for connecting the two trees to the value of the max connection distance. This results in a higher success rate of finding a valid plan, but may lead to long paths. We can use a path shortening function to shorten the resulting path by running a randomized shortening strategy. For example, starting from the initial path, you select two non-adjacent edges, pick intermediate configurations on the selected edges, and try to connect them. If the connection is not valid, skip this shortcut and repeat with another two non-adjacent edges. If it is valid, add the new edge and delete the longer edges it replaces.
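The randomized shortening strategy described above can be sketched in a few lines of Python. This is a simplified variant that shortcuts between existing waypoints rather than intermediate points on edges, with an invented iteration budget; the validity check is passed in as a callback:

```python
import math, random

random.seed(1)

def path_length(path):
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def shortcut(path, is_edge_valid, tries=100):
    """Randomized shortcutting: repeatedly pick two non-adjacent
    waypoints and, if the straight segment between them is valid,
    splice it in, dropping the intermediate waypoints."""
    path = list(path)
    for _ in range(tries):
        if len(path) < 3:
            break
        i, j = sorted(random.sample(range(len(path)), 2))
        if j - i < 2:
            continue   # adjacent waypoints: nothing to shorten
        if is_edge_valid(path[i], path[j]):
            path = path[:i + 1] + path[j:]
    return path

# Free space everywhere in this toy example, so the path collapses
# toward the straight line between its endpoints.
zigzag = [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0)]
short = shortcut(zigzag, lambda a, b: True)
print(path_length(short) <= path_length(zigzag))  # True: never longer
```

Because a shortcut is only accepted when the new segment is valid, the procedure can only shorten or preserve the path, never break it.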
In this slide, I'm showing one of the examples that showcases how to use the bidirectional RRT planner for a robot manipulator. As discussed, there are several properties you can tune to get a shorter path or to improve planning time. I'm showing two paths resulting from a larger and a smaller max connection distance, respectively, when placing the object over the wall. The computation time of the planner is proportional to the number of configurations generated.
To improve the planning time, consider increasing the validation distance or decreasing the max connection distance. The path provided by the path planner constitutes the input to trajectory generation. The trajectory generator then produces a time-based sequence describing how to follow the path given constraints such as position, velocity, and acceleration, by locally connecting waypoints with a class of polynomial functions. So a trajectory is a description of how to follow a path over time. We now know the difference between path planning and trajectory generation.
There are several ways to create trajectories that interpolate joint configurations over time. A trapezoidal velocity trajectory, for example, is a piecewise trajectory of constant acceleration, zero acceleration, and constant deceleration. This leads to a trapezoidal velocity profile. It is relatively easy to implement and to validate against requirements such as position, speed, and acceleration limits. You can also interpolate between two waypoints using polynomials of various orders. The most common orders used in practice are cubic and quintic polynomials. Polynomial trajectories are useful for continuously stitching together segments with zero or nonzero velocity and acceleration, because their acceleration profiles are smooth, unlike with trapezoidal velocity trajectories.
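To make the trapezoidal profile concrete, here is a minimal Python sketch that computes the phase timings for a single joint move. The numbers in the example are invented; the video itself uses a toolbox trajectory generator rather than hand-rolled code:

```python
def trapezoidal_profile(distance, v_max, a_max):
    """Phase timing for a trapezoidal velocity profile: accelerate at
    a_max up to v_max, cruise, then decelerate at a_max. Falls back to
    a triangular profile when the move is too short to reach v_max.
    Returns (accel time, cruise time, total time)."""
    t_acc = v_max / a_max
    d_acc = 0.5 * a_max * t_acc ** 2
    if 2 * d_acc >= distance:
        # Triangular profile: peak velocity stays below v_max.
        t_acc = (distance / a_max) ** 0.5
        return t_acc, 0.0, 2 * t_acc
    t_cruise = (distance - 2 * d_acc) / v_max
    return t_acc, t_cruise, 2 * t_acc + t_cruise

# Move a joint 1.0 rad with v_max = 0.5 rad/s, a_max = 1.0 rad/s^2:
# accelerate for 0.5 s (0.125 rad), cruise 1.5 s (0.75 rad), decelerate 0.5 s.
print(trapezoidal_profile(1.0, 0.5, 1.0))  # (0.5, 1.5, 2.5)
```

Checking the returned times against the joint's speed and acceleration limits is exactly the kind of requirement validation the trapezoidal profile makes easy.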
So we will need to continuously step through the states of detecting objects, picking them up, and placing them in the correct staging area. To do that, a Stateflow chart can be used to schedule the high-level tasks and step from task to task in the pick-and-place workflow. Finally, by putting all the steps together in this example, the robot identifies parts and recycles them into two bins. In this example, the robot is running in the Gazebo simulator with a simulated depth camera mounted on the robot.
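The video implements this scheduling with a Stateflow chart in Simulink; the same idea can be sketched as a plain transition-table state machine. The state names and the loop-until-empty logic below are hypothetical simplifications of the workflow described above:

```python
# Hypothetical task names mirroring the workflow: scan the scene once,
# then alternate between detecting a part and picking and placing it.
TRANSITIONS = {
    "Scan": "Detect",
    "Detect": "PickPlace",
    "PickPlace": "Detect",   # loop until no parts remain
}

def run_scheduler(parts):
    """Step through the task states until every part has been handled,
    returning the sequence of states visited."""
    state, log = "Scan", []
    parts = list(parts)
    while True:
        log.append(state)
        if state == "PickPlace":
            parts.pop(0)          # one part handled per cycle
        if state == "Detect" and not parts:
            log.append("Done")    # nothing left to detect
            return log
        state = TRANSITIONS[state]

print(run_scheduler(["bottle", "can"]))
# ['Scan', 'Detect', 'PickPlace', 'Detect', 'PickPlace', 'Detect', 'Done']
```

A Stateflow chart adds guards, timing, and fault-handling transitions on top of this same table-driven structure.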
We use the bidirectional RRT path planner and a trapezoidal velocity profile for trajectory generation. This example uses multiple toolboxes. Robotics System Toolbox is used to model and simulate the manipulator. ROS Toolbox is used to connect MATLAB to the Gazebo simulator. Computer Vision Toolbox and Deep Learning Toolbox are used for object detection and point-cloud processing. The workflow starts by scanning the environment to build the scene with point-cloud processing. Once a complete scene of the working area is available from the scan-and-build environment step, the robot can plan a path to pick each part and place it in a recycling bin. The robot continues working until all parts have been placed. The recycling robot example shown here can also be a starting point for different scenarios, such as welding and assembly.
In this video, we covered pick-and-place application development with MATLAB. We have many examples to get you started. I have listed here some of the examples on manipulator motion planning, trajectory generation and following, and collision checking from Robotics System Toolbox that you can try out. I encourage you to visit the Robotics System Toolbox web page. We have similar material published on our website to help accelerate your development effort. In addition, we have short videos and GitHub repositories to help you ramp up on several topics. I would ask you to explore these resources and see if they can help with your application. Thank you for your attention.