Detecting Grasp Points For Grasping Sorghum Stalks


The goal of this project was to detect grasp points for grasping sorghum stalks. Once a stalk was grasped, a penetrometer was used to measure stalk strength. Fig. 1 shows the detected grasp points.




Fig. 1. (a) The stalk images. (b) The point cloud with segmented stalks and detected grasp points.

System Design

  • The mobile platform, shown in Fig. 2(c), was developed by Timothy Mueller-Sim, and the 3-DOF gripper was developed by Merritt Jenkins.

  • The platform carried a Carnegie Robotics Multisense S7 stereo vision camera, as shown in Fig. 2(a).

  • The point cloud generated by the camera can be seen in Fig. 2(b).





Fig. 2. (a) The Multisense S7, used to collect the point clouds. (b) The collected point cloud. (c) The robotic platform.
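The Multisense S7 computes the point cloud onboard, but the underlying stereo geometry can be illustrated with the standard disparity-to-depth relation. A minimal sketch; the focal length and baseline below are placeholder values, not the S7's actual calibration:

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Standard stereo relation: Z = f * B / d.

    disparity:  array of pixel disparities
    focal_px:   focal length in pixels (assumed value)
    baseline_m: stereo baseline in meters (assumed value)
    """
    disparity = np.asarray(disparity, dtype=float)
    depth = np.full_like(disparity, np.inf)  # zero disparity -> point at infinity
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Example with assumed f = 590 px, B = 0.07 m
print(disparity_to_depth([2.0, 0.0], 590.0, 0.07))
```

Larger disparities map to closer points, which is why nearby stalks resolve to much denser, more accurate depth than the background.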

  • The rectified images from the camera are used to train a conditional GAN to segment out the stalks in the image.

  • The adversarial training routine, shown in Fig. 3, generates masks with sharper edges, as the discriminator forces the generator to produce more realistic-looking images.

Fig. 3. The Generator takes the stalk image as input and learns to produce images with the stalks segmented. The Discriminator tries to predict whether its input is the real image with stalks segmented or a generated image. The Generator's objective is to maximize the Discriminator's loss.
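The training routine described above corresponds to the standard conditional-GAN objective (notation assumed here; the post does not state the exact loss):

$$
\min_G \max_D \; \mathbb{E}_{x,y}\big[\log D(x, y)\big] + \mathbb{E}_{x}\big[\log\big(1 - D(x, G(x))\big)\big]
$$

where $x$ is the input stalk image, $y$ the ground-truth segmented image, and $G(x)$ the generated segmentation. The generator minimizes what the discriminator maximizes, which matches the description of the Generator maximizing the Discriminator's loss.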

  • At deployment, the trained generator converts the RGB images into corresponding images in which the stalks are painted red.


Fig. 4. The trained Generator generating the image with the stalks masked.
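Since the generator paints stalk pixels red, a binary segmentation mask can be recovered by thresholding the output channels. A minimal sketch; the channel thresholds are assumptions, not the project's actual post-processing:

```python
import numpy as np

def stalk_mask(rgb, red_min=200, other_max=80):
    """Return a boolean mask of pixels painted red by the generator.

    rgb: H x W x 3 uint8 image from the generator
    red_min / other_max: assumed thresholds defining a "red" pixel
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (r >= red_min) & (g <= other_max) & (b <= other_max)

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (255, 0, 0)  # one painted stalk pixel
print(stalk_mask(img))
```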

  • The segmented stalks are then projected onto the registered point cloud produced by the Multisense camera.

  • These segmented points are used to detect the grasp points on the stalks.

  • The point cloud with the segmentation mask overlaid is then filtered so that only the segmented points remain.

  • These points are then projected onto slices along the y-axis, spaced 20 cm apart.

  • If the point density within a slice is above a threshold, a grasp point is chosen from that slice.
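The slicing and density-thresholding steps above can be sketched as follows. This is a simplified illustration: the density threshold and the choice of the slice centroid as the grasp point are assumptions, not the project's exact method; only the 20 cm slice spacing comes from the text.

```python
import numpy as np

def grasp_points(points, slice_width=0.20, min_points=30):
    """Pick one candidate grasp point per sufficiently dense y-slice.

    points:      N x 3 array of stalk points (x, y, z in meters),
                 already filtered by the segmentation mask
    slice_width: slice spacing along y (20 cm, per the pipeline)
    min_points:  assumed density threshold per slice
    """
    points = np.asarray(points, dtype=float)
    if len(points) == 0:
        return []
    bins = np.floor(points[:, 1] / slice_width).astype(int)
    grasps = []
    for b in np.unique(bins):
        in_slice = points[bins == b]
        if len(in_slice) >= min_points:            # density above threshold
            grasps.append(in_slice.mean(axis=0))   # slice centroid as grasp point
    return grasps
```

Thresholding on per-slice density rejects sparse false positives (stray leaf points) while keeping the dense vertical clusters that correspond to stalks.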

Fig. 5. The segmented stalks being projected onto the point cloud.






Fig. 6. (a) The segmented stalks in the point cloud. (b) The points left after removing non-stalk points. (c) The result of flattening the points along the z-axis for false-positive removal. (d) The grasp points projected onto the point cloud.





Fig. 7. (a) The detected grasp points. (b) The gripper gripping the stalk at the grasp point.

  • Once the grasp points are detected, the robot can move in and grasp the stalks.

Process Flow Diagram And Results


Fig. 8. The complete process flow diagram for stalk grasp-point detection. The rectified image from the Multisense is sent to the trained Generator. The segmented stalks are then projected onto the point cloud. The point cloud is then filtered to remove non-stalk points, and the remaining points are used to detect grasp points.

This video shows the whole pipeline along with some in-field experiments.

This video shows the robot grasping the stalks.

  • The resulting architecture achieved a grasping accuracy of 74.13% and a stalk detection F1 score of 0.90.
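For reference, the reported F1 score combines detection precision and recall. A minimal sketch of the metric; the counts below are illustrative only, not the project's actual data:

```python
def f1_score(tp, fp, fn):
    """F1 = 2PR / (P + R), from true-positive, false-positive,
    and false-negative detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts: 90 correct detections, 10 false alarms, 10 misses
print(f1_score(90, 10, 10))  # 0.9
```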