Learning To Prune Grape Vines

Objective

  • Manipulation in agricultural environments is challenging. This was an exploratory project that I worked on in my free time; the objective was to explore the use of Deep Reinforcement Learning for robotic agricultural manipulation.

  • In this project, I trained a neural-network agent to control a 7-DOF arm. The system detected buds on a grape vine, and the NN controller drove the arm to reach the buds so they could be snipped.


Fig.1 (a) The robot reaching the bud using the final trained policy and (b) the robot reaching the bud in simulation.


Training the Arm

  • The arm was controlled by a neural network with two hidden layers of 64 neurons each (a minimal sketch follows Fig.2).

  • The input to the neural network was the state of the joints, the position of the goal, and the position of an obstacle.

  • For simplicity, I assumed a single obstacle, placed roughly between the goal and the end-effector.


Fig.2 The neural-network policy takes the joint positions and the goal and obstacle locations as inputs, and outputs the joint torques as actions.
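
To make this concrete, here is a minimal sketch of such a policy network in PyTorch. The hidden-layer sizes match the text; the exact observation layout (joint positions plus velocities for a 7-DOF arm, goal xyz, obstacle xyz) and the tanh activations are assumptions.

    import torch
    import torch.nn as nn

    class ArmPolicy(nn.Module):
        """2 hidden layers x 64 neurons, mapping state -> joint torques."""
        def __init__(self, n_joints=7):
            super().__init__()
            # Assumed state: joint positions + velocities, goal xyz, obstacle xyz.
            obs_dim = 2 * n_joints + 3 + 3
            self.net = nn.Sequential(
                nn.Linear(obs_dim, 64), nn.Tanh(),
                nn.Linear(64, 64), nn.Tanh(),
                nn.Linear(64, n_joints),  # one torque command per joint
            )

        def forward(self, obs):
            return self.net(obs)

    policy = ArmPolicy()
    torques = policy(torch.randn(1, 20))  # dummy observation -> (1, 7) torques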

 
  • The neural network was trained in a MuJoCo simulation.

  • I used Proximal Policy Optimization (PPO) with an adaptive KL-divergence penalty to train the NN on the reward described next (both are sketched after Fig.3).

  • The reward was a linear combination of the end-effector's distance from the goal and the sum of the torques at the joints.

Fig.3 The policy is trained in the MuJoCo environment. The two green boxes represent the goal position and the obstacle.
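
Below is a rough sketch of both pieces: the reward described above and the adaptive KL-penalty rule from the PPO paper (Schulman et al., 2017). The weights and the KL target are illustrative assumptions, not the values used in this project.

    import numpy as np

    def reward(ee_pos, goal_pos, torques, w_dist=1.0, w_torque=0.001):
        """Negative weighted sum of goal distance and joint effort."""
        dist = np.linalg.norm(ee_pos - goal_pos)
        effort = np.sum(np.abs(torques))
        return -(w_dist * dist + w_torque * effort)

    def update_kl_coeff(beta, measured_kl, kl_target=0.01):
        """Adaptive KL penalty: double or halve beta so each policy
        update stays near the target KL divergence."""
        if measured_kl > 1.5 * kl_target:
            beta *= 2.0   # policy moved too far: penalize harder
        elif measured_kl < kl_target / 1.5:
            beta /= 2.0   # policy barely moved: relax the penalty
        return beta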

 
  • This is the arm at 100 time-steps. It's like, meh!

  • At n = 400,000, the arm learns to move towards the goal but still doesn't quite reach it -_-.

  • At about 1,000,000 time-steps, the arm is sentient!

 

Detecting and Localizing Buds

  • A custom stereo imager, built by Dr. Silwal, was used to collect high-resolution stereo images of the vine. The cameras have a global shutter and are set up with high-intensity flashes and a low exposure time to help suppress background noise (a disparity-computation sketch follows Fig.4).


Fig.4 From top to bottom, this figure shows the camera, the left and right stereo images, and the resulting disparity map.
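
For reference, a disparity map like the one in Fig.4 can be computed from a rectified stereo pair with OpenCV's semi-global block matcher; the parameter values below are illustrative assumptions, not the ones used for the vine images.

    import cv2

    left = cv2.imread("left_stereo_edited.jpg", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right_stereo_edited.jpg", cv2.IMREAD_GRAYSCALE)

    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,  # search range; must be divisible by 16
        blockSize=5,
    )
    # StereoSGBM returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(left, right).astype("float32") / 16.0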

  • A version of the Single Shot Detector (SSD), customized to detect only small objects with a square aspect ratio, was trained to detect the buds (a sketch of the square prior boxes follows Fig.5).

Fig.5 Detections from the trained bud detector. Each box surrounds a bud.
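
The sketch below illustrates the square-prior-box idea: for every feature-map cell, generate a few small square default boxes instead of SSD's usual mix of aspect ratios. The scales and feature-map size are illustrative assumptions.

    import numpy as np

    def square_priors(fmap_size=38, image_size=512, scales=(0.02, 0.04, 0.08)):
        """Small, square-only default boxes, one set per feature-map cell."""
        step = image_size / fmap_size
        boxes = []
        for i in range(fmap_size):
            for j in range(fmap_size):
                cx, cy = (j + 0.5) * step, (i + 0.5) * step
                for s in scales:
                    side = s * image_size  # square: width == height
                    boxes.append([cx, cy, side, side])
        return np.array(boxes)  # (N, 4) boxes in (cx, cy, w, h) pixels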

  • Once the 2D locations of the buds were found, the disparity map was used to lift them into the point cloud (see the sketch below), resulting in an awesome-looking 3D point cloud of a dormant grape vine with the buds shown as blue spheres.
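
Here is a sketch of that 2D-to-3D lift, using the standard stereo relation Z = f·B/d and pinhole back-projection; the focal length, baseline, and principal point are placeholder calibration values.

    import numpy as np

    def bud_to_3d(u, v, disparity, f=1400.0, B=0.06, cx=640.0, cy=480.0):
        """Map a detected bud at pixel (u, v) into 3D camera coordinates.
        Assumes a valid (nonzero) disparity at that pixel."""
        d = disparity[v, u]     # disparity at the bud's pixel
        Z = f * B / d           # depth from disparity
        X = (u - cx) * Z / f    # pinhole back-projection
        Y = (v - cy) * Z / f
        return np.array([X, Y, Z])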

  • With the trained controller and the detection pipeline in hand, we can reach the buds! The first part shows the arm in simulation and the second part shows the same policy deployed on hardware.

Results

  • The arm was able to reach the goal with a success rate of 86.6% in simulation and 81.8% when deployed on hardware.

  • The adjoining table shows the precision and recall of the detector.

Table: Precision and recall of the bud detector.