
Autonomous Navigation In Vineyards
Objective
The objective of this project is to develop a software stack for autonomous navigation of a Clearpath Warthog in vineyards. The robot uses a suite of sensors (LIDAR, RTK-GPS, and an IMU) for localization and obstacle avoidance. This is an ongoing project; the final objective is to navigate using images alone.

Fig.1. The platform used is a Clearpath Warthog. Data from RTK-GPS, IMU, and wheel encoders was fused to estimate the robot's pose.
Sensors Used

Fig.2. The GPS sensor used. The same model was used for both the rover and the base station. The GPS comes standard with an RS-232 serial port for radio communication between the two units.
RTK GPS
SwiftNav's Real Time Kinematic (RTK) GPS system was used to obtain the robot's global position with respect to the base station. The system produces the robot's X, Y, and Z position. Since a planar motion model was assumed for navigation in the field, only the X and Y values from the GPS were used, as the short sketch below illustrates.
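As a concrete illustration (the type and field names below are hypothetical, not SwiftNav's actual API), dropping the vertical component under the planar motion model amounts to:

```python
from dataclasses import dataclass

@dataclass
class RtkSolution:
    """Hypothetical RTK fix: position relative to the base station, in metres."""
    x: float  # east of the base station
    y: float  # north of the base station
    z: float  # height above the base station; unused under the planar model

def planar_position(fix: RtkSolution) -> tuple[float, float]:
    # Planar motion model: the field is treated as flat, so the Z
    # component of the RTK solution is discarded.
    return fix.x, fix.y
```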
Wheel Encoders and IMU
Using the RTK system alone is not enough, however, as the rover might lose radio contact with the base station or lose its satellite fix. For this reason, data from the IMU and wheel encoders was fused with the GPS data to obtain better odometry; the toy sketch below illustrates the idea.
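To make the fusion concrete, here is a toy constant-gain estimator, a stand-in for a proper filter such as an EKF. All names are hypothetical and this is not the project's actual implementation. It dead-reckons from encoder velocity and IMU yaw rate, and pulls the estimate toward the RTK fix whenever one is available:

```python
import math

class FusedOdometry:
    """Toy planar pose estimator: dead reckoning corrected by RTK fixes."""

    def __init__(self, gain: float = 0.5):
        self.x = self.y = self.yaw = 0.0
        self.gain = gain  # how strongly a valid RTK fix pulls the estimate

    def predict(self, v: float, yaw_rate: float, dt: float) -> None:
        # Dead reckoning from wheel-encoder velocity and IMU yaw rate,
        # under the planar motion model.
        self.yaw += yaw_rate * dt
        self.x += v * math.cos(self.yaw) * dt
        self.y += v * math.sin(self.yaw) * dt

    def correct(self, rtk_xy: tuple[float, float] | None) -> None:
        # No fix (radio or satellite contact lost): coast on dead reckoning.
        if rtk_xy is None:
            return
        self.x += self.gain * (rtk_xy[0] - self.x)
        self.y += self.gain * (rtk_xy[1] - self.y)
```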
The Warthog comes equipped with wheel encoders and an IMU consisting of a gyroscope, an accelerometer, and a magnetometer. The IMU outputs the following readings:
- Roll (φ)
- Pitch (θ)
- Yaw (ψ)
- Accelerations (ẍ, ÿ, z̈)
- Angular velocities (φ̇, θ̇, ψ̇)
The wheel encoders measure the robot's forward velocity, ẋ, and its yaw rate, ψ̇.

Fig.3. The readings from the IMU and wheel encoders are transformed from the robot's coordinate frame to the global coordinate frame and then fused with the GPS readings. Only the yaw and X, Y information is used, because a planar motion model is assumed. In the planar case this transformation is just a rotation by the current yaw, as sketched below.
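A minimal sketch of that frame transformation (function and argument names are illustrative):

```python
import math

def body_to_global(vx: float, vy: float, yaw: float) -> tuple[float, float]:
    """Rotate a body-frame velocity into the global frame (planar case)."""
    c, s = math.cos(yaw), math.sin(yaw)
    return c * vx - s * vy, s * vx + c * vy
```

For a differential-drive platform like the Warthog, the lateral body velocity is essentially zero, so only the forward velocity and the yaw enter the transform.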
Field Deployment
This video shows the Warthog autonomously collecting laser scans of dormant grapevines at predetermined GPS locations in a vineyard in Erie, New York.
Future Work
Future work involves obstacle avoidance based on LIDAR cost maps and extending this work to vision-based navigation.