Programming Team Lead: Abishalini Sivaraman

Programming Overview

A significant portion of the code coordinates the communication between the main computer, the sensors, the cameras, and the thrusters. The main computer - an Intel NUC - and an Arduino Mega communicate over a serial interface. Hardware constraints require the pressure sensor and the thrusters to be connected to the Arduino Mega, which reads their data and forwards it to the NUC. All other sensors and cameras are connected directly to the NUC. At the beginning of the main program on the NUC, separate threads are created, each gathering data from a different device or sending thruster commands to the Mega.
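The per-device thread structure can be sketched as follows. This is a minimal illustration, not the actual flight code: the device name and the stubbed read function stand in for the real serial and USB reads from the Mega and the sensors.

```python
import threading
import queue

# One shared queue that worker threads push readings into; the main
# loop drains it. (A real system might use one queue per device.)
sensor_queue = queue.Queue()

def poll_device(read_fn, out_queue, n_samples):
    # In the real system, read_fn would wrap a serial read from the
    # Arduino Mega or a direct sensor read; here it is a stub.
    for _ in range(n_samples):
        out_queue.put(read_fn())

def fake_pressure_sensor():
    # Hypothetical stand-in for the pressure data relayed by the Mega.
    return {"device": "pressure", "value": 101.3}

t = threading.Thread(target=poll_device,
                     args=(fake_pressure_sensor, sensor_queue, 3))
t.start()
t.join()

readings = [sensor_queue.get() for _ in range(3)]
```

Using a thread-safe queue keeps the main loop decoupled from the timing of each device, so a slow sensor cannot stall the control logic.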

Once the data is gathered in the main program, each sensor's data is evaluated to determine any necessary changes to the thrusters' behavior. For example, if the orientation data from the AHRS shows that the AUV is angled too far in one direction and may be about to flip upside down, the side thrusters are turned on in a way that corrects back to the expected orientation. Likewise, the pressure sensor is used to detect the depth of the AUV in the water. If the AUV is about to accidentally breach the surface, the up/down thrusters are immediately turned on to force the AUV back down. The data from the thrusters themselves are the most crucial because they reveal whether the thrusters are about to overheat, in which case all of the thrusters are turned completely off.
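The safety checks above could be condensed into a single decision function along these lines. The threshold values and command names are illustrative assumptions, not the AUV's actual limits.

```python
# Illustrative limits - not the AUV's real calibrated thresholds.
MAX_ROLL_DEG = 45.0        # beyond this, correct with side thrusters
MIN_DEPTH_M = 0.3          # shallower than this risks breaching
MAX_THRUSTER_TEMP_C = 70.0 # hotter than this risks damage

def safety_commands(roll_deg, depth_m, thruster_temp_c):
    """Return corrective thruster commands for the current readings."""
    if thruster_temp_c > MAX_THRUSTER_TEMP_C:
        # Overheating overrides everything: shut every thruster off.
        return {"all": "off"}
    commands = {}
    if abs(roll_deg) > MAX_ROLL_DEG:
        # Fire the side thruster that opposes the roll direction.
        commands["side"] = "port" if roll_deg > 0 else "starboard"
    if depth_m < MIN_DEPTH_M:
        # About to breach the surface: force the AUV back down.
        commands["vertical"] = "down"
    return commands
```

Checking the overheat condition first mirrors the priority described above: thermal protection pre-empts any orientation or depth correction.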

Image Processing

IMAGE DETECTION

To complete the obstacle tasks, the AUV will need to detect objects using computer vision. To do this, three approaches are being considered: background subtraction, template matching, and SURF/SIFT algorithms.

Background subtraction involves extracting an image’s foreground for further processing, and is most useful when the background is static. The background for the rear-facing camera features the bottom of the pool, which remains fairly consistent throughout the shallow end. As such, this approach would be effective for the line-following task.
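A minimal mean-background model looks like the following NumPy sketch. It is a simplification for illustration; a production version might instead use a library subtractor such as OpenCV's MOG2.

```python
import numpy as np

def subtract_background(frames, threshold=30):
    """Foreground mask of the last frame via a mean-background model.

    frames: sequence of grayscale images (2-D arrays) of equal shape.
    Returns a boolean mask, True where the last frame differs from the
    average of the earlier frames by more than `threshold`.
    """
    background = np.mean(frames[:-1], axis=0)
    diff = np.abs(frames[-1].astype(float) - background)
    return diff > threshold

# Toy example: a static dark pool bottom, then a bright "line"
# appearing in column 1 of the newest frame.
bg = [np.zeros((4, 4), dtype=np.uint8) for _ in range(5)]
frame = np.zeros((4, 4), dtype=np.uint8)
frame[:, 1] = 255
mask = subtract_background(bg + [frame])
```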

Template matching is a technique that identifies parts of an image that match with an existing template image. This method would be useful for following a line that branches off or curves, as well as identifying markers for the various obstacles throughout the pool. Eigenspaces can be implemented to account for different conditions such as perspective, illumination, and color contrast that will vary within the images.
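The core of template matching can be sketched with a brute-force sum-of-squared-differences search, shown below in NumPy. This is a didactic stand-in for a library routine (e.g. OpenCV's matchTemplate), and does not include the eigenspace extension mentioned above.

```python
import numpy as np

def match_template(image, template):
    """Return the (row, col) where the template best matches the image,
    scored by sum of squared differences (lower is better)."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = None, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw].astype(float)
            ssd = np.sum((patch - template) ** 2)
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

# Toy example: find a 2x2 bright marker embedded in a dark image.
img = np.zeros((6, 6))
img[3:5, 2:4] = 1.0
tmpl = np.ones((2, 2))
pos = match_template(img, tmpl)
```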

Speeded Up Robust Features (SURF) and Scale-Invariant Feature Transform (SIFT) are based on the same principles, but execute each step differently. The algorithms involve detecting keypoints in an image, computing a descriptor for each feature, and comparing the descriptors obtained from different images. Analysis has shown that SURF is three times faster than SIFT with comparable performance. SURF and SIFT can handle blurred or rotated images, making them ideal for the front cameras, which will experience motion blur as the robot traverses the pool.
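The final step - comparing descriptors from different images - is commonly done with nearest-neighbour matching plus Lowe's ratio test, sketched below in NumPy. In practice the descriptors themselves would come from a library SIFT/SURF implementation; the tiny 2-D descriptors here are placeholders.

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    """Match two descriptor sets using Lowe's ratio test.

    For each descriptor in desc_a, find its two nearest neighbours in
    desc_b (Euclidean distance) and accept the match only when the
    nearest is clearly closer than the second nearest.
    Returns a list of (index_in_a, index_in_b) pairs.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        if dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches

# Toy descriptors: a[0] has one clear match in b; a[1] is ambiguous
# (two near-identical candidates), so the ratio test rejects it.
a = np.array([[1.0, 0.0], [5.0, 5.0]])
b = np.array([[1.0, 0.1], [9.0, 9.0], [5.1, 5.0], [5.0, 5.1]])
matches = ratio_test_matches(a, b)
```

Rejecting ambiguous matches this way is what keeps descriptor matching robust under the blur and rotation discussed above.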

STEREO VISION

Stereo vision is the process of creating 3D representations from two or more digital images obtained from separate cameras. We are using two front-facing cameras to obtain different views of the field; these images are combined to form a more accurate 3D representation of the field, which gives us information about depth. Stereo vision is essential for estimating the location of objects.
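The depth recovery rests on the standard pinhole stereo relation Z = fB/d: once the disparity d between the two views is found, depth follows from the focal length f and the camera baseline B. The calibration numbers below are illustrative, not our actual camera parameters.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth in metres from stereo disparity: Z = f * B / d.

    disparity_px: pixel disparity between left and right views
                  (scalar or array).
    focal_px:     focal length in pixels (identical cameras assumed).
    baseline_m:   distance between the two camera centres in metres.
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / disparity_px

# Toy numbers: f = 800 px, baseline = 0.1 m, disparity = 40 px.
z = depth_from_disparity(40.0, 800.0, 0.1)
```

Note that depth falls off as 1/d, so distant objects (small disparity) are estimated much less precisely than nearby ones.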

Thruster Control

Using the data gathered from the cameras and the current positioning of the AUV in space, the AUV calculates the most direct path to the object of interest in the camera frame. This path is then reduced to a unit vector in the direction that the AUV needs to travel based on the distance to the object calculated through the image processing. The AUV then uses a control and feedback loop to align its heading with the correct path, turning the thrusters on and off to control the surge, heave, and sway of the AUV.
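The reduction of the path to a unit vector, and its mapping onto surge/sway/heave commands, might look like the following sketch. The proportional gain stands in for the real control and feedback loop, and the coordinate convention is an assumption.

```python
import numpy as np

def heading_unit_vector(target_xyz, auv_xyz):
    """Unit vector pointing from the AUV's position to the target."""
    delta = np.asarray(target_xyz, float) - np.asarray(auv_xyz, float)
    return delta / np.linalg.norm(delta)

def thruster_setpoints(unit_vec, gain=1.0):
    """Map the direction vector onto surge/sway/heave commands with a
    simple proportional gain (a stand-in for the feedback loop)."""
    surge, sway, heave = gain * unit_vec
    return {"surge": surge, "sway": sway, "heave": heave}

# Toy example: target 3 m ahead and 4 m below the AUV.
u = heading_unit_vector([3.0, 0.0, 4.0], [0.0, 0.0, 0.0])
cmds = thruster_setpoints(u)
```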

Kalman Filters

The task of tracking moving objects against a dynamic background proves to be complex due to changes in the orientation of the object, partial and complete object occlusion, varying lighting conditions, camera motion, and unwanted noise added to the camera feed. Our AUV accomplishes object tracking using Kalman filters. The Kalman filter provides a recursive solution that predicts the state of the object from past values and noisy measurement data, and it can get close to accurate predictions after a few iterations of its prediction and update steps.
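The prediction and update steps can be written out concretely for a constant-velocity model tracking one image coordinate of an object. The noise covariances below are illustrative placeholders, not tuned values from our system.

```python
import numpy as np

# Constant-velocity model: state x = [position, velocity].
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])              # we measure position only
Q = 0.01 * np.eye(2)                    # process noise (illustrative)
R = np.array([[1.0]])                   # measurement noise (illustrative)

def kalman_step(x, P, z):
    # Prediction step: propagate state and covariance forward.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update step: correct with the noisy measurement z.
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Toy track: an object moving at 1 px/frame. After a few iterations
# the estimated position and velocity approach the true values.
x = np.array([0.0, 0.0])
P = np.eye(2)
for z in [1.0, 2.0, 3.0, 4.0, 5.0]:
    x, P = kalman_step(x, P, np.array([z]))
```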

In our algorithm, the Kalman filter also tries to adjust for the movement of the camera. This adjustment can be done either by increasing the dimensions of the state variable to track the image-frame position and velocity of the object, or by adding the velocity of the camera to the kinematic equations used to predict the object's future location. In this way, we have tried to use the Kalman filter for image stabilization.